id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
41,796 | https://en.wikipedia.org/wiki/Time-division%20multiplexing | Time-division multiplexing (TDM) is a method of transmitting and receiving independent signals over a common signal path by means of synchronized switches at each end of the transmission line so that each signal appears on the line only a fraction of time according to agreed rules, e.g. with each transmitter working in turn. It can be used when the bit rate of the transmission medium exceeds that of the signal to be transmitted. This form of signal multiplexing was developed in telecommunications for telegraphy systems in the late 19th century but found its most common application in digital telephony in the second half of the 20th century.
History
Time-division multiplexing was first developed for applications in telegraphy to route multiple transmissions simultaneously over a single transmission line. In the 1870s, Émile Baudot developed a time-multiplexing system of multiple Hughes telegraph machines.
In 1944, the British Army used the Wireless Set No. 10 to multiplex 10 telephone conversations over a microwave relay as far as 50 miles. This allowed commanders in the field to keep in contact with the staff in England across the English Channel.
In 1953, a 24-channel time-division multiplexer was placed in commercial operation by RCA Communications to send audio information between RCA's facility on Broad Street, New York, their transmitting station at Rocky Point and the receiving station at Riverhead, Long Island, New York. The communication was by a microwave system throughout Long Island. The experimental TDM system was developed by RCA Laboratories between 1950 and 1953.
In 1962, engineers from Bell Labs developed the first D1 channel banks, which combined 24 digitized voice calls over a four-wire copper trunk line between Bell central office analogue switches. A channel bank at each end of the line allowed the single line to carry short portions, each 1/8,000 of a second long, of up to 24 voice calls, in turn. The discrete signals on the trunk line carried 1.544 Mbit/s, divided into 8,000 separate frames per second, each composed of 24 contiguous octets and one framing bit. Each octet in a frame carried a single telephone call in turn. Thus each of 24 voice calls was encoded into two constant-bit-rate streams of 64 kbit/s (one in each direction), and converted back to conventional analog signals by the complementary equipment on the receiving end of the trunk line.
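As an illustrative check of that arithmetic (a minimal Python sketch, not part of the original description), the frame structure reproduces both the T1 line rate and the per-channel rate:

```python
# T1 / D1 channel bank arithmetic (illustrative sketch)
channels = 24              # voice channels (octets) per frame
bits_per_octet = 8
framing_bits = 1
frames_per_second = 8000   # one frame every 125 microseconds

bits_per_frame = channels * bits_per_octet + framing_bits     # 193 bits
line_rate = bits_per_frame * frames_per_second                # 1,544,000 bit/s = 1.544 Mbit/s
per_channel_rate = bits_per_octet * frames_per_second         # 64,000 bit/s = 64 kbit/s
print(line_rate, per_channel_rate)
```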
Technology
Time-division multiplexing is used primarily for digital signals but may be applied in analog multiplexing, as above, in which two or more signals or bit streams are transferred so that they appear to occur simultaneously as sub-channels in one communication channel, while physically taking turns on the channel. The time domain is divided into several recurrent time slots of fixed length, one for each sub-channel. A sample, byte or data block of sub-channel 1 is transmitted during time slot 1, sub-channel 2 during time slot 2, etc. One TDM frame consists of one time slot per sub-channel, and usually a synchronization channel and sometimes an error correction channel. After all of these, the cycle starts again with a new frame, starting with the second sample, byte or data block from sub-channel 1, etc.
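The round-robin slot assignment described above can be sketched in a few lines of Python; the function name and the fixed one-byte slot size below are illustrative assumptions rather than features of any particular standard:

```python
def tdm_multiplex(subchannels, slot_size=1):
    """Interleave equal-length sub-channel byte streams into TDM frames.

    Each frame carries one fixed-size slot from every sub-channel,
    in a fixed, recurring order (synchronization and error-correction
    slots are omitted for brevity).
    """
    n_slots = len(subchannels[0]) // slot_size
    return [
        b"".join(ch[i * slot_size:(i + 1) * slot_size] for ch in subchannels)
        for i in range(n_slots)
    ]

# Three sub-channels, one byte per time slot:
print(tdm_multiplex([b"AAAA", b"BBBB", b"CCCC"]))
# [b'ABC', b'ABC', b'ABC', b'ABC']
```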
Application examples
The plesiochronous digital hierarchy (PDH) system, also known as the PCM system, for digital transmission of several telephone calls over the same four-wire copper cable (T-carrier or E-carrier) or fiber in the circuit-switched digital telephone network
The synchronous digital hierarchy (SDH)/synchronous optical networking (SONET) network transmission standards that have replaced PDH
The Basic Rate Interface and Primary Rate Interface for the Integrated Services Digital Network (ISDN)
The RIFF (WAV) audio standard interleaves left and right stereo signals on a per-sample basis
TDM can be further extended into the time-division multiple access (TDMA) scheme, where several stations connected to the same physical medium, for example sharing the same frequency channel, can communicate. Application examples include:
The GSM telephone system
The Tactical Data Links Link 16 and Link 22
Multiplexed digital transmission
In circuit-switched networks, such as the public switched telephone network (PSTN), it is desirable to transmit multiple subscriber calls over the same transmission medium to effectively utilize the bandwidth of the medium. TDM allows transmitting and receiving telephone switches to create channels (tributaries) within a transmission stream. A standard DS0 voice signal has a data bit rate of 64 kbit/s. A TDM circuit runs at a much higher signal bandwidth, permitting the bandwidth to be divided into time frames (time slots) for each voice signal which is multiplexed onto the line by the transmitter. If the TDM frame consists of n voice frames, the line bandwidth is n*64 kbit/s.
Each voice time slot in the TDM frame is called a channel. In European systems, standard TDM frames contain 30 digital voice channels (E1), and in American systems (T1), they contain 24 channels. Both standards also contain extra bits (or bit time slots) for signaling and synchronization bits.
Multiplexing more than 24 or 30 digital voice channels is called higher order multiplexing. Higher order multiplexing is accomplished by multiplexing the standard TDM frames. For example, a European 120 channel TDM frame is formed by multiplexing four standard 30 channel TDM frames. At each higher order multiplex, four TDM frames from the immediate lower order are combined, creating multiplexes with a bandwidth of n*64 kbit/s, where n = 120, 480, 1920, etc.
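A quick payload-rate check for the European hierarchy just described (an illustrative sketch; the extra signalling and framing bits mentioned above are ignored):

```python
# Payload rate at each level of the European hierarchy (channels x 64 kbit/s)
for n in (30, 120, 480, 1920):
    print(f"{n:>4} channels -> {n * 64} kbit/s payload")
# 30 -> 1920, 120 -> 7680, 480 -> 30720, 1920 -> 122880 kbit/s
```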
Telecommunications systems
There are three types of synchronous TDM: T1, SONET/SDH, and ISDN.
Plesiochronous digital hierarchy (PDH) was developed as a standard for multiplexing higher order frames. PDH created larger numbers of channels by multiplexing the standard European 30-channel TDM frames. This solution worked for a while; however, PDH suffered from several inherent drawbacks which ultimately resulted in the development of the Synchronous Digital Hierarchy (SDH). The requirements which drove the development of SDH were these:
Be synchronous – All clocks in the system must align with a reference clock.
Be service-oriented – SDH must route traffic from End Exchange to End Exchange without worrying about exchanges in between, where the bandwidth can be reserved at a fixed level for a fixed period of time.
Allow frames of any size to be removed or inserted into an SDH frame of any size.
Easily manageable with the capability of transferring management data across links.
Provide high levels of recovery from faults.
Provide high data rates by multiplexing any size frame, limited only by technology.
Give reduced bit rate errors.
SDH has become the primary transmission protocol in most PSTN networks. It was developed to allow streams of 1.544 Mbit/s and above to be multiplexed in order to create larger SDH frames known as Synchronous Transport Modules (STM). The STM-1 frame consists of smaller streams that are multiplexed to create a 155.52 Mbit/s frame. SDH can also multiplex packet-based frames, e.g. Ethernet, PPP and ATM.
While SDH is considered to be a transmission protocol (Layer 1 in the OSI Reference Model), it also performs some switching functions, as stated in the third bullet point requirement listed above. The most common SDH Networking functions are these:
SDH Crossconnect – The SDH Crossconnect is the SDH version of a time–space–time crosspoint switch. It connects any channel on any of its inputs to any channel on any of its outputs. The SDH Crossconnect is used in Transit Exchanges, where all inputs and outputs are connected to other exchanges.
SDH Add–Drop Multiplexer – The SDH Add–Drop Multiplexer (ADM) can add or remove any multiplexed frame down to 1.544 Mbit/s. Below this level, standard TDM can be performed. SDH ADMs can also perform the task of an SDH Crossconnect and are used in End Exchanges where the channels from subscribers are connected to the core PSTN network.
SDH network functions are connected using high-speed optic fibre. Optic fibre uses light pulses to transmit data and is therefore extremely fast. Modern optic fibre transmission makes use of wavelength-division multiplexing (WDM) where signals transmitted across the fibre are transmitted at different wavelengths, creating additional channels for transmission. This increases the speed and capacity of the link, which in turn reduces both unit and total costs.
Statistical version
Statistical time-division multiplexing (STDM) is an advanced version of TDM in which both the address of the terminal and the data itself are transmitted together for better routing. Using STDM allows the bandwidth of a single line to be shared dynamically among many terminals. Many college and corporate campuses use this type of TDM to distribute bandwidth.
On a 10 Mbit/s line entering a network, STDM can be used to provide 178 terminals with a dedicated 56 kbit/s connection (178 × 56 kbit/s ≈ 9.97 Mbit/s). A more common use, however, is to grant bandwidth only when it is needed: STDM does not reserve a time slot for each terminal, but rather assigns a slot when a terminal needs to send or receive data.
In its primary form, TDM is used for circuit mode communication with a fixed number of channels and constant bandwidth per channel. Bandwidth reservation distinguishes time-division multiplexing from statistical multiplexing such as statistical time-division multiplexing. In pure TDM, the time slots are recurrent in a fixed order and pre-allocated to the channels, rather than scheduled on a packet-by-packet basis.
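The contrast between pre-allocated and statistical slot assignment can be illustrated with a small Python sketch (purely illustrative; real multiplexers also carry framing and other overhead):

```python
def pure_tdm_frame(terminals):
    """Pure TDM: every terminal owns a slot whether or not it has data."""
    return [t["data"] for t in terminals]

def stdm_frame(terminals, max_slots):
    """Statistical TDM: slots go only to terminals with queued data,
    and each slot carries the terminal address alongside its data."""
    busy = [t for t in terminals if t["data"] is not None]
    return [(t["addr"], t["data"]) for t in busy[:max_slots]]

terminals = [
    {"addr": 0, "data": "x"},
    {"addr": 1, "data": None},   # idle terminal
    {"addr": 2, "data": "y"},
]
print(pure_tdm_frame(terminals))   # ['x', None, 'y'] - the idle slot is wasted
print(stdm_frame(terminals, 3))    # [(0, 'x'), (2, 'y')] - no slot for the idle terminal
```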
In dynamic TDMA, a scheduling algorithm dynamically reserves a variable number of time slots in each frame to variable bit-rate data streams, based on the traffic demand of each data stream. Dynamic TDMA is used in:
HIPERLAN/2
Dynamic synchronous transfer mode
IEEE 802.16a
Asynchronous time-division multiplexing (ATDM) is alternative nomenclature for statistical time-division multiplexing, under which the abbreviation STDM instead designates synchronous time-division multiplexing, the older method that uses fixed time slots.
See also
References
Multiplexing
Synchronization | Time-division multiplexing | [
"Engineering"
] | 2,125 | [
"Telecommunications engineering",
"Synchronization"
] |
41,797 | https://en.wikipedia.org/wiki/Time-domain%20reflectometer | A time-domain reflectometer (TDR) is an electronic instrument used to determine the characteristics of electrical lines by observing reflected pulses. It can be used to characterize and locate faults in metallic cables (for example, twisted pair wire or coaxial cable),
and to locate discontinuities in a connector, printed circuit board, or any other electrical path.
Description
A TDR measures reflections along a conductor. In order to measure those reflections, the TDR will transmit an incident signal onto the conductor and listen for its reflections. If the conductor is of a uniform impedance and is properly terminated, then there will be no reflections and the remaining incident signal will be absorbed at the far-end by the termination. Instead, if there are impedance variations, then some of the incident signal will be reflected back to the source. A TDR is similar in principle to radar.
The impedance of the discontinuity can be determined from the amplitude of the reflected signal. The distance to the reflecting impedance can also be determined from the time that a pulse takes to return. The limitation of this method is the minimum system rise time. The total rise time consists of the combined rise time of the driving pulse and that of the oscilloscope or sampler that monitors the reflections.
Method
The TDR analysis begins with the propagation of a step or impulse of energy into a system and the subsequent observation of the energy reflected by the system. By analyzing the magnitude, duration and shape of the reflected waveform, the nature of the impedance variation in the transmission system can be determined.
If a pure resistive load is placed on the output of the reflectometer and a step signal is applied, a step signal is observed on the display, and its height is a function of the resistance. The magnitude of the step produced by the resistive load may be expressed as a fraction of the input signal as given by:
ρ = (RL − Zo) / (RL + Zo)
where Zo is the characteristic impedance of the transmission line and RL is the resistance of the load.
Reflection
Generally, the reflections will have the same shape as the incident signal, but their sign and magnitude depend on the change in impedance level. If there is a step increase in the impedance, then the reflection will have the same sign as the incident signal; if there is a step decrease in impedance, the reflection will have the opposite sign. The magnitude of the reflection depends not only on the amount of the impedance change, but also upon the loss in the conductor.
The reflections are measured at the output/input to the TDR and displayed or plotted as a function of time. Alternatively, the display can be read as a function of cable length because the speed of signal propagation is almost constant for a given transmission medium.
Because of its sensitivity to impedance variations, a TDR may be used to verify cable impedance characteristics, splice and connector locations and associated losses, and estimate cable lengths.
Incident signal
TDRs use different incident signals. Some TDRs transmit a pulse along the conductor; the resolution of such instruments is often the width of the pulse. Narrow pulses can offer good resolution, but they have high frequency signal components that are attenuated in long cables. The shape of the pulse is often a half cycle sinusoid. For longer cables, wider pulse widths are used.
Fast rise time steps are also used. Instead of looking for the reflection of a complete pulse, the instrument is concerned with the rising edge, which can be very fast. A 1970s technology TDR used steps with a rise time of 25 ps.
Still other TDRs transmit complex signals and detect reflections with correlation techniques. See spread-spectrum time-domain reflectometry.
Variations and extensions
The equivalent device for optical fiber is an optical time-domain reflectometer.
Time-domain transmissometry (TDT) is an analogous technique that measures the transmitted (rather than reflected) impulse. Together, they provide a powerful means of analysing electrical or optical transmission media such as coaxial cable and optical fiber.
Variations of TDR exist. For example, spread-spectrum time-domain reflectometry (SSTDR) is used to detect intermittent faults in complex and high-noise systems such as aircraft wiring. Coherent optical time domain reflectometry (COTDR) is another variant, used in optical systems, in which the returned signal is mixed with a local oscillator and then filtered to reduce noise.
Example traces
These traces were produced by a time-domain reflectometer made from common lab equipment connected to a length of coaxial cable having a characteristic impedance of 50 ohms. The propagation velocity of this cable is approximately 66% of the speed of light in vacuum.
These traces were produced by a commercial TDR using a step waveform with a 25 ps risetime, a sampling head with a 35 ps risetime, and an SMA cable. The far end of the SMA cable was left open or connected to different adapters. It takes about 3 ns for the pulse to travel down the cable, reflect, and reach the sampling head. A second reflection (at about 6 ns) can be seen in some traces; it is due to the reflection seeing a small mismatch at the sampling head and causing another "incident" wave to travel down the cable.
Explanation
If the far end of the cable is shorted, that is, terminated with an impedance of zero ohms, then when the rising edge of the pulse is launched down the cable, the voltage at the launching point "steps up" to a given value instantly and the pulse begins propagating in the cable towards the short. When the pulse encounters the short, no energy is absorbed at the far end. Instead, an inverted pulse reflects back from the short towards the launching end. It is only when this reflection finally reaches the launch point that the voltage at this point abruptly drops back to zero, signaling the presence of a short at the end of the cable. That is, the TDR has no indication that there is a short at the end of the cable until its emitted pulse can travel in the cable and the echo can return. It is only after this round-trip delay that the short can be detected by the TDR. With knowledge of the signal propagation speed in the particular cable-under-test, the distance to the short can be measured.
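The distance calculation reduces to one line; the velocity factor and round-trip time below are assumed example values, not measurements from any particular trace:

```python
c = 299_792_458          # speed of light in vacuum, m/s
velocity_factor = 0.66   # assumed, typical for solid-polyethylene coaxial cable
round_trip = 3.0e-9      # assumed measured round-trip delay, seconds

distance_to_fault = velocity_factor * c * round_trip / 2
print(f"{distance_to_fault:.3f} m")   # ~0.297 m for these example values
```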
A similar effect occurs if the far end of the cable is an open circuit (terminated into an infinite impedance). In this case, though, the reflection from the far end is polarized identically with the original pulse and adds to it rather than cancelling it out. So after a round-trip delay, the voltage at the TDR abruptly jumps to twice the originally-applied voltage.
Perfect termination at the far end of the cable would entirely absorb the applied pulse without causing any reflection, rendering the determination of the actual length of the cable impossible. In practice, some small reflection is nearly always observed.
The magnitude of the reflection is referred to as the reflection coefficient or ρ. The coefficient ranges from 1 (open circuit) to −1 (short circuit). The value of zero means that there is no reflection. The reflection coefficient is calculated as follows:
ρ = (Zt − Zo) / (Zt + Zo)
where Zo is defined as the characteristic impedance of the transmission medium and Zt is the impedance of the termination at the far end of the transmission line.
Any discontinuity can be viewed as a termination impedance and substituted as Zt. This includes abrupt changes in the characteristic impedance. As an example, a trace width on a printed circuit board doubled at its midsection would constitute a discontinuity. Some of the energy will be reflected back to the driving source; the remaining energy will be transmitted. This is also known as a scattering junction.
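A minimal numerical illustration of the reflection coefficient defined above (impedance values chosen purely for illustration):

```python
def reflection_coefficient(zt, z0=50.0):
    """rho = (Zt - Z0) / (Zt + Z0); Zt may be any termination or discontinuity impedance."""
    if zt == float("inf"):           # open circuit
        return 1.0
    return (zt - z0) / (zt + z0)

for zt in (0.0, 25.0, 50.0, 100.0, float("inf")):
    print(zt, round(reflection_coefficient(zt), 3))
# 0   -> -1.0    (short circuit, inverted reflection)
# 25  -> -0.333  (step down in impedance)
# 50  ->  0.0    (matched, no reflection)
# 100 ->  0.333  (step up in impedance)
# inf ->  1.0    (open circuit, reflection adds to the incident step)
```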
Usage
Time domain reflectometers are commonly used for in-place testing of very long cable runs, where it is impractical to dig up or remove what may be a kilometers-long cable. They are indispensable for preventive maintenance of telecommunication lines, as TDRs can detect resistance on joints and connectors as they corrode, and increasing insulation leakage as it degrades and absorbs moisture, long before either leads to catastrophic failures. Using a TDR, it is possible to pinpoint a fault to within centimetres.
TDRs are also very useful tools for technical surveillance counter-measures, where they help determine the existence and location of wire taps. The slight change in line impedance caused by the introduction of a tap or splice will show up on the screen of a TDR when connected to a phone line.
TDR equipment is also an essential tool in the failure analysis of modern high-frequency printed circuit boards with signal traces crafted to emulate transmission lines. Observing reflections can detect any unsoldered pins of a ball grid array device. Short-circuited pins can also be detected similarly.
The TDR principle is used in industrial settings, in situations as diverse as the testing of integrated circuit packages and the measurement of liquid levels. In the former, the time-domain reflectometer is used to isolate failing sites within the package. The latter application is found primarily in the process industries.
In level measurement
In a TDR-based level measurement device, the device generates an impulse that propagates down a thin waveguide (referred to as a probe) – typically a metal rod or a steel cable. When this impulse hits the surface of the medium to be measured, part of the impulse reflects back up the waveguide. The device determines the fluid level by measuring the time difference between when the impulse was sent and when the reflection returned. The sensors can output the analyzed level as a continuous analog signal or switch output signals. In TDR technology, the impulse velocity is primarily affected by the permittivity of the medium through which the pulse propagates, which can vary greatly by the moisture content and temperature of the medium. In many cases, this effect can be corrected without undue difficulty. In some cases, such as in boiling and/or high temperature environments, the correction can be difficult. In particular, determining the froth (foam) height and the collapsed liquid level in a frothy / boiling medium can be very difficult.
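The time-of-flight arithmetic can be sketched as follows; the probe length, the timing, and the assumption that the pulse travels at roughly the vacuum speed of light in the vapour space above the liquid are illustrative simplifications:

```python
c = 299_792_458          # m/s, assumed propagation speed above the liquid surface
probe_length = 2.0       # m, hypothetical probe
round_trip = 8.0e-9      # s, hypothetical delay between launch and returned reflection

distance_to_surface = c * round_trip / 2            # ~1.20 m down the probe
liquid_level = probe_length - distance_to_surface   # ~0.80 m of liquid above the probe end
print(round(distance_to_surface, 2), round(liquid_level, 2))
```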
Used in anchor cables in dams
The Dam Safety Interest Group of CEA Technologies, Inc. (CEATI), a consortium of electrical power organizations, has applied spread-spectrum time-domain reflectometry to identify potential faults in concrete dam anchor cables. The key benefit of time-domain reflectometry over other testing methods is that the tests are non-destructive.
Used in the earth and agricultural sciences
A TDR is used to determine moisture content in soil and porous media. Over the last two decades, substantial advances have been made measuring moisture in soil, grain, foodstuffs, and sediment. The key to TDR's success is its ability to accurately determine the permittivity (dielectric constant) of a material from wave propagation, due to the strong relationship between the permittivity of a material and its water content, as demonstrated in the pioneering works of Hoekstra and Delaney (1974) and Topp et al. (1980). Recent reviews and reference works on the subject include Topp and Reynolds (1998), Noborio (2001), Pettinelli et al. (2002), Topp and Ferre (2002) and Robinson et al. (2003). The TDR method is a transmission line technique, and determines apparent permittivity (Ka) from the travel time of an electromagnetic wave that propagates along a transmission line, usually two or more parallel metal rods embedded in soil or sediment. The probes are typically between 10 and 30 cm long and connected to the TDR via coaxial cable.
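The travel-time relation can be written directly; the probe length and travel time below are hypothetical, and the last line uses the widely cited Topp et al. (1980) empirical calibration, which is only one of several published calibrations:

```python
c = 299_792_458        # m/s
probe_length = 0.20    # m, hypothetical 20 cm probe
travel_time = 5.3e-9   # s, hypothetical two-way travel time along the probe

# Apparent permittivity from travel time: Ka = (c * t / (2 * L))**2
ka = (c * travel_time / (2 * probe_length)) ** 2

# Topp et al. (1980) calibration for volumetric water content (m^3/m^3)
theta = -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3
print(round(ka, 1), round(theta, 3))   # ~15.8, ~0.29 for these example values
```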
In geotechnical engineering
Time domain reflectometry has also been utilized to monitor slope movement in a variety of geotechnical settings, including highway cuts, rail beds, and open pit mines (Dowding & O'Connor, 1984, 2000a, 2000b; Kane & Beck, 1999). In TDR stability monitoring applications, a coaxial cable is installed in a vertical borehole passing through the region of concern. The electrical impedance at any point along a coaxial cable changes with deformation of the insulator between the conductors. A brittle grout surrounds the cable to translate earth movement into an abrupt cable deformation that shows up as a detectable peak in the reflectance trace. Until recently, the technique was relatively insensitive to small slope movements and could not be automated because it relied on human detection of changes in the reflectance trace over time. Farrington and Sargand (2004) developed a simple signal processing technique using numerical derivatives to extract reliable indications of slope movement from the TDR data much earlier than by conventional interpretation.
Another application of TDRs in geotechnical engineering is to determine the soil moisture content. This can be done by placing the TDRs in different soil layers and measuring the time at which precipitation starts and the time at which the TDR indicates an increase in the soil moisture content. The depth of the TDR probe (d) is known, and the time it takes the infiltrating water to reach that depth (t) is measured; the speed of water infiltration is therefore v = d/t. This is a good method to assess the effectiveness of Best Management Practices (BMPs) in reducing stormwater surface runoff.
In semiconductor device analysis
Time domain reflectometry is used in semiconductor failure analysis as a non-destructive method for the location of defects in semiconductor device packages. The TDR provides an electrical signature of individual conductive traces in the device package, and is useful for determining the location of opens and shorts.
In aviation wiring maintenance
Time domain reflectometry, specifically spread-spectrum time-domain reflectometry, is used on aviation wiring for both preventive maintenance and fault location. Spread-spectrum time-domain reflectometry has the advantage of precisely locating faults within thousands of miles of aviation wiring. Additionally, this technology is worth considering for real-time aviation monitoring, as spread-spectrum reflectometry can be employed on live wires.
This method has been shown to be useful to locating intermittent electrical faults.
Multi carrier time domain reflectometry (MCTDR) has also been identified as a promising method for embedded EWIS diagnosis or troubleshooting tools. Based on the injection of a multicarrier signal (respecting EMC and harmless for the wires), this smart technology provides information for the detection, localization and characterization of electrical defects (or mechanical defects having electrical consequences) in the wiring systems.
Hard fault (short, open circuit) or intermittent defects can be detected very quickly increasing the reliability of wiring systems and improving their maintenance.
See also
Frequency domain sensor
Murray loop bridge
Noise-domain reflectometry
Nicolson–Ross–Weir method
Optical time-domain reflectometer
Return loss
Standing wave ratio
References
Further reading
Hoekstra, P. and A. Delaney, 1974. "Dielectric properties of soils at UHF and microwave frequencies". Journal of Geophysical Research 79:1699–1708.
Smith, P., C. Furse, and J. Gunther, 2005. "Analysis of spread spectrum time domain reflectometry for wire fault location". IEEE Sensors Journal 5:1469–1478.
Waddoups, B., C. Furse and M. Schmidt. "Analysis of Reflectometry for Detection of Chafed Aircraft Wiring Insulation". Department of Electrical and Computer Engineering. Utah State University.
Noborio K. 2001. "Measurement of soil water content and electrical conductivity by time domain reflectometry: A review". Computers and Electronics in Agriculture 31:213–237.
Pettinelli E., A. Cereti, A. Galli, and F. Bella, 2002. "Time domain reflectometry: Calibration techniques for accurate measurement of the dielectric properties of various materials". Review of Scientific Instruments 73:3553–3562.
Robinson D.A., S.B. Jones, J.M. Wraith, D. Or and S.P. Friedman, 2003 "A review of advances in dielectric and electrical conductivity measurements in soils using time domain reflectometry". Vadose Zone Journal 2: 444–475.
Robinson, D. A., C. S. Campbell, J. W. Hopmans, B. K. Hornbuckle, Scott B. Jones, R. Knight, F. Ogden, J. Selker, and O. Wendroth, 2008. "Soil moisture measurement for ecological and hydrological watershed-scale observatories: A review." Vadose Zone Journal 7: 358-389.
Topp G.C., J.L. Davis and A.P. Annan, 1980. "Electromagnetic determination of soil water content: measurements in coaxial transmission lines". Water Resources Research 16:574–582.
Topp G.C. and W.D. Reynolds, 1998. "Time domain reflectometry: a seminal technique for measuring mass and energy in soil". Soil Tillage Research 47:125–132.
Topp, G.C. and T.P.A. Ferre, 2002. "Water content", in Methods of Soil Analysis. Part 4. (Ed. J.H. Dane and G.C. Topp), SSSA Book Series No. 5. Soil Science Society of America, Madison WI.
Dowding, C.H. & O'Connor, K.M. 2000a. "Comparison of TDR and Inclinometers for Slope Monitoring". Geotechnical Measurements—Proceedings of Geo-Denver2000: 80–81. Denver, CO.
Dowding, C.H. & O'Connor, K.M. 2000b. "Real Time Monitoring of Infrastructure using TDR Technology". Structural Materials Technology NDT Conference 2000
Kane, W.F. & Beck, T.J. 1999. "Advances in Slope Instrumentation: TDR and Remote Data Acquisition Systems". Field Measurements in Geomechanics, 5th International Symposium on Field Measurements in Geomechanics: 101–105. Singapore.
Farrington, S.P. and Sargand, S.M., "Advanced Processing of Time Domain Reflectometry for Improved Slope Stability Monitoring", Proceedings of the Eleventh Annual Conference on Tailings and Mine Waste, October, 2004.
Scarpetta, M.; Spadavecchia, M.; Adamo, F.; Ragolia, M.A.; Giaquinto, N. ″Detection and Characterization of Multiple Discontinuities in Cables with Time-Domain Reflectometry and Convolutional Neural Networks″. Sensors 2021, 21, 8032. https://doi.org/10.3390/s21238032
Duncan, D.; Trabold, T.A.; Mohr, C.L.; Berrett, M.K. "MEASUREMENT OF LOCAL VOID FRACTION AT ELEVATED TEMPERATURE AND PRESSURE". Third World Conference on Experimental Heat Transfer, Fluid Mechanics and Thermodynamics, Honolulu, Hawaii, USA, 31 October-5 November 1993. https://www.mohr-engineering.com/guided-radar-liquid-level-documents-EFP.php
External links
Radiodetection Extended Training – ABC's of TDR's
Work begins to repair severed net
TDR for Digital Cables – TDR for Microwave/RF and Digital Cables
TDR vs FDR: Distance to Fault
Electronic test equipment
Soil physics
Semiconductor analysis | Time-domain reflectometer | [
"Physics",
"Technology",
"Engineering"
] | 4,016 | [
"Applied and interdisciplinary physics",
"Electronic test equipment",
"Measuring instruments",
"Soil physics"
] |
41,799 | https://en.wikipedia.org/wiki/Time%20standard | A time standard is a specification for measuring time: either the rate at which time passes or points in time or both. In modern times, several time specifications have been officially recognized as standards, where formerly they were matters of custom and practice. An example of a kind of time standard can be a time scale, specifying a method for measuring divisions of time. A standard for civil time can specify both time intervals and time-of-day.
Standardized time measurements are made using a clock to count periods of some periodic change, which may be either the change of a natural phenomenon or of an artificial machine.
Historically, time standards were often based on the Earth's rotational period. From the late 18th century to the 19th century it was assumed that the Earth's daily rotational rate was constant. Astronomical observations of several kinds, including eclipse records, studied in the 19th century, raised suspicions that the rate at which Earth rotates is gradually slowing and also shows small-scale irregularities, and this was confirmed in the early twentieth century. Time standards based on Earth rotation were replaced (or initially supplemented) for astronomical use from 1952 onwards by an ephemeris time standard based on the Earth's orbital period and in practice on the motion of the Moon. The invention in 1955 of the caesium atomic clock has led to the replacement of older and purely astronomical time standards, for most practical purposes, by newer time standards based wholly or partly on atomic time.
Various types of second and day are used as the basic time interval for most time scales. Other intervals of time (minutes, hours, and years) are usually defined in terms of these two.
Terminology
The term "time" is generally used for many close but different concepts, including:
instant as an object – one point on the time axis. Being an object, it has no value;
date as a quantity characterising an instant. As a quantity, it has a value which may be expressed in a variety of ways, for example "2014-04-26T09:42:36,75" in ISO standard format, or more colloquially such as "today, 9:42 a.m.";
time interval as an object – part of the time axis limited by two instants. Being an object, it has no value;
duration as a quantity characterizing a time interval. As a quantity, it has a value, such as a number of minutes, or may be described in terms of the quantities (such as times and dates) of its beginning and end.
chronology, an ordered sequence of events in the past. Chronologies can be put into chronological groups (periodization). One of the most important systems of periodization is the geologic time scale, which is a system of periodizing the events that shaped the Earth and its life. Chronology, periodization, and interpretation of the past are together known as the study of history.
Definitions of the second
There have only ever been three definitions of the second: as a fraction of the day, as a fraction of an extrapolated year, and as the microwave frequency of a caesium atomic clock.
In early history, clocks were not accurate enough to track seconds. After the invention of mechanical clocks, the CGS system and MKS system of units both defined the second as 1/86,400 of a mean solar day. MKS was adopted internationally during the 1940s.
In the late 1940s, quartz crystal oscillator clocks could measure time more accurately than the rotation of the Earth. Metrologists also knew that Earth's orbit around the Sun (a year) was much more stable than Earth's rotation. This led to the definition of ephemeris time and the tropical year, and the ephemeris second was defined as "the fraction 1/31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time". This definition was adopted as part of the International System of Units in 1960.
Most recently, atomic clocks have been developed that offer improved accuracy. Since 1967, the SI base unit for time is the SI second, defined as exactly "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" (at a temperature of 0 K and at mean sea level). The SI second is the basis of all atomic timescales, e.g. coordinated universal time, GPS time, International Atomic Time, etc.
Current time standards
Geocentric Coordinate Time (TCG) is a coordinate time having its spatial origin at the center of Earth's mass. TCG is a theoretical ideal, and any particular realization will have measurement error.
International Atomic Time (TAI) is the primary physically realized time standard. TAI is produced by the International Bureau of Weights and Measures (BIPM), and is based on the combined input of many atomic clocks around the world, each corrected for environmental and relativistic effects (both gravitational and because of speed, like in GNSS). TAI is not related to TCG directly but rather is a realization of Terrestrial Time (TT), a theoretical timescale that is a rescaling of TCG such that the time rate approximately matches proper time at mean sea level.
Universal Time (UT1) is the Earth Rotation Angle (ERA) linearly scaled to match historical definitions of mean solar time at 0° longitude. At high precision, Earth's rotation is irregular and is determined from the positions of distant quasars using long baseline interferometry, laser ranging of the Moon and artificial satellites, as well as GPS satellite orbits.
Coordinated Universal Time (UTC) is an atomic time scale designed to approximate UT1. UTC differs from TAI by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the "leap second". To date these steps (and difference "TAI-UTC") have always been positive.
The Global Positioning System broadcasts a very precise time signal worldwide, along with instructions for converting GPS time (GPST) to UTC. It was defined with a constant offset from TAI: GPST = TAI - 19 s. The GPS time standard is maintained independently but is regularly synchronized with UTC.
Standard time or civil time in a time zone deviates a fixed, round amount, usually a whole number of hours, from some form of Universal Time, usually UTC. The offset is chosen such that a new day starts approximately while the Sun is crossing the nadir meridian. Alternatively the difference is not really fixed, but it changes twice a year by a round amount, usually one hour, see Daylight saving time.
Julian day number is a count of days elapsed since Greenwich mean noon on 1 January 4713 B.C., Julian proleptic calendar. The Julian Date is the Julian day number followed by the fraction of the day elapsed since the preceding noon. Conveniently for astronomers, this avoids the date skip during an observation night. Modified Julian day (MJD) is defined as MJD = JD - 2400000.5. An MJD day thus begins at midnight, civil date. Julian dates can be expressed in UT1, TAI, TT, etc. and so for precise applications the timescale should be specified, e.g. MJD 49135.3824 TAI.
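The day-count arithmetic is easy to check; the sketch below counts days from the MJD epoch (1858-11-17 00:00, which corresponds to JD 2400000.5) and deliberately ignores the choice of timescale and leap seconds:

```python
from datetime import datetime, timezone

MJD_EPOCH = datetime(1858, 11, 17, tzinfo=timezone.utc)   # JD 2400000.5

def to_mjd(t):
    return (t - MJD_EPOCH).total_seconds() / 86400.0

def to_jd(t):
    return to_mjd(t) + 2400000.5

t = datetime(2000, 1, 1, 12, 0, tzinfo=timezone.utc)
print(to_mjd(t), to_jd(t))   # 51544.5  2451545.0 (the J2000.0 epoch, timescale ignored)
```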
Barycentric Coordinate Time (TCB) is a coordinate time having its spatial origin at the center of mass of the Solar System, which is called the barycenter.
Conversions
Conversions between atomic time systems (TAI, GPST, and UTC) are for the most part exact. However, GPS time is a measured value as opposed to a computed "paper" scale. As such it may differ from UTC(USNO) by a few hundred nanoseconds, which in turn may differ from official UTC by as much as 26 nanoseconds. Conversions for UT1 and TT rely on published difference tables which are specified to 10 microseconds and 0.1 nanoseconds respectively.
Definitions:
LS = TAI − UTC = leap seconds from USNO Table of Leap Seconds
DUT1 = UT1 − UTC published in IERS Bulletins or U.S. Naval Observatory EO
DTT = TT − TAI − 32.184 s published in BIPM's TT(BIPM) tables.
TCG is linearly related to TT as: TCG − TT = LG × (JD − 2443144.5) × 86400 seconds, with the scale difference LG defined as 6.969290134 × 10^−10 exactly.
TCB is a linear transformation of TDB and TDB differs from TT in small, mostly periodic terms. Neglecting these terms (on the order of 2 milliseconds for several millennia around the present epoch), TCB is related to TT by: TCB − TT = LB × (JD − 2443144.5) × 86400 seconds. The scale difference LB has been defined by the IAU to be 1.550519768 × 10^−8 exactly.
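The conversion chain above can be collected into a short Python sketch; the leap-second count must be taken from the published tables (37 s has applied since 2017), DTT is approximated as zero, and the small periodic TDB terms are neglected as stated:

```python
LG = 6.969290134e-10
LB = 1.550519768e-08
T0_JD = 2443144.5            # 1977 January 1.0 TAI, the epoch used in the definitions
SECONDS_PER_DAY = 86400.0

def tai_from_utc(utc_sec, leap_seconds=37):   # 37 s valid since 2017; check current tables
    return utc_sec + leap_seconds

def gpst_from_tai(tai_sec):
    return tai_sec - 19.0                     # GPST = TAI - 19 s

def tt_from_tai(tai_sec, dtt=0.0):            # DTT from the TT(BIPM) tables, ~microseconds
    return tai_sec + 32.184 + dtt

def tcg_minus_tt(jd_tt):
    return LG * (jd_tt - T0_JD) * SECONDS_PER_DAY

def tcb_minus_tt(jd_tt):                      # periodic TDB terms neglected
    return LB * (jd_tt - T0_JD) * SECONDS_PER_DAY

print(round(tcg_minus_tt(2451545.0), 3))      # ~0.506 s accumulated by J2000.0
```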
Time standards based on Earth rotation
Apparent solar time or true solar time is based on the solar day, which is the period between one solar noon (passage of the real Sun across the meridian) and the next. A solar day is approximately 24 hours of mean time. Because the Earth's orbit around the Sun is elliptical, and because of the obliquity of the Earth's axis relative to the plane of the orbit (the ecliptic), the apparent solar day varies a few dozen seconds above or below the mean value of 24 hours. As the variation accumulates over a few weeks, there are differences as large as 16 minutes between apparent solar time and mean solar time (see Equation of time). However, these variations cancel out over a year. There are also other perturbations such as Earth's wobble, but these are less than a second per year.
Sidereal time is time by the stars. A sidereal rotation is the time it takes the Earth to make one rotation with respect to the stars, approximately 23 hours 56 minutes 4 seconds. A mean solar day is about 3 minutes 56 seconds longer than a mean sidereal day. In astronomy, sidereal time is used to predict when a star will reach its highest point in the sky. For accurate astronomical work on land, it was usual to observe sidereal time rather than solar time to measure mean solar time, because the observations of 'fixed' stars could be measured and reduced more accurately than observations of the Sun (in spite of the need to make various small compensations, for refraction, aberration, precession, nutation and proper motion). It is well known that observations of the Sun pose substantial obstacles to the achievement of accuracy in measurement. In former times, before the distribution of accurate time signals, it was part of the routine work at any observatory to observe the sidereal times of meridian transit of selected 'clock stars' (of well-known position and movement), and to use these to correct observatory clocks running local mean sidereal time; but nowadays local sidereal time is usually generated by computer, based on time signals.
Mean solar time was a time standard used especially at sea for navigational purposes, calculated by observing apparent solar time and then adding to it a correction, the equation of time, which compensated for two known irregularities in the length of the day, caused by the ellipticity of the Earth's orbit and the obliquity of the Earth's equator and polar axis to the ecliptic (which is the plane of the Earth's orbit around the sun). It has been superseded by Universal Time.
Greenwich Mean Time was originally mean time deduced from meridian observations made at the Royal Greenwich Observatory (RGO). The principal meridian of that observatory was chosen in 1884 by the International Meridian Conference to be the Prime Meridian. GMT either by that name or as 'mean time at Greenwich' used to be an international time standard, but is no longer so; it was initially renamed in 1928 as Universal Time (UT) (partly as a result of ambiguities arising from the changed practice of starting the astronomical day at midnight instead of at noon, adopted as from 1 January 1925). UT1 is still in reality mean time at Greenwich. Today, GMT is a time zone but is still the legal time in the UK in winter (and as adjusted by one hour for summer time). But Coordinated Universal Time (UTC) (an atomic-based time scale which is always kept within 0.9 second of UT1) is in common actual use in the UK, and the name GMT is often used to refer to it. (See articles Greenwich Mean Time, Universal Time, Coordinated Universal Time and the sources they cite.)
Versions of Universal Time such as UT0 and UT2 have been defined but are no longer in use.
Time standards for planetary motion calculations
Ephemeris time (ET) and its successor time scales described below have all been intended for astronomical use, e.g. in planetary motion calculations, with aims including uniformity, in particular, freedom from irregularities of Earth rotation. Some of these standards are examples of dynamical time scales and/or of coordinate time scales. Ephemeris Time was from 1952 to 1976 an official time scale standard of the International Astronomical Union; it was a dynamical time scale based on the orbital motion of the Earth around the Sun, from which the ephemeris second was derived as a defined fraction of the tropical year. This ephemeris second was the standard for the SI second from 1956 to 1967, and it was also the source for calibration of the caesium atomic clock; its length has been closely duplicated, to within 1 part in 10^10, in the size of the current SI second referred to atomic time. This Ephemeris Time standard was non-relativistic and did not fulfil growing needs for relativistic coordinate time scales. It was in use for the official almanacs and planetary ephemerides from 1960 to 1983, and was replaced in official almanacs for 1984 and after, by numerically integrated Jet Propulsion Laboratory Development Ephemeris DE200 (based on the JPL relativistic coordinate time scale Teph).
For applications at the Earth's surface, ET's official replacement was Terrestrial Dynamical Time (TDT), which maintained continuity with it. TDT is a uniform atomic time scale, whose unit is the SI second. TDT is tied in its rate to the SI second, as is International Atomic Time (TAI), but because TAI was somewhat arbitrarily defined at its inception in 1958 to be initially equal to a refined version of UT, TDT was offset from TAI, by a constant 32.184 seconds. The offset provided a continuity from Ephemeris Time to TDT. TDT has since been redefined as Terrestrial Time (TT).
For the calculation of ephemerides, Barycentric Dynamical Time (TDB) was officially recommended to replace ET. TDB is similar to TDT but includes relativistic corrections that move the origin to the barycenter, hence it is a dynamical time at the barycenter. TDB differs from TT only in periodic terms. The difference is at most 2 milliseconds. Deficiencies were found in the definition of TDB (though not affecting Teph), and TDB has been replaced by Barycentric Coordinate Time (TCB) and Geocentric Coordinate Time (TCG), and redefined to be JPL ephemeris time argument Teph, a specific fixed linear transformation of TCB. As defined, TCB (as observed from the Earth's surface) is of divergent rate relative to all of ET, Teph and TDT/TT; and the same is true, to a lesser extent, of TCG. The ephemerides of Sun, Moon and planets in current widespread and official use continue to be those calculated at the Jet Propulsion Laboratory (updated as from 2003 to DE405) using as argument Teph.
See also
Atomic clock
Clock synchronization
Clock signal
Epoch (astronomy)
Frequency standard
Radio clock
Time in astronomy
Time signal
Time metrology
Time transfer
Timekeeping on Mars
Orbital period as unit of time
Notes
References
Citations
Sources
Explanatory Supplement to the Astronomical Almanac, P. K. Seidelmann, ed., University Science Books, 1992, .
External links
Current time according to the observatory (get the current time)
Systems of Time by Demetrios Matsakis, Director, Time Service Dept., United States Naval Observatory
USNO article on the definition of seconds and leap seconds
A history of astronomical time scales by Steve Allen
Why is a minute divided into 60 seconds, an hour into 60 minutes, yet there are only 24 hours in a day? Ask the Experts – March 5, 2021. Scientific American
Timekeeping | Time standard | [
"Physics",
"Astronomy"
] | 3,518 | [
"Physical quantities",
"Time",
"Timekeeping",
"Astronomical coordinate systems",
"Spacetime",
"Time scales"
] |
41,800 | https://en.wikipedia.org/wiki/T%20interface | A T-interface or T reference point is used for basic rate access in an Integrated Services Digital Network (ISDN) environment. It is a User–network interface reference point that is characterized by a four-wire, 144 kbit/s (2B+D) user rate.
Other characteristics of a T-interface are:
it accommodates the link access and transport layer function in the ISDN architecture
it is located at the user premises
it is distance-sensitive to the servicing network termination 1 (NT1)
it functions in a manner similar to that of the Channel service units (CSUs) and the Data service units (DSUs).
The T interface is electrically equivalent to the S interface, and the two are jointly referred to as the S/T interface.
See also
R interface
S interface
U interface
References
Networking hardware
Integrated Services Digital Network | T interface | [
"Technology",
"Engineering"
] | 170 | [
"Computing stubs",
"Computer networks engineering",
"Computer hardware stubs",
"Networking hardware"
] |
41,801 | https://en.wikipedia.org/wiki/Toll%20switching%20trunk | In telecommunications, a toll switching trunk or toll connecting trunk is a trunk connecting an end office to a toll center as the first stage of concentration for intertoll or long-distance traffic.
Operator assistance or participation may be an optional function. In U.S. common carrier telephony service, a toll center designated Class 4C is an office where assistance in completing incoming calls is provided in addition to other traffic; a toll center designated Class 4P is an office where operators handle only outbound calls, or where switching is performed without operator assistance.
References
See also
Class-4 telephone switch
Communication circuits
Telecommunications economics | Toll switching trunk | [
"Engineering"
] | 125 | [
"Telecommunications engineering",
"Communication circuits"
] |
41,802 | https://en.wikipedia.org/wiki/Total%20harmonic%20distortion | The total harmonic distortion (THD or THDi) is a measurement of the harmonic distortion present in a signal and is defined as the ratio of the sum of the powers of all harmonic components to the power of the fundamental frequency. Distortion factor, a closely related term, is sometimes used as a synonym.
In audio systems, lower distortion means that the components in a loudspeaker, amplifier or microphone or other equipment produce a more accurate reproduction of an audio recording.
In radio communications, devices with lower THD tend to produce less unintentional interference with other electronic devices. Since harmonic distortion can potentially widen the frequency spectrum of the output emissions from a device by adding signals at multiples of the input frequency, devices with high THD are less suitable in applications such as spectrum sharing and spectrum sensing.
In power systems, lower THD implies lower peak currents, less heating, lower electromagnetic emissions, and less core loss in motors. IEEE Standard 519-2022 covers the recommended practice and requirements for harmonic control in electric power systems.
Definitions and examples
To understand a system with an input and an output, such as an audio amplifier, we start with an ideal system where the transfer function is linear and time-invariant. When a sinusoidal signal of frequency ω passes through a non-ideal, non-linear device, additional content is added at multiples nω (harmonics) of the original frequency. THD is a measure of that additional signal content not present in the input signal.
When the main performance criterion is the "purity" of the original sine wave (in other words, the contribution of the original frequency with respect to its harmonics), the measurement is most commonly defined as the ratio of the RMS amplitude of a set of higher harmonic frequencies to the RMS amplitude of the first harmonic, or fundamental frequency
THDF = sqrt(V2^2 + V3^2 + V4^2 + ···) / V1
where Vn is the RMS value of the nth harmonic voltage, and V1 is the RMS value of the fundamental component.
In practice, the THDF is commonly used in audio distortion specifications (percentage THD); however, THD is a non-standardized specification, and the results between manufacturers are not easily comparable. Since individual harmonic amplitudes are measured, it is required that the manufacturer disclose the test signal frequency range, level and gain conditions, and number of measurements taken. It is possible to measure the full 20 Hz–20 kHz range using a sweep (though distortion for a fundamental above 10 kHz is inaudible).
Measurements for calculating the THD are made at the output of a device under specified conditions. The THD is usually expressed in percent or in dB relative to the fundamental as distortion attenuation.
A variant definition uses the fundamental plus harmonics as the reference:
THDR = sqrt(V2^2 + V3^2 + V4^2 + ···) / sqrt(V1^2 + V2^2 + V3^2 + V4^2 + ···)
These can be distinguished as THDF (for "fundamental"), and THDR (for "root mean square"). THDR cannot exceed 100%. At low distortion levels, the difference between the two calculation methods is negligible. For instance, a signal with THDF of 10% has a very similar THDR of 9.95%. However, at higher distortion levels the discrepancy becomes large. For instance, a signal with THDF 266% has a THDR of 94%. A pure square wave with infinite harmonics has THDF of 48.3% and THDR of 43.5%.
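The two quantities are related by THDR = THDF / sqrt(1 + THDF^2), which reproduces the figures quoted above; a short check in Python:

```python
def thd_r_from_thd_f(thd_f):
    return thd_f / (1.0 + thd_f ** 2) ** 0.5

for thd_f in (0.10, 2.66, 0.483):
    print(f"THD_F = {thd_f:.3f}  ->  THD_R = {thd_r_from_thd_f(thd_f):.4f}")
# 0.100 -> 0.0995   2.660 -> 0.9360   0.483 -> 0.4349 (43.5%)
```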
Some use the term "distortion factor" as a synonym for THDR, while others use it as a synonym for THDF.
The International Electrotechnical Commission (IEC) also defines another term total harmonic factor for the "ratio of the RMS value of the harmonic content of an alternating quantity to the RMS value of the quantity" using a different equation.
THD+N
THD+N means total harmonic distortion plus noise. This measurement is much more common and more comparable between devices. It is usually measured by inputting a sine wave, notch-filtering the output, and comparing the ratio between the notch-filtered residual (everything except the sine wave, i.e. harmonics plus noise) and the total output signal.
Like the THD measurement, this is a ratio of RMS amplitudes and can be measured as THDF (bandpassed or calculated fundamental as the denominator) or, more commonly, as THDR (total distorted signal as the denominator).
A meaningful measurement must include the bandwidth of measurement. This measurement includes effects from ground-loop power-line hum, high-frequency interference, intermodulation distortion between these tones and the fundamental, and so on, in addition to harmonic distortion. For psychoacoustic measurements, a weighting curve is applied such as A-weighting or ITU-R BS.468, which is intended to accentuate what is most audible to the human ear, contributing to a more accurate measurement. A-weighting is a rough way to estimate the frequency sensitivity of the human ear, as it does not take into account the ear's non-linear behavior. The loudness model proposed by Zwicker includes these complexities. The model is described in the German standard DIN 45631.
For a given input frequency and amplitude, THD+N is reciprocal to SINAD, provided that both measurements are made over the same bandwidth.
Measurement
The distortion of a waveform relative to a pure sinewave can be measured either by using a THD analyzer to analyse the output wave into its constituent harmonics and noting the amplitude of each relative to the fundamental; or by cancelling out the fundamental with a notch filter and measuring the remaining signal, which will be total aggregate harmonic distortion plus noise.
Given a sinewave generator of very low inherent distortion, it can be used as input to amplification equipment, whose distortion at different frequencies and signal levels can be measured by examining the output waveform.
There is electronic equipment both to generate sinewaves and to measure distortion; but a general-purpose digital computer equipped with a sound card can carry out harmonic analysis with suitable software. Different software can be used to generate sinewaves, but the inherent distortion may be too high for measurement of very low-distortion amplifiers.
Interpretation
For many purposes, different types of harmonics are not equivalent. For instance, crossover distortion at a given THD is much more audible than clipping distortion at the same THD, since the harmonics produced by crossover distortion are nearly as strong at higher-frequency harmonics, such as 10× to 20× the fundamental, as they are at lower-frequency harmonics like 3× or 5× the fundamental. Those harmonics appearing far away in frequency from a fundamental (desired signal) are not as easily masked by that fundamental. In contrast, at the onset of clipping, harmonics first appear at low-order frequencies and gradually start to occupy higher-frequency harmonics. A single THD number is therefore inadequate to specify audibility and must be interpreted with care. Taking THD measurements at different output levels would expose whether the distortion is clipping (which decreases with a decreasing level) or crossover (which stays constant with varying output level, and thus is a greater percentage of the sound produced at low volumes).
THD is a summation of a number of harmonics equally weighted, even though research performed decades ago identifies that lower-order harmonics are harder to hear at the same level, compared with higher-order ones. In addition, even-order harmonics are said to be generally harder to hear than odd-order. A number of methods have been developed to estimate the actual audibility of THD, used to quantify crossover distortion or loudspeaker rub and buzz, such as "high-order harmonic distortion" (HOHD) or "higher harmonic distortion" (HHD) which measures only the 10th and higher harmonics, or metrics that apply psychoacoustic loudness curves to the residual.
Examples
For many standard signals, the above criterion may be calculated analytically in a closed form. For example, a pure square wave has THDF equal to sqrt(π^2/8 − 1) ≈ 0.483, i.e. about 48.3%.
The sawtooth signal possesses THDF = sqrt(π^2/6 − 1) ≈ 0.803, i.e. about 80.3%.
The pure symmetrical triangle wave has THDF = sqrt(π^4/96 − 1) ≈ 0.121, i.e. about 12.1%.
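These closed-form values can be checked numerically from the harmonic series of each waveform (square: odd harmonics falling off as 1/n; sawtooth: all harmonics as 1/n; triangle: odd harmonics as 1/n^2) — a sketch:

```python
def thd_f(amplitudes):
    """THD_F from a list of harmonic amplitudes [V1, V2, V3, ...]."""
    v1, rest = amplitudes[0], amplitudes[1:]
    return sum(v * v for v in rest) ** 0.5 / v1

N = 100_000
square   = [1.0 / n if n % 2 else 0.0 for n in range(1, N)]
sawtooth = [1.0 / n for n in range(1, N)]
triangle = [1.0 / n ** 2 if n % 2 else 0.0 for n in range(1, N)]

for name, h in (("square", square), ("sawtooth", sawtooth), ("triangle", triangle)):
    print(name, round(thd_f(h), 3))
# square 0.483, sawtooth 0.803, triangle 0.121
```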
For the rectangular pulse train with the duty cycle μ (sometimes called the cyclic ratio), the THDF has the form
THDF(μ) = sqrt(π^2 μ(1 − μ) / (2 sin^2(πμ)) − 1)
and logically, reaches the minimum (≈0.483) when the signal becomes symmetrical, μ = 0.5, i.e. the pure square wave. Appropriate filtering of these signals may drastically reduce the resulting THD. For instance, the pure square wave filtered by the Butterworth low-pass filter of the second order (with the cutoff frequency set equal to the fundamental frequency) has THDF of 5.3%, while the same signal filtered by the fourth-order filter has THDF of 0.6%. However, analytic computation of the THDF for complicated waveforms and filters often represents a difficult task, and the resulting expressions may be quite laborious to obtain. For example, the closed-form expression for the THDF of the sawtooth wave filtered by the first-order Butterworth low-pass filter is simply sqrt(π^2/3 − π coth(π)) ≈ 0.370,
while that for the same signal filtered by the second-order Butterworth filter is given by a rather cumbersome formula
Yet, the closed-form expression for the THDF of the pulse train filtered by the pth-order Butterworth low-pass filter is even more complicated and has the following form:
where μ is the duty cycle, 0 < μ < 1, and
See also
Audio system measurements
Signal-to-noise ratio
Timbre
References
External links
Conversion: Distortion attenuation in dB to distortion factor THD in %
Swept Harmonic Distortion Measurements
Harmonic Distortion Measurements in the Presence of Noise
Electrical parameters
Audio amplifier specifications | Total harmonic distortion | [
"Engineering"
] | 1,954 | [
"Electronic engineering",
"Electrical engineering",
"Audio engineering",
"Audio amplifier specifications",
"Electrical parameters"
] |
41,805 | https://en.wikipedia.org/wiki/Transceiver | In radio communication, a transceiver is an electronic device which is a combination of a radio transmitter and a receiver, hence the name. It can both transmit and receive radio waves using an antenna, for communication purposes. These two related functions are often combined in a single device to reduce manufacturing costs. The term is also used for other devices which can both transmit and receive through a communications channel, such as optical transceivers which transmit and receive light in optical fiber systems, and bus transceivers which transmit and receive digital data in computer data buses.
Radio transceivers are widely used in wireless devices. One large use is in two-way radios, which are audio transceivers used for bidirectional person-to-person voice communication. Examples are cell phones, which transmit and receive the two sides of a phone conversation using radio waves to a cell tower, cordless phones in which both the phone handset and the base station have transceivers to communicate both sides of the conversation, and land mobile radio systems like walkie-talkies and CB radios. Another large use is in wireless modems in mobile networked computer devices such as laptops, tablets, and cellphones, which both transmit digital data to and receive data from a wireless router. Aircraft carry automated microwave transceivers called transponders which, when they are triggered by microwaves from an air traffic control radar, transmit a coded signal back to the radar to identify the aircraft. Satellite transponders in communication satellites receive digital telecommunication data from a satellite ground station, and retransmit it to another ground station.
History
The transceiver first appeared in the 1920s. Before then, receivers and transmitters were manufactured separately, and any station that needed to both receive and transmit required both units. Almost all amateur radio equipment today uses transceivers, but there is an active market for pure radio receivers, which are mainly used by shortwave listening operators.
Analog
Analog transceivers typically use frequency modulation to send and receive data. Although this technique limits the complexity of the data that can be broadcast, analog transceivers operate very reliably and are used in many emergency communication systems. They are also cheaper than digital transceivers, which makes them popular with the CB and amateur (ham) radio communities.
Digital
Digital transceivers send and receive binary data over radio waves. This allows more types of data to be broadcast, including video and encrypted communication, which is commonly used by police and fire departments. Digital transmissions tend to be clearer and more detailed than their analog counterparts. Many modern wireless devices operate on digital transmissions.
Usage
Telephony
In a wired telephone, the handset contains the transmitter (for speaking) and receiver (for listening). Despite being able to transmit and receive data, the whole unit is colloquially referred to as a "receiver". On a mobile telephone or other radiotelephone, the entire unit is a transceiver for both audio and radio.
A cordless telephone uses an audio and radio transceiver for the handset, and a radio transceiver for the base station. If a speakerphone is included in a wired telephone base or in a cordless base station, the base also becomes an audio transceiver.
A modem is similar to a transceiver in that it sends and receives a signal, but a modem uses modulation and demodulation. It modulates the signal being transmitted and demodulates the signal being received.
Ethernet
Transceivers are called Medium Attachment Units (MAUs) in IEEE 802.3 documents and were widely used in 10BASE2 and 10BASE5 Ethernet networks. Fiber-optic gigabit, 10 Gigabit Ethernet, 40 Gigabit Ethernet, and 100 Gigabit Ethernet utilize GBIC, SFP, SFP+, QSFP, XFP, XAUI, CXP, and CFP transceiver systems.
Regulation
Because transceivers are capable of broadcasting information over airwaves, they are required to adhere to various regulations. In the United States, the Federal Communications Commission oversees their use. Transceivers must meet certain standards and capabilities depending on their intended use, and manufacturers must comply with these requirements. However, transceivers can be modified by users to violate FCC regulations. For instance, they might be used to broadcast on a frequency or channel that they should not have access to. For this reason, the FCC monitors not only the production but also the use of these devices.
See also
Two-way radio
4P4C, de facto standard connector for telephone handsets
Duplex, two-way communications capability
Radar beacon
Transmitter
Radio transmitter design
Radio receiver
Radio receiver design
Transponder
References
Rutledge, D. (1999). The electronics of radio. Cambridge [England]; New York: Cambridge University Press.
Reinhart, R. C. K. (2004). Reconfigurable transceiver and software-defined radio architecture and technology evaluated for NASA space communications. https://ntrs.nasa.gov/search.jsp?R=20050215177
Govinfo. (n.d.). Retrieved February 29, 2020, from https://www.govinfo.gov/app/details/CFR-2010-title47-vol1/CFR-2010-title47-vol1-sec2-926
Haring, K. (2007). Ham radio's technical culture (Inside technology). Cambridge, Mass.: MIT Press.
External links
John Stone Stone, "Apparatus for simultaneously transmitting and receiving space telegraph signals"
7 MHz SSB transceiver
Networking hardware
Radio electronics
Telecommunications equipment | Transceiver | [
"Engineering"
] | 1,173 | [
"Radio electronics",
"Computer networks engineering",
"Networking hardware"
] |
41,810 | https://en.wikipedia.org/wiki/Transmission%20level%20point | In telecommunications, a transmission level point (TLP) is a test point in an electronic circuit that is typically a transmission channel. At the TLP, a test signal may be introduced or measured. Various parameters, such as the power of the signal, noise, voltage levels, wave forms, may be measured at the TLP.
The nominal transmission level at a TLP is a function of system design and is an expression of the design gain or attenuation (loss).
Voice-channel transmission levels at test points are measured in decibel-milliwatts (dBm) at a frequency of ~1000 hertz.
The dBm is an absolute power-level measurement with respect to 1 mW. When the nominal signal power at the TLP is 0 dBm (1 mW), the test point is called a zero transmission level point, or zero-dBm TLP. The abbreviation dBm0 stands for the power in dBm measured at a zero transmission level point. The TLP is thus characterized by the relation:
TLP = dBm − dBm0
The term TLP is commonly used as if it were a unit, preceded by the nominal level for the test point. For example, the expression "+7 TLP" refers to a test point whose nominal level is 7 dB above that of a zero transmission level point. If for instance a signal is specified as −13 dBm0 at a particular point and −6 dBm is measured at that point, the TLP is +7 TLP.
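The relation above is a plain subtraction; a minimal sketch reproducing the example (the function name is illustrative):

```python
def tlp(measured_dbm: float, specified_dbm0: float) -> float:
    """Transmission level point: measured level in dBm minus specified level in dBm0."""
    return measured_dbm - specified_dbm0

# A signal specified as -13 dBm0 measures -6 dBm at the test point:
print(f"{tlp(-6.0, -13.0):+.0f} TLP")   # -> +7 TLP
```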
The level at a TLP where an end instrument, such as a telephone set, is connected is usually specified as 0 TLP.
See also
Alignment level
Nominal level
Digital milliwatt
References
External links
Zero transmission level point
Telecommunications engineering | Transmission level point | [
"Engineering"
] | 333 | [
"Electrical engineering",
"Telecommunications engineering"
] |
41,811 | https://en.wikipedia.org/wiki/Transmission%20line | In electrical engineering, a transmission line is a specialized cable or other structure designed to conduct electromagnetic waves in a contained manner. The term applies when the conductors are long enough that the wave nature of the transmission must be taken into account. This applies especially to radio-frequency engineering because the short wavelengths mean that wave phenomena arise over very short distances (this can be as short as millimetres depending on frequency). However, the theory of transmission lines was historically developed to explain phenomena on very long telegraph lines, especially submarine telegraph cables.
Transmission lines are used for purposes such as connecting radio transmitters and receivers with their antennas (they are then called feed lines or feeders), distributing cable television signals, trunklines routing calls between telephone switching centres, computer network connections and high speed computer data buses. RF engineers commonly use short pieces of transmission line, usually in the form of printed planar transmission lines, arranged in certain patterns to build circuits such as filters. These circuits, known as distributed-element circuits, are an alternative to traditional circuits using discrete capacitors and inductors.
Overview
Ordinary electrical cables suffice to carry low frequency alternating current (AC), such as mains power, which reverses direction 100 to 120 times per second, and audio signals. However, they are not generally used to carry currents in the radio frequency range, above about 30 kHz, because the energy tends to radiate off the cable as radio waves, causing power losses. Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors and joints, and travel back down the cable toward the source. These reflections act as bottlenecks, preventing the signal power from reaching the destination. Transmission lines use specialized construction, and impedance matching, to carry electromagnetic signals with minimal reflections and power losses. The distinguishing feature of most transmission lines is that they have uniform cross sectional dimensions along their length, giving them a uniform impedance, called the characteristic impedance, to prevent reflections. Types of transmission line include parallel line (ladder line, twisted pair), coaxial cable, and planar transmission lines such as stripline and microstrip. The higher the frequency of electromagnetic waves moving through a given cable or medium, the shorter the wavelength of the waves. Transmission lines become necessary when the transmitted frequency's wavelength is sufficiently short that the length of the cable becomes a significant part of a wavelength.
At frequencies of microwave and higher, power losses in transmission lines become excessive, and waveguides are used instead, which function as "pipes" to confine and guide the electromagnetic waves. Some sources define waveguides as a type of transmission line; however, this article will not include them.
History
Mathematical analysis of the behaviour of electrical transmission lines grew out of the work of James Clerk Maxwell, Lord Kelvin, and Oliver Heaviside. In 1855, Lord Kelvin formulated a diffusion model of the current in a submarine cable. The model correctly predicted the poor performance of the 1858 trans-Atlantic submarine telegraph cable. In 1885, Heaviside published the first papers that described his analysis of propagation in cables and the modern form of the telegrapher's equations.
The four terminal model
For the purposes of analysis, an electrical transmission line can be modelled as a two-port network (also called a quadripole), as follows:
In the simplest case, the network is assumed to be linear (i.e. the complex voltage across either port is proportional to the complex current flowing into it when there are no reflections), and the two ports are assumed to be interchangeable. If the transmission line is uniform along its length, then its behaviour is largely described by two parameters called the characteristic impedance, symbol Z0, and the propagation delay. Z0 is the ratio of the complex voltage of a given wave to the complex current of the same wave at any point on the line. Typical values of Z0 are 50 or 75 ohms for a coaxial cable, about 100 ohms for a twisted pair of wires, and about 300 ohms for a common type of untwisted pair used in radio transmission. Propagation delay is proportional to the length of the transmission line and is never less than the length divided by the speed of light; for modern communication transmission lines it amounts to a few nanoseconds per metre.
When sending power down a transmission line, it is usually desirable that as much power as possible will be absorbed by the load and as little as possible will be reflected back to the source. This can be ensured by making the load impedance equal to Z0, in which case the transmission line is said to be matched.
Some of the power that is fed into a transmission line is lost because of its resistance. This effect is called ohmic or resistive loss (see ohmic heating). At high frequencies, another effect called dielectric loss becomes significant, adding to the losses caused by resistance. Dielectric loss is caused when the insulating material inside the transmission line absorbs energy from the alternating electric field and converts it to heat (see dielectric heating). The transmission line is modelled with a resistance (R) and inductance (L) in series with a capacitance (C) and conductance (G) in parallel. The resistance and conductance contribute to the loss in a transmission line.
The total loss of power in a transmission line is often specified in decibels per metre (dB/m), and usually depends on the frequency of the signal. The manufacturer often supplies a chart showing the loss in dB/m at a range of frequencies. A loss of 3 dB corresponds approximately to a halving of the power.
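A quick way to turn a quoted loss figure into a power ratio (the values below are illustrative, not from any particular cable datasheet):

```python
def surviving_power_fraction(loss_db: float) -> float:
    """Fraction of the input power that reaches the far end for a given loss in dB."""
    return 10 ** (-loss_db / 10)

loss_db_per_m = 0.1                     # assumed cable loss at some frequency
length_m = 30.0
total_db = loss_db_per_m * length_m
print(f"{total_db:.1f} dB -> {100 * surviving_power_fraction(total_db):.1f} % of power delivered")
print(f"3.0 dB -> {100 * surviving_power_fraction(3.0):.1f} % (approximately half)")
```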
Propagation delay is often specified in units of nanoseconds per metre. While propagation delay usually depends on the frequency of the signal, transmission lines are typically operated over frequency ranges where the propagation delay is approximately constant.
Telegrapher's equations
The telegrapher's equations (or just telegraph equations) are a pair of linear differential equations which describe the voltage () and current () on an electrical transmission line with distance and time. They were developed by Oliver Heaviside who created the transmission line model, and are based on Maxwell's equations.
The transmission line model is an example of the distributed-element model. It represents the transmission line as an infinite series of two-port elementary components, each representing an infinitesimally short segment of the transmission line:
The distributed resistance of the conductors is represented by a series resistor (expressed in ohms per unit length).
The distributed inductance (due to the magnetic field around the wires, self-inductance, etc.) is represented by a series inductor (in henries per unit length).
The capacitance between the two conductors is represented by a shunt capacitor (in farads per unit length).
The conductance of the dielectric material separating the two conductors is represented by a shunt resistor between the signal wire and the return wire (in siemens per unit length).
The model consists of an infinite series of the elements shown in the figure, and the values of the components are specified per unit length, so the picture of the component can be misleading. R, L, C, and G may also be functions of frequency. An alternative notation is to use R′, L′, C′, and G′ to emphasize that the values are derivatives with respect to length. These quantities can also be known as the primary line constants to distinguish them from the secondary line constants derived from them, these being the propagation constant, attenuation constant and phase constant.
The line voltage and the current can be expressed in the frequency domain as
(see differential equation, angular frequency ω and imaginary unit j)
Special case of a lossless line
When the elements R and G are negligibly small the transmission line is considered as a lossless structure. In this hypothetical case, the model depends only on the L and C elements, which greatly simplifies the analysis. For a lossless transmission line, the second order steady-state telegrapher's equations are:
These are wave equations which have plane waves with equal propagation speed in the forward and reverse directions as solutions. The physical significance of this is that electromagnetic waves propagate down transmission lines and in general, there is a reflected component that interferes with the original signal. These equations are fundamental to transmission line theory.
General case of a line with losses
In the general case the loss terms, R and G, are both included, and the full form of the telegrapher's equations becomes:
where γ is the (complex) propagation constant. These equations are fundamental to transmission line theory. They are also wave equations, and have solutions similar to the special case, but which are a mixture of sines and cosines with exponential decay factors. Solving for the propagation constant in terms of the primary parameters R, L, G, and C gives:
and the characteristic impedance can be expressed as
The solutions for and are:
The constants must be determined from boundary conditions. For a voltage pulse , starting at and moving in the positive direction, then the transmitted pulse at position can be obtained by computing the Fourier Transform, , of , attenuating each frequency component by , advancing its phase by , and taking the inverse Fourier Transform. The real and imaginary parts of can be computed as
with
the right-hand expressions holding when neither , nor , nor is zero, and with
where atan2 is the everywhere-defined form of two-parameter arctangent function, with arbitrary value zero when both arguments are zero.
Alternatively, the complex square root can be evaluated algebraically, to yield:
and
with the plus or minus signs chosen opposite to the direction of the wave's motion through the conducting medium. ( is usually negative, since and are typically much smaller than and , respectively, so is usually positive. is always positive.)
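Numerically, the secondary quantities follow directly from the primary line constants; a minimal sketch with made-up R, L, G, C values (not from a real cable):

```python
import numpy as np

# Assumed primary line constants, per metre, at f = 1 MHz (illustrative only).
R, L, G, C = 0.5, 250e-9, 1e-9, 100e-12
w = 2 * np.pi * 1e6                      # angular frequency

Zs = R + 1j * w * L                      # series impedance per unit length
Yp = G + 1j * w * C                      # shunt admittance per unit length

gamma = np.sqrt(Zs * Yp)                 # propagation constant, alpha + j*beta
Z0 = np.sqrt(Zs / Yp)                    # characteristic impedance

print(f"alpha = {gamma.real:.3e} Np/m,  beta = {gamma.imag:.3e} rad/m")
print(f"Z0 = {abs(Z0):.1f} ohm,  phase velocity = {w / gamma.imag:.3e} m/s")
```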
Special, low loss case
For small losses and high frequencies, the general equations can be simplified: if $R \ll \omega L$ and $G \ll \omega C$, then
Since an advance in phase by is equivalent to a time delay by , can be simply computed as
Heaviside condition
The Heaviside condition is $\frac{G}{C} = \frac{R}{L}$.
If R, G, L, and C are constants that are not frequency dependent and the Heaviside condition is met,
then waves travel down the transmission line without dispersion distortion.
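A small sketch that checks how far an assumed line is from the Heaviside condition, and what series inductance per unit length would satisfy it (this is the rationale behind loading coils; all numbers are illustrative):

```python
# Assumed primary constants per metre (illustrative, not a real cable).
R, L, G, C = 0.5, 250e-9, 1e-9, 100e-12

print(f"R/L = {R / L:.3e} 1/s   G/C = {G / C:.3e} 1/s")   # equal when distortionless
L_needed = R * C / G                                      # inductance satisfying R/L = G/C
print(f"series inductance needed: {L_needed:.3e} H/m  (present: {L:.1e} H/m)")
```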
Input impedance of transmission line
The characteristic impedance of a transmission line is the ratio of the amplitude of a single voltage wave to its current wave. Since most transmission lines also have a reflected wave, the characteristic impedance is generally not the impedance that is measured on the line.
The impedance measured at a given distance from the load impedance may be expressed as
,
where is the propagation constant and is the voltage reflection coefficient measured at the load end of the transmission line. Alternatively, the above formula can be rearranged to express the input impedance in terms of the load impedance rather than the load voltage reflection coefficient:
.
Input impedance of lossless transmission line
For a lossless transmission line, the propagation constant is purely imaginary, γ = jβ, so the above formulas can be rewritten as
where $\beta = \frac{2\pi}{\lambda}$ is the wavenumber.
In calculating β, the wavelength is generally different inside the transmission line from what it would be in free space. Consequently, the velocity factor of the material the transmission line is made of needs to be taken into account when doing such a calculation.
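The lossless input-impedance formula is easy to evaluate numerically; the sketch below assumes an arbitrary 100 Ω load on a 50 Ω line, with the wavelength already corrected for the velocity factor:

```python
import math

def zin_lossless(ZL, Z0, length, wavelength):
    """Input impedance of a lossless line of given length terminated in ZL."""
    t = math.tan(2 * math.pi * length / wavelength)   # tan(beta * l)
    return Z0 * (ZL + 1j * Z0 * t) / (Z0 + 1j * ZL * t)

Z0, ZL, lam = 50.0, 100.0, 2.0                        # illustrative values
for frac in (0.0, 0.125, 0.25, 0.5):
    z = zin_lossless(ZL, Z0, frac * lam, lam)
    print(f"l = {frac:5.3f} wavelengths : Zin = {z.real:7.1f} {z.imag:+7.1f}j ohm")
```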
Special cases of lossless transmission lines
Half wave length
For the special case where $l = \frac{n\lambda}{2}$, where n is an integer (meaning that the length of the line is a multiple of half a wavelength), the expression reduces to the load impedance so that
$Z_\mathrm{in} = Z_L$ for all n. This includes the case when n = 0, meaning that the length of the transmission line is negligibly small compared to the wavelength. The physical significance of this is that the transmission line can be ignored (i.e. treated as a wire) in either case.
Quarter wave length
For the case where the length of the line is one quarter wavelength long, or an odd multiple of a quarter wavelength long, the input impedance becomes $Z_\mathrm{in} = \frac{Z_0^2}{Z_L}$.
Matched load
Another special case is when the load impedance is equal to the characteristic impedance of the line (i.e. the line is matched), in which case the impedance reduces to the characteristic impedance of the line so that
$Z_\mathrm{in} = Z_0$ for all $l$ and all $\lambda$.
Short
For the case of a shorted load (i.e. $Z_L = 0$), the input impedance is purely imaginary and a periodic function of position and wavelength (frequency): $Z_\mathrm{in} = j Z_0 \tan(\beta l)$.
Open
For the case of an open load (i.e. $Z_L \to \infty$), the input impedance is once again imaginary and periodic: $Z_\mathrm{in} = -j Z_0 \cot(\beta l)$.
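The sketch below evaluates the short- and open-circuit expressions at a few electrical lengths to show the purely reactive, periodic behaviour (the 50 Ω characteristic impedance is arbitrary):

```python
import math

Z0 = 50.0
for frac in (0.10, 0.20, 0.30, 0.45):        # line length as a fraction of a wavelength
    beta_l = 2 * math.pi * frac
    z_short = 1j * Z0 * math.tan(beta_l)     # shorted load
    z_open = -1j * Z0 / math.tan(beta_l)     # open load
    print(f"l = {frac:.2f} wavelengths   short: {z_short.imag:+8.1f}j ohm   open: {z_open.imag:+8.1f}j ohm")
```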
Matrix parameters
The simulation of transmission lines embedded into larger systems generally utilize admittance parameters (Y matrix), impedance parameters (Z matrix), and/or scattering parameters (S matrix) that embodies the full transmission line model needed to support the simulation.
Admittance parameters
Admittance (Y) parameters may be defined by applying a fixed voltage to one port (V1) of a transmission line with the other end shorted to ground, measuring the resulting current running into each port (I1, I2), and computing the admittance on each port as a ratio of I/V. The admittance parameter Y11 is I1/V1, and the admittance parameter Y12 is I2/V1. Since transmission lines are electrically passive and symmetric devices, Y12 = Y21, and Y11 = Y22.
For lossless and lossy transmission lines respectively, the Y parameter matrix is as follows:
Impedance parameters
Impedance (Z) parameters may be defined by applying a fixed current into one port (I1) of a transmission line with the other port open, measuring the resulting voltage on each port (V1, V2), and computing the impedance parameter Z11 as V1/I1 and the impedance parameter Z12 as V2/I1. Since transmission lines are electrically passive and symmetric devices, Z12 = Z21, and Z11 = Z22.
In the Y and Z matrix definitions, $Y = Z^{-1}$ and $Z = Y^{-1}$. Unlike ideal lumped 2-port elements (resistors, capacitors, inductors, etc.), which do not have defined Z parameters, transmission lines have an internal path to ground, which permits the definition of Z parameters.
For lossless and lossy transmission lines respectively, the Z parameter matrix is as follows:
Scattering parameters
Scattering (S) matrix parameters model the electrical behavior of the transmission line with matched loads at each termination.
For lossless and lossy transmission lines respectively, the S parameter matrix is as follows, using standard hyperbolic to circular complex translations.
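One convenient way to generate these matrices in practice is to build the segment's ABCD (chain) matrix and convert it to S-parameters for a chosen port impedance; the sketch below assumes equal reference impedances on both ports and made-up line values:

```python
import numpy as np

def line_s_matrix(Z0, gamma, length, Zp=50.0):
    """S-parameters of a uniform line segment, referenced to port impedance Zp."""
    gl = gamma * length
    A, B = np.cosh(gl), Z0 * np.sinh(gl)      # ABCD (chain) matrix of the segment
    C, D = np.sinh(gl) / Z0, np.cosh(gl)

    d = A + B / Zp + C * Zp + D               # standard ABCD -> S conversion
    S11 = (A + B / Zp - C * Zp - D) / d
    S21 = 2.0 / d
    S12 = 2.0 * (A * D - B * C) / d
    S22 = (-A + B / Zp - C * Zp + D) / d
    return np.array([[S11, S12], [S21, S22]])

# Illustrative lossy 75-ohm line, 10 m long, gamma = alpha + j*beta per metre.
S = line_s_matrix(75.0, 0.002 + 0.8j, 10.0, Zp=50.0)
print(np.round(S, 3))
```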
Variable definitions
In all matrix parameters above, the following variable definitions apply:
$Z_0$ = characteristic impedance
Zp = port impedance, or termination impedance
$\gamma$ = the propagation constant per unit length
$\alpha$ = attenuation constant in nepers per unit length
$\beta$ = wave number or phase constant in radians per unit length
$\omega$ = angular frequency in radians per second
$v$ = speed of propagation
$\lambda$ = wavelength in the chosen unit of length
L = inductance per unit length
C = capacitance per unit length
$\epsilon_{\mathrm{eff}}$ = effective dielectric constant
c = 299,792,458 metres / second = speed of light in a vacuum
Coupled transmission lines
Transmission lines may be placed in proximity to each other such that they electrically interact, such as two microstrip lines in close proximity. Such transmission lines are said to be coupled transmission lines. Coupled transmission lines are characterized by an even and odd mode analysis. The even mode is characterized by excitation of the two conductors with a signal of equal amplitude and phase. The odd mode is characterized by excitation with signals of equal and opposite magnitude. The even and odd modes each have their own characteristic impedances (Zoe, Zoo) and phase constants (βe, βo). Lossy coupled transmission lines have their own even and odd mode attenuation constants (αe, αo), which in turn lead to even and odd mode propagation constants (γe, γo).
Coupled matrix parameters
Coupled transmission lines may be modeled using the even and odd mode transmission line parameters defined in the prior paragraph, as shown with ports 1 and 2 on the input and ports 3 and 4 on the output.
Practical types
Coaxial cable
Coaxial lines confine virtually all of the electromagnetic wave to the area inside the cable. Coaxial lines can therefore be bent and twisted (subject to limits) without negative effects, and they can be strapped to conductive supports without inducing unwanted currents in them.
In radio-frequency applications up to a few gigahertz, the wave propagates in the transverse electric and magnetic mode (TEM) only, which means that the electric and magnetic fields are both perpendicular to the direction of propagation (the electric field is radial, and the magnetic field is circumferential). However, at frequencies for which the wavelength (in the dielectric) is significantly shorter than the circumference of the cable other transverse modes can propagate. These modes are classified into two groups, transverse electric (TE) and transverse magnetic (TM) waveguide modes. When more than one mode can exist, bends and other irregularities in the cable geometry can cause power to be transferred from one mode to another.
The most common use for coaxial cables is for television and other signals with bandwidth of multiple megahertz. In the middle 20th century they carried long distance telephone connections.
Planar lines
Planar transmission lines are transmission lines with conductors, or in some cases dielectric strips, that are flat, ribbon-shaped lines. They are used to interconnect components on printed circuits and integrated circuits working at microwave frequencies because the planar type fits in well with the manufacturing methods for these components. Several forms of planar transmission lines exist.
Microstrip
A microstrip circuit uses a thin flat conductor which is parallel to a ground plane. Microstrip can be made by having a strip of copper on one side of a printed circuit board (PCB) or ceramic substrate while the other side is a continuous ground plane. The width of the strip, the thickness of the insulating layer (PCB or ceramic) and the dielectric constant of the insulating layer determine the characteristic impedance. Microstrip is an open structure whereas coaxial cable is a closed structure.
Stripline
A stripline circuit uses a flat strip of metal which is sandwiched between two parallel ground planes. The insulating material of the substrate forms a dielectric. The width of the strip, the thickness of the substrate and the relative permittivity of the substrate determine the characteristic impedance of the strip which is a transmission line.
Coplanar waveguide
A coplanar waveguide consists of a center strip and two adjacent outer conductors, all three of them flat structures that are deposited onto the same insulating substrate and thus are located in the same plane ("coplanar"). The width of the center conductor, the distance between inner and outer conductors, and the relative permittivity of the substrate determine the characteristic impedance of the coplanar transmission line.
Balanced lines
A balanced line is a transmission line consisting of two conductors of the same type, and equal impedance to ground and other circuits. There are many formats of balanced lines, amongst the most common are twisted pair, star quad and twin-lead.
Twisted pair
Twisted pairs are commonly used for terrestrial telephone communications. In such cables, many pairs are grouped together in a single cable, from two to several thousand. The format is also used for data network distribution inside buildings, but the cable is more expensive because the transmission line parameters are tightly controlled.
Star quad
Star quad is a four-conductor cable in which all four conductors are twisted together around the cable axis. It is sometimes used for two circuits, such as 4-wire telephony and other telecommunications applications. In this configuration each pair uses two non-adjacent conductors. Other times it is used for a single, balanced line, such as audio applications and 2-wire telephony. In this configuration two non-adjacent conductors are terminated together at both ends of the cable, and the other two conductors are also terminated together.
When used for two circuits, crosstalk is reduced relative to cables with two separate twisted pairs.
When used for a single, balanced line, magnetic interference picked up by the cable arrives as a virtually perfect common mode signal, which is easily removed by coupling transformers.
The combined benefits of twisting, balanced signalling, and quadrupole pattern give outstanding noise immunity, especially advantageous for low signal level applications such as microphone cables, even when installed very close to a power cable. The disadvantage is that star quad, in combining two conductors, typically has double the capacitance of similar two-conductor twisted and shielded audio cable. High capacitance causes increasing distortion and greater loss of high frequencies as distance increases.
Twin-lead
Twin-lead consists of a pair of conductors held apart by a continuous insulator. By holding the conductors a known distance apart, the geometry is fixed and the line characteristics are reliably consistent. It has lower loss than coaxial cable because the characteristic impedance of twin-lead is generally higher than that of coaxial cable, leading to lower resistive losses due to the reduced current. However, it is more susceptible to interference.
Lecher lines
Lecher lines are a form of parallel conductor that can be used at UHF for creating resonant circuits. They are a convenient practical format that fills the gap between lumped components (used at HF/VHF) and resonant cavities (used at UHF/SHF).
Single-wire line
Unbalanced lines were formerly much used for telegraph transmission, but this form of communication has now fallen into disuse. Cables are similar to twisted pair in that many cores are bundled into the same cable but only one conductor is provided per circuit and there is no twisting. All the circuits on the same route use a common path for the return current (earth return). There is a power transmission version of single-wire earth return in use in many locations.
General applications
Signal transfer
Electrical transmission lines are very widely used to transmit high frequency signals over long or short distances with minimum power loss. One familiar example is the down lead from a TV or radio aerial to the receiver.
Transmission line circuits
A large variety of circuits can also be constructed with transmission lines including impedance matching circuits, filters, power dividers and directional couplers.
Stepped transmission line
A stepped transmission line is used for broad range impedance matching. It can be considered as multiple transmission line segments connected in series, with the characteristic impedance of each individual element taken to be $Z_{0,i}$. The input impedance can be obtained from the successive application of the chain relation
where $k_i$ is the wave number of the i-th transmission line segment, $l_i$ is the length of this segment, and $Z_{i+1}$ is the front-end impedance that loads the i-th segment.
Because the characteristic impedance of each transmission line segment $Z_{0,i}$ is often different from the impedance of the fourth, input cable $Z_0$ (only shown as an arrow marked $Z_0$ on the left side of the diagram above), the impedance transformation circle is off-centred along the $x$ axis of the Smith chart, whose impedance representation is usually normalized against $Z_0$.
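Applied from the load end back toward the source, the chain relation is a simple loop; the sketch below cascades two quarter-wave sections whose impedances (about 84 Ω and 59.5 Ω) were chosen, for illustration, to match a 100 Ω load to roughly 50 Ω at 1 GHz, assuming a propagation velocity of 2×10^8 m/s in every segment:

```python
import math

def zin_segment(ZL, Z0, k, length):
    """One application of the chain relation for a lossless segment."""
    t = math.tan(k * length)
    return Z0 * (ZL + 1j * Z0 * t) / (Z0 + 1j * ZL * t)

def stepped_line_zin(ZL, segments, frequency, velocity):
    """Cascade (Z0_i, length_i) segments, ordered from the load end toward the input."""
    Z = complex(ZL)
    k = 2 * math.pi * frequency / velocity         # same velocity assumed in every segment
    for Z0_i, l_i in segments:
        Z = zin_segment(Z, Z0_i, k, l_i)
    return Z

f0, v = 1e9, 2e8
quarter = v / f0 / 4
Zin = stepped_line_zin(100.0, [(84.1, quarter), (59.5, quarter)], f0, v)
print(f"Zin at f0 = {Zin.real:.1f} {Zin.imag:+.2f}j ohm")   # close to 50 ohm
```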
Approximating lumped elements
At higher frequencies, the reactive parasitic effects of real world lumped elements, including inductors and capacitors, limits their usefulness. Therefore, it is sometimes useful to approximate the electrical characteristics of inductors and capacitors with transmission lines at the higher frequencies using Richards' Transformations and then substitute the transmission lines for the lumped elements.
More accurate forms of multimode high frequency inductor modeling with transmission lines exist for advanced designers.
Stub filters
If a short-circuited or open-circuited transmission line is wired in parallel with a line used to transfer signals from point A to point B, then it will function as a filter. The method for making stubs is similar to the method for using Lecher lines for crude frequency measurement, but it is 'working backwards'. One method recommended in the RSGB's radiocommunication handbook is to take an open-circuited length of transmission line wired in parallel with the feeder delivering signals from an aerial. By cutting the free end of the transmission line, a minimum in the strength of the signal observed at a receiver can be found. At this stage the stub filter will reject this frequency and the odd harmonics, but if the free end of the stub is shorted then the stub will become a filter rejecting the even harmonics.
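A quarter-wave stub behaves this way because its input impedance alternates between a near-short and a near-open at successive harmonics; a minimal check (50 Ω stub impedance assumed, lossless):

```python
import math

Z0 = 50.0
print(" n   |Z| open stub   |Z| shorted stub")
for n in range(1, 7):                       # harmonic number of the fundamental
    beta_l = n * math.pi / 2                # stub is a quarter wavelength at n = 1
    t = math.tan(beta_l)
    z_open = abs(-1j * Z0 / t)              # open-circuited stub across the feeder
    z_short = abs(1j * Z0 * t)              # short-circuited stub across the feeder
    print(f"{n:2d}   {z_open:12.3e}   {z_short:12.3e}")
# A near-zero magnitude shorts that harmonic out of the feeder: the open stub
# rejects the fundamental and odd harmonics, the shorted stub the even ones.
```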
Wideband filters can be achieved using multiple stubs. However, this is a somewhat dated technique. Much more compact filters can be made with other methods such as parallel-line resonators.
Pulse generation
Transmission lines are used as pulse generators. By charging the transmission line and then discharging it into a resistive load, a rectangular pulse equal in length to twice the electrical length of the line can be obtained, although with half the voltage. A Blumlein transmission line is a related pulse forming device that overcomes this limitation. These are sometimes used as the pulsed power sources for radar transmitters and other devices.
Sound
The theory of sound wave propagation is very similar mathematically to that of electromagnetic waves, so techniques from transmission line theory are also used to build structures to conduct acoustic waves; and these are called acoustic transmission lines.
See also
Artificial transmission line
Longitudinal electromagnetic wave
Propagation velocity
Radio frequency power transmission
Time domain reflectometer
References
Part of this article was derived from Federal Standard 1037C.
Further reading
External links
Signal cables
Telecommunications engineering
Transmission lines
Distributed element circuits | Transmission line | [
"Engineering"
] | 5,219 | [
"Electrical engineering",
"Electronic engineering",
"Telecommunications engineering",
"Distributed element circuits"
] |
41,812 | https://en.wikipedia.org/wiki/Transmission%20medium | A transmission medium is a system or substance that can mediate the propagation of signals for the purposes of telecommunication. Signals are typically imposed on a wave of some kind suitable for the chosen medium. For example, data can modulate sound, and a transmission medium for sounds may be air, but solids and liquids may also act as the transmission medium. Vacuum or air constitutes a good transmission medium for electromagnetic waves such as light and radio waves. While a material substance is not required for electromagnetic waves to propagate, such waves are usually affected by the transmission media they pass through, for instance, by absorption or reflection or refraction at the interfaces between media. Technical devices can therefore be employed to transmit or guide waves. Thus, an optical fiber or a copper cable is used as transmission media.
Electromagnetic radiation can be transmitted through an optical medium, such as optical fiber, or through twisted pair wires, coaxial cable, or dielectric-slab waveguides. It may also pass through any physical material that is transparent to the specific wavelength, such as water, air, glass, or concrete. Sound is, by definition, the vibration of matter, so it requires a physical medium for transmission, as do other kinds of mechanical waves and heat energy. Historically, science incorporated various aether theories to explain the transmission medium. However, it is now known that electromagnetic waves do not require a physical transmission medium, and so can travel through the vacuum of free space. Regions of the insulative vacuum can become conductive for electrical conduction through the presence of free electrons, holes, or ions.
Optical medium
Telecommunications
A physical medium in data communications is the transmission path over which a signal propagates. Many different types of transmission media are used as communications channel.
In many cases, communication is in the form of electromagnetic waves. With guided transmission media, the waves are guided along a physical path; examples of guided media include phone lines, twisted pair cables, coaxial cables, and optical fibers. Unguided transmission media are methods that allow the transmission of data without the use of physical means to define the path it takes. Examples of this include microwave, radio or infrared. Unguided media provide a means for transmitting electromagnetic waves but do not guide them; examples are propagation through air, vacuum and seawater.
The term direct link is used to refer to the transmission path between two devices in which signals propagate directly from transmitters to receivers with no intermediate devices, other than amplifiers or repeaters used to increase signal strength. This term can apply to both guided and unguided media.
Simplex versus duplex
A signal transmission may be simplex, half-duplex, or full-duplex.
In simplex transmission, signals are transmitted in only one direction; one station is a transmitter and the other is the receiver. In the half-duplex operation, both stations may transmit, but only one at a time. In full-duplex operation, both stations may transmit simultaneously. In the latter case, the medium is carrying signals in both directions at the same time.
Types
In general, a transmission medium can be classified as
linear, if different waves at any particular point in the medium can be superposed;
bounded, if it is finite in extent, otherwise unbounded;
uniform or homogeneous, if its physical properties are unchanged at different points;
isotropic, if its physical properties are the same in different directions.
There are two main types of transmission media:
guided media—waves are guided along a solid medium such as a transmission line;
unguided media—transmission and reception are achieved by means of an antenna.
One of the most common physical media used in networking is copper wire. Copper wire can carry signals over long distances using relatively low amounts of power. The unshielded twisted pair (UTP) is eight strands of copper wire, organized into four pairs.
Guided media
Twisted pair
Twisted pair cabling is a type of wiring in which two conductors of a single circuit are twisted together for the purposes of improving electromagnetic compatibility. Compared to a single conductor or an untwisted balanced pair, a twisted pair reduces electromagnetic radiation from the pair and crosstalk between neighboring pairs and improves rejection of external electromagnetic interference. It was invented by Alexander Graham Bell.
Coaxial cable
Coaxial cable, or coax, is a type of electrical cable that has an inner conductor surrounded by a tubular insulating layer, surrounded by a tubular conducting shield. Many coaxial cables also have an insulating outer sheath or jacket. The term coaxial comes from the inner conductor and the outer shield sharing a geometric axis. Coaxial cable was invented by English physicist, engineer, and mathematician Oliver Heaviside, who patented the design in 1880.
Coaxial cable is a type of transmission line, used to carry high frequency electrical signals with low losses. It is used in such applications as telephone trunk lines, broadband internet networking cables, high-speed computer data busses, carrying cable television signals, and connecting radio transmitters and receivers to their antennas. It differs from other shielded cables because the dimensions of the cable and connectors are controlled to give a precise, constant conductor spacing, which is needed for it to function efficiently as a transmission line.
Optical fiber
Optical fiber, which has emerged as the most commonly used transmission medium for long-distance communications, is a thin strand of glass that guides light along its length. Four major factors favor optical fiber over copper: data rates, distance, installation, and costs. Optical fiber can carry huge amounts of data compared to copper. It can be run for hundreds of miles without the need for signal repeaters, which in turn reduces maintenance costs and improves the reliability of the communication system, because repeaters are a common source of network failures. Glass is lighter than copper, allowing for less need for specialized heavy-lifting equipment when installing long-distance optical fiber. Optical fiber for indoor applications costs approximately a dollar a foot, the same as copper.
Multimode and single mode are two types of commonly used optical fiber. Multimode fiber uses LEDs as the light source and can carry signals over shorter distances, about 2 kilometers. Single mode can carry signals over distances of tens of miles.
An optical fiber is a flexible, transparent fiber made by drawing glass (silica) or plastic to a diameter slightly thicker than that of a human hair. Optical fibers are used most often as a means to transmit light between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data rates) than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss; in addition, fibers are immune to electromagnetic interference, a problem from which metal wires suffer excessively. Fibers are also used for illumination and imaging, and are often wrapped in bundles so they may be used to carry light into, or images out of confined spaces, as in the case of a fiberscope. Specially designed fibers are also used for a variety of other applications, some of them being fiber optic sensors and fiber lasers.
Optical fibers typically include a core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers, while those that support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than about 1,000 metres (3,300 ft).
Being able to join optical fibers with low loss is important in fiber optic communication. This is more complex than joining electrical wire or cable and involves careful cleaving of the fibers, precise alignment of the fiber cores, and the coupling of these aligned cores. For applications that demand a permanent connection a fusion splice is common. In this technique, an electric arc is used to melt the ends of the fibers together. Another common technique is a mechanical splice, where the ends of the fibers are held in contact by mechanical force. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors.
The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics. The term was coined by Indian physicist Narinder Singh Kapany, who is widely acknowledged as the father of fiber optics.
Unguided transmission media
Radio
Radio propagation is the behavior of radio waves as they travel, or are propagated, from one point to another, or into various parts of the atmosphere. As a form of electromagnetic radiation, like light waves, radio waves are affected by the phenomena of reflection, refraction, diffraction, absorption, polarization, and scattering. Understanding the effects of varying conditions on radio propagation has many practical applications, from choosing frequencies for international shortwave broadcasters, to designing reliable mobile telephone systems, to radio navigation, to operation of radar systems.
Different types of propagation are used in practical radio transmission systems. Line-of-sight propagation means radio waves that travel in a straight line from the transmitting antenna to the receiving antenna. Line of sight transmission is used for medium-range radio transmission such as cell phones, cordless phones, walkie-talkies, wireless networks, FM radio and television broadcasting and radar, and satellite communication, such as satellite television. Line-of-sight transmission on the surface of the Earth is limited to the distance to the visual horizon, which depends on the height of transmitting and receiving antennas. It is the only propagation method possible at microwave frequencies and above. At microwave frequencies, moisture in the atmosphere (rain fade) can degrade transmission.
At lower frequencies in the MF, LF, and VLF bands, due to diffraction radio waves can bend over obstacles like hills, and travel beyond the horizon as surface waves which follow the contour of the Earth. These are called ground waves. AM broadcasting stations use ground waves to cover their listening areas. As the frequency gets lower, the attenuation with distance decreases, so very low frequency (VLF) and extremely low frequency (ELF) ground waves can be used to communicate worldwide. VLF and ELF waves can penetrate significant distances through water and earth, and these frequencies are used for mine communication and military communication with submerged submarines.
At medium wave and shortwave frequencies (MF and HF bands) radio waves can refract from a layer of charged particles (ions) high in the atmosphere, called the ionosphere. This means that radio waves transmitted at an angle into the sky can be reflected back to Earth beyond the horizon, at great distances, even transcontinental distances. This is called skywave propagation. It is used by amateur radio operators to talk to other countries and shortwave broadcasting stations that broadcast internationally. Skywave communication is variable, dependent on conditions in the upper atmosphere; it is most reliable at night and in the winter. Due to its unreliability, since the advent of communication satellites in the 1960s, many long-range communication that previously used skywaves now use satellites.
In addition, there are several less common radio propagation mechanisms, such as tropospheric scattering (troposcatter) and near vertical incidence skywave (NVIS) which are used in specialized communication systems.
Digital encoding
Transmission and reception of data is typically performed in four steps, illustrated by the sketch after this list:
At the transmitting end, the data is encoded to a binary representation.
A carrier signal is modulated as specified by the binary representation.
At the receiving end, the carrier signal is demodulated into a binary representation.
The data is decoded from the binary representation.
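A toy baseband sketch of these four steps, using binary phase-shift keying of a sinusoidal carrier; the carrier frequency, sample counts, and function names are arbitrary choices for illustration:

```python
import numpy as np

def transmit(text, fc=8, spb=64):
    """Steps 1-2: encode the data to bits, then modulate a carrier (phase 0 or pi)."""
    bits = np.unpackbits(np.frombuffer(text.encode("ascii"), dtype=np.uint8))
    carrier = np.sin(2 * np.pi * fc * np.arange(spb) / spb)
    return np.concatenate([carrier if b else -carrier for b in bits]), len(bits)

def receive(signal, n_bits, fc=8, spb=64):
    """Steps 3-4: demodulate by correlating with the carrier, then decode the bits."""
    carrier = np.sin(2 * np.pi * fc * np.arange(spb) / spb)
    bits = (signal.reshape(n_bits, spb) @ carrier > 0).astype(np.uint8)
    return np.packbits(bits).tobytes().decode("ascii")

waveform, n = transmit("hi")
print(receive(waveform, n))                 # -> hi
```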
See also
Excitable medium
Luminiferous aether
References
Electromagnetic radiation | Transmission medium | [
"Physics"
] | 2,415 | [
"Electromagnetic radiation",
"Physical phenomena",
"Radiation"
] |
41,813 | https://en.wikipedia.org/wiki/Transmit-after-receive%20time%20delay | In telecommunications, transmit-after-receive time delay is the time interval from removal of RF energy at the local receiver input until the local transmitter is automatically keyed on and the transmitted RF signal amplitude has increased to 90% of its steady-state value. An Exception: High-frequency (HF) transceiver equipment is normally not designed with an interlock between receiver squelch and transmitter on-off key. The transmitter can be keyed on at any time, independent of whether or not a signal is being received at the receiver input.
See also
Attack-time delay
Receive-after-transmit time delay
References
Telecommunications engineering
Radio technology | Transmit-after-receive time delay | [
"Technology",
"Engineering"
] | 131 | [
"Information and communications technology",
"Electrical engineering",
"Telecommunications engineering",
"Radio technology"
] |
41,817 | https://en.wikipedia.org/wiki/Transponder | In telecommunications, a transponder is a device that, upon receiving a signal, emits a different signal in response. The term is a blend of transmitter and responder.
In air navigation or radio frequency identification, a flight transponder is an automated transceiver in an aircraft that emits a coded identifying signal in response to an interrogating received signal.
In a communications satellite, a satellite transponder receives signals over a range of uplink frequencies, usually from a satellite ground station; the transponder amplifies them, and re-transmits them on a different set of downlink frequencies to receivers on Earth, often without changing the content of the received signal or signals.
Satellite/broadcast communications
A communications satellite’s channels are called transponders because each is a separate transceiver or repeater. With digital video data compression and multiplexing, several video and audio channels may travel through a single transponder on a single wideband carrier. Original analog video only has one channel per transponder, with subcarriers for audio and automatic transmission identification service (ATIS). Non-multiplexed radio stations can also travel in single channel per carrier (SCPC) mode, with multiple carriers (analog or digital) per transponder. This allows each station to transmit directly to the satellite, rather than paying for a whole transponder, or using landlines to send it to an earth station for multiplexing with other stations.
Optical communications
In fiber-optic communications, a transponder is the element that sends and receives the optical signal from a fiber. A transponder is typically characterized by its data rate and the maximum distance the signal can travel.
The term "transponder" can apply to different items with important functional differences, mentioned across academic and commercial literature:
according to one description, a transponder and transceiver are both functionally similar devices that convert a full-duplex electrical signal into a full-duplex optical signal. The difference between the two is that transceivers interface electrically with the host system using a serial interface, whereas transponders use a parallel interface to do so. In this view, transponders provide easier-to-handle lower-rate parallel signals, but are bulkier and consume more power than transceivers.
according to another description, transceivers are limited to providing an electrical-optical function only (not differentiating between serial or parallel electrical interfaces), whereas transponders convert an optical signal at one wavelength to an optical signal at another wavelength (typically ITU standardized for DWDM communication). As such, transponders can be considered as two transceivers placed back-to-back. This view also seems to be held by, for example, Fujitsu.
As a result, differences in transponder functionality also might influence the functional description of related optical modules like transceivers and muxponders.
Aviation
Another type of transponder occurs in identification friend or foe (IFF) systems in military aviation and in air traffic control secondary surveillance radar (beacon radar) systems for general aviation and commercial aviation.
Primary radar works best with large all-metal aircraft, but not so well on small, composite aircraft. Its range is also limited by terrain and rain or snow and also detects unwanted objects such as automobiles, hills and trees. Furthermore, it cannot always estimate the altitude of an aircraft. Secondary radar overcomes these limitations but it depends on a transponder in the aircraft to respond to interrogations from the ground station to make the plane more visible.
Depending on the type of interrogation, the transponder sends back a transponder code (or "squawk code", Mode A) or altitude information (Mode C) to help air traffic controllers to identify the aircraft and to maintain separation between planes. Another mode called Mode S (Mode Select) is designed to help avoiding over-interrogation of the transponder (having many radars in busy areas) and to allow automatic collision avoidance. Mode S transponders are backward compatible with Modes A and C. Mode S is mandatory in controlled airspace in many countries. Some countries have also required, or are moving toward requiring, that all aircraft be equipped with Mode S, even in uncontrolled airspace. However, in the field of general aviation there have been objections to these moves, because of the cost, size, limited benefit to the users in uncontrolled airspace, and, in the case of balloons and gliders, the power requirements during long flights.
Transponders are used on some military aircraft to ensure ground personnel can verify the functionality of a missile’s flight termination system prior to launch. Such radar-enhancing transponders are needed as the enclosed weapon bays on modern aircraft interfere with prelaunch, flight termination system verification performed by range safety personnel during training test launches. The transponders re-radiate the signals allowing for much longer communication distances.
Marine
The International Maritime Organization's International Convention for the Safety of Life at Sea (SOLAS) requires the Automatic Identification System (AIS) to be fitted aboard international voyaging ships of 300 gross tonnage or more, and all passenger ships regardless of size. AIS transmitters/receivers are generally called transponders, but they generally transmit autonomously, although coast stations can interrogate class B transponders on smaller vessels for additional information. In addition, navigational aids often have transponders called RACON (radar beacons) designed to make them stand out on a ship's radar screen.
Sonar transponders operate under water and are used to measure distance and form the basis of underwater location marking, position tracking and navigation.
Other applications
Electronic toll collection
Electronic toll collection systems such as E-ZPass in the eastern United States use RFID transponders to identify vehicles.
Lap timing
Transponders are used in races for lap timing. A cable loop is dug into the race circuit near to the start/finish line. Each individual runner or car has an active transponder with a unique ID code. When the individual passes the start/finish line, the lap time and the racing position is shown on the score board.
Passive and active RFID systems are used in motor sports and off-road events such as enduro and hare and hounds racing, where the riders have a transponder on their person, normally on their arm. When they complete a lap they swipe or touch the receiver, which is connected to a computer, to log their lap time.
NASCAR uses transponders and cable loops placed at numerous points around the track to determine the lineup during a caution period. This system replaced a dangerous race back to the start-finish line.
Car keys
Many modern automobiles have keys with transponders hidden inside the plastic head of the key. The user of the car may not even be aware that the transponder is there, because there are no buttons to press. When a key is inserted into the ignition lock cylinder and turned, the car's computer sends a signal to the transponder. Unless the transponder replies with a valid code, the computer will not allow the engine to be started. Transponder keys have no battery; they are energized by the signal itself.
Gated communities
Transponders may also be used by residents to enter their gated communities.
However, having more than one transponder causes problems. If a resident's car with a simple transponder is parked in the vicinity, any vehicle can come up to the automated gate, trigger the gate's interrogation signal, and get an acceptable response from the resident's car. Properly installed units might therefore involve beamforming, unique transponders for each vehicle, or simply obliging residents to park their vehicles away from the gate.
See also
Acronyms and abbreviations in avionics
Transponder car key
Transceiver
Muxponder
Rebecca/Eureka transponding radar
References
External links
Transponding with DCC - Transponding in model railroading
Communication circuits
Radio electronics
Radar
Motorsport terminology
Radio-frequency identification
Wireless
| Transponder | [
"Engineering"
] | 1,666 | [
"Radio electronics",
"Telecommunications engineering",
"Wireless",
"Radio-frequency identification",
"Communication circuits"
] |
41,820 | https://en.wikipedia.org/wiki/Transverse%20redundancy%20check | In telecommunications, a transverse redundancy check (TRC) or vertical redundancy check is a redundancy check for synchronized parallel bits applied once per bit time, across the bit streams. This requires additional parallel channels for the check bit or bits.
The term usually applies to a single parity bit, although it could also be used to refer to a larger Hamming code.
The adjective "transverse" is most often used when it is used in combination with additional error control coding, such as a longitudinal redundancy check. Although parity alone can only detect and not correct errors, it can be part of a system for correcting errors.
An example of a TRC is the parity written to the 9th track of a 9-track tape.
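To make this concrete, here is a minimal sketch of an even-parity transverse check computed across parallel bit streams once per bit time; the function names and the eight-track example are illustrative only, and real systems (such as 9-track tape) may use odd parity instead.

```python
def add_transverse_parity(frames):
    """Append an even-parity bit to each frame of synchronized parallel bits.

    Each frame is a tuple of bits sampled across the parallel channels at one
    bit time; the appended bit makes the total number of ones in the frame even.
    """
    return [frame + (sum(frame) % 2,) for frame in frames]

def check_transverse_parity(frames):
    """Return True if every frame (including its check bit) has even parity."""
    return all(sum(frame) % 2 == 0 for frame in frames)

# Eight data tracks per bit time, plus a ninth check track (as on 9-track tape).
data = [(1, 0, 1, 1, 0, 0, 1, 0), (0, 0, 0, 1, 1, 1, 0, 1)]
protected = add_transverse_parity(data)
print(protected)                           # each frame now carries 9 bits
print(check_transverse_parity(protected))  # True; flipping any single bit makes it False
```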
References
Error detection and correction | Transverse redundancy check | [
"Engineering"
] | 161 | [
"Error detection and correction",
"Reliability engineering"
] |
41,823 | https://en.wikipedia.org/wiki/Tropospheric%20wave | In telecommunications, a tropospheric wave is a radio wave that travels via reflection in the troposphere. Tropospheric waves are propagated from a place of abrupt change in the dielectric constant, or its gradient. In some cases, a ground wave may be so altered that new components appear to arise from reflection in regions of rapidly changing dielectric constant. When these components are distinguishable from the other components, they are called "tropospheric waves."
References
Radio frequency propagation | Tropospheric wave | [
"Physics",
"Materials_science"
] | 109 | [
"Physical phenomena",
"Materials science stubs",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves",
"Electromagnetism stubs"
] |
41,826 | https://en.wikipedia.org/wiki/Trusted%20computing%20base | The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system that lie outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance with the system's security policy.
The careful design and implementation of a system's trusted computing base is paramount to its overall security. Modern operating systems strive to reduce the size of the TCB so that an exhaustive examination of its code base (by means of manual or computer-assisted software audit or program verification) becomes feasible.
Definition and characterization
The term goes back to John Rushby, who defined it as the combination of operating system kernel and trusted processes. The latter refers to processes which are allowed to violate the system's access-control rules.
In the classic paper Authentication in Distributed Systems: Theory and Practice Lampson et al. define the TCB of a computer system as simply
a small amount of software and hardware that security depends on and that we distinguish from a much larger amount that can misbehave without affecting security.
Both definitions, while clear and convenient, are neither theoretically exact nor intended to be, as e.g. a network server process under a UNIX-like operating system might fall victim to a security breach and compromise an important part of the system's security, yet is not part of the operating system's TCB. The Orange Book, another classic computer security literature reference, therefore provides a more formal definition of the TCB of a computer system, as
the totality of protection mechanisms within it, including hardware, firmware, and software, the combination of which is responsible for enforcing a computer security policy.
In other words, the trusted computing base (TCB) is a combination of hardware, software, and controls that work together to form a trusted base for enforcing a security policy.
The Orange Book further explains that
[t]he ability of a trusted computing base to enforce correctly a unified security policy depends on the correctness of the mechanisms within the trusted computing base, the protection of those mechanisms to ensure their correctness, and the correct input of parameters related to the security policy.
In other words, a given piece of hardware or software is a part of the TCB if and only if it has been designed to be a part of the mechanism that provides its security to the computer system. In operating systems, this typically consists of the kernel (or microkernel) and a select set of system utilities (for example, setuid programs and daemons in UNIX systems). In programming languages designed with built-in security features, such as Java and E, the TCB is formed of the language runtime and standard library.
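As a rough illustration of how the software portion of a TCB can be enumerated on a UNIX-like system, the sketch below walks a directory tree looking for setuid executables owned by root, the kind of system utility mentioned above. The starting directories are arbitrary examples, and this is a sketch, not an authoritative TCB inventory.

```python
import os
import stat

def find_setuid_root(roots=("/usr/bin", "/usr/sbin")):
    """Yield paths of setuid executables owned by root under the given roots."""
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue  # broken symlink or insufficient permission
                # setuid bit set and owned by root (uid 0)
                if st.st_mode & stat.S_ISUID and st.st_uid == 0:
                    yield path

if __name__ == "__main__":
    for path in find_setuid_root():
        print(path)
```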
Properties
Predicated upon the security policy
As a consequence of the above Orange Book definition, the boundaries of the TCB depend closely upon the specifics of how the security policy is fleshed out. In the network server example above, even though, say, a Web server that serves a multi-user application is not part of the operating system's TCB, it has the responsibility of performing access control so that the users cannot usurp the identity and privileges of each other. In this sense, it definitely is part of the TCB of the larger computer system that comprises the UNIX server, the user's browsers and the Web application; in other words, breaching into the Web server through e.g. a buffer overflow may not be regarded as a compromise of the operating system proper, but it certainly constitutes a damaging exploit on the Web application.
This fundamental relativity of the boundary of the TCB is exemplified by the concept of the 'target of evaluation' ('TOE') in the Common Criteria security process: in the course of a Common Criteria security evaluation, one of the first decisions that must be made is the boundary of the audit in terms of the list of system components that will come under scrutiny.
A prerequisite to security
Systems that don't have a trusted computing base as part of their design do not provide security of their own: they are only secure insofar as security is provided to them by external means (e.g. a computer sitting in a locked room without a network connection may be considered secure depending on the policy, regardless of the software it runs). This is because, as David J. Farber et al. put it, [i]n a computer system, the integrity of lower layers is typically treated as axiomatic by higher layers. As far as computer security is concerned, reasoning about the security properties of a computer system requires being able to make sound assumptions about what it can, and more importantly, cannot do; however, barring any reason to believe otherwise, a computer is able to do everything that a general Von Neumann machine can. This obviously includes operations that would be deemed contrary to all but the simplest security policies, such as divulging an email or password that should be kept secret; however, barring special provisions in the architecture of the system, there is no denying that the computer could be programmed to perform these undesirable tasks.
These special provisions that aim at preventing certain kinds of actions from being executed, in essence, constitute the trusted computing base. For this reason, the Orange Book (still a reference on the design of secure operating systems) characterizes the various security assurance levels that it defines mainly in terms of the structure and security features of the TCB.
Software parts of the TCB need to protect themselves
As outlined by the aforementioned Orange Book, software portions of the trusted computing base need to protect themselves against tampering to be of any effect. This is due to the von Neumann architecture implemented by virtually all modern computers: since machine code can be processed as just another kind of data, it can be read and overwritten by any program. This can be prevented by special memory management provisions that subsequently have to be treated as part of the TCB. Specifically, the trusted computing base must at least prevent its own software from being written to.
In many modern CPUs, the protection of the memory that hosts the TCB is achieved by adding in a specialized piece of hardware called the memory management unit (MMU), which is programmable by the operating system to allow and deny a running program's access to specific ranges of the system memory. Of course, the operating system is also able to disallow such programming to the other programs. This technique is called supervisor mode; compared to more crude approaches (such as storing the TCB in ROM, or equivalently, using the Harvard architecture), it has the advantage of allowing security-critical software to be upgraded in the field, although allowing secure upgrades of the trusted computing base poses bootstrap problems of its own.
Trusted vs. trustworthy
As stated above, trust in the trusted computing base is required to make any progress in ascertaining the security of the computer system. In other words, the trusted computing base is “trusted” first and foremost in the sense that it has to be trusted, and not necessarily that it is trustworthy. Real-world operating systems routinely have security-critical bugs discovered in them, which attests to the practical limits of such trust.
The alternative is formal software verification, which uses mathematical proof techniques to show the absence of bugs. Researchers at NICTA and its spinout Open Kernel Labs have recently performed such a formal verification of seL4, a member of the L4 microkernel family, proving functional correctness of the C implementation of the kernel.
This makes seL4 the first operating-system kernel which closes the gap between trust and trustworthiness, assuming the mathematical proof is free from error.
TCB size
Due to the aforementioned need to apply costly techniques such as formal verification or manual review, the size of the TCB has immediate consequences on the economics of the TCB assurance process, and the trustworthiness of the resulting product (in terms of the mathematical expectation of the number of bugs not found during the verification or review). In order to reduce costs and security risks, the TCB should therefore be kept as small as possible. This is a key argument in the debate preferring microkernels to monolithic kernels.
Examples
AIX materializes the trusted computing base as an optional component in its install-time package management system.
See also
Black box
Orange Book
Trust anchor
Hardware security
References
Computer security procedures | Trusted computing base | [
"Engineering"
] | 1,762 | [
"Cybersecurity engineering",
"Computer security procedures"
] |
41,827 | https://en.wikipedia.org/wiki/Turnkey | A turnkey, a turnkey project, or a turnkey operation (also spelled turn-key) is a type of project that is constructed so that it can be sold to any buyer as a completed product. This is contrasted with build to order, where the constructor builds an item to the buyer's exact specifications, or when an incomplete product is sold with the assumption that the buyer would complete it.
A turnkey contract, as described by Duncan Wallace (1984), is typically a construction contract under which a contractor is employed to plan, design and build a project or an infrastructure and do any other necessary development to make it functional or ‘ready to use’ at an agreed price and by a fixed date.
In turnkey contracts, the employer usually provides the primary design, which the contractor must follow.
A turnkey computer system is a complete computer including hardware, operating system and application(s) designed and sold to satisfy specific business requirements.
Common usage
Turnkey refers to something that is ready for immediate use, generally used in the sale or supply of goods or services. The word is a reference to the fact that the customer, upon receiving the product, just needs to turn the ignition key to make it operational, or that the key just needs to be turned over to the customer. Turnkey is commonly used in the construction industry, for instance, in which it refers to the bundling of materials and labour by the home builder or general contractor to complete the home without owner involvement. The word is often used to describe a home built on the developer's land with the developer's financing, ready for the customer to move in. If a contractor builds a "turnkey home", it frames the structure and finishes the interior; everything is completed down to the cabinets and carpet. Turnkey is also commonly used in motorsports to describe a car being sold with a powertrain (engine, transmission, etc.), in contrast with a vehicle sold without one so that other components may be re-used.
Similarly, this term may be used to advertise the sale of an established business, including all the equipment necessary to run it, or by a business-to-business supplier providing complete packages for business start-up. An example would be the creation of a "turnkey hospital", which would mean building a complete medical facility ready for operation.
In manufacturing, the turnkey manufacturing contractor (the business that takes on the turnkey project) normally provides support from the initial design process, machining and tooling, and quality assurance through to production, packaging and delivery. Turnkey manufacturing has advantages in saving production time, providing a single point of contact, cost savings and price certainty, and quality assurance.
Specific usage
The term turnkey is also often used in the technology industry, most commonly to describe pre-built computer "packages" in which everything needed to perform a certain type of task (e.g. audio editing) is put together by the supplier and sold as a bundle. This often includes a computer with pre-installed software, various types of hardware, and accessories. Such packages are commonly called appliances. A website with ready-made solutions and some configurations is called a turnkey website.
In real estate, turnkey is defined as a home or property that is ready for occupation for its intended purpose, i.e., a home that is fully functional, needs no upgrading or repairs (move-in ready). In commercial use, a building set up to do auto repairs would be defined as turnkey if it came fully stocked with all needed machinery and tools for that particular trade. The turnkey process includes all of the steps involved to open a location including the site selection, negotiations, space planning, construction coordination and complete installation. "Turnkey real estate" also refers to a type of investment. This process includes the purchase, construction or rehab (of an existing site), the leasing out to tenants, and then the sale of the property to a buyer. The buyer is purchasing an investment property which is producing a stream of income.
In drilling, the term indicates an arrangement where a contractor must fully complete a well up to some milestone to receive any payment (in exchange for greater compensation upon completion).
See also
Commercial off-the-shelf
Engineering, procurement and construction
Turnkey supplier
Value-added reseller
References
Business law
Facilities engineering
Product management
Software features
Management cybernetics | Turnkey | [
"Technology",
"Engineering"
] | 894 | [
"Building engineering",
"Facilities engineering",
"Mechanical engineering by discipline",
"Software features"
] |
41,828 | https://en.wikipedia.org/wiki/Two-out-of-five%20code | A two-out-of-five code is a constant-weight code that provides exactly ten possible combinations of two bits, and is thus used for representing the decimal digits using five bits. Each bit is assigned a weight, such that the set bits sum to the desired value, with an exception for zero.
According to Federal Standard 1037C:
each decimal digit is represented by a binary numeral consisting of five bits of which two are of one kind, called ones, and three are of the other kind, called zeros, and
the usual weights assigned to the bit positions are 0-1-2-3-6. However, in this scheme, zero is encoded as binary 01100; strictly speaking the 0-1-2-3-6 previously claimed is just a mnemonic device.
The weights give a unique encoding for most digits, but allow two encodings for 3: 0+3 or 10010 and 1+2 or 01100. The former is used to encode the digit 3, and the latter is used to represent the otherwise unrepresentable zero.
The IBM 7070, IBM 7072, and IBM 7074 computers used this code to represent each of the ten decimal digits in a machine word, although they numbered the bit positions 0-1-2-3-4, rather than with weights. Each word also had a sign flag, encoded using a two-out-of-three code, that could be A Alphanumeric, − Minus, or + Plus. When copied to a digit, the three bits were placed in bit positions 0-3-4. (Thus producing the numeric values 3, 6 and 9, respectively.)
A variant is the United States Postal Service POSTNET barcode, used to represent the ZIP Code for automated mail sorting and routing equipment. This uses two tall bars as ones and three short bars as zeros. Here, the weights assigned to the bit positions are 7-4-2-1-0. Again, zero is encoded specially, using the 7+4 combination (binary 11000) that would naturally encode 11. This method was also used in North American telephone multi-frequency and crossbar switching systems.
The USPS Postal Alpha Numeric Encoding Technique (PLANET) uses the same weights, but with the opposite bar-height convention.
The Code 39 barcode uses weights 1-2-4-7-0 (i.e. LSB first, Parity bit last) for the widths of its bars, but it also encodes two bits of extra information in the spacing between bars. The || ||| spacing is used for digits.
The following table shows the decimal digits 0 to 9 in two of the two-out-of-five code systems described above:

Digit   0-1-2-3-6 code   POSTNET (7-4-2-1-0)
  0        01100              11000
  1        11000              00011
  2        10100              00101
  3        10010              00110
  4        01010              01001
  5        00110              01010
  6        10001              01100
  7        01001              10001
  8        00101              10010
  9        00011              10100
The requirement that exactly two bits be set is strictly stronger than a parity check; like all constant-weight codes, a two-out-of-five code can detect not only any single-bit error, but any unidirectional error, that is, any case in which all the individual bit errors are of a single type (all 0→1 or all 1→0).
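A minimal sketch of the 0-1-2-3-6 encoding and the constant-weight validity check described above; the function names are illustrative only.

```python
# Weights for the five bit positions in the 0-1-2-3-6 two-out-of-five code.
WEIGHTS = (0, 1, 2, 3, 6)

def encode_digit(d):
    """Return the five-bit code word (as a string) for a decimal digit 0-9."""
    if d == 0:
        return "01100"  # special case: the 1+2 combination represents zero
    for i in range(5):
        for j in range(i + 1, 5):
            if WEIGHTS[i] + WEIGHTS[j] == d:
                bits = ["0"] * 5
                bits[i] = bits[j] = "1"
                return "".join(bits)
    raise ValueError("digit out of range")

def is_valid(code):
    """A received code word is valid iff exactly two of its five bits are set."""
    return len(code) == 5 and code.count("1") == 2

# Example: encode all digits and check an intact and a corrupted word.
print([encode_digit(d) for d in range(10)])
print(is_valid("10010"), is_valid("10110"))  # True, False
```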
See also
Bi-quinary coded decimal
References
Barcodes
Computer arithmetic
Telephony signals | Two-out-of-five code | [
"Mathematics"
] | 664 | [
"Computer arithmetic",
"Arithmetic"
] |
41,831 | https://en.wikipedia.org/wiki/Telephony | Telephony is the field of technology involving the development, application, and deployment of telecommunications services for the purpose of electronic transmission of voice, fax, or data between distant parties. The history of telephony is intimately linked to the invention and development of the telephone.
Telephony is commonly referred to as the construction or operation of telephones and telephonic systems and as a system of telecommunications in which telephonic equipment is employed in the transmission of speech or other sound between points, with or without the use of wires. The term is also used frequently to refer to computer hardware, software, and computer network systems, that perform functions traditionally performed by telephone equipment. In this context the technology is specifically referred to as Internet telephony, or voice over Internet Protocol (VoIP).
Overview
The first telephones were connected directly in pairs. Each user had a separate telephone wired to each location to be reached. This quickly became inconvenient and unmanageable when users wanted to communicate with more than a few people. The invention of the telephone exchange provided the solution for establishing telephone connections with any other telephone in service in the local area. Each telephone was connected to the exchange at first with one wire, later with one wire pair, the local loop. Nearby exchanges in other service areas were connected with trunk lines, and long-distance service could be established by relaying the calls through multiple exchanges.
Initially, exchange switchboards were manually operated by an attendant, commonly referred to as the "switchboard operator". When a customer cranked a handle on the telephone, it activated an indicator on the board in front of the operator, who would in response plug the operator headset into that jack and offer service. The caller had to ask for the called party by name, later by number, and the operator connected one end of a circuit into the called party jack to alert them. If the called station answered, the operator disconnected their headset and completed the station-to-station circuit. Trunk calls were made with the assistance of other operators at other exchanges in the network.
Until the 1970s, most telephones were permanently wired to the telephone line installed at customer premises. Later, conversion to installation of jacks that terminated the inside wiring permitted simple exchange of telephone sets with telephone plugs and allowed portability of the set to multiple locations in the premises where jacks were installed. The inside wiring to all jacks was connected in one place to the wire drop which connects the building to a cable. Cables usually bring a large number of drop wires from all over a district access network to one wire center or telephone exchange. When a telephone user wants to make a telephone call, equipment at the exchange examines the dialed telephone number and connects that telephone line to another in the same wire center, or to a trunk to a distant exchange. Most of the exchanges in the world are interconnected through a system of larger switching systems, forming the public switched telephone network (PSTN).
In the second half of the 20th century, fax and data became important secondary applications of the network created to carry voices, and late in the century, parts of the network were upgraded with ISDN and DSL to improve handling of such traffic.
Today, telephony uses digital technology (digital telephony) in the provisioning of telephone services and systems. Telephone calls can be provided digitally, but may be restricted to cases in which the last mile is digital, or where the conversion between digital and analog signals takes place inside the telephone. This advancement has reduced costs in communication, and improved the quality of voice services. The first implementation of this, ISDN, permitted all data transport from end-to-end speedily over telephone lines. This service was later made much less important due to the ability to provide digital services based on the Internet protocol suite.
Since the advent of personal computer technology in the 1980s, computer telephony integration (CTI) has progressively provided more sophisticated telephony services, initiated and controlled by the computer, such as making and receiving voice, fax, and data calls with telephone directory services and caller identification. The integration of telephony software and computer systems is a major development in the evolution of office automation. The term is used in describing the computerized services of call centers, such as those that direct a caller to the right department of the business being called. It is also sometimes used to describe the ability of a personal computer to initiate and manage phone calls, in which case the computer acts as a personal call center.
Digital telephony
Digital telephony is the use of digital electronics in the operation and provisioning of telephony systems and services. Since the late 20th century, a digital core network has replaced the traditional analog transmission and signaling systems, and much of the access network has also been digitized.
Starting with the development of transistor technology, originating from Bell Telephone Laboratories in 1947, to amplification and switching circuits in the 1950s, the public switched telephone network (PSTN) has gradually moved towards solid-state electronics and automation. Following the development of computer-based electronic switching systems incorporating metal–oxide–semiconductor (MOS) and pulse-code modulation (PCM) technologies, the PSTN gradually evolved towards the digitization of signaling and audio transmissions. Digital telephony has since dramatically improved the capacity, quality and cost of the network. Digitization also allows wideband voice on the same channel, with improved quality from the wider voice bandwidth.
History
The earliest end-to-end analog telephone networks to be modified and upgraded to transmission networks with Digital Signal 1 (DS1/T1) carrier systems date back to the early 1960s. They were designed to support the basic 3 kHz voice channel by sampling the bandwidth-limited analog voice signal and encoding using pulse-code modulation (PCM). Early PCM codec-filters were implemented as passive resistor–capacitor–inductor filter circuits, with analog-to-digital conversion (for digitizing voices) and digital-to-analog conversion (for reconstructing voices) handled by discrete devices. Early digital telephony was impractical due to the low performance and high costs of early PCM codec-filters.
Practical digital telecommunication was enabled by the invention of the metal–oxide–semiconductor field-effect transistor (MOSFET), which led to the rapid development and wide adoption of PCM digital telephony. In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, a team demonstrated a working MOSFET at Bell Labs in 1960. MOS technology was initially overlooked by Bell because they did not find it practical for analog telephone applications, before it was commercialized by Fairchild and RCA for digital electronics such as computers.
MOS technology eventually became practical for telephone applications with the MOS mixed-signal integrated circuit, which combines analog and digital signal processing on a single chip, developed by former Bell engineer David A. Hodges with Paul R. Gray at UC Berkeley in the early 1970s. In 1974, Hodges and Gray worked with R.E. Suarez to develop MOS switched capacitor (SC) circuit technology, which they used to develop a digital-to-analog converter (DAC) chip, using MOS capacitors and MOSFET switches for data conversion. MOS analog-to-digital converter (ADC) and DAC chips were commercialized by 1974.
MOS SC circuits led to the development of PCM codec-filter chips in the late 1970s. The silicon-gate CMOS (complementary MOS) PCM codec-filter chip, developed by Hodges and W.C. Black in 1980, has since been the industry standard for digital telephony. By the 1990s, telecommunication networks such as the public switched telephone network (PSTN) had been largely digitized with very-large-scale integration (VLSI) CMOS PCM codec-filters, widely used in electronic switching systems for telephone exchanges, private branch exchanges (PBX) and key telephone systems (KTS); user-end modems; data transmission applications such as digital loop carriers, pair gain multiplexers, telephone loop extenders, integrated services digital network (ISDN) terminals, digital cordless telephones and digital cell phones; and applications such as speech recognition equipment, voice data storage, voice mail and digital tapeless answering machines. The bandwidth of digital telecommunication networks has been rapidly increasing at an exponential rate, as observed by Edholm's law, largely driven by the rapid scaling and miniaturization of MOS technology.
Uncompressed PCM digital audio with 8-bit depth and 8 kHz sample rate requires a bit rate of 64 kbit/s, which was impractical for early digital telecommunication networks with limited network bandwidth. A solution to this issue was linear predictive coding (LPC), a speech coding data compression algorithm that was first proposed by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. LPC was capable of audio data compression down to 2.4 kbit/s, leading to the first successful real-time conversations over digital networks in the 1970s. LPC has since been the most widely used speech coding method. Another audio data compression method, a discrete cosine transform (DCT) algorithm called the modified discrete cosine transform (MDCT), has been widely adopted for speech coding in voice-over-IP (VoIP) applications since the late 1990s.
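As a back-of-the-envelope check of the figures above, the following sketch computes the uncompressed PCM bit rate and the approximate compression ratio of a 2.4 kbit/s LPC vocoder; the numbers are simply those quoted in the paragraph.

```python
# Uncompressed PCM telephone audio: 8-bit samples at an 8 kHz sampling rate.
sample_rate_hz = 8_000
bits_per_sample = 8

pcm_rate_bps = sample_rate_hz * bits_per_sample
print(pcm_rate_bps)                 # 64000 bit/s, i.e. 64 kbit/s

# LPC speech coding can reduce this to roughly 2.4 kbit/s.
lpc_rate_bps = 2_400
print(pcm_rate_bps / lpc_rate_bps)  # roughly 27x compression
```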
The development of transmission methods such as SONET and fiber optic transmission further advanced digital transmission. Although analog carrier systems existed that multiplexed multiple analog voice channels onto a single transmission medium, digital transmission allowed lower cost and more channels multiplexed on the transmission medium. Today the end instrument often remains analog but the analog signals are typically converted to digital signals at the serving area interface (SAI), central office (CO), or other aggregation point. Digital loop carriers (DLC) and fiber to the x place the digital network ever closer to the customer premises, relegating the analog local loop to legacy status.
IP telephony
The field of technology available for telephony has broadened with the advent of new communication technologies. Telephony now includes the technologies of Internet services and mobile communication, including video conferencing.
The new technologies based on Internet Protocol (IP) concepts are often referred to separately as voice over IP (VoIP) telephony, also commonly referred to as IP telephony or Internet telephony. Unlike traditional phone service, IP telephony service is relatively unregulated by government. In the United States, the Federal Communications Commission (FCC) regulates phone-to-phone connections, but says they do not plan to regulate connections between a phone user and an IP telephony service provider.
A specialization of digital telephony, Internet Protocol (IP) telephony involves the application of digital networking technology that was the foundation to the Internet to create, transmit, and receive telecommunications sessions over computer networks. Internet telephony is commonly known as voice over Internet Protocol (VoIP), reflecting the principle, but it has been referred to by many other terms. VoIP has proven to be a disruptive technology that is rapidly replacing traditional telephone infrastructure technologies. As of January 2005, up to 10% of telephone subscribers in Japan and South Korea had switched to this digital telephone service. A January 2005 Newsweek article suggested that Internet telephony may be "the next big thing". As of 2006, many VoIP companies offer service to consumers and businesses.
A significant advancement in mobile telephony has been the integration of IP technologies into mobile networks, notably through Voice over LTE (VoLTE) and Voice over 5G (Vo5G). These technologies enable voice calls to be transmitted over the same IP-based infrastructure used for data services, offering improved call quality and faster connections compared to traditional circuit-switched networks. VoLTE and Vo5G are becoming the standard for mobile voice communication in many regions, as mobile operators transition to all-IP networks.
IP telephony uses an Internet connection and hardware IP phones, analog telephone adapters, or softphone computer applications to transmit conversations encoded as data packets. While one of the most common and cost-effective uses of IP telephony is through connections over WiFi hotspots, it is also employed on private networks and over other types of Internet connections, which may or may not have a direct link to the global telephone network.
Social impact research
Direct person-to-person communication includes non-verbal cues expressed in facial and other bodily articulation, that cannot be transmitted in traditional voice telephony. Video telephony restores such interactions to varying degrees. Social Context Cues Theory is a model to measure the success of different types of communication in maintaining the non-verbal cues present in face-to-face interactions. The research examines many different cues, such as the physical context, different facial expressions, body movements, tone of voice, touch and smell.
Various communication cues are lost with the usage of the telephone. The communicating parties are not able to identify the body movements, and lack touch and smell. Although this diminished ability to identify social cues is well known, Wiesenfeld, Raghuram, and Garud point out that there is a value and efficiency to the type of communication for different tasks. They examine work places in which different types of communication, such as the telephone, are more useful than face-to-face interaction.
The expansion of communication to mobile telephone service has created a different filter of the social cues than the land-line telephone. The use of instant messaging, such as texting, on mobile telephones has created a sense of community. In The Social Construction of Mobile Telephony it is suggested that each phone call and text message is more than an attempt to converse. Instead, it is a gesture which maintains the social network between family and friends. Although there is a loss of certain social cues through telephones, mobile phones bring new forms of expression of different cues that are understood by different audiences. New language additives attempt to compensate for the inherent lack of non-physical interaction.
Another social theory supported through telephony is the Media Dependency Theory. This theory concludes that people use media or a resource to attain certain goals. This theory states that there is a link between the media, audience, and the large social system. Telephones, depending on the person, help attain certain goals like accessing information, keeping in contact with others, sending quick communication, entertainment, etc.
See also
Extended area service
History of the telephone
Invention of the telephone
List of telephony terminology
Stimulus protocol
References
History of telecommunications
Telecommunications | Telephony | [
"Technology"
] | 3,058 | [
"Information and communications technology",
"Telecommunications"
] |
41,834 | https://en.wikipedia.org/wiki/Uninterruptible%20power%20supply | An uninterruptible power supply (UPS) or uninterruptible power source is a type of continual power system that provides automated backup electric power to a load when the input power source or mains power fails. A UPS differs from a traditional auxiliary/emergency power system or standby generator in that it will provide near-instantaneous protection from input power interruptions by switching to energy stored in battery packs, supercapacitors or flywheels. The on-battery run-times of most UPSs are relatively short (only a few minutes) but sufficient to "buy time" for initiating a standby power source or properly shutting down the protected equipment. Almost all UPSs also contain integrated surge protection to shield the output appliances from voltage spikes.
A UPS is typically used to protect hardware such as computers, hospital equipment, data centers, telecommunications equipment or other electrical equipment where an unexpected power disruption could cause injuries, fatalities, serious business disruption or data loss. UPS units range in size from ones designed to protect a single computer (around 200 volt-ampere rating) to large units powering entire data centers or buildings.
Common power problems
The primary role of any UPS is to provide short-term power when the input power source fails. However, most UPS units are also capable in varying degrees of correcting common utility power problems:
Voltage spike or sustained overvoltage
Momentary or sustained reduction in input voltage
Voltage sag
Noise, defined as a high frequency transient or oscillation, usually injected into the line by nearby equipment
Instability of the mains frequency
Harmonic distortion, defined as a departure from the ideal sinusoidal waveform expected on the line
Some manufacturers of UPS units categorize their products in accordance with the number of power-related problems they address.
A UPS unit may also introduce problems with electric power quality. To prevent this, a UPS should be selected not only by capacity but also by the quality of power that is required by the equipment that is being supplied.
Technologies
The three general categories of modern UPS systems are on-line, line-interactive and standby:
An online UPS uses a "double conversion" method of accepting AC input, rectifying to DC for passing through the rechargeable battery (or battery strings), then inverting back to 120 V/230 V AC for powering the protected equipment.
A line-interactive UPS maintains the inverter in line and redirects the battery's DC current path from the normal charging mode to supplying current when power is lost.
In a standby ("off-line") system the load is powered directly by the input power and the backup power circuitry is only invoked when the utility power fails.
Most UPSs below one kilovolt-ampere (1 kVA) are of the line-interactive or standby variety, which are usually less expensive.
For large power units, dynamic uninterruptible power supplies (DUPS) are sometimes used. A synchronous motor/alternator is connected on the mains via a choke. Energy is stored in a flywheel. When the mains power fails, an eddy-current regulation maintains the power on the load as long as the flywheel's energy is not exhausted. DUPS are sometimes combined or integrated with a diesel generator that is turned on after a brief delay, forming a diesel rotary uninterruptible power supply (DRUPS).
Offline/standby
The offline/standby UPS offers only the most basic features, providing surge protection and battery backup. The protected equipment is normally connected directly to incoming utility power. When the incoming voltage falls below or rises above a predetermined level the UPS turns on its internal DC-AC inverter circuitry, which is powered from an internal storage battery. The UPS then mechanically switches the connected equipment onto its DC-AC inverter output. The switch-over time can be as long as 25 milliseconds depending on the amount of time it takes the standby UPS to detect the lost utility voltage. The UPS will be designed to power certain equipment, such as a personal computer, without any objectionable dip or brownout to that device.
Line-interactive
The line-interactive UPS is similar in operation to a standby UPS but with the addition of a multi-tap variable-voltage autotransformer. This is a special type of transformer that can add or subtract powered coils of wire, thereby increasing or decreasing the magnetic field and the output voltage of the transformer. This may also be performed by a buck–boost transformer which is distinct from an autotransformer, since the former may be wired to provide galvanic isolation.
This type of UPS is able to tolerate continuous undervoltage brownouts and overvoltage surges without consuming the limited reserve battery power. It instead compensates by automatically selecting different power taps on the autotransformer. Depending on the design, changing the autotransformer tap can cause a very brief output power disruption, which may cause UPSs equipped with a power-loss alarm to "chirp" for a moment.
This has become popular even in the cheapest UPSes because it takes advantage of components already included. The main 50/60 Hz transformer used to convert between line voltage and battery voltage needs to provide two slightly different turns ratios: One to convert the battery output voltage (typically a multiple of 12 V) to line voltage, and a second one to convert the line voltage to a slightly higher battery charging voltage (such as a multiple of 14 V). The difference between the two voltages is because charging a battery requires a delta voltage (up to 13–14 V for charging a 12 V battery). Furthermore, it is easier to do the switching on the line-voltage side of the transformer because of the lower currents on that side.
To gain the buck/boost feature, all that is required is two separate switches so that the AC input can be connected to one of the two primary taps, while the load is connected to the other, thus using the main transformer's primary windings as an autotransformer. The battery can still be charged while "bucking" an overvoltage, but while "boosting" an undervoltage, the transformer output is too low to charge the batteries.
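A small sketch of the two turns ratios described above, assuming an illustrative 24 V battery string (2 × 12 V), a 120 V line, and a charging tap of about 14 V per 12 V battery; all values are assumptions for illustration, not figures from any particular UPS.

```python
line_voltage = 120.0     # nominal AC line/output voltage
battery_voltage = 24.0   # nominal battery string voltage (2 x 12 V)
charge_voltage = 28.0    # charging voltage (about 14 V per 12 V battery)

# Turns ratio needed to step the battery voltage up to the line voltage (inverting).
invert_ratio = line_voltage / battery_voltage   # 5.0

# Slightly different turns ratio needed to step the line voltage down to the charging voltage.
charge_ratio = line_voltage / charge_voltage    # about 4.3

print(invert_ratio, charge_ratio)
```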
Autotransformers can be engineered to cover a wide range of varying input voltages, but this requires more taps and increases complexity, as well as the expense of the UPS. It is common for the autotransformer to cover a range only from about 90 V to 140 V for 120 V power, and then switch to battery if the voltage goes much higher or lower than that range.
In low-voltage conditions the UPS will use more current than normal, so it may need a higher current circuit than a normal device. For example, to power a 1000 W device at 120 V, the UPS will draw 8.33 A. If a brownout occurs and the voltage drops to 100 V, the UPS will draw 10 A to compensate. This also works in reverse, so that in an overvoltage condition, the UPS will need less current.
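The current figures in the example above follow directly from the power relation I = P / V; a minimal sketch:

```python
def input_current(load_watts, input_volts):
    """Current a line-interactive UPS must draw to deliver load_watts at input_volts."""
    return load_watts / input_volts

print(round(input_current(1000, 120), 2))  # 8.33 A at nominal voltage
print(round(input_current(1000, 100), 2))  # 10.0 A during a 100 V brownout
```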
Online/double-conversion
In an online UPS, the batteries are always connected to the inverter, so that no power transfer switches are necessary. When power loss occurs, the rectifier simply drops out of the circuit and the batteries keep the power steady and unchanged. When power is restored, the rectifier resumes carrying most of the load and begins charging the batteries, though the charging current may be limited to prevent the high-power rectifier from damaging the batteries. The main advantage of an online UPS is its ability to provide an "electrical firewall" between the incoming utility power and sensitive electronic equipment.
The online UPS is ideal for environments where electrical isolation is necessary or for equipment that is very sensitive to power fluctuations. Although it was at one time reserved for very large installations of 10 kW or more, advances in technology have now permitted it to be available as a common consumer device, supplying 500 W or less. The online UPS may be necessary when the power environment is "noisy", when utility power sags, outages and other anomalies are frequent, when protection of sensitive IT equipment loads is required, or when operation from an extended-run backup generator is necessary.
The basic technology of the online UPS is the same as in a standby or line-interactive UPS. However, it typically costs much more, due to it having a much greater current AC-to-DC battery-charger/rectifier, and with the rectifier and inverter designed to run continuously with improved cooling systems. It is called a double-conversion UPS due to the rectifier directly driving the inverter, even when powered from normal AC current.
An online UPS typically has a static transfer switch (STS) to increase reliability.
Other designs
Hybrid topology/double conversion on demand
These hybrid rotary UPS designs do not have official designations, although one name used by UTL is "double conversion on demand". This style of UPS is targeted towards high-efficiency applications while still maintaining the features and protection level offered by double conversion.
A hybrid (double conversion on demand) UPS operates as an off-line/standby UPS when power conditions are within a certain preset window. This allows the UPS to achieve very high efficiency ratings. When the power conditions fluctuate outside of the predefined windows, the UPS switches to online/double-conversion operation. In double-conversion mode the UPS can adjust for voltage variations without having to use battery power, can filter out line noise and control frequency.
Ferroresonant
Ferroresonant units operate in the same way as a standby UPS unit; however, they are online with the exception that a ferroresonant transformer is used to filter the output. This transformer is designed to hold energy long enough to cover the time between switching from line power to battery power and effectively eliminates the transfer time. Many ferroresonant UPSs are 82–88% efficient (AC/DC-AC) and offer excellent isolation.
The transformer has three windings, one for ordinary mains power, the second for rectified battery power, and the third for output AC power to the load.
This once was the dominant type of UPS, and its use is limited to lower power ranges. These units are still mainly used in some industrial settings (oil and gas, petrochemical, chemical, utility, and heavy industry markets) due to the robust nature of the UPS. Many ferroresonant UPSs utilizing controlled ferro technology may interact with power-factor-correcting equipment. This will result in fluctuating output voltage of the UPS, but may be corrected by reducing the load levels or adding other linear type loads.
DC power
A UPS designed for powering DC equipment is very similar to an online UPS, except that it does not need an output inverter. Also, if the UPS's battery voltage is matched with the voltage the device needs, the device's power supply will not be needed either. Since one or more power conversion steps are eliminated, this increases efficiency and run time.
Many systems used in telecommunications use an extra-low voltage "common battery" 48 V DC power, because it is subject to less restrictive safety regulations, such as not requiring installation in conduit and junction boxes. DC has typically been the dominant power source for telecommunications, and AC has typically been the dominant source for computers and servers.
There has been much experimentation with 48 V DC power for computer servers, in the hope of reducing the likelihood of failure and the cost of equipment. However, to supply the same amount of power, the current would be higher than an equivalent 115 V or 230 V circuit; greater current requires larger conductors or more energy lost as heat.
High voltage DC (380 V) is finding use in some data center applications and allows for small power conductors, but is subject to the more complex electrical code rules for safe containment of high voltages.
For lower power devices that run on 5 V, some portable battery banks can work as a UPS.
Rotary
A rotary UPS uses the inertia of a high-mass spinning flywheel (flywheel energy storage) to provide short-term ride-through in the event of power loss. The flywheel also acts as a buffer against power spikes and sags, since such short-term power events are not able to appreciably affect the rotational speed of the high-mass flywheel. It is also one of the oldest designs, predating vacuum tubes and integrated circuits.
It can be considered to be online since it spins continuously under normal conditions. However, unlike a battery-based UPS, flywheel-based UPS systems typically provide 10 to 20 seconds of protection before the flywheel has slowed and power output stops. It is traditionally used in conjunction with standby generators, providing backup power only for the brief period of time the engine needs to start running and stabilize its output.
The rotary UPS is generally reserved for applications needing more than 10,000 W of protection, to justify the expense and benefit from the advantages rotary UPS systems bring. A larger flywheel or multiple flywheels operating in parallel will increase the reserve running time or capacity.
Because the flywheels are a mechanical power source, it is not necessary to use an electric motor or generator as an intermediary between it and a diesel engine designed to provide emergency power. By using a transmission gearbox, the rotational inertia of the flywheel can be used to directly start up a diesel engine, and once running, the diesel engine can be used to directly spin the flywheel. Multiple flywheels can likewise be connected in parallel through mechanical countershafts, without the need for separate motors and generators for each flywheel.
They are normally designed to provide very high current output compared to a purely electronic UPS, and are better able to provide inrush current for inductive loads such as motor startup or compressor loads, as well as medical MRI and cath lab equipment. It is also able to tolerate short-circuit conditions up to 17 times larger than an electronic UPS, permitting one device to blow a fuse and fail while other devices still continue to be powered from the rotary UPS.
Its life cycle is usually far greater than a purely electronic UPS, up to 30 years or more. But they do require periodic downtime for mechanical maintenance, such as ball bearing replacement. In larger systems, redundancy of the system ensures the availability of processes during this maintenance. Battery-based designs do not require downtime if the batteries can be hot-swapped, which is usually the case for larger units. Newer rotary units use technologies such as magnetic bearings and air-evacuated enclosures to increase standby efficiency and reduce maintenance to very low levels.
Typically, the high-mass flywheel is used in conjunction with a motor-generator system. These units can be configured as:
A motor driving a mechanically connected generator,
A combined synchronous motor and generator wound in alternating slots of a single rotor and stator,
A hybrid rotary UPS, designed similar to an online UPS, except that it uses the flywheel in place of batteries. The rectifier drives a motor to spin the flywheel, while a generator uses the flywheel to power the inverter.
In case No. 3, the motor generator can be synchronous/synchronous or induction/synchronous. The motor side of the unit in case Nos. 2 and 3 can be driven directly by an AC power source (typically when in inverter bypass), a 6-step double-conversion motor drive, or a 6-pulse inverter. Case No. 1 uses an integrated flywheel as a short-term energy source instead of batteries to allow time for external, electrically coupled gensets to start and be brought online. Case Nos. 2 and 3 can use batteries or a free-standing electrically coupled flywheel as the short-term energy source.
Form factors
Smaller UPS systems come in several different forms and sizes. However, the two most common forms are tower and rack-mount.
Tower models stand upright on the ground or on a desk or shelf, and are typically used in network workstations or desktop computer applications. Rack-mount models can be mounted in standard 19-inch rack enclosures and can require anywhere from 1U to 12U (rack units). They are typically used in server and networking applications. Some devices feature user interfaces that rotate 90°, allowing the devices to be mounted vertically on the ground or horizontally as would be found in a rack.
Applications
N + 1
In large business environments where reliability is of great importance, a single huge UPS can also be a single point of failure that can disrupt many other systems. To provide greater reliability, multiple smaller UPS modules and batteries can be integrated together to provide redundant power protection equivalent to one very large UPS. "N + 1" means that if the load can be supplied by N modules, the installation will contain N + 1 modules. In this way, failure of one module will not impact system operation.
Multiple redundancy
Many computer servers offer the option of redundant power supplies, so that in the event of one power supply failing, one or more other power supplies are able to power the load. This is a critical point – each power supply must be able to power the entire server by itself.
Redundancy is further enhanced by plugging each power supply into a different circuit (i.e. to a different circuit breaker).
Redundant protection can be extended further yet by connecting each power supply to its own UPS. This provides double protection from both a power supply failure and a UPS failure, so that continued operation is assured. This configuration is also referred to as 1 + 1 or 2N redundancy. If the budget does not allow for two identical UPS units then it is common practice to plug one power supply into mains power and the other into the UPS.
Outdoor use
When a UPS system is placed outdoors, it should have some specific features that guarantee that it can tolerate weather without any effects on performance. Factors such as temperature, humidity, rain, and snow among others should be considered by the manufacturer when designing an outdoor UPS system. Operating temperature ranges for outdoor UPS systems could be around −40 °C to +55 °C.
Outdoor UPS systems can either be pole, ground (pedestal), or host mounted. Outdoor environment could mean extreme cold, in which case the outdoor UPS system should include a battery heater mat, or extreme heat, in which case the outdoor UPS system should include a fan system or an air conditioning system.
A solar inverter, or PV inverter, or solar converter, converts the variable direct current (DC) output of a photovoltaic (PV) solar panel into a utility frequency alternating current (AC) that can be fed into a commercial electrical grid or used by a local, off-grid electrical network. It is a critical BOS–component in a photovoltaic system, allowing the use of ordinary AC-powered equipment. Solar inverters have special functions adapted for use with photovoltaic arrays, including maximum power point tracking and anti-islanding protection.
Harmonic distortion
The output of some electronic UPSes can have a significant departure from an ideal sinusoidal waveform. This is especially true of inexpensive consumer-grade single-phase units designed for home and office use. These often utilize simple switching AC power supplies and the output resembles a square wave rich in harmonics. These harmonics can cause interference with other electronic devices including radio communication, and some devices (e.g. inductive loads such as AC motors) may perform with reduced efficiency or not at all. More sophisticated (and expensive) UPS units can produce nearly pure sinusoidal AC power.
Power factor
A problem in the combination of a double-conversion UPS and a generator is the voltage distortion created by the UPS. The input of a double-conversion UPS is essentially a big rectifier. The current drawn by the UPS is non-sinusoidal. This can cause the voltage from the AC mains or a generator to also become non-sinusoidal. The voltage distortion then can cause problems in all electrical equipment connected to that power source, including the UPS itself. It will also cause more power to be lost in the wiring supplying power to the UPS due to the spikes in current flow. This level of "noise" is measured as a percentage of "total harmonic distortion of the current" (THDI). Classic UPS rectifiers have a THDI level of around 25%–30%. To reduce voltage distortion, this requires heavier mains wiring or generators more than twice as large as the UPS.
There are several solutions to reduce the THDI in a double-conversion UPS:
Classic solutions such as passive filters reduce THDI to 5%–10% at full load. They are reliable, but big and only work at full load, and present their own problems when used in tandem with generators.
An alternative solution is an active filter. Through the use of such a device, THDI can drop to 5% over the full power range. The newest technology in double-conversion UPS units is a rectifier that does not use classic rectifier components (thyristors and diodes) but uses high-frequency components instead. A double-conversion UPS with an insulated-gate bipolar transistor rectifier and inductor can have a THDI as small as 2%. This completely eliminates the need to oversize the generator (and transformers), without additional filters, investment cost, losses, or space.
Communication
Power management (PM) requires:
The UPS to report its status to the computer it powers via a communications link such as a serial port, Ethernet and Simple Network Management Protocol, GSM/GPRS or USB
A subsystem in the OS that processes the reports and generates notifications, PM events, or commands an ordered shut down. Some UPS manufacturers publish their communication protocols, but other manufacturers (such as APC) use proprietary protocols.
The basic computer-to-UPS control methods are intended for one-to-one signaling from a single source to a single target. For example, a single UPS may connect to a single computer to provide status information about the UPS, and allow the computer to control the UPS. Similarly, the USB protocol is also intended to connect a single computer to multiple peripheral devices.
In some situations, it is useful for a single large UPS to be able to communicate with several protected devices. For traditional serial or USB control, a signal replication device may be used, which for example allows one UPS to connect to five computers using serial or USB connections. However, the splitting is typically only one direction from UPS to the devices to provide status information. Return control signals may only be permitted from one of the protected systems to the UPS.
As Ethernet has increased in common use since the 1990s, control signals are now commonly sent between a single UPS and multiple computers using standard Ethernet data communication methods such as TCP/IP. The status and control information is typically encrypted so that, for example, an outside hacker can not gain control of the UPS and command it to shut down.
Distribution of UPS status and control data requires that all intermediary devices such as Ethernet switches or serial multiplexers be powered by one or more UPS systems, in order for the UPS alerts to reach the target systems during a power outage. To avoid the dependency on Ethernet infrastructure, the UPSs can be connected directly to the main control server by using a GSM/GPRS channel also. The SMS or GPRS data packets sent from UPSs trigger software to shut down the PCs to reduce the load.
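As an illustration of the shutdown side of such a link, the sketch below polls a hypothetical status file written by a UPS driver and triggers an orderly operating-system shutdown once the UPS is on battery with a low charge; the file path, field names and threshold are invented for the example and do not correspond to any particular vendor protocol.

```python
import subprocess
import time

STATUS_FILE = "/var/run/ups/status"   # hypothetical file maintained by a UPS driver
LOW_BATTERY_PERCENT = 20

def read_status():
    """Parse hypothetical 'state=on_battery charge=18' style fields from the driver."""
    fields = {}
    with open(STATUS_FILE) as f:
        for token in f.read().split():
            key, _, value = token.partition("=")
            fields[key] = value
    return fields.get("state"), int(fields.get("charge", "100"))

def main():
    while True:
        state, charge = read_status()
        if state == "on_battery" and charge <= LOW_BATTERY_PERCENT:
            # Initiate an orderly shutdown before the battery is exhausted.
            subprocess.run(["shutdown", "-h", "now"], check=False)
            return
        time.sleep(30)

if __name__ == "__main__":
    main()
```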
Batteries
There are three main types of UPS batteries: Valve Regulated Lead Acid (VRLA), Flooded Cell or VLA batteries, and lithium-ion batteries.
The run-time for a battery-operated UPS depends on the type and size of the batteries, the rate of discharge, and the efficiency of the inverter. The total capacity of a lead–acid battery is a function of the rate at which it is discharged, a relationship described by Peukert's law.
Manufacturers supply run-time rating in minutes for packaged UPS systems. Larger systems (such as for data centers) require detailed calculation of the load, inverter efficiency, and battery characteristics to ensure the required endurance is attained.
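A rough runtime estimate based on Peukert's law, t = H·(C/(I·H))^k, is sketched below; the battery parameters (a 100 Ah capacity at the 20-hour rate, a Peukert exponent of about 1.2 typical of lead–acid cells) and the 25 A load current are illustrative assumptions rather than figures for any particular UPS.

```python
def peukert_runtime_hours(capacity_ah, rated_hours, current_a, k):
    """Estimated runtime t = H * (C / (I * H)) ** k from Peukert's law."""
    return rated_hours * (capacity_ah / (current_a * rated_hours)) ** k

# Illustrative 100 Ah (20-hour rate) lead-acid battery feeding an inverter
# that draws 25 A from the battery at the given load.
hours = peukert_runtime_hours(capacity_ah=100.0, rated_hours=20.0, current_a=25.0, k=1.2)
print(f"Estimated runtime: {hours:.1f} h")   # about 2.9 h rather than the naive 100/25 = 4 h
```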
Common battery characteristics and load testing
When a lead–acid battery is charged or discharged, this initially affects only the reacting chemicals, which are at the interface between the electrodes and the electrolyte. With time, the charge stored in the chemicals at the interface, often called "interface charge", spreads by diffusion of these chemicals throughout the volume of the active material.
If a battery has been completely discharged (e.g. the car lights were left on overnight) and is then given a fast charge for only a few minutes, then during the short charging time it develops a charge only near the interface. The battery voltage may rise to be close to the charger voltage so that the charging current decreases significantly. After a few hours this interface charge will spread through the volume of the electrode and electrolyte, leaving an interface charge so low that it may be insufficient to start a car.
Due to the interface charge, brief UPS self-test functions lasting only a few seconds may not accurately reflect the true runtime capacity of a UPS, and instead an extended recalibration or rundown test that deeply discharges the battery is needed.
The deep discharge testing is itself damaging to batteries due to the chemicals in the discharged battery starting to crystallize into highly stable molecular shapes that will not re-dissolve when the battery is recharged, permanently reducing charge capacity. In lead-acid batteries, this is known as sulfation, but deep-discharge damage also affects other types such as nickel-cadmium batteries and lithium batteries. Therefore, it is commonly recommended that rundown tests be performed infrequently, such as every six months to a year.
Testing of strings of batteries/cells
Multi-kilowatt commercial UPS systems with large and easily accessible battery banks are capable of isolating and testing individual cells within a battery string, which consists of either combined-cell battery units (such as 12 V lead acid batteries) or individual chemical cells wired in series. Isolating a single cell and installing a jumper in place of it allows the one battery to be discharge-tested, while the rest of the battery string remains charged and available to provide protection.
It is also possible to measure the electrical characteristics of individual cells in a battery string, using intermediate sensor wires that are installed at every cell-to-cell junction, and monitored both individually and collectively. Battery strings may also be wired as series-parallel, for example, two sets of 20 cells. In such a situation it is also necessary to monitor current flow between parallel strings, as current may circulate between the strings to balance out the effects of weak cells, dead cells with high resistance, or shorted cells. For example, stronger strings can discharge through weaker strings until voltage imbalances are equalized, and this must be factored into the individual inter-cell measurements within each string.
Series-parallel battery interactions
Battery strings wired in series-parallel can develop unusual failure modes due to interactions between the multiple parallel strings. Defective batteries in one string can adversely affect the operation and lifespan of good or new batteries in other strings. These issues also apply to other situations where series-parallel strings are used, not just in UPS systems but also in electric vehicle applications.
Consider a series-parallel battery arrangement with all good cells, and one becomes shorted or dead:
The failed cell will reduce the maximum developed voltage for the entire series string it is within.
Other series strings wired in parallel with the degraded string will now discharge through the degraded string until their voltage matches the voltage of the degraded string, potentially overcharging and leading to electrolyte boiling and outgassing from the remaining good cells in the degraded string. These parallel strings can now never be fully recharged, as the increased voltage will bleed off through the string containing the failed battery.
Charging systems may attempt to gauge battery string capacity by measuring overall voltage. Because the dead cells depress the overall string voltage, the charging system may detect this as a state of discharge, and will continuously attempt to charge the series-parallel strings, which leads to continuous overcharging and damage to all the cells in the degraded series string containing the damaged battery.
If lead-acid batteries are used, all cells in the formerly good parallel strings will begin to sulfate due to the inability for them to be fully recharged, resulting in the storage capacity of these cells being permanently damaged, even if the damaged cell in the one degraded string is eventually discovered and replaced with a new one.
The only way to prevent these subtle series-parallel string interactions is by not using parallel strings at all and using separate charge controllers and inverters for individual series strings.
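The circulating-current effect described above can be illustrated with simple arithmetic; the string parameters below (two parallel strings of twenty 2 V cells and an assumed per-string internal resistance) are hypothetical and chosen only to show the order of magnitude involved.

```python
# Two parallel strings of twenty nominal 2 V lead-acid cells (40 V per healthy string).
cells_per_string = 20
cell_voltage = 2.0
string_resistance = 0.10   # ohms, assumed total internal resistance of one string

healthy_string_v = cells_per_string * cell_voltage          # 40.0 V
degraded_string_v = (cells_per_string - 1) * cell_voltage   # 38.0 V with one shorted cell

# The healthy string discharges into the degraded one through both internal resistances.
circulating_current = (healthy_string_v - degraded_string_v) / (2 * string_resistance)
print(f"Circulating current of roughly {circulating_current:.0f} A between the strings")
```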
Series new/old battery interactions
Even just a single string of batteries wired in series can have adverse interactions if new batteries are mixed with old batteries. Older batteries tend to have reduced storage capacity, and so will both discharge faster than new batteries and also charge to their maximum capacity more rapidly than new batteries.
As a mixed string of new and old batteries is depleted, the string voltage will drop, and when the old batteries are exhausted the new batteries still have charge available. The newer cells may continue to discharge through the rest of the string, but due to the low voltage this energy flow may not be useful and may be wasted in the old cells as resistance heating.
For cells that are supposed to operate within a specific discharge window, new cells with more capacity may cause the old cells in the series string to continue to discharge beyond the safe bottom limit of the discharge window, damaging the old cells.
When recharged, the old cells recharge more rapidly, leading to a rapid rise of voltage to near the fully charged state, but before the new cells with more capacity have fully recharged. The charge controller detects the high voltage of a nearly fully charged string and reduces current flow. The new cells with more capacity now charge very slowly, so slowly that the chemicals may begin to crystallize before reaching the fully charged state, reducing new cell capacity over several charge/discharge cycles until their capacity more closely matches the old cells in the series string.
For such reasons, some industrial UPS management systems recommend periodic replacement of entire battery arrays potentially using hundreds of expensive batteries, due to these damaging interactions between new batteries and old batteries, within and across series and parallel strings.
Standards
IEC 62040-1:2017 Uninterruptible power systems (UPS) – Part 1: General and safety requirements for UPS
IEC 62040-2:2016 Uninterruptible power systems (UPS) – Part 2: Electromagnetic compatibility (EMC) requirements
IEC 62040-3:2021 Uninterruptible power systems (UPS) – Part 3: Method of specifying the performance and test requirements
IEC 62040-4:2013 Uninterruptible power systems (UPS) – Part 4: Environmental aspects – Requirements and reporting
See also
Battery room
Emergency power system
Fuel cell applications
IT baseline protection
Power conditioner
Dynamic voltage restoration
Net metering system with energy storage
Surge protector
Switched-mode power supply (SMPS)
Switched-mode power supply applications
Emergency light
References
External links
Electric power systems components
Fault tolerance
Voltage stability | Uninterruptible power supply | [
"Physics",
"Engineering"
] | 6,500 | [
"Physical quantities",
"Reliability engineering",
"Fault tolerance",
"Voltage",
"Voltage stability"
] |
41,835 | https://en.wikipedia.org/wiki/Universal%20Time | Universal Time (UT or UT1) is a time standard based on Earth's rotation. While originally it was mean solar time at 0° longitude, precise measurements of the Sun are difficult. Therefore, UT1 is computed from a measure of the Earth's angle with respect to the International Celestial Reference Frame (ICRF), called the Earth Rotation Angle (ERA, which serves as the replacement for Greenwich Mean Sidereal Time). UT1 is the same everywhere on Earth. UT1 is required to follow the relationship
ERA = 2π(0.7790572732640 + 1.00273781191135448 · Tu) radians
where Tu = (Julian UT1 date − 2451545.0).
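As a worked example, the short sketch below evaluates the ERA relationship above for a given Julian UT1 date; the sample date is arbitrary and the angle is reduced modulo 2π for readability.

```python
import math

def earth_rotation_angle(jd_ut1):
    """ERA = 2*pi*(0.7790572732640 + 1.00273781191135448 * Tu), Tu = JD(UT1) - 2451545.0."""
    tu = jd_ut1 - 2451545.0
    era = 2.0 * math.pi * (0.7790572732640 + 1.00273781191135448 * tu)
    return era % (2.0 * math.pi)

# Arbitrary example date expressed as a Julian UT1 date.
print(f"ERA = {earth_rotation_angle(2460000.5):.6f} rad")
```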
History
Prior to the introduction of standard time, each municipality throughout the clock-using world set its official clock, if it had one, according to the local position of the Sun (see solar time). This served adequately until the introduction of rail travel in Britain, which made it possible to travel fast enough over sufficiently long distances as to require continuous re-setting of timepieces as a train progressed in its daily run through several towns. Starting in 1847, Britain established Greenwich Mean Time, the mean solar time at Greenwich, England, to solve this problem: all clocks in Great Britain were set to this time regardless of local solar noon. Using telescopes, GMT was calibrated to the mean solar time at the prime meridian through the Royal Observatory, Greenwich. Chronometers or telegraphy were used to synchronize these clocks.
As international commerce increased, the need for an international standard of time measurement emerged. Several authors proposed a "universal" or "cosmic" time. The development of Universal Time began at the International Meridian Conference. At the end of this conference, on 22 October 1884, the recommended base reference for world time, the "universal day", was announced to be the local mean solar time at the Royal Observatory in Greenwich, counted from 0 hours at Greenwich mean midnight. This agreed with the civil Greenwich Mean Time used on the island of Great Britain since 1847. In contrast, astronomical GMT began at mean noon, i.e. astronomical day X began at noon of civil day X. The purpose of this was to keep one night's observations under one date. The civil system was adopted as of 0 hours (civil) 1 January 1925. Nautical GMT began 24 hours before astronomical GMT, at least until 1805 in the Royal Navy, but persisted much later elsewhere because it was mentioned at the 1884 conference. Greenwich was chosen because by 1884 two-thirds of all nautical charts and maps already used it as their prime meridian.
During the period between 1848 and 1972, all of the major countries adopted time zones based on the Greenwich meridian.
In 1928, the term Universal Time (UT) was introduced by the International Astronomical Union to refer to GMT, with the day starting at midnight. The term was recommended as a more precise term than Greenwich Mean Time, because GMT could refer to either an astronomical day starting at noon or a civil day starting at midnight. As the general public had always begun the day at midnight, the timescale continued to be presented to them as Greenwich Mean Time.
When introduced, broadcast time signals were based on UT, and hence on the rotation of the Earth. In 1955 the BIH adopted a proposal by William Markowitz, effective 1 January 1956, dividing UT into UT0 (UT as formerly computed), UT1 (UT0 corrected for polar motion) and UT2 (UT0 corrected for polar motion and seasonal variation). UT1 was the version sufficient for "many astronomical and geodetic applications", while UT2 was to be broadcast over radio to the public.
UT0 and UT2 soon became irrelevant due to the introduction of Coordinated Universal Time (UTC). Starting in 1956, WWV broadcast an atomic clock signal stepped by 20 ms increments to bring it into agreement with UT1. The up to 20 ms error from UT1 is on the same order of magnitude as the differences between UT0, UT1, and UT2. By 1960, the U.S. Naval Observatory, the Royal Greenwich Observatory, and the UK National Physical Laboratory had developed UTC, with a similar stepping approach. The 1960 URSI meeting recommended that all time services should follow the lead of the UK and US and broadcast coordinated time using a frequency offset from cesium aimed to match the predicted progression of UT2 with occasional steps as needed. Starting 1 January 1972, UTC was defined to follow UT1 within 0.9 seconds rather than UT2, marking the decline of UT2.
Modern civil time generally follows UTC. In some countries, the term Greenwich Mean Time persists in common usage to this day in reference to UT1, in civil timekeeping as well as in astronomical almanacs and other references. Whenever a level of accuracy better than one second is not required, UTC can be used as an approximation of UT1. The difference between UT1 and UTC is known as DUT1.
Adoption in various countries
The table shows the dates of adoption of time zones based on the Greenwich meridian, including half-hour zones.
Apart from Nepal Standard Time (UTC+05:45), the Chatham Standard Time Zone (UTC+12:45) used in New Zealand's Chatham Islands and the officially unsanctioned Central Western Time Zone (UTC+8:45) used in Eucla, Western Australia and surrounding areas, all time zones in use are defined by an offset from UTC that is a multiple of half an hour, and in most cases a multiple of an hour.
Measurement
Historically, Universal Time was computed from observing the position of the Sun in the sky. But astronomers found that it was more accurate to measure the rotation of the Earth by observing stars as they crossed the meridian each day. Nowadays, UT in relation to International Atomic Time (TAI) is determined by Very Long Baseline Interferometry (VLBI) observations of the positions of distant celestial objects (stars and quasars), a method which can determine UT1 to within 15 microseconds or better.
The rotation of the Earth and UT are monitored by the International Earth Rotation and Reference Systems Service (IERS). The International Astronomical Union also is involved in setting standards, but the final arbiter of broadcast standards is the International Telecommunication Union or ITU.
The rotation of the Earth is somewhat irregular and also is very gradually slowing due to tidal acceleration. Furthermore, the length of the second was determined from observations of the Moon between 1750 and 1890. All of these factors cause the modern mean solar day, on the average, to be slightly longer than the nominal 86,400 SI seconds, the traditional number of seconds per day. As UT is thus slightly irregular in its rate, astronomers introduced Ephemeris Time, which has since been replaced by Terrestrial Time (TT). Because Universal Time is determined by the Earth's rotation, which drifts away from more precise atomic-frequency standards, an adjustment (called a leap second) to this atomic time is needed so that 'broadcast time' remains broadly synchronised with solar time. Thus, the civil broadcast standard for time and frequency usually follows International Atomic Time closely, but occasionally steps (or "leaps") in order to prevent it from drifting too far from mean solar time.
Barycentric Dynamical Time (TDB), a form of atomic time, is now used in the construction of the ephemerides of the planets and other solar system objects, for two main reasons. First, these ephemerides are tied to optical and radar observations of planetary motion, and the TDB time scale is fitted so that Newton's laws of motion, with corrections for general relativity, are followed. Next, the time scales based on Earth's rotation are not uniform and therefore, are not suitable for predicting the motion of bodies in our solar system.
Alternate versions
UT1 is the principal form of Universal Time. However, there are also several other infrequently used time standards that are referred to as Universal Time, which agree within 0.03 seconds with UT1:
UT0 is Universal Time determined at an observatory by observing the diurnal motion of stars or extragalactic radio sources, and also from ranging observations of the Moon and artificial Earth satellites. The location of the observatory is considered to have fixed coordinates in a terrestrial reference frame (such as the International Terrestrial Reference Frame) but the position of the rotational axis of the Earth wanders over the surface of the Earth; this is known as polar motion. UT0 does not contain any correction for polar motion while UT1 does include them. The difference between UT0 and UT1 is on the order of a few tens of milliseconds. The designation UT0 is no longer in common use.
UT1R is a smoothed version of UT1, filtering out periodic variations due to tides. It includes 62 smoothing terms, with periods ranging from 5.6 days to 18.6 years. UT1R is still in use in the technical literature but rarely used elsewhere.
UT2 is a smoothed version of UT1, filtering out periodic seasonal variations. It is mostly of historic interest and rarely used anymore. It is defined by
UT2 = UT1 + 0.022·sin(2πt) − 0.012·cos(2πt) − 0.006·sin(4πt) + 0.007·cos(4πt) seconds
where t is the time as a fraction of the Besselian year.
See also
Airy mean time on Mars
Earth orientation parameters
List of international common standards
Unix time
Notes
Citations
References
External links
Time Lord by Clark Blaise: a biography of Sanford Fleming and the idea of standard time
Time scales
Time in astronomy | Universal Time | [
"Physics",
"Astronomy"
] | 1,981 | [
"Time in astronomy",
"Physical quantities",
"Time",
"Astronomical coordinate systems",
"Spacetime",
"Time scales"
] |
41,837 | https://en.wikipedia.org/wiki/Telecommunications%20link | In a telecommunications network, a link is a communication channel that connects two or more devices for the purpose of data transmission. The link may be a dedicated physical link or a virtual circuit that uses one or more physical links or shares a physical link with other telecommunications links.
A telecommunications link is generally based on one of several types of information transmission paths such as those provided by communication satellites, terrestrial radio communications infrastructure and computer networks to connect two or more points.
The term link is widely used in computer networking to refer to the communications facilities that connect nodes of a network.
Sometimes the communications facilities that provide the communication channel that constitutes a link are also included in the definition of link.
Types
Point-to-point
A point-to-point link is a dedicated link that connects exactly two communication facilities (e.g., two nodes of a network, an intercom station at an entryway with a single internal intercom station, a radio path between two points, etc.).
Broadcast
Broadcast links connect two or more nodes and support broadcast transmission, where one node can transmit so that all other nodes can receive the same transmission. Classic Ethernet is an example.
Multipoint
Also known as a multidrop link, a multipoint link is a link that connects two or more nodes. Also known as general topology networks, these include ATM and Frame Relay links, as well as X.25 networks when used as links for a network-layer protocol like IP.
Unlike broadcast links, multipoint links have no mechanism to efficiently send a single message to all other nodes without copying and retransmitting the message.
Point-to-multipoint
A point-to-multipoint link (or simply a multipoint) is a specific type of multipoint link which consists of a central connection endpoint (CE) that is connected to multiple peripheral CEs. All of the peripheral CEs receive any transmission of data that originates from the central CE while any transmission of data that originates from any of the peripheral CEs is only received by the central CE.
Private and public
Links are often referred to by terms that refer to the ownership or accessibility of the link.
A private link is a link that is either owned by a specific entity or a link that is only accessible by a particular entity.
A public link is a link that uses the public switched telephone network or other public utility or entity to provide the link and which may also be accessible by anyone.
Direction
Uplink
Pertaining to radiocommunication service, an uplink (UL or U/L) is the portion of a feeder link used for the transmission of signals from an earth station to a space radio station, space radio system or high altitude platform station.
Pertaining to GSM and cellular networks, the radio uplink is the transmission path from the mobile station (cell phone) to a base station (cell site). Traffic and signalling flowing within the BSS and NSS may also be identified as uplink and downlink.
Pertaining to computer networks, an uplink is a connection from data communications equipment toward the network core. This is also known as an upstream connection.
Downlink
Pertaining to radiocommunication service, a downlink (DL or D/L) is the portion of a feeder link used for the transmission of signals from a space radio station, space radio system or high altitude platform station to an earth station.
In the context of satellite communications, a downlink (DL) is the link from a satellite to a ground station.
Pertaining to cellular networks, the radio downlink is the transmission path from a cell site to the cell phone. Traffic and signalling flowing within the base station subsystem (BSS) and network switching subsystem (NSS) may also be identified as uplink and downlink.
Pertaining to computer networks, a downlink is a connection from data communications equipment toward data terminal equipment. This is also known as a downstream connection.
Forward link
A forward link is the link from a fixed location (e.g., a base station) to a mobile user. If the link includes a communications relay satellite, the forward link will consist of both an uplink (base station to satellite) and a downlink (satellite to mobile user).
Reverse link
The reverse link (sometimes called a return channel) is the link from a mobile user to a fixed base station.
If the link includes a communications relay satellite, the reverse link will consist of both an uplink (mobile station to satellite) and a downlink (satellite to base station) which together constitute a half hop.
References
Telecommunications
Telecommunications engineering
Communication circuits
Broadcast engineering
Telecommunications infrastructure | Telecommunications link | [
"Technology",
"Engineering"
] | 930 | [
"Information and communications technology",
"Broadcast engineering",
"Telecommunications engineering",
"Telecommunications",
"Electronic engineering",
"Electrical engineering",
"Communication circuits"
] |
41,845 | https://en.wikipedia.org/wiki/Variable-length%20buffer | In telecommunications, a variable length buffer or elastic buffer is a buffer into which data may be entered at one rate and removed at another rate without changing the data sequence.
Most first-in first-out (FIFO) storage devices are variable-length buffers in that the input rate may vary while the output rate is constant, or the output rate may vary while the input rate is constant. Various clocking and control systems are used to detect and manage underflow and overflow conditions.
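A minimal software analogue of an elastic buffer is sketched below: a fixed-capacity FIFO whose writer and reader may run at different rates, with explicit overflow and underflow signalling. It is only an illustration of the concept, not a model of any particular hardware device.

```python
from collections import deque

class ElasticBuffer:
    """Fixed-capacity FIFO that preserves data order while input and output rates differ."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def write(self, item):
        if len(self.queue) >= self.capacity:
            raise OverflowError("buffer overflow: writer is outpacing the reader")
        self.queue.append(item)

    def read(self):
        if not self.queue:
            raise BufferError("buffer underflow: reader is outpacing the writer")
        return self.queue.popleft()

buf = ElasticBuffer(capacity=4)
for sample in (1, 2, 3):                  # bursty writer
    buf.write(sample)
print([buf.read() for _ in range(3)])     # steady reader drains in order: [1, 2, 3]
```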
See also
Buffer (telecommunication)
Circular buffer
References
Synchronization
Computer memory | Variable-length buffer | [
"Engineering"
] | 116 | [
"Telecommunications engineering",
"Synchronization"
] |
41,849 | https://en.wikipedia.org/wiki/Viewdata | Viewdata is a Videotex implementation. It is a type of information retrieval service in which a subscriber can access a remote database via a common carrier channel, request data and receive requested data on a video display over a separate channel. Samuel Fedida, who had the idea for Viewdata in 1968, was credited as the inventor of the system, which he developed while working for the British Post Office, the operator of the national telephone system. The first prototype became operational in 1974. The access, request and reception are usually via common carrier broadcast channels. This is in contrast with teletext.
Design
Viewdata offered a display of 40×24 characters, based on ISO 646 (IRV IA5) – 7 bits with no accented characters.
Originally, Viewdata was accessed with a special purpose terminal (or emulation software) and a modem running at ITU-T V.23 speed (1,200 bit/s down, 75 bit/s up). By 2004, it was normally accessed over TCP/IP using Viewdata client software on a personal computer running Microsoft Windows, or using a Web-based emulator.
Viewdata uses special symbols already widely available on telephone keypads: the "star" key and the "square" key, as formally standardised by the International Telecommunication Union. These are often treated as approximately corresponding to the ASCII asterisk (*) and number sign (#), which do not necessarily conform to the ITU specifications for the keypad symbols; the asterisk is also usually displayed smaller and raised.
These symbols appear as 'Sextile' and 'Viewdata square' in the Miscellaneous Symbols and Miscellaneous Technical Unicode blocks, respectively. The sextile was added due to its use in astrology, and the square had previously appeared in the BS_Viewdata character set, as a replacement for the underscore.
In 2013, the German national body submitted a Unicode Technical Committee proposal to align the Unicode reference glyphs with the ITU specifications for these symbols, and annotate them as telephone keypad symbols on the code charts. As of Unicode 12.1, these changes had not been accepted or implemented.
Uses
Travel industry
As of 2015, Viewdata was still in use in the United Kingdom, mainly by the travel industry. Travel agents use it to look up the price and availability of package holidays and flights. Once they find what the customer is looking for they can place a booking.
There are a number of factors still holding up a move to a Web-based standard. Viewdata is regarded within the industry as low-cost and reliable, travel consultants have been trained to use Viewdata and would need training to book holidays on the Internet, and tour operators cannot agree on a Web-based standard.
Bulletin board systems
Viewdata systems were introduced in the late 1970s and early 1980s partly to make it easier for travel consultants to check availability and make holiday bookings.
A number of Viewdata bulletin board systems existed in the 1980s, predominantly in the UK due to the proliferation of the BBC Micro, and a short-lived Viewdata Revival appeared in the late 1990s fuelled by the retrocomputing vogue. Some Viewdata boards still exist, with accessibility in the form of Java Telnet clients.
See also
Prestel
References
External links
Definition at The Institute for Telecommunication Sciences
Celebrating the Viewdata Revolution Including several Prestel Brochures
vd-view A Viewtex web client for TeeFax, Telstar, CCl4 and NXTel
vidtex An ncurses/terminal client for TeeFax, Telstar, CCl4, NXTel and others
Computer network technology
History of computing in the United Kingdom
History of telecommunications in the United Kingdom
Legacy systems
Videotex | Viewdata | [
"Technology"
] | 783 | [
"Legacy systems",
"Computer systems",
"History of computing",
"History of computing in the United Kingdom"
] |
41,851 | https://en.wikipedia.org/wiki/Virtual%20circuit | A virtual circuit (VC) is a means of transporting data over a data network, based on packet switching and in which a connection is first established across the network between two endpoints. The network, rather than having a fixed data rate reservation per connection as in circuit switching, takes advantage of the statistical multiplexing on its transmission links, an intrinsic feature of packet switching.
A 1978 standardization of virtual circuits by the CCITT imposes per-connection flow controls at all user-to-network and network-to-network interfaces. This permits participation in congestion control and reduces the likelihood of packet loss in a heavily loaded network. Some circuit protocols provide reliable communication service through the use of data retransmissions invoked by error detection and automatic repeat request (ARQ).
Before a virtual circuit may be used, it must be established between network nodes in the call setup phase. Once established, a bit stream or byte stream may be exchanged between the nodes, providing abstraction from low-level division into protocol data units, and enabling higher-level protocols to operate transparently.
An alternative to virtual-circuit networks are datagram networks.
Comparison with circuit switching
Virtual circuit communication resembles circuit switching, since both are connection oriented, meaning that in both cases data is delivered in correct order, and signaling overhead is required during a connection establishment phase. However, circuit switching provides a constant bit rate and latency, while these may vary in a virtual circuit service due to factors such as:
varying packet queue lengths in the network nodes,
varying bit rate generated by the application,
varying load from other users sharing the same network resources by means of statistical multiplexing, etc.
Virtual call capability
In telecommunications, a virtual call capability, sometimes called a virtual call facility, is a service feature in which:
a call set-up procedure and a call disengagement procedure determine the period of communication between two DTEs in which user data are transferred by a packet switched network
end-to-end transfer control of packets within the network is required
data may be delivered to the network by the call originator before the call access phase is completed, but the data are not delivered to the call receiver if the call attempt is unsuccessful
the network delivers all the user data to the call receiver in the same sequence in which the data are received by the network
multi-access DTEs may have several virtual calls in progress at the same time.
An alternative approach to virtual calls is connectionless communication using datagrams.
In the early 1970s, virtual call capability was developed by British Telecom for EPSS (building on the work of Donald Davies at the National Physical Laboratory). The concept was enhanced by Rémi Després as virtual circuits for the RCP experimental network of the French PTT.
Layer 4 virtual circuits
Connection oriented transport layer protocols such as TCP may rely on a connectionless packet switching network layer protocol such as IP, where different packets may be routed over different paths, and thus be delivered out of order. However, it is possible to use TCP as a virtual circuit, since TCP includes segment numbering that allows reordering on the receiver side to accommodate out-of-order delivery.
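To illustrate how sequence numbering lets a receiver rebuild an in-order byte stream from out-of-order delivery, the sketch below reorders segments by their sequence numbers. It is a simplified illustration only, not an implementation of TCP (there are no acknowledgements, retransmissions or windows).

```python
def reassemble(segments):
    """Rebuild an in-order byte stream from (sequence_number, payload) pairs.

    As in TCP, the sequence number is the offset of the payload's first byte
    within the stream; segments may arrive in any order.
    """
    stream = bytearray()
    expected = 0
    pending = {}                      # sequence number -> payload, held until contiguous
    for seq, payload in segments:
        pending[seq] = payload
        while expected in pending:    # deliver every segment that is now contiguous
            data = pending.pop(expected)
            stream.extend(data)
            expected += len(data)
    return bytes(stream)

# Two segments delivered out of order by the underlying network:
print(reassemble([(8, b"circuit"), (0, b"virtual ")]))   # b'virtual circuit'
```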
Layer 2/3 virtual circuits
Data link layer and network layer virtual circuit protocols are based on connection-oriented packet switching, meaning that data is always delivered along the same network path, i.e., through the same nodes. Advantages with this over connectionless packet switching are:
Bandwidth reservation during the connection establishment phase is supported, making guaranteed quality of service (QoS) possible. For example, a constant bit rate QoS class may be provided, resulting in emulation of circuit switching.
Less overhead is required since the packets are not routed individually and complete addressing information is not provided in the header of each data packet. Only a small virtual channel identifier (VCI) is required in each packet. Routing information is only transferred to the network nodes during the connection establishment phase.
The network nodes are faster and have higher capacity in theory since they are switches that only perform routing during the connection establishment phase, while connectionless network nodes are routers that perform routing for each packet individually. Switching only involves looking up the virtual channel identifier in a table rather than analyzing a complete address (see the sketch after this list). Switches can easily be implemented in ASIC hardware, while routing is more complex and requires software implementation. However, because of the large market for IP routers, and because advanced IP routers support layer 3 switching, modern IP routers may today be faster than switches for connection-oriented protocols.
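A minimal sketch of that per-node lookup is shown below: each switch keeps a table mapping an incoming (port, VCI) pair to an outgoing (port, VCI) pair, populated during connection establishment. The port numbers and identifiers are invented for illustration.

```python
# Forwarding table installed during the connection establishment phase:
# (incoming port, incoming VCI) -> (outgoing port, outgoing VCI)
vc_table = {
    (1, 42): (3, 77),
    (2, 10): (3, 78),
}

def switch_packet(in_port, in_vci, payload):
    """Forward one packet or cell by table lookup; the VCI is rewritten hop by hop."""
    out_port, out_vci = vc_table[(in_port, in_vci)]
    return out_port, out_vci, payload

print(switch_packet(1, 42, b"data"))   # (3, 77, b'data')
```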
Example protocols
Examples of transport layer protocols that provide a virtual circuit:
Transmission Control Protocol (TCP), where a reliable virtual circuit is established on top of the underlying unreliable and connectionless IP protocol. The virtual circuit is identified by the source and destination network socket address pair, i.e. the sender and receiver IP address and port number. Guaranteed QoS is not provided.
Stream Control Transmission Protocol (SCTP), where a virtual circuit is established on top of the IP protocol.
Examples of network-layer and data-link-layer virtual circuit protocols, where data always is delivered over the same path:
X.25, where the VC is identified by a virtual channel identifier (VCI). X.25 provides reliable node-to-node communication and guaranteed QoS.
Frame Relay, where the VC is identified by a DLCI. Frame Relay is unreliable, but may provide guaranteed QoS.
Asynchronous Transfer Mode (ATM), where the circuit is identified by a virtual path identifier (VPI) and virtual channel identifier (VCI) pair. The ATM layer provides unreliable virtual circuits, but the ATM protocol provides for reliability through the ATM adaptation layer (AAL) Service Specific Convergence Sublayer (SSCS) (though it uses the terms assured and non-assured rather than reliable and unreliable).
General Packet Radio Service (GPRS)
Multiprotocol Label Switching (MPLS), which can be used for IP over virtual circuits. Each circuit is identified by a label. MPLS is unreliable but provides eight different QoS classes.
Permanent and switched virtual circuits in ATM, Frame Relay, and X.25
Switched virtual circuits (SVCs) are generally set up on a per-call basis and are disconnected when the call is terminated; however, a permanent virtual circuit (PVC) can be established as an option to provide a dedicated circuit link between two facilities. PVC configuration is usually preconfigured by the service provider. Unlike SVCs, PVCs are very seldom broken or disconnected.
A switched virtual circuit (SVC) is a virtual circuit that is dynamically established on demand and is torn down when transmission is complete, for example after a phone call or a file download. SVCs are used in situations where data transmission is sporadic and/or not always between the same data terminal equipment (DTE) endpoints.
A permanent virtual circuit (PVC) is a virtual circuit established for repeated/continuous use between the same DTE. In a PVC, the long-term association is identical to the data transfer phase of a virtual call. Permanent virtual circuits eliminate the need for repeated call set-up and clearing.
Frame Relay is typically used to provide PVCs.
ATM provides both switched virtual connections and permanent virtual connections, as they are called in ATM terminology.
X.25 provides both virtual calls and PVCs, although not all X.25 service providers or DTE implementations support PVCs as their use was much less common than SVCs
See also
Data link connection identifier (DLCI)
Label switching
Protocol Wars
Traffic flow (computer networking)
References
Communication circuits
Network protocols
Packets (information technology)
Telephone services | Virtual circuit | [
"Engineering"
] | 1,565 | [
"Telecommunications engineering",
"Communication circuits"
] |
41,855 | https://en.wikipedia.org/wiki/Voice%20frequency | A voice frequency (VF) or voice band is the range of audio frequencies used for the transmission of speech.
Frequency band
In telephony, the usable voice frequency band ranges from approximately 300 to 3400 Hz. It is for this reason that the ultra low frequency band of the electromagnetic spectrum between 300 and 3000 Hz is also referred to as voice frequency, being the electromagnetic energy that represents acoustic energy at baseband. The bandwidth allocated for a single voice-frequency transmission channel is usually 4 kHz, including guard bands, allowing a sampling rate of 8 kHz to be used as the basis of the pulse-code modulation system used for the digital PSTN. Per the Nyquist–Shannon sampling theorem, the sampling frequency (8 kHz) must be at least twice the highest frequency component of the voice signal (here 4 kHz, ensured by appropriate filtering prior to sampling) for effective reconstruction of the voice signal.
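The arithmetic behind these standard telephony parameters can be checked directly, as in the sketch below; the 8-bit sample size is the value used by a standard 64 kbit/s PCM voice channel.

```python
channel_bandwidth_hz = 4_000                   # per-channel allocation including guard bands
sample_rate_hz = 2 * channel_bandwidth_hz      # Nyquist: at least twice the highest component
bits_per_sample = 8                            # standard companded PCM sample

bit_rate = sample_rate_hz * bits_per_sample
print(f"Sampling rate: {sample_rate_hz} Hz")             # 8000 Hz
print(f"Channel bit rate: {bit_rate // 1000} kbit/s")    # 64 kbit/s
```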
Fundamental frequency
The voiced speech of a typical adult male will have a fundamental frequency from 90 to 155 Hz, and that of a typical adult female from 165 to 255 Hz. Thus, the fundamental frequency of most speech falls below the bottom of the voice frequency band as defined. However, enough of the harmonic series will be present for the missing fundamental to create the impression of hearing the fundamental tone.
Wavelength
The speed of sound at room temperature (20 °C) is approximately 343 m/s. Using the formula λ = v/f (wavelength = speed of sound / frequency) with the fundamental frequencies above, we have:
Typical adult female voices (165–255 Hz) correspond to wavelengths of roughly 2.1 m down to 1.3 m.
Typical adult male voices (90–155 Hz) correspond to wavelengths of roughly 3.8 m down to 2.2 m.
See also
Formant
Hearing (sense)
Voice call
References
Human voice
Telephony
Spectrum (physical sciences) | Voice frequency | [
"Physics"
] | 323 | [
"Waves",
"Physical phenomena",
"Spectrum (physical sciences)"
] |
41,860 | https://en.wikipedia.org/wiki/Wafer%20%28electronics%29 | In electronics, a wafer (also called a slice or substrate) is a thin slice of semiconductor, such as a crystalline silicon (c-Si, silicium), used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells.
The wafer serves as the substrate for microelectronic devices built in and upon the wafer. It undergoes many microfabrication processes, such as doping, ion implantation, etching, thin-film deposition of various materials, and photolithographic patterning. Finally, the individual microcircuits are separated by wafer dicing and packaged as an integrated circuit.
History
In the semiconductor industry, the term wafer appeared in the 1950s to describe a thin round slice of semiconductor material, typically germanium or silicon. The round shape characteristic of these wafers comes from single-crystal ingots usually produced using the Czochralski method. Silicon wafers were first introduced in the 1940s.
By 1960, silicon wafers were being manufactured in the U.S. by companies such as MEMC/SunEdison. In 1965, American engineers Eric O. Ernst, Donald J. Hurd, and Gerard Seeley, while working under IBM, filed Patent US3423629A for the first high-capacity epitaxial apparatus.
Silicon wafers are made by companies such as Sumco, Shin-Etsu Chemical, Hemlock Semiconductor Corporation and Siltronic.
Production
Formation
Wafers are formed of highly pure, nearly defect-free single crystalline material, with a purity of 99.9999999% (9N) or higher.
One process for forming crystalline wafers is known as the Czochralski method, invented by Polish chemist Jan Czochralski. In this process, a cylindrical ingot of high purity monocrystalline semiconductor, such as silicon or germanium, called a boule, is formed by pulling a seed crystal from a melt. Donor impurity atoms, such as boron or phosphorus in the case of silicon, can be added to the molten intrinsic material in precise amounts in order to dope the crystal, thus changing it into an extrinsic semiconductor of n-type or p-type.
The boule is then sliced with a wafer saw (a type of wire saw), machined to improve flatness, chemically etched to remove crystal damage from machining steps and finally polished to form wafers. The size of wafers for photovoltaics is 100–200 mm square and the thickness is 100–500 μm. Electronics use wafer sizes from 100 to 450 mm diameter. The largest wafers made have a diameter of 450 mm, but are not yet in general use.
Cleaning, texturing and etching
Wafers are cleaned with weak acids to remove unwanted particles. There are several standard cleaning procedures to make sure the surface of a silicon wafer contains no contamination. One of the most effective methods is the RCA clean.
When used for solar cells, the wafers are textured to create a rough surface, which increases their surface area and thus their efficiency. The generated PSG (phosphosilicate glass) is removed from the edge of the wafer during etching.
Wafer properties
Standard wafer sizes
Silicon substrate
Silicon wafers are available in a variety of diameters from 25.4 mm (1 inch) to 300 mm (11.8 inches). Semiconductor fabrication plants, colloquially known as fabs, are defined by the diameter of wafers that they are tooled to produce. The diameter has gradually increased to improve throughput and reduce cost, with the current state-of-the-art fab using 300 mm and a proposal to adopt 450 mm. Intel, TSMC, and Samsung were separately conducting research toward the advent of 450 mm "prototype" (research) fabs, though serious hurdles remain.
Wafers grown using materials other than silicon will have different thicknesses than a silicon wafer of the same diameter. Wafer thickness is determined by the mechanical strength of the material used; the wafer must be thick enough to support its own weight without cracking during handling. The tabulated thicknesses relate to when that process was introduced, and are not necessarily correct currently, for example the IBM BiCMOS7WL process is on 8-inch wafers, but these are only 200 μm thick. The weight of the wafer increases with its thickness and the square of its diameter. Date of introduction does not indicate that factories will convert their equipment immediately, in fact, many factories do not bother upgrading. Instead, companies tend to expand and build whole new lines with newer technologies, leaving a large spectrum of technologies in use at the same time.
Gallium Nitride substrate
GaN substrate wafers have typically followed their own timeline, parallel to but far behind silicon substrates, while ahead of other substrates. The world's first 300 mm wafer made of GaN was announced in September 2024 by Infineon, suggesting that it could in the near future operate the first factory with commercial 300 mm GaN output.
SiC substrate
Meanwhile, the world's first 200 mm silicon carbide (SiC) wafers were announced in July 2021 by ST Microelectronics. It is not known whether 200 mm SiC has entered volume production as of 2024, as the largest fabs for SiC in commercial production typically remain at 150 mm.
Silicon on sapphire
Silicon on sapphire differs from a silicon substrate in that the substrate is sapphire while the superstrate is silicon; the epitaxial layers and doping can be anything. SOS in commercial production is typically limited to 150 mm wafer sizes as of 2024.
Gallium Arsenide substrate
GaAs wafers in commercial production tend to be at most 150 mm as of 2024.
Aluminum Nitride substrate
AlN wafers tend to be 50 mm (2 inch) in commercial production, while 100 mm (4 inch) wafers are being developed as of 2024 by wafer suppliers such as Asahi Kasei. However, the mere commercial existence of a wafer size does not imply that processing equipment to produce chips on that wafer exists; such equipment tends to lag development until paying end-customer demand materializes. Even after equipment is developed, which takes years, it can take further years for fabs to learn to use the machines productively.
Historical increases of wafer size
A unit of wafer fabrication step, such as an etch step, can produce more chips in proportion to the increase in wafer area, while the cost of the unit fabrication step rises more slowly than the wafer area. This was the cost basis for increasing wafer size. Conversion to 300 mm wafers from 200 mm wafers began in early 2000 and reduced the price per die by about 30–40%. Larger diameter wafers allow for more dies per wafer.
Photovoltaic
M1 wafer size (156.75 mm) is in the process of being phased out in China as of 2020. Various nonstandard wafer sizes have arisen, so efforts to fully adopt the M10 standard (182 mm) are ongoing. Like other semiconductor fabrication processes, driving down costs has been the main driving factor for this attempted size increase, in spite of the differences in the manufacturing processes of different types of devices.
Crystalline orientation
Wafers are grown from crystal having a regular crystal structure, with silicon having a diamond cubic structure with a lattice spacing of 5.430710 Å (0.5430710 nm). When cut into wafers, the surface is aligned in one of several relative directions known as crystal orientations. Orientation is defined by the Miller index with (100) or (111) faces being the most common for silicon.
Orientation is important since many of a single crystal's structural and electronic properties are highly anisotropic. Ion implantation depths depend on the wafer's crystal orientation, since each direction offers distinct paths for transport.
Wafer cleavage typically occurs only in a few well-defined directions. Scoring the wafer along cleavage planes allows it to be easily diced into individual chips ("dies") so that the billions of individual circuit elements on an average wafer can be separated into many individual circuits.
Crystallographic orientation notches
Wafers under 200 mm diameter have flats cut into one or more sides indicating the crystallographic planes of the wafer (usually a {110} face). In earlier-generation wafers a pair of flats at different angles additionally conveyed the doping type (see illustration for conventions). Wafers of 200 mm diameter and above use a single small notch to convey wafer orientation, with no visual indication of doping type. 450 mm wafers are notchless, relying on a laser scribed structure on the wafer surface for orientation.
Impurity doping
Silicon wafers are generally not 100% pure silicon, but are instead formed with an initial impurity doping concentration between 1013 and 1016 atoms per cm3 of boron, phosphorus, arsenic, or antimony which is added to the melt and defines the wafer as either bulk n-type or p-type. However, compared with single-crystal silicon's atomic density of 5×1022 atoms per cm3, this still gives a purity greater than 99.9999%. The wafers can also be initially provided with some interstitial oxygen concentration. Carbon and metallic contamination are kept to a minimum. Transition metals, in particular, must be kept below parts per billion concentrations for electronic applications.
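The purity figure quoted above follows from a one-line calculation, sketched here for the heaviest initial doping level mentioned.

```python
dopant_atoms_per_cm3 = 1e16          # heaviest initial doping level mentioned above
silicon_atoms_per_cm3 = 5e22         # atomic density of single-crystal silicon

impurity_fraction = dopant_atoms_per_cm3 / silicon_atoms_per_cm3
print(f"Purity: {100 * (1 - impurity_fraction):.5f} %")   # 99.99998 %
```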
450 mm wafers
Challenges
There is considerable resistance to the 450 mm transition despite the possible productivity improvement, because of concern about insufficient return on investment. There are also issues related to increased inter-die / edge-to-edge wafer variation and additional edge defects. 450mm wafers are expected to cost 4 times as much as 300mm wafers, and equipment costs are expected to rise by 20 to 50%. Higher cost semiconductor fabrication equipment for larger wafers increases the cost of 450 mm fabs (semiconductor fabrication facilities or factories). Lithographer Chris Mack claimed in 2012 that the overall price per die for 450 mm wafers would be reduced by only 10–20% compared to 300 mm wafers, because over 50% of total wafer processing costs are lithography-related. Converting to larger 450 mm wafers would reduce price per die only for process operations such as etch where cost is related to wafer count, not wafer area. Cost for processes such as lithography is proportional to wafer area, and larger wafers would not reduce the lithography contribution to die cost.
Nikon planned to deliver 450-mm lithography equipment in 2015, with volume production in 2017. In November 2013 ASML paused development of 450-mm lithography equipment, citing uncertain timing of chipmaker demand.
In 2012, a group consisting of New York State (SUNY Poly/College of Nanoscale Science and Engineering (CNSE)), Intel, TSMC, Samsung, IBM, Globalfoundries and Nikon formed a public-private partnership called the Global 450mm Consortium (G450C, similar to SEMATECH), which made a five-year plan (expiring in 2016) to develop a "cost effective wafer fabrication infrastructure, equipment prototypes and tools to enable coordinated industry transition to 450mm wafer level". In mid-2014, CNSE announced that it would reveal the first fully patterned 450mm wafers at SEMICON West. In early 2017, the G450C began to dismantle its 450mm wafer research activities for undisclosed reasons. Various sources have speculated that the demise of the group came after charges of bid rigging were made against Alain E. Kaloyeros, who at the time was chief executive of SUNY Poly. The industry's realization that optimizing 300mm manufacturing was cheaper than a costly transition to 450mm may also have played a role.
The timeline for 450 mm has not been fixed. In 2012, it was expected that 450mm production would start in 2017, which never materialized. Mark Durcan, then CEO of Micron Technology, said in February 2014 that he expects 450 mm adoption to be delayed indefinitely or discontinued. "I am not convinced that 450mm will ever happen but, to the extent that it does, it's a long way out in the future. There is not a lot of necessity for Micron, at least over the next five years, to be spending a lot of money on 450mm."
"There is a lot of investment that needs to go on in the equipment community to make that happen. And the value at the end of the day – so that customers would buy that equipment – I think is dubious." As of March 2014, Intel Corporation expected 450 mm deployment by 2020 (by the end of this decade). Mark LaPedus of semiengineering.com reported in mid-2014 that chipmakers had delayed adoption of 450 mm "for the foreseeable future." According to this report some observers expected 2018 to 2020, while G. Dan Hutcheson, chief executive of VLSI Research, didn't see 450mm fabs moving into production until 2020 to 2025.
The step up to 300 mm required major changes, with fully automated factories using 300 mm wafers versus barely automated factories for the 200 mm wafers, partly because a FOUP for 300 mm wafers weighs about 7.5 kilograms when loaded with 25 300 mm wafers, whereas a SMIF weighs about 4.8 kilograms when loaded with 25 200 mm wafers, thus requiring twice the physical strength from factory workers and increasing fatigue. 300mm FOUPs have handles so that they can still be moved by hand. 450mm FOUPs weigh 45 kilograms when loaded with 25 450 mm wafers, thus cranes are necessary to manually handle the FOUPs and handles are no longer present on the FOUP. FOUPs are moved around using material handling systems from Muratec or Daifuku. These major investments were undertaken in the economic downturn following the dot-com bubble, resulting in huge resistance to upgrading to 450 mm by the original timeframe. On the ramp-up to 450 mm, the crystal ingots will be 3 times heavier (total weight a metric ton) and take 2–4 times longer to cool, and the process time will be double. All told, the development of 450 mm wafers requires significant engineering, time, and cost to overcome.
Analytical die count estimation
In order to minimize the cost per die, manufacturers wish to maximize the number of dies that can be made from a single wafer; dies always have a square or rectangular shape due to the constraint of wafer dicing. In general, this is a computationally complex problem with no analytical solution, dependent on both the area of the dies as well as their aspect ratio (square or rectangular) and other considerations such as the width of the scribeline or saw lane, and additional space occupied by alignment and test structures. (By simplifying the problem so that the scribeline and saw lane are both zero-width, the wafer is perfectly circular with no flats, and the dies have a square aspect ratio, we arrive at the Gauss Circle Problem, an unsolved open problem in mathematics.)
Note that formulas estimating the gross dies per wafer (DPW) account only for the number of complete dies that can fit on the wafer; gross DPW calculations do not account for yield loss among those complete dies due to defects or parametric issues.
Nevertheless, the number of gross DPW can be estimated starting with the first-order approximation or floor function of the wafer-to-die area ratio,
DPW = floor(π d²/(4 S)),
where
d is the wafer diameter (typically in mm)
S is the size of each die (mm2) including the width of the scribeline (or, in the case of a saw lane, the kerf plus a tolerance).
This formula simply states that the number of dies which can fit on the wafer cannot exceed the area of the wafer divided by the area of each individual die. It will always overestimate the true best-case gross DPW, since it includes the area of partially patterned dies which do not fully lie on the wafer surface. These partially patterned dies don't represent complete ICs, so they usually cannot be sold as functional parts.
Refinements of this simple formula typically add an edge correction, to account for partial dies on the edge, which in general will be more significant when the area of the die is large compared to the total area of the wafer. In the other limiting case (infinitesimally small dies or infinitely large wafers), the edge correction is negligible.
The correction factor or correction term generally takes one of the forms cited by De Vries:
DPW = π d²/(4 S) − π d/√(2 S) (area ratio minus circumference divided by the die diagonal length)
or DPW = (π d²/(4 S)) · exp(−2√S/d) (area ratio scaled by an exponential factor)
or DPW = (π d²/(4 S)) · (1 − 2√S/d)² (area ratio scaled by a polynomial factor).
Studies comparing these analytical formulas to brute-force computational results show that the formulas can be made more accurate, over practical ranges of die sizes and aspect ratios, by adjusting the coefficients of the corrections to values above or below unity, and by replacing the linear die dimension √S with the average die side length in the case of dies with large aspect ratio.
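These estimates are straightforward to compute; the sketch below implements the first-order approximation and the three correction forms side by side. The 300 mm wafer and 100 mm2 die used in the example are arbitrary illustrative values.

```python
import math

def dpw_estimates(wafer_diameter_mm, die_area_mm2):
    """Gross die-per-wafer estimates: first-order area ratio and three corrected forms."""
    d, s = wafer_diameter_mm, die_area_mm2
    area_ratio = math.pi * d * d / (4 * s)
    return {
        "area ratio (floor)": math.floor(area_ratio),
        "minus circumference / die diagonal": math.floor(area_ratio - math.pi * d / math.sqrt(2 * s)),
        "exponential correction": math.floor(area_ratio * math.exp(-2 * math.sqrt(s) / d)),
        "polynomial correction": math.floor(area_ratio * (1 - 2 * math.sqrt(s) / d) ** 2),
    }

# Example: a 300 mm wafer and a 10 mm x 10 mm (100 mm^2) die.
for name, count in dpw_estimates(300, 100).items():
    print(f"{name}: {count}")
```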
Compound semiconductors
While silicon is the prevalent material for wafers used in the electronics industry, other compound III-V or II-VI materials have also been employed. Gallium arsenide (GaAs), a III-V semiconductor produced via the Czochralski method, gallium nitride (GaN) and silicon carbide (SiC) are also common wafer materials, with GaN and sapphire being extensively used in LED manufacturing.
See also
Die preparation
Epitaxial wafer
Epitaxy
Monocrystalline silicon
Polycrystalline silicon
Rapid thermal processing
RCA clean
SEMI font
Silicon on insulator (SOI) wafers
Solar cell
Solar panel
Wafer bonding
References
External links
Evolution of the Silicon Wafer by F450C: an infographic about the history of the silicon wafer.
Semiconductor device fabrication | Wafer (electronics) | [
"Materials_science"
] | 3,746 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
41,863 | https://en.wikipedia.org/wiki/Waveguide | A waveguide is a structure that guides waves by restricting the transmission of energy to one direction. Common types of waveguides include acoustic waveguides which direct sound, optical waveguides which direct light, and radio-frequency waveguides which direct electromagnetic waves other than light like radio waves.
Without the physical constraint of a waveguide, waves would expand into three-dimensional space and their intensities would decrease according to the inverse square law.
There are different types of waveguides for different types of waves. The original and most common meaning is a hollow conductive metal pipe used to carry high frequency radio waves, particularly microwaves. Dielectric waveguides are used at higher radio frequencies, and transparent dielectric waveguides and optical fibers serve as waveguides for light. In acoustics, air ducts and horns are used as waveguides for sound in musical instruments and loudspeakers, and specially-shaped metal rods conduct ultrasonic waves in ultrasonic machining.
The geometry of a waveguide reflects its function; in addition to more common types that channel the wave in one dimension, there are two-dimensional slab waveguides which confine waves to two dimensions. The frequency of the transmitted wave also dictates the size of a waveguide: each waveguide has a cutoff wavelength determined by its size and will not conduct waves of greater wavelength; an optical fiber that guides light will not transmit microwaves which have a much larger wavelength. Some naturally occurring structures can also act as waveguides. The SOFAR channel layer in the ocean can guide the sound of whale song across enormous distances.
Any shape of cross section of waveguide can support EM waves. Irregular shapes are difficult to analyse. Commonly used waveguides are rectangular and circular in shape.
Uses
The uses of waveguides for transmitting signals were known even before the term was coined. The phenomenon of sound waves guided through a taut wire has been known for a long time, as has sound through a hollow pipe such as a cave or medical stethoscope. Other uses of waveguides are in transmitting power between the components of a system such as radio, radar or optical devices. Waveguides are the fundamental principle of guided wave testing (GWT), one of the many methods of non-destructive evaluation.
Specific examples:
Optical fibers transmit light and signals for long distances with low attenuation and a wide usable range of wavelengths.
In a microwave oven a waveguide transfers power from the magnetron, where waves are formed, to the cooking chamber.
In a radar, a waveguide transfers radio frequency energy to and from the antenna, where the impedance needs to be matched for efficient power transmission (see below).
Rectangular and circular waveguides are commonly used to connect feeds of parabolic dishes to their electronics, either low-noise receivers or power amplifier/transmitters.
Waveguides are used in scientific instruments to measure optical, acoustic and elastic properties of materials and objects. The waveguide can be put in contact with the specimen (as in a medical ultrasonography), in which case the waveguide ensures that the power of the testing wave is conserved, or the specimen may be put inside the waveguide (as in a dielectric constant measurement), so that smaller objects can be tested and the accuracy is better.
A transmission line is a commonly used specific type of waveguide.
History
The first structure for guiding waves was proposed by J. J. Thomson in 1893, and was first experimentally tested by Oliver Lodge in 1894. The first mathematical analysis of electromagnetic waves in a metal cylinder was performed by Lord Rayleigh in 1897. For sound waves, Lord Rayleigh published a full mathematical analysis of propagation modes in his seminal work, "The Theory of Sound". Jagadish Chandra Bose researched millimeter wavelengths using waveguides, and in 1897 described to the Royal Institution in London his research carried out in Kolkata.
The study of dielectric waveguides (such as optical fibers, see below) began as early as the 1920s, by several people, most famous of which are Rayleigh, Sommerfeld and Debye. Optical fiber began to receive special attention in the 1960s due to its importance to the communications industry.
The development of radio communication initially occurred at the lower frequencies because these could be more easily propagated over large distances. The long wavelengths made these frequencies unsuitable for use in hollow metal waveguides because of the impractically large diameter tubes required. Consequently, research into hollow metal waveguides stalled and the work of Lord Rayleigh was forgotten for a time and had to be rediscovered by others. Practical investigations resumed in the 1930s by George C. Southworth at Bell Labs and Wilmer L. Barrow at MIT. Southworth at first took the theory from papers on waves in dielectric rods because the work of Lord Rayleigh was unknown to him. This misled him somewhat; some of his experiments failed because he was not aware of the phenomenon of waveguide cutoff frequency already found in Lord Rayleigh's work. Serious theoretical work was taken up by John R. Carson and Sallie P. Mead. This work led to the discovery that for the TE01 mode in circular waveguide losses go down with frequency and at one time this was a serious contender for the format for long-distance telecommunications.
The importance of radar in World War II gave a great impetus to waveguide research, at least on the Allied side. The magnetron, developed in 1940 by John Randall and Harry Boot at the University of Birmingham in the United Kingdom, provided a good power source and made microwave radar feasible. The most important centre of US research was at the Radiation Laboratory (Rad Lab) at MIT but many others took part in the US, and in the UK such as the Telecommunications Research Establishment. The head of the Fundamental Development Group at Rad Lab was Edward Mills Purcell. His researchers included Julian Schwinger, Nathan Marcuvitz, Carol Gray Montgomery, and Robert H. Dicke. Much of the Rad Lab work concentrated on finding lumped element models of waveguide structures so that components in waveguide could be analysed with standard circuit theory. Hans Bethe was also briefly at Rad Lab, but while there he produced his small aperture theory which proved important for waveguide cavity filters, first developed at Rad Lab. The German side, on the other hand, largely ignored the potential of waveguides in radar until very late in the war. So much so that when radar parts from a downed British plane were sent to Siemens & Halske for analysis, even though they were recognised as microwave components, their purpose could not be identified.
German academics were even allowed to continue publicly publishing their research in this field because it was not felt to be important.
Immediately after World War II waveguide was the technology of choice in the microwave field. However, it has some problems; it is bulky, expensive to produce, and the cutoff frequency effect makes it difficult to produce wideband devices. Ridged waveguide can increase bandwidth beyond an octave, but a better solution is to use a technology working in TEM mode (that is, non-waveguide) such as coaxial conductors since TEM does not have a cutoff frequency. A shielded rectangular conductor can also be used and this has certain manufacturing advantages over coax and can be seen as the forerunner of the planar technologies (stripline and microstrip). However, planar technologies really started to take off when printed circuits were introduced. These methods are significantly cheaper than waveguide and have largely taken its place in most bands. However, waveguide is still favoured in the higher microwave bands from around Ku band upwards.
Properties
Propagation modes and cutoff frequencies
A propagation mode in a waveguide is one solution of the wave equations, or, in other words, the form of the wave. Due to the constraints of the boundary conditions, there are only limited frequencies and forms for the wave function which can propagate in the waveguide. The lowest frequency in which a certain mode can propagate is the cutoff frequency of that mode. The mode with the lowest cutoff frequency is the fundamental mode of the waveguide, and its cutoff frequency is the waveguide cutoff frequency.
Propagation modes are computed by solving the Helmholtz equation alongside a set of boundary conditions depending on the geometrical shape and materials bounding the region. The usual assumption for infinitely long uniform waveguides allows us to assume a propagating form for the wave, i.e. stating that every field component has a known dependency on the propagation direction (i.e. a factor $e^{-\gamma z}$, taking $z$ as the direction along which the guide extends). More specifically, the common approach is to first replace all unknown time-varying fields (assuming for simplicity to describe the fields in Cartesian components) with their complex phasor representation, sufficient to fully describe any infinitely long single-tone signal at frequency $f$ (angular frequency $\omega = 2\pi f$), and rewrite the Helmholtz equation and boundary conditions accordingly. Then, every unknown field is forced to have a form like $A(x, y)\,e^{-\gamma z}$, where the term $\gamma$ represents the propagation constant (still unknown) along the direction along which the waveguide extends to infinity. The Helmholtz equation can be rewritten to accommodate such a form, and the resulting equality needs to be solved for $\gamma$ and $A(x, y)$, yielding in the end an eigenvalue equation for $\gamma$ and a corresponding eigenfunction for each solution of the former.
The propagation constant $\gamma$ of the guided wave is complex, in general. For a lossless case, the propagation constant might be found to take on either real or imaginary values, depending on the chosen solution of the eigenvalue equation and on the angular frequency $\omega$. When $\gamma$ is purely real, the mode is said to be "below cutoff", since the amplitude of the field phasors tends to decrease exponentially with propagation; an imaginary $\gamma$, instead, represents modes said to be "in propagation" or "above cutoff", as the complex amplitude of the phasors does not change with $z$.
Impedance matching
In circuit theory, the impedance is a generalization of electrical resistance in the case of alternating current, and is measured in ohms (Ω). A waveguide in circuit theory is described by a transmission line having a length and a characteristic impedance. In other words, the impedance indicates the ratio of voltage to current of the circuit component (in this case a waveguide) during propagation of the wave. This description of the waveguide was originally intended for alternating current, but is also suitable for electromagnetic and sound waves, once the wave and material properties (such as pressure, density, dielectric constant) are properly converted into electrical terms (current and impedance for example).
Impedance matching is important when components of an electric circuit are connected (waveguide to antenna for example): The impedance ratio determines how much of the wave is transmitted forward and how much is reflected. In connecting a waveguide to an antenna a complete transmission is usually required, so an effort is made to match their impedances.
The reflection coefficient can be calculated using $\Gamma = \frac{Z_2 - Z_1}{Z_2 + Z_1}$, where $\Gamma$ (Gamma) is the reflection coefficient (0 denotes full transmission, 1 full reflection, and 0.5 is a reflection of half the incoming voltage), and $Z_1$ and $Z_2$ are the impedances of the first component (from which the wave enters) and the second component, respectively.
An impedance mismatch creates a reflected wave, which adds to the incoming waves to create a standing wave. An impedance mismatch can also be quantified with the standing wave ratio (SWR or VSWR for voltage), which is connected to the impedance ratio and reflection coefficient by $\mathrm{VSWR} = \frac{|V|_{\max}}{|V|_{\min}} = \frac{1 + |\Gamma|}{1 - |\Gamma|}$, where $|V|_{\min}$ and $|V|_{\max}$ are the minimum and maximum values of the voltage absolute value, and the VSWR is the voltage standing wave ratio; a value of 1 denotes full transmission, without reflection and thus no standing wave, while very large values mean high reflection and a standing wave pattern.
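As a small worked example of the two relations above, here is a sketch (illustrative function names, not from any standard library) computing the reflection coefficient and VSWR for a hypothetical junction between a 50 Ω line and a 75 Ω load.

```python
def reflection_coefficient(z1, z2):
    """Gamma = (Z2 - Z1) / (Z2 + Z1); 0 means full transmission, magnitude 1 full reflection."""
    return (z2 - z1) / (z2 + z1)

def vswr(gamma):
    """Voltage standing wave ratio: 1 means no reflection; large values mean strong reflection."""
    g = abs(gamma)
    return float("inf") if g >= 1 else (1 + g) / (1 - g)

# Example: a 50-ohm line or guide feeding a 75-ohm load
gamma = reflection_coefficient(50, 75)
print(abs(gamma))   # 0.2
print(vswr(gamma))  # 1.5
```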
Electromagnetic waveguides
Radio-frequency waveguides
Waveguides can be constructed to carry waves over a wide portion of the electromagnetic spectrum, but are especially useful in the microwave and optical frequency ranges. Depending on the frequency, they can be constructed from either conductive or dielectric materials. Waveguides are used for transferring both power and communication signals.
Optical waveguides
Waveguides used at optical frequencies are typically dielectric waveguides, structures in which a dielectric material with high permittivity, and thus high index of refraction, is surrounded by a material with lower permittivity. The structure guides optical waves by total internal reflection. An example of an optical waveguide is optical fiber.
Other types of optical waveguide are also used, including photonic-crystal fiber, which guides waves by any of several distinct mechanisms. Guides in the form of a hollow tube with a highly reflective inner surface have also been used as light pipes for illumination applications. The inner surfaces may be polished metal, or may be covered with a multilayer film that guides light by Bragg reflection (this is a special case of a photonic-crystal fiber). One can also use small prisms around the pipe which reflect light via total internal reflection; such confinement is necessarily imperfect, however, since total internal reflection can never truly guide light within a lower-index core (in the prism case, some light leaks out at the prism corners).
Acoustic waveguides
An acoustic waveguide is a physical structure for guiding sound waves. Sound in an acoustic waveguide behaves like electromagnetic waves on a transmission line. Waves on a string, like the ones in a tin can telephone, are a simple example of an acoustic waveguide. Another example are pressure waves in the pipes of an organ. The term acoustic waveguide is also used to describe elastic waves guided in micro-scale devices, like those employed in piezoelectric delay lines and in stimulated Brillouin scattering.
Mathematical waveguides
Waveguides are interesting objects of study from a strictly mathematical perspective. A waveguide (or tube) is defined as a type of boundary condition on the wave equation such that the wave function must be equal to zero on the boundary and the allowed region is finite in all dimensions but one (an infinitely long cylinder is an example). A large number of interesting results can be proven from these general conditions. It turns out that any tube with a bulge (where the width of the tube increases) admits at least one bound state that exists inside the mode gaps. The frequencies of all the bound states can be identified by using a pulse that is short in time. This can be shown using variational principles. An interesting result by Jeffrey Goldstone and Robert Jaffe is that any tube of constant width with a twist admits a bound state.
Sound synthesis
Sound synthesis uses digital delay lines as computational elements to simulate wave propagation in tubes of wind instruments and the vibrating strings of string instruments.
See also
Circular polarization
Earth–ionosphere waveguide
Linear polarization
Orthomode transducer
Polarization
Flap attenuator
Notes
References
External links
Electromagnetic Waves and Antennas: Waveguides Sophocles J. Orfanidis, Department of Electrical and Computer Engineering, Rutgers University
Applied and interdisciplinary physics
Electrical components
Telecommunications equipment
British inventions
Electromagnetic radiation | Waveguide | [
"Physics",
"Technology",
"Engineering"
] | 3,100 | [
"Physical phenomena",
"Electrical components",
"Applied and interdisciplinary physics",
"Electromagnetic radiation",
"Radiation",
"Electrical engineering",
"Components"
] |
41,864 | https://en.wikipedia.org/wiki/Wave%20impedance | The wave impedance of an electromagnetic wave is the ratio of the transverse components of the electric and magnetic fields (the transverse components being those at right angles to the direction of propagation). For a transverse-electric-magnetic (TEM) plane wave traveling through a homogeneous medium, the wave impedance is everywhere equal to the intrinsic impedance of the medium. In particular, for a plane wave travelling through empty space, the wave impedance is equal to the impedance of free space. The symbol Z is used to represent it and it is expressed in units of ohms. The symbol η (eta) may be used instead of Z for wave impedance to avoid confusion with electrical impedance.
Definition
The wave impedance is given by
$$Z = \frac{E}{H},$$
where $E$ is the (transverse) electric field and $H$ is the (transverse) magnetic field, in phasor representation. The impedance is, in general, a complex number.
In terms of the parameters of an electromagnetic wave and the medium it travels through, the wave impedance is given by
$$Z = \sqrt{\frac{j\omega\mu}{\sigma + j\omega\varepsilon}},$$
where μ is the magnetic permeability, ε is the (real) electric permittivity and σ is the electrical conductivity of the material the wave is travelling through (corresponding to the imaginary component of the permittivity multiplied by ω). In the equation, j is the imaginary unit, and ω is the angular frequency of the wave. Just as for electrical impedance, the impedance is a function of frequency. In the case of an ideal dielectric (where the conductivity is zero), the equation reduces to the real number
$$Z = \sqrt{\frac{\mu}{\varepsilon}}.$$
In free space
In free space the wave impedance of plane waves is:
$$Z_0 = \sqrt{\frac{\mu_0}{\varepsilon_0}}$$
(where ε0 is the permittivity constant in free space and μ0 is the permeability constant in free space). Now, since
$$c_0 = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}$$
(by definition of the metre),
$$Z_0 = \mu_0 c_0.$$
Hence the value essentially depends on $\mu_0$.
The currently accepted value of $Z_0$ is approximately 376.73 Ω (often approximated as 120π Ω ≈ 377 Ω).
In an unbounded dielectric
In an isotropic, homogeneous dielectric with negligible magnetic properties, i.e. $\mu = \mu_0$ and $\varepsilon = \varepsilon_r \varepsilon_0$, the value of wave impedance in a perfect dielectric is
$$Z = \frac{Z_0}{\sqrt{\varepsilon_r}} \approx \frac{377}{\sqrt{\varepsilon_r}}\ \Omega,$$
where $\varepsilon_r$ is the relative dielectric constant.
In a waveguide
For any waveguide in the form of a hollow metal tube (such as rectangular guide, circular guide, or double-ridge guide), the wave impedance of a travelling wave is dependent on the frequency $f$, but is the same throughout the guide. For transverse electric (TE) modes of propagation the wave impedance is:
$$Z_{\mathrm{TE}} = \frac{Z_0}{\sqrt{1 - \left(\frac{f_c}{f}\right)^2}},$$
where fc is the cut-off frequency of the mode, and for transverse magnetic (TM) modes of propagation the wave impedance is:
$$Z_{\mathrm{TM}} = Z_0 \sqrt{1 - \left(\frac{f_c}{f}\right)^2}.$$
Above the cut-off ($f > f_c$), the impedance is real (resistive) and the wave carries energy. Below cut-off the impedance is imaginary (reactive) and the wave is evanescent. These expressions neglect the effect of resistive loss in the walls of the waveguide. For a waveguide entirely filled with a homogeneous dielectric medium, similar expressions apply, but with the wave impedance of the medium replacing Z0. The presence of the dielectric also modifies the cut-off frequency fc.
For a waveguide or transmission line containing more than one type of dielectric medium (such as microstrip), the wave impedance will in general vary over the cross-section of the line.
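A short sketch evaluating the TE and TM expressions above for a hollow air-filled guide; the helper name is illustrative, and the 6.56 GHz cutoff used in the example corresponds roughly to the TE10 mode of a standard WR-90 rectangular guide.

```python
import cmath

def mode_impedance(f_hz, fc_hz, mode, z0=376.73):
    """Wave impedance of a TE or TM mode in a hollow guide whose filling medium has
    intrinsic impedance z0: real (resistive) above cutoff, imaginary (reactive) below it."""
    factor = cmath.sqrt(1 - (fc_hz / f_hz) ** 2)
    return z0 / factor if mode.upper() == "TE" else z0 * factor

print(mode_impedance(10e9, 6.56e9, "TE"))   # ~500 ohms, essentially real (propagating)
print(mode_impedance(10e9, 6.56e9, "TM"))   # ~284 ohms, essentially real
print(mode_impedance(5e9, 6.56e9, "TE"))    # purely imaginary: the mode is evanescent
```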
See also
Characteristic impedance
Impedance (disambiguation)
Impedance of free space
References
Wave mechanics
Electromagnetic radiation | Wave impedance | [
"Physics"
] | 709 | [
"Physical phenomena",
"Electromagnetic radiation",
"Classical mechanics",
"Waves",
"Wave mechanics",
"Radiation"
] |
41,868 | https://en.wikipedia.org/wiki/Wideband%20modem | In telecommunications, the term wideband modem has the following meanings:
A modem whose modulated output signal can have an essential frequency spectrum that is broader than that which can be wholly contained within, and faithfully transmitted through, a voice channel with a nominal 4 kHz bandwidth.
A modem whose bandwidth capability is greater than that of a narrowband modem.
References
Networking hardware
Modems | Wideband modem | [
"Technology",
"Engineering"
] | 80 | [
"Computing stubs",
"Computer networks engineering",
"Computer hardware stubs",
"Networking hardware"
] |
41,871 | https://en.wikipedia.org/wiki/Wireless%20mobility%20management | Wireless mobility management in Personal Communications Service (PCS) is the assigning and controlling of wireless links for terminal network connections. Wireless mobility management provides an "alerting" function for call completion to a wireless terminal, monitors wireless link performance to determine when an automatic link transfer is required, and coordinates link transfers between wireless access interfaces.
One use of this is wireless push technology, by pushing data across wireless networks, this coordinates the link transfers and pushes data between the backend and wireless device only when an established connection is found.
References
Push technology
Wireless networking | Wireless mobility management | [
"Technology",
"Engineering"
] | 111 | [
"Wireless networking",
"Computer networks engineering"
] |
41,878 | https://en.wikipedia.org/wiki/Zip-cord | Zip-cord is a type of electrical cable with two or more conductors held together by an insulating jacket that can be easily separated simply by pulling apart. In Australia it is known as 'figure-8' cable. The zip-cord term is also used with optical fiber cables consisting of two optical fibers joined in a similar manner. The design of zip-cord makes it easy to keep conductors that carry related electrical or optical signals together and helps avoid tangling of cables. Typical uses include lamp cord and speaker wire. Conductors may be identified by a color tracer on the insulation, or by a ridge molded into the insulation of one wire, or by a colored tracer thread inside the insulation. Zip cords are intended for use on portable equipment, and the US and Canadian electrical codes do not permit their use for permanently installed wiring of line-voltage circuits.
See also
Wire
Extension cord
References
Consumer electronics
Electrical wiring | Zip-cord | [
"Physics",
"Engineering"
] | 184 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
41,890 | https://en.wikipedia.org/wiki/Group%20theory | In abstract algebra, group theory studies the algebraic structures known as groups.
The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right.
Various physical systems, such as crystals and the hydrogen atom, and three of the four known fundamental forces in the universe, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.
The early history of group theory dates from the 19th century. One of the most important mathematical achievements of the 20th century was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups.
History
Group theory has three main historical sources: number theory, the theory of algebraic equations, and geometry. The number-theoretic strand was begun by Leonhard Euler, and developed by Gauss's work on modular arithmetic and additive and multiplicative groups related to quadratic fields. Early results about permutation groups were obtained by Lagrange, Ruffini, and Abel in their quest for general solutions of polynomial equations of high degree. Évariste Galois coined the term "group" and established a connection, now known as Galois theory, between the nascent theory of groups and field theory. In geometry, groups first became important in projective geometry and, later, non-Euclidean geometry. Felix Klein's Erlangen program proclaimed group theory to be the organizing principle of geometry.
Galois, in the 1830s, was the first to employ groups to determine the solvability of polynomial equations. Arthur Cayley and Augustin Louis Cauchy pushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems from geometrical situations. In an attempt to come to grips with possible geometries (such as euclidean, hyperbolic or projective geometry) using group theory, Felix Klein initiated the Erlangen programme. Sophus Lie, in 1884, started using groups (now called Lie groups) attached to analytic problems. Thirdly, groups were, at first implicitly and later explicitly, used in algebraic number theory.
The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth of abstract algebra in the early 20th century, representation theory, and many more influential spin-off domains. The classification of finite simple groups is a vast body of work from the mid 20th century, classifying all the finite simple groups.
Main classes of groups
The range of groups being considered has gradually expanded from finite permutation groups and special examples of matrix groups to abstract groups that may be specified through a presentation by generators and relations.
Permutation groups
The first class of groups to undergo a systematic study was permutation groups. Given any set X and a collection G of bijections of X into itself (known as permutations) that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, any permutation group G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself ($X = G$) by means of the left regular representation.
In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for $n \ge 5$, the alternating group $A_n$ is simple, i.e. does not admit any proper normal subgroups. This fact plays a key role in the impossibility of solving a general algebraic equation of degree $n \ge 5$ in radicals.
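As a concrete illustration of a permutation group, and of the left regular representation mentioned above, here is a minimal sketch (helper names are illustrative) that builds S3 as tuples and checks closure and inverses.

```python
from itertools import permutations

# The symmetric group S3: a permutation p is a tuple mapping i -> p[i].
S3 = list(permutations(range(3)))

def compose(p, q):
    """(p o q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

identity = (0, 1, 2)
assert all(compose(p, q) in S3 for p in S3 for q in S3)      # closure
assert all(compose(p, inverse(p)) == identity for p in S3)   # inverses

# Cayley's construction (left regular representation): each g permutes the group
# itself by h -> g*h, giving a permutation of the 6 elements of S3.
g = S3[1]
print([S3.index(compose(g, h)) for h in S3])
```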
Matrix groups
The next important class of groups is given by matrix groups, or linear groups. Here G is a set consisting of invertible matrices of given order n over a field K that is closed under the products and inverses. Such a group acts on the n-dimensional vector space Kn by linear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the group G.
Transformation groups
Permutation groups and matrix groups are special cases of transformation groups: groups that act on a certain space X preserving its inherent structure. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a transformation group is closely related with the concept of a symmetry group: transformation groups frequently consist of all transformations that preserve a certain structure.
The theory of transformation groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds by homeomorphisms or diffeomorphisms. The groups themselves may be discrete or continuous.
Abstract groups
Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group began to take hold, where "abstract" means that the nature of the elements is ignored in such a way that two isomorphic groups are considered as the same group. A typical way of specifying an abstract group is through a presentation by generators and relations, $G = \langle S \mid R \rangle$.
A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory. If a group G is a permutation group on a set X, the factor group G/H is no longer acting on X; but the idea of an abstract group permits one not to worry about this discrepancy.
The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant under isomorphism, as well as the classes of group with a given such property: finite groups, periodic groups, simple groups, solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation of abstract algebra in the works of Hilbert, Emil Artin, Emmy Noether, and mathematicians of their school.
Groups with additional structure
An important elaboration of the concept of a group occurs if G is endowed with additional structure, notably, of a topological space, differentiable manifold, or algebraic variety. If the group operations m (multiplication) and i (inversion),
$$m : G \times G \to G, \quad (g, h) \mapsto gh, \qquad i : G \to G, \quad g \mapsto g^{-1},$$
are compatible with this structure, that is, they are continuous, smooth or regular (in the sense of algebraic geometry) maps, then G is a topological group, a Lie group, or an algebraic group.
The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain for abstract harmonic analysis, whereas Lie groups (frequently realized as transformation groups) are the mainstays of differential geometry and unitary representation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus, compact connected Lie groups have been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a group Γ can be realized as a lattice in a topological group G, the geometry and analysis pertaining to G yield important results about Γ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a single p-adic analytic group G has a family of quotients which are finite p-groups of various orders, and properties of G translate into the properties of its finite quotients.
Branches of group theory
Finite group theory
During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially the local theory of finite groups and the theory of solvable and nilpotent groups. As a consequence, the complete classification of finite simple groups was achieved, meaning that all those simple groups from which all finite groups can be built are now known.
During the second half of the twentieth century, mathematicians such as Chevalley and Steinberg also increased our understanding of finite analogs of classical groups, and other related groups. One such family of groups is the family of general linear groups over finite fields.
Finite groups often occur when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory of Lie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associated Weyl groups. These are finite groups generated by reflections which act on a finite-dimensional Euclidean space. The properties of finite groups can thus play a role in subjects such as theoretical physics and chemistry.
Representation of groups
Saying that a group G acts on a set X means that every element of G defines a bijective map on the set X in a way compatible with the group structure. When X has more structure, it is useful to restrict this notion further: a representation of G on a vector space V is a group homomorphism
$$\rho : G \to \mathrm{GL}(V),$$
where GL(V) consists of the invertible linear transformations of V. In other words, to every group element g is assigned an automorphism ρ(g) such that $\rho(g) \circ \rho(h) = \rho(gh)$ for any h in G.
This definition can be understood in two directions, both of which give rise to whole new domains of mathematics. On the one hand, it may yield new information about the group G: often, the group operation in G is abstractly given, but via ρ, it corresponds to the multiplication of matrices, which is very explicit. On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, if G is finite, it is known that V above decomposes into irreducible parts (see Maschke's theorem). These parts, in turn, are much more easily manageable than the whole V (via Schur's lemma).
Given a group G, representation theory then asks what representations of G exist. There are several settings, and the employed methods and obtained results are rather different in every case: representation theory of finite groups and representations of Lie groups are two main subdomains of the theory. The totality of representations is governed by the group's characters. For example, Fourier polynomials can be interpreted as the characters of U(1), the group of complex numbers of absolute value 1, acting on the L2-space of periodic functions.
Lie theory
A Lie group is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse, page 3.
Lie groups represent the best-developed theory of continuous symmetry of mathematical objects and structures, which makes them indispensable tools for many parts of contemporary mathematics, as well as for modern theoretical physics. They provide a natural framework for analysing the continuous symmetries of differential equations (differential Galois theory), in much the same way as permutation groups are used in Galois theory for analysing the discrete symmetries of algebraic equations. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations.
Combinatorial and geometric group theory
Groups can be described in different ways. Finite groups can be described by writing down the group table consisting of all possible multiplications $g \cdot h$. A more compact way of defining a group is by generators and relations, also called the presentation of a group. Given any set F of generators $\{g_i\}_{i \in I}$, the free group generated by F surjects onto the group G. The kernel of this map is called the subgroup of relations, generated by some subset D. The presentation is usually denoted by $\langle F \mid D \rangle$. For example, the group presentation $\langle a, b \mid aba^{-1}b^{-1} \rangle$ describes a group which is isomorphic to $\mathbb{Z} \times \mathbb{Z}$. A string consisting of generator symbols and their inverses is called a word.
Combinatorial group theory studies groups from the perspective of generators and relations. It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection of graphs via their fundamental groups. A fundamental theorem of this area is that every subgroup of a free group is free.
There are several natural questions arising from giving a group by its presentation. The word problem asks whether two words are effectively the same group element. By relating the problem to Turing machines, one can show that there is in general no algorithm solving this task. Another, generally harder, algorithmically insoluble problem is the group isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example, the group with presentation is isomorphic to the additive group Z of integers, although this may not be immediately apparent. (Writing , one has )
Geometric group theory attacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on. The first idea is made precise by means of the Cayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs the word metric given by the length of the minimal path between the elements. A theorem of Milnor and Svarc then says that given a group G acting in a reasonable manner on a metric space X, for example a compact manifold, then G is quasi-isometric (i.e. looks similar from a distance) to the space X.
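The word metric can be made concrete with a small breadth-first search on the Cayley graph; in the sketch below the group (the cyclic group Z/12), the generator set, and the function names are all illustrative choices so that the search terminates.

```python
from collections import deque

def word_metric(target, generators, identity, multiply):
    """Length of a shortest word in the given generators expressing `target`,
    computed by breadth-first search on the Cayley graph (vertices are group
    elements, edges are right multiplication by a generator)."""
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        if g == target:
            return dist[g]
        for s in generators:
            h = multiply(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return None  # not reachable from the identity with these generators

# Example: Z/12 with generators +1 and -1 (i.e. 11); the word metric is the circular distance.
mul = lambda a, b: (a + b) % 12
print(word_metric(5, [1, 11], 0, mul))   # 5
print(word_metric(9, [1, 11], 0, mul))   # 3, since 9 = -3 mod 12
```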
Connection of groups and symmetry
Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example
If X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups.
If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (an isometry). The corresponding group is called isometry group of X.
If instead angles are preserved, one speaks of conformal maps. Conformal maps give rise to Kleinian groups, for example.
Symmetries are not restricted to geometrical objects, but include algebraic objects as well. For instance, a quadratic equation such as $x^2 - 2 = 0$ has two solutions, $\sqrt{2}$ and $-\sqrt{2}$. In this case, the group that exchanges the two roots is the Galois group belonging to the equation. Every polynomial equation in one variable has a Galois group, that is a certain permutation group on its roots.
The axioms of a group formalize the essential aspects of symmetry. Symmetries form a group: they are closed because if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry and the associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative.
Frucht's theorem says that every group is the symmetry group of some graph. So every abstract group is actually the symmetries of some explicit object.
The saying of "preserving the structure" of an object can be made precise by working in a category. Maps preserving the structure are then the morphisms, and the symmetry group is the automorphism group of the object in question.
Applications of group theory
Applications of group theory abound. Almost all structures in abstract algebra are special cases of groups. Rings, for example, can be viewed as abelian groups (corresponding to addition) together with a second operation (corresponding to multiplication). Therefore, group theoretic arguments underlie large parts of the theory of those entities.
Galois theory
Galois theory uses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). The fundamental theorem of Galois theory provides a link between algebraic field extensions and group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the corresponding Galois group. For example, S5, the symmetric group on 5 elements, is not solvable, which implies that the general quintic equation cannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such as class field theory.
Algebraic topology
Algebraic topology is another domain which prominently associates groups to the objects the theory is interested in. There, groups are used to describe certain invariants of topological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to some deformation. For example, the fundamental group "counts" how many paths in the space are essentially different. The Poincaré conjecture, proved in 2002/2003 by Grigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use of Eilenberg–MacLane spaces which are spaces with prescribed homotopy groups. Similarly algebraic K-theory relies in a way on classifying spaces of groups. Finally, the name of the torsion subgroup of an infinite group shows the legacy of topology in group theory.
Algebraic geometry
Algebraic geometry likewise uses group theory in many ways. Abelian varieties have been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures. (For example the Hodge conjecture (in certain cases).) The one-dimensional case, namely elliptic curves is studied in particular detail. They are both theoretically and practically intriguing. In another direction, toric varieties are algebraic varieties acted on by a torus. Toroidal embeddings have recently led to advances in algebraic geometry, in particular resolution of singularities.
Algebraic number theory
Algebraic number theory makes use of groups for some important applications. For example, Euler's product formula,
$$\sum_{n \ge 1} \frac{1}{n^s} = \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}},$$
captures the fact that any integer decomposes in a unique way into primes. The failure of this statement for more general rings gives rise to class groups and regular primes, which feature in Kummer's treatment of Fermat's Last Theorem.
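A quick numerical check of the product formula above for s = 2, where both sides approach π²/6 ≈ 1.6449; the sieve helper and the truncation bound are arbitrary illustrative choices.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

s, N = 2.0, 100_000
zeta_sum = sum(1 / n ** s for n in range(1, N + 1))   # truncated sum over integers
euler_product = 1.0
for p in primes_up_to(N):                             # truncated product over primes
    euler_product *= 1 / (1 - p ** -s)

print(zeta_sum, euler_product)   # both close to pi^2 / 6 = 1.6449...
```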
Harmonic analysis
Analysis on Lie groups and certain other groups is called harmonic analysis. Haar measures, that is, integrals invariant under the translation in a Lie group, are used for pattern recognition and other image processing techniques.
Combinatorics
In combinatorics, the notion of permutation group and the concept of group action are often used to simplify the counting of a set of objects; see in particular Burnside's lemma.
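As a small worked instance of Burnside's lemma, here is a sketch (illustrative function name) counting necklaces, i.e. colorings of a cycle of beads up to rotation, by averaging over the rotation group the number of colorings each rotation fixes.

```python
from math import gcd

def necklaces(n_beads, n_colors):
    """Colorings of a cycle of n beads with k colors, counted up to rotation.
    By Burnside's lemma this is the average, over the n rotations, of the number
    of colorings each rotation fixes, namely k ** gcd(n, r)."""
    return sum(n_colors ** gcd(n_beads, r) for r in range(n_beads)) // n_beads

print(necklaces(4, 2))   # 6 distinct 2-colored necklaces of 4 beads
print(necklaces(6, 3))   # 130
```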
Music
The presence of the 12-periodicity in the circle of fifths yields applications of elementary group theory in musical set theory. Transformational theory models musical transformations as elements of a mathematical group.
Physics
In physics, groups are important because they describe the symmetries which the laws of physics seem to obey. According to Noether's theorem, every continuous symmetry of a physical system corresponds to a conservation law of the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include the Standard Model, gauge theory, the Lorentz group, and the Poincaré group.
Group theory can be used to resolve the incompleteness of the statistical interpretations of mechanics developed by Willard Gibbs, relating to the summing of an infinite number of probabilities to yield a meaningful solution.
Chemistry and materials science
In chemistry and materials science, point groups are used to classify regular polyhedra, and the symmetries of molecules, and space groups to classify crystal structures. The assigned groups can then be used to determine physical properties (such as chemical polarity and chirality), spectroscopic properties (particularly useful for Raman spectroscopy, infrared spectroscopy, circular dichroism spectroscopy, magnetic circular dichroism spectroscopy, UV/Vis spectroscopy, and fluorescence spectroscopy), and to construct molecular orbitals.
Molecular symmetry is responsible for many physical and spectroscopic properties of compounds and provides relevant information about how chemical reactions occur. In order to assign a point group for any given molecule, it is necessary to find the set of symmetry operations present on it. The symmetry operation is an action, such as a rotation around an axis or a reflection through a mirror plane. In other words, it is an operation that moves the molecule such that it is indistinguishable from the original configuration. In group theory, the rotation axes and mirror planes are called "symmetry elements". These elements can be a point, line or plane with respect to which the symmetry operation is carried out. The symmetry operations of a molecule determine the specific point group for this molecule.
In chemistry, there are five important symmetry operations. They are identity operation (E), rotation operation or proper rotation (Cn), reflection operation (σ), inversion (i) and rotation reflection operation or improper rotation (Sn). The identity operation (E) consists of leaving the molecule as it is. This is equivalent to any number of full rotations around any axis. This is a symmetry of all molecules, whereas the symmetry group of a chiral molecule consists of only the identity operation. An identity operation is a characteristic of every molecule even if it has no symmetry. Rotation around an axis (Cn) consists of rotating the molecule around a specific axis by a specific angle. It is rotation through the angle 360°/n, where n is an integer, about a rotation axis. For example, if a water molecule rotates 180° around the axis that passes through the oxygen atom and between the hydrogen atoms, it is in the same configuration as it started. In this case the operation is C2, since applying it twice produces the identity operation. In molecules with more than one rotation axis, the Cn axis having the largest value of n is the highest order rotation axis or principal axis. For example, in boron trifluoride (BF3), the highest order of rotation axis is C3, so the principal axis of rotation is C3.
In the reflection operation (σ) many molecules have mirror planes, although they may not be obvious. The reflection operation exchanges left and right, as if each point had moved perpendicularly through the plane to a position exactly as far from the plane as when it started. When the plane is perpendicular to the principal axis of rotation, it is called σh (horizontal). Other planes, which contain the principal axis of rotation, are labeled vertical (σv) or dihedral (σd).
Inversion (i) is a more complex operation. Each point moves through the center of the molecule to a position opposite the original position and as far from the central point as where it started. Many molecules that seem at first glance to have an inversion center do not; for example, methane and other tetrahedral molecules lack inversion symmetry. To see this, hold a methane model with two hydrogen atoms in the vertical plane on the right and two hydrogen atoms in the horizontal plane on the left. Inversion results in two hydrogen atoms in the horizontal plane on the right and two hydrogen atoms in the vertical plane on the left. Inversion is therefore not a symmetry operation of methane, because the orientation of the molecule following the inversion operation differs from the original orientation. The last operation, improper rotation or rotation reflection (Sn), requires rotation of 360°/n, followed by reflection through a plane perpendicular to the axis of rotation.
Cryptography
Very large groups of prime order constructed in elliptic curve cryptography serve for public-key cryptography. Cryptographical methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which make the discrete logarithm very hard to calculate. One of the earliest encryption protocols, Caesar's cipher, may also be interpreted as a (very easy) group operation. Most cryptographic schemes use groups in some way. In particular Diffie–Hellman key exchange uses finite cyclic groups. So the term group-based cryptography refers mostly to cryptographic protocols that use infinite non-abelian groups such as a braid group.
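A toy sketch of Diffie–Hellman key exchange in the multiplicative group of integers modulo a small prime; every number here is purely illustrative and far too small for real use, where much larger groups (or elliptic-curve groups) are required.

```python
# Public parameters: a small prime modulus p and a generator g of a cyclic subgroup.
p, g = 23, 5

a_secret, b_secret = 6, 15       # private exponents chosen by the two parties
A = pow(g, a_secret, p)          # first party publishes g^a mod p
B = pow(g, b_secret, p)          # second party publishes g^b mod p

shared_a = pow(B, a_secret, p)   # (g^b)^a mod p
shared_b = pow(A, b_secret, p)   # (g^a)^b mod p
assert shared_a == shared_b      # both sides obtain g^(a*b) mod p
print(shared_a)                  # 2

# Security rests on the difficulty of recovering a from g^a mod p (the discrete logarithm),
# which is only hard when the group is very large.
```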
See also
List of group theory topics
Examples of groups
Bass-Serre theory
Notes
References
Shows the advantage of generalising from group to groupoid.
An introductory undergraduate text in the spirit of texts by Gallian or Herstein, covering groups, rings, integral domains, fields and Galois theory. Free downloadable PDF with open-source GFDL license.
Conveys the practical value of group theory by explaining how it points to symmetries in physics and other sciences.
Ronan M., 2006. Symmetry and the Monster. Oxford University Press. . For lay readers. Describes the quest to find the basic building blocks for finite groups.
A standard contemporary reference.
Inexpensive and fairly readable, but somewhat dated in emphasis, style, and notation.
External links
History of the abstract group concept
Higher dimensional group theory This presents a view of group theory as level one of a theory that extends in all dimensions, and has applications in homotopy theory and to higher dimensional nonabelian methods for local-to-global problems.
Plus teacher and student package: Group Theory This package brings together all the articles on group theory from Plus, the online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge, exploring applications and recent breakthroughs, and giving explicit definitions and examples of groups.
This is a detailed exposition of contemporaneous understanding of Group Theory by an early researcher in the field.
| Group theory | [
"Mathematics"
] | 5,519 | [
"Group theory",
"Fields of abstract algebra"
] |
41,891 | https://en.wikipedia.org/wiki/Stable%20nuclide | Stable nuclides are isotopes of a chemical element whose nucleons are in a configuration that does not permit them the surplus energy required to produce a radioactive emission. The nuclei of such isotopes are not radioactive and unlike radionuclides do not spontaneously undergo radioactive decay. When these nuclides are referred to in relation to specific elements they are usually called that element's stable isotopes.
The 80 elements with one or more stable isotopes comprise a total of 251 nuclides that have not been shown to decay using current equipment. Of these 80 elements, 26 have only one stable isotope and are called monoisotopic. The other 56 have more than one stable isotope. Tin has ten stable isotopes, the largest number of any element.
Definition of stability, and naturally occurring nuclides
Most naturally occurring nuclides are stable (about 251; see list at the end of this article), and about 35 more (total of 286) are known to be radioactive with long enough half-lives (also known) to occur primordially. If the half-life of a nuclide is comparable to, or greater than, the Earth's age (4.5 billion years), a significant amount will have survived since the formation of the Solar System, and then is said to be primordial. It will then contribute in that way to the natural isotopic composition of a chemical element. Primordial radioisotopes are easily detected with half-lives as short as 700 million years (e.g., 235U). This is the present limit of detection, as shorter-lived nuclides have not yet been detected undisputedly in nature except when recently produced, such as decay products or cosmic ray spallation.
Many naturally occurring radioisotopes (another 53 or so, for a total of about 339) exhibit still shorter half-lives than 700 million years, but they are made freshly, as daughter products of decay processes of primordial nuclides (for example, radium from uranium), or from ongoing energetic reactions, such as cosmogenic nuclides produced by present bombardment of Earth by cosmic rays (for example, 14C made from nitrogen).
Some isotopes that are classed as stable (i.e. no radioactivity has been observed for them) are predicted to have extremely long half-lives (sometimes 10^18 years or more). If the predicted half-life falls into an experimentally accessible range, such isotopes have a chance to move from the list of stable nuclides to the radioactive category, once their activity is observed. For example, 209Bi and 180W were formerly classed as stable, but were found to be alpha-active in 2003. However, such nuclides do not change their status as primordial when they are found to be radioactive.
Most stable isotopes on Earth are believed to have been formed in processes of nucleosynthesis, either in the Big Bang, or in generations of stars that preceded the formation of the Solar System. However, some stable isotopes also show abundance variations in the earth as a result of decay from long-lived radioactive nuclides. These decay-products are termed radiogenic isotopes, in order to distinguish them from the much larger group of 'non-radiogenic' isotopes.
Isotopes per element
Of the known chemical elements, 80 elements have at least one stable nuclide. These comprise the first 82 elements from hydrogen to lead, with the two exceptions, technetium (element 43) and promethium (element 61), that do not have any stable nuclides. As of 2024, there are a total of 251 known "stable" nuclides. In this definition, "stable" means a nuclide that has never been observed to decay against the natural background. Thus, these nuclides have half-lives too long to be measured by any means, direct or indirect.
Stable isotopes:
1 element (tin) has 10 stable isotopes
5 elements have 7 stable isotopes apiece
7 elements have 6 stable isotopes apiece
11 elements have 5 stable isotopes apiece
9 elements have 4 stable isotopes apiece
5 elements have 3 stable isotopes apiece
16 elements have 2 stable isotopes apiece
26 elements have 1 single stable isotope.
These last 26 are thus called monoisotopic elements. The mean number of stable isotopes for elements which have at least one stable isotope is 251/80 = 3.1375.
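A quick arithmetic check of the tally above (the dictionary layout is just for illustration):

```python
# Map: number of stable isotopes -> number of elements with that many isotopes.
counts = {10: 1, 7: 5, 6: 7, 5: 11, 4: 9, 3: 5, 2: 16, 1: 26}

total_nuclides = sum(iso * n_elem for iso, n_elem in counts.items())
total_elements = sum(counts.values())
print(total_nuclides, total_elements, total_nuclides / total_elements)   # 251 80 3.1375
```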
Physical magic numbers and odd and even proton and neutron count
Stability of isotopes is affected by the ratio of protons to neutrons, and also by presence of certain magic numbers of neutrons or protons which represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confers unusual stability on the nuclide. As in the case of tin, a magic number for Z, the atomic number, tends to increase the number of stable isotopes for the element.
Just as in the case of electrons, which have the lowest energy state when they occur in pairs in a given orbital, nucleons (both protons and neutrons) exhibit a lower energy state when their number is even, rather than odd. This stability tends to prevent beta decay (in two steps) of many even–even nuclides into another even–even nuclide of the same mass number but lower energy (and of course with two more protons and two fewer neutrons), because decay proceeding one step at a time would have to pass through an odd–odd nuclide of higher energy. Such nuclei thus instead undergo double beta decay (or are theorized to do so) with half-lives several orders of magnitude larger than the age of the universe. This makes for a larger number of stable even–even nuclides, which account for 150 of the 251 total. Stable even–even nuclides number as many as three isobars for some mass numbers, and up to seven isotopes for some atomic numbers.
Conversely, of the 251 known stable nuclides, only five have both an odd number of protons and odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10, nitrogen-14, and tantalum-180m. Also, only four naturally occurring, radioactive odd–odd nuclides have a half-life of more than a billion years: potassium-40, vanadium-50, lanthanum-138, and lutetium-176. Odd–odd primordial nuclides are rare because most odd–odd nuclei beta-decay, because the decay products are even–even, and are therefore more strongly bound, due to nuclear pairing effects.
Yet another effect of the instability of an odd number of either type of nucleon is that odd-numbered elements tend to have fewer stable isotopes. Of the 26 monoisotopic elements (those with only one stable isotope), all but one have an odd atomic number, and all but one have an even number of neutrons: the single exception to both rules is beryllium.
The end of the stable elements occurs after lead, largely because nuclei with 128 neutrons—two neutrons above the magic number 126—are extraordinarily unstable and almost immediately alpha-decay. This contributes to the very short half-lives of astatine, radon, and francium. A similar phenomenon occurs to a much lesser extent with 84 neutrons—two neutrons above the magic number 82—where various isotopes of lanthanide elements alpha-decay.
Nuclear isomers, including a "stable" one
The 251 known stable nuclides include tantalum-180m, since even though its decay is automatically implied by its being "metastable", this has not been observed. All "stable" isotopes (stable by observation, not theory) are the ground states of nuclei, except for tantalum-180m, which is a nuclear isomer or excited state. The ground state, tantalum-180, is radioactive with a half-life of 8 hours; in contrast, the decay of the nuclear isomer is extremely strongly forbidden by spin-parity selection rules. It has been reported by direct observation that the half-life of tantalum-180m to gamma decay must be more than 10^15 years. Other possible modes of tantalum-180m decay (beta decay, electron capture, and alpha decay) have also never been observed.
Still-unobserved decay
It is expected that improvement of experimental sensitivity will allow discovery of very mild radioactivity of some isotopes now considered stable. For example, in 2003 it was reported that bismuth-209 (the only primordial isotope of bismuth) is very mildly radioactive, with a half-life of (1.9 ± 0.2) × 10^19 yr, confirming earlier theoretical predictions from nuclear physics that bismuth-209 would very slowly alpha decay.
Isotopes that are theoretically believed to be unstable but have not been observed to decay are termed observationally stable. Currently there are 105 "stable" isotopes which are theoretically unstable, 40 of which have been observed in detail with no sign of decay, the lightest in any case being argon-36. Many "stable" nuclides are "metastable" in that they would release energy if they were to decay, and are expected to undergo very rare kinds of radioactive decay, including double beta decay.
146 nuclides from 62 elements with atomic numbers from 1 (hydrogen) through 66 (dysprosium) except 43 (technetium), 61 (promethium), 62 (samarium), and 63 (europium) are theoretically stable to any kind of nuclear decay — except for the theoretical possibility of proton decay, which has never been observed despite extensive searches for it; and spontaneous fission (SF), which is theoretically possible for the nuclides with atomic mass numbers ≥ 93.
Besides SF, other theoretical decay routes for heavier elements include:
alpha decay – 70 heavy nuclides (the lightest two are cerium-142 and neodymium-143)
double beta decay – 55 nuclides
beta decay – tantalum-180m
electron capture – tellurium-123, tantalum-180m
double electron capture
isomeric transition – tantalum-180m
These include all nuclides of mass 165 and greater. Argon-36 is the lightest known "stable" nuclide which is theoretically unstable.
The positivity of energy release in these processes means they are allowed kinematically (they do not violate conservation of energy) and, thus, in principle, can occur. They are not observed due to strong but not absolute suppression, by spin-parity selection rules (for beta decays and isomeric transitions) or by the thickness of the potential barrier (for alpha and cluster decays and spontaneous fission).
Summary table for numbers of each class of nuclides
This is a summary table from List of nuclides. Numbers are not exact and may change slightly in the future, as nuclides are observed to be radioactive, or new half-lives are determined to some precision.
List of stable nuclides
The primordial radionuclides are included for comparison; they are italicized and offset from the list of stable nuclides proper.
Abbreviations for predicted unobserved decay:
α for alpha decay, B for beta decay, 2B for double beta decay, E for electron capture, 2E for double electron capture, IT for isomeric transition, SF for spontaneous fission, * for the nuclides whose half-lives have a lower bound. Double beta decay has only been listed when beta decay is not also possible.
^ Tantalum-180m is a "metastable isotope", meaning it is an excited nuclear isomer of tantalum-180. See isotopes of tantalum. However, the half-life of this nuclear isomer is so long that it has never been observed to decay, and it thus is an "observationally stable" primordial nuclide, a rare isotope of tantalum. This is the only nuclear isomer with a half-life so long that it has never been observed to decay. It is thus included in this list.
^^ Bismuth-209 was long believed to be stable, due to its half-life of 2.01×10^19 years, which is more than a billion times the age of the universe.
§ Europium-151 and samarium-147 are primordial nuclides with very long half-lives of 4.62×10^18 years and 1.066×10^11 years, respectively.
See also
Isotope geochemistry
List of elements by stability of isotopes
List of nuclides (991 nuclides in order of stability, all with half-lives over one hour)
Mononuclidic element
Periodic table
Primordial nuclide
Radionuclide
Stable isotope ratio
Table of nuclides
Valley of stability
References
Book references
External links
The LIVEChart of Nuclides – IAEA
AlphaDelta: Stable Isotope fractionation calculator
National Isotope Development Center Reference information on isotopes, and coordination and management of isotope production, availability, and distribution
Isotope Development & Production for Research and Applications (IDPRA) U.S. Department of Energy program for isotope production and production research and development
Isosciences Use and development of stable isotope labels in synthetic and biological molecules
Stable
de:Isotop#Stabile Isotope
sv:Stabil isotop | Stable nuclide | [
"Physics",
"Chemistry"
] | 2,809 | [
"Isotopes",
"Nuclear physics"
] |
41,926 | https://en.wikipedia.org/wiki/Nearest%20neighbour%20algorithm | The nearest neighbour algorithm was one of the first algorithms used to solve the travelling salesman problem approximately. In that problem, the salesman starts at a random city and repeatedly visits the nearest city until all have been visited. The algorithm quickly yields a short tour, but usually not the optimal one.
Algorithm
These are the steps of the algorithm:
Initialize all vertices as unvisited.
Select an arbitrary vertex, set it as the current vertex u. Mark u as visited.
Find the shortest edge connecting the current vertex u and an unvisited vertex v.
Set v as the current vertex u. Mark v as visited.
If all the vertices in the domain are visited, then terminate. Else, go to step 3.
The sequence of the visited vertices is the output of the algorithm.
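A minimal Python sketch of these steps follows, assuming the instance is given as a symmetric distance matrix; the function name, starting index, and sample distances are illustrative choices, not part of any standard formulation.

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy nearest-neighbour heuristic for the TSP.

    dist  -- square symmetric matrix, dist[i][j] = distance between cities i and j
    start -- index of the starting city (step 2 of the algorithm)
    Returns the sequence of visited cities (the algorithm's output).
    """
    n = len(dist)
    unvisited = set(range(n)) - {start}   # step 1: all vertices unvisited except the start
    tour = [start]
    current = start
    while unvisited:                      # step 5: repeat until every vertex is visited
        # step 3: shortest edge from the current vertex to an unvisited vertex
        nearest = min(unvisited, key=lambda v: dist[current][v])
        unvisited.remove(nearest)         # step 4: mark it visited and move there
        tour.append(nearest)
        current = nearest
    return tour

# Example with four cities; the tour is greedy, not necessarily optimal.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
print(nearest_neighbour_tour(d))  # [0, 1, 3, 2]
```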
The nearest neighbour algorithm is easy to implement and executes quickly, but it can sometimes miss shorter routes which are easily noticed with human insight, due to its "greedy" nature. As a general guide, if the last few stages of the tour are comparable in length to the first stages, then the tour is reasonable; if they are much greater, then it is likely that much better tours exist. Another check is to use an algorithm such as the lower bound algorithm to estimate if this tour is good enough.
In the worst case, the algorithm results in a tour that is much longer than the optimal tour. To be precise, for every constant r there is an instance of the traveling salesman problem such that the length of the tour computed by the nearest neighbour algorithm is greater than r times the length of the optimal tour. Moreover, for each number of cities there is an assignment of distances between the cities for which the nearest neighbour heuristic produces the unique worst possible tour. (If the algorithm is applied on every vertex as the starting vertex, the best path found will be better than at least N/2-1 other tours, where N is the number of vertices.)
The nearest neighbour algorithm may not find a feasible tour at all, even when one exists.
Notes
References
G. Gutin, A. Yeo and A. Zverovitch, Exponential Neighborhoods and Domination Analysis for the TSP, in The Traveling Salesman Problem and Its Variations, G. Gutin and A.P. Punnen (eds.), Kluwer (2002) and Springer (2007).
G. Gutin, A. Yeo and A. Zverovich, Traveling salesman should not be greedy: domination analysis of greedy-type heuristics for the TSP. Discrete Applied Mathematics 117 (2002), 81–86.
J. Bang-Jensen, G. Gutin and A. Yeo, When the greedy algorithm fails. Discrete Optimization 1 (2004), 121–127.
G. Bendall and F. Margot, Greedy Type Resistance of Combinatorial Problems, Discrete Optimization 3 (2006), 288–298.
Travelling salesman problem
Approximation algorithms
Heuristic algorithms
Graph algorithms | Nearest neighbour algorithm | [
"Mathematics"
] | 602 | [
"Mathematical relations",
"Approximations",
"Approximation algorithms"
] |
41,927 | https://en.wikipedia.org/wiki/Signal%20generator | A signal generator is one of a class of electronic devices that generates electrical signals with set properties of amplitude, frequency, and wave shape. These generated signals are used as a stimulus for electronic measurements, typically used in designing, testing, troubleshooting, and repairing electronic or electroacoustic devices, though it often has artistic uses as well.
There are many different types of signal generators with different purposes and applications and at varying levels of expense. These types include function generators, RF and microwave signal generators, pitch generators, arbitrary waveform generators, digital pattern generators, and frequency generators. In general, no device is suitable for all possible applications.
A signal generator may be as simple as an oscillator with calibrated frequency and amplitude. More general-purpose signal generators allow control of all the characteristics of a signal. Modern general-purpose signal generators will have a microprocessor control and may also permit control from a personal computer. Signal generators may be free-standing self-contained instruments, or may be incorporated into more complex automatic test systems.
History
In June 1928, the General Radio 403 was the first commercial signal generator ever marketed. It supported a frequency range of 500 Hz to 1.5 MHz. Also, in April 1929, the first commercial frequency standard was marketed by General Radio with a frequency of 50 kHz.
General-purpose signal generators
Function generator
A function generator is a device which produces simple repetitive waveforms. Such devices contain an electronic oscillator, a circuit that is capable of creating a repetitive waveform. (Modern devices may use digital signal processing to synthesize waveforms, followed by a digital-to-analog converter, or DAC, to produce an analog output.) The most common waveform is a sine wave, but sawtooth, step (pulse), square, and triangular waveform oscillators are commonly available as are arbitrary waveform generators (AWGs). If the oscillator operates above the human hearing range (>20 kHz), the generator will often include some sort of modulation function such as amplitude modulation (AM), frequency modulation (FM), or phase modulation (PM) as well as a second oscillator that provides an audio frequency modulation waveform.
Arbitrary waveform generator
An arbitrary waveform generator (AWG or ARB) is a sophisticated signal generator that generates arbitrary waveforms within published limits of frequency range, accuracy, and output level. Unlike a function generator that produces a small set of specific waveforms, an AWG allows the user to specify a source waveform in a variety of different ways. An AWG is generally more expensive than a function generator and often has less bandwidth. An AWG is used in higher-end design and test applications.
RF and microwave signal generators
RF (radio frequency) and microwave signal generators are used for testing components, receivers and test systems in a wide variety of applications including cellular communications, WiFi, WiMAX, GPS, audio and video broadcasting, satellite communications, radar and electronic warfare. RF and microwave signal generators normally have similar features and capabilities, but are differentiated by frequency range. RF signal generators typically range from a few kHz to 6 GHz, while microwave signal generators cover a much wider frequency range, from less than 1 MHz to at least 20 GHz. Some models go as high as 70 GHz with a direct coaxial output, and up to hundreds of GHz when used with external waveguide multiplier modules. RF and microwave signal generators can be classified further as analog or vector signal generators.
Analog signal generators
Analog signal generators based on a sine-wave oscillator were common before the inception of digital electronics, and are still used. There was a sharp distinction in purpose and design of radio-frequency and audio-frequency signal generators.
RF
RF signal generators produce continuous wave radio frequency signals of defined, adjustable, amplitude and frequency. Many models offer various types of analog modulation, either as standard equipment or as an optional capability to the base unit. This could include AM, FM, ΦM (phase modulation) and pulse modulation. A common feature is an attenuator to vary the signal’s output power. Depending on the manufacturer and model, output powers can range from −135 to +30 dBm. A wide range of output power is desirable, since different applications require different amounts of signal power. For example, if a signal has to travel through a very long cable out to an antenna, a high output signal may be needed to overcome the losses through the cable and still have sufficient power at the antenna. But when testing receiver sensitivity, a low signal level is required to see how the receiver behaves under low signal-to-noise conditions.
RF signal generators are available as benchtop instruments, rackmount instruments, embeddable modules and in card-level formats. Mobile, field-testing and airborne applications benefit from lighter, battery-operated platforms. In automated and production testing, web-browser access, which allows multi-source control, and faster frequency switching speeds improve test times and throughput.
RF signal generators are required for servicing and setting up radio receivers, and are used for professional RF applications.
RF signal generators are characterized by their frequency bands, power capabilities (−100 to +25 dBm), single side band phase noise at various carrier frequencies, spurs and harmonics, frequency and amplitude switching speeds and modulation capabilities.
AF
Audio-frequency signal generators generate signals in the audio-frequency range and above. An early example was the HP200A audio oscillator, the first product sold by the Hewlett-Packard Company in 1939. Applications include checking frequency response of audio equipment, and many uses in the electronic laboratory.
Equipment distortion can be measured using a very-low-distortion audio generator as the signal source, with appropriate equipment to measure output distortion harmonic-by-harmonic with a wave analyser, or simply total harmonic distortion. A distortion of 0.0001% can be achieved by an audio signal generator with a relatively simple circuit.
Vector signal generator
With the advent of digital communications systems, it is no longer possible to adequately test these systems with traditional analog signal generators. This has led to the development of the vector signal generator, which is also known as a digital signal generator. These signal generators are capable of generating digitally-modulated radio signals that may use any of a large number of digital modulation formats such as QAM, QPSK, FSK, BPSK, and OFDM. In addition, since modern commercial digital communication systems are almost all based on well-defined industry standards, many vector signal generators can generate signals based on these standards. Examples include GSM, W-CDMA (UMTS), CDMA2000, LTE, Wi-Fi (IEEE 802.11), and WiMAX (IEEE 802.16). In contrast, military communication systems such as JTRS, which place a great deal of importance on robustness and information security, typically use very proprietary methods. To test these types of communication systems, users will often create their own custom waveforms and download them into the vector signal generator to create the desired test signal.
Digital pattern generator
A logic signal generator or data pattern generator or digital pattern generator produces logic signals—that is, logical 1s and 0s in the form of conventional voltage levels. The usual voltage standards are LVTTL and LVCMOS.
It is different from a "pulse/pattern generator", which refers to signal generators able to generate logic pulses with different analog characteristics (such as pulse rise/fall time, high level length, ...).
A digital pattern generator is used as a stimulus source for digital integrated circuits and embedded systems, for functional validation and testing.
Special purpose signal generators
In addition to the above general-purpose devices, there are several classes of signal generators designed for specific applications.
Pitch generators and audio generators
A pitch generator is a type of signal generator optimized for use in audio and acoustics applications. Pitch generators typically include sine waves over the human hearing range (20 Hz to 20 kHz). Sophisticated pitch generators will also include sweep generators (a function which varies the output frequency over a range, in order to make frequency-domain measurements), multipitch generators (which output several pitches simultaneously, and are used to check for intermodulation distortion and other non-linear effects), and tone bursts (used to measure response to transients). Pitch generators are typically used in conjunction with sound level meters, when measuring the acoustics of a room or a sound reproduction system, and/or with oscilloscopes or specialized audio analyzers.
Many pitch generators operate in the digital domain, producing output in various digital audio formats such as AES3, or SPDIF. Such generators may include special signals to stimulate various digital effects and problems, such as clipping, jitter, bit errors; they also often provide ways to manipulate the metadata associated with digital audio formats.
The term synthesizer is used for a device that generates audio signals for music, or that uses slightly more intricate methods.
Computer programs
Computer programs can be used to generate arbitrary waveforms on a general-purpose computer and output the waveform via an output interface. Such programs may be provided commercially or be freeware. Simple systems use a standard computer sound card as output device, limiting the accuracy of the output waveform and limiting frequency to lie within the audio-frequency band.
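As a rough illustration of such a program, the sketch below uses NumPy to synthesize a few common test waveforms at a sample rate typical of a sound card; the 440 Hz tone, one-second duration, and 44.1 kHz rate are assumed values for the example only.

```python
import numpy as np

sample_rate = 44100          # Hz, a common sound-card rate (assumed)
duration = 1.0               # seconds (assumed)
freq = 440.0                 # Hz, an audio-band test tone (assumed)

t = np.arange(int(sample_rate * duration)) / sample_rate
sine = np.sin(2 * np.pi * freq * t)                     # sine wave
square = np.sign(sine)                                  # square wave at the same frequency
sawtooth = 2 * (t * freq - np.floor(0.5 + t * freq))    # sawtooth wave in [-1, 1)

# The arrays could then be scaled, quantized and sent to an output interface,
# for example written to a WAV file or played through the sound card.
```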
Video signal generator
A video signal generator is a device which outputs predetermined video and/or television waveforms, and other signals used to stimulate faults in, or aid in parametric measurements of, television and video systems. There are several different types of video signal generators in widespread use. Regardless of the specific type, the output of a video generator will generally contain synchronization signals appropriate for television, including horizontal and vertical sync pulses (in analog) or sync words (in digital). Generators of composite video signals (such as NTSC and PAL) will also include a colorburst signal as part of the output. Video signal generators are available for a wide variety of applications and for a wide variety of digital formats; many of these also include audio generation capability (as the audio track is an important part of any video or television program or motion picture).
See also
AN/URM-25D signal generator, 1950s hardware still in use today.
Digital pattern generator, for generating digital (logic) type of signals
Inductive amplifier, used to find an individual telephone cable pairs
References
External links
Function Generator & Arbitrary Waveform Generator Guidebook
Understanding signal generator specifications
Types of signal generators
Signal generator
Laboratory equipment
Electronic test equipment | Signal generator | [
"Technology",
"Engineering"
] | 2,168 | [
"Electronic test equipment",
"Measuring instruments"
] |
41,928 | https://en.wikipedia.org/wiki/Klein%20four-group | In mathematics, the Klein four-group is an abelian group with four elements, in which each element is self-inverse (composing it with itself produces the identity) and in which composing any two of the three non-identity elements produces the third one. It can be described as the symmetry group of a non-square rectangle (with the three non-identity elements being horizontal reflection, vertical reflection and 180-degree rotation), as the group of bitwise exclusive-or operations on two-bit binary values, or more abstractly as Z2 × Z2, the direct product of two copies of the cyclic group of order 2 by the Fundamental Theorem of Finitely Generated Abelian Groups. It was named Vierergruppe (German for "four-group") by Felix Klein in 1884. It is also called the Klein group, and is often symbolized by the letter V or as K4.
The Klein four-group, with four elements, is the smallest group that is not cyclic. Up to isomorphism, there is only one other group of order four: the cyclic group of order 4. Both groups are abelian.
Presentations
The Klein group's Cayley table is given by:
∗ | e a b c
e | e a b c
a | a e c b
b | b c e a
c | c b a e
The Klein four-group is also defined by the group presentation
V = ⟨a, b ∣ a² = b² = (ab)² = e⟩.
All non-identity elements of the Klein group have order 2, so any two non-identity elements can serve as generators in the above presentation. The Klein four-group is the smallest non-cyclic group. It is, however, an abelian group, and isomorphic to the dihedral group of order (cardinality) 4, symbolized D4 (or D2, using the geometric convention); other than the group of order 2, it is the only dihedral group that is abelian.
The Klein four-group is also isomorphic to the direct sum Z/2Z ⊕ Z/2Z, so that it can be represented as the pairs {(0,0), (0,1), (1,0), (1,1)} under component-wise addition modulo 2 (or equivalently the bit strings {00, 01, 10, 11} under bitwise XOR), with (0,0) being the group's identity element. The Klein four-group is thus an example of an elementary abelian 2-group, which is also called a Boolean group. The Klein four-group is thus also the group generated by the symmetric difference as the binary operation on the subsets of a powerset of a set with two elements—that is, over a field of sets with four elements, such as {∅, {α}, {β}, {α, β}}; the empty set is the group's identity element in this case.
Another numerical construction of the Klein four-group is the set {1, 3, 5, 7}, with the operation being multiplication modulo 8. Here a is 3, b is 5, and c = ab is 3 × 5 = 15 ≡ 7 (mod 8).
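The two numerical constructions above can be checked mechanically; the short Python sketch below prints the Cayley table of the XOR construction and verifies that the map 0 ↦ 1, 1 ↦ 3, 2 ↦ 5, 3 ↦ 7 (an illustrative choice of correspondence) is an isomorphism onto the multiplication-modulo-8 construction.

```python
from itertools import product

xor_elems = [0, 1, 2, 3]          # two-bit values under bitwise XOR
mul_elems = [1, 3, 5, 7]          # units modulo 8 under multiplication

# Cayley table of the XOR construction
for a in xor_elems:
    print([a ^ b for b in xor_elems])

# The map 0->1, 1->3, 2->5, 3->7 is an isomorphism onto ({1,3,5,7}, * mod 8)
f = dict(zip(xor_elems, mul_elems))
assert all(f[a ^ b] == (f[a] * f[b]) % 8 for a, b in product(xor_elems, repeat=2))

# Every non-identity element is its own inverse, as required of the Klein four-group
assert all((x ^ x) == 0 for x in xor_elems)
assert all((y * y) % 8 == 1 for y in mul_elems)
```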
The Klein four-group also has a representation as 2 × 2 real matrices with the operation being matrix multiplication: the identity matrix together with the three diagonal matrices diag(1, −1), diag(−1, 1) and diag(−1, −1).
On a Rubik's Cube, the "4 dots" pattern can be made in three ways (for example, M2 U2 M2 U2 F2 M2 F2), depending on the pair of faces that are left blank; these three positions together with the solved position form an example of the Klein group, with the solved position serving as the identity.
Geometry
In two dimensions, the Klein four-group is the symmetry group of a rhombus and of rectangles that are not squares, the four elements being the identity, the vertical reflection, the horizontal reflection, and a 180° rotation.
In three dimensions, there are three different symmetry groups that are algebraically the Klein four-group:
one with three perpendicular 2-fold rotation axes: the dihedral group D2
one with a 2-fold rotation axis, and a perpendicular plane of reflection: C2h
one with a 2-fold rotation axis in a plane of reflection (and hence also in a perpendicular plane of reflection): C2v.
Permutation representation
The three elements of order two in the Klein four-group are interchangeable: the automorphism group of V is thus the group of permutations of these three elements, that is, the symmetric group S3.
The Klein four-group's permutations of its own elements can be thought of abstractly as its permutation representation on four points:
{(), (1,2)(3,4), (1,3)(2,4), (1,4)(2,3)}
In this representation, V is a normal subgroup of the alternating group A4 (and also the symmetric group S4) on four letters. It is also a transitive subgroup of S4 that appears as a Galois group. In fact, it is the kernel of a surjective group homomorphism from S4 to S3.
Other representations within S4 are:
{(), (1,2), (3,4), (1,2)(3,4)}
{(), (1,3), (2,4), (1,3)(2,4)}
{(), (1,4), (2,3), (1,4)(2,3)}
They are not normal subgroups of S4.
Algebra
According to Galois theory, the existence of the Klein four-group (and in particular, the permutation representation of it) explains the existence of the formula for calculating the roots of quartic equations in terms of radicals, as established by Lodovico Ferrari: the map S4 → S3 corresponds to the resolvent cubic, in terms of Lagrange resolvents.
In the construction of finite rings, eight of the eleven rings with four elements have the Klein four-group as their additive substructure.
If R* denotes the multiplicative group of non-zero reals and R+ the multiplicative group of positive reals, then R* × R* is the group of units of the ring R × R, and R+ × R+ is a subgroup of R* × R* (in fact it is the component of the identity of R* × R*). The quotient group (R* × R*) / (R+ × R+) is isomorphic to the Klein four-group. In a similar fashion, the group of units of the split-complex number ring, when divided by its identity component, also results in the Klein four-group.
Graph theory
Among the simple connected graphs, the simplest (in the sense of having the fewest entities) that admits the Klein four-group as its automorphism group is the diamond graph shown below. It is also the automorphism group of some other graphs that are simpler in the sense of having fewer entities. These include the graph with four vertices and one edge, which remains simple but loses connectivity, and the graph with two vertices connected to each other by two edges, which remains connected but loses simplicity.
Music
In music composition, the four-group is the basic group of permutations in the twelve-tone technique. In that instance, the Cayley table is written
See also
Quaternion group
List of small groups
References
Further reading
M. A. Armstrong (1988) Groups and Symmetry, Springer Verlag, page 53.
W. E. Barnes (1963) Introduction to Abstract Algebra, D.C. Heath & Co., page 20.
External links
Finite groups | Klein four-group | [
"Mathematics"
] | 1,339 | [
"Mathematical structures",
"Algebraic structures",
"Finite groups"
] |
41,951 | https://en.wikipedia.org/wiki/Post%20and%20lintel | Post and lintel (also called prop and lintel, a trabeated system, or a trilithic system) is a building system where strong horizontal elements are held up by strong vertical elements with large spaces between them. This is usually used to hold up a roof, creating a largely open space beneath, for whatever use the building is designed. The horizontal elements are called by a variety of names including lintel, header, architrave or beam, and the supporting vertical elements may be called posts, columns, or pillars. The use of wider elements at the top of the post, called capitals, to help spread the load, is common to many architectural traditions.
Lintels
In architecture, a post-and-lintel or trabeated system refers to the use of horizontal stone beams or lintels which are borne by columns or posts. The name is from the Latin trabs, beam; influenced by trabeatus, clothed in the trabea, a ritual garment.
Post-and-lintel construction is one of four ancient structural methods of building, the others being the corbel, arch-and-vault, and truss.
A noteworthy example of a trabeated system is in Volubilis, from the Roman era, where one side of the Decumanus Maximus is lined with trabeated elements, while the opposite side of the roadway is designed in arched style.
History of lintel systems
The trabeated system is a fundamental principle of Neolithic architecture, ancient Indian architecture, ancient Greek architecture and ancient Egyptian architecture. Other trabeated styles are the Persian, Lycian, Japanese, traditional Chinese, and ancient Chinese architecture, especially in northern China, and nearly all the Indian styles. The traditions are represented in North and Central America by Mayan architecture, and in South America by Inca architecture. In all or most of these traditions, certainly in Greece and India, the earliest versions developed using wood, which were later translated into stone for larger and grander buildings. Timber framing, also using trusses, remains common for smaller buildings such as houses to the modern day.
Span limitations
There are two main forces acting upon the post and lintel system: weight carrying compression at the joint between lintel and post, and tension induced by deformation of self-weight and the load above between the posts. The two posts are under compression from the weight of the lintel (or beam) above. The lintel will deform by sagging in the middle because the underside is under tension and the upper is under compression.
The biggest disadvantage to lintel construction is the limited weight that can be held up, and the resulting small distances required between the posts. Ancient Roman architecture's development of the arch allowed for much larger structures to be constructed. The arcuated system spreads larger loads more effectively, and replaced the post-and-lintel system in most larger buildings and structures, until the introduction of steel girder beams and steel-reinforced concrete in the industrial era.
As with the Roman temple portico front and its descendants in later classical architecture, trabeated features were often retained in parts of buildings as an aesthetic choice. The classical orders of Greek origin were in particular retained in buildings designed to impress, even though they usually had little or no structural role.
Lintel reinforcement
The flexural strength of a stone lintel can be dramatically increased with the use of post-tensioned stone.
See also
Architrave – structural lintel or beam resting on columns-pillars
Atalburu – Basque decorative lintel
Dolmen – Neolithic megalithic tombs with structural stone lintels
Dougong – traditional Chinese structural element
I-beam – steel lintels and beams
Marriage stone – decorative lintel
Opus caementicium
Structural design
Timber framing – post and beam systems
Stonehenge
Notes
References
Summerson, John, The Classical Language of Architecture, 1980 edition, Thames and Hudson World of Art series,
Architectural elements
Ancient Roman architectural elements
Building
Building engineering
Doors
Windows
Timber framing
Structural system | Post and lintel | [
"Technology",
"Engineering"
] | 811 | [
"Structural engineering",
"Timber framing",
"Building engineering",
"Building",
"Structural system",
"Construction",
"Architectural elements",
"Civil engineering",
"Components",
"Architecture"
] |
41,957 | https://en.wikipedia.org/wiki/Electrical%20impedance | In electrical engineering, impedance is the opposition to alternating current presented by the combined effect of resistance and reactance in a circuit.
Quantitatively, the impedance of a two-terminal circuit element is the ratio of the complex representation of the sinusoidal voltage between its terminals, to the complex representation of the current flowing through it. In general, it depends upon the frequency of the sinusoidal voltage.
Impedance extends the concept of resistance to alternating current (AC) circuits, and possesses both magnitude and phase, unlike resistance, which has only magnitude.
Impedance can be represented as a complex number, with the same units as resistance, for which the SI unit is the ohm (Ω).
Its symbol is usually Z, and it may be represented by writing its magnitude and phase in the polar form |Z|∠θ. However, Cartesian complex number representation is often more powerful for circuit analysis purposes.
The notion of impedance is useful for performing AC analysis of electrical networks, because it allows relating sinusoidal voltages and currents by a simple linear law.
In multiple port networks, the two-terminal definition of impedance is inadequate, but the complex voltages at the ports and the currents flowing through them are still linearly related by the impedance matrix.
The reciprocal of impedance is admittance, whose SI unit is the siemens, formerly called mho.
Instruments used to measure the electrical impedance are called impedance analyzers.
History
Perhaps the earliest use of complex numbers in circuit analysis was by Johann Victor Wietlisbach in 1879 in analysing the Maxwell bridge. Wietlisbach avoided using differential equations by expressing AC currents and voltages as exponential functions with imaginary exponents (see ). Wietlisbach found the required voltage was given by multiplying the current by a complex number (impedance), although he did not identify this as a general parameter in its own right.
The term impedance was coined by Oliver Heaviside in July 1886. Heaviside recognised that the "resistance operator" (impedance) in his operational calculus was a complex number. In 1887 he showed that there was an AC equivalent to Ohm's law.
Arthur Kennelly published an influential paper on impedance in 1893. Kennelly arrived at a complex number representation in a rather more direct way than using imaginary exponential functions. Kennelly followed the graphical representation of impedance (showing resistance, reactance, and impedance as the lengths of the sides of a right angle triangle) developed by John Ambrose Fleming in 1889. Impedances could thus be added vectorially. Kennelly realised that this graphical representation of impedance was directly analogous to graphical representation of complex numbers (Argand diagram). Problems in impedance calculation could thus be approached algebraically with a complex number representation. Later that same year, Kennelly's work was generalised to all AC circuits by Charles Proteus Steinmetz. Steinmetz not only represented impedances by complex numbers but also voltages and currents. Unlike Kennelly, Steinmetz was thus able to express AC equivalents of DC laws such as Ohm's and Kirchhoff's laws. Steinmetz's work was highly influential in spreading the technique amongst engineers.
Introduction
In addition to resistance as seen in DC circuits, impedance in AC circuits includes the effects of the induction of voltages in conductors by the magnetic fields (inductance), and the electrostatic storage of charge induced by voltages between conductors (capacitance). The impedance caused by these two effects is collectively referred to as reactance and forms the imaginary part of complex impedance whereas resistance forms the real part.
Complex impedance
The impedance of a two-terminal circuit element is represented as a complex quantity Z. The polar form conveniently captures both magnitude and phase characteristics as
Z = |Z|·e^(j·arg(Z)),
where the magnitude |Z| represents the ratio of the voltage difference amplitude to the current amplitude, while the argument arg(Z) (commonly given the symbol θ) gives the phase difference between voltage and current. j is the imaginary unit, and is used instead of i in this context to avoid confusion with the symbol for electric current.
In Cartesian form, impedance is defined as
Z = R + jX,
where the real part of impedance is the resistance R and the imaginary part is the reactance X.
Where it is needed to add or subtract impedances, the cartesian form is more convenient; but when quantities are multiplied or divided, the calculation becomes simpler if the polar form is used. A circuit calculation, such as finding the total impedance of two impedances in parallel, may require conversion between forms several times during the calculation. Conversion between the forms follows the normal conversion rules of complex numbers.
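As an informal illustration of switching between the two forms, the following Python sketch combines an assumed 50 Ω resistor and an assumed 100 nF capacitor in parallel at 1 kHz, using Cartesian arithmetic for the combination and the polar form for the result; the component values are illustrative only.

```python
import cmath
import math

f = 1e3                     # frequency in hertz (illustrative)
omega = 2 * math.pi * f     # angular frequency
R = 50.0                    # resistance in ohms (illustrative)
C = 100e-9                  # capacitance in farads (illustrative)

Z_R = complex(R, 0)             # resistive impedance, purely real
Z_C = 1 / (1j * omega * C)      # capacitive impedance, purely imaginary

# Parallel combination: easiest done by adding admittances in Cartesian form
Z_par = 1 / (1 / Z_R + 1 / Z_C)

# Polar form of the result: magnitude in ohms and phase in degrees
mag, phase = cmath.polar(Z_par)
print(f"|Z| = {mag:.1f} ohm, phase = {math.degrees(phase):.1f} deg")
```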
Complex voltage and current
To simplify calculations, sinusoidal voltage and current waves are commonly represented as complex-valued functions of time denoted as V(t) = |V|·e^(j(ωt + φ_V)) and I(t) = |I|·e^(j(ωt + φ_I)).
The impedance of a bipolar circuit is defined as the ratio of these quantities:
Z = V(t)/I(t) = (|V|/|I|)·e^(j(φ_V − φ_I)).
Hence, denoting Z = |Z|·e^(jθ), we have |V| = |I|·|Z| and φ_V = φ_I + θ.
The magnitude equation is the familiar Ohm's law applied to the voltage and current amplitudes, while the second equation defines the phase relationship.
Validity of complex representation
This representation using complex exponentials may be justified by noting that (by Euler's formula):
cos(ωt + φ) = (1/2)·[e^(j(ωt + φ)) + e^(−j(ωt + φ))]
The real-valued sinusoidal function representing either voltage or current may be broken into two complex-valued functions. By the principle of superposition, we may analyse the behaviour of the sinusoid on the left-hand side by analysing the behaviour of the two complex terms on the right-hand side. Given the symmetry, we only need to perform the analysis for one right-hand term. The results are identical for the other. At the end of any calculation, we may return to real-valued sinusoids by further noting that
cos(ωt + φ) = Re{e^(j(ωt + φ))}
Ohm's law
The meaning of electrical impedance can be understood by substituting it into Ohm's law. Assuming a two-terminal circuit element with impedance Z is driven by a sinusoidal voltage or current as above, there holds
V = I·Z = I·|Z|·e^(jθ).
The magnitude of the impedance |Z| acts just like resistance, giving the drop in voltage amplitude across an impedance Z for a given current I. The phase factor tells us that the current lags the voltage by a phase θ (i.e., in the time domain, the current signal is shifted θ/ω later with respect to the voltage signal).
Just as impedance extends Ohm's law to cover AC circuits, other results from DC circuit analysis, such as voltage division, current division, Thévenin's theorem and Norton's theorem, can also be extended to AC circuits by replacing resistance with impedance.
Phasors
A phasor is represented by a constant complex number, usually expressed in exponential form, representing the complex amplitude (magnitude and phase) of a sinusoidal function of time. Phasors are used by electrical engineers to simplify computations involving sinusoids (such as in AC circuits), where they can often reduce a differential equation problem to an algebraic one.
The impedance of a circuit element can be defined as the ratio of the phasor voltage across the element to the phasor current through the element, as determined by the relative amplitudes and phases of the voltage and current. This is identical to the definition from Ohm's law given above, recognising that the factors of e^(jωt) cancel.
Device examples
Resistor
The impedance of an ideal resistor is purely real and is called resistive impedance: Z_R = R.
In this case, the voltage and current waveforms are proportional and in phase.
Inductor and capacitor
Ideal inductors and capacitors have a purely imaginary reactive impedance:
the impedance of inductors increases as frequency increases: Z_L = jωL;
the impedance of capacitors decreases as frequency increases: Z_C = 1/(jωC).
In both cases, for an applied sinusoidal voltage, the resulting current is also sinusoidal, but in quadrature, 90 degrees out of phase with the voltage. However, the phases have opposite signs: in an inductor, the current is lagging; in a capacitor the current is leading.
Note the following identities for the imaginary unit and its reciprocal: j = e^(jπ/2) and 1/j = −j = e^(−jπ/2).
Thus the inductor and capacitor impedance equations can be rewritten in polar form: Z_L = ωL·e^(jπ/2) and Z_C = (1/(ωC))·e^(−jπ/2).
The magnitude gives the change in voltage amplitude for a given current amplitude through the impedance, while the exponential factors give the phase relationship.
Deriving the device-specific impedances
What follows below is a derivation of impedance for each of the three basic circuit elements: the resistor, the capacitor, and the inductor. Although the idea can be extended to define the relationship between the voltage and current of any arbitrary signal, these derivations assume sinusoidal signals. In fact, this applies to any arbitrary periodic signals, because these can be approximated as a sum of sinusoids through Fourier analysis.
Resistor
For a resistor, there is the relation
v(t) = i(t)·R,
which is Ohm's law.
Considering the voltage signal to be
v(t) = V_p·sin(ωt),
it follows that
i(t) = v(t)/R = (V_p/R)·sin(ωt).
This says that the ratio of AC voltage amplitude to alternating current (AC) amplitude across a resistor is R, and that the AC voltage leads the current across a resistor by 0 degrees.
This result is commonly expressed as
Z_resistor = R.
Capacitor
For a capacitor, there is the relation:
i(t) = C·(dv(t)/dt).
Considering the voltage signal to be
v(t) = V_p·e^(jωt),
it follows that
i(t) = jωC·V_p·e^(jωt) = jωC·v(t),
and thus, as previously,
Z_capacitor = v(t)/i(t) = 1/(jωC).
Conversely, if the current through the circuit is assumed to be sinusoidal, its complex representation being
i(t) = I_p·e^(jωt),
then integrating the differential equation
dv(t)/dt = i(t)/C
leads to
v(t) = (1/(jωC))·I_p·e^(jωt) + Const.
The Const term represents a fixed potential bias superimposed to the AC sinusoidal potential, that plays no role in AC analysis. For this purpose, this term can be assumed to be 0, hence again the impedance
Z_capacitor = 1/(jωC).
Inductor
For the inductor, we have the relation (from Faraday's law):
v(t) = L·(di(t)/dt).
This time, considering the current signal to be:
i(t) = I_p·sin(ωt),
it follows that:
v(t) = ωL·I_p·cos(ωt) = ωL·I_p·sin(ωt + π/2),
so the ratio of voltage amplitude to current amplitude is ωL and the voltage leads the current by 90 degrees.
This result is commonly expressed in polar form as
Z_inductor = ωL·e^(jπ/2),
or, using Euler's formula, as
Z_inductor = jωL.
As in the case of capacitors, it is also possible to derive this formula directly from the complex representations of the voltages and currents, or by assuming a sinusoidal voltage between the two poles of the inductor. In the latter case, integrating the differential equation above leads to a constant term for the current, that represents a fixed DC bias flowing through the inductor. This is set to zero because AC analysis using frequency domain impedance considers one frequency at a time and DC represents a separate frequency of zero hertz in this context.
Generalised s-plane impedance
Impedance defined in terms of jω can strictly be applied only to circuits that are driven with a steady-state AC signal. The concept of impedance can be extended to a circuit energised with any arbitrary signal by using complex frequency instead of jω. Complex frequency is given the symbol s and is, in general, a complex number. Signals are expressed in terms of complex frequency by taking the Laplace transform of the time domain expression of the signal. The impedance of the basic circuit elements in this more general notation is as follows: for a resistor, Z = R; for an inductor, Z = sL; for a capacitor, Z = 1/(sC).
For a DC circuit, this simplifies to s = 0. For a steady-state sinusoidal AC signal, s = jω.
Formal derivation
The impedance of an electrical component is defined as the ratio between the Laplace transforms of the voltage over it and the current through it, i.e.
Z(s) = V(s)/I(s),
where s is the complex Laplace parameter. As an example, according to the I-V law of a capacitor, I(s) = sC·V(s), from which it follows that Z(s) = 1/(sC).
In the phasor regime (steady-state AC, meaning all signals are represented mathematically as simple complex exponentials V(t) = V_0·e^(jωt) and I(t) = I_0·e^(jωt) oscillating at a common frequency ω), impedance can simply be calculated as the voltage-to-current ratio, in which the common time-dependent factor cancels out:
Z(ω) = V_0/I_0.
Again, for a capacitor, one gets that I_0 = jωC·V_0, and hence Z(ω) = 1/(jωC). The phasor domain is sometimes dubbed the frequency domain, although it lacks one of the dimensions of the Laplace parameter. For steady-state AC, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular:
The magnitude of the complex impedance is the ratio of the voltage amplitude to the current amplitude;
The phase of the complex impedance is the phase shift by which the current lags the voltage.
These two relationships hold even after taking the real part of the complex exponentials (see phasors), which is the part of the signal one actually measures in real-life circuits.
Resistance vs reactance
Resistance and reactance together determine the magnitude and phase of the impedance through the following relations:
|Z| = √(R² + X²) and θ = arctan(X/R), so that Z = R + jX = |Z|·e^(jθ).
In many applications, the relative phase of the voltage and current is not critical so only the magnitude of the impedance is significant.
Resistance
Resistance is the real part of impedance; a device with a purely resistive impedance exhibits no phase shift between the voltage and current.
Reactance
Reactance is the imaginary part of the impedance; a component with a finite reactance induces a phase shift between the voltage across it and the current through it.
A purely reactive component is distinguished by the sinusoidal voltage across the component being in quadrature with the sinusoidal current through the component. This implies that the component alternately absorbs energy from the circuit and then returns energy to the circuit. A pure reactance does not dissipate any power.
Capacitive reactance
A capacitor has a purely reactive impedance that is inversely proportional to the signal frequency. A capacitor consists of two conductors separated by an insulator, also known as a dielectric. Its reactance is X_C = −1/(ωC) = −1/(2πfC).
The minus sign indicates that the imaginary part of the impedance is negative.
At low frequencies, a capacitor approaches an open circuit so no current flows through it.
A DC voltage applied across a capacitor causes charge to accumulate on one side; the electric field due to the accumulated charge is the source of the opposition to the current. When the potential associated with the charge exactly balances the applied voltage, the current goes to zero.
Driven by an AC supply, a capacitor accumulates only a limited charge before the potential difference changes sign and the charge dissipates. The higher the frequency, the less charge accumulates and the smaller the opposition to the current.
Inductive reactance
Inductive reactance X_L is proportional to the signal frequency f and the inductance L: X_L = ωL = 2πfL.
An inductor consists of a coiled conductor. Faraday's law of electromagnetic induction gives the back emf (voltage opposing current) due to a rate-of-change of magnetic flux density through a current loop.
For an inductor consisting of a coil with N loops this gives: ε = −N·(dΦ_B/dt).
The back-emf is the source of the opposition to current flow. A constant direct current has a zero rate-of-change, and sees an inductor as a short-circuit (it is typically made from a material with a low resistivity). An alternating current has a time-averaged rate-of-change that is proportional to frequency, this causes the increase in inductive reactance with frequency.
Total reactance
The total reactance is given by
X = X_L + X_C
(note that X_C is negative),
so that the total impedance is
Z = R + jX.
Combining impedances
The total impedance of many simple networks of components can be calculated using the rules for combining impedances in series and parallel. The rules are identical to those for combining resistances, except that the numbers in general are complex numbers. The general case, however, requires equivalent impedance transforms in addition to series and parallel.
Series combination
For components connected in series, the current through each circuit element is the same; the total impedance is the sum of the component impedances:
Z_eq = Z_1 + Z_2 + ... + Z_n.
Or explicitly in real and imaginary terms:
R_eq = R_1 + R_2 + ... + R_n and X_eq = X_1 + X_2 + ... + X_n.
Parallel combination
For components connected in parallel, the voltage across each circuit element is the same; the ratio of currents through any two elements is the inverse ratio of their impedances.
Hence the inverse total impedance is the sum of the inverses of the component impedances:
1/Z_eq = 1/Z_1 + 1/Z_2 + ... + 1/Z_n,
or, when n = 2:
Z_eq = (Z_1·Z_2)/(Z_1 + Z_2).
The equivalent impedance Z_eq can be calculated in terms of the equivalent series resistance R_eq and reactance X_eq, as Z_eq = R_eq + j·X_eq.
Measurement
The measurement of the impedance of devices and transmission lines is a practical problem in radio technology and other fields. Measurements of impedance may be carried out at one frequency, or the variation of device impedance over a range of frequencies may be of interest. The impedance may be measured or displayed directly in ohms, or other values related to impedance may be displayed; for example, in a radio antenna, the standing wave ratio or reflection coefficient may be more useful than the impedance alone. The measurement of impedance requires the measurement of the magnitude of voltage and current, and the phase difference between them. Impedance is often measured by "bridge" methods, similar to the direct-current Wheatstone bridge; a calibrated reference impedance is adjusted to balance off the effect of the impedance of the device under test. Impedance measurement in power electronic devices may require simultaneous measurement and provision of power to the operating device.
The impedance of a device can be calculated by complex division of the voltage and current. The impedance of the device can be calculated by applying a sinusoidal voltage to the device in series with a resistor, and measuring the voltage across the resistor and across the device. Performing this measurement by sweeping the frequencies of the applied signal provides the impedance phase and magnitude.
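A minimal sketch of that series-resistor calculation, assuming the two voltages have already been measured as complex phasors and the resistor value is known; all numbers below are illustrative.

```python
import cmath

R_series = 100.0                         # known series resistor, ohms (illustrative)
V_resistor = cmath.rect(0.50, 0.0)       # measured phasor across the resistor (volts, radians)
V_device = cmath.rect(1.20, -1.0)        # measured phasor across the device (volts, radians)

I = V_resistor / R_series                # the same current flows through both elements
Z_device = V_device / I                  # complex division gives the device impedance

mag, phase = cmath.polar(Z_device)
print(f"|Z| = {mag:.1f} ohm, phase = {phase:.3f} rad")
```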
The use of an impulse response may be used in combination with the fast Fourier transform (FFT) to rapidly measure the electrical impedance of various electrical devices.
The LCR meter (Inductance (L), Capacitance (C), and Resistance (R)) is a device commonly used to measure the inductance, resistance and capacitance of a component; from these values, the impedance at any frequency can be calculated.
Example
Consider an LC tank circuit.
The complex impedance of the circuit is
Z(ω) = jωL / (1 − ω²·L·C).
It is immediately seen that the value of 1/|Z| is minimal (actually equal to 0 in this case) whenever
ω²·L·C = 1.
Therefore, the fundamental resonance angular frequency is
ω = 1/√(L·C).
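A short numerical check of this resonance condition, with assumed component values chosen only for illustration:

```python
import math

L = 10e-3      # inductance in henries (illustrative)
C = 1e-6       # capacitance in farads (illustrative)

omega_0 = 1 / math.sqrt(L * C)          # resonance angular frequency, rad/s
f_0 = omega_0 / (2 * math.pi)           # resonance frequency in hertz

# At omega_0 the inductive and capacitive reactances are equal: omega_0*L == 1/(omega_0*C)
assert abs(omega_0 * L - 1 / (omega_0 * C)) < 1e-9
print(f"f0 = {f_0:.1f} Hz")
```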
Variable impedance
In general, neither impedance nor admittance can vary with time, since they are defined for complex exponentials that extend over all time (−∞ < t < +∞). If the complex exponential voltage to current ratio changes over time or amplitude, the circuit element cannot be described using the frequency domain. However, many components and systems (e.g., varicaps that are used in radio tuners) may exhibit non-linear or time-varying voltage to current ratios that seem to be linear time-invariant (LTI) for small signals and over small observation windows, so they can be roughly described as if they had a time-varying impedance. This description is an approximation: Over large signal swings or wide observation windows, the voltage to current relationship will not be LTI and cannot be described by impedance.
See also
Transmission line impedance
Notes
References
Kline, Ronald R., Steinmetz: Engineer and Socialist, Plunkett Lake Press, 2019 (ebook reprint of Johns Hopkins University Press, 1992 ).
External links
ECE 209: Review of Circuits as LTI Systems – Brief explanation of Laplace-domain circuit analysis; includes a definition of impedance.
Electrical resistance and conductance
Physical quantities
Antennas (radio) | Electrical impedance | [
"Physics",
"Mathematics"
] | 3,945 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Wikipedia categories named after physical quantities",
"Physical properties",
"Electrical resistance and conductance"
] |
41,958 | https://en.wikipedia.org/wiki/Lidar | Lidar (, also LIDAR, LiDAR or LADAR, an acronym of "light detection and ranging" or "laser imaging, detection, and ranging") is a method for determining ranges by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver. Lidar may operate in a fixed direction (e.g., vertical) or it may scan multiple directions, in which case it is known as lidar scanning or 3D laser scanning, a special combination of 3-D scanning and laser scanning. Lidar has terrestrial, airborne, and mobile applications.
Lidar is commonly used to make high-resolution maps, with applications in surveying, geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, atmospheric physics, laser guidance, airborne laser swathe mapping (ALSM), and laser altimetry. It is used to make digital 3-D representations of areas on the Earth's surface and ocean bottom of the intertidal and near coastal zone by varying the wavelength of light. It has also been increasingly used in control and navigation for autonomous cars and for the helicopter Ingenuity on its record-setting flights over the terrain of Mars.
The evolution of quantum technology has given rise to the emergence of Quantum Lidar, demonstrating higher efficiency and sensitivity when compared to conventional lidar systems.
History and etymology
Under the direction of Malcolm Stitch, the Hughes Aircraft Company introduced the first lidar-like system in 1961, shortly after the invention of the laser. Intended for satellite tracking, this system combined laser-focused imaging with the ability to calculate distances by measuring the time for a signal to return using appropriate sensors and data acquisition electronics. It was originally called "Colidar", an acronym for "coherent light detecting and ranging", derived from the term "radar", itself an acronym for "radio detection and ranging". All laser rangefinders, laser altimeters and lidar units are derived from the early colidar systems.
The first practical terrestrial application of a colidar system was the "Colidar Mark II", a large rifle-like laser rangefinder produced in 1963, which had a range of 11 km and an accuracy of 4.5 m, to be used for military targeting. The first mention of lidar as a stand-alone word in 1963 suggests that it originated as a portmanteau of "light" and "radar": "Eventually the laser may provide an extremely sensitive detector of particular wavelengths from distant objects. Meanwhile, it is being used to study the Moon by 'lidar' (light radar) ..."
The name "photonic radar" is sometimes used to mean visible-spectrum range finding like lidar.
Lidar's first applications were in meteorology, for which the National Center for Atmospheric Research used it to measure clouds and pollution. The general public became aware of the accuracy and usefulness of lidar systems in 1971 during the Apollo 15 mission, when astronauts used a laser altimeter to map the surface of the Moon.
Although the English language no longer treats "radar" as an acronym (i.e., it is uncapitalized), the word "lidar" was capitalized as "LIDAR" or "LiDAR" in some publications beginning in the 1980s. No consensus exists on capitalization. Various publications refer to lidar as "LIDAR", "LiDAR", "LIDaR", or "Lidar". The USGS uses both "LIDAR" and "lidar", sometimes in the same document; the New York Times predominantly uses "lidar" for staff-written articles, although contributing news feeds such as Reuters may use Lidar.
General description
Lidar uses ultraviolet, visible, or near infrared light to image objects. It can target a wide range of materials, including non-metallic objects, rocks, rain, chemical compounds, aerosols, clouds and even single molecules. A narrow laser beam can map physical features with very high resolutions; for example, an aircraft can map terrain at resolution or better.
The essential concept of lidar was originated by E. H. Synge in 1930, who envisaged the use of powerful searchlights to probe the atmosphere. Indeed, lidar has since been used extensively for atmospheric research and meteorology. Lidar instruments fitted to aircraft and satellites carry out surveying and mapping a recent example being the U.S. Geological Survey Experimental Advanced Airborne Research Lidar. NASA has identified lidar as a key technology for enabling autonomous precision safe landing of future robotic and crewed lunar-landing vehicles.
Wavelengths vary to suit the target: from about 10 micrometers (infrared) to approximately 250 nanometers (ultraviolet). Typically, light is reflected via backscattering, as opposed to pure reflection one might find with a mirror. Different types of scattering are used for different lidar applications: most commonly Rayleigh scattering, Mie scattering, Raman scattering, and fluorescence. Suitable combinations of wavelengths can allow remote mapping of atmospheric contents by identifying wavelength-dependent changes in the intensity of the returned signal.
The name "photonic radar" is sometimes used to mean visible-spectrum range finding like lidar, although photonic radar more strictly refers to radio-frequency range finding using photonics components.
Technology
Mathematical formula
A lidar determines the distance of an object or a surface with the formula:
d = c·t / 2,
where c is the speed of light, d is the distance between the detector and the object or surface being detected, and t is the time spent for the laser light to travel to the object or surface being detected, then travel back to the detector.
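For instance, a round-trip time of roughly 66.7 ns corresponds to a target about 10 m away; a minimal sketch of the calculation (the sample time value is illustrative):

```python
C = 299_792_458.0          # speed of light in metres per second

def lidar_range(round_trip_time_s: float) -> float:
    """Distance to the target from the measured round-trip time of the pulse."""
    return C * round_trip_time_s / 2

print(lidar_range(66.7e-9))   # about 10.0 m
```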
Design
The two kinds of lidar detection schemes are "incoherent" or direct energy detection (which principally measures amplitude changes of the reflected light) and coherent detection (best for measuring Doppler shifts, or changes in the phase of the reflected light). Coherent systems generally use optical heterodyne detection. This is more sensitive than direct detection and allows them to operate at much lower power, but requires more complex transceivers.
Both types employ pulse models: either micropulse or high energy. Micropulse systems utilize intermittent bursts of energy. They developed as a result of ever-increasing computer power, combined with advances in laser technology. They use considerably less energy in the laser, typically on the order of one microjoule, and are often "eye-safe", meaning they can be used without safety precautions. High-power systems are common in atmospheric research, where they are widely used for measuring atmospheric parameters: the height, layering and densities of clouds, cloud particle properties (extinction coefficient, backscatter coefficient, depolarization), temperature, pressure, wind, humidity, and trace gas concentration (ozone, methane, nitrous oxide, etc.).
Components
Lidar systems consist of several major components.
Laser
Lasers emitting at 600–1,000 nm are most common for non-scientific applications. To make the system eye-safe for people on the ground, the maximum power of the laser is limited, or an automatic shut-off system that turns the laser off at specific altitudes is used.
One common alternative, the 1,550 nm laser, is eye-safe at relatively high power levels, since light at this wavelength is absorbed in the front of the eye and does not reach the retina. A trade-off, though, is that current detector technology at this wavelength is less advanced, so these lasers are generally used at longer ranges with lower accuracies. They are also used for military applications because 1,550 nm light is not visible in night-vision goggles, unlike the shorter 1,000 nm infrared laser.
Airborne topographic mapping lidars generally use 1,064 nm diode-pumped YAG lasers, while bathymetric (underwater depth research) systems generally use 532 nm frequency-doubled diode pumped YAG lasers because 532 nm penetrates water with much less attenuation than 1,064 nm. Laser settings include the laser repetition rate (which controls the data collection speed). Pulse length is generally an attribute of the laser cavity length, the number of passes required through the gain material (YAG, YLF, etc.), and Q-switch (pulsing) speed. Better target resolution is achieved with shorter pulses, provided the lidar receiver detectors and electronics have sufficient bandwidth.
Phased arrays
A phased array can illuminate any direction by using a microscopic array of individual antennas. Controlling the timing (phase) of each antenna steers a cohesive signal in a specific direction.
Phased arrays have been used in radar since the 1940s. On the order of a million optical antennas are needed to produce a radiation pattern of a certain size in a certain direction, and the phase of each individual antenna (emitter) must be precisely controlled. It is very difficult, if possible at all, to use the same technique in a lidar. The main problems are that all individual emitters must be coherent (technically, driven by the same "master" oscillator or laser source), must have dimensions on the order of the wavelength of the emitted light (around 1 micron) so that each acts as a point source, and must have their phases controlled with high accuracy.
Several companies are developing commercial solid-state lidar units, but these units use a different principle, described under Flash lidar below.
Microelectromechanical machines
Microelectromechanical mirrors (MEMS) are not entirely solid-state. However, their tiny form factor provides many of the same cost benefits. A single laser is directed to a single mirror that can be reoriented to view any part of the target field. The mirror spins at a rapid rate. However, MEMS systems generally operate in a single plane (left to right). To add a second dimension generally requires a second mirror that moves up and down. Alternatively, another laser can hit the same mirror from another angle. MEMS systems can be disrupted by shock/vibration and may require repeated calibration.
Scanner and optics
Image development speed is affected by the speed at which the scene is scanned. Options for scanning the azimuth and elevation include dual oscillating plane mirrors, a combination with a polygon mirror, and a dual-axis scanner. Optic choices affect the angular resolution and range that can be detected. A hole mirror or a beam splitter are options for collecting the return signal.
Photodetector and receiver electronics
Two main photodetector technologies are used in lidar: solid-state photodetectors, such as silicon avalanche photodiodes, or photomultipliers. The sensitivity of the receiver is another parameter that has to be balanced in a lidar design.
Position and navigation systems
Lidar sensors mounted on mobile platforms such as airplanes or satellites require instrumentation to determine the absolute position and orientation of the sensor. Such devices generally include a Global Positioning System receiver and an inertial measurement unit (IMU).
Sensor
Lidar uses active sensors that supply their own illumination source. The energy source hits objects and the reflected energy is detected and measured by sensors. Distance to the object is determined by recording the time between transmitted and backscattered pulses and using the speed of light to calculate the distance traveled. Flash lidar allows for 3-D imaging because of the camera's ability to emit a larger flash and sense the spatial relationships and dimensions of the area of interest with the returned energy. This allows for more accurate imaging because the captured frames do not need to be stitched together and the system is not sensitive to platform motion, resulting in less distortion.
3-D imaging can be achieved using both scanning and non-scanning systems. "3-D gated viewing laser radar" is a non-scanning laser ranging system that applies a pulsed laser and a fast gated camera. Research has begun for virtual beam steering using Digital Light Processing (DLP) technology.
Imaging lidar can also be performed using arrays of high speed detectors and modulation sensitive detector arrays typically built on single chips using complementary metal–oxide–semiconductor (CMOS) and hybrid CMOS/Charge-coupled device (CCD) fabrication techniques. In these devices each pixel performs some local processing such as demodulation or gating at high speed, downconverting the signals to video rate so that the array can be read like a camera. Using this technique many thousands of pixels / channels may be acquired simultaneously. High resolution 3-D lidar cameras use homodyne detection with an electronic CCD or CMOS shutter.
A coherent imaging lidar uses synthetic array heterodyne detection to enable a staring single element receiver to act as though it were an imaging array.
In 2014, Lincoln Laboratory announced a new imaging chip with more than 16,384 pixels, each able to image a single photon, enabling them to capture a wide area in a single image. An earlier generation of the technology with one fourth as many pixels was dispatched by the U.S. military after the January 2010 Haiti earthquake. A single pass by a business jet at over Port-au-Prince was able to capture instantaneous snapshots of squares of the city at a resolution of , displaying the precise height of rubble strewn in city streets. The new system is ten times better, and could produce much larger maps more quickly. The chip uses indium gallium arsenide (InGaAs), which operates in the infrared spectrum at a relatively long wavelength that allows for higher power and longer ranges. In many applications, such as self-driving cars, the new system will lower costs by not requiring a mechanical component to aim the chip. InGaAs uses less hazardous wavelengths than conventional silicon detectors, which operate at visual wavelengths. New technologies for infrared single-photon counting LIDAR are advancing rapidly, including arrays and cameras in a variety of semiconductor and superconducting platforms.
Flash lidar
In flash lidar, the entire field of view is illuminated with a wide diverging laser beam in a single pulse. This is in contrast to conventional scanning lidar, which uses a collimated laser beam that illuminates a single point at a time, and the beam is raster scanned to illuminate the field of view point-by-point. This illumination method requires a different detection scheme as well. In both scanning and flash lidar, a time-of-flight camera is used to collect information about both the 3-D location and intensity of the light incident on it in every frame. However, in scanning lidar, this camera contains only a point sensor, while in flash lidar, the camera contains either a 1-D or a 2-D sensor array, each pixel of which collects 3-D location and intensity information. In both cases, the depth information is collected using the time of flight of the laser pulse (i.e., the time it takes each laser pulse to hit the target and return to the sensor), which requires the pulsing of the laser and acquisition by the camera to be synchronized. The result is a camera that takes pictures of distance, instead of colors. Flash lidar is especially advantageous, when compared to scanning lidar, when the camera, scene, or both are moving, since the entire scene is illuminated at the same time. With scanning lidar, motion can cause "jitter" from the lapse in time as the laser rasters over the scene.
As with all forms of lidar, the onboard source of illumination makes flash lidar an active sensor. The signal that is returned is processed by embedded algorithms to produce a nearly instantaneous 3-D rendering of objects and terrain features within the field of view of the sensor. The laser pulse repetition frequency is sufficient for generating 3-D videos with high resolution and accuracy. The high frame rate of the sensor makes it a useful tool for a variety of applications that benefit from real-time visualization, such as highly precise remote landing operations. By immediately returning a 3-D elevation mesh of target landscapes, a flash sensor can be used to identify optimal landing zones in autonomous spacecraft landing scenarios.
Seeing at a distance requires a powerful burst of light. The power is limited to levels that do not damage human retinas. Wavelengths must not affect human eyes. However, low-cost silicon imagers do not read light in the eye-safe spectrum. Instead, gallium-arsenide imagers are required, which can boost costs to $200,000. Gallium-arsenide is the same compound used to produce high-cost, high-efficiency solar panels usually used in space applications.
Classification
Based on orientation
Lidar can be oriented to nadir, zenith, or laterally. For example, lidar altimeters look down, an atmospheric lidar looks up, and lidar-based collision avoidance systems are side-looking.
Based on scanning mechanism
Laser projections of lidars can be manipulated using various methods and mechanisms to produce a scanning effect: the standard spindle-type, which spins to give a 360-degree view; solid-state lidar, which has a fixed field of view, but no moving parts, and can use either MEMS or optical phased arrays to steer the beams; and flash lidar, which spreads a flash of light over a large field of view before the signal bounces back to a detector.
Based on platform
Lidar applications can be divided into airborne and terrestrial types. The two types require scanners with varying specifications based on the data's purpose, the size of the area to be captured, the range of measurement desired, the cost of equipment, and more. Spaceborne platforms are also possible, see satellite laser altimetry.
Airborne
Airborne lidar (also airborne laser scanning) is when a laser scanner, while attached to an aircraft during flight, creates a 3-D point cloud model of the landscape. This is currently the most detailed and accurate method of creating digital elevation models, replacing photogrammetry. One major advantage in comparison with photogrammetry is the ability to filter out reflections from vegetation from the point cloud model to create a digital terrain model which represents ground surfaces such as rivers, paths, cultural heritage sites, etc., which are concealed by trees. Within the category of airborne lidar, there is sometimes a distinction made between high-altitude and low-altitude applications, but the main difference is a reduction in both accuracy and point density of data acquired at higher altitudes. Airborne lidar can also be used to create bathymetric models in shallow water.
The main products of airborne lidar include digital elevation models (DEMs) and digital surface models (DSMs). The raw returns and classified ground points are vectors of discrete points, while DEMs and DSMs are raster grids interpolated from those discrete points. The process also involves capturing digital aerial photographs. Airborne lidar is used, for example, to interpret deep-seated landslides under vegetation cover by revealing scarps, tension cracks and tipped trees. Airborne lidar digital elevation models can see through the forest canopy and support detailed measurements of scarps, erosion and the tilting of electric poles.
Airborne lidar data can be processed with tools such as the Toolbox for Lidar Data Filtering and Forest Studies (TIFFS), which provides lidar data filtering and terrain-study software. The data is interpolated into digital terrain models using the software. The laser is directed at the region to be mapped, and each point's height above the ground is calculated by subtracting the corresponding digital terrain model elevation from the point's original z-coordinate. Based on this height above the ground, the non-vegetation data is separated out; it may include objects such as buildings, electric power lines, flying birds and insects. The remaining points are treated as vegetation and used for modeling and mapping. Within each study plot, lidar metrics are derived by computing statistics such as the mean, standard deviation, skewness, percentiles and quadratic mean of the point heights.
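A rough sketch of the height-above-ground step and the plot statistics described above (assuming the point elevations and matching digital terrain model elevations are already loaded as NumPy arrays; the array names and the 2 m vegetation threshold are illustrative assumptions, not values from the source):

import numpy as np
from scipy.stats import skew

def plot_metrics(point_z, dtm_z, veg_threshold=2.0):
    """Compute simple per-plot lidar metrics from point and terrain elevations."""
    hag = point_z - dtm_z                      # height above ground
    veg = hag[hag >= veg_threshold]            # points above the threshold treated as vegetation
    return {
        "mean": float(np.mean(veg)),
        "std": float(np.std(veg)),
        "skewness": float(skew(veg)),
        "percentiles_25_50_75": np.percentile(veg, [25, 50, 75]).tolist(),
        "quadratic_mean": float(np.sqrt(np.mean(veg ** 2))),
    }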
Multiple commercial lidar systems for unmanned aerial vehicles are currently on the market. These platforms can systematically scan large areas, or provide a cheaper alternative to manned aircraft for smaller scanning operations.
Airborne lidar bathymetry
The airborne lidar bathymetric system measures the time of flight of a signal from the source to its return to the sensor. The data acquisition technique involves a sea-floor mapping component and a ground-truth component that includes video transects and sampling. It works using a green-spectrum (532 nm) laser beam. Two beams are projected onto a fast rotating mirror, which creates an array of points. One of the beams penetrates the water and, under favorable conditions, also detects the bottom surface of the water.
Water depth measurable by lidar depends on the clarity of the water and the absorption of the wavelength used. Water is most transparent to green and blue light, so these will penetrate deepest in clean water. Blue-green light of 532 nm produced by frequency doubled solid-state IR laser output is the standard for airborne bathymetry. This light can penetrate water but pulse strength attenuates exponentially with distance traveled through the water. Lidar can measure depths from about , with vertical accuracy in the order of . The surface reflection makes water shallower than about difficult to resolve, and absorption limits the maximum depth. Turbidity causes scattering and has a significant role in determining the maximum depth that can be resolved in most situations, and dissolved pigments can increase absorption depending on wavelength. Other reports indicate that water penetration tends to be between two and three times Secchi depth. Bathymetric lidar is most useful in the depth range in coastal mapping.
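A minimal sketch of the exponential attenuation of pulse strength mentioned above (a Beer–Lambert-style two-way loss; the attenuation coefficient used here is purely illustrative, not a value from the source):

import math

def relative_return_power(depth_m, attenuation_per_m=0.1):
    """Fraction of pulse power surviving the two-way path through the water column."""
    return math.exp(-2.0 * attenuation_per_m * depth_m)

# Example: with this illustrative coefficient, a return from 10 m depth keeps ~13.5% of its strength.
print(relative_return_power(10.0))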
On average, in fairly clear coastal seawater, lidar can penetrate to about , and in turbid water up to about . Saputra et al. (2021) found that, on average, the green laser light penetrates water to about one and a half to two times the Secchi depth in Indonesian waters. Water temperature and salinity affect the refractive index, which has a small effect on the depth calculation.
The data obtained shows the full extent of the land surface exposed above the sea floor. This technique is extremely useful, as it plays an important role in major sea-floor mapping programs. The mapping yields onshore topography as well as underwater elevations. Sea-floor reflectance imaging is another product of this system, which can benefit the mapping of underwater habitats. This technique has been used for three-dimensional image mapping of California's waters using a hydrographic lidar.
Full-waveform lidar
Airborne lidar systems were traditionally able to acquire only a few peak returns, while more recent systems acquire and digitize the entire reflected signal. Scientists analyse the waveform signal to extract peak returns using Gaussian decomposition; Zhuang et al. (2017) used this approach to estimate above-ground biomass. Handling the huge amounts of full-waveform data is difficult. Gaussian decomposition of the waveforms is therefore effective, since it reduces the data and is supported by existing workflows for interpreting 3-D point clouds. Recent studies have investigated voxelisation: the intensities of the waveform samples are inserted into a voxelised space (a 3-D grayscale image), building up a 3-D representation of the scanned area. Related metrics and information can then be extracted from that voxelised space. Structural information can be extracted using 3-D metrics from local areas, and one case study used the voxelisation approach for detecting dead standing Eucalypt trees in Australia.
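A minimal sketch of Gaussian decomposition of a return waveform (fitting a small, fixed number of Gaussian pulses with SciPy; the synthetic waveform, the initial guesses and the choice of two components are illustrative assumptions, not details from the studies cited above):

import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian pulses, a simple model of a full-waveform return."""
    return (a1 * np.exp(-((t - mu1) ** 2) / (2 * s1 ** 2))
            + a2 * np.exp(-((t - mu2) ** 2) / (2 * s2 ** 2)))

t = np.linspace(0, 100, 500)                                  # sample times (arbitrary units)
waveform = two_gaussians(t, 1.0, 30, 3, 0.5, 60, 4) + 0.02 * np.random.randn(t.size)

# Fit the model; p0 holds rough initial guesses for the two peaks.
params, _ = curve_fit(two_gaussians, t, waveform, p0=[1, 25, 5, 0.4, 65, 5])
print(params)  # fitted amplitudes, centres and widths, one set per detected return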
Terrestrial
Terrestrial applications of lidar (also terrestrial laser scanning) happen on the Earth's surface and can be either stationary or mobile. Stationary terrestrial scanning is most common as a survey method, for example in conventional topography, monitoring, cultural heritage documentation and forensics. The 3-D point clouds acquired from these types of scanners can be matched with digital images taken of the scanned area from the scanner's location to create realistic looking 3-D models in a relatively short time when compared to other technologies. Each point in the point cloud is given the colour of the pixel from the image taken at the same location and direction as the laser beam that created the point.
Mobile lidar (also mobile laser scanning) is when two or more scanners are attached to a moving vehicle to collect data along a path. These scanners are almost always paired with other kinds of equipment, including GNSS receivers and IMUs. One example application is surveying streets, where power lines, exact bridge heights, bordering trees, etc. all need to be taken into account. Instead of collecting each of these measurements individually in the field with a tachymeter, a 3-D model from a point cloud can be created where all of the measurements needed can be made, depending on the quality of the data collected. This eliminates the problem of forgetting to take a measurement, so long as the model is available, reliable and has an appropriate level of accuracy.
Terrestrial lidar mapping involves a process of occupancy-grid map generation. The area is divided into a grid of cells, and each cell stores the height values of the lidar points that fall within it. A binary map is then created by applying a threshold to the cell values for further processing. The next step is to process the radial distance and z-coordinates from each scan to identify which 3-D points correspond to each grid cell, completing the data formation.
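A rough sketch of this grid-and-threshold step (the cell size, the height threshold and the array layout are illustrative assumptions, not parameters from the source):

import numpy as np

def occupancy_grid(points, cell_size=0.5, height_threshold=0.3):
    """points: (N, 3) array of x, y, z coordinates. Returns a binary occupancy map."""
    ij = np.floor(points[:, :2] / cell_size).astype(int)
    ij -= ij.min(axis=0)                              # shift indices so they start at 0
    grid = np.full(ij.max(axis=0) + 1, -np.inf)
    for (i, j), z in zip(ij, points[:, 2]):
        grid[i, j] = max(grid[i, j], z)               # keep the highest return per cell
    return grid >= height_threshold                   # binary map for further processing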
Applications
There is a wide variety of lidar applications in addition to those listed below, and lidar is often featured in national lidar dataset programs. These applications are largely determined by the range of effective object detection; resolution, which is how accurately the lidar identifies and classifies objects; and reflectance confusion, meaning how well the lidar can see something in the presence of bright objects, like reflective signs or bright sun.
Companies are working to cut the cost of lidar sensors, currently anywhere from about US$1,200 to more than $12,000. Lower prices will make lidar more attractive for new markets.
Agriculture
Agricultural robots have been used for a variety of purposes, ranging from seed and fertilizer dispersion to sensing techniques and crop scouting for the task of weed control.
Lidar can help determine where to apply costly fertilizer. It can create a topographical map of the fields and reveal slopes and sun exposure of the farmland. Researchers at the Agricultural Research Service used this topographical data with the farmland yield results from previous years, to categorize land into zones of high, medium, or low yield. This indicates where to apply fertilizer to maximize yield.
Lidar is now used to monitor insects in the field. The use of lidar can detect the movement and behavior of individual flying insects, with identification down to sex and species. In 2017 a patent application was published on this technology in the United States, Europe, and China.
Another application is crop mapping in orchards and vineyards, to detect foliage growth and the need for pruning or other maintenance, detect variations in fruit production, or count plants.
Lidar is useful in GNSS-denied situations, such as nut and fruit orchards, where foliage causes interference for agriculture equipment that would otherwise utilize a precise GNSS fix. Lidar sensors can detect and track the relative position of rows, plants, and other markers so that farming equipment can continue operating until a GNSS fix is reestablished.
Plant species classification
Controlling weeds requires identifying plant species. This can be done by using 3-D lidar and machine learning. Lidar produces plant contours as a "point cloud" with range and reflectance values. This data is transformed, and features are extracted from it. If the species is known, the features are added as new data; the species is labelled and its features are stored as an example for identifying the species in the real environment. This method is efficient because it uses a low-resolution lidar and supervised learning. It includes an easy-to-compute feature set of common statistical features which are independent of the plant size.
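A minimal sketch of the feature-extraction and supervised-learning idea (the feature choices, the random-forest model and the synthetic training data are illustrative assumptions, not the method used in the cited work):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cloud_features(points, reflectance):
    """Simple size-independent statistics from one plant's point cloud."""
    z = points[:, 2]
    return [float(np.std(z) / (np.ptp(z) + 1e-9)),    # normalised height spread
            float(np.mean(reflectance)), float(np.std(reflectance))]

# Synthetic stand-in for labelled training clouds (real data would come from the lidar).
rng = np.random.default_rng(0)
clouds = [(rng.normal(size=(200, 3)), rng.uniform(0.2, 0.8, 200)) for _ in range(40)]
labels = rng.integers(0, 2, 40)                        # two plant species, labels 0 and 1

X = np.array([cloud_features(p, r) for p, r in clouds])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))                              # classify a few clouds as a smoke test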
Archaeology
Lidar has many uses in archaeology, including planning of field campaigns, mapping features under forest canopy, and overview of broad, continuous features indistinguishable from the ground. Lidar can produce high-resolution datasets quickly and cheaply. Lidar-derived products can be easily integrated into a Geographic Information System (GIS) for analysis and interpretation.
Lidar can also help to create high-resolution digital elevation models (DEMs) of archaeological sites that can reveal micro-topography that is otherwise hidden by vegetation. The intensity of the returned lidar signal can be used to detect features buried under flat vegetated surfaces such as fields, especially when mapping using the infrared spectrum. The presence of these features affects plant growth and thus the amount of infrared light reflected back. For example, at Fort Beauséjour – Fort Cumberland National Historic Site, Canada, lidar discovered archaeological features related to the siege of the Fort in 1755. Features that could not be distinguished on the ground or through aerial photography were identified by overlaying hill shades of the DEM created with artificial illumination from various angles. Another example is work at Caracol by Arlen Chase and his wife Diane Zaino Chase. In 2012, lidar was used to search for the legendary city of La Ciudad Blanca or "City of the Monkey God" in the La Mosquitia region of the Honduran jungle. During a seven-day mapping period, evidence was found of man-made structures. In June 2013, the rediscovery of the city of Mahendraparvata was announced. In southern New England, lidar was used to reveal stone walls, building foundations, abandoned roads, and other landscape features obscured in aerial photography by the region's dense forest canopy. In Cambodia, lidar data were used by Damian Evans and Roland Fletcher to reveal anthropogenic changes to Angkor landscape.
In 2012, lidar revealed that the Purépecha settlement of Angamuco in Michoacán, Mexico had about as many buildings as today's Manhattan; while in 2016, its use in mapping ancient Maya causeways in northern Guatemala, revealed 17 elevated roads linking the ancient city of El Mirador to other sites. In 2018, archaeologists using lidar discovered more than 60,000 man-made structures in the Maya Biosphere Reserve, a "major breakthrough" that showed the Maya civilization was much larger than previously thought. In 2024, archaeologists using lidar discovered the Upano Valley sites.
Autonomous vehicles
Autonomous vehicles may use lidar for obstacle detection and avoidance to navigate safely through environments. The introduction of lidar was pivotal, acting as the key enabler behind Stanley, the first autonomous vehicle to successfully complete the DARPA Grand Challenge. Point-cloud output from the lidar sensor provides the necessary data for robot software to determine where potential obstacles exist in the environment and where the robot is in relation to those potential obstacles. The Singapore-MIT Alliance for Research and Technology (SMART) is actively developing technologies for autonomous lidar vehicles.
The very first generations of automotive adaptive cruise control systems used only lidar sensors.
Object detection for transportation systems
In transportation systems, to ensure vehicle and passenger safety and to develop electronic systems that deliver driver assistance, understanding the vehicle and its surrounding environment is essential. Lidar systems play an important role in the safety of transportation systems. Many electronic systems which add to the driver assistance and vehicle safety such as Adaptive Cruise Control (ACC), Emergency Brake Assist, and Anti-lock Braking System (ABS) depend on the detection of a vehicle's environment to act autonomously or semi-autonomously. Lidar mapping and estimation achieve this.
Basics overview: current lidar systems use rotating hexagonal mirrors which split the laser beam. The upper three beams are used to detect the vehicle and obstacles ahead, and the lower beams are used to detect lane markings and road features. The major advantage of lidar is that spatial structure is obtained, and this data can be fused with other sensors such as radar to build a better picture of the vehicle environment in terms of the static and dynamic properties of the objects present. However, a significant issue with lidar is the difficulty of reconstructing point-cloud data in poor weather conditions. In heavy rain, for example, the light pulses emitted from the lidar system are partially reflected off rain droplets, which adds noise to the data, called 'echoes'.
The following are various approaches to processing lidar data and using it, along with data from other sensors, through sensor fusion to detect the vehicle's environmental conditions.
Obstacle detection and road environment recognition using lidar
This method, proposed by Kun Zhou et al., not only focuses on object detection and tracking but also recognizes lane markings and road features. As mentioned earlier, the lidar system uses rotating hexagonal mirrors that split the laser beam into six beams. The upper three layers are used to detect forward objects such as vehicles and roadside objects. The sensor is made of weather-resistant material. The data detected by lidar are clustered into several segments and tracked by a Kalman filter. Clustering here is based on the characteristics of each segment with respect to an object model, which distinguishes different objects such as vehicles and signboards; these characteristics include the dimensions of the object. The reflectors on the rear edges of vehicles are used to differentiate vehicles from other objects. Object tracking is done using a two-stage Kalman filter, considering the stability of tracking and the accelerated motion of objects. Lidar reflective-intensity data is also used for curb detection by making use of robust regression to deal with occlusions. The road marking is detected using a modified Otsu method by distinguishing rough and shiny surfaces.
Advantages
Roadside reflectors that indicate the lane border are sometimes hidden for various reasons; therefore, other information is needed to recognize the road border. The lidar used in this method can measure the reflectivity of the object, so the road border can also be recognized from this data. In addition, the use of a sensor with a weather-robust head helps to detect objects even in bad weather conditions. A canopy height model computed before and after a flood is a good example: lidar can capture highly detailed canopy-height data as well as the road border.
Lidar measurements help identify the spatial structure of the obstacle. This helps distinguish objects based on size and estimate the impact of driving over it.
Lidar systems provide better range and a large field of view, which helps in detecting obstacles on curves. This is one of their major advantages over radar systems, which have a narrower field of view. Fusing lidar measurements with other sensors makes the system robust and useful in real-time applications, since lidar-dependent systems alone cannot estimate the dynamic information about the detected object.
It has been shown that lidar can be manipulated, such that self-driving cars are tricked into taking evasive action.
Ecology and conservation
Lidar has also found many applications in mapping natural and managed landscapes such as forests, wetlands, and grasslands. Canopy heights, biomass measurements, and leaf area can all be studied using airborne lidar systems. Similarly, lidar is used by many industries, including energy and railroad, and by departments of transportation as a faster way of surveying. Topographic maps can also be generated readily from lidar, including for recreational use such as in the production of orienteering maps. Lidar has also been applied to estimate and assess the biodiversity of plants, fungi, and animals. Using southern bull kelp in New Zealand, coastal lidar mapping data has been compared with population genomic evidence to form hypotheses regarding the occurrence and timing of prehistoric earthquake uplift events.
Forestry
Lidar systems have also been applied to improve forestry management. Measurements are used to take inventory in forest plots as well as to calculate individual tree heights, crown width and crown diameter. Other statistical analyses use lidar data to estimate total plot information such as canopy volume, mean, minimum and maximum heights, vegetation cover, biomass, and carbon density. Aerial lidar was used to map the bush fires in Australia in early 2020. The data was manipulated to view bare earth and to identify healthy and burned vegetation.
Geology and soil science
High-resolution digital elevation maps generated by airborne and stationary lidar have led to significant advances in geomorphology (the branch of geoscience concerned with the origin and evolution of Earth's surface topography). Lidar's abilities to detect subtle topographic features such as river terraces, river channel banks and glacial landforms, to measure land-surface elevation beneath the vegetation canopy, to better resolve spatial derivatives of elevation, to detect rockfalls, and to detect elevation changes between repeat surveys have enabled many novel studies of the physical and chemical processes that shape landscapes.
In 2005 the Tour Ronde in the Mont Blanc massif became the first high alpine mountain on which lidar was employed to monitor the increasing occurrence of severe rock-fall over large rock faces allegedly caused by climate change and degradation of permafrost at high altitude.
Lidar is also used in structural geology and geophysics as a combination of airborne lidar and GNSS for the detection and study of faults and for measuring uplift. The output of the two technologies can produce extremely accurate elevation models for terrain – models that can even measure ground elevation through trees. This combination was used most famously to find the location of the Seattle Fault in Washington, United States. It has also been used to measure uplift at Mount St. Helens by comparing data from before and after the 2004 uplift. Airborne lidar systems monitor glaciers and have the ability to detect subtle amounts of growth or decline. A satellite-based system, NASA's ICESat, includes a lidar sub-system for this purpose. The NASA Airborne Topographic Mapper is also used extensively to monitor glaciers and perform coastal change analysis.
The combination is also used by soil scientists while creating a soil survey. The detailed terrain modeling allows soil scientists to see slope changes and landform breaks which indicate patterns in soil spatial relationships.
Atmosphere
Initially, based on ruby lasers, lidar for meteorological applications was constructed shortly after the invention of the laser and represents one of the first applications of laser technology. Lidar technology has since expanded vastly in capability and lidar systems are used to perform a range of measurements that include profiling clouds, measuring winds, studying aerosols, and quantifying various atmospheric components. Atmospheric components can in turn provide useful information including surface pressure (by measuring the absorption of oxygen or nitrogen), greenhouse gas emissions (carbon dioxide and methane), photosynthesis (carbon dioxide), fires (carbon monoxide), and humidity (water vapor). Atmospheric lidars can be either ground-based, airborne or satellite-based depending on the type of measurement.
Atmospheric lidar remote sensing works in two ways –
by measuring backscatter from the atmosphere, and
by measuring the scattered reflection off the ground (when the lidar is airborne) or other hard surface.
Backscatter from the atmosphere directly gives a measure of clouds and aerosols. Other measurements derived from backscatter, such as winds or cirrus ice crystals, require careful selection of the wavelength and/or polarization detected. Doppler lidar and Rayleigh Doppler lidar are used to measure temperature and wind speed along the beam by measuring the frequency of the backscattered light. The Doppler broadening of gases in motion allows the determination of properties via the resulting frequency shift. Scanning lidars, such as NASA's conical-scanning HARLIE, have been used to measure atmospheric wind velocity. The ESA wind mission ADM-Aeolus was equipped with a Doppler lidar system to provide global measurements of vertical wind profiles. A Doppler lidar system was used in the 2008 Summer Olympics to measure wind fields during the yacht competition.
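A minimal sketch of the Doppler relation underlying such wind measurements (line-of-sight speed from the measured round-trip frequency shift; the example numbers are purely illustrative):

def los_wind_speed(wavelength_m, doppler_shift_hz):
    """Line-of-sight speed of the scatterers from the round-trip Doppler shift."""
    return wavelength_m * doppler_shift_hz / 2.0

# Example: a 1.55 micrometre lidar observing a 12.9 MHz shift sees roughly 10 m/s along the beam.
print(los_wind_speed(1.55e-6, 12.9e6))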
Doppler lidar systems are also now beginning to be successfully applied in the renewable energy sector to acquire wind speed, turbulence, wind veer, and wind shear data. Both pulsed and continuous wave systems are being used. Pulsed systems use signal timing to obtain vertical distance resolution, whereas continuous wave systems rely on detector focusing.
The term, eolics, has been proposed to describe the collaborative and interdisciplinary study of wind using computational fluid mechanics simulations and Doppler lidar measurements.
The ground reflection of an airborne lidar gives a measure of surface reflectivity (assuming the atmospheric transmittance is well known) at the lidar wavelength; however, the ground reflection is typically used for making absorption measurements of the atmosphere. "Differential absorption lidar" (DIAL) measurements utilize two or more closely spaced (less than 1 nm) wavelengths to factor out surface reflectivity as well as other transmission losses, since these factors are relatively insensitive to wavelength. When tuned to the appropriate absorption lines of a particular gas, DIAL measurements can be used to determine the concentration (mixing ratio) of that particular gas in the atmosphere. This is referred to as an Integrated Path Differential Absorption (IPDA) approach, since it is a measure of the integrated absorption along the entire lidar path. IPDA lidars can be either pulsed or CW and typically use two or more wavelengths. IPDA lidars have been used for remote sensing of carbon dioxide and methane.
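A minimal sketch of the range-resolved DIAL retrieval (the standard two-wavelength relation, shown here under idealized assumptions; the cross-section, range and power values are purely illustrative and not from the source):

import math

def dial_number_density(p_on_r1, p_on_r2, p_off_r1, p_off_r2, delta_sigma_m2, r1_m, r2_m):
    """Mean gas number density (molecules per m^3) between ranges r1 and r2.
    p_on/p_off are returned powers at the absorbed ('on') and reference ('off') wavelengths."""
    return (1.0 / (2.0 * delta_sigma_m2 * (r2_m - r1_m))
            * math.log((p_on_r1 * p_off_r2) / (p_on_r2 * p_off_r1)))

# Illustrative numbers only: a stronger drop of the 'on' return between 1000 m and 1500 m
# implies absorption by the target gas within that range interval.
print(dial_number_density(1.0, 0.60, 1.0, 0.70, 5e-27, 1000.0, 1500.0))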
Synthetic array lidar allows imaging lidar without the need for an array detector. It can be used for imaging Doppler velocimetry, ultra-fast frame rate imaging (millions of frames per second), as well as for speckle reduction in coherent lidar. An extensive lidar bibliography for atmospheric and hydrospheric applications is given by Grant.
Law enforcement
Lidar speed guns are used by the police to measure the speed of vehicles for speed limit enforcement purposes. Additionally, it is used in forensics to aid in crime scene investigations. Scans of a scene are taken to record exact details of object placement, blood, and other important information for later review. These scans can also be used to determine bullet trajectory in cases of shootings.
Military
Few military applications are publicly known, as many are classified (such as the lidar-based speed measurement of the AGM-129 ACM stealth nuclear cruise missile), but a considerable amount of research is underway on their use for imaging. Higher-resolution systems collect enough detail to identify targets, such as tanks. Examples of military applications of lidar include the Airborne Laser Mine Detection System (ALMDS) for counter-mine warfare by Areté Associates.
A NATO report (RTO-TR-SET-098) evaluated the potential technologies to do stand-off detection for the discrimination of biological warfare agents. The potential technologies evaluated were Long-Wave Infrared (LWIR), Differential Scattering (DISC), and Ultraviolet Laser Induced Fluorescence (UV-LIF). The report concluded that: "Based upon the results of the lidar systems tested and discussed above, the Task Group recommends that the best option for the near-term (2008–2010) application of stand-off detection systems is UV-LIF; however, in the long-term, other techniques such as stand-off Raman spectroscopy may prove to be useful for identification of biological warfare agents."
Short-range compact spectrometric lidar based on Laser-Induced Fluorescence (LIF) would address the presence of bio-threats in aerosol form over critical indoor, semi-enclosed and outdoor venues such as stadiums, subways, and airports. This near real-time capability would enable rapid detection of a bioaerosol release and allow for timely implementation of measures to protect occupants and minimize the extent of contamination.
The Long-Range Biological Standoff Detection System (LR-BSDS) was developed for the U.S. Army to provide the earliest possible standoff warning of a biological attack. It is an airborne system carried by helicopter to detect synthetic aerosol clouds containing biological and chemical agents at long range. The LR-BSDS, with a detection range of 30 km or more, was fielded in June 1997.
Five lidar units produced by the German company Sick AG were used for short range detection on Stanley, the autonomous car that won the 2005 DARPA Grand Challenge.
A robotic Boeing AH-6 performed a fully autonomous flight in June 2010, including avoiding obstacles using lidar.
Mining
The calculation of ore volumes is accomplished by periodic (monthly) scanning in areas of ore removal, then comparing the surface data to the previous scan.
Lidar sensors may also be used for obstacle detection and avoidance for robotic mining vehicles such as in the Komatsu Autonomous Haulage System (AHS) used in Rio Tinto's Mine of the Future.
Physics and astronomy
A worldwide network of observatories uses lidars to measure the distance to reflectors placed on the Moon, allowing the position of the Moon to be measured with millimeter precision and tests of general relativity to be done. MOLA, the Mars Orbiter Laser Altimeter, used a lidar instrument on a Mars-orbiting satellite (the NASA Mars Global Surveyor) to produce a spectacularly precise global topographic survey of the red planet. Laser altimeters have also produced global elevation models of the Moon (Lunar Orbiter Laser Altimeter, LOLA), Mercury (Mercury Laser Altimeter, MLA), and the asteroid Eros (NEAR Shoemaker Laser Rangefinder, NLR). Future missions will also include laser altimeter experiments such as the Ganymede Laser Altimeter (GALA) as part of the Jupiter Icy Moons Explorer (JUICE) mission.
In September 2008, the NASA Phoenix lander used lidar to detect snow in the atmosphere of Mars.
In atmospheric physics, lidar is used as a remote detection instrument to measure densities of certain constituents of the middle and upper atmosphere, such as potassium, sodium, or molecular nitrogen and oxygen. These measurements can be used to calculate temperatures. Lidar can also be used to measure wind speed and to provide information about vertical distribution of the aerosol particles.
At the JET nuclear fusion research facility, in the UK near Abingdon, Oxfordshire, lidar Thomson scattering is used to determine electron density and temperature profiles of the plasma.
Rock mechanics
Lidar has been widely used in rock mechanics for rock mass characterization and slope change detection. Some important geomechanical properties from the rock mass can be extracted from the 3-D point clouds obtained by means of the lidar. Some of these properties are:
Discontinuity orientation
Discontinuity spacing and RQD
Discontinuity aperture
Discontinuity persistence
Discontinuity roughness
Water infiltration
Some of these properties have been used to assess the geomechanical quality of the rock mass through the RMR index. Moreover, as the orientations of discontinuities can be extracted using the existing methodologies, it is possible to assess the geomechanical quality of a rock slope through the SMR index. In addition to this, the comparison of different 3-D point clouds from a slope acquired at different times allows researchers to study the changes produced on the scene during this time interval as a result of rockfalls or any other landsliding processes.
THOR
THOR is a laser designed to measure Earth's atmospheric conditions. The laser enters cloud cover and measures the thickness of the return halo. The sensor has a fiber-optic aperture with a width of that is used to measure the return light.
Robotics
Lidar technology is being used in robotics for the perception of the environment as well as object classification. The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high-precision distance to the ground, and approach velocity can enable safe landing of robotic and crewed vehicles with a high degree of precision. Lidar is also widely used in robotics for simultaneous localization and mapping and is well integrated into robot simulators. Refer to the Military section above for further examples.
Spaceflight
Lidar is increasingly being utilized for rangefinding and orbital element calculation of relative velocity in proximity operations and stationkeeping of spacecraft. Lidar has also been used for atmospheric studies from space. Short pulses of laser light beamed from a spacecraft can reflect off tiny particles in the atmosphere and back to a telescope aligned with the spacecraft laser. By precisely timing the lidar echo, and by measuring how much laser light is received by the telescope, scientists can accurately determine the location, distribution and nature of the particles. The result is a revolutionary new tool for studying constituents in the atmosphere, from cloud droplets to industrial pollutants, which are difficult to detect by other means.
Laser altimetry is used to make digital elevation maps of planets, including the Mars Orbiter Laser Altimeter (MOLA) mapping of Mars, the Lunar Orbiter Laser Altimeter (LOLA) and Lunar Altimeter (LALT) mapping of the Moon, and the Mercury Laser Altimeter (MLA) mapping of Mercury. It is also used to help navigate the helicopter Ingenuity in its record-setting flights over the terrain of Mars.
Surveying
Airborne lidar sensors are used by companies in the remote sensing field. They can be used to create a DTM (Digital Terrain Model) or DEM (Digital Elevation Model); this is quite a common practice for larger areas as a plane can acquire wide swaths in a single flyover. Greater vertical accuracy of below can be achieved with a lower flyover, even in forests, where it is able to give the height of the canopy as well as the ground elevation. Typically, a GNSS receiver configured over a georeferenced control point is needed to link the data in with the WGS (World Geodetic System).
Lidar is also in use in hydrographic surveying. Depending upon the clarity of the water lidar can measure depths from with a vertical accuracy of and horizontal accuracy of .
Transport
Lidar has been used in the railroad industry to generate asset health reports for asset management and by departments of transportation to assess road conditions; CivilMaps.com is a leading company in the field. Lidar has been used in adaptive cruise control (ACC) systems for automobiles. Systems such as those by Siemens, Hella, Ouster and Cepton use a lidar device mounted on the front of the vehicle, such as the bumper, to monitor the distance between the vehicle and any vehicle in front of it. If the vehicle in front slows down or is too close, the ACC applies the brakes to slow the vehicle. When the road ahead is clear, the ACC allows the vehicle to accelerate to a speed preset by the driver. Refer to the Military section above for further examples. A lidar-based device, the ceilometer, is used at airports worldwide to measure the height of clouds on runway approach paths.
Wind farm optimization
Lidar can be used to increase the energy output from wind farms by accurately measuring wind speeds and wind turbulence. Experimental lidar systems can be mounted on the nacelle of a wind turbine or integrated into the rotating spinner to measure oncoming horizontal winds, winds in the wake of the wind turbine, and proactively adjust blades to protect components and increase power. Lidar is also used to characterise the incident wind resource for comparison with wind turbine power production to verify the performance of the wind turbine by measuring the wind turbine's power curve. Wind farm optimization can be considered a topic in applied eolics. Another aspect of lidar in wind related industry is to use computational fluid dynamics over lidar-scanned surfaces in order to assess the wind potential, which can be used for optimal wind farms placement.
Solar photovoltaic deployment optimization
Lidar can also be used to assist planners and developers in optimizing solar photovoltaic systems at the city level by determining appropriate rooftops and by quantifying shading losses. Recent airborne laser scanning efforts have focused on estimating the amount of solar light hitting vertical building facades and on incorporating more detailed shading losses by considering the influence of vegetation and larger surrounding terrain.
Video games
Recent simulation racing games such as rFactor Pro, iRacing, Assetto Corsa and Project CARS increasingly feature race tracks reproduced from 3-D point clouds acquired through lidar surveys, resulting in surfaces replicated with centimeter or millimeter precision in the in-game 3-D environment.
The 2017 exploration game Scanner Sombre, by Introversion Software, uses lidar as a fundamental game mechanic.
In Build the Earth, lidar is used to create accurate renders of terrain in Minecraft to account for any errors (mainly regarding elevation) in the default generation. The process of rendering terrain into Build the Earth is limited by the amount of data available for a region as well as the speed at which the file can be converted into block data.
Other uses
The video for the 2007 song "House of Cards" by Radiohead was believed to be the first use of real-time 3-D laser scanning to record a music video. The range data in the video is not completely from a lidar, as structured light scanning is also used.
In 2020, Apple introduced the fourth generation of iPad Pro with a lidar sensor integrated into the rear camera module, especially developed for augmented reality (AR) experiences. The feature was later included in the iPhone 12 Pro lineup and subsequent Pro models. On Apple devices, lidar empowers portrait mode pictures with night mode, quickens auto focus and improves accuracy in the Measure app.
In 2022, Wheel of Fortune started using lidar technology to track when Vanna White moves her hand over the puzzle board to reveal letters. The first episode to have this technology was in the season 40 premiere.
Alternative technologies
Computer stereo vision has shown promise as an alternative to lidar for close range applications.
See also
References
Further reading
Heritage, E. (2011). 3D Laser Scanning for Heritage: Advice and Guidance to Users on Laser Scanning in Archaeology and Architecture. English Heritage / Historic England. Available at www.english-heritage.org.uk.
Heritage, G., & Large, A. (Eds.). (2009). Laser Scanning for the Environmental Sciences. John Wiley & Sons.
Maltamo, M., Næsset, E., & Vauhkonen, J. (2014). Forestry Applications of Airborne Laser Scanning: Concepts and Case Studies (Vol. 27). Springer Science & Business Media.
Shan, J., & Toth, C. K. (Eds.). (2008). Topographic Laser Ranging and Scanning: Principles and Processing. CRC Press.
Vosselman, G., & Maas, H. G. (Eds.). (2010). Airborne and Terrestrial Laser Scanning. Whittles Publishing.
External links
The USGS Center for LIDAR Information Coordination and Knowledge (CLICK) – A website intended to "facilitate data access, user coordination and education of lidar remote sensing for scientific needs."
Meteorological instrumentation and equipment
Robotic sensing
Articles containing video clips | Lidar | [
"Technology",
"Engineering"
] | 11,205 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
41,961 | https://en.wikipedia.org/wiki/Data%20processing | Data processing is the collection and manipulation of digital data to produce meaningful information. Data processing is a form of information processing, which is the modification (processing) of information in any manner detectable by an observer.
Functions
Data processing may involve various processes (a short illustrative sketch follows this list), including:
Validation – Ensuring that supplied data is correct and relevant.
Sorting – "arranging items in some sequence and/or in different sets."
Summarization (statistical or automatic) – reducing detailed data to its main points.
Aggregation – combining multiple pieces of data.
Analysis – the "collection, organization, analysis, interpretation and presentation of data."
Reporting – list detail or summary data or computed information.
Classification – separation of data into various categories.
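As a rough, self-contained sketch of several of these functions applied to a toy record set (the field names, values and the validation rule are invented for illustration):

records = [
    {"region": "north", "amount": 120},
    {"region": "south", "amount": -5},       # invalid: negative amount
    {"region": "north", "amount": 80},
]

# Validation: keep only records whose amount is non-negative.
valid = [r for r in records if r["amount"] >= 0]

# Sorting: arrange the remaining records by amount.
ordered = sorted(valid, key=lambda r: r["amount"])

# Aggregation and summarization: combine records per region and reduce to totals.
totals = {}
for r in ordered:
    totals[r["region"]] = totals.get(r["region"], 0) + r["amount"]

# Reporting: list the computed summary.
print(totals)   # {'north': 200}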
History
The United States Census Bureau history illustrates the evolution of data processing from manual through electronic procedures.
Manual data processing
Although widespread use of the term data processing dates only from the 1950s, data processing functions have been performed manually for millennia. For example, bookkeeping involves functions such as posting transactions and producing reports like the balance sheet and the cash flow statement. Completely manual methods were augmented by the application of mechanical or electronic calculators. A person whose job was to perform calculations manually or using a calculator was called a "computer."
The 1890 United States census schedule was the first to gather data by individual rather than household. A number of questions could be answered by making a check in the appropriate box on the form. From 1850 to 1880 the Census Bureau employed "a system of tallying, which, by reason of the increasing number of combinations of classifications required, became increasingly complex. Only a limited number of combinations could be recorded in one tally, so it was necessary to handle the schedules 5 or 6 times, for as many independent tallies." "It took over 7 years to publish the results of the 1880 census" using manual processing methods.
Automatic data processing
The term automatic data processing was applied to operations performed by means of unit record equipment, such as Herman Hollerith's application of punched card equipment for the 1890 United States census. "Using Hollerith's punchcard equipment, the Census Office was able to complete tabulating most of the 1890 census data in 2 to 3 years, compared with 7 to 8 years for the 1880 census. It is estimated that using Hollerith's system saved some $5 million in processing costs" in 1890 dollars even though there were twice as many questions as in 1880.
Computerized data processing
Computerized data processing, or electronic data processing represents a later development, with a computer used instead of several independent pieces of equipment. The Census Bureau first made limited use of electronic computers for the 1950 United States census, using a UNIVAC I system, delivered in 1952.
Other developments
The term data processing has mostly been subsumed by the more general term information technology (IT). The older term "data processing" is suggestive of older technologies. For example, in 1996 the Data Processing Management Association (DPMA) changed its name to the Association of Information Technology Professionals. Nevertheless, the terms are approximately synonymous.
Applications
Commercial data processing
Commercial data processing involves a large volume of input data, relatively few computational operations, and a large volume of output. For example, an insurance company needs to keep records on tens or hundreds of thousands of policies, print and mail bills, and receive and post payments.
Data analysis
In science and engineering, the terms data processing and information systems are considered too broad, and the term data processing is typically used for the initial stage, followed by data analysis as the second stage of the overall data handling.
Data analysis uses specialized algorithms and statistical calculations that are less often observed in a typical general business environment. For data analysis, software suites like SPSS or SAS, or their free counterparts such as DAP, gretl, or PSPP, are often used. These tools are helpful for processing huge data sets, as they are able to handle enormous amounts of statistical analysis.
Systems
A data processing system is a combination of machines, people, and processes that for a set of inputs produces a defined set of outputs. The inputs and outputs are interpreted as data, facts, information etc. depending on the interpreter's relation to the system.
A term commonly used synonymously with data or storage (codes) processing system is information system. With regard particularly to electronic data processing, the corresponding concept is referred to as electronic data processing system.
Examples
Simple example
A very simple example of a data processing system is the process of maintaining a check register. Transactions (checks and deposits) are recorded as they occur, and the transactions are summarized to determine a current balance. Each month, the data recorded in the register is reconciled with a (hopefully identical) list of transactions processed by the bank.
A more sophisticated record keeping system might further identify the transactions— for example deposits by source or checks by type, such as charitable contributions. This information might be used to obtain information like the total of all contributions for the year.
The important thing about this example is that it is a system in which all transactions are recorded consistently and the same method of bank reconciliation is used each time.
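A minimal sketch of such a system in code (the transaction values and the bank's list are invented for illustration):

def balance(transactions):
    """Current balance: deposits are positive, checks are negative."""
    return sum(transactions)

def reconciled(register, bank_statement):
    """True when the register matches the bank's list of processed transactions."""
    return sorted(register) == sorted(bank_statement)

register = [500.00, -120.50, -60.25]        # one deposit, two checks
bank = [500.00, -60.25, -120.50]

print(balance(register))                    # 319.25
print(reconciled(register, bank))           # True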
Real-world example
This is a flowchart of a data processing system combining manual and computerized processing to handle accounts receivable, billing, and the general ledger.
See also
Big data
Computation
Computer science
Decision-making software
Information Age
Information and communications technology
Information technology
Scientific computing
Notes
External links
References
Further reading
Bourque, Linda B.; Clark, Virginia A. (1992) Processing Data: The Survey Example. (Quantitative Applications in the Social Sciences, no. 07-085). SAGE Publications.
Levy, Joseph (1967) Punched Card Data Processing. McGraw-Hill Book Company.
Computer data | Data processing | [
"Technology"
] | 1,160 | [
"Computer data",
"Data"
] |
41,968 | https://en.wikipedia.org/wiki/Gain%20%28electronics%29 | In electronics, gain is a measure of the ability of a two-port circuit (often an amplifier) to increase the power or amplitude of a signal from the input to the output port by adding energy converted from some power supply to the signal. It is usually defined as the mean ratio of the signal amplitude or power at the output port to the amplitude or power at the input port. It is often expressed using the logarithmic decibel (dB) units ("dB gain"). A gain greater than one (greater than zero dB), that is, amplification, is the defining property of an active device or circuit, while a passive circuit will have a gain of less than one.
The term gain alone is ambiguous, and can refer to the ratio of output to input voltage (voltage gain), current (current gain) or electric power (power gain). In the field of audio and general purpose amplifiers, especially operational amplifiers, the term usually refers to voltage gain, but in radio frequency amplifiers it usually refers to power gain. Furthermore, the term gain is also applied in systems such as sensors where the input and output have different units; in such cases the gain units must be specified, as in "5 microvolts per photon" for the responsivity of a photosensor. The "gain" of a bipolar transistor normally refers to forward current transfer ratio, either hFE ("beta", the static ratio of Ic divided by Ib at some operating point), or sometimes hfe (the small-signal current gain, the slope of the graph of Ic against Ib at a point).
The gain of an electronic device or circuit generally varies with the frequency of the applied signal. Unless otherwise stated, the term refers to the gain for frequencies in the passband, the intended operating frequency range of the equipment.
The term gain has a different meaning in antenna design; antenna gain is the ratio of the radiation intensity from a directional antenna to the mean radiation intensity from a lossless antenna.
Logarithmic units and decibels
Power gain
Power gain, in decibels (dB), is defined as follows:
gain in dB = 10 log10(P_out / P_in)
where P_in is the power applied to the input and P_out is the power from the output.
A similar calculation can be done using a natural logarithm instead of a decimal logarithm, resulting in nepers instead of decibels:
gain in Np = (1/2) ln(P_out / P_in)
Voltage gain
The power gain can be calculated using voltage instead of power by applying Joule's first law, P = V^2 / R; the formula is:
gain in dB = 10 log10( (V_out^2 / R_out) / (V_in^2 / R_in) )
In many cases, the input impedance R_in and output impedance R_out are equal, so the above equation can be simplified to:
gain in dB = 20 log10(V_out / V_in)
This simplified formula, the 20 log rule, is used to calculate a voltage gain in decibels and is equivalent to a power gain if and only if the impedances at input and output are equal.
Current gain
In the same way, when power gain is calculated using current instead of power, making the substitution P = I^2 R, the formula is:
gain in dB = 10 log10( (I_out^2 R_out) / (I_in^2 R_in) )
In many cases, the input and output impedances are equal, so the above equation can be simplified to:
gain in dB = 20 log10(I_out / I_in)
This simplified formula is used to calculate a current gain in decibels and is equivalent to the power gain if and only if the impedances at input and output are equal.
The "current gain" of a bipolar transistor, hFE or hfe, is normally given as a dimensionless number, the ratio of Ic to Ib (or the slope of the Ic-versus-Ib graph, for hfe).
In the cases above, gain will be a dimensionless quantity, as it is the ratio of like units (decibels are not used as units, but rather as a method of indicating a logarithmic relationship). In the bipolar transistor example, it is the ratio of the output current to the input current, both measured in amperes. In the case of other devices, the gain will have a value in SI units. Such is the case with the operational transconductance amplifier, which has an open-loop gain (transconductance) in siemens (mhos), because the gain is a ratio of the output current to the input voltage.
Example
Q. An amplifier has an input impedance of 50 ohms and drives a load of 50 ohms. When its input voltage V_in is 1 volt, its output voltage V_out is 10 volts. What are its voltage and power gains?
A. Voltage gain is simply:
A_V = V_out / V_in = 10 / 1 = 10 V/V
The units V/V are optional but make it clear that this figure is a voltage gain and not a power gain.
Using the expression for power, P = V^2/R, the power gain is:
G = (V_out^2 / R_load) / (V_in^2 / R_in) = (10^2 / 50) / (1^2 / 50) = 100 W/W
Again, the units W/W are optional. Power gain is more usually expressed in decibels, thus:
G_dB = 10 log10(100) = 20 dB
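A short Python check of the worked example above (10 log for power ratios, 20 log for voltage ratios when the input and output impedances are equal):

```python
import math

def power_gain_db(p_in: float, p_out: float) -> float:
    return 10 * math.log10(p_out / p_in)

def voltage_gain_db(v_in: float, v_out: float) -> float:
    # 20 log rule: equals the power gain in dB only when the
    # input and output impedances are equal.
    return 20 * math.log10(v_out / v_in)

# Example from the text: 1 V in, 10 V out, 50 ohms at both ports.
r = 50.0
p_in, p_out = 1.0 ** 2 / r, 10.0 ** 2 / r   # P = V^2 / R
print(voltage_gain_db(1.0, 10.0))  # 20.0 dB
print(power_gain_db(p_in, p_out))  # 20.0 dB
```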
Unity gain
A gain of factor 1 (equivalent to 0 dB) where both input and output are at the same voltage level and impedance is also known as unity gain.
See also
Active laser medium
Antenna gain
Aperture-to-medium coupling loss
Automatic gain control
Attenuation
Complex gain
DC offset
Effective radiated power
Gain before feedback
Insertion gain
Loop gain
Open-loop gain
Net gain
Power gain
Process gain
Transmitter power output
References
Antennas (radio)
Electronics concepts
Transfer functions
Electrical parameters | Gain (electronics) | [
"Engineering"
] | 1,060 | [
"Electrical engineering",
"Electrical parameters"
] |
41,976 | https://en.wikipedia.org/wiki/Franco%20Rasetti | Franco Dino Rasetti (August 10, 1901 – December 5, 2001) was an Italian (later naturalized American) physicist, paleontologist and botanist. Together with Enrico Fermi, he discovered key processes leading to nuclear fission. Rasetti refused to work on the Manhattan Project on moral grounds.
Life and career
Rasetti was born in Castiglione del Lago, Italy. He earned a Laurea in physics at the University of Pisa in 1923, and Fermi invited him to join his research group at the University of Rome.
In 1928-1929 during a stay at the California Institute of Technology (Caltech), he carried out experiments on the Raman effect. He measured a spectrum of dinitrogen in 1929 which provided the first experimental evidence that the atomic nucleus is not composed of protons and electrons, as was incorrectly believed at the time.
In 1930, he was appointed to the chair in spectroscopy at the Physics Institute of the University of Rome, at that time still located in Via Panisperna. His colleagues included Oscar D'Agostino, Emilio Segrè, Edoardo Amaldi, Ettore Majorana and Enrico Fermi, as well as the institute's director Orso Mario Corbino. Rasetti remained in this position until 1938.
Rasetti was one of Fermi's main collaborators in the study of neutrons and neutron-induced radioactivity. In 1934, he participated in the discovery of the artificial radioactivity of fluorine and aluminium which would be critical in the development of the atomic bomb.
In 1939 the advance of fascism and the deteriorating Italian political situation led him to leave Italy, following the example of his colleagues Fermi, Segrè and Bruno Pontecorvo. With Fermi he had discovered the key to nuclear fission, but unlike many of his colleagues, he refused for moral reasons to work on the Manhattan Project.
From 1939 to 1947, he taught at Laval University in Quebec City (Canada), where he was founding chairman of the physics department.
In 1947, he moved to the United States where he became a naturalized citizen in 1952. Until 1967, he held a chair in physics at Johns Hopkins University in Baltimore.
From the 1950s onward, he gradually shifted his commitment to naturalistic studies, which had been his great interest outside of physics already as a child. He devoted himself to geology, paleontology, entomology, and botany, becoming one of the most authoritative scholars of the Cambrian geological era.
He died in Waremme, Belgium at the age of 100. The Nature obituary described Rasetti as one of the most prolific generalists, whose work and writing are noted for their elegance, simplicity and beauty.
Raman spectroscopy and the model of the atomic nucleus
After the discovery of Raman scattering by organic liquids, Rasetti decided to study the same phenomenon in gases at high pressure during his stay at Caltech in 1928–29. The spectra showed vibrational transitions with rotational fine structure. In the homonuclear diatomic molecules H2, N2 and O2, Rasetti found an alternation of strong and weak lines. This alternation was explained by Gerhard Herzberg and Walter Heitler as a consequence of nuclear spin isomerism.
For dihydrogen, each nucleus is a proton of spin 1/2, so that it can be shown using quantum mechanics and the Pauli exclusion principle that the odd rotational levels are more populated than the even levels. The transitions originating from odd levels are therefore more intense as observed by Rasetti. In dinitrogen, however, Rasetti observed that the lines originating from even levels are more intense. This implies by a similar analysis that the nuclear spin of nitrogen is an integer.
This result was difficult to understand at the time, however, because the neutron had not yet been discovered, and it was thought that the 14N nucleus contains 14 protons and 7 electrons, or an odd number (21) of particles in total which would correspond to a half-integral spin. The Raman spectrum observed by Rasetti provided the first experimental evidence that this proton-electron model of the nucleus is inadequate, because the predicted half-integral spin has as a consequence that transitions from odd rotational levels would be more intense than those from even levels, due to nuclear spin isomerism as shown by Herzberg and Heitler for dihydrogen. After the discovery of the neutron in 1932, Werner Heisenberg proposed that the nucleus contains protons and neutrons, and the 14N nucleus contains 7 protons and 7 neutrons. The even total number (14) of particles corresponds to an integral spin in agreement with Rasetti's spectrum.
He is also credited with the first example of electronic (as opposed to vibronic) Raman scattering in nitric oxide.
Awards
In 1952 he was awarded the Charles Doolittle Walcott Medal by the National Academy of Sciences for his contributions to Cambrian paleontology.
References
External links
Franco Dino Rasetti Papers from the Smithsonian Institution Archives
1901 births
2001 deaths
People from Castiglione del Lago
Italian emigrants to the United States
20th-century Italian physicists
Italian nuclear physicists
20th-century American physicists
American nuclear physicists
Academic staff of Université Laval
Italian men centenarians
American men centenarians
Academic staff of the Sapienza University of Rome
Charles Doolittle Walcott Medal winners
Italian paleontologists
Italian exiles
Spectroscopists | Franco Rasetti | [
"Physics",
"Chemistry"
] | 1,103 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Spectroscopy"
] |
41,985 | https://en.wikipedia.org/wiki/Shortest%20path%20problem | In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized.
The problem of finding the shortest path between two intersections on a road map may be modeled as a special case of the shortest path problem in graphs, where the vertices correspond to intersections and the edges correspond to road segments, each weighted by the length or distance of each segment.
Definition
The shortest path problem can be defined for graphs whether undirected, directed, or mixed. The definition for undirected graphs states that every edge can be traversed in either direction. Directed graphs require that consecutive vertices be connected by an appropriate directed edge.
Two vertices are adjacent when they are both incident to a common edge. A path in an undirected graph is a sequence of vertices P = (v_1, v_2, ..., v_n) such that v_i is adjacent to v_{i+1} for 1 ≤ i < n. Such a path P is called a path of length n − 1 from v_1 to v_n. (The v_i are variables; their numbering relates to their position in the sequence and need not relate to a canonical labeling.)
Let e_{i,j} be the edge incident to both v_i and v_j. Given a real-valued weight function f : E → R and an undirected (simple) graph G, the shortest path from v to v' is the path P = (v_1, v_2, ..., v_n) (where v_1 = v and v_n = v') that, over all possible n, minimizes the sum of the edge weights f(e_{1,2}) + f(e_{2,3}) + ... + f(e_{n−1,n}). When each edge in the graph has unit weight, this is equivalent to finding the path with fewest edges.
The problem is also sometimes called the single-pair shortest path problem, to distinguish it from the following variations:
The single-source shortest path problem, in which we have to find shortest paths from a source vertex v to all other vertices in the graph.
The single-destination shortest path problem, in which we have to find shortest paths from all vertices in the directed graph to a single destination vertex v. This can be reduced to the single-source shortest path problem by reversing the arcs in the directed graph.
The all-pairs shortest path problem, in which we have to find shortest paths between every pair of vertices v, v' in the graph.
These generalizations have significantly more efficient algorithms than the simplistic approach of running a single-pair shortest path algorithm on all relevant pairs of vertices.
Algorithms
Several well-known algorithms exist for solving this problem and its variants.
Dijkstra's algorithm solves the single-source shortest path problem with only non-negative edge weights.
Bellman–Ford algorithm solves the single-source problem if edge weights may be negative.
A* search algorithm solves for single-pair shortest path using heuristics to try to speed up the search.
Floyd–Warshall algorithm solves all pairs shortest paths.
Johnson's algorithm solves all pairs shortest paths, and may be faster than Floyd–Warshall on sparse graphs.
Viterbi algorithm solves the shortest stochastic path problem with an additional probabilistic weight on each node.
Additional algorithms and associated evaluations may be found in .
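As one concrete illustration of the algorithms listed above, a minimal sketch of Dijkstra's algorithm (single-source, non-negative edge weights) in Python; the small graph at the bottom is an invented example, not taken from the article.

```python
import heapq

def dijkstra(graph: dict, source) -> dict:
    """Shortest distances from source in a graph with non-negative edge weights.

    graph maps each vertex to a list of (neighbor, weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            new_dist = d + w
            if new_dist < dist.get(v, float("inf")):
                dist[v] = new_dist
                heapq.heappush(heap, (new_dist, v))
    return dist

# Invented example graph (undirected, so each edge is listed in both directions).
graph = {
    "a": [("b", 7), ("c", 9), ("f", 14)],
    "b": [("a", 7), ("c", 10), ("d", 15)],
    "c": [("a", 9), ("b", 10), ("d", 11), ("f", 2)],
    "d": [("b", 15), ("c", 11), ("e", 6)],
    "e": [("d", 6), ("f", 9)],
    "f": [("a", 14), ("c", 2), ("e", 9)],
}
print(dijkstra(graph, "a"))  # shortest distance from "a" to "e" is 20 (a-c-f-e)
```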
Single-source shortest paths
Undirected graphs
Unweighted graphs
Directed acyclic graphs
An algorithm using topological sorting can solve the single-source shortest path problem in linear time, Θ(E + V), in arbitrarily-weighted directed acyclic graphs.
Directed graphs with nonnegative weights
The following table is taken from , with some corrections and additions.
A green background indicates an asymptotically best bound in the table; L is the maximum length (or weight) among all edges, assuming integer edge weights.
Directed graphs with arbitrary weights without negative cycles
Directed graphs with arbitrary weights with negative cycles
Finds a negative cycle or calculates distances to all vertices.
Planar graphs with nonnegative weights
Applications
Network flows are a fundamental concept in graph theory and operations research, often used to model problems involving the transportation of goods, liquids, or information through a network. A network flow problem typically involves a directed graph where each edge represents a pipe, wire, or road, and each edge has a capacity, which is the maximum amount that can flow through it. The goal is to find a feasible flow that maximizes the flow from a source node to a sink node.
Shortest Path Problems can be used to solve certain network flow problems, particularly when dealing with single-source, single-sink networks. In these scenarios, we can transform the network flow problem into a series of shortest path problems.
Transformation Steps
1. Create a Residual Graph:
For each edge (u, v) in the original graph, create two edges in the residual graph:
(u, v) with capacity c(u, v)
(v, u) with capacity 0
The residual graph represents the remaining capacity available in the network.
2. Find the Shortest Path:
Use a shortest path algorithm (e.g., Dijkstra's algorithm, Bellman-Ford algorithm) to find the shortest path from the source node to the sink node in the residual graph.
3. Augment the Flow:
Find the minimum capacity along the shortest path.
Increase the flow on the edges of the shortest path by this minimum capacity.
Decrease the capacity of the edges in the forward direction and increase the capacity of the edges in the backward direction.
4. Update the Residual Graph:
Update the residual graph based on the augmented flow.
5. Repeat:
Repeat steps 2-4 until no more paths can be found from the source to the sink.
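A minimal sketch of the procedure above in Python; it uses breadth-first search as the shortest-path step (shortest in number of edges on the residual graph), which makes it essentially the Edmonds–Karp variant of the augmenting-path method. The capacities at the bottom are an invented example.

```python
from collections import deque

def max_flow(capacity: dict, source, sink) -> float:
    """capacity[u][v] is the capacity of edge (u, v); missing edges have capacity 0."""
    # Step 1: build the residual graph, adding reverse edges with capacity 0.
    residual = {u: dict(neighbors) for u, neighbors in capacity.items()}
    for u, neighbors in capacity.items():
        for v in neighbors:
            residual.setdefault(v, {}).setdefault(u, 0)

    def shortest_augmenting_path():
        # Step 2: BFS finds a shortest (fewest-edge) source-sink path with spare capacity.
        parent = {source: None}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    if v == sink:
                        return parent
                    queue.append(v)
        return None

    flow = 0
    while (parent := shortest_augmenting_path()) is not None:
        # Step 3: augment by the minimum residual capacity along the path.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                 # Step 4: update the residual graph.
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck                # Step 5: repeat until no path remains.
    return flow

# Invented example: maximum flow from "s" to "t" is 5.
capacity = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(capacity, "s", "t"))  # 5
```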
All-pairs shortest paths
The all-pairs shortest path problem finds the shortest paths between every pair of vertices , in the graph. The all-pairs shortest paths problem for unweighted directed graphs was introduced by , who observed that it could be solved by a linear number of matrix multiplications that takes a total time of .
Undirected graph
Directed graph
Applications
Shortest path algorithms are applied to automatically find directions between physical locations, such as driving directions on web mapping websites like MapQuest or Google Maps. For this application fast specialized algorithms are available.
If one represents a nondeterministic abstract machine as a graph where vertices describe states and edges describe possible transitions, shortest path algorithms can be used to find an optimal sequence of choices to reach a certain goal state, or to establish lower bounds on the time needed to reach a given state. For example, if vertices represent the states of a puzzle like a Rubik's Cube and each directed edge corresponds to a single move or turn, shortest path algorithms can be used to find a solution that uses the minimum possible number of moves.
In a networking or telecommunications mindset, this shortest path problem is sometimes called the min-delay path problem and usually tied with a widest path problem. For example, the algorithm may seek the shortest (min-delay) widest path, or widest shortest (min-delay) path.
A more lighthearted application is the games of "six degrees of separation" that try to find the shortest path in graphs like movie stars appearing in the same film.
Other applications, often studied in operations research, include plant and facility layout, robotics, transportation, and VLSI design.
Road networks
A road network can be considered as a graph with positive weights. The nodes represent road junctions and each edge of the graph is associated with a road segment between two junctions. The weight of an edge may correspond to the length of the associated road segment, the time needed to traverse the segment, or the cost of traversing the segment. Using directed edges it is also possible to model one-way streets. Such graphs are special in the sense that some edges are more important than others for long-distance travel (e.g. highways). This property has been formalized using the notion of highway dimension. There are a great number of algorithms that exploit this property and are therefore able to compute the shortest path a lot quicker than would be possible on general graphs.
All of these algorithms work in two phases. In the first phase, the graph is preprocessed without knowing the source or target node. The second phase is the query phase. In this phase, source and target node are known. The idea is that the road network is static, so the preprocessing phase can be done once and used for a large number of queries on the same road network.
The algorithm with the fastest known query time is called hub labeling and is able to compute shortest path on the road networks of Europe or the US in a fraction of a microsecond. Other techniques that have been used are:
ALT (A* search, landmarks, and triangle inequality)
Arc flags
Contraction hierarchies
Transit node routing
Reach-based pruning
Labeling
Hub labels
Related problems
For shortest path problems in computational geometry, see Euclidean shortest path.
The shortest multiple disconnected path is a representation of the primitive path network within the framework of Reptation theory. The widest path problem seeks a path so that the minimum label of any edge is as large as possible.
Other related problems may be classified into the following categories.
Paths with constraints
Unlike the shortest path problem, which can be solved in polynomial time in graphs without negative cycles, shortest path problems which include additional constraints on the desired solution path are called Constrained Shortest Path First, and are harder to solve. One example is the constrained shortest path problem, which attempts to minimize the total cost of the path while at the same time maintaining another metric below a given threshold. This makes the problem NP-complete (such problems are not believed to be efficiently solvable for large sets of data, see P = NP problem). Another NP-complete example requires a specific set of vertices to be included in the path, which makes the problem similar to the Traveling Salesman Problem (TSP). The TSP is the problem of finding the shortest path that goes through every vertex exactly once, and returns to the start. The problem of finding the longest path in a graph is also NP-complete.
Partial observability
The Canadian traveller problem and the stochastic shortest path problem are generalizations where either the graph is not completely known to the mover, changes over time, or where actions (traversals) are probabilistic.
Strategic shortest paths
Sometimes, the edges in a graph have personalities: each edge has its own selfish interest. An example is a communication network, in which each edge is a computer that possibly belongs to a different person. Different computers have different transmission speeds, so every edge in the network has a numeric weight equal to the number of milliseconds it takes to transmit a message. Our goal is to send a message between two points in the network in the shortest time possible. If we know the transmission-time of each computer (the weight of each edge), then we can use a standard shortest-paths algorithm. If we do not know the transmission times, then we have to ask each computer to tell us its transmission-time. But, the computers may be selfish: a computer might tell us that its transmission time is very long, so that we will not bother it with our messages. A possible solution to this problem is to use a variant of the VCG mechanism, which gives the computers an incentive to reveal their true weights.
Negative cycle detection
In some cases, the main goal is not to find the shortest path, but only to detect if the graph contains a negative cycle. Some shortest-paths algorithms can be used for this purpose:
The Bellman–Ford algorithm can be used to detect a negative cycle in time O(|V||E|).
Cherkassky and Goldberg survey several other algorithms for negative cycle detection.
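A minimal sketch of the Bellman–Ford-style check in Python: relax all edges |V| − 1 times, then report a negative cycle if any edge can still be relaxed. The edge list at the bottom is an invented example.

```python
def has_negative_cycle(num_vertices: int, edges: list) -> bool:
    """edges is a list of (u, v, weight) tuples for a directed graph
    with vertices numbered 0 .. num_vertices - 1."""
    # Starting every distance at 0 acts like a virtual source connected to all
    # vertices, so cycles are detected anywhere in the graph.
    dist = [0.0] * num_vertices
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If any edge can still be relaxed, some negative cycle exists.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# Invented example: the cycle 1 -> 2 -> 3 -> 1 has total weight -1.
edges = [(0, 1, 4.0), (1, 2, 1.0), (2, 3, -3.0), (3, 1, 1.0)]
print(has_negative_cycle(4, edges))  # True
```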
General algebraic framework on semirings: the algebraic path problem
Many problems can be framed as a form of the shortest path for some suitably substituted notions of addition along a path and taking the minimum. The general approach to these is to consider the two operations to be those of a semiring. Semiring multiplication is done along the path, and the addition is between paths. This general framework is known as the algebraic path problem.
Most of the classic shortest-path algorithms (and new ones) can be formulated as solving linear systems over such algebraic structures.
More recently, an even more general framework for solving these (and much less obviously related problems) has been developed under the banner of valuation algebras.
Shortest path in stochastic time-dependent networks
In real-life, a transportation network is usually stochastic and time-dependent. The travel duration on a road segment depends on many factors such as the amount of traffic (origin-destination matrix), road work, weather, accidents and vehicle breakdowns. A more realistic model of such a road network is a stochastic time-dependent (STD) network.
There is no accepted definition of optimal path under uncertainty (that is, in stochastic road networks). It is a controversial subject, despite considerable progress during the past decade. One common definition is a path with the minimum expected travel time. The main advantage of this approach is that it can make use of efficient shortest path algorithms for deterministic networks. However, the resulting optimal path may not be reliable, because this approach fails to address travel time variability.
To tackle this issue, some researchers use the travel duration distribution instead of its expected value. They find the probability distribution of total travel duration using different optimization methods such as dynamic programming and Dijkstra's algorithm. These methods use stochastic optimization, specifically stochastic dynamic programming, to find the shortest path in networks with probabilistic arc length. The terms travel time reliability and travel time variability are used as opposites in the transportation research literature: the higher the variability, the lower the reliability of predictions.
To account for variability, researchers have suggested two alternative definitions for an optimal path under uncertainty. The most reliable path is one that maximizes the probability of arriving on time given a travel time budget. An α-reliable path is one that minimizes the travel time budget required to arrive on time with a given probability.
See also
Bidirectional search, an algorithm that finds the shortest path between two vertices on a directed graph
Euclidean shortest path
Flow network
K shortest path routing
Min-plus matrix multiplication
Pathfinding
Shortest Path Bridging
Shortest path tree
TRILL (TRansparent Interconnection of Lots of Links)
References
Notes
Bibliography
Attributes Dijkstra's algorithm to Minty ("private communication") on p. 225.
Further reading
DTIC AD-661265.
Network theory
Graph distance
Polynomial-time problems
Computational problems in graph theory
Edsger W. Dijkstra | Shortest path problem | [
"Mathematics"
] | 2,918 | [
"Computational problems in graph theory",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Polynomial-time problems",
"Network theory",
"Mathematical relations",
"Mathematical problems",
"Graph distance"
] |
41,993 | https://en.wikipedia.org/wiki/Intensity%20%28physics%29 | In physics and many other areas of science and engineering the intensity or flux of radiant energy is the power transferred per unit area, where the area is measured on the plane perpendicular to the direction of propagation of the energy. In the SI system, it has units watts per square metre (W/m2), or kg⋅s−3 in base units. Intensity is used most frequently with waves such as acoustic waves (sound), matter waves such as electrons in electron microscopes, and electromagnetic waves such as light or radio waves, in which case the average power transfer over one period of the wave is used. Intensity can be applied to other circumstances where energy is transferred. For example, one could calculate the intensity of the kinetic energy carried by drops of water from a garden sprinkler.
The word "intensity" as used here is not synonymous with "strength", "amplitude", "magnitude", or "level", as it sometimes is in colloquial speech.
Intensity can be found by taking the energy density (energy per unit volume) at a point in space and multiplying it by the velocity at which the energy is moving. The resulting vector has the units of power divided by area (i.e., surface power density). The intensity of a wave is proportional to the square of its amplitude. For example, the intensity of an electromagnetic wave is proportional to the square of the wave's electric field amplitude.
Mathematical description
If a point source is radiating energy in all directions (producing a spherical wave), and no energy is absorbed or scattered by the medium, then the intensity decreases in proportion to the distance from the object squared. This is an example of the inverse-square law.
Applying the law of conservation of energy, if the net power emanating is constant,
P = ∮ I · dA,
where
P is the net power radiated;
I is the intensity vector as a function of position;
the magnitude |I| is the intensity as a function of position;
dA is a differential element of a closed surface that contains the source.
If one integrates a uniform intensity, |I| = const., over a surface that is perpendicular to the intensity vector, for instance over a sphere centered around the point source, the equation becomes
P = |I| · A_surf = |I| · 4πr²,
where
|I| is the intensity at the surface of the sphere;
r is the radius of the sphere;
A_surf = 4πr² is the expression for the surface area of a sphere.
Solving for |I| gives
|I| = P / (4πr²)
If the medium is damped, then the intensity drops off more quickly than the above equation suggests.
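A small numerical sketch of the inverse-square relation |I| = P / (4πr²) in Python; the 100 W source is an arbitrary illustration.

```python
import math

def intensity_point_source(power_watts: float, radius_m: float) -> float:
    """Intensity (W/m^2) at distance r from an isotropic point source,
    assuming nothing is absorbed or scattered along the way."""
    return power_watts / (4 * math.pi * radius_m ** 2)

# Doubling the distance reduces the intensity by a factor of four.
print(intensity_point_source(100.0, 1.0))  # ~7.96 W/m^2
print(intensity_point_source(100.0, 2.0))  # ~1.99 W/m^2
```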
Anything that can transmit energy can have an intensity associated with it. For a monochromatic propagating electromagnetic wave, such as a plane wave or a Gaussian beam, if E is the complex amplitude of the electric field, then the time-averaged energy density of the wave, travelling in a non-magnetic material, is given by:
⟨U⟩ = (n² ε₀ / 2) |E|²,
and the local intensity is obtained by multiplying this expression by the wave velocity, c/n:
I = (n c ε₀ / 2) |E|²,
where
n is the refractive index;
c is the speed of light in vacuum;
ε₀ is the vacuum permittivity.
For non-monochromatic waves, the intensity contributions of different spectral components can simply be added. The treatment above does not hold for arbitrary electromagnetic fields. For example, an evanescent wave may have a finite electrical amplitude while not transferring any power. The intensity should then be defined as the magnitude of the Poynting vector.
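A numerical sketch of the plane-wave relation I = (1/2) n c ε₀ |E|² for a non-magnetic medium, written in Python; the 1 kV/m field amplitude is an arbitrary illustration.

```python
SPEED_OF_LIGHT = 299_792_458.0           # m/s
VACUUM_PERMITTIVITY = 8.8541878128e-12   # F/m

def plane_wave_intensity(e_amplitude: float, refractive_index: float = 1.0) -> float:
    """Time-averaged intensity (W/m^2) of a monochromatic plane wave in a
    non-magnetic medium, given the electric-field amplitude |E| in V/m."""
    return 0.5 * refractive_index * SPEED_OF_LIGHT * VACUUM_PERMITTIVITY * e_amplitude ** 2

# Arbitrary example: a 1 kV/m amplitude wave in vacuum.
print(plane_wave_intensity(1000.0))  # roughly 1.3 kW/m^2
```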
Electron beams
For electron beams, intensity is the probability of electrons reaching some particular position on a detector (e.g. a charge-coupled device) which is used to produce images that are interpreted in terms of both microstructure of inorganic or biological materials, as well as atomic scale structure. The map of the intensity of scattered electrons or x-rays as a function of direction is also extensively used in crystallography.
Alternative definitions
In photometry and radiometry intensity has a different meaning: it is the luminous or radiant power per unit solid angle. This can cause confusion in optics, where intensity can mean any of radiant intensity, luminous intensity or irradiance, depending on the background of the person using the term. Radiance is also sometimes called intensity, especially by astronomers and astrophysicists, and in heat transfer.
See also
Field strength
Sound intensity
Magnitude (astronomy)
Footnotes
References
Optical quantities
Radiometry
Physical quantities | Intensity (physics) | [
"Physics",
"Mathematics",
"Engineering"
] | 858 | [
"Physical phenomena",
"Telecommunications engineering",
"Physical quantities",
"Quantity",
"Optical quantities",
"Physical properties",
"Radiometry"
] |
41,997 | https://en.wikipedia.org/wiki/Twin%20prime | A twin prime is a prime number that is either 2 less or 2 more than another prime number—for example, either member of the twin prime pair or In other words, a twin prime is a prime that has a prime gap of two. Sometimes the term twin prime is used for a pair of twin primes; an alternative name for this is prime twin or prime pair.
Twin primes become increasingly rare as one examines larger ranges, in keeping with the general tendency of gaps between adjacent primes to become larger as the numbers themselves get larger. However, it is unknown whether there are infinitely many twin primes (the so-called twin prime conjecture) or if there is a largest pair. The breakthrough work of Yitang Zhang in 2013, as well as work by James Maynard, Terence Tao and others, has made substantial progress towards proving that there are infinitely many twin primes, but at present this remains unsolved.
Properties
Usually the pair (2, 3) is not considered to be a pair of twin primes.
Since 2 is the only even prime, this pair is the only pair of prime numbers that differ by one; thus twin primes are as closely spaced as possible for any other two primes.
The first several twin prime pairs are
(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73), ...
Five is the only prime that belongs to two pairs, as every twin prime pair greater than (3, 5) is of the form (6n − 1, 6n + 1) for some natural number n; that is, the number between the two primes is a multiple of 6.
As a result, the sum of any pair of twin primes (other than 3 and 5) is divisible by 12.
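A minimal Python sketch that lists the first twin prime pairs and checks the two properties just stated (each pair above (3, 5) straddles a multiple of 6, and its sum is divisible by 12):

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def twin_prime_pairs(limit: int):
    """Yield twin prime pairs (p, p + 2) with p + 2 <= limit."""
    for p in range(2, limit - 1):
        if is_prime(p) and is_prime(p + 2):
            yield (p, p + 2)

pairs = list(twin_prime_pairs(100))
print(pairs)  # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
# Every pair above (3, 5) has the form (6n - 1, 6n + 1), so p + 1 is a multiple
# of 6 and the pair sum 12n is divisible by 12.
print(all((p + 1) % 6 == 0 and (p + q) % 12 == 0 for p, q in pairs if p > 3))  # True
```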
Brun's theorem
In 1915, Viggo Brun showed that the sum of reciprocals of the twin primes was convergent.
This famous result, called Brun's theorem, was the first use of the Brun sieve and helped initiate the development of modern sieve theory. The modern version of Brun's argument can be used to show that the number of twin primes less than N does not exceed
C N / (log N)²
for some absolute constant C > 0.
In fact, it is bounded above by
8 C₂ N / (log N)² · (1 + O(log log N / log N)),
where C₂ is the twin prime constant (slightly less than 2/3), given below.
Twin prime conjecture
The question of whether there exist infinitely many twin primes has been one of the great open questions in number theory for many years. This is the content of the twin prime conjecture, which states that there are infinitely many primes p such that p + 2 is also prime. In 1849, de Polignac made the more general conjecture that for every natural number k, there are infinitely many primes p such that p + 2k is also prime.
The case k = 1 of de Polignac's conjecture is the twin prime conjecture.
A stronger form of the twin prime conjecture, the Hardy–Littlewood conjecture (see below), postulates a distribution law for twin primes akin to the prime number theorem.
On 17 April 2013, Yitang Zhang announced a proof that there exists an even integer N less than 70 million such that there are infinitely many pairs of primes that differ by N. Zhang's paper was accepted in early May 2013. Terence Tao subsequently proposed a Polymath Project collaborative effort to optimize Zhang's bound.
One year after Zhang's announcement, the bound had been reduced to 246, where it remains.
These improved bounds were discovered using a different approach that was simpler than Zhang's and was discovered independently by James Maynard and Terence Tao. This second approach also gave bounds for the smallest interval width f(m) needed to guarantee that infinitely many intervals of width f(m) contain at least m primes. Moreover (see also the next section), assuming the Elliott–Halberstam conjecture and its generalized form, the Polymath Project wiki states that the bound is 12 and 6, respectively.
A strengthening of Goldbach’s conjecture, if proved, would also prove there is an infinite number of twin primes, as would the existence of Siegel zeroes.
Other theorems weaker than the twin prime conjecture
In 1940, Paul Erdős showed that there is a constant c < 1 and infinitely many primes p such that p′ − p < c ln p, where p′ denotes the next prime after p. What this means is that we can find infinitely many intervals that contain two primes as long as we let these intervals grow slowly in size as we move to bigger and bigger primes. Here, "grow slowly" means that the length of these intervals can grow logarithmically. This result was successively improved; in 1986 Helmut Maier showed that a constant c < 0.25 can be used. In 2004 Daniel Goldston and Cem Yıldırım showed that the constant could be improved further to c = 0.085786… In 2005, Goldston, Pintz, and Yıldırım established that c can be chosen to be arbitrarily small,
i.e. the limit inferior, as p tends to infinity, of (p′ − p) / ln p is 0.
On the other hand, this result does not rule out that there may not be infinitely many intervals that contain two primes if we only allow the intervals to grow in size as, for example, c ln ln p.
By assuming the Elliott–Halberstam conjecture or a slightly weaker version, they were able to show that there are infinitely many such that at least two of , , , , , , or are prime. Under a stronger hypothesis they showed that for infinitely many , at least two of , , , and are prime.
The result of Yitang Zhang, that the limit inferior of p_{n+1} − p_n (as n tends to infinity) is less than 70 million, is a major improvement on the Goldston–Graham–Pintz–Yıldırım result. The Polymath Project optimization of Zhang's bound and the work of Maynard have reduced the bound: the limit inferior is at most 246.
Conjectures
First Hardy–Littlewood conjecture
The first Hardy–Littlewood conjecture (named after G. H. Hardy and John Littlewood) is a generalization of the twin prime conjecture. It is concerned with the distribution of prime constellations, including twin primes, in analogy to the prime number theorem. Let π₂(x) denote the number of primes p ≤ x such that p + 2 is also prime. Define the twin prime constant C₂ as
C₂ = ∏ (1 − 1/(p − 1)²) ≈ 0.660161...
(Here the product extends over all prime numbers p ≥ 3.) Then a special case of the first Hardy–Littlewood conjecture is that
π₂(x) ~ 2 C₂ x / (ln x)² ~ 2 C₂ ∫₂ˣ dt / (ln t)²
in the sense that the quotient of the two expressions tends to 1 as x approaches infinity. (The second ~ is not part of the conjecture and is proven by integration by parts.)
The conjecture can be justified (but not proven) by assuming that 1 / ln t describes the density function of the prime distribution. This assumption, which is suggested by the prime number theorem, implies the twin prime conjecture, as shown in the formula for π₂(x) above.
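A rough numerical check of the conjectured density in Python, counting twin prime pairs up to x and comparing against the cruder asymptotic form 2·C₂·x/(ln x)². The sieve limit of 10**6, the choice x = 10**5, and the truncation of the product defining C₂ are arbitrary choices for illustration.

```python
import math

def primes_up_to(limit: int) -> list:
    is_p = [False, False] + [True] * (limit - 1)
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = [False] * len(is_p[i * i :: i])
    return [n for n, flag in enumerate(is_p) if flag]

primes = primes_up_to(10 ** 6)
prime_set = set(primes)

# Actual count of twin prime pairs (p, p + 2) with p + 2 <= x.
x = 10 ** 5
pi2 = sum(1 for p in primes if p + 2 <= x and p + 2 in prime_set)

# Twin prime constant C2 = product over odd primes p of p(p - 2) / (p - 1)^2,
# truncated at 10**6, which already fixes the first few decimal places.
c2 = 1.0
for p in primes[1:]:  # skip the prime 2
    c2 *= p * (p - 2) / (p - 1) ** 2

estimate = 2 * c2 * x / math.log(x) ** 2
print(pi2)            # actual number of twin prime pairs below 10**5
print(round(c2, 4))   # ~0.6602
# The simple x/(ln x)^2 form converges slowly; the integral form in the
# conjecture is considerably closer to the actual count at small x.
print(round(estimate))
```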
The fully general first Hardy–Littlewood conjecture on prime -tuples (not given here) implies that the second Hardy–Littlewood conjecture is false.
This conjecture has been extended by Dickson's conjecture.
Polignac's conjecture
Polignac's conjecture from 1849 states that for every positive even integer k, there are infinitely many consecutive prime pairs p and p′ such that p′ − p = k (i.e. there are infinitely many prime gaps of size k). The case k = 2 is the twin prime conjecture. The conjecture has not yet been proven or disproven for any specific value of k, but Zhang's result proves that it is true for at least one (currently unknown) value of k. Indeed, if such a k did not exist, then for any positive even natural number N there would be at most finitely many n such that p_{n+1} − p_n = m for every even m ≤ N, and so for n large enough we would have p_{n+1} − p_n > N, which would contradict Zhang's result.
Large twin primes
Beginning in 2007, two distributed computing projects, Twin Prime Search and PrimeGrid, have produced several record-largest twin primes. The largest twin prime pair known, discovered in September 2016, is 2996863034895 × 2^1290000 ± 1, with 388,342 decimal digits.
There are 808,675,888,577,436 twin prime pairs below 10^18.
An empirical analysis of all prime pairs up to 4.35 × 10^15 shows that if the number of such pairs less than x is f(x) · x / (log x)², then f(x) is about 1.7 for small x and decreases towards about 1.3 as x tends to infinity. The limiting value of f(x) is conjectured to equal twice the twin prime constant (2 C₂ ≈ 1.32, not to be confused with Brun's constant), according to the Hardy–Littlewood conjecture.
Other elementary properties
Every third odd number is divisible by 3, and therefore no three successive odd numbers can be prime unless one of them is 3. Therefore, 5 is the only prime that is part of two twin prime pairs. The lower member of a pair is by definition a Chen prime.
If m − 4 or m + 6 is also prime then the three primes are called a prime triplet.
It has been proven that the pair (m, m + 2) is a twin prime if and only if 4[(m − 1)! + 1] ≡ −m (mod m(m + 2)).
For a twin prime pair of the form (6n − 1, 6n + 1) for some natural number n > 1, n must end in the digit 0, 2, 3, 5, 7, or 8. If n were to end in 1 or 6, 6n would end in 6, and 6n − 1 would be a multiple of 5. This is not prime unless n = 1. Likewise, if n were to end in 4 or 9, 6n would end in 4, and 6n + 1 would be a multiple of 5. The same rule applies modulo any prime p ≥ 5: if n ≡ ±6^(−1) (mod p), then one of the pair will be divisible by p and will not be a twin prime pair unless 6n = p ± 1. p = 5 just happens to produce particularly simple patterns in base 10.
Isolated prime
An isolated prime (also known as single prime or non-twin prime) is a prime number p such that neither p − 2 nor p + 2 is prime. In other words, p is not part of a twin prime pair. For example, 23 is an isolated prime, since 21 and 25 are both composite.
The first few isolated primes are
2, 23, 37, 47, 53, 67, 79, 83, 89, 97, ... .
It follows from Brun's theorem that almost all primes are isolated in the sense that the ratio of the number of isolated primes less than a given threshold n and the number of all primes less than n tends to 1 as n tends to infinity.
See also
Cousin prime
Prime gap
Prime k-tuple
Prime quadruplet
Prime triplet
Sexy prime
References
Further reading
External links
Top-20 Twin Primes at Chris Caldwell's Prime Pages
Xavier Gourdon, Pascal Sebah: Introduction to Twin Primes and Brun's Constant
"Official press release" of 58711-digit twin prime record
The 20 000 first twin primes
Polymath: Bounded gaps between primes
Sudden Progress on Prime Number Problem Has Mathematicians Buzzing
Classes of prime numbers
Unsolved problems in number theory | Twin prime | [
"Mathematics"
] | 2,191 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Unsolved problems in number theory",
"Number theory"
] |
41,999 | https://en.wikipedia.org/wiki/Franz%20Mertens | Franz Mertens (20 March 1840 – 5 March 1927) (also known as Franciszek Mertens) was a Polish mathematician. He was born in Schroda in the Grand Duchy of Posen, Kingdom of Prussia (now Środa Wielkopolska, Poland) and died in Vienna, Austria.
The Mertens function M(x) is the sum function for the Möbius function, in the theory of arithmetic functions. The Mertens conjecture concerning its growth, conjecturing it bounded by x^(1/2), which would have implied the Riemann hypothesis, is now known to be false (Odlyzko and te Riele, 1985). The Meissel–Mertens constant is analogous to the Euler–Mascheroni constant, but the harmonic series sum in its definition is only over the primes rather than over all integers and the logarithm is taken twice, not just once. Mertens's theorems are three 1874 results related to the density of prime numbers.
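A short Python sketch computing M(x) by sieving the Möbius function, and checking the (now disproven) Mertens bound |M(n)| < n^(1/2) over a small range; the limit of 10**4 is an arbitrary illustration, and the known counterexamples to the conjecture occur at enormously larger values.

```python
import math

def mobius_sieve(limit: int) -> list:
    """mu[n] for 0 <= n <= limit."""
    mu = [1] * (limit + 1)
    mu[0] = 0
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(2 * p, limit + 1, p):
                is_prime[m] = False
            for m in range(p, limit + 1, p):
                mu[m] *= -1              # one more distinct prime factor
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0                # squared factor: mu vanishes
    return mu

limit = 10 ** 4
mu = mobius_sieve(limit)

# Mertens function M(x) = sum of mu(n) for 1 <= n <= x.
mertens, running = [0] * (limit + 1), 0
for n in range(1, limit + 1):
    running += mu[n]
    mertens[n] = running

print(mertens[1:6])  # [1, 0, -1, -1, -2]
# The Mertens conjecture |M(n)| < sqrt(n) (for n > 1) holds throughout this
# small range, even though it is known to fail for some astronomically large n.
print(all(abs(mertens[n]) < math.sqrt(n) for n in range(2, limit + 1)))  # True
```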
Erwin Schrödinger was taught calculus and algebra by Mertens.
His memory is honoured by the Franciszek Mertens Scholarship granted (from 2017) to those outstanding pupils of foreign secondary schools who wish to study at the Faculty of Mathematics and Computer Science of the Jagiellonian University in Kraków and were finalists of the national-level mathematics, or computer science olympiads, or they have participated in one of the following international olympiads: in mathematics (IMO), computer science (IOI), artificial intelligence (IOAI), astronomy (IAO), astronomy and astrophysics (IOAA), physics (IPhO), linguistics (IOL), European Girls' Mathematical Olympiad (EGMO), European Girls’ Olympiad in Informatics (EGOI), Romanian Masters of Mathematics (RMM), Romanian Masters of Informatics (RMI) or International Zhautykov Olympiad (IZhO).
See also
Mertens's theorems
Cauchy product
References
External links
1840 births
1927 deaths
People from Środa Wielkopolska
People from the Province of Posen
Mathematicians from the Kingdom of Prussia
Polish mathematicians
Mathematicians from Austria-Hungary
Austrian mathematicians
19th-century German mathematicians
20th-century German mathematicians
Humboldt University of Berlin alumni
Academic staff of Jagiellonian University
Academic staff of the University of Vienna
Number theorists | Franz Mertens | [
"Mathematics"
] | 479 | [
"Number theorists",
"Number theory"
] |
42,001 | https://en.wikipedia.org/wiki/Antonie%20van%20Leeuwenhoek | Antonie Philips van Leeuwenhoek ( ; ; 24 October 1632 – 26 August 1723) was a Dutch microbiologist and microscopist in the Golden Age of Dutch science and technology. A largely self-taught man in science, he is commonly known as "the Father of Microbiology", and one of the first microscopists and microbiologists. Van Leeuwenhoek is best known for his pioneering work in microscopy and for his contributions toward the establishment of microbiology as a scientific discipline.
Raised in Delft, Dutch Republic, Van Leeuwenhoek worked as a draper in his youth and founded his own shop in 1654. He became well-recognized in municipal politics and developed an interest in lensmaking. In the 1670s, he started to explore microbial life with his microscope.
Using single-lensed microscopes of his own design and make, Van Leeuwenhoek was the first to observe and to experiment with microbes, which he originally referred to by Dutch terms meaning "little animals" (later translated as animalcules). He was the first to determine their relative size. Most of the "animalcules" are now referred to as unicellular organisms, although he observed multicellular organisms in pond water. He was also the first to document microscopic observations of muscle fibers, bacteria, spermatozoa, red blood cells, crystals in gouty tophi, and among the first to see blood flow in capillaries. Although Van Leeuwenhoek did not write any books, he described his discoveries in chaotic letters to the Royal Society, which published many of his letters in their Philosophical Transactions.
Early life and career
Antonie van Leeuwenhoek was born in Delft, Dutch Republic, on 24 October 1632. On 4 November, he was baptized as Thonis. His father, Philips Antonisz van Leeuwenhoek, was a basket maker who died when Antonie was only five years old. His mother, Margaretha (Bel van den Berch), came from a well-to-do brewer's family. She remarried Jacob Jansz Molijn, a painter, and the family moved to Warmond around 1640. Antonie had four older sisters: Margriet, Geertruyt, Neeltje, and Catharina. When he was around ten years old, his step-father died. He was sent to live in Benthuizen with his uncle, an attorney. At the age of 16 he became a bookkeeper's apprentice (cashier) at a linen-draper's shop at Warmoesstraat in Amsterdam, which was owned by William Davidson. Van Leeuwenhoek left there after six years.
In July 1654, Van Leeuwenhoek married Barbara de Mey in Delft, with whom he fathered one surviving daughter, Maria (four other children died in infancy). He would live and study for the rest of his life at Hypolytusbuurt in a house he bought in 1655. He opened a draper's shop, selling linen, yarn and ribbon to seamstresses and tailors. His status in Delft grew throughout the years. In 1660 he received a lucrative job as chamberlain for the sheriffs in the city hall, a position which he would hold for almost 40 years. His duties included maintaining the premises, heating, cleaning, opening for meetings, performing duties for those assembled, and maintaining silence on all matters discussed there.
In 1669 he was appointed as a land surveyor by the court of Holland; at some time he combined it with another municipal job, being the official "wine-gauger" of Delft and in charge of the city wine imports and taxation. His wife had died in 1666, and in 1671, Van Leeuwenhoek remarried to Cornelia Swalmius with whom he had no children.
Van Leeuwenhoek was a contemporary of another famous Delft citizen, the painter Johannes Vermeer, who was baptized just four days earlier. It has been suggested that he is the man portrayed in two Vermeer paintings of the late 1660s, The Astronomer and The Geographer, but others argue that there appears to be little physical similarity. Because they were both relatively important men in a city with only 24,000 inhabitants, both living close to the main market, it is likely they knew each other. Van Leeuwenhoek acted as the executor of Vermeer's will when the painter died in 1675.
Van Leeuwenhoek's religion was "Dutch Reformed" and Calvinist. Like Jan Swammerdam he often referred with reverence to the wonders God designed in making creatures great and small, and believed that his discoveries were merely further proof of the wonder of creation.
Microscopic study
While running his draper shop, Van Leeuwenhoek wanted to see the quality of the thread better than what was possible using the magnifying lenses of the time. He developed an interest in lensmaking, although few records exist of his early activity. By placing the middle of a small rod of soda lime glass in a hot flame, one can pull the hot section apart to create two long whiskers of glass. Then, by reinserting the end of one whisker into the flame, a very small, high-quality glass lens is created. Significantly, a May 2021 neutron tomography study of a high-magnification Leeuwenhoek microscope captured images of the short glass stem characteristic of this lens creation method. For lower magnifications he also made ground lenses. To help keep his methods confidential he apparently intentionally encouraged others to think grinding was his primary or only lens construction method.
Recognition by the Royal Society
After developing his method for creating powerful lenses and applying them to the study of the microscopic world, Van Leeuwenhoek introduced his work to his friend, the prominent Dutch physician Reinier de Graaf. When the Royal Society in London published the groundbreaking work of an Italian lensmaker in their journal Philosophical Transactions of the Royal Society, de Graaf wrote to the editor of the journal, Henry Oldenburg, with a ringing endorsement of Van Leeuwenhoek's microscopes which, he claimed, "far surpass those which we have hitherto seen". In response, in 1673 the society published a letter from Van Leeuwenhoek that included his microscopic observations on mold, bees, and lice. Then, in 1674, Van Leeuwenhoek made his most significant discovery. Starting from the assumption that life and motility are similar, he determined that the moving objects observed under his microscope were little animals. He later recorded his observations in his diary.
Van Leeuwenhoek's work fully captured the attention of the Royal Society, and he began corresponding regularly with the society regarding his observations. At first he had been reluctant to publicize his findings, regarding himself as a businessman with little scientific, artistic, or writing background, but de Graaf urged him to be more confident in his work. By the time Van Leeuwenhoek died in 1723, he had written some 190 letters to the Royal Society, detailing his findings in a wide variety of fields, centered on his work in microscopy. He only wrote letters in his own colloquial Dutch; he never published a proper scientific paper in Latin. He strongly preferred to work alone, distrusting the sincerity of those who offered their assistance. The letters were translated into Latin or English by Henry Oldenburg, who had learned Dutch for this very purpose. He was also the first to use the word animalcules to translate the Dutch words that Leeuwenhoek used to describe microorganisms. Despite the initial success of Van Leeuwenhoek's relationship with the Royal Society, soon relations became severely strained. His credibility was questioned when he sent the Royal Society a copy of his first observations of microscopic single-celled organisms dated 9 October 1676. Previously, the existence of single-celled organisms was entirely unknown. Thus, even with his established reputation with the Royal Society as a reliable observer, his observations of microscopic life were initially met with some skepticism.
Eventually, in the face of Van Leeuwenhoek's insistence, the Royal Society arranged for Alexander Petrie, minister to the English Reformed Church in Delft; Benedict Haan, at that time Lutheran minister at Delft; and Henrik Cordes, then Lutheran minister at the Hague, accompanied by Sir Robert Gordon and four others, to determine whether it was in fact Van Leeuwenhoek's ability to observe and reason clearly, or perhaps, the Royal Society's theories of life that might require reform. Finally in 1677, Van Leeuwenhoek's observations were fully acknowledged by the Royal Society.
Antonie van Leeuwenhoek was elected to the Royal Society in February 1680 on the nomination of William Croone, a then-prominent physician. Van Leeuwenhoek was "taken aback" by the nomination, which he considered a high honour, although he did not attend the induction ceremony in London, nor did he ever attend a Royal Society meeting. He had his portrait painted by Jan Verkolje with the certificate signed by James II of England on the table beside him.
Scientific fame
By the end of the seventeenth century, Van Leeuwenhoek had a virtual monopoly on microscopic study and discovery. His contemporary Robert Hooke, an early microscope pioneer, bemoaned that the field had come to rest entirely on one man's shoulders. In 1673, his first letter was published in the journal of the Royal Society of London. He was visited over the years by many notable individuals who gazed at the tiny creatures. One of the first was Jan Swammerdam. Around 1675, it was Johan Huydecoper, who was very interested in collecting and growing plants for his estate Goudestein, becoming in 1682 manager of the Hortus Botanicus Amsterdam. Christiaan Huygens, Leibniz (1676), John Locke (1678, 1685), James II of England (1679), William III of Orange, Mary II of England and Thomas Molyneux (in 1685) visited. In October 1697, Van Leeuwenhoek visited the Tsar Peter the Great on his boat, moored in the Schie or the Arsenaal. On this occasion, he presented the Tsar with an "eel-viewer", so Peter could study blood circulation whenever he wanted. In 1706, it was Govert Bidloo; in 1714, Richard Bradley (botanist); and, in 1716, Herman Boerhaave and Frederik Ruysch. To the disappointment of his guests, Van Leeuwenhoek refused to reveal the cutting-edge microscopes he relied on for his discoveries, instead showing visitors a collection of average-quality lenses.
Techniques
Antonie van Leeuwenhoek made more than 500 optical lenses. He also created at least 25 single-lens microscopes, of differing types, of which only nine have survived. These microscopes were made of silver or copper frames, holding hand-made lenses. Those that have survived are capable of magnification up to 275 times. It is suspected that Van Leeuwenhoek possessed some microscopes that could magnify up to 500 times. Although he has been widely regarded as a dilettante or amateur, his scientific research was of remarkably high quality.
The single-lens microscopes of Van Leeuwenhoek were relatively small devices, the largest being about 5 cm long. They are used by placing the lens very close in front of the eye. The other side of the microscope had a pin, where the sample was attached in order to stay close to the lens. There were also three screws to move the pin and the sample along three axes: one axis to change the focus, and the two other axes to navigate through the sample.
Van Leeuwenhoek maintained throughout his life that there are aspects of microscope construction "which I only keep for myself", in particular his most critical secret of how he made the lenses. For many years no one was able to reconstruct Van Leeuwenhoek's design techniques, but, in 1957, C.L. Stong used thin glass thread fusing instead of polishing, and successfully created some working samples of a Van Leeuwenhoek design microscope. Such a method was also discovered independently by A. Mosolov and A. Belkin at the Russian Novosibirsk State Medical Institute. In May 2021, researchers in the Netherlands published a non-destructive neutron tomography study of a Leeuwenhoek microscope. One image in particular shows a Stong/Mosolov-type spherical lens with a single short glass stem attached (Fig. 4). Such lenses are created by pulling an extremely thin glass filament, breaking the filament, and briefly fusing the filament end. The neutron tomography article notes this lens creation method was first devised by Robert Hooke rather than Leeuwenhoek, which is ironic given Hooke's subsequent surprise at Leeuwenhoek's findings.
Van Leeuwenhoek used samples and measurements to estimate numbers of microorganisms in units of water. He also made good use of the huge advantage provided by his method. He studied a broad range of microscopic phenomena, and shared the resulting observations freely with groups such as the British Royal Society. Such work firmly established his place in history as one of the first and most important explorers of the microscopic world. Van Leeuwenhoek was one of the first people to observe cells, much like Robert Hooke. He also corresponded with Antonio Magliabechi.
Discoveries
Leeuwenhoek was one of the first to conduct experiments on himself. It was from his finger that blood was drawn for examination, and he placed pieces of his skin under a microscope, examining its structure in various parts of the body, and counting the number of vessels that permeate it.
Both Marcello Malpighi and Jan Swammerdam saw these structures before Leeuwenhoek, but Leeuwenhoek was the first to recognize what they are: red blood cells.
Infusoria (protists in modern zoological classification), in 1674
In 1675, he was studying a variety of minerals, especially salts, and parts of plants and animals.
The vacuole of the cell in 1676
Spermatozoa, in 1677
The banded pattern of muscular fibers, in 1682
Bacteria, (e.g., large Selenomonads from the human mouth), in 1683
It seems he used horseradish to find out what causes irritation on the tongue. He used the effect of vinegar.
Leeuwenhoek diligently began to search for his animalcules. He found them everywhere: in rotten water, in ditches, on his own teeth. "Although I am now fifty years old," he wrote to the Royal Society, "my teeth are well preserved, because I am in the habit of rubbing them with salt every morning." He described periodontitis.
In 1684 he published his research on the ovary.
In 1687, Van Leeuwenhoek reported his research on the coffee bean. He roasted the bean, cut it into slices and saw a spongy interior. The bean was pressed, and an oil appeared. He boiled the coffee with rain water twice and set it aside.
Leeuwenhoek corresponded regularly with Anthonie Heinsius, the Delft pensionary in the States of Holland and in 1687 member of the board of the Delft chamber of the VOC.
In 1696 Nicolaas Witsen sent him a map of Tartary and ore found near the Amur in Siberia.
Van Leeuwenhoek has been recognized as the first person to use a histological stain to color specimens observed under the microscope using saffron. He used this technique only once.
In 1702 he requested a book on Peruvian silver mines in Potosí.
Like Robert Boyle and Nicolaas Hartsoeker, Van Leeuwenhoek was interested in dried cochineal, trying to find out if the dye came from a berry or an insect.
He studied rainwater, the seeds of oranges, worms in sheep's liver, the eye of a whale, the blood of fishes, mites, coccinellidae, the skin of elephants, Celandine, and Cinchona.
Legacy and recognition
By the end of his life, Van Leeuwenhoek had written approximately 560 letters to the Royal Society and other scientific institutions concerning his observations and discoveries. Even during the last weeks of his life, Van Leeuwenhoek continued to send letters full of observations to London. The last few contained a precise description of his own illness. He suffered from a rare disease, an uncontrolled movement of the midriff (diaphragm), which is now named Van Leeuwenhoek's disease. He died at the age of 90, on 26 August 1723, and was buried four days later in the Oude Kerk in Delft.
In 1981, the British microscopist Brian J. Ford found that Van Leeuwenhoek's original specimens had survived in the collections of the Royal Society of London. They were found to be of high quality, and all were well preserved. Ford carried out observations with a range of single-lens microscopes, adding to our knowledge of Van Leeuwenhoek's work. In Ford's opinion, Leeuwenhoek remained imperfectly understood, the popular view that his work was crude and undisciplined at odds with the evidence of conscientious and painstaking observation. He constructed rational and repeatable experimental procedures and was willing to oppose received opinion, such as spontaneous generation, and he changed his mind in the light of evidence.
On his importance in the history of microbiology and science in general, the British biochemist Nick Lane wrote that he was "the first even to think of looking—certainly, the first with the power to see." His experiments were ingenious, and he was "a scientist of the highest calibre", attacked by people who envied him or "scorned his unschooled origins", not helped by his secrecy about his methods.
The Antoni van Leeuwenhoek Hospital in Amsterdam, named after Van Leeuwenhoek, specializes in oncology. In 2004, a public poll in the Netherlands to determine the greatest Dutchman ("De Grootste Nederlander") named Van Leeuwenhoek the 4th-greatest Dutchman of all time.
On 24 October 2016, Google commemorated the 384th anniversary of Van Leeuwenhoek's birth with a Doodle that depicted his discovery of "little animals" or animalcules, now known as unicellular organisms.
The Leeuwenhoek Medal, Leeuwenhoek Lecture, Leeuwenhoek crater, Leeuwenhoeckia, Levenhookia (a genus in the family Stylidiaceae), Leeuwenhoekiella (an aerobic bacterial genus), and the scientific publication Antonie van Leeuwenhoek: International Journal of General and Molecular Microbiology are named after him.
As a fictional character, he appears as a flea circus owner, microscopist and magician in E.T.A. Hoffmann's novel Master Flea, together with Jan Swammerdam.
See also
Animalcule
Regnier de Graaf
Dutch Golden Age
History of microbiology
Microscopy
Microscope
Robert Hooke
Microscopic discovery of microorganisms
Microscopic scale
Science and technology in the Dutch Republic
Scientific Revolution
Nicolas Steno
Jan Swammerdam
Timeline of microscope technology
Johannes Vermeer
Notes
References
Sources
Cobb, Matthew: Generation: The Seventeenth-Century Scientists Who Unraveled the Secrets of Sex, Life, and Growth. (US: Bloomsbury, 2006)
Cobb, Matthew: The Egg and Sperm Race: The Seventeenth-Century Scientists Who Unlocked the Secrets of Sex and Growth. (London: Simon & Schuster, 2006)
Davids, Karel: The Rise and Decline of Dutch Technological Leadership: Technology, Economy and Culture in the Netherlands, 1350–1800 [2 vols.]. (Brill, 2008)
Ford, Brian J.: Single Lens: The Story of the Simple Microscope. (London: William Heinemann, 1985, 182 pp)
Ford, Brian J.: The Revealing Lens: Mankind and the Microscope. (London: George Harrap, 1973, 208 pp)
Fournier, Marian: The Fabric of Life: The Rise and Decline of Seventeenth-Century Microscopy (Johns Hopkins University Press, 1996)
Ratcliff, Marc J.: The Quest for the Invisible: Microscopy in the Enlightenment. (Ashgate, 2009, 332 pp)
Robertson, Lesley; Backer, Jantien et al.: Antoni van Leeuwenhoek: Master of the Minuscule. (Brill, 2016)
Struik, Dirk J.: The Land of Stevin and Huygens: A Sketch of Science and Technology in the Dutch Republic during the Golden Century (Studies in the History of Modern Science). (Springer, 1981, 208 pp)
Wilson, Catherine: The Invisible World: Early Modern Philosophy and the Invention of the Microscope. (Princeton University Press, 1997)
External links
Letters to the Royal Society (archived)
The Correspondence of Anthonie van Leeuwenhoek in EMLO
Lens on Leeuwenhoek (site on Leeuwenhoek's life and observations; archived)
Vermeer connection website
University of California, Berkeley article on van Leeuwenhoek
Retrospective paper on the Leeuwenhoek research by Brian J. Ford.
Images seen through a van Leeuwenhoek microscope by Brian J. Ford.
Instructions on making a van Leeuwenhoek Microscope Replica by Alan Shinn (archived)
1632 births
1723 deaths
17th-century Dutch businesspeople
17th-century Dutch inventors
17th-century Dutch naturalists
17th-century Dutch biologists
18th-century Dutch biologists
Burials at the Oude Kerk, Delft
Dutch Calvinist and Reformed Christians
Dutch microbiologists
Fellows of the Royal Society
Microscopists
People from Delft
Protistologists
Dutch scientific instrument makers | Antonie van Leeuwenhoek | [
"Chemistry"
] | 4,613 | [
"Microscopists",
"Microscopy"
] |
42,005 | https://en.wikipedia.org/wiki/Collaborative%20software | Collaborative software or groupware is application software designed to help people working on a common task to attain their goals. One of the earliest definitions of groupware is "intentional group processes plus software to support them."
Regarding available interaction, collaborative software may be divided into real-time collaborative editing platforms that allow multiple users to engage in live, simultaneous, and reversible editing of a single file (usually a document); and version control (also known as revision control and source control) platforms, which allow users to make parallel edits to a file, while preserving every saved edit by users as multiple files that are variants of the original file.
Collaborative software is a broad concept that overlaps considerably with computer-supported cooperative work (CSCW). According to Carstensen and Schmidt (1999), groupware is part of CSCW. The authors claim that CSCW, and thereby groupware, addresses "how collaborative activities and their coordination can be supported by means of computer systems."
The use of collaborative software in the work space creates a collaborative working environment (CWE).
Collaborative software relates to the notion of collaborative work systems, which are conceived as any form of human organization that emerges any time that collaboration takes place, whether it is formal or informal, intentional or unintentional. Whereas the groupware or collaborative software pertains to the technological elements of computer-supported cooperative work, collaborative work systems become a useful analytical tool to understand the behavioral and organizational variables that are associated to the broader concept of CSCW.
History
Douglas Engelbart first envisioned collaborative computing in 1951 and documented his vision in 1962, with working prototypes in full operational use by his research team by the mid-1960s. He held the first public demonstration of his work in 1968 in what is now referred to as "The Mother of All Demos". The following year, Engelbart's lab was hooked into the ARPANET, the first computer network, enabling them to extend services to a broader userbase.
Online collaborative gaming software began between early networked computer users. In 1975, Will Crowther created Colossal Cave Adventure on a DEC PDP-10 computer. As internet connections grew, so did the numbers of users and multi-user games. In 1978 Roy Trubshaw, a student at University of Essex in the United Kingdom, created the game MUD (Multi-User Dungeon).
The US Government began using truly collaborative applications in the early 1990s. One of the first robust applications was the Navy's Common Operational Modeling, Planning and Simulation Strategy (COMPASS). The COMPASS system allowed up to 6 users to create point-to-point connections with one another; the collaborative session only remained while at least one user stayed active, and would have to be recreated if all six logged out. MITRE improved on that model by hosting the collaborative session on a server into which each user logged. Called the Collaborative Virtual Workstation (CVW), it allowed the session to be set up in a virtual file cabinet and virtual rooms, and left as a persistent session that could be joined later.
In 1996, Pavel Curtis, who had built MUDs at PARC, created PlaceWare, a server that simulated a one-to-many auditorium, with side chat between "seat-mates", and the ability to invite a limited number of audience members to speak. In 1997, engineers at GTE used the PlaceWare engine in a commercial version of MITRE's CVW, calling it InfoWorkSpace (IWS). In 1998, IWS was chosen as the military standard for the standardized Air Operations Center. The IWS product was sold to General Dynamics and then later to Ezenia.
Groupware
Collaborative software was originally designated as groupware and this term can be traced as far back as the late 1980s, when Richman and Slovak (1987) wrote: "Like an electronic sinew that binds teams together, the new groupware aims to place the computer squarely in the middle of communications among managers, technicians, and anyone else who interacts in groups, revolutionizing the way they work."
In 1978, Peter and Trudy Johnson-Lenz coined the term groupware; their initial 1978 definition of groupware was, "intentional group processes plus software to support them." Later in their article they went on to explain groupware as "computer-mediated culture... an embodiment of social organization in hyperspace." Groupware integrates co-evolving human and tool systems, yet is simply a single system.
In the early 1990s the first commercial groupware products were delivered, and big companies such as Boeing and IBM started using electronic meeting systems for key internal projects. Lotus Notes appeared as a major example of that product category, allowing remote group collaboration when the internet was still in its infancy. Kirkpatrick and Losee (1992) wrote then: "If GROUPWARE really makes a difference in productivity long term, the very definition of an office may change. You will be able to work efficiently as a member of a group wherever you have your computer. As computers become smaller and more powerful, that will mean anywhere." In 1999, Achacoso created and introduced the first wireless groupware.
Design and implementation
The complexity of groupware development is still an issue. One reason is the socio-technical dimension of groupware. Groupware designers do not only have to address technical issues (as in traditional software development) but also consider the organizational aspects and the social group processes that should be supported with the groupware application. Some examples for issues in groupware development are:
Persistence is needed in some sessions. Chat and voice communications are routinely non-persistent and evaporate at the end of the session. Virtual room and online file cabinets can persist for years. The designer of the collaborative space needs to consider the information duration needs and implement accordingly.
Authentication has always been a problem with groupware. When connections are made point-to-point, or when log-in registration is enforced, it is clear who is engaged in the session. However, audio and unmoderated sessions carry the risk of unannounced 'lurkers' who observe but do not announce themselves or contribute.
Until recently, bandwidth issues at fixed locations limited full use of the tools. These are exacerbated with mobile devices.
Multiple input and output streams bring concurrency issues into the groupware applications.
Motivational issues are important, especially in settings without pre-defined group processes in place.
Closely related to the motivation aspect is the question of reciprocity. Ellis and others have shown that the distribution of efforts and benefits has to be carefully balanced in order to ensure that all required group members really participate.
Real-time communication via groupware can lead to a lot of noise, over-communication, and information overload.
One approach for addressing these issues is the use of design patterns for groupware design. The patterns identify recurring groupware design issues and discuss design choices in a way that all stakeholders can participate in the groupware development process.
Levels of collaboration
Groupware can be divided into three categories depending on the level of collaboration:
Communication can be thought of as unstructured interchange of information. A phone call and an instant messaging discussion are examples.
Conferencing (or collaboration level, as it is called in academic papers) refers to interactive work toward a shared goal. Brainstorming and voting are examples.
Coordination refers to complex interdependent work toward a shared goal. A good metaphor is a sports team: everyone has to contribute the right play at the right time and adjust their play to the unfolding situation, while each member is doing something different, in order for the team to win.
Collaborative management (coordination) tools
Collaborative management tools facilitate and manage group activities. Examples include:
Document collaboration systems — help people work together on a single document or file to achieve a single final version
Electronic calendars (also called time management software) — schedule events and automatically notify and remind group members
Project management systems — schedule, track, and chart the steps in a project as it is being completed
Online proofing — share, review, approve, and reject web proofs, artwork, photos, or videos between designers, customers, and clients
Workflow systems — collaborative management of tasks and documents within a knowledge-based business process
Knowledge management systems — collect, organize, manage, and share various forms of information
Enterprise bookmarking — collaborative bookmarking engine to tag, organize, share, and search enterprise data
Extranet systems (sometimes also known as 'project extranets') — collect, organize, manage, and share information associated with the delivery of a project (e.g., the construction of a building)
Intranet systems — quickly share company information via internet to members within a company (e.g., marketing and product info)
Social software systems — organize social relations of groups
Online spreadsheets — collaborate and share structured data and information
Client portals — interact and share with clients in a private online environment
Collaborative software and human interaction
The design intent of collaborative software (groupware) is to transform the way documents and rich media are shared in order to enable more effective team collaboration.
Collaboration, with respect to information technology, seems to have several definitions. Some are defensible but others are so broad they lose any meaningful application. Understanding the differences in human interactions is necessary to ensure the appropriate technologies are employed to meet interaction needs.
There are three primary ways in which humans interact: conversations, transactions, and collaborations.
Conversational interaction is an exchange of information between two or more participants where the primary purpose of the interaction is discovery or relationship building. There is no central entity around which the interaction revolves; rather, it is a free exchange of information with no defined constraints, generally focused on personal experiences. Communication technology such as telephones, instant messaging, and e-mail are generally sufficient for conversational interactions.
Transactional interaction involves the exchange of transaction entities where a major function of the transaction entity is to alter the relationship between participants.
In collaborative interaction, the main function of the participants' relationship is to alter a collaboration entity (i.e., the converse of transactional). When teams collaborate on projects it is collaborative project management.
See also
Collaboration technologies
Enterprise portal
Intranet portal
List of collaborative software
List of social bookmarking websites
Closely related terms
Computer supported cooperative work
Integrated collaboration environment
Type of applications
Content management system
Customer relationship management software
Document management system
Enterprise content management
Intranet
Other related type of applications
Massively distributed collaboration
Online consultation
Online deliberation
Other related terms
Cloud collaboration
Collaborative innovation network
Commons-based peer production
Electronic business
Information technology management
Management information systems
Management
MediaWiki
Office of the future
Operational transformation
Organizational Memory System
Remote work
Wikipedia
Worknet
References
Lockwood, A. (2008). The Project Manager's Perspective on Project Management Software Packages. Avignon, France. Retrieved February 24, 2009.
Pedersen, A.A. (2008). Collaborative Project Management. Retrieved February 25, 2009.
Pinnadyne, Collaboration Made Easy. Retrieved November 15, 2009.
Romano, N.C., Jr., Nunamaker, J.F., Jr., Fang, C., & Briggs, R.O. (2003). A Collaborative Project Management Architecture. Proceedings of the 36th Annual Hawaii International Conference on System Sciences, 6–9 Jan. 2003, 12 pp. Retrieved February 25, 2009.
M. Katherine (Kit) Brown, Brenda Huettner, and Char James-Tanny (2007), Managing Virtual Teams: Getting the Most from Wikis, Blogs, and Other Collaborative Tools, Wordware Publishing, Plano.
External links
Collaborative projects
Collective intelligence
Business software
Groupware
Multimodal interaction
Computer-mediated communication
Social software | Collaborative software | [
"Technology"
] | 2,412 | [
"Social software",
"Mobile content",
"Information systems",
"Computing and society",
"Computer-mediated communication"
] |
42,009 | https://en.wikipedia.org/wiki/Amedeo%20Avogadro | Lorenzo Romano Amedeo Carlo Avogadro, Count of Quaregna and Cerreto (, also , ; 9 August 17769 July 1856) was an Italian scientist, most noted for his contribution to molecular theory now known as Avogadro's law, which states that equal volumes of gases under the same conditions of temperature and pressure will contain equal numbers of molecules. In tribute to him, the ratio of the number of elementary entities (atoms, molecules, ions or other particles) in a substance to its amount of substance (the latter having the unit mole), , is known as the Avogadro constant. This constant is denoted NA, and is one of the seven defining constants of the SI.
Biography
Amedeo Avogadro was born in Turin to a noble family of the Kingdom of Sardinia (now part of Italy) in the year 1776. He graduated in ecclesiastical law at the late age of 20 and began to practice. Soon after, he dedicated himself to physics and mathematics (then called positive philosophy), and in 1809 started teaching them at a liceo (high school) in Vercelli, where his family lived and had some property.
In 1811, he published an article with the title Essai d'une manière de déterminer les masses relatives des molécules élémentaires des corps, et les proportions selon lesquelles elles entrent dans ces combinaisons ("Essay on a manner of Determining the Relative Masses of the Elementary Molecules of Bodies and the Proportions by Which They Enter These Combinations"), which contains Avogadro's hypothesis. Avogadro submitted this essay to Jean-Claude Delamétherie's Journal de Physique, de Chimie et d'Histoire naturelle ("Journal of Physics, Chemistry and Natural History").
In 1820, he became a professor of physics at the University of Turin. Turin was now the capital of the restored Savoyard Kingdom of Sardinia under Victor Emmanuel I. Avogadro was active in the revolutionary movement of March 1821. As a result, he lost his chair in 1823 (or, as the university officially declared, it was "very glad to allow this interesting scientist to take a rest from heavy teaching duties, in order to be able to give better attention to his researches"). Eventually, King Charles Albert granted a Constitution (Statuto Albertino) in 1848. Well before this, Avogadro had been recalled to the university in Turin in 1833, where he taught for another twenty years.
Little is known about Avogadro's private life, which appears to have been sober and religious. He married Felicita Mazzé and had six children. Avogadro held posts dealing with statistics, meteorology, and weights and measures (he introduced the metric system into Piedmont) and was a member of the Royal Superior Council on Public Instruction.
He died on 9 July 1856.
Accomplishments
In honour of Avogadro's contributions to molecular theory, the number of molecules per mole of a substance is named the Avogadro constant, NA. It is exactly 6.02214076×10²³ mol⁻¹. The Avogadro constant is used to compute the results of chemical reactions. It allows chemists to determine the amounts of substances produced in a given reaction to a great degree of accuracy.
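As a minimal illustrative sketch (not part of the original article; the water molar mass of about 18.015 g/mol is an assumption used only for the example), the constant links a weighed mass of a substance to a count of molecules:

```python
# Sketch: converting a mass of a substance to a number of molecules
# using the Avogadro constant.

N_A = 6.02214076e23  # Avogadro constant, elementary entities per mole (exact SI value)

def molecules_from_mass(mass_g, molar_mass_g_per_mol):
    """Number of molecules in mass_g grams of a substance."""
    moles = mass_g / molar_mass_g_per_mol
    return moles * N_A

# Example: 18.015 g of water (molar mass ~18.015 g/mol) is one mole,
# i.e. about 6.022e23 molecules.
print(f"{molecules_from_mass(18.015, 18.015):.3e} molecules")
```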
Johann Josef Loschmidt first calculated the value of the Avogadro constant, the number of particles in one mole, sometimes referred to as the Loschmidt number in German-speaking countries (Loschmidt constant now has another meaning).
Avogadro's law states that the relationship between the masses of the same volume of all gases (at the same temperature and pressure) corresponds to the relationship between their respective molecular weights. Hence, the relative molecular mass of a gas can be calculated from the mass of a sample of known volume.
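A minimal worked sketch of that calculation, assuming ideal-gas behaviour (the sample mass, volume, and conditions below are illustrative assumptions, not values from the article):

```python
# Sketch: estimating the molar mass of a gas from the mass of a sample of
# known volume, pressure, and temperature, assuming ideal-gas behaviour (PV = nRT).

R = 8.314  # molar gas constant, J/(mol*K)

def molar_mass(mass_g, volume_m3, pressure_pa, temperature_k):
    """Molar mass in g/mol of a gas sample."""
    moles = pressure_pa * volume_m3 / (R * temperature_k)
    return mass_g / moles

# Example: 1.43 g of an unknown gas occupying 1.00 L at 101325 Pa and 273.15 K
# comes out near 32 g/mol, consistent with molecular oxygen.
print(f"{molar_mass(1.43, 1.00e-3, 101325, 273.15):.1f} g/mol")
```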
Avogadro developed this hypothesis after Joseph Louis Gay-Lussac published his law on volumes (and combining gases) in 1808. The greatest problem Avogadro had to resolve was the confusion at that time regarding atoms and molecules. One of his most important contributions was clearly distinguishing one from the other, stating that gases are composed of molecules, and these molecules are composed of atoms. (For instance, John Dalton did not consider this possibility.) Avogadro did not actually use the word "atom" as the words "atom" and "molecule" were used almost without difference. He believed that there were three kinds of "molecules", including an "elementary molecule" (our "atom"). Also, he gave more attention to the definition of mass, as distinguished from weight.
In 1815, he published Mémoire sur les masses relatives des molécules des corps simples, ou densités présumées de leur gaz, et sur la constitution de quelques-uns de leur composés, pour servir de suite à l'Essai sur le même sujet, publié dans le Journal de Physique, juillet 1811 ("Note on the Relative Masses of Elementary Molecules, or Suggested Densities of Their Gases, and on the Constituents of Some of Their Compounds, As a Follow-up to the Essay on the Same Subject, Published in the Journal of Physics, July 1811") about gas densities.
In 1821 he published another paper, Nouvelles considérations sur la théorie des proportions déterminées dans les combinaisons, et sur la détermination des masses des molécules des corps (New Considerations on the Theory of Proportions Determined in Combinations, and on Determination of the Masses of Atoms) and shortly afterwards, Mémoire sur la manière de ramener les composès organiques aux lois ordinaires des proportions déterminées ("Note on the Manner of Finding the Organic Composition by the Ordinary Laws of Determined Proportions").
In 1841, he published his work in Fisica dei corpi ponderabili, ossia Trattato della costituzione materiale de' corpi, 4 volumes.
Response to the theory
The scientific community did not give great attention to Avogadro's theory, and it was not immediately accepted. André-Marie Ampère proposed a very similar theory three years later, in a memoir translated as "On the Determination of Proportions in which Bodies Combine According to the Number and the Respective Disposition of the Molecules by Which Their Integral Particles Are Made", but the same indifference was shown to his theory as well.
Only through studies by Charles Frédéric Gerhardt and Auguste Laurent on organic chemistry was it possible to demonstrate that Avogadro's law explained why the same quantities of molecules in a gas have the same volume.
Unfortunately, related experiments with some inorganic substances showed seeming contradictions. This was finally resolved by Stanislao Cannizzaro, as announced at Karlsruhe Congress in 1860, four years after Avogadro's death. He explained that these exceptions were due to molecular dissociations at certain temperatures, and that Avogadro's law determined not only molecular masses but atomic masses as well.
In 1911, a meeting in Turin commemorated the hundredth anniversary of the publication of Avogadro's classic 1811 paper. King Victor Emmanuel III attended, and Avogadro's great contribution to chemistry was recognized.
Rudolf Clausius, with his kinetic theory on gases proposed in 1857, provided further evidence for Avogadro's law. Jacobus Henricus van 't Hoff showed that Avogadro's theory also held in dilute solutions.
Avogadro is hailed as a founder of the atomic-molecular theory.
See also
Avogadrite (mineral)
Avogadro (lunar crater)
References
Further reading
Morselli, Mario. (1984). Amedeo Avogadro, a Scientific Biography. Kluwer.
Pierre Radvanyi, "Two hypothesis of Avogadro", 1811 Avogadro's article analyzed on BibNum (click 'Télécharger').
External links
1776 births
1856 deaths
Scientists from Turin
Italian chemists
Fluid dynamicists
Academic staff of the University of Turin
Scientists from the Kingdom of Sardinia | Amedeo Avogadro | [
"Chemistry"
] | 1,653 | [
"Fluid dynamicists",
"Fluid dynamics"
] |
42,020 | https://en.wikipedia.org/wiki/Synthetic%20radioisotope | A synthetic radioisotope is a radionuclide that is not found in nature: no natural process or mechanism exists which produces it, or it is so unstable that it decays away in a very short period of time. Frédéric Joliot-Curie and Irène Joliot-Curie were the first to produce a synthetic radioisotope in the 20th century. Examples include technetium-99 and promethium-146. Many of these are found in, and harvested from, spent nuclear fuel assemblies. Some must be manufactured in particle accelerators.
Production
Some synthetic radioisotopes are extracted from spent nuclear reactor fuel rods, which contain various fission products. For example, it is estimated that up to 1994, about 49,000 terabecquerels (78 metric tons) of technetium were produced in nuclear reactors; as such, anthropogenic technetium is far more abundant than technetium from natural radioactivity.
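As a rough cross-check of the figures quoted above (an editorial sketch; the half-life of about 211,000 years and molar mass of about 98.9 g/mol assumed here for technetium-99 are not taken from the article), the mass corresponding to a given activity follows from A = λN:

```python
# Sketch: relating activity to mass for a long-lived radionuclide via A = lambda * N.
import math

N_A = 6.02214076e23              # atoms per mole
half_life_s = 2.11e5 * 3.156e7   # assumed Tc-99 half-life (~211,000 years) in seconds
molar_mass_g = 98.9              # assumed molar mass of Tc-99, g/mol

activity_bq = 49_000e12          # 49,000 TBq, the quoted cumulative production
decay_const = math.log(2) / half_life_s   # per second
atoms = activity_bq / decay_const         # N = A / lambda
mass_tonnes = atoms / N_A * molar_mass_g / 1e6
print(f"~{mass_tonnes:.0f} metric tons")  # on the order of the quoted ~78 t
```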
Some synthetic isotopes are produced in significant quantities by fission but are not yet being reclaimed. Other isotopes are manufactured by neutron irradiation of parent isotopes in a nuclear reactor (for example, technetium-97 can be made by neutron irradiation of ruthenium-96) or by bombarding parent isotopes with high energy particles from a particle accelerator.
Many isotopes, including radiopharmaceuticals, are produced in cyclotrons. For example, the synthetic fluorine-18 and oxygen-15 are widely used in positron emission tomography.
Uses
Most synthetic radioisotopes have a short half-life. Though a health hazard, radioactive materials have many medical and industrial uses.
Nuclear medicine
The field of nuclear medicine covers use of radioisotopes for diagnosis or treatment.
Diagnosis
Radioactive tracer compounds, radiopharmaceuticals, are used to observe the function of various organs and body systems. These compounds use a chemical tracer which is attracted to or concentrated by the activity which is being studied. That chemical tracer incorporates a short lived radioactive isotope, usually one which emits a gamma ray which is energetic enough to travel through the body and be captured outside by a gamma camera to map the concentrations. Gamma cameras and other similar detectors are highly efficient, and the tracer compounds are generally very effective at concentrating at the areas of interest, so the total amounts of radioactive material needed are very small.
The metastable nuclear isomer technetium-99m is a gamma-ray emitter widely used for medical diagnostics because it has a short half-life of 6 hours, but can be easily made in the hospital using a technetium-99m generator. Weekly global demand for the parent isotope molybdenum-99 in 2010 was overwhelmingly met by fission of uranium-235.
Treatment
Several radioisotopes and compounds are used for medical treatment, usually by bringing the radioactive isotope to a high concentration in the body near a particular organ. For example, iodine-131 is used for treating some disorders and tumors of the thyroid gland.
Industrial radiation sources
Alpha particle, beta particle, and gamma ray radioactive emissions are industrially useful. Most sources of these are synthetic radioisotopes. Areas of use include the petroleum industry, industrial radiography, homeland security, process control, food irradiation and underground detection.
Footnotes
External links
Map of the Nuclides at LANL T-2 Website
Radioactivity
Radiopharmaceuticals | Synthetic radioisotope | [
"Physics",
"Chemistry"
] | 726 | [
"Medicinal radiochemistry",
"Radiopharmaceuticals",
"Nuclear physics",
"Chemicals in medicine",
"Radioactivity"
] |
42,021 | https://en.wikipedia.org/wiki/Trace%20radioisotope | A trace radioisotope is a radioisotope that occurs naturally in trace amounts (i.e. extremely small). Generally speaking, trace radioisotopes have half-lives that are short in comparison with the age of the Earth, since primordial nuclides tend to occur in larger than trace amounts. Trace radioisotopes are therefore present only because they are continually produced on Earth by natural processes. Natural processes which produce trace radioisotopes include cosmic ray bombardment of stable nuclides, ordinary alpha and beta decay of the long-lived heavy nuclides, thorium-232, uranium-238, and uranium-235, spontaneous fission of uranium-238, and nuclear transmutation reactions induced by natural radioactivity, such as the production of plutonium-239 and uranium-236 from neutron capture by natural uranium.
Elements
The elements that occur on Earth only in traces are listed below.
Isotopes of other elements (not exhaustive):
Tritium
Beryllium-7
Beryllium-10
Carbon-14
Fluorine-18
Sodium-22
Sodium-24
Magnesium-28
Silicon-31
Silicon-32
Phosphorus-32
Sulfur-35
Sulfur-38
Chlorine-34m
Chlorine-36
Chlorine-38
Chlorine-39
Argon-39
Argon-42
Calcium-41
Iron-52
Cobalt-55
Nickel-59
Copper-60
Germanium-64
Selenium-79
Krypton-81
Strontium-90
Rhodium-105
References
Radioactivity | Trace radioisotope | [
"Physics",
"Chemistry"
] | 318 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Radioactivity",
"Nuclear physics"
] |
42,052 | https://en.wikipedia.org/wiki/Mescaline | Mescaline, also known as mescalin or mezcalin, and in chemical terms 3,4,5-trimethoxyphenethylamine, is a naturally occurring psychedelic protoalkaloid of the substituted phenethylamine class, known for its hallucinogenic effects comparable to those of LSD and psilocybin. It binds to and activates certain serotonin receptors in the brain, producing hallucinogenic effects.
Biological sources
It occurs naturally in several species of cacti. It is also reported to be found in small amounts in certain members of the bean family, Fabaceae, including Senegalia berlandieri (syn. Acacia berlandieri), although these reports have been challenged and have been unsupported in any additional analyses.
The concentration of mescaline in different specimens can vary widely within a single species. Moreover, the concentration of mescaline within a single specimen varies as well.
History and use
Peyote has been used for at least 5,700 years by Indigenous peoples of the Americas in Mexico. Europeans recorded use of peyote in Native American religious ceremonies upon early contact with the Huichol people in Mexico. Other mescaline-containing cacti such as the San Pedro have a long history of use in South America, from Peru to Ecuador. While religious and ceremonial peyote use was widespread in the Aztec Empire and northern Mexico at the time of the Spanish conquest, religious persecution confined it to areas near the Pacific coast and up to southwest Texas. However, by 1880, peyote use began to spread north of South-Central America with "a new kind of peyote ceremony" inaugurated by the Kiowa and Comanche people. These religious practices, incorporated legally in the United States in 1920 as the Native American Church, have since spread as far as Saskatchewan, Canada.
In traditional peyote preparations, the top of the cactus is cut off, leaving the large tap root along with a ring of green photosynthesizing area to grow new heads. These heads are then dried to make disc-shaped buttons. Buttons are chewed to produce the effects or soaked in water to drink. However, the taste of the cactus is bitter, so modern users will often grind it into a powder and pour it into capsules to avoid having to taste it. The typical dosage is 200–400 milligrams of mescaline sulfate or 178–356 milligrams of mescaline hydrochloride. The average peyote button contains about 25 mg mescaline. Some analyses of traditional preparations of San Pedro cactus have found doses ranging from 34 mg to 159 mg of total alkaloids, a relatively low and barely psychoactive amount. It appears that patients who receive traditional treatments with San Pedro ingest sub-psychoactive doses and do not experience psychedelic effects.
Botanical studies of peyote began in the 1840s and the drug was listed in the Mexican pharmacopeia. The first description of mescal buttons was published by John Raleigh Briggs in 1887. Mescaline was first isolated and identified in 1896 or 1897 by the German chemist Arthur Heffter and his colleagues. He showed that mescaline was exclusively responsible for the psychoactive or hallucinogenic effects of peyote. However, other components of peyote, such as hordenine, pellotine, and anhalinine, are also active. Mescaline was first synthesized in 1919 by Ernst Späth.
In 1955, English politician Christopher Mayhew took part in an experiment for BBC's Panorama, in which he ingested 400 mg of mescaline under the supervision of psychiatrist Humphry Osmond. Though the recording was deemed too controversial and ultimately omitted from the show, Mayhew praised the experience, calling it "the most interesting thing I ever did".
Studies of the potential therapeutic effects of mescaline started in the 1950s.
The mechanism of action of mescaline, activation of the serotonin 5-HT2A receptors, became known in the 1990s.
Potential medical usage
Mescaline has a wide array of suggested medical usage, including treatment of depression, anxiety, PTSD, and alcoholism. However, its status as a Schedule I controlled substance in the Convention on Psychotropic Substances limits availability of the drug to researchers. Because of this, very few studies concerning mescaline's activity and potential therapeutic effects in people have been conducted since the early 1970s.
Behavioral and non-behavioral effects
Mescaline induces a psychedelic state comparable to those produced by LSD and psilocybin, but with unique characteristics. Subjective effects may include altered thinking processes, an altered sense of time and self-awareness, and closed- and open-eye visual phenomena.
Prominence of color is distinctive, appearing brilliant and intense. Recurring visual patterns observed during the mescaline experience include stripes, checkerboards, angular spikes, multicolor dots, and very simple fractals that turn very complex. The English writer Aldous Huxley described these self-transforming amorphous shapes as like animated stained glass illuminated from light coming through the eyelids in his autobiographical book The Doors of Perception (1954). Like LSD, mescaline induces distortions of form and kaleidoscopic experiences but they manifest more clearly with eyes closed and under low lighting conditions.
Heinrich Klüver coined the term "cobweb figure" in the 1920s to describe one of the four form constant geometric visual hallucinations experienced in the early stage of a mescaline trip: "Colored threads running together in a revolving center, the whole similar to a cobweb". The other three are the chessboard design, tunnel, and spiral. Klüver wrote that "many 'atypical' visions are upon close inspection nothing but variations of these form-constants."
As with LSD, synesthesia can occur especially with the help of music. An unusual but unique characteristic of mescaline use is the "geometrization" of three-dimensional objects. The object can appear flattened and distorted, similar to the presentation of a Cubist painting.
Mescaline elicits a pattern of sympathetic arousal, with the peripheral nervous system being a major target for this substance.
According to a research project in the Netherlands, ceremonial San Pedro use seems to be characterized by relatively strong spiritual experiences, and low incidence of challenging experiences.
Chemistry
Mescaline, also known as 3,4,5-trimethoxyphenethylamine (3,4,5-TMPEA), is a substituted phenethylamine derivative. It is closely structurally related to the catecholamine neurotransmitters dopamine, norepinephrine, and epinephrine.
The drug is relatively hydrophilic with low fat solubility. Its predicted log P (XLogP3) is 0.7.
Biosynthesis
Mescaline is biosynthesized from tyrosine, which, in turn, is derived from phenylalanine by the enzyme phenylalanine hydroxylase. In Lophophora williamsii (Peyote), dopamine converts into mescaline in a biosynthetic pathway involving m-O-methylation and aromatic hydroxylation.
Tyrosine and phenylalanine serve as metabolic precursors towards the synthesis of mescaline. Tyrosine can either undergo a decarboxylation via tyrosine decarboxylase to generate tyramine and subsequently undergo an oxidation at carbon 3 by a monophenol hydroxylase or first be hydroxylated by tyrosine hydroxylase to form L-DOPA and decarboxylated by DOPA decarboxylase. These create dopamine, which then experiences methylation by a catechol-O-methyltransferase (COMT) by an S-adenosyl methionine (SAM)-dependent mechanism. The resulting intermediate is then oxidized again by a hydroxylase enzyme, likely monophenol hydroxylase again, at carbon 5, and methylated by COMT. The product, methylated at the two meta positions with respect to the alkyl substituent, experiences a final methylation at the 4 carbon by a guaiacol-O-methyltransferase, which also operates by a SAM-dependent mechanism. This final methylation step results in the production of mescaline.
Phenylalanine serves as a precursor by first being converted to L-tyrosine by L-amino acid hydroxylase. Once converted, it follows the same pathway as described above.
Laboratory synthesis
Mescaline was first synthesized in 1919 by Ernst Späth from 3,4,5-trimethoxybenzoyl chloride. Several approaches using different starting materials have been developed since, including the following:
Hofmann rearrangement of 3,4,5-trimethoxyphenylpropionamide.
Cyanohydrin reaction between potassium cyanide and 3,4,5-trimethoxybenzaldehyde followed by acetylation and reduction.
Henry reaction of 3,4,5-trimethoxybenzaldehyde with nitromethane followed by nitro compound reduction of ω-nitrotrimethoxystyrene.
Ozonolysis of elemicin followed by reductive amination.
Ester reduction of Eudesmic acid's methyl ester followed by halogenation, Kolbe nitrile synthesis, and nitrile reduction.
Amide reduction of 3,4,5-trimethoxyphenylacetamide.
Reduction of 3,4,5-trimethoxy(2-nitrovinyl)benzene with lithium aluminum hydride.
Treatment of tricarbonyl-(η6-1,2,3-trimethoxybenzene) chromium complex with acetonitrile carbanion in THF and iodine, followed by reduction of the nitrile with lithium aluminum hydride.
Pharmacology
Pharmacodynamics
In plants, mescaline may be the end-product of a pathway utilizing catecholamines as a method of stress response, similar to how animals may release such compounds and others such as cortisol when stressed. The in vivo function of catecholamines in plants has not been investigated, but they may function as antioxidants, as developmental signals, and as integral cell wall components that resist degradation from pathogens. The deactivation of catecholamines via methylation produces alkaloids such as mescaline.
In humans, mescaline acts similarly to other psychedelic agents. It acts as an agonist, binding to and activating the serotonin 5-HT2A receptor. Its potency at the serotonin 5-HT2A receptor is approximately 10,000 nM, and at the serotonin 5-HT2B receptor greater than 20,000 nM. How activating the 5-HT2A receptor leads to psychedelic effects is still unknown, but it likely involves excitation of neurons in the prefrontal cortex. In addition to the serotonin 5-HT2A and 5-HT2B receptors, mescaline is also known to bind to the serotonin 5-HT2C receptor and a number of other targets.
Mescaline lacks affinity for the monoamine transporters, including the serotonin transporter (SERT), norepinephrine transporter (NET), and dopamine transporter (DAT) (Ki > 30,000 nM). However, it has been found to increase levels of the major serotonin metabolite 5-hydroxyindoleacetic acid (5-HIAA) at high doses in rodents. This finding suggests that mescaline might inhibit the reuptake and/or induce the release of serotonin at such doses. However, this possibility has not yet been further assessed or demonstrated. Besides serotonin, mescaline might also weakly induce the release of dopamine, but this is probably of modest significance, if it occurs. Accordingly, there is no evidence of the drug showing addiction or dependence. Other psychedelic phenethylamines, including the closely related 2C, DOx, and TMA drugs, are inactive as monoamine releasing agents and reuptake inhibitors. However, an exception is trimethoxyamphetamine (TMA), the amphetamine analogue of mescaline, which is a very low-potency serotonin releasing agent (approximately 16,000 nM). The possible monoamine-releasing effects of mescaline would likely be related to its structural similarity to substituted amphetamines and related compounds.
Tolerance to mescaline builds with repeated usage, lasting for a few days. The drug causes cross-tolerance with other serotonergic psychedelics such as LSD and psilocybin.
The LD50 of mescaline has been measured in various animals: 212–315 mg/kg i.p. (mice), 132–410 mg/kg i.p. (rats), 328 mg/kg i.p. (guinea pigs), 54 mg/kg in dogs, and 130 mg/kg i.v. in rhesus macaques. For humans, the LD50 of mescaline has been reported to be approximately 880 mg/kg. It has been said that it would be very difficult to consume enough mescaline to cause death in humans.
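To put the last statement in perspective, a back-of-the-envelope sketch (the 70 kg body weight, the 300 mg oral dose, and the use of the reported human LD50 estimate are assumptions made only for illustration):

```python
# Sketch: comparing the reported human LD50 estimate with a typical active oral dose.
ld50_mg_per_kg = 880    # reported estimate for humans (see text)
body_weight_kg = 70     # assumed adult body weight
active_dose_mg = 300    # assumed mid-range oral dose (the text cites 200-400 mg)

lethal_dose_mg = ld50_mg_per_kg * body_weight_kg
print(f"Estimated lethal dose: {lethal_dose_mg / 1000:.1f} g")
print(f"Ratio to an active dose: ~{lethal_dose_mg / active_dose_mg:.0f}x")
```

Under these assumptions the estimated lethal quantity is roughly two hundred times a typical active dose, which is why ingesting a lethal amount is considered very difficult.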
Mescaline is a relatively low-potency psychedelic, with active doses in the hundreds of milligrams and micromolar affinities for the serotonin 5-HT2A receptor. For comparison, psilocybin is approximately 20-fold more potent (doses in the tens of milligrams) and lysergic acid diethylamide (LSD) is approximately 2,000-fold more potent (doses in the tens to hundreds of micrograms). There have been efforts to develop more potent analogues of mescaline. Difluoromescaline and trifluoromescaline are more potent than mescaline, as is its amphetamine homologue TMA. Escaline and proscaline are also both more potent than mescaline, showing the importance of the 4-position substituent with regard to receptor binding.
Pharmacokinetics
About half the initial dosage is excreted after 6 hours, but some studies suggest that it is not metabolized at all before excretion. Mescaline appears not to be subject to metabolism by CYP2D6 and between 20% and 50% of mescaline is excreted in the urine unchanged, with the rest being excreted as the deaminated-oxidised-carboxylic acid form of mescaline, a likely result of monoamine oxidase (MAO) degradation. However, the enzymes mediating the oxidative deamination of mescaline are controversial. MAO, diamine oxidase (DAO), and/or other enzymes may be involved or responsible.
The elimination half-life of mescaline was originally reported to be 6 hours, but a new study published in 2023 reported a half-life of 3.6 hours. The higher estimate is believed to be due to small sample numbers and collective measurement of mescaline metabolites.
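A small sketch of what the two half-life estimates imply in practice, assuming simple first-order (exponential) elimination:

```python
# Sketch: fraction of a mescaline dose remaining in plasma over time,
# assuming first-order elimination, for the two reported half-life estimates.

def fraction_remaining(hours, half_life_h):
    return 0.5 ** (hours / half_life_h)

for t in (3.6, 6, 12, 24):
    new = fraction_remaining(t, 3.6)   # 2023 estimate of 3.6 hours
    old = fraction_remaining(t, 6.0)   # older 6-hour estimate
    print(f"t = {t:>4} h: {new:.2f} remaining (3.6 h) vs {old:.2f} (6 h)")
```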
Mescaline appears to have relatively poor blood–brain barrier permeability due to its low lipophilicity. However, it is still able to cross into the central nervous system and produce psychoactive effects at sufficiently high doses.
Active metabolites of mescaline may contribute to its psychoactive effects.
Legal status
United States
In the United States, mescaline was made illegal in 1970 by the Comprehensive Drug Abuse Prevention and Control Act, categorized as a Schedule I hallucinogen. The drug is prohibited internationally by the 1971 Convention on Psychotropic Substances. Mescaline is legal only for certain religious groups (such as the Native American Church by the American Indian Religious Freedom Act of 1978) and in scientific and medical research. In 1990, the Supreme Court ruled that the state of Oregon could ban the use of mescaline in Native American religious ceremonies. The Religious Freedom Restoration Act (RFRA) in 1993 allowed the use of peyote in religious ceremony, but in 1997, the Supreme Court ruled that the RFRA is unconstitutional when applied against states. Many states, including the state of Utah, have legalized peyote usage with "sincere religious intent", or within a religious organization, regardless of race. Synthetic mescaline, but not mescaline derived from cacti, was officially decriminalized in the state of Colorado by ballot measure Proposition 122 in November 2022.
While mescaline-containing cacti of the genus Echinopsis are technically controlled substances under the Controlled Substances Act, they are commonly sold publicly as ornamental plants.
United Kingdom
In the United Kingdom, mescaline in purified powder form is a Class A drug. However, dried cactus can be bought and sold legally.
Australia
Mescaline is considered a schedule 9 substance in Australia under the Poisons Standard (February 2020). A schedule 9 substance is classified as "Substances with a high potential for causing harm at low exposure and which require special precautions during manufacture, handling or use. These poisons should be available only to specialised or authorised users who have the skills necessary to handle them safely. Special regulations restricting their availability, possession, storage or use may apply."
Other countries
In Canada, France, The Netherlands and Germany, mescaline in raw form and dried mescaline-containing cacti are considered illegal drugs. However, anyone may grow and use peyote, or Lophophora williamsii, as well as Echinopsis pachanoi and Echinopsis peruviana without restriction, as it is specifically exempt from legislation. In Canada, mescaline is classified as a schedule III drug under the Controlled Drugs and Substances Act, whereas peyote is exempt.
In Russia mescaline, its derivatives and mescaline-containing plants are banned as narcotic drugs (Schedule I).
Notable users
Salvador Dalí experimented with mescaline believing it would enable him to use his subconscious to further his art potential
Antonin Artaud wrote 1947's The Peyote Dance, where he describes his peyote experiences in Mexico a decade earlier.
Jerry Garcia took peyote prior to forming The Grateful Dead but later switched to LSD and DMT since they were easier on the stomach.
Allen Ginsberg took peyote. Part II of his poem "Howl" was inspired by a peyote vision that he had in San Francisco.
Ken Kesey took peyote prior to writing One Flew Over the Cuckoo's Nest.
Jean-Paul Sartre took mescaline shortly before the publication of his first book, L'Imaginaire; he had a bad trip during which he imagined that he was menaced by sea creatures. For many years following this, he persistently thought that he was being followed by lobsters, and became a patient of Jacques Lacan in hopes of being rid of them. Lobsters and crabs figure in his novel Nausea.
Havelock Ellis was the author of one of the first written reports to the public about an experience with mescaline (1898).
Stanisław Ignacy Witkiewicz, Polish writer, artist and philosopher, experimented with mescaline and described his experience in a 1932 book Nikotyna Alkohol Kokaina Peyotl Morfina Eter.
Aldous Huxley described his experience with mescaline in the essay "The Doors of Perception" (1954).
Jim Carroll in The Basketball Diaries described using peyote that a friend smuggled from Mexico.
Quanah Parker, appointed by the federal government as principal chief of the entire Comanche Nation, advocated the syncretic Native American Church alternative, and fought for the legal use of peyote in the movement's religious practices.
Hunter S. Thompson wrote an extremely detailed account of his first use of mescaline in "First Visit with Mescalito", and it appeared in his book Songs of the Doomed, as well as featuring heavily in his novel Fear and Loathing in Las Vegas.
Psychedelic research pioneer Alexander Shulgin said he was first inspired to explore psychedelic compounds by a mescaline experience. In 1974, Shulgin synthesized 2C-B, a psychedelic phenylethylamine derivative, structurally similar to mescaline, and one of Shulgin's self-rated most important phenethylamine compounds together with Mescaline, 2C-E, 2C-T-7, and 2C-T-2.
Bryan Wynter produced Mars Ascends after trying the substance for the first time.
George Carlin mentioned mescaline use during his youth while being interviewed in 2008.
Carlos Santana told about his mescaline use in a 1989 Rolling Stone interview.
Disney animator Ward Kimball described participating in a study of mescaline and peyote conducted by UCLA in the 1960s.
Michael Cera used real mescaline for the movie Crystal Fairy & the Magical Cactus, as expressed in an interview.
Philip K. Dick was inspired to write Flow My Tears, the Policeman Said after taking mescaline.
Arthur Kleps, a psychologist turned drug legalization advocate and writer whose Neo-American Church defended use of marijuana and hallucinogens such as LSD and peyote for spiritual enlightenment and exploration, bought, in 1960, by mail from Delta Chemical Company in New York 1 g of mescaline sulfate and took 500 mg. He experienced a psychedelic trip that caused profound changes in his life and outlook.
See also
List of psychedelic plants
Methallylescaline
Psychedelic experience
Psychoactive drug
Entheogen
The Doors of Perception
Mind at Large (concept in The Doors of Perception)
The Psychedelic Experience: A Manual Based on the Tibetan Book of the Dead
References
Further reading
External links
National Institutes of Health – National Institute on Drug Abuse Hallucinogen InfoFacts
Mescaline at Erowid
Mescaline at PsychonautWiki
PiHKAL entry
Mescaline entry in PiHKAL • info
Mescaline: The Chemistry and Pharmacology of its Analogs, an essay by Alexander Shulgin
Mescaline on the Mexican Border
Alkaloids found in Fabaceae
Cacti
Entheogens
Experimental hallucinogens
Mescalines
Native American Church
Phenethylamine alkaloids
Serotonin receptor agonists
TAAR1 modulators | Mescaline | [
"Chemistry"
] | 4,743 | [
"Alkaloids by chemical classification",
"Phenethylamine alkaloids"
] |
42,052 | https://en.wikipedia.org/wiki/Hydrogen%20cyanide | Hydrogen cyanide (formerly known as prussic acid) is a chemical compound with the formula HCN and structural formula HC≡N. It is a highly toxic and flammable liquid that boils slightly above room temperature, at 25.6 °C (78 °F). HCN is produced on an industrial scale and is a highly valued precursor to many chemical compounds ranging from polymers to pharmaceuticals. Large-scale applications are for the production of potassium cyanide and adiponitrile, used in mining and plastics, respectively. It is more toxic than solid cyanide compounds due to its volatile nature. A solution of hydrogen cyanide in water, represented as HCN, is called hydrocyanic acid. The salts of the cyanide anion are known as cyanides.
Whether hydrogen cyanide is an organic compound or not is a topic of debate among chemists, and opinions vary from author to author. Traditionally, it is considered inorganic by a significant number of authors. Contrary to this view, it is considered organic by other authors, because hydrogen cyanide belongs to the class of organic compounds known as nitriles, which have the formula RC≡N, where R is typically an organyl group (e.g., alkyl or aryl) or hydrogen. In the case of hydrogen cyanide, the R group is hydrogen, H, so the other names of hydrogen cyanide are methanenitrile and formonitrile.
Structure and general properties
Hydrogen cyanide is a linear molecule, with a triple bond between carbon and nitrogen. The tautomer of HCN is HNC, hydrogen isocyanide.
HCN has a faint bitter almond-like odor that some people are unable to detect owing to a recessive genetic trait. The volatile compound has been used as an inhalation rodenticide and human poison, as well as for killing whales. Cyanide ions interfere with iron-containing respiratory enzymes.
Chemical properties
Hydrogen cyanide is weakly acidic with a pKa of 9.2. It partially ionizes in water to give the cyanide anion, CN−. HCN forms hydrogen-bonded adducts with its conjugate base.
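A minimal sketch of what "weakly acidic" means quantitatively, using the Henderson–Hasselbalch relation (the example pH values are illustrative assumptions):

```python
# Sketch: fraction of dissolved hydrogen cyanide present as cyanide anion (CN-)
# at a given pH, from the Henderson-Hasselbalch relation for a weak acid, pKa = 9.2.

PKA = 9.2

def fraction_ionized(ph):
    ratio = 10 ** (ph - PKA)   # [CN-] / [HCN]
    return ratio / (1 + ratio)

for ph in (7.0, 9.2, 11.0):
    print(f"pH {ph}: {fraction_ionized(ph) * 100:.1f}% present as CN-")
```

At neutral pH only a small fraction is ionized, consistent with HCN being a weak acid.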
Hydrogen cyanide reacts with alkenes to give nitriles. The conversion, which is called hydrocyanation, employs nickel complexes as catalysts.
Four molecules of HCN will tetramerize into diaminomaleonitrile.
Metal cyanides are typically prepared by salt metathesis from alkali metal cyanide salts, but mercuric cyanide is formed by treating mercuric oxide with aqueous hydrogen cyanide:

HgO + 2 HCN → Hg(CN)2 + H2O
History of discovery and naming
Hydrogen cyanide was first isolated in 1752 by French chemist Pierre Macquer who converted Prussian blue to an iron oxide plus a volatile component and found that these could be used to reconstitute it. The new component was what is now known as hydrogen cyanide. It was subsequently prepared from Prussian blue by the Swedish chemist Carl Wilhelm Scheele in 1782, and was eventually given the German name Blausäure (lit. "Blue acid") because of its acidic nature in water and its derivation from Prussian blue. In English, it became known popularly as prussic acid.
In 1787, the French chemist Claude Louis Berthollet showed that prussic acid did not contain oxygen, an important contribution to acid theory, which had hitherto postulated that acids must contain oxygen (hence the name of oxygen itself, which is derived from Greek elements that mean "acid-former" and are likewise calqued into German as Sauerstoff).
In 1811, Joseph Louis Gay-Lussac prepared pure, liquified hydrogen cyanide, and in 1815 he deduced Prussic acid's chemical formula.
Etymology
The word cyanide for the radical in hydrogen cyanide was derived from its French equivalent, cyanure, which Gay-Lussac constructed from the Ancient Greek word κύανος for dark blue enamel or lapis lazuli, again owing to the chemical’s derivation from Prussian blue. Incidentally, the Greek word is also the root of the English color name cyan.
Production and synthesis
The most important process is the Andrussow oxidation, invented by Leonid Andrussow at IG Farben, in which methane and ammonia react in the presence of oxygen at high temperature over a platinum catalyst: 2 CH4 + 2 NH3 + 3 O2 → 2 HCN + 6 H2O.
In 2006, between 500 million and 1 billion pounds (between 230,000 and 450,000 t) were produced in the US. Hydrogen cyanide is produced in large quantities by several processes and is a recovered waste product from the manufacture of acrylonitrile.
Of lesser importance is the Degussa process (BMA process), in which no oxygen is added and the energy must be transferred indirectly through the reactor wall: CH4 + NH3 → HCN + 3 H2.
This reaction is akin to steam reforming, the reaction of methane and water to give carbon monoxide and hydrogen.
In the Shawinigan Process, hydrocarbons, e.g. propane, are reacted with ammonia.
In the laboratory, small amounts of HCN are produced by the addition of acids to cyanide salts of alkali metals, for example: H+ + NaCN → HCN + Na+.
This reaction is sometimes the basis of accidental poisonings because the acid converts a nonvolatile cyanide salt into the gaseous HCN.
Hydrogen cyanide can also be obtained from potassium ferricyanide and acid.
Historical methods of production
The large demand for cyanides for mining operations in the 1890s was met by George Thomas Beilby, who patented a method to produce hydrogen cyanide by passing ammonia over glowing coal in 1892. This method was used until Hamilton Castner in 1894 developed a synthesis starting from coal, ammonia, and sodium yielding sodium cyanide, which reacts with acid to form gaseous HCN.
Applications
HCN is the precursor to sodium cyanide and potassium cyanide, which are used mainly in gold and silver mining and for the electroplating of those metals. Via the intermediacy of cyanohydrins, a variety of useful organic compounds are prepared from HCN including the monomer methyl methacrylate, from acetone, the amino acid methionine, via the Strecker synthesis, and the chelating agents EDTA and NTA. Via the hydrocyanation process, HCN is added to butadiene to give adiponitrile, a precursor to Nylon-6,6.
HCN is used globally as a fumigant against many species of pest insects that infest food production facilities. Both its efficacy and method of application lead to very small amounts of the fumigant being used compared to other toxic substances used for the same purpose. Using HCN as a fumigant also has less environmental impact than some other fumigants such as sulfuryl fluoride and methyl bromide.
Occurrence
HCN is obtainable from fruits that have a pit, such as cherries, apricots, apples, and nuts such as bitter almonds, from which almond oil and extract are made. Many of these pits contain small amounts of cyanohydrins such as mandelonitrile and amygdalin, which slowly release hydrogen cyanide. One hundred grams of crushed apple seeds can yield about 70 mg of HCN. The roots of cassava plants contain cyanogenic glycosides such as linamarin, which decompose into HCN in yields of up to 370 mg per kilogram of fresh root. Some millipedes, such as Harpaphe haydeniana, Desmoxytes purpurosea, and Apheloria, release hydrogen cyanide as a defense mechanism, as do certain insects, such as burnet moths and the larvae of Paropsisterna eucalyptus. Hydrogen cyanide is contained in the exhaust of vehicles and in smoke from burning nitrogen-containing plastics.
On Titan
HCN has been measured in Titan's atmosphere by four instruments on the Cassini space probe, one instrument on Voyager, and one instrument on Earth. One of these measurements was in situ, where the Cassini spacecraft dipped into Titan's upper atmosphere to collect gas for mass spectrometry analysis. HCN initially forms in Titan's atmosphere through the reaction of photochemically produced methyl radicals with nitrogen atoms, proceeding through the H2CN intermediate (CH3 + N → H2CN + H → HCN + H2). Ultraviolet radiation breaks HCN up into CN + H; however, CN is efficiently recycled back into HCN via the reaction CN + CH4 → HCN + CH3.
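The competition between ultraviolet destruction and recycling can be made concrete with a toy steady-state estimate; the photolysis rate, rate constant, and methane density below are placeholders chosen only to illustrate the algebra, not measured Titan values:

```python
# Toy steady-state estimate for the CN/HCN recycling loop described above:
#   HCN + UV photon -> CN + H      (photolysis rate J, s^-1)
#   CN + CH4        -> HCN + CH3   (rate constant k, cm^3 s^-1)
# At steady state J*[HCN] = k*[CN]*[CH4], so [CN]/[HCN] = J / (k*[CH4]).
# All numbers below are illustrative placeholders, not measured Titan values.
J = 1e-8       # photolysis rate, s^-1 (placeholder)
k = 1e-11      # rate constant, cm^3 s^-1 (placeholder)
n_CH4 = 1e10   # CH4 number density, cm^-3 (placeholder)

ratio = J / (k * n_CH4)
print(f"[CN]/[HCN] ~ {ratio:.1e}")
# A small ratio means CN produced by photolysis is rapidly converted back to HCN,
# which is the sense in which the recycling is described as "efficient".
```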
On the young Earth
It has been postulated that carbon from a cascade of asteroids (known as the Late Heavy Bombardment), resulting from interaction of Jupiter and Saturn, blasted the surface of young Earth and reacted with nitrogen in Earth's atmosphere to form HCN.
In mammals
Some authors have shown that neurons can produce hydrogen cyanide upon activation of their opioid receptors by endogenous or exogenous opioids. They have also shown that neuronal production of HCN activates NMDA receptors and plays a role in signal transduction between neuronal cells (neurotransmission). Moreover, increased endogenous neuronal HCN production under opioids was seemingly needed for adequate opioid analgesia, as analgesic action of opioids was attenuated by HCN scavengers. They considered endogenous HCN to be a neuromodulator.
It has also been shown that, while stimulating muscarinic cholinergic receptors in cultured pheochromocytoma cells increases HCN production, in a living organism (in vivo) muscarinic cholinergic stimulation actually decreases HCN production.
Leukocytes generate HCN during phagocytosis, and can kill bacteria, fungi, and other pathogens by generating several different toxic chemicals, one of which is hydrogen cyanide.
The vasodilatation caused by sodium nitroprusside has been shown to be mediated not only by NO generation, but also by endogenous cyanide generation, which adds not only toxicity, but also some additional antihypertensive efficacy compared to nitroglycerine and other non-cyanogenic nitrates which do not cause blood cyanide levels to rise.
HCN is a constituent of tobacco smoke.
HCN and the origin of life
Hydrogen cyanide has been discussed as a precursor to amino acids and nucleic acids, and is proposed to have played a part in the origin of life. Although the relationship of these chemical reactions to origin-of-life theories remains speculative, studies in this area have led to discoveries of new pathways to organic compounds derived from the condensation of HCN (e.g., adenine). For this reason, when scientists search for life on planets beyond Earth, molecules such as hydrogen cyanide are among the primary factors they examine, after confirming suitable temperatures and the presence of water.
In space
HCN has been detected in the interstellar medium and in the atmospheres of carbon stars. Extensive studies have since probed formation and destruction pathways of HCN in various environments and examined its use as a tracer for a variety of astronomical species and processes. HCN can be observed from ground-based telescopes through a number of atmospheric windows. The J=1→0, J=3→2, J=4→3, and J=10→9 pure rotational transitions have all been observed.
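These transitions follow the rigid-rotor pattern of a linear molecule, ν(J+1→J) = 2B(J+1). The short sketch below assumes a rotational constant of about 44.3 GHz for HCN (a textbook value, not given in this article) to reproduce the approximate line frequencies:

```python
# Rigid-rotor rotational lines of a linear molecule: nu(J+1 -> J) = 2*B*(J + 1).
# B ~ 44.3 GHz is an assumed literature value for HCN, not stated in the article.
B_GHZ = 44.3

def line_ghz(j_lower):
    return 2.0 * B_GHZ * (j_lower + 1)

for j in (0, 2, 3, 9):  # the J=1-0, 3-2, 4-3 and 10-9 transitions mentioned above
    print(f"J={j + 1}->{j}: ~{line_ghz(j):.0f} GHz")
# Roughly 89, 266, 354 and 886 GHz, which fall in ground-accessible (sub)millimetre windows.
```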
HCN is formed in interstellar clouds through one of two major pathways: via a neutral-neutral reaction (CH2 + N → HCN + H) and via dissociative recombination (HCNH+ + e− → HCN + H). The dissociative recombination pathway is dominant by 30%; however, the HCNH+ must be in its linear form. Dissociative recombination with its structural isomer, H2NC+, exclusively produces hydrogen isocyanide (HNC).
HCN is destroyed in interstellar clouds through a number of mechanisms depending on the location in the cloud. In photon-dominated regions (PDRs), photodissociation dominates, producing CN (HCN + ν → CN + H). At greater depths, cosmic-ray-induced photodissociation dominates, producing CN (HCN + cr → CN + H). In the dark core, two competing mechanisms destroy it, forming HCN+ and HCNH+ (HCN + H+ → HCN+ + H; HCN + HCO+ → HCNH+ + CO). The reaction with HCO+ dominates by a factor of ~3.5. HCN has been used to analyze a variety of species and processes in the interstellar medium. It has been suggested as a tracer for dense molecular gas and as a tracer of stellar inflow in high-mass star-forming regions. Further, the HNC/HCN ratio has been shown to be an excellent method for distinguishing between PDRs and X-ray-dominated regions (XDRs).
On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO, and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).
In February 2016, it was announced that traces of hydrogen cyanide were found in the atmosphere of the hot Super-Earth 55 Cancri e with NASA's Hubble Space Telescope.
On 14 December 2023, astronomers reported the first detection of hydrogen cyanide, a chemical potentially essential for life as we know it, in the plumes of Enceladus, a moon of Saturn, along with other organic molecules, some of which are yet to be better identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life."
As a poison and chemical weapon
In World War I, hydrogen cyanide was used by the French from 1916 as a chemical weapon against the Central Powers, and by the United States and Italy in 1918. It was not found to be effective enough due to weather conditions. The gas is lighter than air and rapidly disperses up into the atmosphere. Rapid dilution made its use in the field impractical. In contrast, denser agents such as phosgene or chlorine tended to remain at ground level and sank into the trenches of the Western Front's battlefields. Compared to such agents, hydrogen cyanide had to be present in higher concentrations in order to be fatal.
A hydrogen cyanide concentration of 100–200 ppm in breathing air will kill a human within 10 to 60 minutes. A hydrogen cyanide concentration of 2000 ppm (about 2380 mg/m3) will kill a human in about one minute. The toxic effect is caused by the action of the cyanide ion, which halts cellular respiration. It acts as a non-competitive inhibitor for an enzyme in mitochondria called cytochrome c oxidase. As such, hydrogen cyanide is commonly listed among chemical weapons as a blood agent.
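What "non-competitive inhibitor" means kinetically can be sketched with a generic Michaelis–Menten rate law; the constants below are hypothetical placeholders, not measured values for cytochrome c oxidase:

```python
# Generic Michaelis-Menten kinetics with a purely non-competitive inhibitor:
#   v = Vmax * [S] / ((1 + [I]/Ki) * (Km + [S]))
# All constants are hypothetical placeholders, not values for cytochrome c oxidase.
def rate(s, i, vmax=1.0, km=0.5, ki=0.2):
    return vmax * s / ((1 + i / ki) * (km + s))

for inhibitor in (0.0, 0.2, 1.0):
    # Even at saturating substrate the rate falls as inhibitor rises: the apparent
    # Vmax drops while Km is unchanged, the signature of non-competitive inhibition.
    print(f"[I]={inhibitor}: v at saturating [S] ~ {rate(s=1e3, i=inhibitor):.3f}")
```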
The Chemical Weapons Convention lists it under Schedule 3 as a potential weapon which has large-scale industrial uses. Signatory countries must declare manufacturing plants that produce more than 30 metric tons per year, and allow inspection by the Organisation for the Prohibition of Chemical Weapons.
Perhaps its most infamous use is as Zyklon B (German: Cyclone B, with the B standing for Blausäure – prussic acid – and also to distinguish it from an earlier product later known as Zyklon A), used in the Nazi German extermination camps of Majdanek and Auschwitz-Birkenau during World War II to kill Jews and other persecuted minorities en masse as part of their Final Solution genocide program. Hydrogen cyanide was also used in the camps for delousing clothing in attempts to eradicate diseases carried by lice and other parasites. One of the original Czech producers continued making Zyklon B under the trademark "Uragan D2" until around 2015.
During World War II, the US considered using it, along with cyanogen chloride, as part of Operation Downfall, the planned invasion of Japan, but President Harry Truman decided against it, instead using the atomic bombs developed by the secret Manhattan Project.
Hydrogen cyanide was also the agent employed in judicial execution in some U.S. states, where it was produced during the execution by the action of sulfuric acid on sodium cyanide or potassium cyanide.
Under the name prussic acid, HCN has been used as a killing agent in whaling harpoons, although it proved quite dangerous to the crew deploying it, and it was quickly abandoned. From the middle of the 18th century it was used in a number of poisoning murders and suicides.
Hydrogen cyanide gas in air is explosive at concentrations above 5.6%.
References
External links
Institut national de recherche et de sécurité (1997). "Cyanure d'hydrogène et solutions aqueuses". Fiche toxicologique n° 4, Paris:INRS, 5pp. (PDF file, in French)
International Chemical Safety Card 0492
Hydrogen cyanide and cyanides (CICAD 61)
National Pollutant Inventory: Cyanide compounds fact sheet
NIOSH Pocket Guide to Chemical Hazards
Department of health review
Density of Hydrogen Cyanide gas
Blood agents
Cyanides
Fumigants
Hydrogen compounds
Inorganic compounds
Gaseous signaling molecules
Soviet chemical weapons program
Triatomic molecules | Hydrogen cyanide | [
"Physics",
"Chemistry"
] | 3,680 | [
"Inorganic compounds",
"Chemical weapons",
"Molecules",
"Signal transduction",
"Gaseous signaling molecules",
"Triatomic molecules",
"Blood agents",
"Matter"
] |
42,081 | https://en.wikipedia.org/wiki/Wheel%20of%20time | The wheel of time or wheel of history (also known as Kalachakra) is a concept found in several religious traditions and philosophies, notably religions of Indian origin such as Hinduism, Jainism, Sikhism, and Buddhism, which regard time as cyclical and consisting of repeating ages. Many other cultures contain belief in a similar concept: notably, the Q'ero people of Peru, the Hopi people of Arizona, and the Bakongo people of Angola and Democratic Republic of the Congo.
Ancient Africa
In traditional Bakongo religion, the four elements are incorporated into the Kongo cosmogram. This sacred wheel depicts the physical world (Nseke), the spiritual world of the ancestors (Mpémba), the Kalûnga line that runs between the two worlds, the sacred river (mbûngi) that began as a circular void and forms a circle around the two worlds, and the path of the sun. Each element correlates to a period in the life cycle, which the Bakongo people also equate to the four cardinal directions and seasons. According to their cosmology, all living things go through this cycle.
Mbûngi represents aether and is the void that exists before creation.
Musoni time (South) represents air and is the period of conception that takes place during spring.
Kala time (East) represents fire and is the period of birth that takes place during summer.
Tukula time (North) represents earth and is the period of maturity that takes place during fall.
Luvemba time (West) represents water and is the period of death that takes place during winter.
Ancient Rome
The philosopher and emperor Marcus Aurelius saw time as extending forwards to infinity and backwards to infinity, while admitting the possibility (without arguing the case) that "the administration of the universe is organized into a succession of finite periods".
Buddhism
The Wheel of Time or Kalachakra is a Tantric deity that is associated with Tibetan Tantric Buddhism, which encompasses all four main schools of Sakya, Nyingma, Kagyu and Gelug, and is especially important within the lesser-known Jonang tradition.
The Kalachakra tantra prophesies a world within which (religious) conflict is prevalent. A worldwide war will be waged which will see the expansion of the mystical Kingdom of Shambhala led by a messianic king.
Hinduism
In Hindu cosmology, kala (time) is eternal, repeating general events in four types of cycles. The smallest cycle is a maha-yuga (great age), containing four yugas (dharmic ages): Satya Yuga, Treta Yuga, Dvapara Yuga and Kali Yuga. A manvantara (age of Manu) contains 71 maha-yugas. A kalpa (day of Brahma) contains 14 manvantaras and 15 sandhyas (connecting periods) and lasts for 1,000 maha-yugas; it is followed by a pralaya (night of partial dissolution) of equal length, the day and night together making one full day of Brahma. A maha-kalpa (life of Brahma) lasts for 100 of Brahma's years of 12 months of 30 full days (100 360-day years), or 72,000,000 maha-yugas, and is followed by a maha-pralaya (full dissolution) of equal length.
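The nested cycles above can be checked with a few lines of arithmetic. The sketch below assumes the conventional figure of 4,320,000 solar years per maha-yuga and a sandhya lasting 0.4 maha-yuga (the length of a Satya Yuga); neither assumption is stated explicitly in this article:

```python
# Consistency check of the nested Hindu time cycles described above.
MAHA_YUGA_YEARS = 4_320_000   # assumed conversion to solar years (not stated above)
SANDHYA = 0.4                 # assumed length of one sandhya, in maha-yugas

kalpa = 14 * 71 + 15 * SANDHYA      # 14 manvantaras of 71 maha-yugas plus 15 sandhyas
assert kalpa == 1000                # matches "lasts for 1,000 maha-yugas"

full_day_of_brahma = 2 * kalpa      # a kalpa (day) plus a pralaya (night) of equal length
maha_kalpa = 100 * 360 * full_day_of_brahma   # 100 years of 12 months of 30 full days
assert maha_kalpa == 72_000_000     # matches "72,000,000 maha-yugas"

print(int(kalpa * MAHA_YUGA_YEARS))       # 4,320,000,000 years in one kalpa
print(int(maha_kalpa * MAHA_YUGA_YEARS))  # 311,040,000,000,000 years in one maha-kalpa
```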
Jainism
Within Jainism, time is thought to be a wheel that rotates for infinity without a beginning. This wheel of time holds twelve spokes that each symbolize a different phase in the universe's cosmological history. It is further divided into two equal halves having six eras in them. While in a downward motion, the wheel of time falls into what is known as Avasarpiṇī and when in an upward motion, enters a state called Utsarpini. During both motions of the wheel, 24 tirthankaras come forth to teach the three jewels or sacred Jain teachings of right faith, right knowledge, and right practice, then create a spiritual ford across the ocean of rebirth for humanity.
Modern usage
Literature
In an interview included with the audiobook editions of his novels, author Robert Jordan has stated that his bestselling fantasy series The Wheel of Time borrows the titular concept from Hindu mythology.
The first chapter of every book in the series begins with the lines: "The Wheel of Time turns, and ages come and pass, leaving memories that become legend. Legends fade to myth, and even myth is long forgotten when the Age that gave it birth comes again."
Television
Several episodes of the American TV series Lost feature a wheel that can be physically turned in order to manipulate space and time. In a series of episodes during the fifth season, the island on which the show takes place begins to skip violently back and forth through time after the wheel is pulled off its axis.
The character Rust Cohle in the first season of True Detective makes numerous references to his belief that events in time repeat, claiming that "Time is a flat circle".
See also
Eternal return
Kalachakra
Wheel of the Year
References
Sources
Hindu philosophical concepts
Jain cosmology
Time in religion
Buddhist cosmology
Time in Hinduism
Time in Buddhism | Wheel of time | [
"Physics"
] | 1,074 | [
"Spacetime",
"Time in religion",
"Physical quantities",
"Time"
] |
42,113 | https://en.wikipedia.org/wiki/Chicago%20flood | The Chicago flood occurred on April 13, 1992, when repair work on a bridge spanning the Chicago River damaged the wall of an abandoned and disused utility tunnel beneath the river. The resulting breach flooded basements, facilities and the underground Chicago Pedway throughout the Chicago Loop with an estimated 250 million US gallons of water.
The remediation lasted for weeks and cost about $2 billion in 1992 dollars. The legal battles lasted for several years, and disagreement over who was at fault persists to this day.
Cause
Rehabilitation work on the Kinzie Street Bridge crossing the Chicago River required new pilings. However, when the City of Chicago specified that the old pilings be extracted and replaced by the new ones, the Great Lakes Dredge and Dock Company reported back that the old pilings were too close to the bridge tender's house, preventing proper removal without risking damaging or destroying the house. The City of Chicago then gave permission to install the new pilings south of the old pilings. The crew members who began work at the site did not know that beneath the river was an abandoned Chicago Tunnel Company (CTC) tunnel that had been used in the early 20th century to transport coal and goods. One of the pilings on the east bank was driven into the bottom of the river alongside the north wall of the old tunnel. The pilings did not punch through the tunnel wall, but clay soil displaced by the piling eventually breached the wall, allowing sediment and water to seep into the tunnel. After some weeks, most of the clay between the water and the breach had liquefied, which rapidly increased the rate of flooding in the tunnel. The situation became problematic because the flood doors had been removed from the old tunnels after they fell into disuse.
Discovery of the leak
A skilled telecommunications worker inspecting a cable running through the tunnel discovered the leak while it was still passing mud and forwarded a videotape to the city, which did not see anything serious and began a bid process to repair the tunnel. The CTC tunnels were never formally a public responsibility, as most of them had been dug clandestinely, many violated private property and the collapse of the operator had failed to resolve ownership and maintenance responsibilities. Meanwhile, the mud continued to push through until the river water was able to pour in unabated, creating an immediate emergency.
Effects
The water flooded into the basements of several Loop office buildings and retail stores and an underground shopping district. The Loop and financial district were evacuated, and electrical power and gas were interrupted in most of the area as a precaution. Trading at the Chicago Board of Trade Building and the Chicago Mercantile Exchange ended in mid-morning as water seeped into their basements. At its height, some buildings had as much as 40 feet of water in their lower levels. However, at the street level there was no water to be seen, as it was all underground.
At first, the source of the water was unclear. WMAQ radio reporter Larry Langford reported that city crews were in the process of shutting down large water mains to see if the water flow could be stopped.
Monitoring police scanners, Langford heard security crews from Chicago's Merchandise Mart (near the Kinzie Street Bridge) report that the water in their basement had fish. Langford drove to the Merchandise Mart, then reported over the air that water was swirling near a piling in a manner similar to water going down a bathtub drain. Within minutes emergency services were converging on the bridge.
Repair and cleanup
Workers attempted to plug the hole, by then substantially widened, with 65 truckloads of rocks, cement and old mattresses. In an attempt to slow the leak, the level of the Chicago River was lowered by opening the locks downstream of Chicago, and the freight tunnels were drained into the Chicago Deep Tunnel system. The leak was eventually stopped with a specialized concrete mixture supplied by Material Service Corporation (MSC) and placed by Kenny Construction. The concrete, designed by Brian Rice of MSC, set up so quickly that the delivery trucks had to be given police escorts. It was placed into shafts drilled into the flooded tunnel near Kinzie Street to form emergency plugs.
Aftermath
It took three days before the flood was cleaned up enough to allow business to begin to resume, and the disaster cost the city an estimated $1.95 billion. Some buildings remained closed for a few weeks. Parking was banned downtown during the cleanup and some subway routes were temporarily closed or rerouted. Since the flood occurred near Tax Day, the IRS granted natural-disaster extensions to those affected.
Eventually, the city assumed maintenance responsibility for the tunnels, and watertight hatches were installed at the river crossings. Great Lakes Dredge and Dock Co. sued the City of Chicago arguing that the city had failed to tell it about the existence of the tunnels.
Insurance battles lasted for years, the central point being the definition of the accident, i.e., whether it was a "flood" or a "leak". Leaks were covered by insurance, while floods were not. Eventually it was classified as a leak, which is why many residents still call it the "Great Chicago Leak".
Today, there remains contention as to whether the mistake was the fault of the workers on-site, their parent company, or the City of Chicago, whose maps were claimed to have failed to accurately depict the old tunnel systems. In fact, the Kinzie Street river crossing was clearly delineated on the city maps: the typical tunnel ran down the center of the old streets. At Kinzie Street, as at some of the other river crossings, the tunnel veered off to the side because the historic Kinzie bridge (at the time of the tunnel construction) was a pivoting bridge with a central pivot in the middle of the street. Thus the original tunnel was shifted to the side at Kinzie Street, as at several other crossings of the Chicago River, all shown in detail on the city maps.
Court cases
In the lawsuits that followed, Great Lakes Dredge and Dock Co. was initially found liable but was later cleared after it was revealed that the city was aware the tunnel was leaking before the flood and the city had also not properly maintained the tunnel.
In addition, the case went to the United States Supreme Court, which ruled that since the work was being done by a vessel in navigable waters of the Chicago River, admiralty law applied and Great Lakes's liability was greatly limited.
References
1990s floods in the United States
1992 floods
1992 natural disasters in the United States
1992 industrial disasters
Disasters in Chicago
1992 in Chicago
April 1992 events in the United States
Engineering failures
1992 disasters in the United States | Chicago flood | [
"Technology",
"Engineering"
] | 1,340 | [
"Systems engineering",
"Reliability engineering",
"Technological failures",
"Engineering failures",
"Civil engineering"
] |
42,127 | https://en.wikipedia.org/wiki/Christiaan%20Huygens | Christiaan Huygens, Lord of Zeelhem (also spelled Huyghens; 14 April 1629 – 8 July 1695), was a Dutch mathematician, physicist, engineer, astronomer, and inventor who is regarded as a key figure in the Scientific Revolution. In physics, Huygens made seminal contributions to optics and mechanics, while as an astronomer he studied the rings of Saturn and discovered its largest moon, Titan. As an engineer and inventor, he improved the design of telescopes and invented the pendulum clock, the most accurate timekeeper for almost 300 years. A talented mathematician and physicist, his works contain the first idealization of a physical problem by a set of mathematical parameters, and the first mathematical and mechanistic explanation of an unobservable physical phenomenon.
Huygens first identified the correct laws of elastic collision in his work De Motu Corporum ex Percussione, completed in 1656 but published posthumously in 1703. In 1659, Huygens derived geometrically the formula in classical mechanics for the centrifugal force in his work De vi Centrifuga, a decade before Isaac Newton. In optics, he is best known for his wave theory of light, which he described in his Traité de la Lumière (1690). His theory of light was initially rejected in favour of Newton's corpuscular theory of light, until Augustin-Jean Fresnel adapted Huygens's principle to give a complete explanation of the rectilinear propagation and diffraction effects of light in 1821. Today this principle is known as the Huygens–Fresnel principle.
Huygens invented the pendulum clock in 1657, which he patented the same year. His horological research resulted in an extensive analysis of the pendulum in Horologium Oscillatorium (1673), regarded as one of the most important 17th century works on mechanics. While it contains descriptions of clock designs, most of the book is an analysis of pendular motion and a theory of curves. In 1655, Huygens began grinding lenses with his brother Constantijn to build refracting telescopes. He discovered Saturn's biggest moon, Titan, and was the first to explain Saturn's strange appearance as due to "a thin, flat ring, nowhere touching, and inclined to the ecliptic." In 1662 Huygens developed what is now called the Huygenian eyepiece, a telescope eyepiece with two lenses that diminishes the amount of dispersion.
As a mathematician, Huygens developed the theory of evolutes and wrote on games of chance and the problem of points in Van Rekeningh in Spelen van Gluck, which Frans van Schooten translated and published as De Ratiociniis in Ludo Aleae (1657). The use of expected values by Huygens and others would later inspire Jacob Bernoulli's work on probability theory.
Biography
Christiaan Huygens was born into a rich and influential Dutch family in The Hague on 14 April 1629, the second son of Constantijn Huygens. Christiaan was named after his paternal grandfather. His mother, Suzanna van Baerle, died shortly after giving birth to Huygens's sister. The couple had five children: Constantijn (1628), Christiaan (1629), Lodewijk (1631), Philips (1632) and Suzanna (1637).
Constantijn Huygens was a diplomat and advisor to the House of Orange, in addition to being a poet and a musician. He corresponded widely with intellectuals across Europe, including Galileo Galilei, Marin Mersenne, and René Descartes. Christiaan was educated at home until the age of sixteen, and from a young age liked to play with miniatures of mills and other machines. He received a liberal education from his father, studying languages, music, history, geography, mathematics, logic, and rhetoric, alongside dancing, fencing and horse riding.
In 1644, Huygens had as his mathematical tutor Jan Jansz Stampioen, who assigned the 15-year-old a demanding reading list on contemporary science. Descartes was later impressed by his skills in geometry, as was Mersenne, who christened him the "new Archimedes."
Student years
At sixteen years of age, Constantijn sent Huygens to study law and mathematics at Leiden University, where he enrolled from May 1645 to March 1647. Frans van Schooten Jr., professor at Leiden's Engineering School, became private tutor to Huygens and his elder brother, Constantijn Jr., replacing Stampioen on the advice of Descartes. Van Schooten brought Huygens's mathematical education up to date, introducing him to the work of Viète, Descartes, and Fermat.
After two years, starting in March 1647, Huygens continued his studies at the newly founded Orange College, in Breda, where his father was a curator. Constantijn Huygens was closely involved in the new College, which lasted only to 1669; the rector was André Rivet. Christiaan Huygens lived at the home of the jurist Johann Henryk Dauber while attending college, and had mathematics classes with the English lecturer John Pell. His time in Breda ended around the time when his brother Lodewijk, who was enrolled at the school, duelled with another student. Huygens left Breda after completing his studies in August 1649 and had a stint as a diplomat on a mission with Henry, Duke of Nassau. After stays at Bentheim and Flensburg in Germany, he visited Copenhagen and Helsingør in Denmark. Huygens hoped to cross the Øresund to see Descartes in Stockholm but was prevented due to Descartes' death in the interim.
Although his father Constantijn had wished his son Christiaan to be a diplomat, circumstances kept him from becoming so. The First Stadtholderless Period that began in 1650 meant that the House of Orange was no longer in power, removing Constantijn's influence. Further, he realized that his son had no interest in such a career.
Early correspondence
Huygens generally wrote in French or Latin. In 1646, while still a college student at Leiden, he began a correspondence with his father's friend, Marin Mersenne, who died soon afterwards in 1648. Mersenne wrote to Constantijn on his son's talent for mathematics, and flatteringly compared him to Archimedes on 3 January 1647.
The letters show Huygens's early interest in mathematics. In October 1646 he discussed the suspension bridge and demonstrated that a hanging chain is not a parabola, as Galileo had thought. Huygens would later label that curve the catenaria (catenary) in 1690 while corresponding with Gottfried Leibniz.
In the next two years (1647–48), Huygens's letters to Mersenne covered various topics, including a mathematical proof of the law of free fall, the claim by Grégoire de Saint-Vincent of circle quadrature, which Huygens showed to be wrong, the rectification of the ellipse, projectiles, and the vibrating string. Some of Mersenne's concerns at the time, such as the cycloid (he sent Huygens Torricelli's treatise on the curve), the centre of oscillation, and the gravitational constant, were matters Huygens only took seriously later in the 17th century. Mersenne had also written on musical theory. Huygens preferred meantone temperament; he innovated in 31 equal temperament (which was not itself a new idea but known to Francisco de Salinas), using logarithms to investigate it further and show its close relation to the meantone system.
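Huygens's logarithmic comparison of 31 equal temperament with meantone tuning is easy to reproduce; the cent values below follow from the standard definition (1200·log2 of a frequency ratio), and the quarter-comma meantone fifth 5^(1/4) is an assumed reference rather than something taken from the text:

```python
import math

def cents(ratio):
    # Interval size in cents: 1200 * log2(frequency ratio)
    return 1200 * math.log2(ratio)

step31 = 1200 / 31                 # one step of 31 equal temperament
fifth_31, third_31 = 18 * step31, 10 * step31

fifth_meantone = cents(5 ** 0.25)  # quarter-comma meantone fifth (assumed reference)
third_just = cents(5 / 4)          # just major third

print(f"fifth: 31-ET {fifth_31:.2f} c  vs meantone {fifth_meantone:.2f} c")
print(f"third: 31-ET {third_31:.2f} c  vs just     {third_just:.2f} c")
# The fifths differ by about 0.2 cents and the thirds by about 0.8 cents,
# which is why 31 equal temperament tracks meantone tuning so closely.
```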
In 1654, Huygens returned to his father's house in The Hague and was able to devote himself entirely to research. The family had another house, not far away at Hofwijck, and he spent time there during the summer. Despite being very active, his scholarly life did not allow him to escape bouts of depression.
Subsequently, Huygens developed a broad range of correspondents, though with some difficulty after 1648 due to the five-year Fronde in France. Visiting Paris in 1655, Huygens called on Ismael Boulliau to introduce himself, who took him to see Claude Mylon. The Parisian group of savants that had gathered around Mersenne held together into the 1650s, and Mylon, who had assumed the secretarial role, took some trouble to keep Huygens in touch. Through Pierre de Carcavi Huygens corresponded in 1656 with Pierre de Fermat, whom he admired greatly. The experience was bittersweet and somewhat puzzling since it became clear that Fermat had dropped out of the research mainstream, and his priority claims could probably not be made good in some cases. Besides, Huygens was looking by then to apply mathematics to physics, while Fermat's concerns ran to purer topics.
Scientific debut
Like some of his contemporaries, Huygens was often slow to commit his results and discoveries to print, preferring to disseminate his work through letters instead. In his early days, his mentor Frans van Schooten provided technical feedback and was cautious for the sake of his reputation.
Between 1651 and 1657, Huygens published a number of works that showed his talent for mathematics and his mastery of classical and analytical geometry, increasing his reach and reputation among mathematicians. Around the same time, Huygens began to question Descartes's laws of collision, which were largely wrong, deriving the correct laws algebraically and later by way of geometry. He showed that, for any system of bodies, the centre of gravity of the system remains the same in velocity and direction, which Huygens called the conservation of "quantity of movement". While others at the time were studying impact, Huygens's theory of collisions was more general. These results became the main reference point and the focus for further debates through correspondence and in a short article in Journal des Sçavans but would remain unknown to a larger audience until the publication of De Motu Corporum ex Percussione (Concerning the motion of colliding bodies) in 1703.
In addition to his mathematical and mechanical works, Huygens made important scientific discoveries: he was the first to identify Titan as one of Saturn's moons in 1655, invented the pendulum clock in 1657, and explained Saturn's strange appearance as due to a ring in 1659; all these discoveries brought him fame across Europe. On 3 May 1661, Huygens, together with astronomer Thomas Streete and Richard Reeve, observed the planet Mercury transit over the Sun using Reeve's telescope in London. Streete then debated the published record of Hevelius, a controversy mediated by Henry Oldenburg. Huygens passed to Hevelius a manuscript of Jeremiah Horrocks on the transit of Venus in 1639, printed for the first time in 1662.
In that same year, Sir Robert Moray sent Huygens John Graunt's life table, and shortly after Huygens and his brother Lodewijk dabbled on life expectancy. Huygens eventually created the first graph of a continuous distribution function under the assumption of a uniform death rate, and used it to solve problems in joint annuities. Contemporaneously, Huygens, who played the harpsichord, took an interest in Simon Stevin's theories on music; however, he showed very little concern to publish his theories on consonance, some of which were lost for centuries. For his contributions to science, the Royal Society of London elected Huygens a Fellow in 1663, making him its first foreign member when he was just 34 years old.
France
The Montmor Academy, started in the mid-1650s, was the form the old Mersenne circle took after his death. Huygens took part in its debates and supported those favouring experimental demonstration as a check on amateurish attitudes. He visited Paris a third time in 1663; when the Montmor Academy closed down the next year, Huygens advocated for a more Baconian program in science. Two years later, in 1666, he moved to Paris on an invitation to fill a leadership position at King Louis XIV's new French Académie des sciences.
While at the Académie in Paris, Huygens had an important patron and correspondent in Jean-Baptiste Colbert, First Minister to Louis XIV. His relationship with the French Académie was not always easy, and in 1670 Huygens, seriously ill, chose Francis Vernon to carry out a donation of his papers to the Royal Society in London should he die. However, the aftermath of the Franco-Dutch War (1672–78), and particularly England's role in it, may have damaged his later relationship with the Royal Society. Robert Hooke, as a Royal Society representative, lacked the finesse to handle the situation in 1673.
The physicist and inventor Denis Papin was an assistant to Huygens from 1671. One of their projects, which did not bear fruit directly, was the gunpowder engine. Huygens made further astronomical observations at the Académie using the observatory recently completed in 1672. He introduced Nicolaas Hartsoeker to French scientists such as Nicolas Malebranche and Giovanni Cassini in 1678.
The young diplomat Leibniz met Huygens while visiting Paris in 1672 on a vain mission to meet the French Foreign Minister Arnauld de Pomponne. Leibniz was working on a calculating machine at the time and, after a short visit to London in early 1673, he was tutored in mathematics by Huygens until 1676. An extensive correspondence ensued over the years, in which Huygens showed at first reluctance to accept the advantages of Leibniz's infinitesimal calculus.
Final years
Huygens moved back to The Hague in 1681 after suffering another bout of serious depressive illness. In 1684, he published Astroscopia Compendiaria on his new tubeless aerial telescope. He attempted to return to France in 1685 but the revocation of the Edict of Nantes precluded this move. His father died in 1687, and he inherited Hofwijck, which he made his home the following year.
On his third visit to England, Huygens met Newton in person on 12 June 1689. They spoke about Iceland spar, and subsequently corresponded about resisted motion.
Huygens returned to mathematical topics in his last years and observed the acoustical phenomenon now known as flanging in 1693. Two years later, on 8 July 1695, Huygens died in The Hague and was buried, like his father before him, in an unmarked grave at the Grote Kerk.
Huygens never married.
Mathematics
Huygens first became internationally known for his work in mathematics, publishing a number of important results that drew the attention of many European geometers. Huygens's preferred method in his published works was that of Archimedes, though he made use of Descartes's analytic geometry and Fermat's infinitesimal techniques more extensively in his private notebooks.
Published works
Theoremata de Quadratura
Huygens's first publication was Theoremata de Quadratura Hyperboles, Ellipsis et Circuli (Theorems on the quadrature of the hyperbola, ellipse, and circle), published by the Elzeviers in Leiden in 1651. The first part of the work contained theorems for computing the areas of hyperbolas, ellipses, and circles that paralleled Archimedes's work on conic sections, particularly his Quadrature of the Parabola. The second part included a refutation to Grégoire de Saint-Vincent's claims on circle quadrature, which he had discussed with Mersenne earlier.
Huygens demonstrated that the centre of gravity of a segment of any hyperbola, ellipse, or circle was directly related to the area of that segment. He was then able to show the relationships between triangles inscribed in conic sections and the centre of gravity for those sections. By generalizing these theorems to cover all conic sections, Huygens extended classical methods to generate new results.
Quadrature and rectification were live issues in the 1650s and, through Mylon, Huygens participated in the controversy surrounding Thomas Hobbes. By persistently highlighting his mathematical contributions, he built an international reputation.
De Circuli Magnitudine Inventa
Huygens's next publication was De Circuli Magnitudine Inventa (New findings on the magnitude of the circle), published in 1654. In this work, Huygens was able to narrow the gap between the circumscribed and inscribed polygons found in Archimedes's Measurement of the Circle, showing that the ratio of the circumference to its diameter or pi (π) must lie in the first third of that interval.
Using a technique equivalent to Richardson extrapolation, Huygens was able to shorten the inequalities used in Archimedes's method; in this case, by using the centre of gravity of a segment of a parabola, he was able to approximate the centre of gravity of a segment of a circle, resulting in a faster and more accurate approximation of the circle quadrature. From these theorems, Huygens obtained two sets of values for π: the first between 3.1415926 and 3.1415927, and the second between 3.1415926533 and 3.1415926538.
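The flavour of this acceleration can be imitated numerically. The sketch below is a modern reconstruction rather than Huygens's own procedure: it doubles the sides of an inscribed polygon in the Archimedean manner and then applies a Richardson-type extrapolation that cancels the leading error term:

```python
import math

# Side length s_n of a regular n-gon inscribed in a unit circle, doubled via
# s_{2n} = sqrt(2 - sqrt(4 - s_n**2)), starting from the hexagon (s_6 = 1).
# Half the perimeter, n*s_n/2, approximates pi from below.
n, s = 6, 1.0
estimates = []
while n <= 96:
    estimates.append((n, n * s / 2))
    s = math.sqrt(2 - math.sqrt(4 - s * s))
    n *= 2

# A Richardson-type extrapolation cancels the leading 1/n^2 error term,
# a modern analogue of the acceleration Huygens achieved geometrically.
for (_, p1), (n2, p2) in zip(estimates, estimates[1:]):
    print(f"n={n2:3d}: plain {p2:.8f}  accelerated {(4 * p2 - p1) / 3:.8f}")
print(f"pi:      {math.pi:.8f}")
```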
Huygens also showed that, in the case of the hyperbola, the same approximation with parabolic segments produces a quick and simple method to calculate logarithms. He appended a collection of solutions to classical problems at the end of the work under the title Illustrium Quorundam Problematum Constructiones (Construction of some illustrious problems).
De Ratiociniis in Ludo Aleae
Huygens became interested in games of chance after he visited Paris in 1655 and encountered the work of Fermat, Blaise Pascal and Girard Desargues years earlier. He eventually published what was, at the time, the most coherent presentation of a mathematical approach to games of chance in De Ratiociniis in Ludo Aleae (On reasoning in games of chance). Frans van Schooten translated the original Dutch manuscript into Latin and published it in his Exercitationum Mathematicarum (1657).
The work contains early game-theoretic ideas and deals in particular with the problem of points. Huygens took from Pascal the concepts of a "fair game" and equitable contract (i.e., equal division when the chances are equal), and extended the argument to set up a non-standard theory of expected values. His success in applying algebra to the realm of chance, which hitherto seemed inaccessible to mathematicians, demonstrated the power of combining Euclidean synthetic proofs with the symbolic reasoning found in the works of Viète and Descartes.
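Huygens's idea of the value of a fair game can be illustrated on the problem of points itself: when a first-to-N match between equally skilled players is interrupted, the stake should be divided according to each player's chance of eventually winning. The recursion below is a modern rendering under that equal-chances assumption, not Huygens's own notation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_win(a, b):
    """Chance that the first player eventually wins when they still need `a`
    points and the opponent needs `b`, each round being a fair 50/50 toss."""
    if a == 0:
        return 1.0
    if b == 0:
        return 0.0
    return 0.5 * p_win(a - 1, b) + 0.5 * p_win(a, b - 1)

stake = 64
share = p_win(1, 3) * stake   # match interrupted with the leader one point from victory
print(share, stake - share)   # 56.0 and 8.0: the classical 7:1 division of the stake
```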
Huygens included five challenging problems at the end of the book that became the standard test for anyone wishing to display their mathematical skill in games of chance for the next sixty years. People who worked on these problems included Abraham de Moivre, Jacob Bernoulli, Johannes Hudde, Baruch Spinoza, and Leibniz.
Unpublished work
Huygens had earlier completed a manuscript in the manner of Archimedes's On Floating Bodies entitled De Iis quae Liquido Supernatant (About parts floating above liquids). It was written around 1650 and was made up of three books. Although he sent the completed work to Frans van Schooten for feedback, in the end Huygens chose not to publish it, and at one point suggested it be burned. Some of the results found here were not rediscovered until the eighteenth and nineteenth centuries.
Huygens first re-derives Archimedes's solutions for the stability of the sphere and the paraboloid by a clever application of Torricelli's principle (i.e., that bodies in a system move only if their centre of gravity descends). He then proves the general theorem that, for a floating body in equilibrium, the distance between its centre of gravity and the centre of gravity of its submerged portion is at a minimum. Huygens uses this theorem to arrive at original solutions for the stability of floating cones, parallelepipeds, and cylinders, in some cases through a full cycle of rotation. His approach was thus equivalent to the principle of virtual work. Huygens was also the first to recognize that, for these homogeneous solids, their specific weight and their aspect ratio are the essential parameters of hydrostatic stability.
Natural philosophy
Huygens was the leading European natural philosopher between Descartes and Newton. However, unlike many of his contemporaries, Huygens had no taste for grand theoretical or philosophical systems and generally avoided dealing with metaphysical issues (if pressed, he adhered to the Cartesian philosophy of his time). Instead, Huygens excelled in extending the work of his predecessors, such as Galileo, to derive solutions to unsolved physical problems that were amenable to mathematical analysis. In particular, he sought explanations that relied on contact between bodies and avoided action at a distance.
In common with Robert Boyle and Jacques Rohault, Huygens advocated an experimentally oriented, mechanical natural philosophy during his Paris years. Already in his first visit to England in 1661, Huygens had learnt about Boyle's air pump experiments during a meeting at Gresham College. Shortly afterwards, he reevaluated Boyle's experimental design and developed a series of experiments meant to test a new hypothesis. It proved a yearslong process that brought to the surface a number of experimental and theoretical issues, and which ended around the time he became a Fellow of the Royal Society. Despite the replication of results of Boyle's experiments trailing off messily, Huygens came to accept Boyle's view of the void against the Cartesian denial of it.
Newton's influence on John Locke was mediated by Huygens, who assured Locke that Newton's mathematics was sound, leading to Locke's acceptance of a corpuscular-mechanical physics.
Laws of motion, impact, and gravitation
Elastic collisions
The general approach of the mechanical philosophers was to postulate theories of the kind now called "contact action." Huygens adopted this method but not without seeing its limitations, while Leibniz, his student in Paris, later abandoned it. Understanding the universe this way made the theory of collisions central to physics, as only explanations that involved matter in motion could be truly intelligible. While Huygens was influenced by the Cartesian approach, he was less doctrinaire. He studied elastic collisions in the 1650s but delayed publication for over a decade.
Huygens concluded quite early that Descartes's laws for elastic collisions were largely wrong, and he formulated the correct laws, including the conservation of the product of mass times the square of the speed for hard bodies, and the conservation of quantity of motion in one direction for all bodies. An important step was his recognition of the Galilean invariance of the problems. Huygens had worked out the laws of collision from 1652 to 1656 in a manuscript entitled De Motu Corporum ex Percussione, though his results took many years to be circulated. In 1661, he passed them on in person to William Brouncker and Christopher Wren in London. What Spinoza wrote to Henry Oldenburg about them in 1666, during the Second Anglo-Dutch War, was guarded. The war ended in 1667, and Huygens announced his results to the Royal Society in 1668. He later published them in the Journal des Sçavans in 1669.
Centrifugal force
In 1659 Huygens found the constant of gravitational acceleration and stated what is now known as the second of Newton's laws of motion in quadratic form. He derived geometrically the now standard formula for the centrifugal force, exerted on an object when viewed in a rotating frame of reference, for instance when driving around a curve. In modern notation:
F = mω²r, with m the mass of the object, ω the angular velocity, and r the radius. Huygens collected his results in a treatise under the title De vi Centrifuga, unpublished until 1703, where the kinematics of free fall were used to produce the first generalized conception of force prior to Newton.
Gravitation
The general idea for the centrifugal force, however, was published in 1673 and was a significant step in studying orbits in astronomy. It enabled the transition from Kepler's third law of planetary motion to the inverse square law of gravitation. Yet, the interpretation of Newton's work on gravitation by Huygens differed from that of Newtonians such as Roger Cotes: he did not insist on the a priori attitude of Descartes, but neither would he accept aspects of gravitational attractions that were not attributable in principle to contact between particles.
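The step from the centrifugal-force formula to the inverse-square law is short in modern notation; the following is a schematic derivation for a circular orbit, not Huygens's or Newton's original argument:

```latex
F = m\,\omega^{2} r,\qquad \omega=\frac{2\pi}{T}
\;\Longrightarrow\; F=\frac{4\pi^{2} m r}{T^{2}};\qquad
\text{Kepler's third law } T^{2}=k\,r^{3}
\;\Longrightarrow\; F=\frac{4\pi^{2} m}{k\,r^{2}}\;\propto\;\frac{1}{r^{2}}.
```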
The approach used by Huygens also missed some central notions of mathematical physics, which were not lost on others. In his work on pendulums Huygens came very close to the theory of simple harmonic motion; the topic, however, was covered fully for the first time by Newton in Book II of the Principia Mathematica (1687). In 1678 Leibniz picked out of Huygens's work on collisions the idea of conservation law that Huygens had left implicit.
Horology
Pendulum clock
In 1657, inspired by earlier research into pendulums as regulating mechanisms, Huygens invented the pendulum clock, which was a breakthrough in timekeeping and became the most accurate timekeeper for almost 300 years until the 1930s. The pendulum clock was much more accurate than the existing verge and foliot clocks and was immediately popular, quickly spreading over Europe. Clocks prior to this would lose about 15 minutes per day, whereas Huygens's clock would lose about 15 seconds per day. Although Huygens patented and contracted the construction of his clock designs to Salomon Coster in The Hague, he did not make much money from his invention. Pierre Séguier refused him any French rights, while Simon Douw in Rotterdam and Ahasuerus Fromanteel in London copied his design in 1658. The oldest known Huygens-style pendulum clock is dated 1657 and can be seen at the Museum Boerhaave in Leiden.
Part of the incentive for inventing the pendulum clock was to create an accurate marine chronometer that could be used to find longitude by celestial navigation during sea voyages. However, the clock proved unsuccessful as a marine timekeeper because the rocking motion of the ship disturbed the motion of the pendulum. In 1660, Lodewijk Huygens made a trial on a voyage to Spain, and reported that heavy weather made the clock useless. Alexander Bruce entered the field in 1662, and Huygens called in Sir Robert Moray and the Royal Society to mediate and preserve some of his rights. Trials continued into the 1660s, the best news coming from a Royal Navy captain Robert Holmes operating against the Dutch possessions in 1664. Lisa Jardine doubts that Holmes reported the results of the trial accurately, as Samuel Pepys expressed his doubts at the time.
A trial for the French Academy on an expedition to Cayenne ended badly. Jean Richer suggested correction for the figure of the Earth. By the time of the Dutch East India Company expedition of 1686 to the Cape of Good Hope, Huygens was able to supply the correction retrospectively.
Horologium Oscillatorium
Sixteen years after the invention of the pendulum clock, in 1673, Huygens published his major work on horology entitled Horologium Oscillatorium: Sive de Motu Pendulorum ad Horologia Aptato Demonstrationes Geometricae (The Pendulum Clock: or Geometrical demonstrations concerning the motion of pendula as applied to clocks). It is the first modern work on mechanics where a physical problem is idealized by a set of parameters then analysed mathematically.
Huygens's motivation came from the observation, made by Mersenne and others, that pendulums are not quite isochronous: their period depends on their width of swing, with wide swings taking slightly longer than narrow swings. He tackled this problem by finding the curve down which a mass will slide under the influence of gravity in the same amount of time, regardless of its starting point; the so-called tautochrone problem. By geometrical methods which anticipated the calculus, Huygens showed it to be a cycloid, rather than the circular arc of a pendulum's bob, and therefore that pendulums needed to move on a cycloid path in order to be isochronous. The mathematics necessary to solve this problem led Huygens to develop his theory of evolutes, which he presented in Part III of his Horologium Oscillatorium.
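The tautochrone property is easy to confirm numerically: integrate the descent time of a frictionless bead on a cycloid released from rest at different points and the same value, π·sqrt(a/g), appears each time. The parameter values below are illustrative:

```python
import numpy as np

g, a = 9.81, 1.0   # illustrative gravity (m/s^2) and cycloid parameter (m)

def descent_time(theta0, n=200_000):
    """Descent time of a bead released at rest at parameter theta0 on the cycloid
    x = a*(th - sin th), y = a*(1 - cos th) (y downward), reaching the bottom at th = pi.
    Energy conservation gives dt = sqrt(a/g)*sqrt((1 - cos th)/(cos theta0 - cos th)) d(th);
    the substitution th = theta0 + (pi - theta0)*u**2 removes the square-root
    singularity at the release point before a simple midpoint quadrature."""
    u = (np.arange(n) + 0.5) / n
    th = theta0 + (np.pi - theta0) * u ** 2
    integrand = np.sqrt((1 - np.cos(th)) / (np.cos(theta0) - np.cos(th)))
    jacobian = 2 * (np.pi - theta0) * u
    return np.sqrt(a / g) * np.sum(integrand * jacobian) / n

for theta0 in (0.1, 0.5, 1.0, 2.0):
    print(theta0, round(descent_time(theta0), 4))   # all ~ 1.0035 s

print(round(np.pi * np.sqrt(a / g), 4))             # pi*sqrt(a/g), independent of the start
```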
He also solved a problem posed by Mersenne earlier: how to calculate the period of a pendulum made of an arbitrarily-shaped swinging rigid body. This involved discovering the centre of oscillation and its reciprocal relationship with the pivot point. In the same work, he analysed the conical pendulum, consisting of a weight on a cord moving in a circle, using the concept of centrifugal force.
Huygens was the first to derive the formula for the period of an ideal mathematical pendulum (with mass-less rod or cord and length much longer than its swing), in modern notation:
T = 2π√(l/g)
with T the period, l the length of the pendulum and g the gravitational acceleration. By his study of the oscillation period of compound pendulums Huygens made pivotal contributions to the development of the concept of moment of inertia.
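As a quick numerical check (taking g as 9.81 m/s², an assumption not in the text), a pendulum about one metre long has a period of roughly two seconds, essentially the seconds pendulum of Huygens's era:

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-swing period of an ideal pendulum, T = 2*pi*sqrt(l/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

print(round(pendulum_period(1.0), 3))     # ~2.006 s for a 1 m pendulum
print(round(pendulum_period(0.994), 3))   # ~2.000 s, roughly the classical seconds-pendulum length
```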
Huygens also observed coupled oscillations: two of his pendulum clocks mounted next to each other on the same support often became synchronized, swinging in opposite directions. He reported the results by letter to the Royal Society, and it is referred to as "an odd kind of sympathy" in the Society's minutes. This concept is now known as entrainment.
Balance spring watch
In 1675, while investigating the oscillating properties of the cycloid, Huygens was able to transform a cycloidal pendulum into a vibrating spring through a combination of geometry and higher mathematics. In the same year, Huygens designed a spiral balance spring and patented a pocket watch. These watches are notable for lacking a fusee for equalizing the mainspring torque. The implication is that Huygens thought his spiral spring would isochronize the balance in the same way that cycloid-shaped suspension curbs on his clocks would isochronize the pendulum.
He later used spiral springs in more conventional watches, made for him by Thuret in Paris. Such springs are essential in modern watches with a detached lever escapement because they can be adjusted for isochronism. Watches in Huygens's time, however, employed the very ineffective verge escapement, which interfered with the isochronal properties of any form of balance spring, spiral or otherwise.
Huygens's design came around the same time as, though independently of, Robert Hooke's. Controversy over the priority of the balance spring persisted for centuries. In February 2006, a long-lost copy of Hooke's handwritten notes from several decades of Royal Society meetings was discovered in a cupboard in Hampshire, England, presumably tipping the evidence in Hooke's favour.
Optics
Dioptrics
Huygens had a long-term interest in the study of light refraction and lenses or dioptrics. From 1652 date the first drafts of a Latin treatise on the theory of dioptrics, known as the Tractatus, which contained a comprehensive and rigorous theory of the telescope. Huygens was one of the few to raise theoretical questions regarding the properties and working of the telescope, and almost the only one to direct his mathematical proficiency towards the actual instruments used in astronomy.
Huygens repeatedly announced its publication to his colleagues but ultimately postponed it in favor of a much more comprehensive treatment, now under the name of the Dioptrica. It consisted of three parts. The first part focused on the general principles of refraction, the second dealt with spherical and chromatic aberration, while the third covered all aspects of the construction of telescopes and microscopes. In contrast to Descartes' dioptrics which treated only ideal (elliptical and hyperbolical) lenses, Huygens dealt exclusively with spherical lenses, which were the only kind that could really be made and incorporated in devices such as microscopes and telescopes.
Huygens also worked out practical ways to minimize the effects of spherical and chromatic aberration, such as long focal distances for the objective of a telescope, internal stops to reduce the aperture, and a new kind of ocular known as the Huygenian eyepiece. The Dioptrica was never published in Huygens’s lifetime and only appeared in press in 1703, when most of its contents were already familiar to the scientific world.
Lenses
Together with his brother Constantijn, Huygens began grinding his own lenses in 1655 in an effort to improve telescopes. He designed in 1662 what is now called the Huygenian eyepiece, a set of two planoconvex lenses used as a telescope ocular. Huygens's lenses were known to be of superb quality and polished consistently according to his specifications; however, his telescopes did not produce very sharp images, leading some to speculate that he might have suffered from near-sightedness.
Lenses were also a common interest through which Huygens could meet socially in the 1660s with Spinoza, who ground them professionally. They had rather different outlooks on science, Spinoza being the more committed Cartesian, and some of their discussion survives in correspondence. He encountered the work of Antoni van Leeuwenhoek, another lens grinder, in the field of microscopy which interested his father. Huygens also investigated the use of lenses in projectors. He is credited as the inventor of the magic lantern, described in correspondence of 1659. There are others to whom such a lantern device has been attributed, such as Giambattista della Porta and Cornelis Drebbel, though Huygens's design used lenses for better projection (Athanasius Kircher has also been credited for that).
Traité de la Lumière
Huygens is especially remembered in optics for his wave theory of light, which he first communicated in 1678 to the Académie des sciences in Paris. Originally a preliminary chapter of his Dioptrica, Huygens's theory was published in 1690 under the title Traité de la Lumière (Treatise on light), and contains the first fully mathematized, mechanistic explanation of an unobservable physical phenomenon (i.e., light propagation). Huygens refers to Ignace-Gaston Pardies, whose manuscript on optics helped him on his wave theory.
The challenge at the time was to explain geometrical optics, as most physical optics phenomena (such as diffraction) had not been observed or appreciated as issues. Huygens had experimented in 1672 with double refraction (birefringence) in the Iceland spar (a calcite), a phenomenon discovered in 1669 by Rasmus Bartholin. At first, he could not elucidate what he found but was later able to explain it using his wavefront theory and concept of evolutes. He also developed ideas on caustics. Huygens assumed that the speed of light was finite, based on a 1677 report by Ole Christensen Rømer, though Huygens is presumed to have already believed it. Huygens's theory posits light as radiating wavefronts, with the common notion of light rays depicting propagation normal to those wavefronts. Propagation of the wavefronts is then explained as the result of spherical waves being emitted at every point along the wave front (known today as the Huygens–Fresnel principle). It assumed an omnipresent ether, with transmission through perfectly elastic particles, a revision of the view of Descartes. The nature of light was therefore a longitudinal wave.
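In modern notation, due to Fresnel and later writers rather than to Huygens himself, the wavelet construction is often summarized by an integral of roughly the following form, where the prefactor and the obliquity factor K(χ) depend on the convention used:

```latex
U(P) = \frac{1}{i\lambda} \iint_{S} U(Q)\, \frac{e^{ikr}}{r}\, K(\chi)\, \mathrm{d}S
```

Each point Q of the wavefront S is treated as the source of a spherical wavelet e^{ikr}/r, and the field at the observation point P is the superposition of those wavelets.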
His theory of light was not widely accepted, while Newton's rival corpuscular theory of light, as found in his Opticks (1704), gained more support. One strong objection to Huygens's theory was that longitudinal waves have only a single polarization which cannot explain the observed birefringence. However, Thomas Young's interference experiments in 1801, and François Arago's detection of the Poisson spot in 1819, could not be explained through Newton's or any other particle theory, reviving Huygens's ideas and wave models. Fresnel became aware of Huygens's work and in 1821 was able to explain birefringence as a result of light being not a longitudinal (as had been assumed) but actually a transverse wave. The thus-named Huygens–Fresnel principle was the basis for the advancement of physical optics, explaining all aspects of light propagation until the advent of Maxwell's electromagnetic theory and, ultimately, the development of quantum mechanics and the discovery of the photon.
Astronomy
Systema Saturnium
In 1655, Huygens discovered the first of Saturn's moons, Titan, and observed and sketched the Orion Nebula using a refracting telescope with a 43x magnification of his own design. Huygens succeeded in subdividing the nebula into different stars (the brighter interior now bears the name of the Huygenian region in his honour), and discovered several interstellar nebulae and some double stars. He was also the first to propose that the appearance of Saturn, which had baffled astronomers, was due to "a thin, flat ring, nowhere touching, and inclined to the ecliptic".
More than three years later, in 1659, Huygens published his theory and findings in Systema Saturnium. It is considered the most important work on telescopic astronomy since Galileo's Sidereus Nuncius fifty years earlier. Much more than a report on Saturn, Huygens provided measurements for the relative distances of the planets from the Sun, introduced the concept of the micrometer, and showed a method to measure angular diameters of planets, which finally allowed the telescope to be used as an instrument to measure (rather than just sight) astronomical objects. He was also the first to question the authority of Galileo in telescopic matters, a sentiment that was to be common in the years following its publication.
In the same year, Huygens was able to observe Syrtis Major, a volcanic plain on Mars. He used repeated observations of the movement of this feature over the course of a number of days to estimate the length of day on Mars, which he did quite accurately to 24 1/2 hours. This figure is only a few minutes off of the actual length of the Martian day of 24 hours, 37 minutes.
Planetarium
At the instigation of Jean-Baptiste Colbert, Huygens undertook the task of constructing a mechanical planetarium that could display all the planets and their moons then known circling around the Sun. Huygens completed his design in 1680 and had his clockmaker Johannes van Ceulen build it the following year. However, Colbert passed away in the interim and Huygens never got to deliver his planetarium to the French Academy of Sciences as the new minister, François-Michel le Tellier, decided not to renew Huygens's contract.
In his design, Huygens made an ingenious use of continued fractions to find the best rational approximations by which he could choose the gears with the correct number of teeth. The ratio between two gears determined the orbital periods of two planets. To move the planets around the Sun, Huygens used a clock-mechanism that could go forwards and backwards in time. Huygens claimed his planetarium was more accurate than a similar device constructed by Ole Rømer around the same time, but his planetarium design was not published until after his death in the Opuscula Posthuma (1703).
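The gear-selection idea can be sketched in a few lines of code: compute the continued-fraction convergents of the desired ratio of two orbital periods and read small numerators and denominators off as candidate tooth counts. The target ratio below is purely illustrative, not a figure taken from Huygens's notes.

```python
from fractions import Fraction

def convergents(x, max_terms=8):
    """Yield continued-fraction convergents p/q of a positive ratio x."""
    p_prev, p = 0, 1          # h(-2), h(-1) in the standard recurrence
    q_prev, q = 1, 0          # k(-2), k(-1)
    for _ in range(max_terms):
        a = int(x)                        # next partial quotient
        p_prev, p = p, a * p + p_prev     # h(n) = a*h(n-1) + h(n-2)
        q_prev, q = q, a * q + q_prev     # k(n) = a*k(n-1) + k(n-2)
        yield Fraction(p, q)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac

# Hypothetical target: one planet completes 29.4255 orbits while the other completes one.
for c in convergents(29.4255):
    print(f"{c.numerator:>6}/{c.denominator:<4} = {float(c):.5f}")
# Early convergents such as 206/7 suggest a workable pair of gear wheels
# (206 and 7 teeth) that approximates the period ratio closely.
```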
Cosmotheoros
Shortly before his death in 1695, Huygens completed his most speculative work entitled Cosmotheoros. At his direction, it was to be published only posthumously, which his brother Constantijn Jr. did in 1698. In this work, Huygens speculated on the existence of extraterrestrial life, which he imagined similar to that on Earth. Such speculations were not uncommon at the time, justified by Copernicanism or the plenitude principle, but Huygens went into greater detail, though without acknowledging Newton's laws of gravitation or the fact that planetary atmospheres are composed of different gases. Cosmotheoros, translated into English as The celestial worlds discover’d, is fundamentally a utopian work that owes some inspiration to the work of Peter Heylin, and it was likely seen by contemporary readers as a piece of fiction in the tradition of Francis Godwin, John Wilkins, and Cyrano de Bergerac.
Huygens wrote that availability of water in liquid form was essential for life and that the properties of water must vary from planet to planet to suit the temperature range. He took his observations of dark and bright spots on the surfaces of Mars and Jupiter to be evidence of water and ice on those planets. He argued that extraterrestrial life is neither confirmed nor denied by the Bible, and questioned why God would create the other planets if they were not to serve a greater purpose than that of being admired from Earth. Huygens postulated that the great distance between the planets signified that God had not intended for beings on one to know about the beings on the others, and had not foreseen how much humans would advance in scientific knowledge.
It was also in this book that Huygens published his estimates for the relative sizes of the solar system and his method for calculating stellar distances. He made a series of smaller holes in a screen facing the Sun, until he estimated the light was of the same intensity as that of the star Sirius. He then calculated that the angle of this hole was 1/27,664th the diameter of the Sun, and thus it was about 30,000 times as far away, on the (incorrect) assumption that Sirius is as luminous as the Sun. The subject of photometry remained in its infancy until the time of Pierre Bouguer and Johann Heinrich Lambert.
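A minimal sketch of the inverse-square reasoning behind the figure quoted above; the rounding to "about 30,000" is the article's, while the exact ratio falls straight out of the arithmetic.

```python
import math

# Huygens's aperture admitted a disc of sunlight whose angular diameter he
# estimated at 1/27,664 of the Sun's, so it passed that fraction squared of
# the Sun's light.
aperture_fraction = 1 / 27_664
admitted_light = aperture_fraction ** 2

# Assume (incorrectly, as the text notes) that Sirius is exactly as luminous
# as the Sun.  Equal apparent brightness then means Sirius's flux equals the
# Sun's diminished by that factor, and flux falls off with distance squared.
distance_ratio = 1 / math.sqrt(admitted_light)
print(f"Sirius would lie about {distance_ratio:,.0f} times farther away than the Sun")
# prints 27,664, consistent with the "about 30,000 times" quoted above.
```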
Legacy
Huygens has been called the first theoretical physicist and a founder of modern mathematical physics. Although his influence was considerable during his lifetime, it began to fade shortly after his death. His skills as a geometer and mechanical ingenuity elicited the admiration of many of his contemporaries, including Newton, Leibniz, l'Hôpital, and the Bernoullis. For his work in physics, Huygens has been deemed one of the greatest scientists in the Scientific Revolution, rivaled only by Newton in both depth of insight and the number of results obtained. Huygens also helped develop the institutional frameworks for scientific research on the European continent, making him a leading actor in the establishment of modern science.
Mathematics and physics
In mathematics, Huygens mastered the methods of ancient Greek geometry, particularly the work of Archimedes, and was an adept user of the analytic geometry and infinitesimal techniques of Descartes and Fermat. His mathematical style can be best described as geometrical infinitesimal analysis of curves and of motion. Drawing inspiration and imagery from mechanics, it remained pure mathematics in form. Huygens brought this type of geometrical analysis to a close, as more mathematicians turned away from classical geometry to the calculus for handling infinitesimals, limit processes, and motion.
Huygens was moreover able to fully employ mathematics to answer questions of physics. Often this entailed introducing a simple model for describing a complicated situation, then analyzing it starting from simple arguments to their logical consequences, developing the necessary mathematics along the way. As he wrote at the end of a draft of De vi Centrifuga:
Huygens favoured axiomatic presentations of his results, which require rigorous methods of geometric demonstration: although he allowed levels of uncertainty in the selection of primary axioms and hypotheses, the proofs of theorems derived from these could never be in doubt. Huygens's style of publication exerted an influence in Newton's presentation of his own major works.
Besides the application of mathematics to physics and physics to mathematics, Huygens relied on mathematics as methodology, specifically its ability to generate new knowledge about the world. Unlike Galileo, who used mathematics primarily as rhetoric or synthesis, Huygens consistently employed mathematics as a way to discover and develop theories covering various phenomena and insisted that the reduction of the physical to the geometrical satisfy exacting standards of fit between the real and the ideal. In demanding such mathematical tractability and precision, Huygens set an example for eighteenth-century scientists such as Johann Bernoulli, Jean le Rond d'Alembert, and Charles-Augustin de Coulomb.
Although they were never intended for publication, Huygens made use of algebraic expressions to represent physical entities in a handful of his manuscripts on collisions. This would make him one of the first to employ mathematical formulae to describe relationships in physics, as is done today. Huygens also came close to the modern idea of limit while working on his Dioptrica, though he never used the notion outside geometrical optics.
Later influence
Huygens's standing as the greatest scientist in Europe was eclipsed by Newton's at the end of the seventeenth century, despite the fact that, as Hugh Aldersey-Williams notes, "Huygens's achievement exceeds that of Newton in some important respects". Although his journal publications anticipated the form of the modern scientific article, his persistent classicism and reluctance to publish his work did much to diminish his influence in the aftermath of the Scientific Revolution, as adherents of Leibniz’ calculus and Newton's physics took centre stage.
Huygens's analyses of curves that satisfy certain physical properties, such as the cycloid, led to later studies of many other such curves like the caustic, the brachistochrone, the sail curve, and the catenary. His application of mathematics to physics, such as in his studies of impact and birefringence, would inspire new developments in mathematical physics and rational mechanics in the following centuries (albeit in the new language of the calculus). Additionally, Huygens developed the oscillating timekeeping mechanisms, the pendulum and the balance spring, that have been used ever since in mechanical watches and clocks. These were the first reliable timekeepers fit for scientific use (e.g., to make accurate measurements of the inequality of the solar day, which was not possible before). His work in this area foreshadowed the union of applied mathematics with mechanical engineering in the centuries that followed.
Portraits
During his lifetime, Huygens and his father had a number of portraits commissioned. These included:
1639 – Constantijn Huygens in the midst of his five children by Adriaen Hanneman, painting with medallions, Mauritshuis, The Hague
1671 – Portrait by Caspar Netscher, Museum Boerhaave, Leiden, loan from Haags Historisch Museum
c.1675 – Depiction of Huygens in Établissement de l'Académie des Sciences et fondation de l'observatoire, 1666 by Henri Testelin. Colbert presents the members of the newly founded Académie des Sciences to king Louis XIV of France. Musée National du Château et des Trianons de Versailles, Versailles
1679 – Medallion portrait in relief by the French sculptor Jean-Jacques Clérion
1686 – Portrait in pastel by Bernard Vaillant, Museum Hofwijck, Voorburg
1684 to 1687 – Engravings by G. Edelinck after the painting by Caspar Netscher
1688 – Portrait by Pierre Bourguignon (painter), Royal Netherlands Academy of Arts and Sciences, Amsterdam
Commemorations
The European Space Agency spacecraft that landed on Titan, Saturn's largest moon, in 2005 was named after him.
A number of monuments to Christiaan Huygens can be found across important cities in the Netherlands, including Rotterdam, Delft, and Leiden.
Works
1650 – De Iis Quae Liquido Supernatant (About parts floating above liquids), unpublished.
1651 – Theoremata de Quadratura Hyperboles, Ellipsis et Circuli, republished in Oeuvres Complètes, Tome XI.
1651 – Epistola, qua diluuntur ea quibus 'Εξέτασις [Exetasis] Cyclometriae Gregori à Sto. Vincentio impugnata fuit, supplement.
1654 – De Circuli Magnitudine Inventa.
1654 – Illustrium Quorundam Problematum Constructiones, supplement.
1655 – Horologium (The clock), short pamphlet on the pendulum clock.
1656 – De Saturni Luna Observatio Nova (About the new observation of the moon of Saturn), describes the discovery of Titan.
1656 – De Motu Corporum ex Percussione, published posthumously in 1703.
1657 – De Ratiociniis in Ludo Aleae (Van reeckening in spelen van geluck), translated into Latin by Frans van Schooten.
1659 – Systema Saturnium (System of Saturn).
1659 – De vi Centrifuga (Concerning the centrifugal force), published posthumously in 1703.
1673 – Horologium Oscillatorium Sive de Motu Pendulorum ad Horologia Aptato Demonstrationes Geometricae, includes a theory of evolutes and designs of pendulum clocks, dedicated to Louis XIV of France.
1684 – Astroscopia Compendiaria Tubi Optici Molimine Liberata (Compound telescopes without a tube).
1685 – Memoriën aengaende het slijpen van glasen tot verrekijckers, dealing with the grinding of lenses.
1686 – Kort onderwijs aengaende het gebruijck der horologiën tot het vinden der lenghten van Oost en West (in Old Dutch), instructions on how to use clocks to establish the longitude at sea.
1690 – Traité de la Lumière, dealing with the nature of light propagation.
1690 – Discours de la Cause de la Pesanteur (Discourse about gravity), supplement.
1691 – Lettre Touchant le Cycle Harmonique, short tract concerning the 31-tone system.
1698 – Cosmotheoros, deals with the solar system, cosmology, and extraterrestrial life.
1703 – Opuscula Posthuma including:
De Motu Corporum ex Percussione (Concerning the motions of colliding bodies), contains the first correct laws for collision, dating from 1656.
Descriptio Automati Planetarii, provides a description and design of a planetarium.
1724 – Novus Cyclus Harmonicus, a treatise on music published in Leiden after Huygens's death.
1728 – Christiani Hugenii Zuilichemii, dum viveret Zelhemii Toparchae, Opuscula Posthuma (alternate title: Opera Reliqua), includes works in optics and physics.
1888–1950 – Huygens, Christiaan. Oeuvres complètes. Complete works, 22 volumes. Editors D. Bierens de Haan (1–5), J. Bosscha (6–10), D.J. Korteweg (11–15), A.A. Nijland (15), J.A. Vollgraf (16–22). The Hague:
Tome I: Correspondance 1638–1656 (1888).
Tome II: Correspondance 1657–1659 (1889).
Tome III: Correspondance 1660–1661 (1890).
Tome IV: Correspondance 1662–1663 (1891).
Tome V: Correspondance 1664–1665 (1893).
Tome VI: Correspondance 1666–1669 (1895).
Tome VII: Correspondance 1670–1675 (1897).
Tome VIII: Correspondance 1676–1684 (1899).
Tome IX: Correspondance 1685–1690 (1901).
Tome X: Correspondance 1691–1695 (1905).
Tome XI: Travaux mathématiques 1645–1651 (1908).
Tome XII: Travaux mathématiques pures 1652–1656 (1910).
Tome XIII, Fasc. I: Dioptrique 1653, 1666 (1916).
Tome XIII, Fasc. II: Dioptrique 1685–1692 (1916).
Tome XIV: Calcul des probabilités. Travaux de mathématiques pures 1655–1666 (1920).
Tome XV: Observations astronomiques. Système de Saturne. Travaux astronomiques 1658–1666 (1925).
Tome XVI: Mécanique jusqu’à 1666. Percussion. Question de l'existence et de la perceptibilité du mouvement absolu. Force centrifuge (1929).
Tome XVII: L’horloge à pendule de 1651 à 1666. Travaux divers de physique, de mécanique et de technique de 1650 à 1666. Traité des couronnes et des parhélies (1662 ou 1663) (1932).
Tome XVIII: L'horloge à pendule ou à balancier de 1666 à 1695. Anecdota (1934).
Tome XIX: Mécanique théorique et physique de 1666 à 1695. Huygens à l'Académie royale des sciences (1937).
Tome XX: Musique et mathématique. Musique. Mathématiques de 1666 à 1695 (1940).
Tome XXI: Cosmologie (1944).
Tome XXII: Supplément à la correspondance. Varia. Biographie de Chr. Huygens. Catalogue de la vente des livres de Chr. Huygens (1950).
See also
Concepts
Huygens–Steiner theorem
Huygens's principle
Huygens's lemniscate
Evolute
Centre of oscillation
People
René Descartes
Galileo Galilei
Isaac Newton
Gottfried Wilhelm Leibniz
Technology
History of the internal combustion engine
List of largest optical telescopes historically
Fokker Organ
Seconds pendulum
References
Further reading
Andriesse, C.D. (2005). Huygens: The Man Behind the Principle. Foreword by Sally Miedema. Cambridge University Press.
Aldersey-Williams, Hugh. (2020). Dutch Light: Christiaan Huygens and the Making of Science in Europe. London: Picador.
Bell, A. E. (1947). Christian Huygens and the Development of Science in the Seventeenth Century
Boyer, C.B. (1968). A History of Mathematics, New York.
Dijksterhuis, E. J. (1961). The Mechanization of the World Picture: Pythagoras to Newton
Hooijmaijers, H. (2005). Telling time – Devices for time measurement in Museum Boerhaave – A Descriptive Catalogue, Leiden, Museum Boerhaave.
Struik, D.J. (1948). A Concise History of Mathematics
Van den Ende, H. et al. (2004). Huygens's Legacy, The golden age of the pendulum clock, Fromanteel Ltd, Castle Town, Isle of Man.
Yoder, J. G. (2005). "Book on the pendulum clock" in Ivor Grattan-Guinness, ed., Landmark Writings in Western Mathematics. Elsevier: 33–45.
External links
Primary sources, translations
C. Huygens (translated by Silvanus P. Thompson, 1912), Treatise on Light; Errata.
De Ratiociniis in Ludo Aleae or The Value of all Chances in Games of Fortune, 1657 Christiaan Huygens's book on probability theory. An English translation published in 1714. Text pdf file.
Horologium oscillatorium (German translation, pub. 1913) or Horologium oscillatorium (English translation by Ian Bruce) on the pendulum clock
ΚΟΣΜΟΘΕΩΡΟΣ (Cosmotheoros). (English translation of Latin, pub. 1698; subtitled The celestial worlds discover'd: or, Conjectures concerning the inhabitants, plants and productions of the worlds in the planets.)
C. Huygens (translated by Silvanus P. Thompson), Traité de la lumière or Treatise on light, London: Macmillan, 1912, archive.org/details/treatiseonlight031310mbp; New York: Dover, 1962; Project Gutenberg, 2005, gutenberg.org/ebooks/14725; Errata
Systema Saturnium 1659 text a digital edition of Smithsonian Libraries
On Centrifugal Force (1703)
Huygens's work at WorldCat
The Correspondence of Christiaan Huygens in EMLO
Christiaan Huygens biography and achievements
Portraits of Christiaan Huygens
Huygens's books, in digital facsimile from the Linda Hall Library:
(1659) Systema Saturnium (Latin)
(1684) Astroscopia compendiaria (Latin)
(1690) Traité de la lumiére (French)
(1698) ΚΟΣΜΟΘΕΩΡΟΣ, sive De terris cœlestibus (Latin)
Museums
Huygensmuseum Hofwijck in Voorburg, Netherlands, where Huygens lived and worked.
Huygens Clocks exhibition from the Science Museum, London
Online exhibition on Huygens in Leiden University Library
Other
Huygens and music theory Huygens–Fokker Foundation —on Huygens's 31 equal temperament and how it has been used
Christiaan Huygens on the 25 Dutch Guilder banknote of the 1950s.
How to pronounce "Huygens"
Christiaan
17th-century Dutch mathematicians
17th-century Dutch writers
17th-century writers in Latin
17th-century Dutch inventors
17th-century Dutch engineers
17th-century Dutch philosophers
Members of the French Academy of Sciences
Original fellows of the Royal Society
Leiden University alumni
Scientists from The Hague
Geometers
Optical physicists
Dutch theoretical physicists
Discoverers of moons
Astronomy in the Dutch Republic
Dutch clockmakers
Dutch music theorists
Dutch scientific instrument makers
Dutch members of the Dutch Reformed Church
1629 births
1695 deaths | Christiaan Huygens | [
"Mathematics"
] | 12,284 | [
"Geometers",
"Geometry"
] |
42,168 | https://en.wikipedia.org/wiki/Data%20communication | Data communication, including data transmission and data reception, is the transfer of data, transmitted and received over a point-to-point or point-to-multipoint communication channel. Examples of such channels are copper wires, optical fibers, wireless communication using radio spectrum, storage media and computer buses. The data are represented as an electromagnetic signal, such as an electrical voltage, radiowave, microwave, or infrared signal.
Analog transmission is a method of conveying voice, data, image, signal or video information using a continuous signal that varies in amplitude, phase, or some other property in proportion to that of a variable. The messages are either represented by a sequence of pulses by means of a line code (baseband transmission), or by a limited set of continuously varying waveforms (passband transmission), using a digital modulation method. The passband modulation and corresponding demodulation is carried out by modem equipment.
Digital communications, including digital transmission and digital reception, is the transfer of either a digitized analog signal or a born-digital bitstream. According to the most common definition, both baseband and passband bit-stream components are considered part of a digital signal; an alternative definition considers only the baseband signal as digital, and passband transmission of digital data as a form of digital-to-analog conversion.
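As a toy illustration of the baseband/passband distinction described above, the sketch below maps one short bit stream both to a polar NRZ line code (baseband) and to a binary phase-shift keyed carrier (passband). The sample rate, carrier frequency, and bit pattern are arbitrary illustrative choices, not values taken from any standard.

```python
import numpy as np

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
samples_per_bit = 50
fs = 10_000   # sample rate in Hz (arbitrary)
fc = 1_000    # carrier frequency in Hz (arbitrary)

# Baseband transmission: a polar NRZ line code maps 1 -> +1 and 0 -> -1.
nrz = np.repeat(2 * bits - 1, samples_per_bit).astype(float)

# Passband transmission: binary phase-shift keying flips the carrier phase by
# 180 degrees for each 0 bit; a modem's demodulator undoes this at the far end.
t = np.arange(nrz.size) / fs
bpsk = nrz * np.cos(2 * np.pi * fc * t)

print(nrz[:3], bpsk[:3])
```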
Distinction between related subjects
Courses and textbooks in the field of data transmission as well as digital transmission and digital communications have similar content.
Digital transmission or data transmission traditionally belongs to telecommunications and electrical engineering. Basic principles of data transmission may also be covered within the computer science or computer engineering topic of data communications, which also includes computer networking applications and communication protocols, for example routing, switching and inter-process communication. Although the Transmission Control Protocol (TCP) involves transmission, TCP and other transport layer protocols are covered in computer networking but not discussed in a textbook or course about data transmission.
In most textbooks, the term analog transmission only refers to the transmission of an analog message signal (without digitization) by means of an analog signal, either as a non-modulated baseband signal or as a passband signal using an analog modulation method such as AM or FM. It may also include analog-over-analog pulse modulated baseband signals such as pulse-width modulation. In a few books within the computer networking tradition, analog transmission also refers to passband transmission of bit-streams using digital modulation methods such as FSK, PSK and ASK.
The theoretical aspects of data transmission are covered by information theory and coding theory.
Protocol layers and sub-topics
Courses and textbooks in the field of data transmission typically deal with the following OSI model protocol layers and topics:
Layer 1, the physical layer:
Channel coding including
Digital modulation schemes
Line coding schemes
Forward error correction (FEC) codes
Bit synchronization
Multiplexing
Equalization
Channel models
Layer 2, the data link layer:
Channel access schemes, media access control (MAC)
Packet mode communication and Frame synchronization
Error detection and automatic repeat request (ARQ)
Flow control
Layer 6, the presentation layer:
Source coding (digitization and data compression), and information theory.
Cryptography (may occur at any layer)
It is also common to deal with the cross-layer design of those three layers.
Applications and history
Data (mainly but not exclusively informational) has been sent via non-electronic (e.g. optical, acoustic, mechanical) means since the advent of communication. Analog signal data has been sent electronically since the advent of the telephone. However, the first electromagnetic data transmission applications in modern times were electrical telegraphy (1809) and teletypewriters (1906), both of which used digital signals. The fundamental theoretical work in data transmission and information theory by Harry Nyquist, Ralph Hartley, Claude Shannon and others during the early 20th century was done with these applications in mind.
In the early 1960s, Paul Baran invented distributed adaptive message block switching for digital communication of voice messages using switches that were low-cost electronics. Donald Davies invented and implemented modern data communication during 1965-7, including packet switching, high-speed routers, communication protocols, hierarchical computer networks and the essence of the end-to-end principle. Baran's work did not include routers with software switches and communication protocols, nor the idea that users, rather than the network itself, would provide the reliability. Both were seminal contributions that influenced the development of computer networks.
Data transmission is utilized in computers in computer buses and for communication with peripheral equipment via parallel ports and serial ports such as RS-232 (1969), FireWire (1995) and USB (1996). The principles of data transmission have also been utilized in storage media for error detection and correction since 1951. The first practical method to overcome the problem of a receiver accurately receiving data encoded in a digital code was the Barker code, invented by Ronald Hugh Barker in 1952 and published in 1953. Data transmission is utilized in computer networking equipment such as modems (1940), local area network (LAN) adapters (1964), repeaters, repeater hubs, microwave links, wireless network access points (1997), etc.
In telephone networks, digital communication is utilized for transferring many phone calls over the same copper cable or fiber cable by means of pulse-code modulation (PCM) in combination with time-division multiplexing (TDM) (1962). Telephone exchanges have become digital and software controlled, facilitating many value-added services. For example, the first AXE telephone exchange was presented in 1976. Digital communication to the end user using Integrated Services Digital Network (ISDN) services became available in the late 1980s. Since the end of the 1990s, broadband access techniques such as ADSL, Cable modems, fiber-to-the-building (FTTB) and fiber-to-the-home (FTTH) have become widespread to small offices and homes. The current tendency is to replace traditional telecommunication services with packet mode communication such as IP telephony and IPTV.
Transmitting analog signals digitally allows for greater signal processing capability. The ability to process a communications signal means that errors caused by random processes can be detected and corrected. Digital signals can also be sampled instead of continuously monitored. The multiplexing of multiple digital signals is much simpler compared to the multiplexing of analog signals. Because of all these advantages, because of the vast demand to transmit computer data and the ability of digital communications to do so, and because recent advances in wideband communication channels and solid-state electronics have allowed engineers to realize these advantages fully, digital communications have grown quickly.
The digital revolution has also resulted in many digital telecommunication applications where the principles of data transmission are applied. Examples include second-generation (1991) and later cellular telephony, video conferencing, digital TV (1998), digital radio (1999), and telemetry.
Data transmission, digital transmission or digital communications is the transfer of data over a point-to-point or point-to-multipoint communication channel. Examples of such channels include copper wires, optical fibers, wireless communication channels, storage media and computer buses. The data are represented as an electromagnetic signal, such as an electrical voltage, radio wave, microwave, or infrared light.
While analog transmission is the transfer of a continuously varying analog signal over an analog channel, digital communication is the transfer of discrete messages over a digital or an analog channel. The messages are either represented by a sequence of pulses by means of a line code (baseband transmission), or by a limited set of continuously varying wave forms (passband transmission), using a digital modulation method. The passband modulation and corresponding demodulation (also known as detection) is carried out by modem equipment. According to the most common definition of a digital signal, both baseband and passband signals representing bit-streams are considered as digital transmission, while an alternative definition only considers the baseband signal as digital, and passband transmission of digital data as a form of digital-to-analog conversion.
Data transmitted may be digital messages originating from a data source, for example a computer or a keyboard. It may also be an analog signal such as a phone call or a video signal, digitized into a bit-stream for example using pulse-code modulation (PCM) or more advanced source coding (analog-to-digital conversion and data compression) schemes. This source coding and decoding is carried out by codec equipment.
Serial and parallel transmission
In telecommunications, serial transmission is the sequential transmission of signal elements of a group representing a character or other entity of data. Digital serial transmissions are bits sent over a single wire, frequency or optical path sequentially. Because it requires less signal processing and has fewer chances for error than parallel transmission, the transfer rate of each individual path may be faster. It can also be used over longer distances, and a check digit or parity bit can be sent along with the data easily.
Parallel transmission is the simultaneous transmission of related signal elements over two or more separate paths. Multiple electrical wires are used that can transmit multiple bits simultaneously, which allows for higher data transfer rates than can be achieved with serial transmission. This method is typically used internally within the computer, for example in the internal buses, and sometimes externally for such things as printers. Timing skew can be a significant issue in these systems because the wires in parallel data transmission unavoidably have slightly different properties, so some bits may arrive before others, which may corrupt the message. This issue tends to worsen with distance, making parallel data transmission less reliable for long distances.
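As a rough sketch of the contrast (a toy model, not any real bus or port protocol), the snippet below sends one byte either bit by bit on a single line with an appended even-parity check bit, or all at once with one bit per parallel wire:

```python
def serialize_byte(value: int) -> list[int]:
    """One wire: send the eight bits in sequence (LSB first) plus an even-parity bit."""
    bits = [(value >> i) & 1 for i in range(8)]
    parity = sum(bits) % 2          # makes the total number of 1s even
    return bits + [parity]

def parallelize_byte(value: int, lanes: int = 8) -> list[int]:
    """Eight wires: present every bit of the byte at the same instant, one per lane."""
    return [(value >> i) & 1 for i in range(lanes)]

print(serialize_byte(0x5A))    # nine symbols, sent one after another
print(parallelize_byte(0x5A))  # eight symbols, sent simultaneously
```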
Communication channels
Some communications channel types include:
Data transmission circuit
Full-duplex
Half-duplex
Simplex
Multi-drop:
Bus network
Mesh network
Ring network
Star network
Wireless network
Point-to-point
Asynchronous and synchronous data transmission
Asynchronous serial communication uses start and stop bits to signify the beginning and end of transmission. This method of transmission is used when data are sent intermittently as opposed to in a solid stream.
Synchronous transmission synchronizes transmission speeds at both the receiving and sending end of the transmission using clock signals. The clock may be a separate signal or embedded in the data. A continual stream of data is then sent between the two nodes. Due to there being no start and stop bits, the data transfer rate may be more efficient.
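A minimal sketch of the framing overhead, assuming the common 8-N-1 convention (one start bit, eight data bits, one stop bit); the convention and the payload are illustrative assumptions, not something specified in the text above.

```python
def frame_async(byte: int) -> list[int]:
    """Asynchronous framing: start bit (0), eight data bits LSB-first, stop bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]
    return [0] + data + [1]

def stream_sync(payload: bytes) -> list[int]:
    """Synchronous transfer: a continual bit stream with no per-byte start/stop bits
    (clock recovery is assumed to be handled separately, e.g. by an embedded clock)."""
    return [(b >> i) & 1 for b in payload for i in range(8)]

message = b"Hi"
async_bits = [bit for b in message for bit in frame_async(b)]
sync_bits = stream_sync(message)
print(len(async_bits), "bits sent asynchronously vs", len(sync_bits), "synchronously")
# 20 vs 16: the start/stop framing costs two extra bits per byte.
```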
See also
Internetworking
Media (communication)
Network security
Node-to-node data transfer
Transmission (disambiguation)
References
Computer networks engineering
Mass media technology
Telecommunications | Data communication | [
"Technology",
"Engineering"
] | 2,135 | [
"Information and communications technology",
"Mass media technology",
"Computer engineering",
"Computer networks engineering",
"Telecommunications"
] |
42,169 | https://en.wikipedia.org/wiki/Dual%20number | In algebra, the dual numbers are a hypercomplex number system first introduced in the 19th century. They are expressions of the form a + bε, where a and b are real numbers, and ε is a symbol taken to satisfy ε² = 0 with ε ≠ 0.
Dual numbers can be added component-wise, and multiplied by the formula (a + bε)(c + dε) = ac + (ad + bc)ε,
which follows from the property ε² = 0 and the fact that multiplication is a bilinear operation.
The dual numbers form a commutative algebra of dimension two over the reals, and also an Artinian local ring. They are one of the simplest examples of a ring that has nonzero nilpotent elements.
History
Dual numbers were introduced in 1873 by William Clifford, and were used at the beginning of the twentieth century by the German mathematician Eduard Study, who used them to represent the dual angle which measures the relative position of two skew lines in space. Study defined a dual angle as θ + dε, where θ is the angle between the directions of two lines in three-dimensional space and d is a distance between them. The n-dimensional generalization, the Grassmann number, was introduced by Hermann Grassmann in the late 19th century.
Modern definition
In modern algebra, the algebra of dual numbers is often defined as the quotient of a polynomial ring over the real numbers by the principal ideal generated by the square of the indeterminate, that is ℝ[X]/(X²).
It may also be defined as the exterior algebra of a one-dimensional vector space with ε as its basis element.
Division
Division of dual numbers is defined when the real part of the denominator is non-zero. The division process is analogous to complex division in that the denominator is multiplied by its conjugate in order to cancel the non-real parts.
Therefore, to evaluate an expression of the form (a + bε)/(c + dε), we multiply the numerator and denominator by the conjugate of the denominator, c − dε, which gives (a + bε)(c − dε)/c² = (ac + (bc − ad)ε)/c², and this is defined when c is non-zero.
If, on the other hand, c is zero while d is not, then the equation a + bε = (x + yε)(dε) = dx·ε has no solution if a is nonzero, and is otherwise solved by any dual number of the form b/d + yε.
This means that the non-real part of the "quotient" is arbitrary and division is therefore not defined for purely nonreal dual numbers. Indeed, they are (trivially) zero divisors and clearly form an ideal of the associative algebra (and thus ring) of the dual numbers.
Matrix representation
The dual number a + bε can be represented by the square matrix with rows (a, b) and (0, a). In this representation the matrix with rows (0, 1) and (0, 0) squares to the zero matrix, corresponding to the dual number ε.
There are other ways to represent dual numbers as square matrices. They consist of representing the dual number 1 by the identity matrix, and ε by any matrix whose square is the zero matrix; that is, in the case of 2 × 2 matrices, by any nonzero matrix with rows (a, b) and (c, −a) satisfying a² + bc = 0.
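A quick numerical check of the matrix representation described above (a sketch; the helper name is illustrative):

```python
import numpy as np

def dual_matrix(a: float, b: float) -> np.ndarray:
    """Represent a + b*eps as the 2x2 matrix with rows (a, b) and (0, a)."""
    return np.array([[a, b], [0.0, a]])

eps = dual_matrix(0, 1)
print(eps @ eps)           # the zero matrix: eps squared is 0

x = dual_matrix(2, 3)      # 2 + 3*eps
y = dual_matrix(5, 7)      # 5 + 7*eps
print(x @ y)               # [[10, 29], [0, 10]], i.e. 10 + 29*eps,
                           # matching (2 + 3eps)(5 + 7eps) = 10 + (2*7 + 3*5)eps
```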
Differentiation
One application of dual numbers is automatic differentiation. Any polynomial P(x) = p0 + p1·x + ... + pn·x^n with real coefficients can be extended to a function of a dual-number-valued argument by P(a + bε) = P(a) + b·P′(a)·ε, where P′ is the derivative of P.
More generally, any (analytic) real function f can be extended to the dual numbers via its Taylor series: f(a + bε) = f(a) + b·f′(a)·ε, since all terms involving ε² or greater powers of ε are trivially 0 by the definition of ε.
By computing compositions of these functions over the dual numbers and examining the coefficient of ε in the result, we find we have automatically computed the derivative of the composition.
A similar method works for polynomials of n variables, using the exterior algebra of an n-dimensional vector space.
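A minimal forward-mode sketch of this idea in Python (the class and helper names are illustrative, not a reference to any particular automatic-differentiation library):

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    re: float   # real part a
    du: float   # coefficient of eps; carries the derivative along

    def __add__(self, other):
        return Dual(self.re + other.re, self.du + other.du)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.re * other.re, self.re * other.du + self.du * other.re)

def sin(x: Dual) -> Dual:
    # f(a + b*eps) = f(a) + b*f'(a)*eps, per the Taylor-series rule above
    return Dual(math.sin(x.re), x.du * math.cos(x.re))

def derivative(f, a: float) -> float:
    return f(Dual(a, 1.0)).du   # seed the eps coefficient with 1

g = lambda x: sin(x * x)                       # the composition sin(x^2)
print(derivative(g, 2.0), 4 * math.cos(4.0))   # both print 4*cos(4)
```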
Geometry
The "unit circle" of dual numbers consists of those with a = ±1, since these satisfy z·z* = 1 where z* = a − bε. However, note that exp(bε) = 1 + bε, so the exponential map applied to the ε-axis covers only half the "circle".
Let z = a + bε. If a ≠ 0 and m = b/a, then z = a(1 + mε) is the polar decomposition of the dual number z, and the slope m is its angular part. The concept of a rotation in the dual number plane is equivalent to a vertical shear mapping, since (1 + pε)(1 + qε) = 1 + (p + q)ε.
In absolute space and time the Galilean transformation (t′, x′) = (t, x)·M, where M is the matrix with rows (1, v) and (0, 1), that is t′ = t and x′ = vt + x, relates the resting coordinate system to a moving frame of reference of velocity v. With dual numbers t + xε representing events along one space dimension and time, the same transformation is effected with multiplication by 1 + vε.
Cycles
Given two dual numbers p and q, they determine the set of z such that the difference in slopes ("Galilean angle") between the lines from z to p and to q is constant. This set is a cycle in the dual number plane; since the equation setting the difference in slopes of the lines to a constant is a quadratic equation in the real part of z, a cycle is a parabola. The "cyclic rotation" of the dual number plane occurs as a motion of its projective line. According to Isaak Yaglom, the cycle y = αx² is invariant under the composition of the shear that maps (x, y) to (x, y + vx)
with the translation that maps (x, y) to (x + v/(2α), y + v²/(4α)).
Applications in mechanics
Dual numbers find applications in mechanics, notably for kinematic synthesis. For example, the dual numbers make it possible to transform the input/output equations of a four-bar spherical linkage, which includes only rotoid joints, into a four-bar spatial mechanism (rotoid, rotoid, rotoid, cylindrical). The dualized angles are made of a primitive part, the angles, and a dual part, which has units of length. See screw theory for more.
Algebraic geometry
In modern algebraic geometry, the dual numbers over a field k (by which we mean the ring k[ε]/(ε²)) may be used to define the tangent vectors to the points of a k-scheme. Since the field k can be chosen intrinsically, it is possible to speak simply of the tangent vectors to a scheme. This allows notions from differential geometry to be imported into algebraic geometry.
In detail: The ring of dual numbers may be thought of as the ring of functions on the "first-order neighborhood of a point" – namely, the k-scheme Spec(k[ε]/(ε²)). Then, given a k-scheme X, the k-points of the scheme are in 1-1 correspondence with maps Spec k → X, while tangent vectors are in 1-1 correspondence with maps Spec(k[ε]/(ε²)) → X.
The field k above can be chosen intrinsically to be a residue field. To wit: Given a point x on a scheme X, consider the stalk O_{X,x}. Observe that O_{X,x} is a local ring with a unique maximal ideal, which is denoted m_x. Then simply let k = O_{X,x}/m_x.
Generalizations
This construction can be carried out more generally: for a commutative ring R one can define the dual numbers over R as the quotient of the polynomial ring R[X] by the ideal (X²): the image of X then has square equal to zero and corresponds to the element ε from above.
Arbitrary module of elements of zero square
There is a more general construction of the dual numbers. Given a commutative ring R and a module M, there is a ring R[M] called the ring of dual numbers which has the following structures:
It is the R-module R ⊕ M with the multiplication defined by (r, m)·(r′, m′) = (r·r′, r·m′ + r′·m) for r, r′ in R and m, m′ in M.
The algebra of dual numbers is the special case where M = R and ε = (0, 1).
Superspace
Dual numbers find applications in physics, where they constitute one of the simplest non-trivial examples of a superspace. Equivalently, they are supernumbers with just one generator; supernumbers generalize the concept to n distinct generators, each anti-commuting, possibly taking n to infinity. Superspace generalizes supernumbers slightly, by allowing multiple commuting dimensions.
The motivation for introducing dual numbers into physics follows from the Pauli exclusion principle for fermions. The direction along ε is termed the "fermionic" direction, and the real component is termed the "bosonic" direction. The fermionic direction earns this name from the fact that fermions obey the Pauli exclusion principle: under the exchange of coordinates, the quantum mechanical wave function changes sign, and thus vanishes if two coordinates are brought together; this physical idea is captured by the algebraic relation ε² = 0.
Projective line
The idea of a projective line over dual numbers was advanced by Grünwald and Corrado Segre.
Just as the Riemann sphere needs a north pole point at infinity to close up the complex projective line, so a line at infinity succeeds in closing up the plane of dual numbers to a cylinder.
Suppose D is the ring of dual numbers x + yε and U is the subset with x ≠ 0. Then U is the group of units of D. Let B = {(a, b) in D × D : a in U or b in U}. A relation is defined on B as follows: (a, b) ~ (c, d) when there is a u in U such that ua = c and ub = d. This relation is in fact an equivalence relation. The points of the projective line over D are equivalence classes in B under this relation: P(D) = B/~. They are represented with projective coordinates [a, b].
Consider the embedding D → P(D) given by z → [z, 1]. Then points [1, n], for n² = 0, are in P(D) but are not the image of any point under the embedding. P(D) is mapped onto a cylinder by projection: take a cylinder tangent to the dual number plane on the line {yε : y real}, ε² = 0. Now take the opposite line on the cylinder for the axis of a pencil of planes. The planes intersecting the dual number plane and cylinder provide a correspondence of points between these surfaces. The plane parallel to the dual number plane corresponds to points [1, n], n² = 0, in the projective line over dual numbers.
See also
Split-complex number
Smooth infinitesimal analysis
Perturbation theory
Infinitesimal
Screw theory
Dual-complex number
Laguerre transformations
Grassmann number
Automatic differentiation
References
Further reading
From Cornell Historical Mathematical Monographs at Cornell University.
Linear algebra
Hypercomplex numbers
Commutative algebra
Differential algebra
Nonstandard analysis | Dual number | [
"Mathematics"
] | 1,823 | [
"Differential algebra",
"Mathematical structures",
"Mathematical objects",
"Infinity",
"Nonstandard analysis",
"Fields of abstract algebra",
"Mathematics of infinitesimals",
"Algebraic structures",
"Model theory",
"Hypercomplex numbers",
"Linear algebra",
"Commutative algebra",
"Numbers",
... |
42,232 | https://en.wikipedia.org/wiki/Calliope%20%28music%29 | A calliope (see below for pronunciation) is a North American musical instrument that produces sound by sending a gas, originally steam or, more recently, compressed air, through large whistles—originally locomotive whistles.
A calliope is typically very loud. Even some small calliopes are audible for miles. There is no way to vary tone or volume. Musically, the only expression possible is the pitch, rhythm, and duration of the notes.
The steam calliope is also known as a steam organ (orgue à vapeur in Quebec) or steam piano (piano à vapeur in Quebec). The air-driven calliope is sometimes called a calliaphone, the name given to it by Norman Baker, but the "Calliaphone" name is registered by the Miner Company for instruments produced under the Tangley name.
In the age of steam, the steam calliope was particularly used on riverboats and in circuses. In both cases, a steam supply was readily available for other purposes. Riverboats supplied steam from their propulsion boilers. Circus calliopes were sometimes installed in steam-driven carousels, or supplied with steam from a traction engine. The traction engine could also supply electric power for lighting, and tow the calliope in the circus parade, where it traditionally came last. Other circus calliopes were self-contained, mounted on a carved, painted and gilded wagon pulled by horses, but the presence of other steam boilers in the circus meant that fuel and expertise to run the boiler were readily available. Steam instruments often had keyboards made from brass. This was in part to resist the heat and moisture of the steam, but also for the golden shine of the highly polished keys.
Calliopes can be played by a player at a keyboard or mechanically. Mechanical operation may be by a drum similar to a music box drum, or by a roll similar to that of a player piano. Some instruments have both a keyboard and a mechanism for automated operation, others only one or the other. Some calliopes can also be played via a MIDI interface.
The whistles of a calliope are tuned to a chromatic scale, although this process is difficult and must be repeated often to maintain quality sound. Since the pitch of each note is largely affected by the temperature of the steam, accurate tuning is nearly impossible; however, the off-pitch notes (particularly in the upper register) have become something of a trademark of the steam calliope. A calliope may have anywhere from 25 to 67 whistles, but 32 is traditional for a steam calliope.
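The temperature sensitivity can be made concrete with a rough open-pipe model; the whistle length, steam temperatures, and gas constants below are illustrative assumptions, not measurements of any actual instrument.

```python
import math

def speed_of_sound(temp_kelvin: float, gamma: float = 1.33, R: float = 461.5) -> float:
    """Ideal-gas estimate for steam: c = sqrt(gamma * R * T)."""
    return math.sqrt(gamma * R * temp_kelvin)

def open_pipe_fundamental(length_m: float, temp_kelvin: float) -> float:
    """Fundamental pitch of an idealized open pipe: f = c / (2 * L)."""
    return speed_of_sound(temp_kelvin) / (2 * length_m)

length = 0.30                            # an arbitrary whistle length in metres
for temp in (380.0, 420.0):              # two plausible steam temperatures in kelvin
    print(f"{temp:.0f} K -> {open_pipe_fundamental(length, temp):.1f} Hz")
# Pitch scales with the square root of temperature, so ordinary variation in the
# steam supply detunes every whistle, which is the drift described above.
```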
History
Joshua C. Stoddard of Worcester, Massachusetts patented the calliope on October 9, 1855, though his design echoes previous concepts, such as an 1832 instrument called a steam trumpet, later known as a train whistle. In 1851, William Hoyt of Dupont, Indiana claimed to have conceived of a device similar to Stoddard's calliope, but he never patented it. Later, an employee of Stoddard's American Music, Arthur S. Denny, attempted to market an "Improved Kalliope" in Europe, but it did not catch on. In 1859, he demonstrated this instrument in Crystal Palace, London. Unlike other calliopes before or since, Denny's Improved Kalliope let the player control the steam pressure, and therefore the volume of the music, while playing.
While Stoddard originally intended the calliope to replace bells at churches, it found its way onto riverboats during the paddlewheel era. While only a small number of working steamboats still exist, each has a steam calliope. These boats include the Delta Queen, the Belle of Louisville, and President. Their calliopes are played regularly on river excursions. Many surviving calliopes were built by Thomas J. Nichol, Cincinnati, Ohio, who built calliopes from 1890 until 1932. The Thomas J. Nichol calliopes featured rolled sheet copper (as used in roofing) for the resonant tube (the bell) of the whistle, lending a sweeter tone than cast bronze or brass, which were the usual materials for steam whistles of the day. David Morecraft pioneered a resurgence in the building of authentic steam calliopes of the Thomas J. Nichol style beginning in 1985 in Peru, Indiana. These calliopes are featured in Peru's annual Circus City Parade. Morecraft died on December 5, 2016.
Stoddard's original calliope was attached to a metal roller set with pins in the manner familiar to Stoddard from the contemporary clockwork music box. The pins on the roller opened valves that admitted steam into the whistles. Later, Stoddard replaced the cylinder with a keyboard, so that the calliope could be played like an organ.
Starting in the 1900s calliopes began using music rolls instead of a live musician. The music roll operated in a manner similar to a piano roll in a player piano, mechanically operating the keys. Many of these mechanical calliopes retained keyboards, allowing a live musician to play them if needed. During this period, compressed air began to replace steam as the vehicle of producing sound.
Most calliopes disappeared in the mid-20th century, as steam power was replaced with other power sources. Without the demand for technicians that mines and railroads supplied, no support was available to keep boilers running. Only a few calliopes have survived, which, unless converted to a modern power source, are rarely played.
A relatively recently-built calliope is that of Carl Bergman of Aspen, Colorado, which was built in the mid 1970s. The 6 foot tall wood-fired steam boiler was originally used by miners at Independence Pass and requires its owner to maintain a boiler operator's license. The calliope produces 10 notes and takes 8 hours to get ready.
Pronunciation
The pronunciation of the word has long been disputed, and often it is pronounced differently inside and outside the groups that use it. The Greek muse by the same name is pronounced , but the instrument was usually pronounced by people who played it.
A nineteenth-century magazine, Reedy's Mirror, attempted to settle the dispute by publishing this rhyme:
This, in turn, came from a poem by Vachel Lindsay, called "The Kallyope Yell",
in which Lindsay uses both pronunciations.
In the song "Blinded by the Light", written in 1972, Bruce Springsteen used the four-syllable ( ) pronunciation commonly used in the United States when referring to a fairground organ; this was also used by the British Manfred Mann's Earth Band in their 1976 cover.
Related instruments
Pyrophone
The pyrophone is a calliope-like instrument that uses internal combustion within its whistles to power their notes, rather than the calliope's system of friction from steam going through ducts.
At 1998's Burning Man, a pyrophone referred to as Satan's Calliope was powered by ignition of propane inside resonant cavities.
Calliaphone
The calliaphone is a compressed-air powered, easily transported instrument developed by early 20th century American inventor Norman Baker.
Lustre chantant
The lustre chantant (literally "singing chandelier") or musical lamp was invented by Frederik Kastner. It was a large chandelier with glass pipes of varying lengths each illuminated and heated by an individual gas jet. A keyboard allowed the player to turn down individual jets; as the glass tube cooled, a note was produced. Kastner installed several such instruments in Paris.
Popular culture
Henry Mancini used the calliope in his 1961 song "Baby Elephant Walk" for the film Hatari! to suggest the fun of a circus.
The Beatles, in recording "Being for the Benefit of Mr. Kite!" from the 1967 album Sgt. Pepper's Lonely Hearts Club Band, used tapes of calliope music to create the atmosphere of a circus. Beatles producer George Martin recalled, "When we first worked on 'Being for the Benefit of Mr. Kite!' John had said that he wanted to 'smell the sawdust on the floor', wanted to taste the atmosphere of the circus. I said to him, 'What we need is a calliope.' 'A what?' 'Steam whistles, played by a keyboard.'" Unable to find an authentic calliope, Martin resorted to tapes of calliopes playing Sousa marches. "[I] chopped the tapes up into small sections and had Geoff Emerick throw them up into the air, re-assembling them at random."
In the 1972 western film The Great Northfield, Minnesota Raid, which takes place in 1876, a calliope is featured prominently during a scene when Cole Younger (portrayed by Cliff Robertson) and his gang arrive in Northfield, Minnesota. The instrument is referred to by name and Younger shows such interest in it that he attempts to fix it.
In the video game Team Fortress 2, the calliope is prominently used as a quasi-bass line in the #11 track "Haunted Fortress 2" in the game's soundtrack, used mostly during Halloween or Full Moon-timed events.
See also
Fairground organ
Orchestrion
Showman's road locomotive
Notes
References
External links
Mechanical Music Digest: Calliope
"Harmony in Steam"
"Riverboat Calliopes" – includes audio clips of several riverboat calliopes
The calliope of Delta Queen (divX video clip)
"Europe's largest calliope aboard the ss Succes " – (Dutch) includes pictures & audio
Steam calliopes on YouTube (playlist)
Watch the National Film Board of Canada vignette Calliope
: Apparatus for producing music by steam or compressed air.
Kratz Steam Calliope at The Mariners' Museum
Popular Calliope and Steam Organ Videos
How the Belle of Louisville Steam Calliope Works (PDF) Schematic diagram of the Belle of Louisville's calliope
American musical instruments
Canadian musical instruments
Internal fipple flutes
Pipe organ
Steam power
1855 introductions
Circus music
French-American culture
Music of Quebec | Calliope (music) | [
"Physics"
] | 2,074 | [
"Power (physics)",
"Steam power",
"Physical quantities"
] |
42,273 | https://en.wikipedia.org/wiki/Fence | A fence is a structure that encloses an area, typically outdoors, and is usually constructed from posts that are connected by boards, wire, rails or netting. A fence differs from a wall in not having a solid foundation along its whole length.
Alternatives to fencing include a ditch (sometimes filled with water, forming a moat).
Types
By function
Agricultural fencing, to keep livestock in and/or predators out
Blast fence, a safety device that redirects the high energy exhaust from a jet engine
Sound barrier or acoustic fencing, to reduce noise pollution
Crowd control barrier
Privacy fencing, to provide privacy and security
Temporary fencing, to provide safety, security, and to direct movement; wherever temporary access control is required, especially on building and construction sites
Perimeter fencing, to prevent trespassing or theft and/or to keep children and pets from wandering away.
Decorative fencing, to enhance the appearance of a property, garden or other landscaping
Boundary fencing, to demarcate a piece of real property
Newt fencing, amphibian fencing, drift fencing or turtle fence, a low fence of plastic sheeting or similar materials to restrict movement of amphibians or reptiles.
Pest-exclusion fence
Pet fence, an underground fence for pet containment
Pool fence
Snow fence
School fence
A balustrade or railing is a fence to prevent people from falling over an edge, most commonly found on a stairway, landing, or balcony. Railing systems and balustrades are also used along roofs, bridges, cliffs, pits, and bodies of water.
Another purpose of fencing is to deter intrusion into a property by malicious intruders. In support of such barriers, sophisticated technologies can be applied to the fence itself to strengthen the defence of the territory and reduce the risk.
The elements that reinforce the perimeter protection are:
Detectors
Peripheral alarm control unit
Means of deterrence
Means for communicating information remotely
remote alarm receiving unit
By construction
Brushwood fencing, a fence made using wires on either side of brushwood, to compact the brushwood material together.
Chain-link fencing, wire fencing made of wires woven together
Chicane
Close boarded fencing, strong and robust fence constructed from mortised posts, arris rails and vertical feather edge boards
Composite Fencing, made from a mixture of recycled wood and plastic
Expanding fence or trellis, a folding structure made from wood or metal on the scissor-like pantograph principle, sometimes only as a temporary barrier
Ha-ha (or sunken fence)
Hedge, including:
Cactus fence
Hedgerows of intertwined, living shrubs (constructed by hedge laying)
Live fencing is the use of live woody species for fences
Turf mounds in semiarid grasslands such as the western United States or Russian steppes
Hurdle fencing, made from moveable sections
Pale fence, or "post-and-rail" fence, composed of pales - vertical posts embedded in the ground, with their exposed end typically tapered to shed water and prevent rot from moisture entering end-grain wood - joined by horizontal rails, characteristically in two or three courses.
Palisade, or stakewall, made of vertical pales placed side by side with one end embedded in the ground and the other typically sharpened, to provide protection; characteristically two courses of waler are added on the interior side to reinforce the wall.
Picket fences, generally a waist-high, painted, partially decorative fence
Roundpole fences, similar to post-and-rail fencing but more closely spaced rails, typical of Scandinavia and other areas rich in raw timber.
Slate fence, a type of palisade made of vertical slabs of slate wired together. Commonly used in parts of Wales.
Split-rail fence, made of timber, often laid in a zig-zag pattern, particularly in newly settled parts of the United States and Canada
Vaccary fence (named from Latin vacca, cow), for restraining cattle, made of thin slabs of stone placed upright, found in various places in the north of the UK where suitable stone is available.
Vinyl fencing
Solid fences, including:
Dry-stone wall or rock fence, often agricultural
Stockade fence, a solid fence composed of contiguous or very closely spaced round or half-round posts, or stakes, typically pointed at the top. A scaled down version of a palisade wall made of logs, most commonly used for privacy.
Wattle fencing, of split branches woven between stakes.
Wire fences
Smooth wire fence
Barbed wire fence
Electric fence
Woven wire fencing, many designs, from fine chicken wire to heavy mesh "sheep fence" or "ring fence"
Welded wire mesh fence
Wood-panel fencing, whereby finished wood planks are arranged to make large solid panels, which are then suspended between posts, making an almost completely solid wall-like barrier. Usually as a decorative perimeter.
Wrought iron fencing, also known as ornamental iron
Legal issues
In most developed areas the use of fencing is regulated, variously in commercial, residential, and agricultural areas. Height, material, setback, and aesthetic issues are among the considerations subject to regulation.
Required use
The following types of areas or facilities often are required by law to be fenced in, for safety and security reasons:
Facilities with open high-voltage equipment (transformer stations, mast radiators). Transformer stations are usually surrounded with barbed-wire fences. Around mast radiators, wooden fences are used to avoid the problem of eddy currents.
Railway lines (in the United Kingdom)
Fixed machinery with dangerous moving parts (for example, merry-go-rounds at amusement parks)
Explosive factories and quarry stores
Most industrial plants
Airfields and airports
Military areas
Prisons
Construction sites
Zoos and wildlife parks
Pastures containing male breeding animals, notably bulls and stallions.
Open-air areas that charge an entry fee
Amusement equipment which may pose danger for passers-by
Swimming pools and spas
History
Servitudes are legal arrangements of land use arising out of private agreements. Under the feudal system, most land in England was cultivated in common fields, where peasants were allocated strips of arable land that were used to support the needs of the local village or manor. By the sixteenth century the growth of population and prosperity provided incentives for landowners to use their land in more profitable ways, dispossessing the peasantry. Common fields were aggregated and enclosed by large and enterprising farmers—either through negotiation among one another or by lease from the landlord—to maximize the productivity of the available land and contain livestock. Fences redefined the means by which land is used, resulting in the modern law of servitudes.
In the United States, the earliest settlers claimed land by simply fencing it in. Later, as the American government formed, unsettled land became technically owned by the government, and programs to register land ownership developed, usually making raw land available for low prices or for free if the owner improved the property, including the construction of fences. However, the remaining vast tracts of unsettled land were often used as a commons or, in the American West, as "open range". As habitat degradation from overgrazing developed and a tragedy-of-the-commons situation arose, common areas began either to be allocated to individual landowners via mechanisms such as the Homestead Act and Desert Land Act and fenced in, or, if kept in public hands, leased to individual users for limited purposes, with fences built to separate tracts of public and private land.
United Kingdom
Generally
Ownership of a fence on a boundary varies. The last relevant original title deed(s) and a completed seller's property information form may document which side has to put up, and which side has installed, any fence: the former using "T" marks/symbols (the side bearing the "T" denotes the owner), the latter by a ticked box completed to the best of the last owner's belief, with no duty, as the conventionally agreed conveyancing process stresses, to make any detailed or protracted enquiry. Commonly the mesh or panelling is in mid-position. Otherwise it tends to be on the non-owner's side, so that the fence owner can access the posts when repairs are needed, but this is not a legal requirement. Where estate planners wish to entrench privacy, a close-boarded fence or an equivalent well-maintained hedge of a minimum height may be stipulated by deed. Beyond a standard height, planning permission is necessary.
The hedge and ditch ownership presumption
Where a rural fence or hedge has (or in some cases had) an adjacent ditch, the ditch is normally in the same ownership as the hedge or fence, with the ownership boundary being the edge of the ditch furthest from the fence or hedge. The principle of this rule is that an owner digging a boundary ditch will normally dig it up to the very edge of their land, and must then pile the spoil on their own side of the ditch to avoid trespassing on their neighbour. They may then erect a fence or hedge on the spoil, leaving the ditch on its far side. Exceptions exist in law, for example where a plot of land derives from subdivision of a larger one along the centre line of a previously existing ditch or other feature. This is particularly so where the subdivision is reinforced by historic parcel numbers with acreages beneath them, which were used to tally up a total for administrative units rather than to confirm the actual size of holdings; this is a rare instance where Ordnance Survey maps often provide more than circumstantial evidence, namely as to which feature is to be considered the boundary.
Fencing of livestock
On private land in the United Kingdom, it is the landowner's responsibility to fence their livestock in. Conversely, for common land, it is the surrounding landowners' duty to fence the common's livestock out such as in large parts of the New Forest. Large commons with livestock roaming have been greatly reduced by 18th and 19th century Acts for enclosure of commons covering most local units, with most remaining such land in the UK's National Parks.
Fencing of railways
A 19th-century law requires railways to be fenced to keep people and livestock out. It is also illegal to trespass on railways, incurring a fine of up to £1000.
United States
Distinctly different land ownership and fencing patterns arose in the eastern and western United States. Original fence laws on the east coast were based on the British common law system, and rapidly increasing population quickly resulted in laws requiring livestock to be fenced in. In the west, land ownership patterns and policies reflected a strong influence of Spanish law and tradition, plus the vast land area involved made extensive fencing impractical until mandated by a growing population and conflicts between landowners. The "open range" tradition of requiring landowners to fence out unwanted livestock was dominant in most of the rural west until very late in the 20th century, and even today, a few isolated regions of the west still have open range statutes on the books. More recently, fences are generally constructed on the surveyed property line as precisely as possible. Today, across the nation, each state is free to develop its own laws regarding fences. In many cases for both rural and urban property owners, the laws were designed to require adjacent landowners to share the responsibility for maintaining a common boundary fenceline. Today, however, only 22 states have retained that provision.
Some U.S. states, including Texas, Illinois, Missouri, and North Carolina, have enacted laws establishing that purple paint markings on fences (or trees) are the legal equivalent of "No Trespassing" signs. The laws are meant to spare landowners, particularly in rural areas, from having to continually replace printed signs that often end up being stolen or obliterated by the elements.
Cultural value of fences
The value of fences and the metaphorical significance of a fence, both positive and negative, has been extensively utilized throughout western culture. A few examples include:
"Good fences make good neighbors." – a proverb quoted by Robert Frost in the poem "Mending Wall"
"A good neighbor is a fellow who smiles at you over the back fence, but doesn't climb over it." – Arthur Baer
"There is something about jumping a horse over a fence, something that makes you feel good. Perhaps it's the risk, the gamble. In any event it's a thing I need." – William Faulkner
"Fear is the highest fence." – Dudley Nichols
"To be fenced in is to be withheld." – Kurt Tippett
"What have they done to the earth? / What have they done to our fair sister? / Ravaged and plundered / and ripped her / and bit her / stuck her with knives / in the side of the dawn / and tied her with fences / and dragged her down." – Jim Morrison, of The Doors
"Don't Fence Me In" – Cole Porter
"You shall build a turtle fence." – Peter Hoekstra
"A woman's dress should be like a barbed-wire fence: serving its purpose without obstructing the view." – Sophia Loren
See also
Agricultural fencing
Electric fence
Wire obstacle
Temporary fencing
Post pounder
Synthetic fence
Pool fence
Separation barrier
Border barrier
Brushwood fencing
Fencing (computing)
Zariba
Metal Fencing
References
Notes
Bibliography
Encyclopædia Britannica (1982). Vol IV, Fence.
Elizabeth Agate: Fencing, British Trust for Conservation Volunteers,
External links
Engineering barrages
Perimeter security
Buildings and structures by type | Fence | [
"Engineering"
] | 2,685 | [
"Military engineering",
"Engineering barrages",
"Buildings and structures by type",
"Architecture"
] |
42,274 | https://en.wikipedia.org/wiki/Barbed%20wire | Barbed wire, also known as barb wire, is a type of steel fencing wire constructed with sharp edges or points arranged at intervals along the strands. Its primary use is the construction of inexpensive fences, and it is also used as a security measure atop walls surrounding property. As a wire obstacle, it is a major feature of the fortifications in trench warfare.
A person or animal trying to pass through or over barbed wire will suffer discomfort and possibly injury. Barbed wire fencing requires only fence posts, wire, and fixing devices such as staples. It is simple to construct and quick to erect, even by an unskilled person.
The first patent in the United States for barbed wire was issued in 1867 to Lucien B. Smith of Kent, Ohio, who is regarded as the inventor. Joseph F. Glidden of DeKalb, Illinois, received a patent for the modern invention in 1874 after he made his own modifications to previous versions.
Wire fences are cheaper and easier to erect than their alternatives (one such alternative is Osage orange, a thorny bush that is time-consuming to transplant and grow). When wire fences became widely available in the United States in the late 19th century, it became more affordable to fence much larger areas than before, and intensive animal husbandry was made practical on a much larger scale.
An example of the costs of fencing with lumber immediately prior to the invention of barbed wire can be found with the first farmers in the Fresno, California, area, who spent nearly $4,000 to have wood for fencing delivered and erected to protect a wheat crop from free-ranging livestock in 1872.
Design
Materials
Zinc-coated steel wire. Galvanized steel wire is the most widely used steel wire in barbed wire production. It is available in commercial, Class 1 and Class 3 grades, and is produced either as electro-galvanized or as hot-dip galvanized steel wire.
Zinc-aluminum alloy coated steel wire. Barbed wire is also available in steel wire coated with a zinc-aluminum alloy (5% or 10% aluminum with mischmetal), also known as Galfan wire.
Polymer-coated steel wire. Zinc steel wire or zinc-aluminum steel wire with PVC, PE or other organic polymer coating.
Stainless steel wire. It is available with SAE 304, 316 and other materials.
Strand structure
Single strand. Simple and light duty structure with single line wire (also known as strand wire) and barbs.
Double strand. Conventional structure with double strand wire (line wire) and barbs.
Barb structure
Single barb. Also known as 2-point barbed wire. It uses single barb wire twisted on the line wire (strand wire).
Double barb. Also known as 4-point barbed wire. Two barb wires twisted on the line wire (strand wire).
Twist type
Conventional twist. The strand wires (line wires) are twisted in a single direction, which is also known as the traditional twist. The barb wires are twisted between the two strand wires (line wires).
Reverse twist. The strand wires (line wires) are twisted in opposite directions. The barb wires are twisted around the outside of the two line wires.
History
Before 1865
Fencing consisting of flat and thin wire was first proposed in France, by Leonce Eugene Grassin-Baledans in 1860. His design consisted of bristling points, creating a fence that was painful to cross. In April 1865 Louis François Janin proposed a double wire with diamond-shaped metal barbs; Francois was granted a patent. Michael Kelly from New York had a similar idea, and proposed that the fencing should be used specifically for deterring animals.
More patents followed, and in 1867 alone there were six patents issued for barbed wire. Only two of them addressed livestock deterrence, one of which was from American Lucien B. Smith of Ohio. Before 1870, westward movement in the United States was largely across the plains with little or no settlement occurring. After the American Civil War the plains were extensively settled, consolidating America's dominance over them.
Ranchers moved out on the plains, and needed to fence their land in against encroaching farmers and other ranchers. The railroads throughout the growing West needed to keep livestock off their tracks, and farmers needed to keep stray cattle from trampling their crops. Traditional fence materials used in the Eastern U.S., like wood and stone, were expensive to use in the large open spaces of the plains, and hedging was not reliable in the rocky, clay-based and rain-starved dusty soils. A cost-effective alternative was needed to make cattle operations profitable.
1873 meeting and initial development
The "Big Four" in barbed wire were Joseph Glidden, Jacob Haish, Charles Francis Washburn, and Isaac L. Ellwood. Glidden, a farmer in 1873 and the first of the "Big Four", is often credited for designing a successful sturdy barbed wire product, but he let others popularize it for him. Glidden's idea came from a display at a fair in DeKalb, Illinois in 1873, by Henry B. Rose. Rose had patented "The Wooden Strip with Metallic Points" in May 1873.
This was simply a wooden block with wire protrusions designed to keep cows from breaching the fence. That day, Glidden was accompanied by two other men, Isaac L. Ellwood, a hardware dealer and Jacob Haish, a lumber merchant. Like Glidden, they both wanted to create a more durable wire fence with fixed barbs. Glidden experimented with a grindstone to twist two wires together to hold the barbs on the wire in place. The barbs were created from experiments with a coffee mill from his home.
Later Glidden was joined by Ellwood who knew his design could not compete with Glidden's for which he applied for a patent in October 1873. Meanwhile, Haish, who had already secured several patents for barbed wire design, applied for a patent on his third type of wire, the S barb, and accused Glidden of interference, deferring Glidden's approval for his patented wire, nicknamed "The Winner", until November 24, 1874.
Barbed wire production greatly increased with Glidden and Ellwood's establishment of the Barb Fence Company in DeKalb following the success of "The Winner". The company's success attracted the attention of Charles Francis Washburn, Vice President of Washburn & Moen Manufacturing Company, an important producer of plain wire in the Eastern U.S. Washburn visited DeKalb and convinced Glidden to sell his stake in the Barb Wire Fence Company, while Ellwood stayed in DeKalb and renamed the company I.L Ellwood & Company of DeKalb.
Promotion and consolidation
In the late 1870s, John Warne Gates of Illinois began to promote barbed wire, now a proven product, in the lucrative markets of Texas. At first, Texans were hesitant, as they feared that cattle might be harmed, or that the North was somehow trying to make profits from the South. There was also conflict between the farmers who wanted fencing and the ranchers who were losing the open range.
Demonstrations by Gates in San Antonio in 1876 showed that the wire could keep cattle contained, and sales then increased dramatically. Gates eventually parted company with Ellwood and became a barbed wire baron in his own right. Throughout the height of barbed wire sales in the late 19th century, Washburn, Ellwood, Gates, and Haish competed with one another. Ellwood and Gates eventually joined forces again to create the American Steel and Wire Company, later acquired by The U.S. Steel Corporation.
Between 1873 and 1899 there were as many as 150 companies manufacturing barbed wire. Investors knew that the business required minimal capital, and almost anyone with determination could profit by manufacturing a new wire design. There was then a sharp decline in the number of manufacturers, and many were consolidated into larger companies, notably the American Steel and Wire Company, formed by the merging of Gates's and Washburn's and Ellwood's industries.
Smaller companies were decimated because of economies of scale and the smaller pool of consumers available to them, compared to the larger corporations. The American Steel and Wire Company established in 1899 employed vertical integration: it controlled all aspects of production, from producing the steel rods to making many different wire and nail products from that steel. It later became part of U.S. Steel, and barbed wire remained a major source of revenue.
In the American West
Barbed wire was important in protecting range rights in the Western U.S. Although some ranchers put notices in newspapers claiming land areas, and joined stockgrowers associations to help enforce their claims, livestock continued to cross range boundaries. Fences of smooth wire did not hold stock well, and hedges were difficult to grow and maintain. Barbed wire's introduction in the West in the 1870s dramatically reduced the cost of enclosing land.
One fan wrote the inventor Joseph Glidden:
it takes no room, exhausts no soil, shades no vegetation, is proof against high winds, makes no snowdrifts, and is both durable and cheap.
Barbed wire emerged as a major source of conflict with the so-called "Big Die Up" incident in the 1880s. This occurred because of the instinctual migrations of cattle away from the blizzard conditions of the Northern Plains to the warmer and plentiful Southern Plains, but by the early 1880s this area was already divided and claimed by ranchers. The ranchers in place, especially in the Texas Panhandle, knew that their holdings could not support the grazing of additional cattle, so the only alternative was to block the migrations with barb wire fencing.
Many of the herds were decimated in the winter of 1885, with some losing as many as three-quarters of all animals when they could not find a way around the fence. Later other smaller scale cattlemen, especially in central Texas, opposed the closing of the open range, and began cutting fences to allow cattle to pass through to find grazing land. In this transition zone between the agricultural regions to the south and the rangeland to the north, conflict erupted, with vigilantes joining the scene causing chaos and even death. The Fence Cutting Wars ended with the passage of a Texas law in 1884 that made fence cutting a felony. Other states followed, although conflicts occurred through the early years of the 20th century. An 1885 federal law forbade placing such fences across the public domain.
Barbed wire is cited by historians as the invention that tamed the West. Herding large numbers of cattle on open range required significant manpower to catch strays. Barbed wire provided an inexpensive method to control the movement of cattle. By the beginning of the 20th century, large numbers of cowboys were unnecessary.
In the Southwest United States
John Warne Gates demonstrated barbed wire for Washburn and Moen in Military Plaza, San Antonio, Texas in 1876. The demonstration showing cattle restrained by the new kind of fencing was followed immediately by invitations to the Menger Hotel to place orders. Gates subsequently had a falling out with Washburn and Moen and Isaac Ellwood. He moved to St. Louis and founded the Southern Wire Company, which became the largest manufacturer of unlicensed or "bootleg" barbed wire.
An 1880 US District Court decision upheld the validity of the Glidden patent, effectively establishing a monopoly. This decision was affirmed by the US Supreme Court in 1892. In 1898 Gates took control of Washburn and Moen, and created the American Steel and Wire monopoly, which became a part of the United States Steel Corporation.
This led to disputes known as the range wars between open range ranchers and farmers in the late 19th century. These were similar to the disputes which resulted from enclosure laws in England in the early 18th century. These disputes were decisively settled in favor of the farmers, and heavy penalties were instituted for cutting a barbed wire fence. Within 2 years, nearly all of the open range had been fenced in under private ownership. For this reason, some historians have dated the end of the Old West era of American history to the invention and subsequent proliferation of barbed wire.
Installation
The most important and most time-consuming part of a barbed wire fence is constructing the corner post and the bracing assembly. A barbed wire fence is under tremendous tension, often up to half a ton, and so the corner post's sole function is to resist the tension of the fence spans connected to it. The bracing keeps the corner post vertical and prevents slack from developing in the fence.
Brace posts are placed in-line about from the corner post. A horizontal compression brace connects the top of the two posts, and a diagonal wire connects the top of the brace post to the bottom of the corner post. This diagonal wire prevents the brace post from leaning, which in turn allows the horizontal brace to prevent the corner post from leaning into the brace post. A second set of brace posts (forming a double brace) is used whenever the barbed wire span exceeds .
When the barbed wire span exceeds , a braced line assembly is added in-line. This has the function of a corner post and brace assembly but handles tension from opposite sides. It uses diagonal brace wire that connects the tops to the bottoms of all adjacent posts.
Line posts are installed along the span of the fence at intervals of . An interval of is most common. Heavy livestock and crowded pasture demands the smaller spacing. The sole function of a line post is not to take up slack but to keep the barbed wire strands spaced equally and off the ground.
Once these posts and bracing have been erected, the wire is wrapped around one corner post, held with a hitch (a timber hitch works well for this) often using a staple to hold the height and then reeled out along the span of the fence replacing the roll every 400 m. It is then wrapped around the opposite corner post, pulled tightly with wire stretchers, and sometimes nailed with more fence staples, although this may make readjustment of tension or replacement of the wire more difficult. Then it is attached to all of the line posts with fencing staples driven in partially to allow stretching of the wire.
There are several ways to anchor the wire to a corner post:
Hand-knotting. The wire is wrapped around the corner post and knotted by hand. This is the most common method of attaching wire to a corner post. A timber hitch works well as it stays better with wire than with rope.
Crimp sleeves. The wire is wrapped around the corner post and bound to the incoming wire using metal sleeves which are crimped using lock cutters. This method should be avoided because while sleeves can work well on repairs in the middle of the fence where there is not enough wire for hand knotting, they tend to slip when under tension.
Wire vise. The wire is passed through a hole drilled into the corner post and is anchored on the far side.
Wire wrap. The wire is wrapped around the corner post and wrapped onto a special, gritted helical wire which also wraps around the incoming wire, with friction holding it in place.
Barbed wire for agricultural use is typically double-strand, zinc-coated (galvanized) steel and comes in rolls of a standard length. Barbed wire is usually placed on the inner (pasture) side of the posts. Where a fence runs between two pastures with livestock on both sides, the wire may be placed on both sides of the posts or on the outside of the fence.
Galvanized wire is classified into three categories: Classes I, II, and III. Class I has the thinnest coating and the shortest life expectancy. A wire with Class I coating will start showing general rusting in 8 to 10 years, while the same wire with Class III coating will show rust in 15 to 20 years. Aluminum-coated wire is occasionally used, and yields a longer life.
Corner posts are in diameter or larger, and a minimum in length may consist of treated wood or from durable on-site trees such as osage orange, black locust, red cedar, or red mulberry, also railroad ties, telephone, and power poles are salvaged to be used as corner posts (poles and railroad ties were often treated with chemicals determined to be an environmental hazard and cannot be reused in some jurisdictions). In Canada spruce posts are sold for this purpose. Posts are in diameter driven at least and may be anchored in a concrete base square and deep. Iron posts, if used, are a minimum in diameter. Bracing wire is typically smooth 9-gauge. Line posts are set to a depth of about . Conversely, steel posts are not as stiff as wood, and wires are fastened with slips along fixed teeth, which means variations in driving height affect wire spacing.
During the First World War, screw pickets were used for the installation of wire obstacles; these were metal rods with eyelets for holding strands of wire, and a corkscrew-like end that could literally be screwed into the ground rather than hammered, so that wiring parties could work at night near enemy soldiers and not reveal their position by the sound of hammers.
Gates
As with any fence, barbed wire fences require gates to allow the passage of persons, vehicles and farm implements. Gates vary in width from to allow the passage of vehicles and tractors, to on farm land to pass combines and swathers.
One style of gate is called the Hampshire gate in the UK, a New Zealand gate in some areas, and often simply a "gate" elsewhere. Made of wire with posts attached at both ends and in the middle, it is permanently wired on one side and attaches to a gate post with wire loops on the other. Most designs can be opened by hand, though some gates that are frequently opened and closed may have a lever attached to assist in bringing the upper wire loop over the gate post.
Gates for cattle tend to have four wires when along a three wire fence, as cattle tend to put more stress on gates, particularly on corner gates. The fence on each side of the gate ends with two corner posts braced or unbraced depending on the size of the post. An unpounded post (often an old broken post) is held to one corner post with wire rings which act as hinges. On the other end a full-length post, the tractor post, is placed with the pointed end upwards with a ring on the bottom stapled to the other corner post, the latch post, and on top a ring is stapled to the tractor post, tied with a Stockgrower's Lash or one of numerous other opening bindings. Wires are then tied around the post at one end then run to the other end where they are stretched by hand or with a stretcher, before posts are stapled on every . Often this type of gate is called a portagee fence or a portagee gate in various ranching communities of coastal Central California.
Most gates can be opened by push post. The chain is then wrapped around the tractor post and pulled onto the nail, stronger people can pull the gate tighter but anyone can jar off the chain to open the gate.
Uses
Agriculture
Barbed wire fences remain the standard fencing technology for enclosing cattle in most regions of the United States, but not all countries. The wire is aligned under tension between heavy, braced, fence posts (strainer posts) and then held at the correct height by being attached to wooden or steel fence posts, and/or with battens in between.
The gaps between posts vary depending on type and terrain. On short fences in hilly country, steel posts may be placed every , while in flat terrain with long spans and relatively few stock they may be spaced up to apart. Wooden posts are normally spaced at on all terrain, with 4 or 5 battens in between. However, many farmers place posts apart as battens can bend, causing wires to close in on one another.
Barbed wire for agricultural fencing is typically available in two varieties: soft or mild-steel wire and high-tensile. Both types are galvanized for longevity. High-tensile wire is made with thinner but higher-strength steel. Its greater strength makes fences longer lasting because it resists stretching and loosening better, coping with expansion and contraction caused by heat and animal pressure by stretching and relaxing within wider elastic limits. It also supports longer spans, but because of its elastic (springy) nature, it is harder to handle and somewhat dangerous for inexperienced fencers. Soft wire is much easier to work but is less durable and only suitable for short spans such as repairs and gates, where it is less likely to tangle.
In high soil-fertility areas where dairy cattle are used in great numbers, 5- or 7-wire fences are common as the main boundary and internal dividing fences. On sheep farms 7-wire fences are common with the second (from bottom) to fifth wire being plain wire. In New Zealand wire fences must provide passage for dogs since they are the main means of controlling and driving animals on farms.
Around the turn of the 20th century, in some rural areas, barbed wire fences were used for local telephone networks.
Warfare and law enforcement
Barbed wire was used for the first time by Portuguese troops defending against African tribes during the Combat of Magul in 1895. Less well known is its extensive usage in the Russo-Japanese War.
In 1899 barbed wire was also extensively used in the Boer War, where it played a strategic role bringing spaces under control, at military outposts as well as to hold the captured Boer population in concentration camps.
The government of the United States built its first international border fence from 1909 to 1911 along the California-Mexico border. It included barbed wire and was intended to keep cattle from moving between the two countries. In 1924, the United States created its border patrol, which built more barbed wire fences on the Mexican border; this time to prevent people from crossing.
More significantly, barbed wire was used extensively by all participating combatants in World War I to prevent movement, with deadly consequences. Barbed wire entanglements were placed in front of trenches to prevent direct charges on men below, increasingly leading to greater use of more advanced weapons such as high-powered machine guns and grenades. A feature of these entanglements was that the barbs were much closer together, often forming a continuous sequence.
Barbed wire could be exposed to heavy bombardments because it could be easily replaced, and its structure included so much open space that machine guns rarely destroyed enough of it to defeat its purpose. However, barbed wire was defeated by the tank in 1916, as shown by the Allied breakthrough at Amiens through German lines on August 8, 1918.
One British writer described how the Germans used barbed wire as follows: The enemy wire was always deep, thick, and securely staked with iron supports, which were either crossed like the letter X, or upright, with loops to take the wire and shaped at one end like corkscrews so as to screw into the ground. The wire stood on these supports on a thick web, about four feet high and from thirty to forty feet across. The wire used was generally as thick as sailor's marline stuff, or two twisted rope yarns. It contained, as a rule, some sixteen barbs to the foot. The wire used in front of our lines was generally galvanized, and remained grey after months of exposure. The (German) wire, not being galvanized, rusted to a black color, and shows up black at a great distance.
During the Great Depression, migratory work camps in the United States used barbed wire.
In Europe in the 1930s and 1940s, the Nazis used barbed wire in concentration camp and extermination camp architecture, where it usually surrounded the camp and was electrified to prevent escape. Barbed wire served the purpose of keeping prisoners contained.
Infirmaries in extermination camps like Auschwitz where prisoners were gassed or experimented on were often separated from other areas by electrified wire and were often braided with branches to prevent outsiders from knowing what was concealed behind their walls.
During the United States' World War II Internment of Japanese Americans, barbed wire was used to enclose the concentration camps, such as Manzanar.
During the 1968 Chicago riots, barbed wire was attached to the fronts of police and National Guard vehicles. The vehicles were used to drive into protesters and rioters and were nicknamed "Daly dozers" after then-Chicago mayor Richard J. Daley.
Safety and injuries
Most barbed wire fences, while sufficient to discourage cattle, are passable by humans who can simply climb over or through the fence by stretching the gaps between the wires using non-barbed sections of the wire as handholds. To prevent humans crossing, many prisons, and other high-security installations construct fences with razor wire, a variant which replaces the barbs with near-continuous cutting surfaces sufficient to injure unprotected persons who climb on it. Both razor wire and barbed wire can be bypassed with protection, such as a thick carpet, or with the use of wire cutters.
A commonly seen alternative is the placement of a few strands of barbed wire at the top of a chain link fence. The limited mobility of someone climbing a fence makes passing conventional barbed wire more difficult. On some chain link fences, these strands are attached to a bracket tilted 45 degrees towards the intruder, further increasing the difficulty.
Barbed wire began to be widely used as an implement of war during World War I. Wire was placed either to impede or halt the passage of soldiers, or to channel them into narrow defiles in which small arms, particularly machine guns, and indirect fire could be used with greater effect as they attempted to pass. Artillery bombardments on the Western Front became increasingly aimed at cutting the barbed wire that was a major component of trench warfare, particularly once new "wire-cutting" fuzes were introduced midway through the war.
As the war progressed, the wire was used in shorter lengths that were easier to transport and more difficult to cut with artillery. Other inventions were also a result of the war, such as the screw picket, which enabled construction of wire obstacles to be done at night in No Man's Land without the necessity of hammering stakes into the ground and drawing attention from the enemy.
During the Soviet–Afghan War, the accommodation of Afghan refugees into Pakistan was controlled in Pakistan's largest province, Balochistan, under General Rahimuddin Khan, by making the refugees stay for controlled durations in barbed wire camps (see Controlling Soviet–Afghan war refugees).
The frequent use of barbed wire on prison walls, around concentration camps, and the like has made it symbolic of oppression and the denial of freedom in general. For example, in Germany, the totality of East Germany's border regime is commonly referred to with the short phrase "Mauer und Stacheldraht" (that is, "wall and barbed wire"), and Amnesty International features barbed wire in its symbol.
Movement against barbed wire can result in moderate to severe injuries to the skin and, depending on body area and barbed wire configuration, possibly to the underlying tissue. Humans can manage not to injure themselves excessively when dealing with barbed wire as long as they are cautious. Restriction of movement, appropriate clothing, and slow movement when close to barbed wire aid in reducing injury.
Infantrymen are often trained and inured to the injuries caused by barbed wire. Several soldiers can lie across the wire to form a bridge for the rest of the formation to pass over; often any injury thus incurred is due to the tread of those passing over and not to the wire itself.
Injuries caused by barbed wire are typically seen in horses, bats, or birds. Horses panic easily, and once caught in barbed wire, large patches of skin may be torn off. At best, such injuries may heal, but they may cause disability or death (particularly due to infection). Birds or bats may not be able to perceive thin strands of barbed wire and suffer injuries.
For this reason, horse fences may have rubber bands nailed parallel to the wires.
More than 60 different species of wildlife have been reported in Australia as victims of entanglement on barbed wire fences, and the wildlife friendly fencing project is beginning to address this problem.
Grazing animals with slow movements that will back off at the first notion of pain (e.g., sheep and cows) will not generally suffer the severe injuries often seen in other animals.
Barbed wire has been reported as a tool for human torture. It is also frequently used as a weapon in hardcore professional wrestling matches, often as a covering for another type of weapon—Mick Foley was infamous for using a baseball bat wrapped in barbed wire—and infrequently as a covering of or substitute for the ring ropes.
Because of the risk of injuries, in 2010 Norway prohibited making new fences with barbed wire for limiting the movement of animals. Electric fences are used instead. Consequently, automotive brands such as Bentley and Rolls-Royce Motor Cars use Norwegian (and other Northern European) hides to produce the leather interiors in their cars, since hides from Norwegian cattle have fewer scratches than hides from countries where barbed wire is used.
See also
Bangalore torpedo
Barbed Wire Act 1893
Concertina wire
Isaac L. Ellwood
Jacob Haish
Kansas Barbed Wire Museum
Razor wire
Wire obstacle
Notes
References and further reading
Bennett, Lyn Ellen, and Scott Abbott. The Perfect Fence: Untangling the Meanings of Barbed Wire (Texas A&M University Press, 2017).
, LoC:65-11234
Biography of John W. Gates, barbed wire promoter who monopolized the industry with the American Steel and Wire Company, accessed March 29, 2006
External links
Website of the Devils Rope Museum in McLean, Texas
The Kansas Barbed Wire Museum in La Crosse, Kansas is the only museum in the world dedicated solely to barbed wire and the history of fencing.
Krell, Alan: Barbed Wire, in: 1914-1918-online. International Encyclopedia of the First World War.
Wire Fence and the Different Styles They Come In
Development and Rise of Barbed Wire at University of Virginia accessed March 29, 2006
Barbed Wire Fencing - Its Rise and Influence also at UVA, from Agricultural History, Volume 13, October 1939, accessed September 20, 2006
Glidden's patent for barbed wire accessed March 29, 2006
Antique Barbed Wire Society accessed September 21, 2006
Barbed Wire in Texas
Barbed wire changes life on the American Great Plains
The History of Barbed Wire About.com
The Wildlife Friendly Fencing project
Papers, 1878-1938, of Texas rancher and co-inventor Isaac L. Ellwood in Southwest Collection/Special Collections Library at Texas Tech University
accessed September 21, 2006
– Lucien Smith, Kent, Ohio, Wire fence – "rotary spools with projecting spurs" (June 1867)
– William Hunt, Scott, New York, Improvement in Fences – "sharpened spur wheels" (July 1867)
– Michael Kelly, New York City (!), Improvement in Fences – "thorny fence" (1868)
– Joshua Rappleye, Seneca County, New York, Improvement in Constructing Wire fence – tensioner for fence with palings (pickets) (1871)
– Henry Rose, DeKalb County, Illinois, Improvement in Wire-fences – "strips provided with metal points" (1873)
– Isaac Ellwood, DeKalb, Illinois Improvement in Barbed Fences – "single piece of metal with four points, attached to a flat rail" (February, 1874)
– Joseph Glidden, DeKalb, Illinois, Improvement in Wire-fences – twisted fence wires with short spur coiled around one of the strands (November, 1874) This became the most popular patent.
– Jacob Haish, DeKalb, Illinois, Improvement in Wire-fence Barbs – "single piece of wire bent into the form of the letter S" so that both strands are clasped (1875)
– John Nelson, Creston, Illinois, Improvement in Wire-fence Barbs – barb installable on existing fence wire, (1876)
Engineering barrages
American inventions
Fences
Wire
Area denial weapons
Steel objects | Barbed wire | [
"Engineering"
] | 6,532 | [
"Area denial weapons",
"Military engineering",
"Engineering barrages"
] |
42,301 | https://en.wikipedia.org/wiki/Arity | In logic, mathematics, and computer science, arity is the number of arguments or operands taken by a function, operation or relation. In mathematics, arity may also be called rank, but this word can have many other meanings. In logic and philosophy, arity may also be called adicity and degree. In linguistics, it is usually named valency.
Examples
In general, functions or operators with a given arity follow the naming conventions of n-based numeral systems, such as binary and hexadecimal. A Latin prefix is combined with the -ary suffix. For example:
A nullary function takes no arguments.
A unary function takes one argument.
A binary function takes two arguments.
A ternary function takes three arguments.
An n-ary function takes n arguments (one of each arity is sketched in the code below).
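As an illustration (a minimal Python sketch; the function bodies are my own, not taken from the article), one function of each of the arities named above:

```python
def nullary():             # arity 0: no arguments
    return 42

def unary(x):              # arity 1
    return -x

def binary(x, y):          # arity 2
    return x + y

def ternary(x, y, z):      # arity 3: here, select y or z depending on x
    return y if x else z

def n_ary(*xs):            # n-ary: accepts any number of arguments
    return sum(xs)

print(nullary(), unary(3), binary(2, 5), ternary(True, 1, 0), n_ary(1, 2, 3, 4))
```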
Nullary
A constant can be treated as the output of an operation of arity 0, called a nullary operation.
Also, outside of functional programming, a function without arguments can be meaningful and not necessarily constant (due to side effects). Such functions may have some hidden input, such as global variables or the whole state of the system (time, free memory, etc.).
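For example, a minimal Python sketch (my own illustration): a function with no parameters that is nevertheless not constant, because it reads hidden state (the system clock):

```python
import time

def current_time():
    # Nullary, yet not constant: its value depends on hidden global state.
    return time.time()

print(current_time())
```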
Unary
Examples of unary operators in mathematics and in programming include the unary minus and plus, the increment and decrement operators in C-style languages (not in logical languages), and the successor, factorial, reciprocal, floor, ceiling, fractional part, sign, absolute value, square root (the principal square root), complex conjugate (unary of "one" complex number, that however has two parts at a lower level of abstraction), and norm functions in mathematics. In programming the two's complement, address reference, and the logical NOT operators are examples of unary operators.
All functions in lambda calculus and in some functional programming languages (especially those descended from ML) are technically unary, but see n-ary below.
According to Quine, since the Latin distributives are singuli, bini, terni, and so forth, the term "singulary" is the correct adjective, rather than "unary". Abraham Robinson follows Quine's usage.
In philosophy, the adjective monadic is sometimes used to describe a one-place relation such as 'is square-shaped' as opposed to a two-place relation such as 'is the sister of'.
Binary
Most operators encountered in programming and mathematics are of the binary form. For both programming and mathematics, these include the multiplication operator, the radix operator, the often omitted exponentiation operator, the logarithm operator, the addition operator, and the division operator. Logical predicates such as OR, XOR, AND, IMP are typically used as binary operators with two distinct operands. In CISC architectures, it is common to have two source operands (and store result in one of them).
Ternary
The computer programming language C and its various descendants (including C++, C#, Java, Julia, Perl, and others) provide the ternary conditional operator ?:. The first operand (the condition) is evaluated, and if it is true, the result of the entire expression is the value of the second operand, otherwise it is the value of the third operand.
The Python language has a ternary conditional expression, x if condition else y. In Elixir the equivalent would be if condition, do: x, else: y.
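A small hedged sketch in Python (variable names are mine) showing the conditional expression; only the selected operand is evaluated:

```python
x = 10
parity = "even" if x % 2 == 0 else "odd"   # three operands: condition, then-value, else-value
print(parity)  # -> even
```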
The Forth language also contains a ternary operator, */, which multiplies the first two (one-cell) numbers, dividing by the third, with the intermediate result being a double cell number. This is used when the intermediate result would overflow a single cell.
The Unix dc calculator has several ternary operators, such as |, which will pop three values from the stack and efficiently compute a modular exponentiation with arbitrary precision.
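Python's built-in three-argument pow is an analogous ternary operation, modular exponentiation; the sketch below (illustrative, not taken from the article) compares it with the naive two-step computation:

```python
# pow(base, exponent, modulus) computes (base ** exponent) % modulus efficiently,
# without building the full power as an intermediate value.
print(pow(7, 128, 13))    # -> 3
print((7 ** 128) % 13)    # -> 3, but via a very large intermediate result
```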
Many (RISC) assembly language instructions are ternary (as opposed to only two operands specified in CISC), or higher, such as MOV %AX, (%BX, %CX), which will load (MOV) into register %AX the contents of a calculated memory location that is the sum (indicated by the parentheses) of the registers %BX and %CX.
n-ary
The arithmetic mean of n real numbers is an n-ary function: (x_1 + x_2 + ... + x_n) / n.
Similarly, the geometric mean of n positive real numbers is an n-ary function: (x_1 x_2 ... x_n)^(1/n). Note that the logarithm of the geometric mean is the arithmetic mean of the logarithms of its n arguments.
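A minimal Python sketch of both means (my own illustration, using *args for the n arguments), including the logarithm relationship noted above:

```python
import math

def arithmetic_mean(*xs):
    return sum(xs) / len(xs)

def geometric_mean(*xs):
    return math.prod(xs) ** (1 / len(xs))

data = (2.0, 8.0, 4.0)
print(arithmetic_mean(*data))    # 4.666...
print(geometric_mean(*data))     # 4.0
# The log of the geometric mean equals the arithmetic mean of the logs:
print(math.log(geometric_mean(*data)))
print(arithmetic_mean(*[math.log(x) for x in data]))
```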
From a mathematical point of view, a function of n arguments can always be considered as a function of a single argument that is an element of some product space. However, it may be convenient for notation to consider n-ary functions, as for example multilinear maps (which are not linear maps on the product space, if ).
The same is true for programming languages, where functions taking several arguments could always be defined as functions taking a single argument of some composite type such as a tuple, or in languages with higher-order functions, by currying.
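The following Python sketch (illustrative; the helper names are mine) shows the same binary addition written three ways: as a two-argument function, as a unary function of a tuple, and as a curried chain of unary functions:

```python
def add(x, y):                  # binary
    return x + y

def add_tuple(pair):            # unary: the single argument comes from a product space
    x, y = pair
    return x + y

def add_curried(x):             # unary function returning another unary function
    def inner(y):
        return x + y
    return inner

print(add(2, 3), add_tuple((2, 3)), add_curried(2)(3))   # 5 5 5
```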
Varying arity
In computer science, a function that accepts a variable number of arguments is called variadic. In logic and philosophy, predicates or relations accepting a variable number of arguments are called multigrade, anadic, or variably polyadic.
Terminology
Latinate names are commonly used for specific arities, primarily based on Latin distributive numbers meaning "in groups of n", though some are based on Latin cardinal or ordinal numbers. For example, 1-ary is based on the cardinal unus, rather than on the distributive singulī, which would result in singulary.
n-ary means having n operands (or parameters), but is often used as a synonym of "polyadic".
These words are often used to describe anything related to that number (e.g., undenary chess is a chess variant with an 11×11 board, or the Millenary Petition of 1603).
The arity of a relation (or predicate) is the dimension of the domain in the corresponding Cartesian product. (A function of arity n thus has arity n+1 considered as a relation.)
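As a worked restatement (my own notation, not from the article): viewing an n-ary function f through its graph gives the corresponding (n+1)-ary relation.

```latex
% Graph of an n-ary function f as an (n+1)-ary relation R_f:
\[
  R_f = \{\, (x_1, \dots, x_n, y) \mid y = f(x_1, \dots, x_n) \,\} .
\]
```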
In computer programming, there is often a syntactical distinction between operators and functions; syntactical operators usually have arity 1, 2, or 3 (the ternary operator ?: is also common). Functions vary widely in the number of arguments, though large numbers can become unwieldy. Some programming languages also offer support for variadic functions, i.e., functions syntactically accepting a variable number of arguments.
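A short Python sketch (illustrative names are mine) of a variadic function, which can be called with different arities:

```python
def maximum(first, *rest):
    # One required argument plus any number of further arguments.
    result = first
    for value in rest:
        if value > result:
            result = value
    return result

print(maximum(3))            # called with arity 1
print(maximum(3, 9, 2, 7))   # called with arity 4
```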
See also
Logic of relatives
Binary relation
Ternary relation
Theory of relations
Signature (logic)
Parameter
p-adic number
Cardinality
Valency (linguistics)
n-ary code
n-ary group
Univariate and multivariate
Finitary
References
External links
A monograph available free online:
Burris, Stanley N., and H.P. Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag. . Especially pp. 22–24.
Abstract algebra
Universal algebra | Arity | [
"Mathematics"
] | 1,511 | [
"Abstract algebra",
"Fields of abstract algebra",
"Universal algebra",
"Algebra"
] |
42,309 | https://en.wikipedia.org/wiki/Closure%20%28topology%29 | In topology, the closure of a subset S of points in a topological space X consists of all points in S together with all limit points of S. The closure of S may equivalently be defined as the union of S and its boundary, and also as the intersection of all closed sets containing S. Intuitively, the closure can be thought of as all the points that are either in S or "very near" S. A point which is in the closure of S is a point of closure of S. The notion of closure is in many ways dual to the notion of interior.
Definitions
Point of closure
For S as a subset of a Euclidean space, x is a point of closure of S if every open ball centered at x contains a point of S (this point can be x itself).
This definition generalizes to any subset S of a metric space X. Fully expressed, for X as a metric space with metric d, x is a point of closure of S if for every ε > 0 there exists some s in S such that the distance d(x, s) < ε (s = x is allowed). Another way to express this is to say that x is a point of closure of S if the distance d(x, S) := inf{d(x, s) : s in S} = 0, where inf is the infimum.
This definition generalizes to topological spaces by replacing "open ball" or "ball" with "neighbourhood". Let S be a subset of a topological space X. Then x is a point of closure or adherent point of S if every neighbourhood of x contains a point of S (again, x itself is allowed as that point). Note that this definition does not depend upon whether neighbourhoods are required to be open.
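Restating the metric-space case in LaTeX, under the notation used above (subset S of a metric space (X, d), point x); this is a standard reformulation, not a quotation from the article:

```latex
% x is a point of closure of S  iff  every epsilon-ball around x meets S,
% equivalently  iff  the distance from x to S is zero.
\[
  \forall \varepsilon > 0 \;\; \exists s \in S : d(x, s) < \varepsilon
  \qquad\Longleftrightarrow\qquad
  d(x, S) := \inf_{s \in S} d(x, s) = 0 .
\]
```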
Limit point
The definition of a point of closure of a set is closely related to the definition of a limit point of a set. The difference between the two definitions is subtle but important – namely, in the definition of a limit point of a set S, every neighbourhood of x must contain a point of S other than x itself; i.e., each neighbourhood of x may contain x, but it must also contain a point of S that is not equal to x in order for x to be a limit point of S. A limit point of S thus satisfies a stricter condition than a point of closure of S. The set of all limit points of a set S is called the derived set of S. A limit point of a set is also called a cluster point or accumulation point of the set.
Thus, every limit point is a point of closure, but not every point of closure is a limit point. A point of closure which is not a limit point is an isolated point. In other words, a point x is an isolated point of S if it is an element of S and there is a neighbourhood of x which contains no points of S other than x itself.
For a given set S and point x, x is a point of closure of S if and only if x is an element of S or x is a limit point of S (or both).
Closure of a set
The closure of a subset S of a topological space X, denoted by cl_X(S) or possibly by cl(S) (if X is understood), and also written with an overbar over S or a superscript minus when both S and X are clear from context (moreover, cl is sometimes capitalized to Cl), can be defined using any of the following equivalent definitions:
cl(S) is the set of all points of closure of S.
cl(S) is the set S together with all of its limit points. (Each point of S is a point of closure of S, and each limit point of S is also a point of closure of S.)
cl(S) is the intersection of all closed sets containing S (see the computational sketch below).
cl(S) is the smallest closed set containing S.
cl(S) is the union of S and its boundary.
cl(S) is the set of all x in X for which there exists a net valued in S that converges to x in X.
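A minimal computational sketch (my own; it assumes a finite topological space given explicitly by its points and open sets) of the intersection-of-closed-supersets characterization:

```python
def closure(S, points, open_sets):
    """Closure of S in a finite topological space: the intersection of all
    closed sets (complements of open sets) that contain S."""
    S = frozenset(S)
    closed_sets = [frozenset(points) - frozenset(U) for U in open_sets]
    result = frozenset(points)              # the whole space is always closed
    for C in closed_sets:
        if S <= C:
            result &= C
    return set(result)

# Sierpinski space: X = {"a", "b"} with open sets {}, {"a"}, {"a", "b"}.
X = {"a", "b"}
opens = [set(), {"a"}, {"a", "b"}]
print(closure({"a"}, X, opens))   # {'a', 'b'}: every neighbourhood of 'b' meets {'a'}
print(closure({"b"}, X, opens))   # {'b'}: the set {'b'} is already closed
```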
The closure of a set has the following properties.
cl(S) is a closed superset of S.
The set S is closed if and only if S = cl(S).
If S is a subset of T, then cl(S) is a subset of cl(T).
If A is a closed set, then A contains S if and only if A contains cl(S).
Sometimes the second or third property above is taken as the of the topological closure, which still make sense when applied to other types of closures (see below).
In a first-countable space (such as a metric space), cl(S) is the set of all limits of all convergent sequences of points in S. For a general topological space, this statement remains true if one replaces "sequence" by "net" or "filter" (as described in the article on filters in topology).
Note that these properties are also satisfied if "closure", "superset", "intersection", "contains/containing", "smallest" and "closed" are replaced by "interior", "subset", "union", "contained in", "largest", and "open". For more on this matter, see closure operator below.
Examples
Consider a sphere in a 3-dimensional space. Implicitly there are two regions of interest created by this sphere: the sphere itself and its interior (which is called an open 3-ball). It is useful to distinguish between the interior and the surface of the sphere, so we distinguish between the open 3-ball (the interior of the sphere) and the closed 3-ball, which is the closure of the open 3-ball, that is, the open 3-ball plus the surface (the surface being the sphere itself).
In topological space:
In any space X, the closure of the empty set is the empty set.
In any space X, the closure of X is X itself.
Giving and the standard (metric) topology:
If X is the Euclidean space of real numbers, then the closure of an open interval such as (0, 1) as a subset of X is the corresponding closed interval [0, 1].
If X is the Euclidean space of real numbers, then the closure of the set of rational numbers is the whole space X. We say that the rational numbers are dense in the real numbers.
If is the complex plane then
If S is a finite subset of a Euclidean space X, then the closure of S is S itself. (For a general topological space, this property is equivalent to the T1 axiom.)
On the set of real numbers one can put other topologies rather than the standard one.
If is endowed with the lower limit topology, then
If one considers on the discrete topology in which every set is closed (open), then
If one considers on the trivial topology in which the only closed (open) sets are the empty set and itself, then
These examples show that the closure of a set depends upon the topology of the underlying space. The last two examples are special cases of the following.
In any discrete space, since every set is closed (and also open), every set is equal to its closure.
In any indiscrete space X, since the only closed sets are the empty set and X itself, we have that the closure of the empty set is the empty set, and for every non-empty subset A of X the closure of A is X. In other words, every non-empty subset of an indiscrete space is dense.
The closure of a set also depends upon in which space we are taking the closure. For example, if is the set of rational numbers, with the usual relative topology induced by the Euclidean space and if then is both closed and open in because neither nor its complement can contain , which would be the lower bound of , but cannot be in because is irrational. So, has no well defined closure due to boundary elements not being in . However, if we instead define to be the set of real numbers and define the interval in the same way then the closure of that interval is well defined and would be the set of all greater than .
Closure operator
A closure operator on a set X is a mapping of the power set of X into itself which satisfies the Kuratowski closure axioms.
Given a topological space X, the topological closure induces a function cl_X from the power set of X to itself, defined by sending a subset S to cl_X(S); the overbar or superscript-minus notation may be used instead. Conversely, if c is a closure operator on a set X, then a topological space is obtained by defining the closed sets as being exactly those subsets S that satisfy c(S) = S (so the complements in X of these subsets form the open sets of the topology).
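A small Python sketch (my own, practical only for small finite sets) that brute-force checks whether an operator satisfies the Kuratowski closure axioms; the identity map, which is the closure operator of the discrete topology, is used as the example:

```python
def powerset(points):
    subsets = [frozenset()]
    for p in points:
        subsets += [s | {p} for s in subsets]
    return subsets

def satisfies_kuratowski(cl, points):
    """Check the Kuratowski closure axioms for cl on the power set of points."""
    subsets = powerset(points)
    if cl(frozenset()) != frozenset():              # closure of the empty set is empty
        return False
    for A in subsets:
        if not A <= cl(A):                          # A is contained in its closure
            return False
        if cl(cl(A)) != cl(A):                      # closure is idempotent
            return False
        for B in subsets:
            if cl(A | B) != cl(A) | cl(B):          # closure distributes over binary unions
                return False
    return True

print(satisfies_kuratowski(lambda A: frozenset(A), {"a", "b"}))   # True
```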
The closure operator is dual to the interior operator, denoted int_X, in the sense that cl_X(S) = X \ int_X(X \ S), and also int_X(S) = X \ cl_X(X \ S).
Therefore, the abstract theory of closure operators and the Kuratowski closure axioms can be readily translated into the language of interior operators by replacing sets with their complements in X.
In general, the closure operator does not commute with intersections. However, in a complete metric space the following result does hold:
Facts about closures
A subset S is closed in X if and only if S = cl(S). In particular:
The closure of the empty set is the empty set;
The closure of X itself is X.
The closure of an intersection of sets is always a subset of (but need not be equal to) the intersection of the closures of the sets.
In a union of finitely many sets, the closure of the union and the union of the closures are equal; the union of zero sets is the empty set, and so this statement contains the earlier statement about the closure of the empty set as a special case.
The closure of the union of infinitely many sets need not equal the union of the closures, but it is always a superset of the union of the closures.
Thus, just as the union of two closed sets is closed, so too does closure distribute over binary unions: that is, But just as a union of infinitely many closed sets is not necessarily closed, so too does closure not necessarily distribute over infinite unions: that is, is possible when is infinite.
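In symbols (using $\operatorname{cl}$ for the closure operator, as above), these facts about finite unions, intersections and arbitrary unions read:
\[
\operatorname{cl}(A \cup B) = \operatorname{cl}(A) \cup \operatorname{cl}(B), \qquad
\operatorname{cl}\Bigl(\bigcap_{i} A_i\Bigr) \subseteq \bigcap_{i} \operatorname{cl}(A_i), \qquad
\bigcup_{i} \operatorname{cl}(A_i) \subseteq \operatorname{cl}\Bigl(\bigcup_{i} A_i\Bigr).
\]
For a concrete instance of strict inclusion in the infinite case, take $A_n = \{1/n\}$ in $\mathbb{R}$: each $\operatorname{cl}(A_n) = A_n$, so the union of the closures is $\{1, 1/2, 1/3, \dots\}$, whereas the closure of the union also contains $0$.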
If and if is a subspace of (meaning that is endowed with the subspace topology that induces on it), then and the closure of computed in is equal to the intersection of and the closure of computed in :
Because is a closed subset of the intersection is a closed subset of (by definition of the subspace topology), which implies that (because is the closed subset of containing ). Because is a closed subset of from the definition of the subspace topology, there must exist some set such that is closed in and Because and is closed in the minimality of implies that Intersecting both sides with shows that
It follows that is a dense subset of if and only if is a subset of
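Written out, with $\operatorname{cl}_X$ and $\operatorname{cl}_S$ denoting closures taken in $X$ and in the subspace $S$ respectively, the statements of this paragraph are:
\[
\operatorname{cl}_S(A) = S \cap \operatorname{cl}_X(A) \quad \text{for } A \subseteq S \subseteq X,
\]
and consequently $A$ is dense in $S$ (that is, $\operatorname{cl}_S(A) = S$) if and only if $S \subseteq \operatorname{cl}_X(A)$.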
It is possible for to be a proper subset of for example, take and
If but is not necessarily a subset of then only
is always guaranteed, where this containment could be strict (consider for instance with the usual topology, and ), although if happens to be an open subset of then the equality will hold (no matter the relationship between and ).
Let and assume that is open in Let which is equal to (because ). The complement is open in where being open in now implies that is also open in Consequently is a closed subset of where contains as a subset (because if is in then ), which implies that Intersecting both sides with proves that The reverse inclusion follows from
Consequently, if is any open cover of and if is any subset then:
because for every (where every is endowed with the subspace topology induced on it by ).
This equality is particularly useful when is a manifold and the sets in the open cover are domains of coordinate charts.
In words, this result shows that the closure in of any subset can be computed "locally" in the sets of any open cover of and then unioned together.
In this way, this result can be viewed as the analogue of the well-known fact that a subset is closed in if and only if it is "locally closed in ", meaning that if is any open cover of then is closed in if and only if is closed in for every
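In formula form, the local computation described here can be written as follows, where $\mathcal{U}$ is an open cover of $X$, $A \subseteq X$, and each $U \in \mathcal{U}$ carries the subspace topology:
\[
\operatorname{cl}_X(A) = \bigcup_{U \in \mathcal{U}} \operatorname{cl}_U(A \cap U),
\]
which holds because $\operatorname{cl}_U(A \cap U) = U \cap \operatorname{cl}_X(A \cap U)$ for every open $U \subseteq X$.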
Functions and closure
Continuity
A function between topological spaces is continuous if and only if the preimage of every closed subset of the codomain is closed in the domain; explicitly, this means: is closed in whenever is a closed subset of
In terms of the closure operator, is continuous if and only if for every subset
That is to say, given any element $x$ that belongs to the closure of a subset $A$, $f(x)$ necessarily belongs to the closure of $f(A)$ in the codomain. If we declare that a point $x$ is close to a subset $A$ if $x \in \operatorname{cl}(A)$, then this terminology allows for a plain English description of continuity: $f$ is continuous if and only if, for every subset $A$, $f$ maps points that are close to $A$ to points that are close to $f(A)$. Thus continuous functions are exactly those functions that preserve (in the forward direction) the "closeness" relationship between points and sets: a function is continuous if and only if whenever a point is close to a set then the image of that point is close to the image of that set.
Similarly, is continuous at a fixed given point if and only if whenever is close to a subset then is close to
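Expressed with the closure operator, the continuity criterion just described for a map $f : X \to Y$ is:
\[
f\bigl(\operatorname{cl}_X(A)\bigr) \subseteq \operatorname{cl}_Y\bigl(f(A)\bigr) \quad \text{for every subset } A \subseteq X,
\]
and, pointwise, $f$ is continuous at $x$ if and only if $x \in \operatorname{cl}_X(A)$ implies $f(x) \in \operatorname{cl}_Y(f(A))$.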
Closed maps
A function is a (strongly) closed map if and only if whenever is a closed subset of then is a closed subset of
In terms of the closure operator, is a (strongly) closed map if and only if for every subset
Equivalently, is a (strongly) closed map if and only if for every closed subset
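In the same notation, the two characterisations of (strongly) closed maps $f : X \to Y$ read:
\[
\operatorname{cl}_Y\bigl(f(A)\bigr) \subseteq f\bigl(\operatorname{cl}_X(A)\bigr) \quad \text{for every subset } A \subseteq X,
\qquad \text{equivalently} \qquad
\operatorname{cl}_Y\bigl(f(C)\bigr) = f(C) \quad \text{for every closed subset } C \subseteq X.
\]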
Categorical interpretation
One may define the closure operator in terms of universal arrows, as follows.
The powerset of a set may be realized as a partial order category in which the objects are subsets and the morphisms are inclusion maps whenever is a subset of Furthermore, a topology on is a subcategory of with inclusion functor The set of closed subsets containing a fixed subset can be identified with the comma category This category — also a partial order — then has initial object Thus there is a universal arrow from to given by the inclusion
Similarly, since every closed set containing corresponds with an open set contained in we can interpret the category as the set of open subsets contained in with terminal object the interior of
All properties of the closure can be derived from this definition and a few properties of the above categories. Moreover, this definition makes precise the analogy between the topological closure and other types of closures (for example algebraic closure), since all are examples of universal arrows.
See also
Closed regular set, a set equal to the closure of its interior
Notes
References
Bibliography
External links
General topology
Closure operators | Closure (topology) | [
"Mathematics"
] | 2,730 | [
"General topology",
"Topology",
"Closure operators",
"Order theory"
] |
42,314 | https://en.wikipedia.org/wiki/Naturalization | Naturalization (or naturalisation) is the legal act or process by which a non-national of a country acquires the nationality of that country after birth. The definition of naturalization by the International Organization for Migration of the United Nations excludes citizenship that is automatically acquired (e.g. at birth) or is acquired by declaration. Naturalization usually involves an application or a motion and approval by legal authorities. The rules of naturalization vary from country to country but typically include a promise to obey and uphold that country's laws and taking and subscribing to an oath of allegiance, and may specify other requirements such as a minimum legal residency and adequate knowledge of the national dominant language or culture. To counter multiple citizenship, some countries require that applicants for naturalization renounce any other citizenship that they currently hold, but whether this renunciation actually causes loss of original citizenship, as seen by the host country and by the original country, will depend on the laws of the countries involved. Arguments for increasing naturalization include reducing backlogs in naturalization applications and reshaping the electorate of the country.
History
The massive increase in population flux due to globalization and the sharp increase in the numbers of refugees following World War I created many stateless persons, people who were not citizens of any state. In some rare cases, laws for mass naturalization were passed. As naturalization laws had been designed to cater for the relatively few people who had voluntarily moved from one country to another (expatriates), many western democracies were not ready to naturalize large numbers of people. This included the massive influx of stateless people which followed massive denationalizations and the expulsion of ethnic minorities from newly created nation states in the first part of the 20th century.
Since World War II, the increase in international migrations created a new category of migrants, most of them economic migrants. For economic, political, humanitarian and pragmatic reasons, many states passed laws allowing a person to acquire their citizenship after birth, such as by marriage to a national – jus matrimonii – or by having ancestors who are nationals of that country, in order to reduce the scope of this category. However, in some countries this system still maintains a large part of the immigrant population in an illegal status, albeit with some massive regularizations. Examples include Spain under José Luis Rodríguez Zapatero's government, and Italy under Silvio Berlusconi's government.
Countries without a path to naturalization
Myanmar and Uruguay are currently the only countries in the world that deny immigrants any path to naturalization. Uruguayan legal citizenship has special characteristics. A person who acquires it retains their nationality of origin, which is determined by Uruguayan law to be that of their country of birth and therefore, is immutable. Legal citizens acquire political rights but do not acquire Uruguayan nationality as natural citizens do. According to Uruguayan law, those born in Uruguay or whose parents or grandparents are Uruguayan natural citizens are considered to be Uruguayan nationals.
As a result of Uruguay's unusual distinction between citizenship and nationality (it is the only country in the world that recognizes the right to citizenship without being a national), legal citizens have encountered problems with their Uruguayan passports at airports around the world since 2015. This is due to recommendations in the seventh edition of Doc. 9303 of the International Civil Aviation Organization (ICAO), which requires that travel documents issued by participating states include the "Nationality" field. The lack of a naturalization path means that the Nationality field in legal citizens' passports indicates their country of birth, which Uruguay assumes to be their nationality of origin. Many countries do not accept passports issued by a country that declares the holder to be a national of another country. As a consequence, it has severely curtailed legal citizens' exercise of the right to free movement, as their travel abroad is often difficult or downright impossible.
Due to its current and narrow definition of nationality, Uruguay could be violating the sovereignty of other countries by assigning foreign nationalities in its official documents, thus overriding their powers. Some Uruguayan legal citizens may even, as a result of the application of a national law of a third nation and this Uruguayan interpretation, become de facto stateless.
Summary by country
The following list is a brief summary of the duration of legal residence before a national of a foreign state, without any cultural, historical, or marriage ties or connections to the state in question, can request citizenship under that state's naturalization laws.
Laws by country
Australia
The Australian Citizenship Act 1973 ended the preferential treatment for British subjects from 1 December 1973. People who became permanent residents from 1 July 2007 must have been lawfully resident in Australia for four years before applying for citizenship by conferral. Those who were present in Australia as permanent residents before 1 July 2007 remain subject to the previous residence requirement (in force since 1984, e.g. resident for two years).
People's Republic of China
The People's Republic of China gives citizenship to people with one or two parents with Chinese nationality who have not taken residence in other countries. The country also gives citizenship to people born on its territory to stateless people who have settled there. Furthermore, individuals may apply for nationality if they have a near relative with Chinese nationality, if they have settled in China, or if they present another legitimate reason. In practice, few people gain Chinese citizenship; as of 2010, China had only 1,448 naturalised Chinese in total.
The naturalization process starts with a written application. Applicants must submit three copies, written with a ball-point or fountain pen, to national authorities, and to provincial authorities in the Ministry of Public Security and the Public Security Bureau. Applicants must also submit original copies of a foreign passport, a residence permit, a permanent residence permit, and four two-and-a-half inch long pictures. According to the conditions outlined in the Nationality Law of the People's Republic of China, authorities may also require "any other material that the authority believes are related to the nationality application".
France
People who fulfil all of the following criteria can obtain French citizenship through naturalisation:
At least 5 years' residence, although reduced to the following minimum periods in certain situations:
2 years:
Successfully completed 2 years of studies with a view to obtaining a degree or diploma at a French higher educational institution;
Made an exceptional contribution to France's standing and influence in the arts, science, sport, culture, academia, entrepreneurship, etc.
No minimum residence period:
Performed military service with the French Army;
Served voluntarily in wartime in the French Army or an allied army;
Rendered exceptional service to France (requires personal ministerial approval);
Attained the official status of a refugee in France;
Citizen of a member state of the and have French as their native language or have completed at least 5 years of schooling in a French-speaking educational establishment.
Integration into French society, including adhering to the values and principles of the Republic, and having a sufficient knowledge of French history, culture and society;
Sufficient spoken command of the French language;
No serious criminal convictions, defined as follows:
Never been sentenced to more than 6 months' imprisonment (not including suspended sentences) for any crime (unless the applicant has been legally deemed rehabilitated or the sentence has been wiped from their criminal record);
Never been convicted of any crime that counters France's fundamental interests (unless the applicant has been legally deemed rehabilitated or the sentence has been wiped from their criminal record);
Never been convicted of any act of terrorism (unless the applicant has been legally deemed rehabilitated or the sentence has been wiped from their criminal record).
The fee for naturalisation is €55, except in French Guiana, where it is €27.50.
Germany
People who fulfil all of the following criteria can obtain German citizenship through naturalisation:
At least 8 years' residence in Germany with a valid residence permit. This minimum period is reduced as follows:
7 years for people who have successfully completed the Integrationskurs;
3 years for spouses and registered same-sex partners of a German citizen (must have been married or in the registered partnership for at least 2 years at the time of application).
Declaring allegiance to the German Constitution;
Sufficient command of the German language;
No serious criminal convictions.
The dependent minor children of an applicant for naturalisation may also themselves become naturalised German citizens.
The fee for standard naturalisation is €255, while it is €51 per dependent minor child naturalised along with their parent. The fee may be waived in cases of extreme hardship or public interest.
People who naturalise as German citizens must usually give up their previous nationality, as German law takes a restrictive approach to multiple citizenship. Exceptions are made for EU and Swiss citizens (provided that the law of their country of origin does not prohibit the acquisition of another citizenship) and citizens of countries where renouncing one's citizenship is too difficult or humiliating (e.g. Afghanistan), prohibitively expensive (e.g. the United States) or legally impossible (e.g. Argentina).
Grenada
The Grenadian Government grants citizenship of Grenada for the following reasons:
By Birth
Any person born in Grenada in 1974 or later acquires Grenadian citizenship at birth. The exception is for children born to diplomat parents.
By Descent
Children born outside Grenada to a Grenadian-born parent.
By Registration
Children (over 18) born outside of Grenada to a Grenadian parent.
Children (under 18) born outside of Grenada to a Grenadian parent.
A person who was born outside of Grenada who is a Grandchild of a Grenadian citizen by birth.
A person who is/or has been married to a citizen of Grenada.
Citizens of Caribbean Countries may apply for citizenship by registration provided that person has been living in Grenada for 4 years and 2 years as a Permanent Resident (within the four-year period) immediately preceding the date of application.
Commonwealth & Irish citizens may apply for citizenship by registration provided that the person has been living in Grenada for 7 years and 2 years as a Permanent Resident (within the seven-year period) immediately preceding the date of application.
By Naturalisation
An Alien or a British Protected Person may apply for citizenship by naturalisation provided that the person has been living in Grenada for 7 years and 2 years as a Permanent Resident (within the seven-year period) immediately preceding the date of application.
India
The Indian citizenship and nationality law and the Constitution of India provides single citizenship for the entire country. The provisions relating to citizenship at the commencement of the Constitution are contained in Articles 5 to 11 in Part II of the Constitution of India. Relevant Indian legislation is the Citizenship Act 1955, which has been amended by the Citizenship (Amendment) Act 1986, the Citizenship (Amendment) Act 1992, the Citizenship (Amendment) Act 2003, and Citizenship (Amendment) Ordinance 2005. The Citizenship (Amendment) Act 2003 received the assent of the President of India on 7 January 2004 and came into force on 3 December 2004. The Citizenship (Amendment) Ordinance 2005 was promulgated by the President of India and came into force on 28 June 2005.
Following these reforms, Indian nationality law largely follows the jus sanguinis (citizenship by right of blood) as opposed to the jus soli (citizenship by right of birth within the territory).
In 2019, a Citizenship Amendment Act was passed by the Parliament of India. This Act aims at fast tracking citizenship for illegal immigrants and refugees fleeing religious persecution for people of Hindu, Sikh, Buddhist, Jain, Parsi or Christian faiths who have entered India on or before 31 December 2014 from the neighbouring countries of Pakistan, Afghanistan and Bangladesh.
Italy
The Italian Government grants Italian citizenship for the following reasons.
Automatically
Jus sanguinis: for birth;
If an Italian citizen recognizes, at a time after birth, a minor child;
For adoption;
To obtain or re-obtain from a parent.
Following declaration
By descent;
Jus soli: by birth or descent in Italy;
By marriage or naturalization
By marriage: the foreign or stateless spouse of an Italian citizen may acquire Italian citizenship after two years of legal residence in Italy or, if residing abroad, after three years from the date of marriage;
By naturalization: the foreigner can apply for Italian citizenship after ten years of legal residence in Italy, reduced to five years for those who have been recognized as stateless or refugee and four years for citizens of countries of the European Community.
Indonesia
Indonesian nationality is regulated by Law No. 12/2006 (UU No. 12 Tahun 2006). The Indonesian nationality law is based on jus sanguinis and jus soli. The Indonesian nationality law does not recognize dual citizenship except for people under the age of 18 (limited double citizenship principle). After reaching 18 years of age individuals are forced to choose one citizenship (single citizenship principle).
A foreign citizen can apply to become an Indonesian citizen with the following requirements:
Age 18 or older, or married
Resided in Indonesia for a minimum of 5 consecutive years or 10 non-consecutive years
Physically and mentally healthy
Ability to speak Indonesian and acknowledge Pancasila and Undang-Undang Dasar Negara Republik Indonesia Tahun 1945
Never convicted of a crime for which the punishment is imprisonment for one year or more
If having Indonesian citizenship will not give the person dual citizenship
Employed or have fixed income
Pay citizenship fee
Any application for citizenship is granted by the President of Indonesia.
Israel
Israel's Declaration of Independence was made on 14 May 1948, the day before the British Mandate was due to expire as a result of the United Nations Partition Plan. The Israeli parliament created two laws regarding immigration, citizenship and naturalization: the Law of Return and the Israeli citizenship law. The Law of Return, enacted on July 15, 1950, gives Jews living anywhere in the world the right to immigrate to Israel. This right to immigrate did not and still does not grant citizenship. In fact, for four years after Israel gained independence, there were no Israeli citizens.
On July 14, 1952, the Israeli parliament enacted the Israeli Nationality Law. The Nationality Law naturalized all citizens of Mandated Palestine, the inhabitants of Israel on July 15, 1952, and those who had legally resided in Israel between May 14, 1948, and July 14, 1952. The law further clarified that naturalization was available to immigrants who had arrived before Israel's creation, immigrants who arrived after statehood was granted, and those who did not come to Israel as immigrants but have since expressed desire to settle in Israel, with restriction. Naturalization applicants must also meet the following requirements: be over 18 years of age, have resided in Israel for three out of the five preceding years, have settled or intend to settle permanently in Israel, have some knowledge of Hebrew, and have renounced prior nationality or demonstrated ability to renounce nationality after becoming a citizen of Israel.
Because of Israel's relatively new and culturally mixed identity, Israel does not grant citizenship to people born on Israeli soil. Instead, the government chose to enact a jus sanguinis system, with the naturalization restrictions listed above. There is currently no legislation on second-generation immigrants (those born in Israel to immigrant parents). Furthermore, foreign spouses can apply for citizenship through the Minister of the Interior, but have a variety of restrictions and are not guaranteed citizenship.
Luxembourg
People who fulfil all of the following criteria can obtain Luxembourg citizenship through naturalisation:
At least 18 years old.
At least 5 years of legal residence in Luxembourg, including an uninterrupted period of one year immediately before applying for citizenship.
Passing a Luxembourgish language exam.
Taking a course on "Living together in the Grand Duchy" and passing the associated examination.
Never having been handed an immediate custodial sentence of 12 months or more or a suspended custodial sentence of 24 months or more, in any country.
Malaysia
Naturalisation in Malaysia is guided by the 1964 Malaysian Constitution. According to the law, those who wish to become citizens must have lived in the country for a period of 10–12 years. Would-be citizens are required to speak the Malay language and to submit the identity cards of two Malaysians who recommend the applicant for citizenship. As the Government of Malaysia does not recognise dual citizenship, those who seek naturalisation need to reside permanently in the country and to renounce their former citizenship.
The requirements are as follows:
The applicant shall appear before the Registrar of Citizenship when submitting the application.
The applicant must be aged 21 years and above on the date of the application.
The applicant has resided in the federation for a period of not less than 10 years in a period of 12 years, including the 12 months immediately preceding the date of application.
The applicant intends to reside permanently in the federation.
The applicant is of good character.
The applicant has adequate knowledge of the Malay language.
The applicant must be sponsored by two referees who are citizens aged 21 years and above and who are not relatives, not hired people, and not advocates or solicitors to the applicant.
Form C must be completed and submitted together with copies of the necessary documents.
Article 16 of the 1957 Malaysian Constitution also previously stated a similar condition.
Philippines
Commonwealth Act No. 473, the Revised Naturalization Law, approved June 17, 1939, provided that people having certain specified qualifications may become a citizen of the Philippines by naturalization. Republic Act No. 9139, approved June 8, 2001, provided that aliens under the age of 18 who were born in the Philippines, who have resided in the Philippines since birth, and who possess other specified qualifications may be granted Philippines citizenship by administrative proceeding subject to certain requirements.
Russia
Naturalization in Russia is guided by articles 13 and 14 of the federal law "About Citizenship of Russian Federation" passed on May 31, 2002. Citizenship of Russia can be obtained in general or simplified order. To become a citizen in general order, one must be 18 years of age or older, continuously live in Russia as a permanent resident for at least five years (this term is limited to one year for valued specialists, political asylum seekers and refugees), have legal means of existence, promise to obey the laws and Constitution of Russia and be fluent in the Russian language.
There is also a possibility to naturalize in a simplified order, in which certain requirements will be waived. Eligible for that are people, at least one parent of whom is a Russian citizen living on Russian territory; people, who lived on the territories of the former Soviet republics but never obtained citizenships of those nations after they gained independence; people, who were born on the territory of RSFSR and formerly held Soviet citizenship; people married to Russian citizens for at least 3 years; people, who served in Russian Armed Forces under contract for at least 3 years; parents of mentally incapacitated children over 18 who are Russian citizens; participants of the State Program for Assisting Compatriots Residing Abroad; and some other categories.
Spain
People who fulfill all of the following criteria can obtain Spanish citizenship through naturalisation
At least 10 years' residence in Spain. This period is reduced to 5 years for people who have obtained refugee status; 2 years for nationals of Ibero-American countries, Andorra, the Philippines, Equatorial Guinea, Portugal or persons of Sephardic origin; 1 year for spouses, widows, widowers, people born in Spain or by a Spanish mother or father.
Sufficient command of the Spanish language and culture;
Declaring allegiance to the Spanish Constitution;
No serious criminal convictions.
People who naturalise as Spanish citizens must usually give up their previous nationality, as Spanish law takes a restrictive approach to multiple citizenship.
South Africa
Chapter 2 of the South African Citizenship Act, enacted on October 6, 1995, defines who is considered a naturalized citizen at the time of the act and also outlines the naturalization process for future immigrants.
Any person who immediately prior to the commencement of the act had been a South African citizen via naturalization, had been deemed to be a South African citizen by registration, or had been a citizen via naturalization of any of the former states now composing South Africa is now considered to be a naturalized citizen of South Africa.
Those wishing to apply for naturalization in the future must apply to the Minister of Home Affairs and must meet a slew of requirements. First, naturalization applicants must be over the age of 18 and must have been a permanent resident of South Africa for five years prior to application (prior to 2010, the permanent residence requirement was one year prior to application and for four out of the eight years prior to application). Applicants must also demonstrate good character and knowledge of the basic responsibilities and privileges of a South African citizen. The ability to communicate in one of the official languages of South Africa is also required. Applicants must show the intention to reside in South Africa after naturalization, and they are required to make a declaration of allegiance. The Constitution of South Africa states that national legislation must provide for the acquisition, loss and restoration of citizenship.
Being a naturalized South African citizen is a privilege, not a right. Even after meeting all the requirements and going through the naturalization process, the minister holds the right to deny citizenship. Foreign spouses of South African citizens can apply for naturalization after two years of marriage, but are also subject to potential denial by the minister. The minister can also grant citizenship to minors if their parent applies for them.
The minister also holds the power to revoke naturalization at any time for specific reasons listed in the Act. Reasons for revoking the naturalization certificate include marrying someone who is a citizen of another country and holding citizenship in another country, or applying for citizenship of another country without prior authorization for retention of citizenship. If a permanent resident is denied naturalization, he or she must wait at least one year before reapplying.
United Kingdom
There has always been a distinction in the law of England and Wales between the subjects of the monarch and aliens: the monarch's subjects owed the monarch allegiance, and included those born in his or her dominions (natural-born subjects) and those who later gave him or her their allegiance (naturalised subjects). Today, the requirements for naturalisation as a citizen of the United Kingdom depend on whether or not one is the spouse or civil partner of a citizen. An applicant who is a spouse or civil partner of a British citizen must:
hold indefinite leave to remain in the UK (or an equivalent such as Right of Abode or Irish citizenship)
have lived legally in the UK for three years
have been outside of the UK for no more than 90 days during the one-year period prior to filing the application.
show sufficient knowledge of life in the UK, either by passing the Life in the United Kingdom test or by attending combined English language and citizenship classes. Proof of this must be supplied with one's application for naturalisation. Those aged 65 or over may be able to claim exemption.
meet specified English, Welsh or Scottish Gaelic language competence standards.
For those not married to or in a civil partnership with a British citizen, the requirements are:
Five years legal residence in the UK
Indefinite leave to remain or "equivalent" for this purpose (see above) must have been held for 12 months
the applicant must intend to continue to live in the UK or work overseas for the UK government or a British corporation or association
the same "good character" standards apply as for those married to British citizens
the same language and knowledge of life in the UK standards apply as for those married to British citizens.
United States
Persons who are not US citizens may receive citizenship through the process of naturalization, following the Congressional requirements in the Immigration and Nationality Act (INA). Naturalized citizens have the same rights as those who acquired citizenship at birth.
The INA states the following:
The Naturalization Act of 1795 set the initial rules on naturalization: "free, White persons" who had been resident for five years or more. An 1862 law allowed honorably discharged Army veterans of any war to petition for naturalization after only one year of residence in the United States. An 1894 law extended the same privilege to honorably discharged five-year veterans of the Navy or Marine Corps. Laws enacted in 1919, 1926, 1940, and 1952 continued preferential treatment provisions for veterans.
Following the Spanish–American War in 1898, Philippine citizens were classified as US nationals, and the 1917 Jones–Shafroth Act granted US citizenship to natives of Puerto Rico. But the 1934 Tydings–McDuffie Act reclassified Filipinos as aliens, and set a quota of 50 immigrants per year, and otherwise applying the Immigration Act of 1924 to them.
The Magnuson Act repealed the Chinese Exclusion Act. During the 1940s, 100 annual immigrants from British India and the Philippines were allowed. The War Brides Act of 1945 permitted soldiers to bring back their foreign wives and established precedent in naturalization through marriage. The Immigration Act of 1965 finally allowed people from all nations to be given equal access to immigration and naturalization.
Illegal immigration became a major issue in the United States at the end of the 20th century. The Immigration Reform and Control Act of 1986, while tightening border controls, also provided the opportunity of naturalization for illegal aliens who had been in the country for at least four years. Today, lawful permanent residents of the United States are eligible to apply for US citizenship after five years, unless they continue to be married to a US citizen, in which case they can apply after only three years of permanent residency.
The Child Citizenship Act of 2000 streamlined the naturalization process for children adopted internationally. A child under age 18 who is adopted by at least one US citizen parent, and is in the custody of the citizen parent(s), is now automatically naturalized once admitted to the United States as an immigrant or when legally adopted in the United States, depending on the visa under which the child was admitted to the United States. The Act also provides that the non-citizen minor child of a newly naturalized US citizen, whether by birth or adoption, also automatically receives US citizenship.
Mass naturalizations
A few rare mass naturalization processes have been implemented by nation states. In 1891, Brazil granted naturalization to all aliens living in the country. In 1922, Greece massively naturalized all the Greek refugees coming from Turkey. The second massive naturalization process was in favor of Armenian refugees coming from Turkey, who went to Syria, Lebanon or other former Ottoman countries. Reciprocally, Turkey massively naturalized refugees of Turkish descent, or of other ethnic backgrounds of Muslim creed, from these countries during a redemption process.
Canada instituted a mass naturalization by Act of Parliament with the enactment of the Canadian Citizenship Act 1946.
After annexation of the territories east of the Curzon line by the Soviet Union in 1945, Soviets naturalized en masse all the inhabitants of those territories—including ethnic Poles, as well as its other citizens who had been deported into the Soviet Union, mainly to Kazakhstan. Those people were forcibly naturalized as Soviet citizens. Later on, Germany granted to the ethnic German population in Russia and Kazakhstan full citizenship rights. Poland has a limited repatriation program in place.
In the late 1970s, President Ferdinand Marcos facilitated the mass naturalization of ethnic Chinese in the Philippines.
The most recent massive naturalization case resulted from the Argentine economic crisis in the beginning of the 21st century. Existing or slightly updated right of return laws in Spain and Italy allowed many of their diasporic descendants to obtain—in many cases to regain—naturalization in virtue of jus sanguinis, as in the Greek case. Hence, many Argentines acquired European nationality.
Since the Fourteenth Amendment to the United States Constitution grants citizenship only to those "born or naturalized in the United States, and subject to the jurisdiction thereof", and the original United States Constitution only grants Congress the power of naturalization, it could be argued that all acts of Congress that expand the right of citizenship are cases of mass naturalization. This includes the acts that extended U.S. citizenship to citizens of Puerto Rico, the United States Virgin Islands, Guam, and the Northern Mariana Islands, as well as the Indian Citizenship Act of 1924 which made all Native Americans citizens (most of them were previously excluded under the "jurisdiction" clause of the 14th Amendment).
In the eastern Malaysian state of Sabah, mass naturalisation also took place under the administrations of the United Sabah National Organisation (USNO) and the Sabah People's United Front (BERJAYA), Muslim-dominated political parties, in order to increase the Muslim population in the territory by naturalising immigrants and refugees from the mainly Muslim areas of Mindanao and the Sulu Archipelago in the Philippines and Sulawesi in Indonesia.
In occupied territories
The mass naturalization of native people in occupied territories is illegal under the laws of war (Hague and Geneva Conventions). However, there have been many instances of such illegal mass naturalizations in the 20th century.
See also
Citizenship
Denaturalization
Integration of immigrants
Permanent residency
History of citizenship
European Convention on Nationality
Convention on the Reduction of Statelessness
Notes
References
External links
PoliticosLatinos.com Videos of 2008 US Presidential Election Candidates' Positions regarding Immigration
Naturalization First Appeared in the Constitution
EUDO CITIZENSHIP Observatory
Acquired citizenship
Philosophy of law
Time in government | Naturalization | [
"Physics"
] | 5,902 | [
"Spacetime",
"Physical quantities",
"Time in government",
"Time"
] |
42,315 | https://en.wikipedia.org/wiki/Topological%20group | In mathematics, topological groups are the combination of groups and topological spaces, i.e. they are groups and topological spaces at the same time, such that the continuity condition for the group operations connects these two structures together and consequently they are not independent from each other.
Topological groups have been studied extensively in the period of 1925 to 1940. Haar and Weil (respectively in 1933 and 1940) showed that the integrals and Fourier series are special cases of a very wide class of topological groups.
Topological groups, along with continuous group actions, are used to study continuous symmetries, which have many applications, for example, in physics. In functional analysis, every topological vector space is an additive topological group with the additional property that scalar multiplication is continuous; consequently, many results from the theory of topological groups can be applied to functional analysis.
Formal definition
A topological group, $G$, is a topological space that is also a group such that the group operation (in this case product):
$\mu : G \times G \to G,\ (x, y) \mapsto xy$,
and the inversion map:
$\iota : G \to G,\ x \mapsto x^{-1}$,
are continuous.
Here $G \times G$ is viewed as a topological space with the product topology.
Such a topology is said to be compatible with the group operations and is called a group topology.
Checking continuity
The product map $\mu : G \times G \to G$ is continuous if and only if for any $x, y \in G$ and any neighborhood $W$ of $xy$ in $G$, there exist neighborhoods $U$ of $x$ and $V$ of $y$ in $G$ such that $UV \subseteq W$, where $UV := \{uv : u \in U,\ v \in V\}$.
The inversion map $\iota : G \to G$ is continuous if and only if for any $x \in G$ and any neighborhood $V$ of $x^{-1}$ in $G$, there exists a neighborhood $U$ of $x$ in $G$ such that $U^{-1} \subseteq V$, where $U^{-1} := \{u^{-1} : u \in U\}$.
To show that a topology is compatible with the group operations, it suffices to check that the map
$G \times G \to G,\ (x, y) \mapsto xy^{-1}$,
is continuous.
Explicitly, this means that for any $x, y \in G$ and any neighborhood $W$ in $G$ of $xy^{-1}$, there exist neighborhoods $U$ of $x$ and $V$ of $y$ in $G$ such that $UV^{-1} \subseteq W$.
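As a simple illustration of this criterion (a standard example, written out here for concreteness), consider the additive group of real numbers with its usual topology. Given $x, y \in \mathbb{R}$ and any neighborhood $W$ of $x - y$, choose $\varepsilon > 0$ with $(x - y - \varepsilon,\ x - y + \varepsilon) \subseteq W$ and take
\[
U = \bigl(x - \tfrac{\varepsilon}{2},\ x + \tfrac{\varepsilon}{2}\bigr), \qquad
V = \bigl(y - \tfrac{\varepsilon}{2},\ y + \tfrac{\varepsilon}{2}\bigr),
\]
so that $U - V = \{u - v : u \in U,\ v \in V\} \subseteq (x - y - \varepsilon,\ x - y + \varepsilon) \subseteq W$. This verifies that $(x, y) \mapsto x - y$ is continuous and hence that the usual topology makes $(\mathbb{R}, +)$ a topological group.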
Additive notation
This definition used notation for multiplicative groups;
the equivalent for additive groups would be that the following two operations are continuous:
$+ : G \times G \to G,\ (x, y) \mapsto x + y$,
$- : G \to G,\ x \mapsto -x$.
Hausdorffness
Although not part of this definition, many authors require that the topology on be Hausdorff.
One reason for this is that any topological group can be canonically associated with a Hausdorff topological group by taking an appropriate canonical quotient;
this however, often still requires working with the original non-Hausdorff topological group.
Other reasons, and some equivalent conditions, are discussed below.
This article will not assume that topological groups are necessarily Hausdorff.
Category
In the language of category theory, topological groups can be defined concisely as group objects in the category of topological spaces, in the same way that ordinary groups are group objects in the category of sets.
Note that the axioms are given in terms of the maps (binary product, unary inverse, and nullary identity), hence are categorical definitions.
Homomorphisms
A homomorphism of topological groups means a continuous group homomorphism .
Topological groups, together with their homomorphisms, form a category.
A group homomorphism between topological groups is continuous if and only if it is continuous at some point.
An isomorphism of topological groups is a group isomorphism that is also a homeomorphism of the underlying topological spaces.
This is stronger than simply requiring a continuous group isomorphism—the inverse must also be continuous.
There are examples of topological groups that are isomorphic as ordinary groups but not as topological groups.
Indeed, any non-discrete topological group is also a topological group when its underlying group is instead considered with the discrete topology.
The underlying groups are the same, but as topological groups the two are not isomorphic.
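A concrete instance (a standard example, used here only for illustration): let the additive group of real numbers carry, on one hand, the discrete topology and, on the other, its usual topology. The identity map
\[
\operatorname{id} : (\mathbb{R}, \tau_{\text{discrete}}) \to (\mathbb{R}, \tau_{\text{usual}}), \qquad x \mapsto x,
\]
is a continuous bijective group homomorphism (every map out of a discrete space is continuous), but its inverse is not continuous, so the two topological groups are not isomorphic even though the underlying groups coincide.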
Examples
Every group can be trivially made into a topological group by considering it with the discrete topology; such groups are called discrete groups.
In this sense, the theory of topological groups subsumes that of ordinary groups.
The indiscrete topology (i.e. the trivial topology) also makes every group into a topological group.
The real numbers, with the usual topology form a topological group under addition.
Euclidean -space is also a topological group under addition, and more generally, every topological vector space forms an (abelian) topological group.
Some other examples of abelian topological groups are the circle group , or the torus for any natural number .
The classical groups are important examples of non-abelian topological groups. For instance, the general linear group of all invertible -by- matrices with real entries can be viewed as a topological group with the topology defined by viewing as a subspace of Euclidean space .
Another classical group is the orthogonal group , the group of all linear maps from to itself that preserve the length of all vectors.
The orthogonal group is compact as a topological space. Much of Euclidean geometry can be viewed as studying the structure of the orthogonal group, or the closely related group of isometries of .
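Concretely, in standard matrix notation these groups can be written as
\[
\operatorname{GL}(n, \mathbb{R}) = \{ A \in \mathbb{R}^{n \times n} : \det A \neq 0 \}, \qquad
\operatorname{O}(n) = \{ A \in \mathbb{R}^{n \times n} : A^{\mathsf{T}} A = I \};
\]
the first is an open subset of $\mathbb{R}^{n^2}$ (the determinant is continuous), while the second is closed and bounded in $\mathbb{R}^{n^2}$, hence compact by the Heine–Borel theorem.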
The groups mentioned so far are all Lie groups, meaning that they are smooth manifolds in such a way that the group operations are smooth, not just continuous.
Lie groups are the best-understood topological groups; many questions about Lie groups can be converted to purely algebraic questions about Lie algebras and then solved.
An example of a topological group that is not a Lie group is the additive group of rational numbers, with the topology inherited from .
This is a countable space, and it does not have the discrete topology.
An important example for number theory is the group of p-adic integers, for a prime number , meaning the inverse limit of the finite groups as n goes to infinity.
The group is well behaved in that it is compact (in fact, homeomorphic to the Cantor set), but it differs from (real) Lie groups in that it is totally disconnected.
More generally, there is a theory of p-adic Lie groups, including compact groups such as as well as locally compact groups such as , where is the locally compact field of p-adic numbers.
The group is a pro-finite group; it is isomorphic to a subgroup of the product in such a way that its topology is induced by the product topology, where the finite groups are given the discrete topology.
Another large class of pro-finite groups important in number theory are absolute Galois groups.
Some topological groups can be viewed as infinite dimensional Lie groups; this phrase is best understood informally, to include several different families of examples.
For example, a topological vector space, such as a Banach space or Hilbert space, is an abelian topological group under addition. Some other infinite-dimensional groups that have been studied, with varying degrees of success, are loop groups, Kac–Moody groups, Diffeomorphism groups, homeomorphism groups, and gauge groups.
In every Banach algebra with multiplicative identity, the set of invertible elements forms a topological group under multiplication.
For example, the group of invertible bounded operators on a Hilbert space arises this way.
Properties
Translation invariance
Every topological group's topology is translation invariant, which by definition means that for any $a \in G$, left or right multiplication by this element yields a homeomorphism of $G$ onto itself.
Consequently, for any $a \in G$ and any subset $S \subseteq G$, the subset $S$ is open (resp. closed) in $G$ if and only if this is true of its left translation $aS := \{as : s \in S\}$ and right translation $Sa := \{sa : s \in S\}$.
If $\mathcal{N}$ is a neighborhood basis of the identity element in a topological group $G$ then for all $x \in G$,
$x\mathcal{N} := \{xN : N \in \mathcal{N}\}$
is a neighborhood basis of $x$ in $G$.
In particular, any group topology on a topological group is completely determined by any neighborhood basis at the identity element.
If $S$ is any subset of $G$ and $U$ is an open subset of $G$, then $SU := \{su : s \in S,\ u \in U\}$ is an open subset of $G$.
Symmetric neighborhoods
The inversion operation on a topological group is a homeomorphism from to itself.
A subset is said to be symmetric if where
The closure of every symmetric set in a commutative topological group is symmetric.
If is any subset of a commutative topological group , then the following sets are also symmetric: , , and .
For any neighborhood in a commutative topological group of the identity element, there exists a symmetric neighborhood of the identity element such that , where note that is necessarily a symmetric neighborhood of the identity element.
Thus every topological group has a neighborhood basis at the identity element consisting of symmetric sets.
If is a locally compact commutative group, then for any neighborhood in of the identity element, there exists a symmetric relatively compact neighborhood of the identity element such that (where is symmetric as well).
Uniform space
Every topological group can be viewed as a uniform space in two ways; the left uniformity turns all left multiplications into uniformly continuous maps while the right uniformity turns all right multiplications into uniformly continuous maps.
If is not abelian, then these two need not coincide.
The uniform structures allow one to talk about notions such as completeness, uniform continuity and uniform convergence on topological groups.
Separation properties
If is an open subset of a commutative topological group and contains a compact set , then there exists a neighborhood of the identity element such that .
As a uniform space, every commutative topological group is completely regular.
Consequently, for a multiplicative topological group with identity element 1, the following are equivalent:
is a T0-space (Kolmogorov);
is a T2-space (Hausdorff);
is a T3 (Tychonoff);
is closed in ;
, where is a neighborhood basis of the identity element in ;
for any such that there exists a neighborhood in of the identity element such that
A subgroup of a commutative topological group is discrete if and only if it has an isolated point.
If is not Hausdorff, then one can obtain a Hausdorff group by passing to the quotient group , where is the closure of the identity.
This is equivalent to taking the Kolmogorov quotient of .
Metrisability
Let $G$ be a topological group. As with any topological space, we say that $G$ is metrisable if and only if there exists a metric $d$ on $G$ which induces the same topology on $G$. A metric $d$ on $G$ is called
left-invariant (resp. right-invariant) if and only if $d(zx, zy) = d(x, y)$ (resp. $d(xz, yz) = d(x, y)$) for all $x, y, z \in G$ (equivalently, $d$ is left-invariant just in case the map $x \mapsto zx$ is an isometry from $(G, d)$ to itself for each $z \in G$).
proper if and only if all open balls, $B(x, r)$ for $r > 0$, are pre-compact.
The Birkhoff–Kakutani theorem (named after mathematicians Garrett Birkhoff and Shizuo Kakutani) states that the following three conditions on a topological group are equivalent:
is (Hausdorff and) first countable (equivalently: the identity element 1 is closed in , and there is a countable basis of neighborhoods for 1 in ).
is metrisable (as a topological space).
There is a left-invariant metric on that induces the given topology on .
There is a right-invariant metric on that induces the given topology on .
Furthermore, the following are equivalent for any topological group :
is a second countable locally compact (Hausdorff) space.
is a Polish, locally compact (Hausdorff) space.
is properly metrisable (as a topological space).
There is a left-invariant, proper metric on that induces the given topology on .
Note: As with the rest of the article, we assume here a Hausdorff topology.
The implications 4 ⇒ 3 ⇒ 2 ⇒ 1 hold in any topological space. In particular 3 ⇒ 2 holds, since in particular any properly metrisable space is a countable union of compact metrisable, and thus separable (cf. properties of compact metric spaces), subsets.
The non-trivial implication 1 ⇒ 4 was first proved by Raimond Struble in 1974. An alternative approach was given by Uffe Haagerup and Agata Przybyszewska in 2006,
the idea of which is as follows:
One relies on the construction of a left-invariant metric $d$, as in the case of first-countable spaces. By local compactness, closed balls of sufficiently small radii are compact, and by normalising we can assume this holds for radius 1. Closing the open ball $U$ of radius 1 under multiplication yields a clopen subgroup $H$ of $G$, on which the metric $d$ is proper. Since $H$ is open and $G$ is second countable, the subgroup $H$ has at most countably many cosets. One now uses this sequence of cosets and the metric on $H$ to construct a proper metric on $G$.
Subgroups
Every subgroup of a topological group is itself a topological group when given the subspace topology.
Every open subgroup is also closed in , since the complement of is the open set given by the union of open sets for .
If is a subgroup of then the closure of is also a subgroup.
Likewise, if is a normal subgroup of , the closure of is normal in .
Quotients and normal subgroups
If is a subgroup of , the set of left cosets with the quotient topology is called a homogeneous space for .
The quotient map is always open.
For example, for a positive integer , the sphere is a homogeneous space for the rotation group in , with .
A homogeneous space is Hausdorff if and only if is closed in .
Partly for this reason, it is natural to concentrate on closed subgroups when studying topological groups.
If is a normal subgroup of , then the quotient group becomes a topological group when given the quotient topology.
It is Hausdorff if and only if is closed in .
For example, the quotient group is isomorphic to the circle group .
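Written explicitly (with the exponential map supplied here for concreteness), this example is the map
\[
\mathbb{R} / \mathbb{Z} \;\cong\; S^1, \qquad t + \mathbb{Z} \;\mapsto\; e^{2\pi i t},
\]
which is an isomorphism of topological groups: it is a continuous bijective homomorphism from a compact space to a Hausdorff space, hence a homeomorphism.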
In any topological group, the identity component (i.e., the connected component containing the identity element) is a closed normal subgroup.
If is the identity component and a is any point of , then the left coset is the component of containing a.
So the collection of all left cosets (or right cosets) of in is equal to the collection of all components of .
It follows that the quotient group is totally disconnected.
Closure and compactness
In any commutative topological group, the product (assuming the group is multiplicative) of a compact set and a closed set is a closed set.
Furthermore, for any subsets and of , .
If is a subgroup of a commutative topological group and if is a neighborhood in of the identity element such that is closed, then is closed.
Every discrete subgroup of a Hausdorff commutative topological group is closed.
Isomorphism theorems
The isomorphism theorems from ordinary group theory are not always true in the topological setting.
This is because a bijective homomorphism need not be an isomorphism of topological groups.
For example, a naive version of the first isomorphism theorem is false for topological groups: if is a morphism of topological groups (that is, a continuous homomorphism), it is not necessarily true that the induced homomorphism is an isomorphism of topological groups; it will be a bijective, continuous homomorphism, but it will not necessarily be a homeomorphism. In other words, it will not necessarily admit an inverse in the category of topological groups.
There is a version of the first isomorphism theorem for topological groups, which may be stated as follows: if is a continuous homomorphism, then the induced homomorphism from to is an isomorphism if and only if the map is open onto its image.
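In symbols, with $f : G \to H$ a continuous homomorphism of topological groups, this version of the first isomorphism theorem says that the induced map
\[
\bar{f} : G / \ker f \;\longrightarrow\; \operatorname{im} f, \qquad g \ker f \;\mapsto\; f(g),
\]
is an isomorphism of topological groups if and only if $f$ is open onto its image (with $\operatorname{im} f$ given the subspace topology from $H$); it is always a continuous bijective homomorphism.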
The third isomorphism theorem, however, is true more or less verbatim for topological groups, as one may easily check.
Hilbert's fifth problem
There are several strong results on the relation between topological groups and Lie groups.
First, every continuous homomorphism of Lie groups is smooth.
It follows that a topological group has a unique structure of a Lie group if one exists.
Also, Cartan's theorem says that every closed subgroup of a Lie group is a Lie subgroup, in particular a smooth submanifold.
Hilbert's fifth problem asked whether a topological group that is a topological manifold must be a Lie group.
In other words, does have the structure of a smooth manifold, making the group operations smooth?
As shown by Andrew Gleason, Deane Montgomery, and Leo Zippin, the answer to this problem is yes.
In fact, has a real analytic structure.
Using the smooth structure, one can define the Lie algebra of , an object of linear algebra that determines a connected group up to covering spaces.
As a result, the solution to Hilbert's fifth problem reduces the classification of topological groups that are topological manifolds to an algebraic problem, albeit a complicated problem in general.
The theorem also has consequences for broader classes of topological groups. First, every compact group (understood to be Hausdorff) is an inverse limit of compact Lie groups.
(One important case is an inverse limit of finite groups, called a profinite group. For example, the group of p-adic integers and the absolute Galois group of a field are profinite groups.)
Furthermore, every connected locally compact group is an inverse limit of connected Lie groups.
At the other extreme, a totally disconnected locally compact group always contains a compact open subgroup, which is necessarily a profinite group.
(For example, the locally compact group of p-adic numbers contains the compact open subgroup of p-adic integers, which is the inverse limit of the finite groups $\mathbb{Z}/p^n\mathbb{Z}$ as $n$ goes to infinity.)
Representations of compact or locally compact groups
An action of a topological group on a topological space X is a group action of on X such that the corresponding function is continuous.
Likewise, a representation of a topological group on a real or complex topological vector space V is a continuous action of on V such that for each , the map from V to itself is linear.
Group actions and representation theory are particularly well understood for compact groups, generalizing what happens for finite groups.
For example, every finite-dimensional (real or complex) representation of a compact group is a direct sum of irreducible representations.
An infinite-dimensional unitary representation of a compact group can be decomposed as a Hilbert-space direct sum of irreducible representations, which are all finite-dimensional; this is part of the Peter–Weyl theorem.
For example, the theory of Fourier series describes the decomposition of the unitary representation of the circle group on the complex Hilbert space .
The irreducible representations of $S^1$ are all 1-dimensional, of the form $z \mapsto z^n$ for integers $n$ (where $S^1$ is viewed as a subgroup of the multiplicative group $\mathbb{C}^*$).
Each of these representations occurs with multiplicity 1 in .
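Concretely, writing the circle group as $S^1 = \{ z \in \mathbb{C} : |z| = 1 \}$, its irreducible unitary representations and the resulting Hilbert-space decomposition can be written as
\[
\chi_n : S^1 \to \mathbb{C}^*, \quad \chi_n(z) = z^n \ (n \in \mathbb{Z}), \qquad
L^2(S^1) \;=\; \widehat{\bigoplus_{n \in \mathbb{Z}}} \, \mathbb{C}\, e^{in\theta},
\]
so that every $f \in L^2(S^1)$ has a Fourier expansion $f(\theta) = \sum_{n \in \mathbb{Z}} c_n e^{in\theta}$, with each character $\chi_n$ occurring with multiplicity 1, as stated above.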
The irreducible representations of all compact connected Lie groups have been classified.
In particular, the character of each irreducible representation is given by the Weyl character formula.
More generally, locally compact groups have a rich theory of harmonic analysis, because they admit a natural notion of measure and integral, given by the Haar measure.
Every unitary representation of a locally compact group can be described as a direct integral of irreducible unitary representations.
(The decomposition is essentially unique if is of Type I, which includes the most important examples such as abelian groups and semisimple Lie groups.)
A basic example is the Fourier transform, which decomposes the action of the additive group on the Hilbert space as a direct integral of the irreducible unitary representations of .
The irreducible unitary representations of are all 1-dimensional, of the form for .
The irreducible unitary representations of a locally compact group may be infinite-dimensional.
A major goal of representation theory, related to the Langlands classification of admissible representations, is to find the unitary dual (the space of all irreducible unitary representations) for the semisimple Lie groups.
The unitary dual is known in many cases such as $SL(2,\mathbb{R})$, but not all.
For a locally compact abelian group $G$, every irreducible unitary representation has dimension 1.
In this case, the unitary dual is a group, in fact another locally compact abelian group.
Pontryagin duality states that for a locally compact abelian group $G$, the dual of the dual group $\widehat{G}$ is the original group $G$.
For example, the dual group of the integers $\mathbb{Z}$ is the circle group $S^1$, while the group $\mathbb{R}$ of real numbers is isomorphic to its own dual.
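In the usual notation for the dual group (writing $\widehat{G}$ for the group of continuous characters of $G$), these statements read
$$\widehat{\mathbb{Z}} \cong S^1, \qquad \widehat{S^1} \cong \mathbb{Z}, \qquad \widehat{\mathbb{R}} \cong \mathbb{R}, \qquad \widehat{\widehat{G}} \cong G,$$
the last isomorphism being the duality theorem itself.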
Every locally compact group $G$ has a good supply of irreducible unitary representations; for example, enough representations to distinguish the points of $G$ (the Gelfand–Raikov theorem).
By contrast, representation theory for topological groups that are not locally compact has so far been developed only in special situations, and it may not be reasonable to expect a general theory.
For example, there are many abelian Banach–Lie groups for which every representation on Hilbert space is trivial.
Homotopy theory of topological groups
Topological groups are special among all topological spaces, even in terms of their homotopy type.
One basic point is that a topological group $G$ determines a path-connected topological space, the classifying space $BG$ (which classifies principal $G$-bundles over topological spaces, under mild hypotheses).
The group $G$ is isomorphic in the homotopy category to the loop space of $BG$; that implies various restrictions on the homotopy type of $G$.
Some of these restrictions hold in the broader context of H-spaces.
For example, the fundamental group of a topological group is abelian.
(More generally, the Whitehead product on the homotopy groups of $G$ is zero.)
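One standard way to see this, granting the identification of the group with a loop space: since $G \simeq \Omega BG$ in the homotopy category,
$$\pi_n(G) \cong \pi_{n+1}(BG) \quad \text{for all } n \geq 0,$$
and in particular $\pi_1(G) \cong \pi_2(BG)$ is abelian, because all higher homotopy groups are abelian.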
Also, for any field k, the cohomology ring $H^*(G, k)$ has the structure of a Hopf algebra.
In view of structure theorems on Hopf algebras by Heinz Hopf and Armand Borel, this puts strong restrictions on the possible cohomology rings of topological groups.
In particular, if $G$ is a path-connected topological group whose rational cohomology ring is finite-dimensional in each degree, then this ring must be a free graded-commutative algebra over $\mathbb{Q}$, that is, the tensor product of a polynomial ring on generators of even degree with an exterior algebra on generators of odd degree.
In particular, for a connected Lie group $G$, the rational cohomology ring of $G$ is an exterior algebra on generators of odd degree.
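For example (standard computations for classical groups, quoted here for illustration),
$$H^*(SU(2); \mathbb{Q}) \cong \Lambda(x_3), \qquad H^*(U(n); \mathbb{Q}) \cong \Lambda(x_1, x_3, \ldots, x_{2n-1}),$$
exterior algebras on generators $x_i$ of odd degree $i$.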
Moreover, a connected Lie group $G$ has a maximal compact subgroup K, which is unique up to conjugation, and the inclusion of K into $G$ is a homotopy equivalence.
So describing the homotopy types of Lie groups reduces to the case of compact Lie groups.
For example, the maximal compact subgroup of $SL(2,\mathbb{R})$ is the circle group $SO(2)$, and the homogeneous space $SL(2,\mathbb{R})/SO(2)$ can be identified with the hyperbolic plane.
Since the hyperbolic plane is contractible, the inclusion of the circle group $SO(2)$ into $SL(2,\mathbb{R})$ is a homotopy equivalence.
Finally, compact connected Lie groups have been classified by Wilhelm Killing, Élie Cartan, and Hermann Weyl.
As a result, there is an essentially complete description of the possible homotopy types of Lie groups.
For example, a compact connected Lie group of dimension at most 3 is either a torus, the group SU(2) (diffeomorphic to the 3-sphere $S^3$), or its quotient group $SO(3) \cong SU(2)/\{\pm 1\}$ (diffeomorphic to $\mathbb{RP}^3$).
Complete topological group
Information about convergence of nets and filters, such as definitions and properties, can be found in the article about filters in topology.
Canonical uniformity on a commutative topological group
This article will henceforth assume that any topological group that we consider is an additive commutative topological group with identity element $0$.
The diagonal of is the set
and for any containing the canonical entourage or canonical vicinities around is the set
For a topological group the canonical uniformity on is the uniform structure induced by the set of all canonical entourages as ranges over all neighborhoods of in
That is, it is the upward closure of the following prefilter on
where this prefilter forms what is known as a base of entourages of the canonical uniformity.
For a commutative additive group a fundamental system of entourages is called a translation-invariant uniformity if for every if and only if for all A uniformity is called translation-invariant if it has a base of entourages that is translation-invariant.
The canonical uniformity on any commutative topological group is translation-invariant.
The same canonical uniformity would result by using a neighborhood basis of the origin rather than the filter of all neighborhoods of the origin.
Every entourage contains the diagonal because
If is symmetric (that is, ) then is symmetric (meaning that ) and
The topology induced on by the canonical uniformity is the same as the topology that started with (that is, it is ).
Cauchy prefilters and nets
The general theory of uniform spaces has its own definition of a "Cauchy prefilter" and "Cauchy net." For the canonical uniformity on these reduces down to the definition described below.
Suppose is a net in and is a net in Make into a directed set by declaring if and only if Then denotes the product net. If then the image of this net under the addition map denotes the sum of these two nets:
and similarly their difference is defined to be the image of the product net under the subtraction map:
A net $x_\bullet = (x_i)_{i \in I}$ in an additive topological group X is called a Cauchy net if
$(x_i - x_j)_{(i,j) \in I \times I} \to 0$ in X, or equivalently, if for every neighborhood N of $0$ in X there exists some index $i_0 \in I$ such that $x_i - x_j \in N$
for all indices $i, j \geq i_0$.
A Cauchy sequence is a Cauchy net that is a sequence.
If B is a subset of an additive group X and N is a set containing $0$, then B is said to be an N-small set or small of order N if $B - B \subseteq N$.
A prefilter on an additive topological group called a Cauchy prefilter if it satisfies any of the following equivalent conditions:
in where is a prefilter.
in where is a prefilter equivalent to
For every neighborhood of in contains some -small set (that is, there exists some such that ).
and if is commutative then also:
For every neighborhood of in there exists some and some such that
It suffices to check any of the above condition for any given neighborhood basis of in
Suppose is a prefilter on a commutative topological group and Then in if and only if and is Cauchy.
Complete commutative topological group
Recall that for any a prefilter on is necessarily a subset of ; that is,
A subset of a topological group is called a complete subset if it satisfies any of the following equivalent conditions:
Every Cauchy prefilter on converges to at least one point of
If is Hausdorff then every prefilter on will converge to at most one point of But if is not Hausdorff then a prefilter may converge to multiple points in The same is true for nets.
Every Cauchy net in converges to at least one point of ;
Every Cauchy filter on converges to at least one point of
is a complete uniform space (under the point-set topology definition of "complete uniform space") when is endowed with the uniformity induced on it by the canonical uniformity of ;
A subset is called a sequentially complete subset if every Cauchy sequence in (or equivalently, every elementary Cauchy filter/prefilter on ) converges to at least one point of
Importantly, convergence outside of is allowed: If is not Hausdorff and if every Cauchy prefilter on converges to some point of then will be complete even if some or all Cauchy prefilters on also converge to point(s) in the complement In short, there is no requirement that these Cauchy prefilters on converge only to points in The same can be said of the convergence of Cauchy nets in
As a consequence, if a commutative topological group is not Hausdorff, then every subset of the closure of say is complete (since it is clearly compact and every compact set is necessarily complete). So in particular, if (for example, if it is a singleton set such as ) then would be complete even though every Cauchy net in (and every Cauchy prefilter on ) converges to every point in (including those points in that are not in ).
This example also shows that complete subsets (indeed, even compact subsets) of a non-Hausdorff space may fail to be closed (for example, if then is closed if and only if ).
A commutative topological group is called a complete group if any of the following equivalent conditions hold:
is complete as a subset of itself.
Every Cauchy net in converges to at least one point of
There exists a neighborhood of in that is also a complete subset of
This implies that every locally compact commutative topological group is complete.
When endowed with its canonical uniformity, becomes a complete uniform space.
In the general theory of uniform spaces, a uniform space is called a complete uniform space if each Cauchy filter in converges in to some point of
A topological group is called sequentially complete if it is a sequentially complete subset of itself.
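A concrete illustration (a standard example, not specific to this article's notation): the additive group $\mathbb{Q}$ with its usual topology is not complete, since the sequence of decimal truncations of $\sqrt{2}$,
$$x_1 = 1.4,\quad x_2 = 1.41,\quad x_3 = 1.414,\ \ldots, \qquad x_m - x_n \in \left(-10^{-n},\, 10^{-n}\right) \text{ for } m \geq n,$$
is a Cauchy sequence in $\mathbb{Q}$ with no limit in $\mathbb{Q}$; completing $\mathbb{Q}$ with respect to this uniformity yields $\mathbb{R}$.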
Neighborhood basis: Suppose is a completion of a commutative topological group with and that is a neighborhood base of the origin in Then the family of sets
is a neighborhood basis at the origin in
Let and be topological groups, and be a map. Then is uniformly continuous if for every neighborhood of the origin in there exists a neighborhood of the origin in such that for all if then
Generalizations
Various generalizations of topological groups can be obtained by weakening the continuity conditions:
A semitopological group is a group $G$ with a topology such that for each $c \in G$ the two functions $G \to G$ defined by $x \mapsto xc$ and $x \mapsto cx$ are continuous.
A quasitopological group is a semitopological group in which the function mapping elements to their inverses is also continuous.
A paratopological group is a group with a topology such that the group operation is continuous.
See also
Notes
Citations
References
Lie groups
Fourier analysis | Topological group | [
"Mathematics"
] | 5,994 | [
"Lie groups",
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Algebraic structures",
"Topological groups"
] |
42,373 | https://en.wikipedia.org/wiki/Logarithmic%20spiral | A logarithmic spiral, equiangular spiral, or growth spiral is a self-similar spiral curve that often appears in nature. The first to describe a logarithmic spiral was Albrecht Dürer (1525) who called it an "eternal line" ("ewige Linie"). More than a century later, the curve was discussed by Descartes (1638), and later extensively investigated by Jacob Bernoulli, who called it Spira mirabilis, "the marvelous spiral".
The logarithmic spiral is distinct from the Archimedean spiral in that the distances between the turnings of a logarithmic spiral increase in a geometric progression, whereas for an Archimedean spiral these distances are constant.
Definition
In polar coordinates $(r, \varphi)$ the logarithmic spiral can be written as
$r = a e^{k\varphi}$, $\varphi \in \mathbb{R}$, or
$\varphi = \tfrac{1}{k} \ln \tfrac{r}{a}$, with $e$ being the base of natural logarithms, and $a > 0$, $k \neq 0$ being real constants.
In Cartesian coordinates
The logarithmic spiral with the polar equation $r = a e^{k\varphi}$
can be represented in Cartesian coordinates by $x = r\cos\varphi = a e^{k\varphi}\cos\varphi$, $y = r\sin\varphi = a e^{k\varphi}\sin\varphi$.
In the complex plane ($z = x + iy$, $e^{i\varphi} = \cos\varphi + i\sin\varphi$): $z(\varphi) = a e^{(k+i)\varphi}$.
Spira mirabilis and Jacob Bernoulli
Spira mirabilis, Latin for "miraculous spiral", is another name for the logarithmic spiral. Although this curve had already been named by other mathematicians, the specific name ("miraculous" or "marvelous" spiral) was given to this curve by Jacob Bernoulli, because he was fascinated by one of its unique mathematical properties: the size of the spiral increases but its shape is unaltered with each successive curve, a property known as self-similarity. Possibly as a result of this unique property, the spira mirabilis has evolved in nature, appearing in certain growing forms such as nautilus shells and sunflower heads. Jacob Bernoulli wanted such a spiral engraved on his headstone along with the phrase "Eadem mutata resurgo" ("Although changed, I shall arise the same."), but, by error, an Archimedean spiral was placed there instead.
Properties
The logarithmic spiral has the following properties (see Spiral):
Pitch angle: $\tan\alpha = k$, with pitch angle $\alpha$ (see diagram and animation). (In case of $k = 0$ the angle $\alpha$ would be 0 and the curve a circle with radius $a$.)
Curvature: $\kappa = \dfrac{1}{r\sqrt{1+k^2}} = \dfrac{\cos\alpha}{r}$
Arc length: $L = \dfrac{\sqrt{1+k^2}}{k}\bigl(r(\varphi_2) - r(\varphi_1)\bigr)$. Especially: the arc length from a point $P$ to the origin is $\dfrac{r(P)\sqrt{1+k^2}}{k} = \dfrac{r(P)}{\sin\alpha}$, if $k > 0$. This property was first realized by Evangelista Torricelli even before calculus had been invented.
Sector area: $A = \dfrac{r(\varphi_2)^2 - r(\varphi_1)^2}{4k}$
Inversion: Circle inversion ($r \mapsto 1/r$) maps the logarithmic spiral $r = a e^{k\varphi}$ onto the logarithmic spiral $r = \tfrac{1}{a} e^{-k\varphi}$
Rotating, scaling: Rotating the spiral by angle $\varphi_0$ yields the spiral $r = a e^{-k\varphi_0} e^{k\varphi}$, which is the original spiral uniformly scaled (at the origin) by $e^{-k\varphi_0}$. Scaling by $e^{2\pi k}$ gives the same curve.
Self-similarity: A result of the previous property: A scaled logarithmic spiral is congruent (by rotation) to the original curve. Example: The diagram shows spirals with slope angle and . Hence they are all scaled copies of the red one. But they can also be generated by rotating the red one by angles resp.. All spirals have no points in common (see property on complex exponential function).
Relation to other curves: Logarithmic spirals are congruent to their own involutes, evolutes, and the pedal curves based on their centers.
Complex exponential function: The exponential function exactly maps all lines not parallel with the real or imaginary axis in the complex plane, to all logarithmic spirals in the complex plane with centre at $0$: The pitch angle of the logarithmic spiral is the angle between the line and the imaginary axis.
Special cases and approximations
The golden spiral is a logarithmic spiral that grows outward by a factor of the golden ratio for every 90 degrees of rotation (pitch angle about 17.03239 degrees). It can be approximated by a "Fibonacci spiral", made of a sequence of quarter circles with radii proportional to Fibonacci numbers.
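The quoted growth factor and pitch angle can be checked numerically. The following short Python sketch is illustrative only (the function and variable names are ours, not from any standard library):

```python
import math

GOLDEN_RATIO = (1 + math.sqrt(5)) / 2        # about 1.618, the growth per quarter turn

# For r = a * exp(k * phi), growth by the golden ratio every 90 degrees (pi/2 radians)
# means exp(k * pi/2) = GOLDEN_RATIO, so:
k = math.log(GOLDEN_RATIO) / (math.pi / 2)   # about 0.3063
pitch_angle = math.degrees(math.atan(k))     # about 17.03 degrees, as quoted above

def spiral_point(a, k, phi):
    """Cartesian coordinates of the point of r = a*exp(k*phi) at polar angle phi."""
    r = a * math.exp(k * phi)
    return (r * math.cos(phi), r * math.sin(phi))

if __name__ == "__main__":
    print(f"k = {k:.4f}, pitch angle = {pitch_angle:.5f} degrees")
    # A few points of a golden spiral with a = 1, sampled every quarter turn;
    # each radius is the previous one multiplied by the golden ratio.
    for i in range(5):
        print(spiral_point(1.0, k, i * math.pi / 2))
```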
In nature
In several natural phenomena one may find curves that are close to being logarithmic spirals. Here follow some examples and reasons:
The approach of a hawk to its prey in classical pursuit, assuming the prey travels in a straight line. Their sharpest view is at an angle to their direction of flight; this angle is the same as the spiral's pitch.
The approach of an insect to a light source. They are used to having the light source at a constant angle to their flight path. Usually the Sun (or Moon for nocturnal species) is the only light source and flying that way will result in a practically straight line. By the same token, a rhumb line approximates a logarithmic spiral close to a pole.
The arms of spiral galaxies. The Milky Way galaxy has several spiral arms, each of which is roughly a logarithmic spiral with pitch of about 12 degrees. However, although spiral galaxies have often been modeled as logarithmic spirals, Archimedean spirals, or hyperbolic spirals, their pitch angles vary with distance from the galactic center, unlike logarithmic spirals (for which this angle does not vary), and also at variance with the other mathematical spirals used to model them.
The nerves of the cornea (that is, corneal nerves of the subepithelial layer terminate near the superficial epithelial layer of the cornea in a logarithmic spiral pattern).
The bands of tropical cyclones, such as hurricanes.
Many biological structures including the shells of mollusks. In these cases, the reason may be construction from expanding similar shapes, as is the case for polygonal figures.
Logarithmic spiral beaches can form as the result of wave refraction and diffraction by the coast. Half Moon Bay (California) is an example of such a type of beach.
In engineering applications
Logarithmic spiral antennas are frequency-independent antennas, that is, antennas whose radiation pattern, impedance and polarization remain largely unmodified over a wide bandwidth.
When manufacturing mechanisms by subtractive fabrication machines (such as laser cutters), there can be a loss of precision when the mechanism is fabricated on a different machine due to the difference of material removed (that is, the kerf) by each machine in the cutting process. To adjust for this variation of kerf, the self-similar property of the logarithmic spiral has been used to design a kerf cancelling mechanism for laser cutters.
Logarithmic spiral bevel gears are a type of spiral bevel gear whose gear tooth centerline is a logarithmic spiral. A logarithmic spiral has the advantage of providing equal angles between the tooth centerline and the radial lines, which gives the meshing transmission more stability.
In rock climbing, spring-loaded camming devices are made from metal cams whose outer gripping surfaces are shaped as arcs of logarithmic spirals. When the device is inserted into a rock crack, the rotation of these cams expands their combined width to match the width of the crack, while maintaining a constant angle against the surface of the rock (relative to the center of the spiral, where force is applied). The pitch angle of the spiral is chosen to optimize the friction of the device against the rock.
See also
Archimedean spiral
Epispiral
List of spirals
Mice problem, a geometric problem asking for the path followed by mice chasing one another whose solution is a logarithmic spiral
Tait–Kneser theorem
References
Jim Wilson, Equiangular Spiral (or Logarithmic Spiral) and Its Related Curves, University of Georgia (1999)
Alexander Bogomolny, Spira Mirabilis - Wonderful Spiral, at cut-the-knot
External links
Spira mirabilis history and math
SpiralZoom.com, an educational website about the science of pattern formation, spirals in nature, and spirals in the mythic imagination.
Online exploration using JSXGraph (JavaScript)
YouTube lecture on Zeno's mice problem and logarithmic spirals
Spirals
Spiral
Spiral
Plane curves | Logarithmic spiral | [
"Mathematics"
] | 1,659 | [
"Logarithms",
"Plane curves",
"Euclidean plane geometry",
"E (mathematical constant)",
"Exponentials",
"Planes (geometry)"
] |
42,400 | https://en.wikipedia.org/wiki/Socialization | In sociology, socialization (also socialisation – see spelling differences) is the process of internalizing the norms and ideologies of society. Socialization encompasses both learning and teaching and is thus "the means by which social and cultural continuity are attained".
Socialization is strongly connected to developmental psychology and behaviourism. Humans need social experiences to learn their culture and to survive.
Socialization essentially represents the whole process of learning throughout the life course and is a central influence on the behavior, beliefs, and actions of adults as well as of children.
Socialization may lead to desirable outcomes—sometimes labeled "moral"—as regards the society where it occurs. Individual views are influenced by the society's consensus and usually tend toward what that society finds acceptable or "normal". Socialization provides only a partial explanation for human beliefs and behaviors, maintaining that agents are not blank slates predetermined by their environment; scientific research provides evidence that people are shaped by both social influences and genes.
Genetic studies have shown that a person's environment interacts with their genotype to influence behavioral outcomes.
It is the process by which individuals learn their own society's culture.
History
Notions of society and the state of nature have existed for centuries. In its earliest usages, socialization was simply the act of socializing or another word for socialism. Socialization as a concept originated concurrently with sociology, as sociology was defined as the treatment of "the specifically social, the process and forms of socialization, as such, in contrast to the interests and contents which find expression in socialization". In particular, socialization consisted of the formation and development of social groups, and also the development of a social state of mind in the individuals who associate. Socialization is thus both a cause and an effect of association. The term was relatively uncommon before 1940, but became popular after World War II, appearing in dictionaries and scholarly works such as the theory of Talcott Parsons.
Stages of moral development
Lawrence Kohlberg studied moral reasoning and developed a theory of how individuals reason situations as right from wrong. The first stage is the pre-conventional stage, where a person (typically children) experience the world in terms of pain and pleasure, with their moral decisions solely reflecting this experience. Second, the conventional stage (typical for adolescents and adults) is characterized by an acceptance of society's conventions concerning right and wrong, even when there are no consequences for obedience or disobedience. Finally, the post-conventional stage (more rarely achieved) occurs if a person moves beyond society's norms to consider abstract ethical principles when making moral decisions.
Stages of psychosocial development
Erik H. Erikson (1902–1994) explained the challenges throughout the life course. The first stage in the life course is infancy, where babies learn trust and mistrust. The second stage is toddlerhood where children around the age of two struggle with the challenge of autonomy versus doubt. In stage three, preschool, children struggle to understand the difference between initiative and guilt. In stage four, pre-adolescence, children learn about industriousness and inferiority. In the fifth stage called adolescence, teenagers experience the challenge of gaining identity versus confusion. The sixth stage, young adulthood, is when young people gain insight into life when dealing with the challenge of intimacy and isolation. In stage seven, or middle adulthood, people experience the challenge of trying to make a difference (versus self-absorption). In the final stage, stage eight or old age, people are still learning about the challenge of integrity and despair. This concept has been further developed by Klaus Hurrelmann and Gudrun Quenzel using the dynamic model of "developmental tasks".
Behaviorism
George Herbert Mead (1863–1931) developed a theory of social behaviorism to explain how social experience develops an individual's self-concept. Mead's central concept is the self: It is composed of self-awareness and self-image. Mead claimed that the self is not there at birth, rather, it is developed with social experience. Since social experience is the exchange of symbols, people tend to find meaning in every action. Seeking meaning leads us to imagine the intention of others. Understanding intention requires imagining the situation from the other's point of view. In effect, others are a mirror in which we can see ourselves. Charles Horton Cooley (1864–1929) coined the term looking glass self, which means self-image based on how we think others see us. According to Mead, the key to developing the self is learning to take the role of the other. With limited social experience, infants can only develop a sense of identity through imitation. Gradually children learn to take the roles of several others. The final stage is the generalized other, which refers to widespread cultural norms and values we use as a reference for evaluating others.
Contradictory evidence to behaviorism
Behaviorism claims that when infants are born they lack social experience or a self. The social pre-wiring hypothesis, on the other hand, offers evidence from scientific study that social behavior is partly inherited and can influence infants and even foetuses. Being "wired to be social" means that infants are not taught that they are social beings, but are born as prepared social beings.
The social pre-wiring hypothesis refers to the ontogeny of social interaction, also informally referred to as being "wired to be social". The theory questions whether there is a propensity to socially oriented action already present before birth. Research in the theory concludes that newborns are born into the world with a unique genetic wiring to be social.
Circumstantial evidence supporting the social pre-wiring hypothesis can be revealed when examining newborns' behavior. Newborns, not even hours after birth, have been found to display a preparedness for social interaction. This preparedness is expressed in ways such as their imitation of facial gestures. This observed behavior cannot be attributed to any current form of socialization or social construction. Rather, newborns most likely inherit to some extent social behavior and identity through genetics.
Principal evidence of this theory is uncovered by examining Twin pregnancies. The main argument is, if there are social behaviors that are inherited and developed before birth, then one should expect twin foetuses to engage in some form of social interaction before they are born. Thus, ten foetuses were analyzed over a period of time using ultrasound techniques. Using kinematic analysis, the results of the experiment were that the twin foetuses would interact with each other for longer periods and more often as the pregnancies went on. Researchers were able to conclude that the performance of movements between the co-twins was not accidental but specifically aimed.
The social pre-wiring hypothesis was proved correct, "The central advance of this study is the demonstration that 'social actions' are already performed in the second trimester of gestation. Starting from the 14th week of gestation twin foetuses plan and execute movements specifically aimed at the co-twin. These findings force us to predate the emergence of social behavior: when the context enables it, as in the case of twin foetuses, other-directed actions are not only possible but predominant over self-directed actions."
Types
Primary socialization
Primary socialization occurs when a child learns the attitudes, values, and actions appropriate to individuals as members of a particular culture. Primary socialization for a child is very important because it sets the groundwork for all future socialization. It is mainly influenced by immediate family and friends. For example, if a child's mother expresses a discriminatory opinion about a minority or majority group, then that child may think this behavior is acceptable and could continue to have this opinion about that minority or majority group.
Secondary socialization
Secondary socialization refers to the process of learning what is the appropriate behavior as a member of a smaller group within the larger society. Basically, it involves the behavioral patterns reinforced by socializing agents of society. Secondary socialization takes place outside the home. It is where children and adults learn how to act in a way that is appropriate for the situations they are in. Schools require very different behavior from the home, and children must act according to new rules. New teachers have to act in a way that is different from pupils and learn the new rules from people around them. Secondary socialization is usually associated with teenagers and adults and involves smaller changes than those occurring in primary socialization. Examples of secondary socialization may include entering a new profession or relocating to a new environment or society.
Anticipatory socialization
Anticipatory socialization refers to the processes of socialization in which a person "rehearses" for future positions, occupations, and social relationships. For example, a couple might move in together before getting married in order to try out, or anticipate, what living together will be like. Research by Kenneth J. Levine and Cynthia A. Hoffner identifies parents as the main source of anticipatory socialization in regard to jobs and careers.
Resocialization
Resocialization refers to the process of discarding former behavior-patterns and reflexes while accepting new ones as part of a life transition. This can occur throughout the human life-span. Resocialization can be an intense experience, with individuals experiencing a sharp break with their past, as well as a need to learn and be exposed to radically different norms and values. One common example involves resocialization through a total institution, or "a setting in which people are isolated from the rest of society and manipulated by an administrative staff". Resocialization via total institutions involves a two step process: 1) the staff work to root out a new inmate's individual identity; and 2) the staff attempt to create for the inmate a new identity.
Other examples include the experiences of a young person leaving home to join the military, or of a religious convert internalizing the beliefs and rituals of a new faith. Another example would be the process by which a transsexual person learns to function socially in a dramatically altered gender-role.
Organizational socialization
Organizational socialization is the process whereby an employee learns the knowledge and skills necessary to assume his or her role in an organization. As newcomers become socialized, they learn about the organization and its history, values, jargon, culture, and procedures. Acquired knowledge about new employees' future work-environment affects the way they are able to apply their skills and abilities to their jobs. How actively engaged the employees are in pursuing knowledge affects their socialization process. New employees also learn about their work group, the specific people they will work with on a daily basis, their own role in the organization, the skills needed to do their job, and both formal procedures and informal norms. Socialization functions as a control system in that newcomers learn to internalize and obey organizational values and practices.
Group socialization
Group socialization is the theory that an individual's peer groups, rather than parental figures, become the primary influence on personality and behavior in adulthood. Parental behavior and the home environment has either no effect on the social development of children, or the effect varies significantly between children. Adolescents spend more time with peers than with parents. Therefore, peer groups have stronger correlations with personality development than parental figures do. For example, twin brothers with an identical genetic heritage will differ in personality because they have different groups of friends, not necessarily because their parents raised them differently. Behavioral genetics suggest that up to fifty percent of the variance in adult personality is due to genetic differences. The environment in which a child is raised accounts for only approximately ten percent in the variance of an adult's personality. As much as twenty percent of the variance is due to measurement error. This suggests that only a very small part of an adult's personality is influenced by factors which parents control (i.e. the home environment). Harris grants that while siblings do not have identical experiences in the home environment (making it difficult to associate a definite figure to the variance of personality due to home environments), the variance found by current methods is so low that researchers should look elsewhere to try to account for the remaining variance. Harris also states that developing long-term personality characteristics away from the home environment would be evolutionarily beneficial because future success is more likely to depend on interactions with peers than on interactions with parents and siblings. Also, because of already existing genetic similarities with parents, developing personalities outside of childhood home environments would further diversify individuals, increasing their evolutionary success.
Stages
Individuals and groups change their evaluations of and commitments to each other over time. There is a predictable sequence of stages that occur as an individual transitions through a group: investigation, socialization, maintenance, resocialization, and remembrance. During each stage, the individual and the group evaluate each other, which leads to an increase or decrease in commitment to socialization. This socialization pushes the individual from prospective member to new, full, marginal, and finally ex-member.
Stage 1: Investigation
This stage is marked by a cautious search for information. The individual compares groups in order to determine which one will fulfill their needs (reconnaissance), while the group estimates the value of the potential member (recruitment). The end of this stage is marked by entry to the group, whereby the group asks the individual to join and they accept the offer.
Stage 2: Socialization
Now that the individual has moved from a prospective member to a new member, the recruit must accept the group's culture. At this stage, the individual accepts the group's norms, values, and perspectives (assimilation), and the group may adapt to fit the new member's needs (accommodation). The acceptance transition-point is then reached and the individual becomes a full member. However, this transition can be delayed if the individual or the group reacts negatively. For example, the individual may react cautiously or misinterpret other members' reactions in the belief that they will be treated differently as a newcomer.
Stage 3: Maintenance
During this stage, the individual and the group negotiate what contribution is expected of members (role negotiation). While many members remain in this stage until the end of their membership, some individuals may become dissatisfied with their role in the group or fail to meet the group's expectations (divergence).
Stage 4: Resocialization
If the divergence point is reached, the former full member takes on the role of a marginal member and must be resocialized. There are two possible outcomes of resocialization: the parties resolve their differences and the individual becomes a full member again (convergence), or the group and the individual part ways via expulsion or voluntary exit.
Stage 5: Remembrance
In this stage, former members reminisce about their memories of the group and make sense of their recent departure. If the group reaches a consensus on their reasons for departure, conclusions about the overall experience of the group become part of the group's tradition.
Gender socialization
Henslin contends that "an important part of socialization is the learning of culturally defined gender roles". Gender socialization refers to the learning of behavior and attitudes considered appropriate for a given sex: boys learn to be boys and girls learn to be girls. This "learning" happens by way of many different agents of socialization. The behavior that is seen to be appropriate for each gender is largely determined by societal, cultural, and economic values in a given society. Gender socialization can therefore vary considerably among societies with different values. The family is certainly important in reinforcing gender roles, but so are groups - including friends, peers, school, work, and the mass media. Social groups reinforce gender roles through "countless subtle and not so subtle ways". In peer-group activities, stereotypic gender-roles may also be rejected, renegotiated, or artfully exploited for a variety of purposes.
Carol Gilligan compared the moral development of girls and boys in her theory of gender and moral development. She claimed that boys have a justice perspective - meaning that they rely on formal rules to define right and wrong. Girls, on the other hand, have a care-and-responsibility perspective, where personal relationships are considered when judging a situation. Gilligan also studied the effect of gender on self-esteem. She claimed that society's socialization of females is the reason why girls' self-esteem diminishes as they grow older. Girls struggle to regain their personal strength when moving through adolescence as they have fewer female teachers and most authority figures are men.
As parents are present in a child's development from the beginning, their influence in a child's early socialization is very important, especially in regard to gender roles. Sociologists have identified four ways in which parents socialize gender roles in their children: Shaping gender related attributes through toys and activities, differing their interaction with children based on the sex of the child, serving as primary gender models, and communicating gender ideals and expectations.
Sociologist of gender R.W. Connell contends that socialization theory is "inadequate" for explaining gender, because it presumes a largely consensual process except for a few "deviants", when really most children revolt against pressures to be conventionally gendered; because it cannot explain contradictory "scripts" that come from different socialization agents in the same society, and because it does not account for conflict between the different levels of an individual's gender (and general) identity.
Racial socialization
Racial socialization, or racial-ethnic socialization, has been defined as "the developmental processes by which children acquire the behaviors, perceptions, values, and attitudes of an ethnic group, and come to see themselves and others as members of the group". The existing literature conceptualizes racial socialization as having multiple dimensions. Researchers have identified five dimensions that commonly appear in the racial socialization literature: cultural socialization, preparation for bias, promotion of mistrust, egalitarianism, and other. Cultural socialization, sometimes referred to as "pride development", refers to parenting practices that teach children about their racial history or heritage.
Preparation for bias refers to parenting practices focused on preparing children to be aware of, and cope with, discrimination. Promotion of mistrust refers to the parenting practices of socializing children to be wary of people from other races. Egalitarianism refers to socializing children with the belief that all people are equal and should be treated with common humanity. In the United States, white people are socialized to perceive race as a zero-sum game and a black-white binary.
Oppression socialization
Oppression socialization refers to the process by which "individuals develop understandings of power and political structure, particularly as these inform perceptions of identity, power, and opportunity relative to gender, racialized group membership, and sexuality". This action is a form of political socialization in its relation to power and the persistent compliance of the disadvantaged with their oppression using limited "overt coercion".
Language socialization
Based on comparative research in different societies, and focusing on the role of language in child development, linguistic anthropologists Elinor Ochs and Bambi Schieffelin have developed the theory of language socialization.
They discovered that the processes of enculturation and socialization do not occur apart from the process of language acquisition, but that children acquire language and culture together in what amounts to an integrated process. Members of all societies socialize children both to and through the use of language; acquiring competence in a language, the novice is by the same token socialized into the categories and norms of the culture, while the culture, in turn, provides the norms of the use of language.
Planned socialization
Planned socialization occurs when other people take actions designed to teach or train others. This type of socialization can take on many forms and can occur at any point from infancy onward.
Natural socialization
Natural socialization occurs when infants and youngsters explore, play and discover the social world around them. Natural socialization is easily seen when looking at the young of almost any mammalian species (and some birds).
On the other hand, planned socialization is mostly a human phenomenon; all through history, people have made plans for teaching or training others. Both natural and planned socialization can have good and bad qualities: it is useful to learn the best features of both natural and planned socialization in order to incorporate them into life in a meaningful way.
Political socialization
Socialization produces the economic, social, and political development of any particular country. The nature of the compromise between nature and nurture also determines whether society is good or harmful. Political socialization is described as "the long developmental process by which an infant (even an adult) citizen learns, imbibes and ultimately internalizes the political culture (core political values, beliefs, norms and ideology) of his political system in order to make him a more informed and effective political participant."
A society's political culture is inculcated in its citizens and passed down from one generation to the next as part of the political socialization process. Agents of socialization are thus people, organizations, or institutions that have an impact on how people perceive themselves, behave, or have other orientations. In contemporary democratic government, political parties are the main forces behind political socialization.
Socialization enhances business, trade, and foreign investment globally. Building technology is made easy, is improved and carried out due to the ease with which interaction in interest services and media work can be connected. Citizens must instil in themselves excellent morals, ethics, and values and must preserve human rights or have sound judgment to be able to lead a country to a higher developmental level in order to construct a decent and democratic society for nation-building. Developing nations can transfer agricultural technology and machinery like tractors, harvesters, and agrochemicals to enhance the agricultural sector of the economy through socialization.
Positive socialization
Positive socialization is the type of social learning that is based on pleasurable and exciting experiences. Individual humans tend to like the people who fill their social learning processes with positive motivation, loving care, and rewarding opportunities. Positive socialization occurs when desired behaviors are reinforced with a reward, encouraging the individual to continue exhibiting similar behaviors in the future.
Negative socialization
Negative socialization occurs when socialization agents use punishment, harsh criticisms, or anger to try to "teach us a lesson"; and often we come to dislike both negative socialization and the people who impose it on us. There are all types of mixes of positive and negative socialization, and the more positive social learning experiences we have, the happier we tend to be, especially if we are able to learn useful information that helps us cope well with the challenges of life. A high ratio of negative to positive socialization can make a person unhappy, leading to defeated or pessimistic feelings about life.
Bullying can exemplify negative socialization.
Institutions
In the social sciences, institutions are the structures and mechanisms of social order and cooperation governing the behavior of individuals within a given human collectivity. Institutions are identified with a social purpose and permanence, transcending individual human lives and intentions, and with the making and enforcing of rules governing cooperative human behavior.
Productive processing of reality
From the late 1980s, sociological and psychological theories have been connected with the term socialization. One example of this connection is the theory of Klaus Hurrelmann. In his book Social Structure and Personality Development, he develops the model of productive processing of reality. The core idea is that socialization refers to an individual's personality development. It is the result of the productive processing of interior and exterior realities. Bodily and mental qualities and traits constitute a person's inner reality; the circumstances of the social and physical environment embody the external reality. Reality processing is productive because human beings actively grapple with their lives and attempt to cope with the attendant developmental tasks. The success of such a process depends on the personal and social resources available. Incorporated within all developmental tasks is the necessity to reconcile personal individuation and social integration and so secure the "I-dentity". The process of productive processing of reality is an enduring process throughout the life course.
Oversocialization
The problem of order, or Hobbesian problem, questions the existence of social orders and asks if it is possible to oppose them. Émile Durkheim viewed society as an external force controlling individuals through the imposition of sanctions and codes of law. However, constraints and sanctions also arise internally as feelings of guilt or anxiety.
See also
References
Further reading
Bayley, Robert; Schecter, Sandra R. (2003). Multilingual Matters,
Duff, Patricia A.; Hornberger, Nancy H. (2010). Language Socialization: Encyclopedia of Language and Education, Volume 8. Springer,
Kramsch, Claire (2003). Language Acquisition and Language Socialization: Ecological Perspectives – Advances in Applied Linguistics. Continuum International Publishing Group,
McQuail, Dennis (2005). McQuail's Mass Communication Theory: Fifth Edition, London: Sage.
Mehan, Hugh (1991). Sociological Foundations Supporting the Study of Cultural Diversity. National Center for Research on Cultural Diversity and Second Language Learning.
White, Graham (1977). Socialisation, London: Longman.
Conformity
Deviance (sociology)
Sociological terminology
Majority–minority relations | Socialization | [
"Biology"
] | 5,146 | [
"Deviance (sociology)",
"Behavior",
"Conformity",
"Human behavior"
] |
42,405 | https://en.wikipedia.org/wiki/Ichthyology | Ichthyology is the branch of zoology devoted to the study of fish, including bony fish (Osteichthyes), cartilaginous fish (Chondrichthyes), and jawless fish (Agnatha). According to FishBase, 33,400 species of fish had been described as of October 2016, with approximately 250 new species described each year.
Etymology
The word is derived from the Greek words ἰχθύς, ikhthus, meaning "fish"; and λογία, logia, meaning "to study".
History
The study of fish dates from the Upper Paleolithic Revolution (with the advent of "high culture"). The science of ichthyology was developed in several interconnecting epochs, each with various significant advancements.
The study of fish receives its origins from humans' desire to feed, clothe, and equip themselves with useful implements. According to Michael Barton, a prominent ichthyologist and professor at Centre College, "the earliest ichthyologists were hunters and gatherers who had learned how to obtain the most useful fish, where to obtain them in abundance, and at what times they might be the most available". Early cultures manifested these insights in abstract and identifiable artistic expressions.
1500 BC–40 AD
Informal, scientific descriptions of fish are represented within the Judeo-Christian tradition. The Old Testament laws of kashrut forbade the consumption of fish without scales or appendages. Theologians and ichthyologists believe that the apostle Peter and his contemporaries harvested the fish that are today sold in modern industry along the Sea of Galilee, presently known as Lake Kinneret. These fish include cyprinids of the genera Barbus and Mirogrex, cichlids of the genus Sarotherodon, and Mugil cephalus of the family Mugilidae.
335 BC–80 AD
Aristotle incorporated ichthyology into formal scientific study. Between 333 and 322 BC, he provided the earliest taxonomic classification of fish, accurately describing 117 species of Mediterranean fish. Furthermore, Aristotle documented anatomical and behavioral differences between fish and marine mammals. After his death, some of his pupils continued his ichthyological research. Theophrastus, for example, composed a treatise on amphibious fish. The Romans, although less devoted to science, wrote extensively about fish. Pliny the Elder, a notable Roman naturalist, compiled the ichthyological works of indigenous Greeks, including verifiable and ambiguous peculiarities such as the sawfish and mermaid, respectively. Pliny's documentation was the last significant contribution to ichthyology until the European Renaissance.
European Renaissance
The writings of three 16th-century scholars, Hippolito Salviani, Pierre Belon, and Guillaume Rondelet, signify the conception of modern ichthyology. The investigations of these individuals were based upon actual research in comparison to ancient recitations. This property popularized and emphasized these discoveries. Despite their prominence, Rondelet's De Piscibus Marinis is regarded as the most influential, identifying 244 species of fish.
16th–17th century
The incremental alterations in navigation and shipbuilding throughout the Renaissance marked the commencement of a new epoch in ichthyology. The Renaissance culminated with the era of exploration and colonization, and upon the cosmopolitan interest in navigation came the specialization in naturalism. Georg Marcgrave of Saxony composed the Historia Naturalis Brasiliae in 1648. This document contained a description of 100 species of fish indigenous to the Brazilian coastline. In 1686, John Ray and Francis Willughby collaboratively published Historia Piscium, a scientific manuscript containing 420 species of fish, 178 of these newly discovered. The fish contained within this informative literature were arranged in a provisional system of classification.
The classification used within the Historia Piscium was further developed by Carl Linnaeus, the "father of modern taxonomy". His taxonomic approach became the systematic approach to the study of organisms, including fish. Linnaeus was a professor at the University of Uppsala and an eminent botanist; however, one of his colleagues, Peter Artedi, earned the title "father of ichthyology" through his indispensable advancements. Artedi contributed to Linnaeus's refinement of the principles of taxonomy. Furthermore, he recognized five additional orders of fish: Malacopterygii, Acanthopterygii, Branchiostegi, Chondropterygii, and Plagiuri. Artedi developed standard methods for making counts and measurements of anatomical features that are still in use today. Another associate of Linnaeus, Albertus Seba, was a prosperous pharmacist from Amsterdam. Seba assembled a cabinet, or collection, of fish. He invited Artedi to use this assortment of fish; in 1735, Artedi fell into an Amsterdam canal and drowned at the age of 30.
Linnaeus posthumously published Artedi's manuscripts as Ichthyologia, sive Opera Omnia de Piscibus (1738). His refinement of taxonomy culminated in the development of the binomial nomenclature, which is in use by contemporary ichthyologists. Furthermore, he revised the orders introduced by Artedi, placing significance on pelvic fins. Fish lacking this appendage were placed within the order Apodes; fish having abdominal, thoracic, or jugular pelvic fins were termed Abdominales, Thoracici, and Jugulares, respectively. However, these alterations were not grounded within evolutionary theory. Therefore, over a century was needed for Charles Darwin to provide the intellectual foundation needed to perceive that the degree of similarity in taxonomic features was a consequence of phylogenetic relationships.
Modern era
Close to the dawn of the 19th century, Marcus Elieser Bloch of Berlin and Georges Cuvier of Paris made attempts to consolidate the knowledge of ichthyology. Cuvier summarized all of the available information in his monumental Histoire Naturelle des Poissons. This manuscript was published between 1828 and 1849 in a 22-volume series. This document describes 4,514 species of fish, 2,311 of these new to science. It remains one of the most ambitious treatises of the modern world. Scientific exploration of the Americas advanced knowledge of the remarkable diversity of fish. Charles Alexandre Lesueur was a student of Cuvier. He made a cabinet of fish dwelling within the Great Lakes and Saint Lawrence River regions.
Adventurous individuals such as John James Audubon and Constantine Samuel Rafinesque figure in the faunal documentation of North America. They often traveled with one another. Rafinesque wrote Ichthyologia Ohiensis in 1820. In addition, Louis Agassiz of Switzerland established his reputation through the study of freshwater fish and the first comprehensive treatment of palaeoichthyology, the Recherches sur les poissons fossiles. In the 1840s, Agassiz moved to the United States, where he taught at Harvard University until his death in 1873.
Albert Günther published his Catalogue of the Fish of the British Museum between 1859 and 1870, describing over 6,800 species and mentioning another 1,700. Generally considered one of the most influential ichthyologists, David Starr Jordan wrote 650 articles and books on the subject and served as president of Indiana University and Stanford University.
Modern publications
Organizations
Notable ichthyologists
Members of this list meet one or more of the following criteria: 1) Author of 50 or more fish taxon names, 2) Author of major reference work in ichthyology, 3) Founder of major journal or museum, 4) Person most notable for other reasons who has also worked in ichthyology.
Alexander Emanuel Agassiz
Louis Agassiz
Emperor Akihito of Japan
Gerald R. Allen
Peter Artedi
Herbert R. Axelrod
William O. Ayres, California
Spencer Fullerton Baird
Tarleton Hoffman Bean
Lev Berg, Russia
Henry Bryant Bigelow
Pieter Bleeker, East Indies
Marcus Elieser Bloch
George Albert Boulenger
Jean Cadenat
Pierre Carbonnier
Eugenie Clark
Leonard Compagno
Edward Drinker Cope
Georges Cuvier
Francis Day, India
Francis Buchanan-Hamilton, Scottish
Carl H. Eigenmann
Rosa Smith Eigenmann
William N. Eschmeyer
Barton Warren Evermann
Henry Weed Fowler
Joseph Paul Gaimard
Samuel Garman
Charles Henry Gilbert
Theodore Nicholas Gill
Charles Frédéric Girard
George Brown Goode
Albert Günther
Albert William Herre
Carl L. Hubbs
David Starr Jordan
Maurice Kottelat, Swiss
Bernard Germain de Lacépède
Carl Linnaeus
Seth Eugene Meek
George S. Myers
Joseph S. Nelson, Fishes of the World
John Treadwell Nichols, China, founder of Copeia
John Roxborough Norman
Peter Simon Pallas
Wilhelm Peters
Felipe Poey
Jean René Constant Quoy
Constantine Samuel Rafinesque
John Ernest Randall
Charles Tate Regan
John Richardson
Raúl Adolfo Ringuelet
Eduard Rüppell
Johann Gottlob Schneider
H.M. Smith
J.L.B. Smith
Edwin Chapin Starks
Franz Steindachner
Royal D. Suttkus
Frank Talbot
Shigeho Tanaka
Ethelwynn Trewavas, English
Achille Valenciennes
Johann Julius Walbaum
Gilbert Percy Whitley
Francis Willughby
Stan Wood
William Yarrell
Paleoichthyologists
Hans C. Bjerring
Erik Jarvik
Erik Stensiö
Non-academic ichthyologists
Sakana-kun
See also
Ichthyology terms
Ichthyology and GIS
Meristics
References
Additional references
Bond, Carl E (1996) Biology of Fish. Saunders.
Nelson, Joseph S (2006) Fish of the World. Wiley.
Michael Barton (2007) Bond's Biology of Fish, Third Edition. Julet.
Pauly D, Froese R, Palomares ML and Stergiou KI (2014) Fish on line, Version 3 A guide to learning and teaching ichthyology using the FishBase Information System.
External links
Brian Coad's Dictionary of Ichthyology
Ichthyology Web Resources (archived 13 July 2006)
List of Ichthyologists.
Subfields of zoology | Ichthyology | [
"Biology"
] | 2,017 | [
"Subfields of zoology"
] |
42,416 | https://en.wikipedia.org/wiki/X10%20%28industry%20standard%29 | X10 is a protocol for communication among electronic devices used for home automation (domotics). It primarily uses power line wiring for signaling and control, where the signals involve brief radio frequency bursts representing digital information. A wireless radio-based protocol transport is also defined.
X10 was developed in 1975 by Pico Electronics of Glenrothes, Scotland, in order to allow remote control of home devices and appliances. It was the first general purpose home automation network technology and remains the most widely available.
Although a number of higher-bandwidth alternatives exist, X10 remains popular in the home environment with millions of units in use worldwide, and inexpensive availability of new components.
History
In 1970, a group of engineers started a company in Glenrothes, Scotland called Pico Electronics. The company developed the first single chip calculator. When calculator integrated circuit prices started to fall, Pico refocused on commercial products rather than plain ICs.
In 1974, the Pico engineers jointly developed an LP record turntable, the ADC Accutrac 4000, with Birmingham Sound Reproducers, at the time the largest manufacturer of record changers in the world. It could be programmed to play selected tracks, and could be operated by a remote control using ultrasound signals, which sparked the idea of remote control for lights and appliances. By 1975, the X10 project was conceived, so named because it was the tenth project. In 1978, X10 products started to appear in RadioShack and Sears stores. Together with BSR a partnership was formed, with the name X10 Ltd. At that time the system consisted of a 16 channel command console, a lamp module, and an appliance module. Soon after came the wall switch module and the first X10 timer.
In the 1980s, the CP-290 computer interface was released. Software for the interface runs on the Commodore 64, Apple II, classic Mac OS, MS-DOS, and Microsoft Windows.
In 1985, BSR went out of business, and X10 (USA) Inc. was formed. In the early 1990s, the consumer market was divided into two main categories, the ultra-high-end with a budget at US$100,000 and the mass market with budgets at US$2,000 to US$35,000. CEBus (1984) and LonWorks (1991) were attempts to improve reliability and replace X10.
Brands
X10 components have been sold under a variety of brand names:
X10 Powerhouse
X10 Pro
X10 Activehome
Radio Shack Plug 'n Power
Leviton Central Control System (CCS)
Leviton Decora Electronic Controls
Sears Home Control System
Stanley LightMaker
Stanley Homelink
Black & Decker Freewire
IBM Home Director
RCA Home Control
GE Homeminder
Advanced Control Technologies (ACT)
Magnavox Home Security
NuTone
Schlage
Smarthome
Safety 1st
HAL
HomeSeer
Many of these vendors have since exited this business.
Power line carrier control overview
Household electrical wiring, which powers lights and appliances, is used to send digital data between X10 devices. This data is encoded onto a 120 kHz carrier which is transmitted as bursts during the relatively quiet zero crossings of the 50 or 60 Hz AC waveform. One bit is transmitted at each zero crossing.
The digital data consists of an address and a command sent from a controller to a controlled device. More advanced controllers can also query equally advanced devices to respond with their status. This status may be as simple as "off" or "on", or the current dimming level, or even the temperature or other sensor reading. Devices usually plug into the wall where a lamp, television, or other household appliance plugs in; however some built-in controllers are also available for wall switches and ceiling fixtures.
The relatively high-frequency carrier wave carrying the signal cannot pass through a power transformer or across the phases of a multiphase system. For split phase systems, the signal can be passively coupled from leg-to-leg using a passive capacitor, but for three phase systems or where the capacitor provides insufficient coupling, an active X10 repeater can be used. To allow signals to be coupled across phases and still match each phase's zero crossing point, each bit is transmitted three times in each half cycle, offset by 1/6 cycle.
It may also be desirable to block X10 signals from leaving the local area so, for example, the X10 controls in one house do not interfere with the X10 controls in a neighboring house. In this situation, inductive filters can be used to attenuate the X10 signals coming into or going out of the local area.
Protocol
Whether using power line or radio communications, packets transmitted using the X10 control protocol consist of a four bit house code followed by one or more four bit unit codes, finally followed by a four bit command. For the convenience of users configuring a system, the four bit house code is selected as a letter from A through P while the four bit unit code is a number 1 through 16.
When the system is installed, each controlled device is configured to respond to one of the 256 possible addresses (16 house codes × 16 unit codes); each device reacts to commands specifically addressed to it, or possibly to several broadcast commands.
The protocol may transmit a message that says "select code A3", followed by "turn on", which commands unit "A3" to turn on its device. Several units can be addressed before giving the command, allowing a command to affect several units simultaneously. For example, "select A3", "select A15", "select A4", and finally, "turn on", causes units A3, A4, and A15 to all turn on.
Note that there is no restriction that prevents using more than one house code within a single house. The "all lights on" command and "all units off" commands will only affect a single house code, so an installation using multiple house codes effectively has the devices divided into separate zones.
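As a concrete illustration of this addressing scheme, the following Python sketch (illustrative only; the function and packet strings are hypothetical and not part of any real X10 library) models the pattern of selecting several units and then issuing one command:

HOUSE_CODES = [chr(c) for c in range(ord("A"), ord("P") + 1)]  # 16 house codes, A..P
UNIT_CODES = list(range(1, 17))                                # 16 unit codes, 1..16

def build_message(house, units, command):
    # Model an X10 exchange: address one or more units, then send one command.
    if house not in HOUSE_CODES:
        raise ValueError("invalid house code: " + repr(house))
    if any(u not in UNIT_CODES for u in units):
        raise ValueError("unit codes must be 1..16")
    packets = ["select " + house + str(u) for u in units]  # one address packet per unit
    packets.append(house + " " + command)                  # single command applies to all selected units
    return packets

# Example from the text: A3, A15 and A4 all turn on with one command.
print(build_message("A", [3, 15, 4], "turn on"))

With 16 house codes and 16 unit codes this gives the 256 possible addresses mentioned above.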
One-way vs two-way
Inexpensive X10 devices only receive commands and do not acknowledge their status to the rest of the network. Two-way controller devices allow for a more robust network but cost two to four times more and require two-way X10 devices.
List of X10 commands
List of X10 house and unit code encodings
Note that the binary values for the house and unit codes correspond, but they are not a straight binary sequence. A unit code is followed by one additional "0" bit to distinguish from a command code (detailed above).
Physical layer details
In the 60 Hz AC current flow, each bit transmitted requires two zero crossings. A "1" bit is represented by an active zero crossing followed by an inactive zero crossing. A "0" bit is represented by an inactive zero crossing followed by an active zero crossing. An active zero crossing is represented by a 1 millisecond burst of 120 kHz at the zero crossing point (nominally 0°, but within 200 microseconds of the zero crossing point). An inactive zero crossing will not have a pulse of 120 kHz signal.
In order to provide a predictable start point, every data frame transmitted always begins with a start code of three active zero crossings followed by an inactive crossing. Since all data bits are sent as one active and one inactive (or one inactive and one active) zero crossing, the start code, possessing three active crossings in a row, can be uniquely detected. Many X10 protocol charts represent this start code as "1110", but it is important to realize that is in terms of zero crossings, not data bits.
Immediately after the start code, a 4-bit house code (normally represented by the letters A to P on interface units) appears, and after the house code comes a 5-bit function code. Function codes may specify a unit number code (1–16) or a command code. The unit number or command code occupies the first 4 of the 5 bits. The final bit is a 0 for a unit code and a 1 for a command code. Multiple unit codes may be transmitted in sequence before a command code is finally sent. The command will be applied to all unit codes sent. It is also possible to send a message with no unit codes, just a house code and a command code. This applies the command to the last group of unit codes previously sent.
One start code, one house code, and one function code together constitute an X10 frame and represent the minimum components of a valid X10 data packet.
Each frame is sent twice in succession for redundancy and reliability, so that receivers can decode it despite power line noise, and to accommodate line repeaters. After allowing for retransmission, line control, etc., data rates are around 20 bit/s, making X10 data transmission so slow that the technology is confined to turning devices on and off or other very simple operations.
Whenever the data changes from one address to another address, from an address to a command, or from one command to another command, the data frames must be separated by at least 6 clear zero crossings (or "000000"). The sequence of six zeros resets the device decoder hardware.
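The framing rules above can be summarized in a short sketch. This is only an illustration: the actual 4-bit house-code and 5-bit unit/command encodings are given in the encoding tables referred to earlier and are not reproduced here, so the bit strings passed in below are placeholders, not real X10 codes.

START_CODE = "1110"   # three active zero crossings followed by one inactive crossing
FRAME_GAP = "000000"  # at least six clear zero crossings between dissimilar frames

def encode_bits(bits):
    # Each data bit occupies two zero crossings: 1 -> active then inactive, 0 -> inactive then active.
    return "".join("10" if b == "1" else "01" for b in bits)

def frame(house_bits, function_bits):
    # One frame: start code, 4 house-code bits, 5 function bits (unit or command).
    assert len(house_bits) == 4 and len(function_bits) == 5
    return START_CODE + encode_bits(house_bits + function_bits)

def addressed_command(house_bits, unit_bits, command_bits):
    # Each frame is sent twice; address and command groups are separated by six clear crossings.
    return frame(house_bits, unit_bits) * 2 + FRAME_GAP + frame(house_bits, command_bits) * 2

crossings = addressed_command("0110", "00100", "00101")  # placeholder bit patterns only
print(len(crossings), "zero crossings =", round(len(crossings) / 120, 2), "s at 60 Hz")

With these rules a single addressed command occupies 94 zero crossings, roughly 0.8 seconds on a 60 Hz line, which is consistent with the transmission time discussed under "Lack of speed" below.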
Later hardware developments (1997) improved on the native X10 hardware, and versions for the European 230 VAC 50 Hz market followed (2001). All improved products use the same X10 protocol and remain compatible.
RF protocol
To allow for wireless keypads, remote switches, motion sensors, et cetera, an RF protocol is also defined. X10 wireless devices send data packets that are nearly identical to the NEC IR protocol used by many IR remotes, and a radio receiver then provides a bridge which translates these radio packets to ordinary X10 power line control packets. The wireless protocol operates at a frequency of 310 MHz in the U.S. and 433.92 MHz in European systems.
The devices available using the radio protocol include:
Keypad controllers, for example the X10 Palm Pad HR12A ("clickers")
Keychain controllers that can control one to four X10 devices, such as the KR19A.
Burglar alarm modules that can transmit sensor data
Passive infrared switches to control lighting and X-10 chimes
Non-passive information bursts
Hardware support
Device modules
Depending on the load that is to be controlled, different modules must be used. For incandescent lamp loads, a lamp module or wall switch module can be used. These modules switch the power using a TRIAC solid state switch and are also capable of dimming the lamp load. Lamp modules are almost silent in operation, and generally rated to control loads ranging from approximately 60 to 500 watts.
For loads other than incandescent lamps, such as fluorescent lamps, high-intensity discharge lamps, and electrical home appliances, the triac-based electronic switching in the lamp module is unsuitable and an appliance module must be used instead. These modules switch the power using an impulse relay. In the U.S., these modules are generally rated to control loads up to 15 amperes (1800 watts at 120 V).
Many device modules offer a feature called local control. If the module is switched off, operating the power switch on the lamp or appliance will cause the module to turn on. In this way, a lamp can still be lit or a coffee pot turned on without the need to use an X10 controller. Wall switch modules may not offer this feature. Because the local-control sensing passes a small current through the load, older appliance modules may fail to work with a very low load such as a 5 W LED table lamp.
Some wall switch modules offer a feature called local dimming. Ordinarily, the local push button of a wall switch module simply offers on/off control with no possibility of locally dimming the controlled lamp. If local dimming is offered, holding down the push button will cause the lamp to cycle through its brightness range.
Higher end modules have more advanced features such as programmable on levels, customizable fade rates, the ability to transmit commands when used (referred to as 2-way devices), and scene support.
There are sensor modules that sense and report temperature, light, infrared, motion, or contact openings and closures. Device modules include thermostats, audible alarms and controllers for low voltage switches.
Controllers
X10 controllers range from extremely simple to very sophisticated.
The simplest controllers are arranged to control four X10 devices at four sequential addresses (1–4 or 5–8). The controllers typically contain the following buttons:
Unit 1 on/off
Unit 2 on/off
Unit 3 on/off
Unit 4 on/off
Brighten/dim (last selected unit)
All lights on/all units off
More sophisticated controllers can control more units and/or incorporate timers that perform preprogrammed functions at specific times each day. Units are also available that use passive infrared motion detectors or photocells to turn lights on and off based on external conditions.
Finally, very sophisticated units are available, such as the CM11A serial interface, that can be fully programmed and/or controlled with a piece of software called ActiveHome. These systems can execute many different timed events, respond to external sensors, and execute, with the press of a single button, an entire scene, turning lights on, establishing brightness levels, and so on. Control programs are available for computers running Microsoft Windows, Apple's Macintosh, Linux and FreeBSD operating systems.
Burglar alarm systems are also available. These systems contain door/window sensors, as well as motion sensors that use a coded radio frequency (RF) signal to identify when they are tripped or just to routinely check in and give a heartbeat signal to show that the system is still active. Users can arm and disarm their system via several different remote controls that also use a coded RF signal to ensure security. When an alarm is triggered, the console will make an outbound telephone call with a recorded message. The console will also use X10 protocols to flash lights when an alarm has been triggered while the security console sounds an external siren. Using X10 protocols, signals will also be sent to remote sirens for additional security.
Bridges
There are bridges to translate X10 to other home automation standards (e.g., KNX). ioBridge can be used to translate the X10 protocol to a web service API via the X10 PSC04 Powerline Interface Module. The magDomus home controller from magnocomp allows interconnection and inter-operation between most home automation technologies.
Thermostat
With X10 being an open standard, companies such as RCS released an X10-controllable thermostat, model TX15-B, which can be controlled via a web interface or a computer running X10 software such as HAL or HomeSeer.
Limitations
Compatibility
Solid-state switches used in X10 controls pass a very small leakage current. Compact fluorescent lamps may display nuisance blinking when switched off; CFL manufacturers recommend against controlling lamps with solid-state timers or remote controls.
Some X10 controllers with triac solid-state outputs may not work well with low power devices (below 50 watts) or devices like fluorescent bulbs due to the leakage current of the device. An appliance module, using a relay with metallic contacts, may resolve this problem. Many older appliance units have a 'local control' feature whereby the relay is intentionally bypassed with a high value resistor; the module can then sense the appliance's own switch and turn on the relay when the local switch is operated. This sense current may be incompatible with LED or CFL lamps.
Not all devices can be used on a dimmer. Fluorescent lamps are not dimmable with incandescent lamp dimmers; certain models of compact fluorescent lamps are dimmable but cost more. Motorized appliances such as fans, etc. generally will not operate as expected on a dimmer.
Wiring and interfering sources
One problem with X10 is excessive attenuation of signals between the two live conductors in the 3-wire 120/240 volt system used in typical North American residential construction. Signals from a transmitter on one live conductor may not propagate through the high impedance of the distribution transformer winding to the other live conductor. Often, there's simply no reliable path to allow the X10 signals to propagate from one transformer leg wire to the other; this failure may come and go as large 240 volt devices such as stoves or dryers are turned on and off. (When turned on, such devices provide a low-impedance bridge for the X10 signals between the two leg wires.) This problem can be permanently overcome by installing a capacitor between the leg wires as a path for the X10 signals; manufacturers commonly sell signal couplers that plug into 240 volt sockets that perform this function. More sophisticated installations install an active repeater device between the legs, while others combine signal amplifiers with a coupling device. A repeater is also needed for inter-phase communication in homes with three-phase electric power. In many countries outside North America, entire houses are typically wired from a single 240 volt single-phase wire, so this problem does not occur.
Television receivers or household wireless devices may cause spurious "off" or "on" signals. Noise filtering (as installed on computers as well as many modern appliances) may help keep external noise out of X10 signals, but noise filters not designed for X10 may also attenuate X10 signals traveling on the branch circuit to which the appliance is connected.
Certain types of power supplies used in modern electronic equipment, such as computers, television receivers and satellite receivers, attenuate passing X10 signals by providing a low impedance path to high frequency signals. Typically, the capacitors used on the inputs to these power supplies short the X10 signal from line to neutral, suppressing any hope of X10 control on the circuit near that device. Filters are available that will block the X10 signals from ever reaching such devices; plugging offending devices into such filters can cure mysterious X10 intermittent failures.
A backup power supply or standby power supply, such as those used with computers and other electronic devices, can completely block X10 signals on that leg of a household installation because of the filtering used in the power supply.
Commands getting lost
X10 signals can only be transmitted one command at a time, first by addressing the device to control, and then sending an operation for that device to perform. If two X10 signals are transmitted at the same time they may collide or interleave, leading to commands that either cannot be decoded or that trigger incorrect operations. The CM15A and RR501 Transceiver can avoid these signal collisions that can sometimes occur with other models.
Lack of speed
The X10 protocol is slow. It takes roughly three quarters of a second to transmit a device address and a command. While generally not noticeable when using a tabletop controller, it becomes a noticeable problem when using 2-way switches or when utilizing some sort of computerized controller. The apparent delay can be lessened somewhat by using slower device dim rates. With more advanced modules another option is to use group control (lighting scene) extended commands. These allow adjusting several modules at once by a single command.
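A rough cross-check of this figure (assuming a 60 Hz line, i.e. 120 zero crossings per second, and the framing described under "Physical layer details"): one frame occupies 4 + 2 × 9 = 22 zero crossings, so an address frame pair, a six-crossing gap, and a command frame pair together occupy about 2 × 22 + 6 + 2 × 22 = 94 crossings, or roughly 0.8 seconds, in line with the three quarters of a second quoted above.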
Limited functionality
The X10 protocol does support more advanced control over dimming speed, direct dim level setting and group control (scene settings). This is done via the extended message set, which is an official part of the X10 standard. However, support for all extended messages is not mandatory, and many cheaper modules implement only the basic message set. These require adjusting each lighting circuit one after the other, which can be visually unappealing and also very slow.
Interference and lack of encryption
The standard X10 power line and RF protocols lack support for encryption, and can only address 256 devices. Unfiltered power line signals from close neighbors using the same X10 device addresses may interfere with each other. Interfering RF wireless signals may similarly be received, making it easy for anyone nearby with an X10 RF remote to wittingly or unwittingly disrupt a premises where an RF-to-power-line device is in use.
See also
Insteon
Home automation
Power-line communication
References
External links
X10 Knowledge Base
X10 Schematics and Modifications
Digital X-10, Which One Should I Use?.
Home automation
Remote control
Architectural lighting design
Lighting | X10 (industry standard) | [
"Technology"
] | 4,185 | [
"Home automation"
] |
42,445 | https://en.wikipedia.org/wiki/Dalton%20%28unit%29 | The dalton or unified atomic mass unit (symbols: Da or u, respectively) is a unit of mass defined as 1/12 of the mass of an unbound neutral atom of carbon-12 in its nuclear and electronic ground state and at rest. It is a non-SI unit accepted for use with SI. The atomic mass constant, denoted mu, is defined identically, giving mu = 1 Da.
This unit is commonly used in physics and chemistry to express the mass of atomic-scale objects, such as atoms, molecules, and elementary particles, both for discrete instances and multiple types of ensemble averages. For example, an atom of helium-4 has a mass of about 4.0026 Da. This is an intrinsic property of the isotope and all helium-4 atoms have the same mass. Acetylsalicylic acid (aspirin), C9H8O4, has an average mass of about 180.16 Da. However, there are no acetylsalicylic acid molecules with this mass. The two most common masses of individual acetylsalicylic acid molecules are about 180.042 Da, having the most common isotopes, and about 181.046 Da, in which one carbon is carbon-13.
The molecular masses of proteins, nucleic acids, and other large polymers are often expressed with the unit kilodalton (kDa) and megadalton (MDa). Titin, one of the largest known proteins, has a molecular mass of between 3 and 3.7 megadaltons. The DNA of chromosome 1 in the human genome has about 249 million base pairs, each with an average mass of about , or total.
The mole is a unit of amount of substance used in chemistry and physics, such that the mass of one mole of a substance expressed in grams is numerically equal to the average mass of one of its particles expressed in daltons. That is, the molar mass of a chemical compound expressed in g/mol or kg/kmol is numerically equal to its average molecular mass expressed in Da. For example, the average mass of one molecule of water is about 18.0153 Da, and the mass of one mole of water is about 18.0153 g. A protein whose molecules have an average mass of, say, 64 kDa would have a molar mass of 64 kg/mol (64,000 g/mol). However, while this equality can be assumed for practical purposes, it is only approximate, because of the 2019 redefinition of the mole.
In general, the mass in daltons of an atom is numerically close but not exactly equal to the number of nucleons in its nucleus. It follows that the molar mass of a compound (grams per mole) is numerically close to the average number of nucleons contained in each molecule. By definition, the mass of an atom of carbon-12 is 12 daltons, which corresponds with the number of nucleons that it has (6 protons and 6 neutrons). However, the mass of an atomic-scale object is affected by the binding energy of the nucleons in its atomic nuclei, as well as the mass and binding energy of its electrons. Therefore, this equality holds only for the carbon-12 atom in the stated conditions, and will vary for other substances. For example, the mass of an unbound atom of the common hydrogen isotope (hydrogen-1, protium) is about 1.00783 Da,
the mass of a proton is about 1.00728 Da, the mass of a free neutron is about 1.00866 Da, and the mass of a hydrogen-2 (deuterium) atom is about 2.01410 Da. In general, the difference (absolute mass excess) is less than 0.1%; exceptions include hydrogen-1 (about 0.8%), helium-3 (0.5%), lithium-6 (0.25%) and beryllium (0.14%).
The dalton differs from the unit of mass in the system of atomic units, which is the electron rest mass (me).
Energy equivalents
The atomic mass constant can also be expressed as its energy equivalent, muc2. The CODATA recommended value is approximately 931.494 MeV (about 1.492 × 10^-10 J).
The mass-equivalent is commonly used in place of a unit of mass in particle physics, and these values are also important for the practical determination of relative atomic masses.
History
Origin of the concept
The interpretation of the law of definite proportions in terms of the atomic theory of matter implied that the masses of atoms of various elements had definite ratios that depended on the elements. While the actual masses were unknown, the relative masses could be deduced from that law. In 1803 John Dalton proposed to use the (still unknown) atomic mass of the lightest atom, hydrogen, as the natural unit of atomic mass. This was the basis of the atomic weight scale.
For technical reasons, in 1898, chemist Wilhelm Ostwald and others proposed to redefine the unit of atomic mass as 1/16 of the mass of an oxygen atom. That proposal was formally adopted by the International Committee on Atomic Weights (ICAW) in 1903. That was approximately the mass of one hydrogen atom, but oxygen was more amenable to experimental determination. This suggestion was made before the discovery of isotopes in 1912. Physicist Jean Perrin had adopted the same definition in 1909 during his experiments to determine the atomic masses and the Avogadro constant. This definition remained unchanged until 1961. Perrin also defined the "mole" as an amount of a compound that contained as many molecules as 32 grams of oxygen (O2). He called that number the Avogadro number in honor of physicist Amedeo Avogadro.
Isotopic variation
The discovery of isotopes of oxygen in 1929 required a more precise definition of the unit. Two distinct definitions came into use. Chemists chose to define the AMU as 1/16 of the average mass of an oxygen atom as found in nature; that is, the average of the masses of the known isotopes, weighted by their natural abundance. Physicists, on the other hand, defined it as 1/16 of the mass of an atom of the isotope oxygen-16 (16O).
Definition by IUPAC
The existence of two distinct units with the same name was confusing, and the difference, though small in relative terms, was large enough to affect high-precision measurements. Moreover, it was discovered that the isotopes of oxygen had different natural abundances in water and in air. For these and other reasons, in 1961 the International Union of Pure and Applied Chemistry (IUPAC), which had absorbed the ICAW, adopted a new definition of the atomic mass unit for use in both physics and chemistry; namely, 1/12 of the mass of a carbon-12 atom. This new value was intermediate between the two earlier definitions, but closer to the one used by chemists (who would be affected the most by the change).
The new unit was named the "unified atomic mass unit" and given a new symbol "u", to replace the old "amu" that had been used for the oxygen-based unit. However, the old symbol "amu" has sometimes been used, after 1961, to refer to the new unit, particularly in lay and preparatory contexts.
With this new definition, the standard atomic weight of carbon is about 12.011, and that of oxygen is about 15.999. These values, generally used in chemistry, are based on averages of many samples from Earth's crust, its atmosphere, and organic materials.
Adoption by BIPM
The IUPAC 1961 definition of the unified atomic mass unit, with that name and symbol "u", was adopted by the International Bureau for Weights and Measures (BIPM) in 1971 as a non-SI unit accepted for use with the SI.
Unit name
In 1993, the IUPAC proposed the shorter name "dalton" (with symbol "Da") for the unified atomic mass unit. As with other unit names such as watt and newton, "dalton" is not capitalized in English, but its symbol, "Da", is capitalized. The name was endorsed by the International Union of Pure and Applied Physics (IUPAP) in 2005.
In 2003 the name was recommended to the BIPM by the Consultative Committee for Units, part of the CIPM, as it "is shorter and works better with [SI] prefixes". In 2006, the BIPM included the dalton in its 8th edition of the SI brochure of formal definitions as a non-SI unit accepted for use with the SI. The name was also listed as an alternative to "unified atomic mass unit" by the International Organization for Standardization in 2009. It is now recommended by several scientific publishers, and some of them consider "atomic mass unit" and "amu" deprecated. In 2019, the BIPM retained the dalton in its 9th edition of the SI brochure, while dropping the unified atomic mass unit from its table of non-SI units accepted for use with the SI, but secondarily notes that the dalton (Da) and the unified atomic mass unit (u) are alternative names (and symbols) for the same unit.
2019 revision of the SI
The definition of the dalton was not affected by the 2019 revision of the SI, that is, 1 Da in the SI is still 1/12 of the mass of a carbon-12 atom, a quantity that must be determined experimentally in terms of SI units. However, the definition of a mole was changed to be the amount of substance consisting of exactly 6.02214076 × 10^23 entities and the definition of the kilogram was changed as well. As a consequence, the molar mass constant remains close to but no longer exactly 1 g/mol, meaning that the mass in grams of one mole of any substance remains nearly but no longer exactly numerically equal to its average molecular mass in daltons, although the relative standard uncertainty involved (of the order of a few parts in 10^10) at the time of the redefinition is insignificant for all practical purposes.
Measurement
Though relative atomic masses are defined for neutral atoms, they are measured (by mass spectrometry) for ions: hence, the measured values must be corrected for the mass of the electrons that were removed to form the ions, and also for the mass equivalent of the electron binding energy, Eb/c2. The total binding energy of the six electrons in a carbon-12 atom, expressed as the fraction Eb/muc2, is about one part in 10 million of the mass of the atom.
Before the 2019 revision of the SI, experiments were aimed to determine the value of the Avogadro constant for finding the value of the unified atomic mass unit.
Josef Loschmidt
A reasonably accurate value of the atomic mass unit was first obtained indirectly by Josef Loschmidt in 1865, by estimating the number of particles in a given volume of gas.
Jean Perrin
Perrin estimated the Avogadro number by a variety of methods, at the turn of the 20th century. He was awarded the 1926 Nobel Prize in Physics, largely for this work.
Coulometry
The electric charge per mole of elementary charges is a constant called the Faraday constant, F, whose value had been essentially known since 1834 when Michael Faraday published his works on electrolysis. In 1910, Robert Millikan obtained the first measurement of the charge on an electron, −e. The quotient F/e provided an estimate of the Avogadro constant.
The classic experiment is that of Bower and Davis at NIST, and relies on dissolving silver metal away from the anode of an electrolysis cell, while passing a constant electric current I for a known time t. If m is the mass of silver lost from the anode and A the atomic weight of silver, then the Faraday constant is given by: F = A I t / m.
The NIST scientists devised a method to compensate for silver lost from the anode by mechanical causes, and conducted an isotope analysis of the silver used to determine its atomic weight. Their value for the conventional Faraday constant was F = , which corresponds to a value for the Avogadro constant of : both values have a relative standard uncertainty of .
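As a numerical illustration of this route (using rounded modern constants rather than the historical NIST figures, which are not reproduced in this text), the quotient F/e does indeed give the Avogadro constant:

# Coulometric route to the Avogadro constant: N_A = F / e (rounded modern values).
F = 96485.332           # Faraday constant, C/mol (approximate)
e = 1.602176634e-19     # elementary charge, C (exact since 2019)
print("N_A ~ %.4e per mole" % (F / e))   # prints roughly 6.0221e+23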
Electron mass measurement
In practice, the atomic mass constant is determined from the electron rest mass me and the electron relative atomic mass Ar(e) (that is, the mass of the electron divided by the atomic mass constant). The relative atomic mass of the electron can be measured in cyclotron experiments, while the rest mass of the electron can be derived from other physical constants via
me = 2R∞h/(cα2), where c is the speed of light, h is the Planck constant, α is the fine-structure constant, and R∞ is the Rydberg constant.
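As a numerical check (not part of the original text; the constants below are rounded CODATA values used only for illustration), this relation reproduces the accepted electron mass:

# m_e = 2 * R_inf * h / (c * alpha^2), with rounded CODATA values.
R_inf = 1.0973731568e7    # Rydberg constant, 1/m
h = 6.62607015e-34        # Planck constant, J s (exact since 2019)
c = 2.99792458e8          # speed of light, m/s (exact)
alpha = 7.2973525693e-3   # fine-structure constant
m_e = 2 * R_inf * h / (c * alpha ** 2)
print("m_e ~ %.4e kg" % m_e)   # prints roughly 9.109e-31 kg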
As may be observed from the old values (2014 CODATA), the main limiting factor in the precision of the Avogadro constant was the uncertainty in the value of the Planck constant, as all the other constants that contribute to the calculation were known more precisely.
The power of having defined values of universal constants, as is presently the case, can be understood by comparing them with the current values (2018 CODATA).
X-ray crystal density methods
Silicon single crystals may be produced today in commercial facilities with extremely high purity and with few lattice defects. This method defined the Avogadro constant as the ratio of the molar volume, Vm, to the atomic volume Vatom:
NA = Vm/Vatom, where Vatom = Vcell/n and n is the number of atoms per unit cell of volume Vcell.
The unit cell of silicon has a cubic packing arrangement of 8 atoms, and the unit cell volume may be measured by determining a single unit cell parameter, the length a of one of the sides of the cube. The CODATA value of a for silicon is approximately 5.431 × 10^-10 m (543.1 pm).
In practice, measurements are carried out on a distance known as d220(Si), which is the distance between the planes denoted by the Miller indices {220}, and is equal to a/√8 (about 192.0 pm).
The isotope proportional composition of the sample used must be measured and taken into account. Silicon occurs in three stable isotopes (28Si, 29Si, 30Si), and the natural variation in their proportions is greater than other uncertainties in the measurements. The atomic weight A for the sample crystal can be calculated, as the standard atomic weights of the three nuclides are known with great accuracy. This, together with the measured density ρ of the sample, allows the molar volume Vm to be determined: Vm = A Mu / ρ,
where Mu is the molar mass constant. The CODATA value for the molar volume of silicon is , with a relative standard uncertainty of
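A back-of-the-envelope version of this determination, using rounded textbook values for the lattice parameter, density, and molar mass of silicon (approximate figures, for illustration only), reproduces the expected order of magnitude:

# XRCD route: N_A = n * M / (rho * a^3), with n = 8 atoms per cubic unit cell.
n = 8                # atoms per silicon unit cell
a = 5.431e-10        # lattice parameter, m (approximate)
rho = 2329.0         # density of silicon, kg/m^3 (approximate)
M = 28.0855e-3       # molar mass of natural silicon, kg/mol (approximate)
N_A = n * M / (rho * a ** 3)
print("N_A ~ %.3e per mole" % N_A)   # prints roughly 6.02e+23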
See also
Mass (mass spectrometry)
Kendrick mass
Monoisotopic mass
Mass-to-charge ratio
Notes
References
External links
Metrology
Nuclear chemistry
Units of chemical measurement
Units of mass | Dalton (unit) | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,914 | [
"Units of measurement",
"Nuclear chemistry",
"Quantity",
"Units of mass",
"Mass",
"Chemical quantities",
"nan",
"Nuclear physics",
"Units of chemical measurement",
"Matter"
] |
42,451 | https://en.wikipedia.org/wiki/Pentaerythritol%20tetranitrate | Pentaerythritol tetranitrate (PETN), also known as PENT, pentyl, PENTA (ПЕНТА, primarily in Russian), TEN (tetraeritrit nitrate), corpent, or penthrite (or, rarely and primarily in German, as nitropenta), is an explosive material. It is the nitrate ester of pentaerythritol, and is structurally very similar to nitroglycerin. Penta refers to the five carbon atoms of the neopentane skeleton. PETN is a very powerful explosive material with a relative effectiveness factor of 1.66. When mixed with a plasticizer, PETN forms a plastic explosive. Along with RDX it is the main ingredient of Semtex.
PETN is also used as a vasodilator drug to treat certain heart conditions, such as for management of angina.
History
Pentaerythritol tetranitrate was first prepared and patented in 1894 by an explosives manufacturer in Cologne, Germany. The production of PETN started in 1912, when the improved method of production was patented by the German government. PETN was used by the German military in World War I. It was also used in the MG FF/M autocannons and many other weapon systems of the Luftwaffe in World War II.
Properties
PETN is practically insoluble in water (0.01 g/100 mL at 50 °C), weakly soluble in common nonpolar solvents such as aliphatic hydrocarbons (like gasoline) or tetrachloromethane, but soluble in some other organic solvents, particularly in acetone (about 15 g/100 g of the solution at 20 °C, 55 g/100 g at 60 °C) and dimethylformamide (40 g/100 g of the solution at 40 °C, 70 g/100 g at 70 °C). It is a non-planar molecule that crystallizes in the space group P21c. PETN forms eutectic mixtures with some liquid or molten aromatic nitro compounds, e.g. trinitrotoluene (TNT) or tetryl. Due to steric hindrance of the adjacent neopentyl-like moiety, PETN is resistant to attack by many chemical reagents; it does not hydrolyze in water at room temperature or in weaker alkaline aqueous solutions. Water at 100 °C or above causes hydrolysis to dinitrate; presence of 0.1% nitric acid accelerates the reaction.
The chemical stability of PETN is of interest, because of the presence of PETN in aging weapons. Neutron radiation degrades PETN, producing carbon dioxide and some pentaerythritol dinitrate and trinitrate. Gamma radiation increases the thermal decomposition sensitivity of PETN, lowers melting point by few degrees Celsius, and causes swelling of the samples. Like other nitrate esters, the primary degradation mechanism is the loss of nitrogen dioxide; this reaction is autocatalytic. Studies were performed on thermal decomposition of PETN.
In the environment, PETN undergoes biodegradation. Some bacteria denitrate PETN to trinitrate and then dinitrate, which is then further degraded. PETN has low volatility and low solubility in water, and therefore has low bioavailability for most organisms. Its toxicity is relatively low, and its transdermal absorption also seems to be low. It poses a threat for aquatic organisms. It can be degraded to pentaerythritol by iron.
Production
Production is by the reaction of pentaerythritol with concentrated nitric acid to form a precipitate which can be recrystallized from acetone to give processable crystals.
Variations of a method first published in US Patent 2,370,437 by Acken and Vyverberg (1945 to Du Pont) form the basis of all current commercial production.
PETN is manufactured by numerous manufacturers as a powder, or together with nitrocellulose and plasticizer as thin plasticized sheets (e.g. Primasheet 1000 or Detasheet). PETN residues are easily detectable in hair of people handling it. The highest residue retention is on black hair; some residues remain even after washing.
Explosive use
The most common use of PETN is as an explosive with high brisance. It is a secondary explosive, meaning it is more difficult to detonate than primary explosives, so dropping or igniting it will typically not cause an explosion (at standard atmospheric pressure it is difficult to ignite and burns vigorously), but is more sensitive to shock and friction than other secondary explosives such as TNT or tetryl. Under certain conditions a deflagration to detonation transition can occur, just like that of ammonium nitrate.
It is rarely used alone in military operations due to its lower stability, but is primarily used in the main charges of plastic explosives (such as C4) along with other explosives (especially RDX), booster and bursting charges of small caliber ammunition, in upper charges of detonators in some land mines and shells, as the explosive core of detonation cord. PETN is the least stable of the common military explosives, but can be stored without significant deterioration for longer than nitroglycerin or nitrocellulose.
During World War II, PETN was most importantly used in exploding-bridgewire detonators for the atomic bombs. These exploding-bridgewire detonators gave more precise detonation compared to primacord. PETN was used for these detonators because it was safer than primary explosives like lead azide: while it was sensitive, it would not detonate below a threshold amount of energy. Exploding bridgewires containing PETN remain used in current nuclear weapons. In spark detonators, PETN is used to avoid the need for primary explosives; the energy needed for a successful direct initiation of PETN by an electric spark ranges between 10–60 mJ.
Its basic explosion characteristics are:
Explosion energy: 5810 kJ/kg (1390 kcal/kg), so 1 kg of PETN has the energy of 1.24 kg TNT.
Detonation velocity: 8350 m/s (1.73 g/cm3), 7910 m/s (1.62 g/cm3), 7420 m/s (1.5 g/cm3), 8500 m/s (pressed in a steel tube)
Volume of gases produced: 790 dm3/kg (other value: 768 dm3/kg)
Explosion temperature: 4230 °C
Oxygen balance: −6.31 atom -g/kg
Melting point: 141.3 °C (pure), 140–141 °C (technical)
Trauzl lead block test: 523 cm3 (other values: 500 cm3 when sealed with sand, or 560 cm3 when sealed with water)
Critical diameter (minimal diameter of a rod that can sustain detonation propagation): 0.9 mm for PETN at 1 g/cm3, smaller for higher densities (other value: 1.5 mm)
In mixtures
PETN is used in a number of compositions. It is a major ingredient of the Semtex plastic explosive. It is also used as a component of pentolite, a castable mixture with TNT (usually 50/50 but may contain more TNT), which is, along with pure PETN, a common explosive for boosters for blasting work (as in mining). The XTX8003 extrudable explosive, used in the W68 and W76 nuclear warheads, is a mixture of 80% PETN and 20% of Sylgard 182, a silicone rubber. It is often phlegmatized by addition of 5–40% of wax, or by polymers (producing polymer-bonded explosives); in this form it is used in some cannon shells up to 30 mm caliber, though it is unsuitable for higher calibers. It is also used as a component of some gun propellants and solid rocket propellants. Nonphlegmatized PETN is stored and handled with approximately 10% water content. PETN alone cannot be cast as it explosively decomposes slightly above its melting point, but it can be mixed with other explosives to form castable mixtures.
PETN can be initiated by a laser. A pulse with duration of 25 nanoseconds and 0.5–4.2 joules of energy from a Q-switched ruby laser can initiate detonation of a PETN surface coated with a 100 nm thick aluminium layer in less than half of a microsecond.
PETN has been replaced in many applications by RDX, which is thermally more stable and has a longer shelf life. PETN can be used in some ram accelerator types. Replacement of the central carbon atom with silicon produces Si-PETN, which is extremely sensitive.
Terrorist and military use
Ten kilograms of PETN was used in the 1980 Paris synagogue bombing.
In 1983, 307 people were killed after a truck bomb filled with PETN was detonated at the Beirut barracks.
In 1983, the "Maison de France" house in Berlin was brought to a near-total collapse by the detonation of of PETN by terrorist Johannes Weinrich.
On July 17, 1996, flight TWA 800 exploded and crashed in the Atlantic Ocean. Traces of PETN were found in the wreckage.
In 1999, Alfred Heinz Reumayr used PETN as the main charge for his fourteen improvised explosive devices that he constructed in a thwarted attempt to damage the Trans-Alaska Pipeline System.
In 2001, al-Qaeda member Richard Reid, the "Shoe Bomber", used PETN in the sole of his shoe in his unsuccessful attempt to blow up American Airlines Flight 63 from Paris to Miami. He had intended to use the solid triacetone triperoxide (TATP) as a detonator.
In 2009, PETN was used in an attempt by al-Qaeda in the Arabian Peninsula to murder the Saudi Arabian Deputy Minister of Interior Prince Muhammad bin Nayef, by Saudi suicide bomber Abdullah Hassan al Asiri. The target survived and the bomber died in the blast. The PETN was hidden in the bomber's rectum, which security experts described as a novel technique.
On 25 December 2009, PETN was found in the underwear of Umar Farouk Abdulmutallab, the "Underwear bomber", a Nigerian with links to al-Qaeda in the Arabian Peninsula. According to US law enforcement officials, he had attempted to blow up Northwest Airlines Flight 253 while approaching Detroit from Amsterdam. Abdulmutallab had tried, unsuccessfully, to detonate approximately of PETN sewn into his underwear by adding liquid from a syringe; however, only a small fire resulted.
In the al-Qaeda in the Arabian Peninsula October 2010 cargo plane bomb plot, two PETN-filled printer cartridges were found at East Midlands Airport and in Dubai on flights bound for the US on an intelligence tip. Both packages contained sophisticated bombs concealed in computer printer cartridges filled with PETN. The bomb found in England contained of PETN, and the one found in Dubai contained of PETN. Hans Michels, professor of safety engineering at University College London, told a newspaper that of PETN—"around 50 times less than was used"—would be enough to blast a hole in a metal plate twice the thickness of an aircraft's skin. In contrast, according to an experiment conducted by a BBC documentary team designed to simulate Abdulmutallab's Christmas Day bombing, using a Boeing 747 plane, even 80 grams of PETN was not sufficient to materially damage the fuselage.
On 12 July 2017, 150 grams of PETN was found in the Assembly of Uttar Pradesh, India's most populous state.
PETN was used by Israel in the manufacturing of pagers provided to Hezbollah. On September 17, 2024, the pagers detonated, killing 12 people and injuring thousands.
Detection
In the wake of terrorist PETN bomb plots, an article in Scientific American noted PETN is difficult to detect because it does not readily vaporize into the surrounding air. The Los Angeles Times noted in November 2010 that PETN's low vapor pressure makes it difficult for bomb-sniffing dogs to detect.
Many technologies can be used to detect PETN, including chemical sensors, X-rays, infrared, microwaves and terahertz, some of which have been implemented in public screening applications, primarily for air travel. PETN is one of the explosive chemicals typically of interest in that area, and it belongs to a family of common nitrate-based explosive chemicals which can often be detected by the same tests.
One detection system in use at airports involves analysis of swab samples obtained from passengers and their baggage. Whole-body imaging scanners that use radio-frequency electromagnetic waves, low-intensity X-rays, or T-rays of terahertz frequency that can detect objects hidden under clothing are not widely used because of cost, concerns about the resulting traveler delays, and privacy concerns.
Both parcels in the 2010 cargo plane bomb plot were x-rayed without the bombs being spotted. Qatar Airways said the PETN bomb "could not be detected by x-ray screening or trained sniffer dogs". The Bundeskriminalamt received copies of the Dubai x-rays, and an investigator said German staff would not have identified the bomb either. New airport security procedures followed in the U.S., largely to protect against PETN.
Medical use
Like nitroglycerin (glyceryl trinitrate) and other nitrates, PETN is also used medically as a vasodilator in the treatment of heart conditions. These drugs work by releasing the signaling gas nitric oxide in the body. The heart medicine Lentonitrat is nearly pure PETN.
Monitoring of oral usage of the drug by patients has been performed by determination of plasma levels of several of its hydrolysis products, pentaerythritol dinitrate, pentaerythritol mononitrate and pentaerythritol, in plasma using gas chromatography-mass spectrometry.
See also
Erythritol tetranitrate
RE factor
References
Further reading
Antianginals
Explosive chemicals
German inventions
Nitrate esters | Pentaerythritol tetranitrate | [
"Chemistry"
] | 2,980 | [
"Explosive chemicals"
] |
42,453 | https://en.wikipedia.org/wiki/Kirkendall%20effect | The Kirkendall effect is the motion of the interface between two metals that occurs due to the difference in diffusion rates of the metal atoms. The effect can be observed, for example, by placing insoluble markers at the interface between a pure metal and an alloy containing that metal, and heating to a temperature where atomic diffusion is reasonable for the given timescale; the boundary will move relative to the markers.
This process was named after Ernest Kirkendall (1914–2005), assistant professor of chemical engineering at Wayne State University from 1941 to 1946. The paper describing the discovery of the effect was published in 1947.
The Kirkendall effect has important practical consequences. One of these is the prevention or suppression of voids formed at the boundary interface in various kinds of alloy-to-metal bonding. These are referred to as Kirkendall voids.
History
The Kirkendall effect was discovered by Ernest Kirkendall and Alice Smigelskas in 1947, in the course of Kirkendall's ongoing research into diffusion in brass. The paper in which he discovered the famous effect was the third in his series of papers on brass diffusion, the first being his thesis. His second paper revealed that zinc diffused more quickly than copper in alpha-brass, which led to the research producing his revolutionary theory. Until this point, substitutional and ring methods were the dominant ideas for diffusional motion. Kirkendall's experiment produced evidence of a vacancy diffusion mechanism, which is the accepted mechanism to this day. At the time it was submitted, the paper and Kirkendall's ideas were rejected from publication by Robert Franklin Mehl, director of the Metals Research Laboratory at Carnegie Institute of Technology (now Carnegie Mellon University). Mehl refused to accept Kirkendall's evidence of this new diffusion mechanism and denied publication for over six months, only relenting after a conference was held and several other researchers confirmed Kirkendall's results.
Kirkendall's experiment
A bar of brass (70% Cu, 30% Zn) was used as a core, with molybdenum wires stretched along its length, and then coated in a layer of pure copper. Molybdenum was chosen as the marker material due to it being very insoluble in brass, eliminating any error due to the markers diffusing themselves. Diffusion was allowed to take place at 785 °C over the course of 56 days, with cross-sections being taken at six times throughout the span of the experiment. Over time, it was observed that the wire markers moved closer together as the zinc diffused out of the brass and into the copper. A difference in location of the interface was visible in cross sections of different times. Compositional change of the material from diffusion was confirmed by x-ray diffraction.
Diffusion mechanism
Early diffusion models postulated that atomic motion in substitutional alloys occurs via a direct exchange mechanism, in which atoms migrate by switching positions with atoms on adjacent lattice sites. Such a mechanism implies that the atomic fluxes of two different materials across an interface must be equal, as each atom moving across the interface causes another atom to move across in the other direction.
Another possible diffusion mechanism involves lattice vacancies. An atom can move into a vacant lattice site, effectively causing the atom and the vacancy to switch places. If large-scale diffusion takes place in a material, there will be a flux of atoms in one direction and a flux of vacancies in the other.
The Kirkendall effect arises when two distinct materials are placed next to each other and diffusion is allowed to take place between them. In general, the diffusion coefficients of the two materials in each other are not the same. This is only possible if diffusion occurs by a vacancy mechanism; if the atoms instead diffused by an exchange mechanism, they would cross the interface in pairs, so the diffusion rates would be identical, contrary to observation. By Fick's 1st law of diffusion, the flux of atoms from the material with the higher diffusion coefficient will be larger, so there will be a net flux of atoms from the material with the higher diffusion coefficient into the material with the lower diffusion coefficient. To balance this flux of atoms, there will be a flux of vacancies in the opposite direction—from the material with the lower diffusion coefficient into the material with the higher diffusion coefficient—resulting in an overall translation of the lattice relative to the environment in the direction of the material with the lower diffusion constant.
Macroscopic evidence for the Kirkendall effect can be gathered by placing inert markers at the initial interface between the two materials, such as molybdenum markers at an interface between copper and brass. The diffusion coefficient of zinc is higher than the diffusion coefficient of copper in this case. Since zinc atoms leave the brass at a higher rate than copper atoms enter, the size of the brass region decreases as diffusion progresses. Relative to the molybdenum markers, the copper–brass interface moves toward the brass at an experimentally measurable rate.
Darken's equations
Shortly after the publication of Kirkendall's paper, L. S. Darken published an analysis of diffusion in binary systems much like the one studied by Smigelskas and Kirkendall. By separating the actual diffusive flux of the materials from the movement of the interface relative to the markers, Darken found the marker velocity to be
v = (D1 − D2) ∂N1/∂x, where D1 and D2 are the diffusion coefficients of the two materials, and N1 is an atomic fraction.
One consequence of this equation is that the movement of an interface varies linearly with the square root of time, which is exactly the experimental relationship discovered by Smigelskas and Kirkendall.
Darken also developed a second equation that defines a combined chemical diffusion coefficient in terms of the diffusion coefficients of the two interfacing materials: D̃ = N1D2 + N2D1, where N1 and N2 are the atomic fractions of the two components.
This chemical diffusion coefficient can be used to mathematically analyze Kirkendall effect diffusion via the Boltzmann–Matano method.
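The following Python sketch evaluates both of Darken's relations for a hypothetical copper–zinc couple; the diffusivities, composition, and gradient below are assumed values chosen only to illustrate the calculation, not measured data:

# Darken's relations for a binary couple (all numerical values are assumed).
D_zn = 5.0e-13    # intrinsic diffusivity of zinc, m^2/s (assumed)
D_cu = 1.0e-13    # intrinsic diffusivity of copper, m^2/s (assumed)
N_zn = 0.30       # local atomic fraction of zinc (assumed)
dN_dx = -30.0     # gradient of the zinc atomic fraction, 1/m (assumed)

v = (D_zn - D_cu) * dN_dx                   # marker velocity
D_chem = N_zn * D_cu + (1 - N_zn) * D_zn    # combined chemical diffusion coefficient
print("marker velocity ~ %.2e m/s" % v)     # with this gradient sign, markers move toward the zinc-rich (brass) side
print("chemical diffusion coefficient ~ %.2e m^2/s" % D_chem)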
Kirkendall porosity
One important consideration deriving from Kirkendall's work is the presence of pores formed during diffusion. These voids act as sinks for vacancies, and when enough accumulate, they can become substantial and expand in an attempt to restore equilibrium. Porosity occurs due to the difference in diffusion rate of the two species.
Pores in metals have ramifications for mechanical, thermal, and electrical properties, and thus control over their formation is often desired. A relation in which the distance moved by a marker grows with the square root of time, with a coefficient determined by the intrinsic diffusivities of the materials and the concentration difference between the components, has proven to be an effective model for mitigating Kirkendall porosity. Controlling annealing temperature is another method of reducing or eliminating porosity. Kirkendall porosity typically occurs at a set temperature in a system, so annealing can be performed at lower temperatures for longer times to avoid formation of pores.
Examples
In 1972, C. W. Horsting of the RCA Corporation published a paper which reported test results on the reliability of semiconductor devices in which the connections were made using aluminium wires bonded ultrasonically to gold-plated posts. His paper demonstrated the importance of the Kirkendall effect in wire bonding technology, but also showed the significant contribution of any impurities present to the rate at which precipitation occurred at the wire bonds. Two of the important contaminants that produce this effect, known as the Horsting effect (with the resulting Horsting voids), are fluorine and chlorine. Both Kirkendall voids and Horsting voids are known causes of wire-bond fractures, though historically this cause is often confused with the purple-colored appearance of one of the five different gold–aluminium intermetallics, commonly referred to as "purple plague" and less often "white plague".
See also
Electromigration
References
External links
Aloke Paul, Tomi Laurila, Vesa Vuorinen and Sergiy Divinski, Thermodynamics, Diffusion and the Kirkendall effect in Solids, Springer, Heidelberg, Germany, 2014.
Kirkendall Effect: Dramatic History of Discovery and Developments by L.N. Paritskaya
Interdiffusion and Kirkendall Effect in Cu-Sn Alloys
Visual demonstration of the Kirkendall effect
Metallurgy | Kirkendall effect | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,690 | [
"Metallurgy",
"Materials science",
"nan"
] |
42,515 | https://en.wikipedia.org/wiki/Infinite%20monkey%20theorem | The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, including the complete works of William Shakespeare. In fact, the monkey would almost surely type every possible finite text an infinite number of times. The theorem can be generalized to state that any sequence of events that has a non-zero probability of happening will almost certainly occur an infinite number of times, given an infinite amount of time or a universe that is infinite in size.
In this context, "almost surely" is a mathematical term meaning the event happens with probability 1, and the "monkey" is not an actual monkey, but a metaphor for an abstract device that produces an endless random sequence of letters and symbols. Variants of the theorem include multiple and even infinitely many typists, and the target text varies between an entire library and a single sentence.
One of the earliest instances of the use of the "monkey metaphor" is that of French mathematician Émile Borel in 1913, but the first instance may have been even earlier. Jorge Luis Borges traced the history of this idea from Aristotle's On Generation and Corruption and Cicero's De Natura Deorum (On the Nature of the Gods), through Blaise Pascal and Jonathan Swift, up to modern statements with their iconic simians and typewriters. In the early 20th century, Borel and Arthur Eddington used the theorem to illustrate the timescales implicit in the foundations of statistical mechanics.
Solution
Direct proof
There is a straightforward proof of this theorem. As an introduction, recall that if two events are statistically independent, then the probability of both happening equals the product of the probabilities of each one happening independently. For example, if the chance of rain in Moscow on a particular day in the future is 0.4 and the chance of an earthquake in San Francisco on any particular day is 0.00003, then the chance of both happening on the same day is 0.4 × 0.00003 = 0.000012, assuming that they are indeed independent.
Consider the probability of typing the word banana on a typewriter with 50 keys. Suppose that the keys are pressed independently and uniformly at random, meaning that each key has an equal chance of being pressed regardless of what keys had been pressed previously. The chance that the first letter typed is 'b' is 1/50, and the chance that the second letter typed is 'a' is also 1/50, and so on. Therefore, the probability of the first six letters spelling banana is:
(1/50) × (1/50) × (1/50) × (1/50) × (1/50) × (1/50) = (1/50)^6 = 1/15,625,000,000.
The result is less than one in 15 billion, but not zero.
From the above, the chance of not typing banana in a given block of 6 letters is 1 − (1/50)^6. Because each block is typed independently, the chance Xn of not typing banana in any of the first n blocks of 6 letters is Xn = (1 − (1/50)^6)^n.
As n grows, Xn gets smaller. For n = 1 million, Xn is roughly 0.9999, but for n = 10 billion Xn is roughly 0.53 and for n = 100 billion it is roughly 0.0017. As n approaches infinity, the probability Xn approaches zero; that is, by making n large enough, Xn can be made as small as is desired, and the chance of typing banana approaches 100%. Thus, the probability of the word banana appearing at some point in an infinite sequence of keystrokes is equal to one.
The same argument applies if we replace one monkey typing n consecutive blocks of text with n monkeys each typing one block (simultaneously and independently). In this case, Xn = (1 − (1/50)^6)^n is the probability that none of the first n monkeys types banana correctly on their first try. Therefore, at least one of infinitely many monkeys will (with probability equal to one) produce a text as quickly as it would be produced by a perfectly accurate human typist copying it from the original.
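The arithmetic above is easy to check numerically. The following Python sketch (an illustration added here, not part of the original article) computes Xn = (1 − (1/50)^6)^n for a few values of n; the figures of roughly 0.9999, 0.53 and 0.0017 quoted above fall out directly.

```python
import math

# Chance that one block of 6 random keystrokes on a 50-key
# typewriter spells "banana".
p_block = (1 / 50) ** 6

def prob_no_banana(n_blocks):
    # X_n = (1 - (1/50)^6)^n, evaluated in log space for numerical safety
    return math.exp(n_blocks * math.log1p(-p_block))

for n in (10**6, 10**10, 10**11):
    print(f"n = {n:.0e}   X_n ≈ {prob_no_banana(n):.4f}")
# prints roughly 0.9999, 0.5273 and 0.0017, matching the text
```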
Infinite strings
This can be stated more generally and compactly in terms of strings, which are sequences of characters chosen from some finite alphabet:
Given an infinite string where each character is chosen uniformly at random, any given finite string almost surely occurs as a substring at some position.
Given an infinite sequence of infinite strings, where each character of each string is chosen uniformly at random, any given finite string almost surely occurs as a prefix of one of these strings.
Both follow easily from the second Borel–Cantelli lemma. For the second theorem, let Ek be the event that the kth string begins with the given text. Because this has some fixed nonzero probability p of occurring, the Ek are independent, and the sum ∑k P(Ek) = p + p + p + ... diverges, so the probability that infinitely many of the Ek occur is 1. The first theorem is shown similarly; one can divide the random string into nonoverlapping blocks matching the size of the desired text and make Ek the event where the kth block equals the desired string.
Probabilities
However, for physically meaningful numbers of monkeys typing for physically meaningful lengths of time the results are reversed. If there were as many monkeys as there are atoms in the observable universe typing extremely fast for trillions of times the life of the universe, the probability of the monkeys replicating even a single page of Shakespeare is unfathomably small.
Ignoring punctuation, spacing, and capitalization, a monkey typing letters uniformly at random has a chance of one in 26 of correctly typing the first letter of Hamlet. It has a chance of one in 676 (26 × 26) of typing the first two letters. Because the probability shrinks exponentially, at 20 letters it already has only a chance of one in 26^20 = 19,928,148,895,209,409,152,340,197,376 (almost 2 × 10^28). In the case of the entire text of Hamlet, the probabilities are so vanishingly small as to be inconceivable. The text of Hamlet contains approximately 130,000 letters. Thus, there is a probability of one in 3.4 × 10^183,946 to get the text right at the first trial. The average number of letters that needs to be typed until the text appears is also 3.4 × 10^183,946, or including punctuation, 4.4 × 10^360,783.
Even if every proton in the observable universe (which is estimated at roughly 10^80) were a monkey with a typewriter, typing from the Big Bang until the end of the universe (when protons might no longer exist), they would still need a far greater amount of time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10^500 chance of success. To put it another way, for a one in a trillion chance of success, there would need to be 10^360,641 observable universes made of protonic monkeys. As Kittel and Kroemer put it in their textbook on thermodynamics, the field whose statistical foundations motivated the first known expositions of typing monkeys, "The probability of Hamlet is therefore zero in any operational sense of an event ...", and the statement that the monkeys must eventually succeed "gives a misleading conclusion about very, very large numbers."
In fact, there is less than a one in a trillion chance of success that such a universe made of monkeys could type any particular document a mere 79 characters long.
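The size of these exponents can be checked with a few lines of Python. This sketch (an illustration, not part of the source) works with base-10 logarithms so the huge numbers never have to be represented directly; the figure of roughly 130,000 letters for Hamlet is the one quoted above.

```python
import math

ALPHABET = 26              # letters only, ignoring case, spacing and punctuation
HAMLET_LETTERS = 130_000   # approximate letter count quoted in the text

# log10 of 26**130000, i.e. the number of equally likely letter sequences
log10_outcomes = HAMLET_LETTERS * math.log10(ALPHABET)
exponent = math.floor(log10_outcomes)
mantissa = 10 ** (log10_outcomes - exponent)

print(f"1 chance in about {mantissa:.1f} x 10^{exponent}")
# -> 1 chance in about 3.4 x 10^183946, matching the figure in the text
```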
An online demonstration showed that short random programs can produce highly structured outputs more often than classical probability suggests, aligning with Gregory Chaitin's modern theorem and building on Algorithmic Information Theory and Algorithmic probability by Ray Solomonoff and Leonid Levin. The demonstration illustrates that the chance of producing a specific binary sequence is not shorter than the base-2 logarithm of the sequence length, showing the difference between Algorithmic probability and classical probability, as well as between random programs and random letters or digits.
Almost surely
The probability that an infinite randomly generated string of text will contain a particular finite substring is 1. However, this does not mean the substring's absence is "impossible", despite the absence having a prior probability of 0. For example, the immortal monkey could randomly type G as its first letter, G as its second, and G as every single letter, thereafter, producing an infinite string of Gs; at no point must the monkey be "compelled" to type anything else. (To assume otherwise implies the gambler's fallacy.) However long a randomly generated finite string is, there is a small but nonzero chance that it will turn out to consist of the same character repeated throughout; this chance approaches zero as the string's length approaches infinity. There is nothing special about such a monotonous sequence except that it is easy to describe; the same fact applies to any nameable specific sequence, such as "RGRGRG" repeated forever, or "a-b-aa-bb-aaa-bbb-...", or "Three, Six, Nine, Twelve…".
If the hypothetical monkey has a typewriter with 90 equally likely keys that include numerals and punctuation, then the first typed keys might be "3.14" (the first three digits of pi) with a probability of (1/90)^4, which is 1/65,610,000. Equally probable is any other string of four characters allowed by the typewriter, such as "GGGG", "mATh", or "q%8e". The probability that 100 randomly typed keys will consist of the first 99 digits of pi (including the separator key), or any other particular sequence of that length, is much lower: (1/90)^100. If the monkey's allotted length of text is infinite, the chance of typing only the digits of pi is 0, which is just as possible (mathematically probable) as typing nothing but Gs (also probability 0).
The same applies to the event of typing a particular version of Hamlet followed by endless copies of itself; or Hamlet immediately followed by all the digits of pi; these specific strings are equally infinite in length, they are not prohibited by the terms of the thought problem, and they each have a prior probability of 0. In fact, any particular infinite sequence the immortal monkey types will have had a prior probability of 0, even though the monkey must type something.
This is an extension of the principle that a finite string of random text has a lower and lower probability of being a particular string the longer it is (though all specific strings are equally unlikely). This probability approaches 0 as the string approaches infinity. Thus, the probability of the monkey typing an endlessly long string, such as all of the digits of pi in order, on a 90-key keyboard is (1/90)^∞, which equals (1/∞), which is essentially 0. At the same time, the probability that the sequence contains a particular subsequence (such as the word MONKEY, or the 12th through 999th digits of pi, or a version of the King James Bible) increases as the total string increases. This probability approaches 1 as the total string approaches infinity, and thus the original theorem is correct.
Correspondence between strings and numbers
In a simplification of the thought experiment, the monkey could have a typewriter with just two keys: 1 and 0. The infinitely long string thusly produced would correspond to the binary digits of a particular real number between 0 and 1. A countably infinite set of possible strings end in infinite repetitions, which means the corresponding real number is rational. Examples include the strings corresponding to one-third (010101...), five-sixths (11010101...) and five-eighths (1010000...). Only a subset of such real number strings (albeit a countably infinite subset) contains the entirety of Hamlet (assuming that the text is subjected to a numerical encoding, such as ASCII).
Meanwhile, there is an uncountably infinite set of strings which do not end in such repetition; these correspond to the irrational numbers. These can be sorted into two uncountably infinite subsets: those which contain Hamlet and those which do not. However, the "largest" subset of all the real numbers is those which not only contain Hamlet, but which contain every other possible string of any length, and with equal distribution of such strings. These irrational numbers are called normal. Because almost all numbers are normal, almost all possible strings contain all possible finite substrings. Hence, the probability of the monkey typing a normal number is 1. The same principles apply regardless of the number of keys from which the monkey can choose; a 90-key keyboard can be seen as a generator of numbers written in base 90.
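The correspondence between repeating binary strings and rational numbers can be verified directly. The small sketch below is illustrative only; it sums the binary expansions given above using exact fractions.

```python
from fractions import Fraction

def repeating_binary(prefix, cycle, terms=200):
    """Approximate value of 0.[prefix][cycle][cycle]... as an exact fraction,
    obtained by expanding the repeating cycle a large number of times."""
    bits = prefix + cycle * terms
    return sum(Fraction(int(b), 2 ** (i + 1)) for i, b in enumerate(bits))

print(repeating_binary("", "01").limit_denominator(100))     # 1/3  from 010101...
print(repeating_binary("11", "01").limit_denominator(100))   # 5/6  from 11010101...
print(repeating_binary("101", "0").limit_denominator(100))   # 5/8  from 1010000...
```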
History
Statistical mechanics
The theorem, in one of the forms in which probabilists now know it, with its "dactylographic" [i.e., typewriting] monkeys (the French word singe covers both monkeys and apes), appeared in Émile Borel's 1913 article "Mécanique Statistique et Irréversibilité" (Statistical mechanics and irreversibility), and in his book "Le Hasard" in 1914. His "monkeys" are not actual monkeys; rather, they are a metaphor for an imaginary way to produce a large, random sequence of letters. Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.
The physicist Arthur Eddington drew on Borel's image further in The Nature of the Physical World (1928), writing:
These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys' success is effectively impossible, and it may safely be said that such a process will never happen. It is clear from the context that Eddington is not suggesting that the probability of this happening is worthy of serious consideration. On the contrary, it was a rhetorical illustration of the fact that below certain levels of probability, the term improbable is functionally equivalent to impossible.
Origins and "The Total Library"
In a 1939 essay entitled "The Total Library", Argentine writer Jorge Luis Borges traced the infinite-monkey concept back to Aristotle's Metaphysics. Explaining the views of Leucippus, who held that the world arose through the random combination of atoms, Aristotle notes that the atoms themselves are homogeneous and their possible arrangements only differ in shape, position and ordering. In On Generation and Corruption, the Greek philosopher compares this to the way that a tragedy and a comedy consist of the same "atoms", i.e., alphabetic characters. Three centuries later, Cicero's De natura deorum (On the Nature of the Gods) argued against the Epicurean atomist worldview:
Borges follows the history of this argument through Blaise Pascal and Jonathan Swift, then observes that in his own time, the vocabulary had changed. By 1939, the idiom was "that a half-dozen monkeys provided with typewriters would, in a few eternities, produce all the books in the British Museum." (To which Borges adds, "Strictly speaking, one immortal monkey would suffice.") Borges then imagines the contents of the Total Library which this enterprise would produce if carried to its fullest extreme:
Borges' total library concept was the main theme of his widely read 1941 short story "The Library of Babel", which describes an unimaginably vast library consisting of interlocking hexagonal chambers, together containing every possible volume that could be composed from the letters of the alphabet and some punctuation characters.
Actual monkeys
In 2002, lecturers and students from the University of Plymouth MediaLab Arts course used a £2,000 grant from the Arts Council to study the literary output of real monkeys. They left a computer keyboard in the enclosure of six Celebes crested macaques in Paignton Zoo in Devon, England from May 1 to June 22, with a radio link to broadcast the results on a website.
Not only did the monkeys produce nothing but five total pages largely consisting of the letter "S", but the lead male also began striking the keyboard with a stone, and other monkeys followed by urinating and defecating on the machine. Mike Phillips, director of the university's Institute of Digital Arts and Technology (i-DAT), said that the artist-funded project was primarily performance art, and they had learned "an awful lot" from it. He concluded that monkeys "are not random generators. They're more complex than that. ... They were quite interested in the screen, and they saw that when they typed a letter, something happened. There was a level of intention there."
Applications and criticisms
Evolution
In his 1931 book The Mysterious Universe, Eddington's rival James Jeans attributed the monkey parable to a "Huxley", presumably meaning Thomas Henry Huxley. This attribution is incorrect. Today, it is sometimes further reported that Huxley applied the example in a now-legendary debate over Charles Darwin's On the Origin of Species with the Anglican Bishop of Oxford, Samuel Wilberforce, held at a meeting of the British Association for the Advancement of Science at Oxford on 30 June 1860. This story suffers not only from a lack of evidence, but the fact that in 1860 the typewriter was not yet commercially available.
Despite the original mix-up, monkey-and-typewriter arguments are now common in arguments over evolution. As an example of Christian apologetics, Doug Powell argued that even if a monkey accidentally types the letters of Hamlet, it has failed to produce Hamlet because it lacked the intention to communicate. His parallel implication is that natural laws could not produce the information content in DNA. A more common argument is represented by Reverend John F. MacArthur, who claimed that the genetic mutations necessary to produce a tapeworm from an amoeba are as unlikely as a monkey typing Hamlet's soliloquy, and hence the odds against the evolution of all life are impossible to overcome.
Evolutionary biologist Richard Dawkins employs the typing monkey concept in his book The Blind Watchmaker to demonstrate the ability of natural selection to produce biological complexity out of random mutations. In a simulation experiment Dawkins has his weasel program produce the Hamlet phrase METHINKS IT IS LIKE A WEASEL, starting from a randomly typed parent, by "breeding" subsequent generations and always choosing the closest match from progeny that are copies of the parent with random mutations. The chance of the target phrase appearing in a single step is extremely small, yet Dawkins showed that it could be produced rapidly (in about 40 generations) using cumulative selection of phrases. The random choices furnish raw material, while cumulative selection imparts information. As Dawkins acknowledges, however, the weasel program is an imperfect analogy for evolution, as "offspring" phrases were selected "according to the criterion of resemblance to a distant ideal target." In contrast, Dawkins affirms, evolution has no long-term plans and does not progress toward some distant goal (such as humans). The weasel program is instead meant to illustrate the difference between non-random cumulative selection, and random single-step selection. In terms of the typing monkey analogy, this means that Romeo and Juliet could be produced relatively quickly if placed under the constraints of a nonrandom, Darwinian-type selection because the fitness function will tend to preserve in place any letters that happen to match the target text, improving each successive generation of typing monkeys.
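A minimal weasel-style cumulative-selection run can be written in a few lines. The sketch below illustrates the idea described above; it is not Dawkins' original program, and the population size and mutation rate are arbitrary illustrative choices.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
POP, MUTATION = 100, 0.05          # arbitrary illustrative parameters

def score(s):
    # number of positions that already match the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(parent):
    # copy the parent, flipping each character with a small probability
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # breed a brood of mutated copies and keep the closest match
    parent = max((mutate(parent) for _ in range(POP)), key=score)
print(f"reached the target in {generation} generations")
```

With these settings the target is usually reached within a few dozen generations, in contrast to the astronomically many attempts single-step random typing would need.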
A different avenue for exploring the analogy between evolution and an unconstrained monkey lies in the problem that the monkey types only one letter at a time, independently of the other letters. Hugh Petrie argues that a more sophisticated setup is required, in his case not for biological evolution but the evolution of ideas:
James W. Valentine, while admitting that the classic monkey's task is impossible, finds that there is a worthwhile analogy between written English and the metazoan genome in this other sense: both have "combinatorial, hierarchical structures" that greatly constrain the immense number of combinations at the alphabet level.
Zipf's law
Zipf's law states that the frequency of a word is a power-law function of its frequency rank: word frequency ≈ C/(word rank)^s, where C and s are real numbers. Assuming that a monkey is typing randomly, with fixed and nonzero probability of hitting each letter key or white space, the text produced by the monkey follows Zipf's law.
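This behaviour of random typing is easy to observe empirically. The sketch below is an illustration rather than a proof: it generates random text over 26 letters plus a space key and ranks the resulting "words" by frequency; plotted on a log-log scale, frequency against rank is close to a straight line (with the exponent near 1, as is typical for natural-language text).

```python
import random
import string
from collections import Counter

random.seed(0)
KEYS = string.ascii_lowercase + " "        # 26 letters plus a space bar
text = "".join(random.choices(KEYS, k=2_000_000))

counts = Counter(w for w in text.split() if w)
ranked = counts.most_common()

# Print frequency against rank at a few ranks; the single-letter "words"
# dominate, then two-letter words, and so on, giving a power-law staircase.
for rank in (1, 10, 100, 1000):
    if rank <= len(ranked):
        word, freq = ranked[rank - 1]
        print(f"rank {rank:>5}: {word!r} occurs {freq} times")
```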
Literary theory
R. G. Collingwood argued in 1938 that art cannot be produced by accident, and wrote as a sarcastic aside to his critics,
Nelson Goodman took the contrary position, illustrating his point along with Catherine Elgin by the example of Borges' "Pierre Menard, Author of the Quixote",
In another writing, Goodman elaborates, "That the monkey may be supposed to have produced his copy randomly makes no difference. It is the same text, and it is open to all the same interpretations. ..." Gérard Genette dismisses Goodman's argument as begging the question.
For Jorge J. E. Gracia, the question of the identity of texts leads to a different question, that of author. If a monkey is capable of typing Hamlet, despite having no intention of meaning and therefore disqualifying itself as an author, then it appears that texts do not require authors. Possible solutions include saying that whoever finds the text and identifies it as Hamlet is the author; or that Shakespeare is the author, the monkey his agent, and the finder merely a user of the text. These solutions have their own difficulties, in that the text appears to have a meaning separate from the other agents: What if the monkey operates before Shakespeare is born, or if Shakespeare is never born, or if no one ever finds the monkey's typescript?
Simulated and limited conditions
In 1979, William R. Bennett Jr., a professor of physics at Yale University, brought fresh attention to the theorem with a series of computer programs. Dr. Bennett simulated varying conditions under which an imaginary monkey, given a keyboard consisting of twenty-eight characters, and typing ten keys per second, might attempt to reproduce the sentence, "To be or not to be, that is the question." Although his experiments agreed with the overall conclusion that even such a short string of words would require many times the current age of the universe to reproduce, he noted that by modifying the statistical probability of certain letters to match the ordinary patterns of various languages and of Shakespeare in particular, seemingly random strings of words could be made to appear. But even with several refinements, the English sentence closest to the target phrase remained gibberish: "TO DEA NOW NAT TO BE WILL AND THEM BE DOES DOESORNS CAI AWROUTROULD."
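The kind of modification Bennett describes, biasing the keys toward the letter frequencies of English rather than drawing them uniformly, can be sketched as follows. The frequencies and alphabet below are illustrative assumptions, not Bennett's actual data or program.

```python
import random

# Rough relative weights for a few common English characters
# (illustrative values only; Bennett used 28 characters and
# statistics derived from real text).
weights = {
    " ": 18, "e": 10, "t": 7, "a": 6, "o": 6, "n": 5, "i": 5,
    "s": 5, "h": 5, "r": 4, "d": 3, "l": 3, "u": 2, "b": 1,
    "q": 0.1, "z": 0.1,
}
chars = list(weights)
probs = list(weights.values())

random.seed(1)
sample = "".join(random.choices(chars, weights=probs, k=120))
print(sample)
# The output is still gibberish, but its letter statistics resemble
# English far more closely than uniform random keystrokes do.
```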
Random document generation
The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation.
One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on 4 August 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, "VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".
A website entitled The Monkey Shakespeare Simulator, launched on 1 July 2003, contained a Java applet that simulated a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters:
RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d
Due to processing power limitations, the program used a probabilistic model (by using a random number generator or RNG) instead of actually generating random text and comparing it to Shakespeare. When the simulator "detected a match" (that is, the RNG generated a certain value or a value within a certain range), the simulator simulated the match by generating matched text.
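That shortcut can be imitated by sampling how far a random block happens to match the target instead of generating and comparing every keystroke. The sketch below is a guess at the general idea, not the simulator's actual code; with per-character probability p = 1/keys, the chance that a block matches exactly k leading characters is p^k(1 − p).

```python
import random

def sample_match_length(p, max_len):
    """Length of the matching prefix of one random block:
    each successive character matches with probability p."""
    k = 0
    while k < max_len and random.random() < p:
        k += 1
    return k

random.seed(2)
p = 1 / 27                # e.g. 26 letters plus space, all equally likely
best = max(sample_match_length(p, 40) for _ in range(1_000_000))
print("best match length over a million simulated blocks:", best)
```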
Testing of random-number generators
Questions about the statistics describing how often an ideal monkey is expected to type certain strings translate into practical tests for random-number generators; these range from the simple to the "quite sophisticated". Computer-science professors George Marsaglia and Arif Zaman report that they used to call one such category of tests "overlapping m-tuple tests" in lectures, since they concern overlapping m-tuples of successive elements in a random sequence. But they found that calling them "monkey tests" helped to motivate the idea with students. They published a report on the class of tests and their results for various RNGs in 1993.
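A toy version of such an overlapping m-tuple ("monkey") test can be written directly from the description: slide a window of m symbols over the generator's output, count how often each tuple occurs, and compare the counts with what a uniform source should produce. The sketch below is illustrative and far simpler than Marsaglia and Zaman's published tests; because overlapping tuples are not independent, the statistic is only chi-squared-like, not an exact chi-squared variable.

```python
import random
from collections import Counter

def monkey_test(bits, m=3):
    """Count overlapping m-tuples in a 0/1 string and return a
    chi-squared-style statistic against the uniform expectation."""
    n = len(bits) - m + 1
    counts = Counter(bits[i:i + m] for i in range(n))
    expected = n / 2 ** m
    return sum((counts[f"{t:0{m}b}"] - expected) ** 2 / expected
               for t in range(2 ** m))

random.seed(3)
good = "".join(random.choice("01") for _ in range(100_000))
bad = "01" * 50_000                    # a blatantly non-random sequence
print("statistic for Python's RNG :", round(monkey_test(good), 1))
print("statistic for '0101...'    :", round(monkey_test(bad), 1))
# The patterned sequence scores enormously higher, flagging it as non-random.
```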
In popular culture
The infinite monkey theorem and its associated imagery are considered a popular and proverbial illustration of the mathematics of probability, widely known to the general public because of their transmission through popular culture rather than through formal education. This is helped by the innate humor stemming from the image of literal monkeys rattling away on a set of typewriters, and is a popular visual gag.
A quotation attributed to a 1996 speech by Robert Wilensky stated, "We've heard that a million monkeys at a million keyboards could produce the complete works of Shakespeare; now, thanks to the Internet, we know that is not true."
The enduring, widespread popularity of the theorem was noted in the introduction to a 2001 paper, "Monkeys, Typewriters and Networks: The Internet in the Light of the Theory of Accidental Excellence". In 2002, an article in The Washington Post said, "Plenty of people have had fun with the famous notion that an infinite number of monkeys with an infinite number of typewriters and an infinite amount of time could eventually write the works of Shakespeare". In 2003, the previously mentioned Arts Council−funded experiment involving real monkeys and a computer keyboard received widespread press coverage. In 2007, the theorem was listed by Wired magazine in a list of eight classic thought experiments.
American playwright David Ives' short one-act play Words, Words, Words, from the collection All in the Timing, pokes fun at the concept of the infinite monkey theorem.
In 2015 Balanced Software released Monkey Typewriter on the Microsoft Store. The software generates random text in the spirit of the infinite monkey theorem and searches it for phrases entered by the user. It should be regarded as a practical demonstration of the idea rather than a scientific model of random text generation.
See also
, another thought experiment involving infinity
Notes
References
External links
– a bibliography with quotations
– on populating the cosmos with monkey particles
– Matt Kane's application of the Infinite Monkey Theorem on pixels to create images.
– April Fools' Day RFC on the implementation of the Infinite Monkey Theorem.
Articles containing proofs
Metaphors referring to monkeys
Infinity
Literary theory
Probability theorems
Statistical randomness
Random text generation
Thought experiments | Infinite monkey theorem | [
"Mathematics"
] | 5,796 | [
"Mathematical objects",
"Infinity",
"Theorems in probability theory",
"Mathematical problems",
"Articles containing proofs",
"Mathematical theorems"
] |
42,526 | https://en.wikipedia.org/wiki/Etching | Etching is traditionally the process of using strong acid or mordant to cut into the unprotected parts of a metal surface to create a design in intaglio (incised) in the metal. In modern manufacturing, other chemicals may be used on other types of material. As a method of printmaking, it is, along with engraving, the most important technique for old master prints, and remains in wide use today. In a number of modern variants such as microfabrication etching and photochemical milling, it is a crucial technique in modern technology, including circuit boards.
In traditional pure etching, a metal plate (usually of copper, zinc or steel) is covered with a waxy ground which is resistant to acid. The artist then scratches off the ground with a pointed etching needle where the artist wants a line to appear in the finished piece, exposing the bare metal. The échoppe, a tool with a slanted oval section, is also used for "swelling" lines. The plate is then dipped in a bath of acid, known as the mordant (French for "biting") or etchant, or has acid washed over it. The acid "bites" into the metal (it undergoes a redox reaction) to a depth depending on time and acid strength, leaving behind the drawing (as carved into the wax) on the metal plate. The remaining ground is then cleaned off the plate. For the first and subsequent printings, the plate is inked all over with any chosen non-corrosive ink, and the surface ink is then drained and wiped clean, leaving ink in the etched forms.
The plate is then put through a high-pressure printing press together with a sheet of paper (often moistened to soften it). The paper picks up the ink from the etched lines, making a print. The process can be repeated many times; typically several hundred impressions (copies) could be printed before the plate shows much sign of wear. The work on the plate can be added to or repaired by re-waxing and further etching; such an etching (plate) may have been used in more than one state.
Etching has often been combined with other intaglio techniques such as engraving (e.g., Rembrandt) or aquatint (e.g., Francisco Goya).
History
Origin
Etching in antiquity
Etching was already used in antiquity for decorative purposes. Etched carnelian beads are a type of ancient decorative beads made from carnelian with an etched design in white, which were probably manufactured by the Indus Valley civilization during the 3rd millennium BCE. They were made according to a technique of alkaline etching developed by the Harappans, and vast quantities of these beads were found in the archaeological sites of the Indus Valley civilization. They are considered as an important marker of ancient trade between the Indus Valley, Mesopotamia and even Ancient Egypt, as these precious and unique manufactured items circulated in great numbers between these geographical areas during the 3rd millennium BCE, and have been found in numerous tomb deposits. Sumerian kings, such as Shulgi , also created etched carnelian beads for dedication purposes.
Early etching
Etching by goldsmiths and other metal-workers in order to decorate metal items such as guns, armour, cups and plates has been known in Europe since the Middle Ages at least, and may go back to antiquity. The elaborate decoration of armour, in Germany at least, was an art probably imported from Italy around the end of the 15th century—little earlier than the birth of etching as a printmaking technique. Printmakers from the German-speaking lands and Central Europe perfected the art and transmitted their skills over the Alps and across Europe.
The process as applied to printmaking is believed to have been invented by Daniel Hopfer (–1536) of Augsburg, Germany. Hopfer was a craftsman who decorated armour in this way, and applied the method to printmaking, using iron plates (many of which still exist). Apart from his prints, there are two proven examples of his work on armour: a shield from 1536 now in the Real Armeria of Madrid and a sword in the Germanisches Nationalmuseum of Nuremberg. An Augsburg horse armour in the German Historical Museum, Berlin, dating to between 1512 and 1515, is decorated with motifs from Hopfer's etchings and woodcuts, but this is no evidence that Hopfer himself worked on it, as his decorative prints were largely produced as patterns for other craftsmen in various media. The oldest dated etching is by Albrecht Dürer in 1515, although he returned to engraving after six etchings instead of developing the craft.
The switch to copper plates was probably made in Italy, and thereafter etching soon came to challenge engraving as the most popular medium for artists in printmaking. Its great advantage was that, unlike engraving where the difficult technique for using the burin requires special skill in metalworking, the basic technique for creating the image on the plate in etching is relatively easy to learn for an artist trained in drawing. On the other hand, the handling of the ground and acid need skill and experience, and are not without health and safety risks, as well as the risk of a ruined plate.
Callot's innovations: échoppe, hard ground, stopping-out
Jacques Callot (1592–1635) from Nancy in Lorraine (now part of France) made important technical advances in etching technique.
Callot also appears to have been responsible for an improved, harder, recipe for the etching ground, using lute-makers' varnish rather than a wax-based formula. This enabled lines to be more deeply bitten, prolonging the life of the plate in printing, and also greatly reducing the risk of "foul-biting", where acid gets through the ground to the plate where it is not intended to, producing spots or blotches on the image. Previously the risk of foul-biting had always been at the back of an etcher's mind, preventing too much time on a single plate that risked being ruined in the biting process. Now etchers could do the highly detailed work that was previously the monopoly of engravers, and Callot made full use of the new possibilities.
Callot also made more extensive and sophisticated use of multiple "stoppings-out" than previous etchers had done. This is the technique of letting the acid bite lightly over the whole plate, then stopping-out those parts of the work which the artist wishes to keep light in tone by covering them with ground before bathing the plate in acid again. He achieved unprecedented subtlety in effects of distance and light and shade by careful control of this process. Most of his prints were relatively small—up to about six inches or 15 cm on their longest dimension, but packed with detail.
One of his followers, the Parisian Abraham Bosse, spread Callot's innovations all over Europe with the first published manual of etching, which was translated into Italian, Dutch, German and English.
The 17th century was the great age of etching, with Rembrandt, Giovanni Benedetto Castiglione and many other masters. In the 18th century, Piranesi, Tiepolo and Daniel Chodowiecki were the best of a smaller number of fine etchers. In the 19th and early 20th century, the Etching revival produced a host of lesser artists, but no really major figures. Etching is still widely practiced today.
Variants
Aquatint uses acid-resistant resin to achieve tonal effects.
Soft-ground etching uses a special softer ground. The artist places a piece of paper (or cloth etc. in modern uses) over the ground and draws on it. The print resembles a drawing. Soft ground can also be used to capture the texture or pattern of fabrics or furs pressed into the soft surface.
Other materials that are not manufactured specifically for etching can be used as grounds or resists. Examples include printing ink, paint, spray paint, oil pastels, candle or bees wax, tacky vinyl or stickers, and permanent markers.
There are some new non-toxic grounds on the market that work differently than typical hard or soft grounds.
Relief etching was invented by William Blake in about 1788, and he has been almost the only artist to use it in its original form. However, from 1880 to 1950 a photo-mechanical ("line-block") variant was the dominant form of commercial printing for images. A similar process to etching, but printed as a relief print, so it is the "white" background areas which are exposed to the acid, and the areas to print "black" which are covered with ground. Blake's exact technique remains controversial. He used the technique to print texts and images together, writing the text and drawing lines with an acid-resistant medium.
Carborundum etching (sometimes called carbograph printing) was invented in the mid-20th century by American artists who worked for the WPA. In this technique, a metal plate is first covered with silicon carbide grit and run through an etching press; then a design is drawn on the roughened plate using an acid-resistant medium. After immersion in an acid bath, the resulting plate is printed as a relief print. The roughened surface of the relief permits considerable tonal range, and it is possible to attain a high relief that results in strongly embossed prints.
Printmaking technique in detail
A waxy acid-resist, known as a ground, is applied to a metal plate, most often copper or zinc but steel plate is another medium with different qualities. There are two common types of ground: hard ground and soft ground.
Hard ground can be applied in two ways. Solid hard ground comes in a hard waxy block. To apply hard ground of this variety, the plate to be etched is placed upon a hot-plate (set at 70 °C, 158 °F), a kind of metal worktop that is heated up. The plate heats up and the ground is applied by hand, melting onto the plate as it is applied. The ground is spread over the plate as evenly as possible using a roller. Once applied the etching plate is removed from the hot-plate and allowed to cool which hardens the ground.
After the ground has hardened the artist "smokes" the plate, classically with 3 beeswax tapers, applying the flame to the plate to darken the ground and make it easier to see what parts of the plate are exposed. Smoking not only darkens the plate but adds a small amount of wax. Afterwards the artist uses a sharp tool to scratch into the ground, exposing the metal.
The second way to apply hard ground is by liquid hard ground. This comes in a can and is applied with a brush upon the plate to be etched. Exposed to air the hard ground will harden. Some printmakers use oil- or tar-based asphaltum or bitumen as hard ground, although often bitumen is used to protect steel plates from rust and copper plates from aging.
Soft ground also comes in liquid form and is allowed to dry but it does not dry hard like hard ground and is impressionable. After the soft ground has dried the printmaker may apply materials such as leaves, objects, hand prints and so on which will penetrate the soft ground and expose the plate underneath.
The ground can also be applied in a fine mist, using powdered rosin or spraypaint. This process is called aquatint, and allows for the creation of tones, shadows, and solid areas of color.
The design is then drawn (in reverse) with an etching-needle or échoppe. An "echoppe" point can be made from an ordinary tempered steel etching needle, by grinding the point back on a carborundum stone, at a 45–60 degree angle. The "echoppe" works on the same principle that makes a fountain pen's line more attractive than a ballpoint's: The slight swelling variation caused by the natural movement of the hand "warms up" the line, and although hardly noticeable in any individual line, has a very attractive overall effect on the finished plate. It can be used for drawing in the same way as an ordinary needle.
The plate is then completely submerged in a solution that eats away at the exposed metal. Ferric chloride may be used for etching copper or zinc plates, whereas nitric acid may be used for etching zinc or steel plates. Typical solutions are 1 part FeCl3 to 1 part water and 1 part nitric acid to 3 parts water. The strength of the acid determines the speed of the etching process.
The etching process is known as biting (see also spit-biting below). The waxy resist prevents the acid from biting the parts of the plate which have been covered. The longer the plate remains in the acid the deeper the "bites" become.
During the etching process the printmaker uses a bird feather or similar item to wave away bubbles and detritus produced by the dissolving process, from the surface of the plate, or the plate may be periodically lifted from the acid bath. If a bubble is allowed to remain on the plate then it will stop the acid biting into the plate where the bubble touches it. Zinc produces more bubbles much more rapidly than copper and steel and some artists use this to produce interesting round bubble-like circles within their prints for a Milky Way effect.
The detritus is powdery dissolved metal that fills the etched grooves and can also block the acid from biting evenly into the exposed plate surfaces. Another way to remove detritus from a plate is to place the plate to be etched face down within the acid upon plasticine balls or marbles, although the drawback of this technique is the exposure to bubbles and the inability to remove them readily.
For aquatinting a printmaker will often use a test strip of metal about a centimetre to three centimetres wide. The strip will be dipped into the acid for a specific number of minutes or seconds. The metal strip will then be removed and the acid washed off with water. Part of the strip will be covered in ground and then the strip is redipped into the acid and the process repeated. The ground will then be removed from the strip and the strip inked up and printed. This will show the printmaker the different degrees or depths of the etch, and therefore the strength of the ink color, based upon how long the plate is left in the acid.
The plate is removed from the acid and washed over with water to remove the acid. The ground is removed with a solvent such as turpentine. Turpentine is often removed from the plate using methylated spirits since turpentine is greasy and can affect the application of ink and the printing of the plate.
Spit-biting is a process whereby the printmaker will apply acid to a plate with a brush in certain areas of the plate. The plate may be aquatinted for this purpose or exposed directly to the acid. The process is known as "spit"-biting due to the use of saliva once used as a medium to dilute the acid, although gum arabic or water are now commonly used.
A piece of matte board, a plastic "card", or a wad of cloth is often used to push the ink into the incised lines. The surface is wiped clean with a piece of stiff fabric known as tarlatan and then wiped with newsprint paper; some printmakers prefer to use the blade part of their hand or palm at the base of their thumb. The wiping leaves ink in the incisions. A folded piece of organza silk may also be used for the final wipe. If copper or zinc plates are used, then the plate surface is left very clean and therefore white in the print. If steel plate is used, then the plate's natural tooth gives the print a grey background similar to the effects of aquatinting. As a result, steel plates do not need aquatinting as gradual exposure of the plate via successive dips into acid will produce the same result.
A damp piece of paper is placed over the plate and it is run through the press.
Nontoxic etching
Growing concerns about the health effects of acids and solvents led to the development of less toxic etching methods in the late 20th century. An early innovation was the use of floor wax as a hard ground for coating the plate. Others, such as printmakers Mark Zaffron and Keith Howard, developed systems using acrylic polymers as a ground and ferric chloride for etching. The polymers are removed with sodium carbonate (washing soda) solution, rather than solvents. When used for etching, ferric chloride does not produce a corrosive gas, as acids do, thus eliminating another danger of traditional etching.
The traditional aquatint, which uses either powdered rosin or enamel spray paint, is replaced with an airbrush application of the acrylic polymer hard ground. Again, no solvents are needed beyond the soda ash solution, though a ventilation hood is needed due to acrylic particulates from the air brush spray.
The traditional soft ground, requiring solvents for removal from the plate, is replaced with water-based relief printing ink. The ink receives impressions like traditional soft ground, resists the ferric chloride etchant, yet can be cleaned up with warm water and either soda ash solution or ammonia.
Anodic etching has been used in industrial processes for over a century. The etching power source supplies direct current. The item to be etched (anode) is connected to its positive pole. A receiver plate (cathode) is connected to its negative pole. Both, spaced slightly apart, are immersed in a suitable aqueous solution of an electrolyte. The current pushes the metal out from the anode into solution and deposits it as metal on the cathode. Shortly before 1990, two groups working independently developed different ways of applying it to creating intaglio printing plates.
In the patented Electroetch system, invented by Marion and Omri Behr, in contrast to certain nontoxic etching methods, an etched plate can be reworked as often as the artist desires. The system uses voltages below 2 volts, which exposes the uneven metal crystals in the etched areas, resulting in superior ink retention and a printed image quality equivalent to traditional acid methods. With polarity reversed, the low voltage provides a simpler method of making mezzotint plates as well as "steel facing" copper plates.
Some of the earliest printmaking workshops experimenting with, developing and promoting nontoxic techniques include Grafisk Eksperimentarium, in Copenhagen, Denmark, Edinburgh Printmakers, in Scotland, and New Grounds Print Workshop, in Albuquerque, New Mexico.
Photo-etching
Light sensitive polymer plates allow for photorealistic etchings. A photo-sensitive coating is applied to the plate by either the plate supplier or the artist. Light is projected onto the plate as a negative image to expose it. Photopolymer plates are either washed in hot water or under other chemicals according to the plate manufacturers' instructions. Areas of the photo-etch image may be stopped-out before etching to exclude them from the final image on the plate, or removed or lightened by scraping and burnishing once the plate has been etched. Once the photo-etching process is complete, the plate can be worked further as a normal intaglio plate, using drypoint, further etching, engraving, etc. The final result is an intaglio plate which is printed like any other.
Types of metal plates
Copper is the traditional metal for etching, and is still preferred, as it bites evenly, holds texture well, and does not distort the color of the ink when wiped. Zinc is cheaper than copper, so preferable for beginners, but it does not bite as cleanly as copper does, and it alters some colors of ink. Steel is growing in popularity as an etching substrate. Increases in the prices of copper and zinc have made steel an acceptable alternative. The line quality of steel is less fine than copper, but finer than zinc. Steel has a natural and rich aquatint.
The type of metal used for the plate impacts the number of prints the plate will produce. The firm pressure of the printing press slowly rubs out the finer details of the image with every pass-through. With relatively soft copper, for example, the etching details will begin to wear very quickly, some copper plates show extreme wear after only ten prints. Steel, on the other hand, is incredibly durable. This wearing out of the image over time is one of the reasons etched prints created early in a numbered series tend to be valued more highly. An artist thus takes the total number of prints he or she wishes to produce into account whenever choosing the metal.
Industrial uses
Etching is also used in the manufacturing of printed circuit boards and semiconductor devices, and in the preparation of metallic specimens for microscopic observation.
Prior to 1100 AD, the New World Hohokam culture independently utilized the technique of acid etching in marine shell designs. The shells were daubed in pitch and then bathed in acid probably made from fermented cactus juice.
Metallographic etching
Metallographic etching is a method of preparing samples of metal for analysis. It can be applied after polishing to further reveal microstructural features (such as grain size, distribution of phases, and inclusions), along with other aspects such as prior mechanical deformation or thermal treatments. Metal can be etched using chemicals, electrolysis, or heat (thermal etching).
Controlling the acid's effects
There are many ways for the printmaker to control the acid's effects.
Hard grounds
Most typically, the surface of the plate is covered in a hard, waxy 'ground' that resists acid. The printmaker then scratches through the ground with a sharp point, exposing lines of metal which the mordant acid attacks.
Aquatint
Aquatint is a variation giving only tone rather than lines when printed. Particulate resin is evenly distributed on all or parts of the plate, then heated to form a screen ground of uniform, but less than perfect, density. After etching, any exposed surface will result in a roughened (i.e., darkened) surface. Areas that are to be light in the final print are protected by varnishing between acid baths. Successive turns of varnishing and placing the plate in acid create areas of tone difficult or impossible to achieve by drawing through a wax ground.
Sugar lift
Designs in a syrupy solution of sugar or Camp Coffee are painted onto the metal surface prior to it being coated in a liquid etching ground or 'stop out' varnish. When the plate is placed in hot water the sugar dissolves, leaving the image. The plate can then be etched.
Spit bite
A mixture of nitric acid and gum arabic (or, very rarely, saliva) which can be dripped, spattered or painted onto a metal surface giving interesting results. A mixture of nitric acid and rosin may also be used.
Printing
Printing the plate is done by covering the surface with printing ink, then rubbing the ink off the surface with tarlatan cloth or newsprint, leaving ink in the roughened areas and lines. Damp paper is placed on the plate, and both are run through a printing press; the pressure forces the paper into contact with the ink, transferring the image (c.f., chine-collé). The pressure subtly degrades the image in the plate, smoothing the roughened areas and closing the lines; a copper plate is good for, at most, a few hundred printings of a strongly etched image before the degradation is considered too great by the artist. At that point, the artist can manually restore the plate by re-etching it, essentially putting ground back on and retracing their lines; alternatively, plates can be electro-plated before printing with a harder metal to preserve the surface. Zinc is also used, because as a softer metal, etching times are shorter; however, that softness also leads to faster degradation of the image in the press.
Faults
Foul-bite or "over-biting" is common in etching, and is the effect of minuscule amounts of acid leaking through the ground to create minor pitting and burning on the surface. This incidental roughening may be removed by smoothing and polishing the surface, but artists often leave faux-bite, or deliberately court it by handling the plate roughly, because it is viewed as a desirable mark of the process.
"Etchings" euphemism
The phrase "Want to come up and see my etchings?" is a romantic euphemism by which a person entices someone to come back to their place with an offer to look at something artistic, but with ulterior motives. The phrase is a corruption of some phrases in a novel by Horatio Alger Jr. called The Erie Train Boy, which was first published in 1891. Alger was an immensely popular author in the 19th century—especially with young people—and his books were widely quoted. In chapter XXII of the book, a woman writes to her boyfriend, "I have a new collection of etchings that I want to show you. Won't you name an evening when you will call, as I want to be certain to be at home when you really do come." The boyfriend then writes back "I shall no doubt find pleasure in examining the etchings which you hold out as an inducement to call."
This was referenced in a 1929 James Thurber cartoon in which a man tells a woman in a building lobby: "You wait here and I'll bring the etchings down". It was also referenced in Dashiell Hammett's 1934 novel The Thin Man, in which the narrator answers his wife asking him about a lady he had wandered off with by saying: "She just wanted to show me some French etchings."
The phrase was given new popularity in 1937: in a well publicized case, violinist David Rubinoff was accused of inviting a young woman to his hotel room to view some French etchings, but instead seducing her.
As early as 1895, Hjalmar Söderberg used the reference in his "decadent" début novel Delusions (Swedish: Förvillelser), in which the dandy Johannes Hall lures the main character's younger sister Greta into his room under the pretence of browsing through his etchings and engravings (e.g., Die Sünde by Franz Stuck).
See also
Acid test (gold)
Electroetching
List of art techniques
List of etchings by Rembrandt
List of printmakers
Old master prints for the history of the method
Photoengraving
Photolithography
References
External links
Prints & People: A Social History of Printed Pictures, an exhibition catalog from The Metropolitan Museum of Art, which contains material on etching
The Print Australia Reference Library Catalogue
Etching from the MMA Timeline of Art History
Metropolitan Museum, materials-and-techniques: etching
Museum of Modern Art information on printing techniques and examples of prints
The Wenceslaus Hollar Collection of digitized books and images at the University of Toronto
Carrington, Fitzroy. Prints and their makers: essays on engravers and etchers old and modern. United States: The Century Co., 1911, copyright 1912.
Printmaking
Relief printing
Metalworking
Chemical processes | Etching | [
"Chemistry"
] | 5,657 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
42,553 | https://en.wikipedia.org/wiki/DATR | DATR is a language for lexical knowledge representation. The lexical knowledge is encoded in a network of nodes. Each node has a set of attributes encoded with it. A node can represent a word or a word form.
DATR was developed in the late 1980s by Roger Evans, Gerald Gazdar and Bill Keller, and used extensively in the 1990s; the standard specification is contained in the Evans and Gazdar RFC, available on the Sussex website (below). DATR has been implemented in a variety of programming languages, and several implementations are available on the internet, including an RFC compliant implementation at the Bielefeld website (below).
DATR is still used for encoding inheritance networks in various linguistic and non-linguistic domains and is under discussion as a standard notation for the representation of lexical information.
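DATR has its own concrete syntax (defined in the Evans and Gazdar specification), but its core idea, default inheritance of attribute paths through a network of nodes, can be sketched in ordinary Python. The node and attribute names below are invented for illustration and are not drawn from any real DATR lexicon.

```python
class Node:
    """A lexical node: local attribute paths plus a parent node to
    fall back on, giving simple default inheritance."""
    def __init__(self, name, parent=None, **attrs):
        self.name, self.parent, self.attrs = name, parent, attrs

    def lookup(self, path):
        if path in self.attrs:          # local value overrides the default
            return self.attrs[path]
        if self.parent is not None:     # otherwise inherit from the parent
            return self.parent.lookup(path)
        raise KeyError(path)

# A tiny invented network: regular verbs inherit a default past-tense
# rule, while an irregular verb overrides it locally.
verb = Node("VERB", syn_cat="verb", past_suffix="ed")
walk = Node("Walk", parent=verb, root="walk")
sing = Node("Sing", parent=verb, root="sing", past_form="sang")

print(walk.lookup("root") + walk.lookup("past_suffix"))  # walked (inherited rule)
print(sing.lookup("past_form"))                          # sang   (local override)
```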
References
External links
DATR at the University of Sussex
DATR repository and RFC compliant ZDATR implementation at Universität Bielefeld
Natural language processing
Domain-specific knowledge representation languages | DATR | [
"Technology"
] | 203 | [
"Natural language processing",
"Natural language and computing"
] |
42,566 | https://en.wikipedia.org/wiki/Ionic%20crystal | In chemistry, an ionic crystal is a crystalline form of an ionic compound. Such crystals are solids consisting of ions bound together by their electrostatic attraction into a regular lattice. Examples of such crystals are the alkali halides, including potassium fluoride (KF), potassium chloride (KCl), potassium bromide (KBr), potassium iodide (KI), and sodium fluoride (NaF).
Sodium chloride (NaCl) has a 6:6 co-ordination. The properties of NaCl reflect the strong interactions that exist between the ions. It is a good conductor of electricity when molten, but a very poor one in the solid state. When fused, the mobile ions carry charge through the liquid.
They are characterized by strong absorption of infrared radiation and have planes along which they cleave easily.
The exact arrangement of ions in an ionic lattice varies according to the size of the ions in the solid.
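One common rule of thumb connecting ion sizes to the arrangement is the radius-ratio rule. The sketch below illustrates that rule only and is not a statement from the article; the ionic radii used are approximate textbook values, and the rule is known to fail for many real compounds.

```python
# Radius-ratio rule of thumb: the cation/anion size ratio suggests
# how many anions can pack around each cation.
RANGES = [
    (0.225, 0.414, 4, "tetrahedral"),
    (0.414, 0.732, 6, "octahedral (rock-salt, e.g. NaCl)"),
    (0.732, 1.000, 8, "cubic (e.g. CsCl)"),
]

def predicted_coordination(r_cation_pm, r_anion_pm):
    ratio = r_cation_pm / r_anion_pm
    for lo, hi, cn, geometry in RANGES:
        if lo <= ratio < hi:
            return ratio, cn, geometry
    return ratio, None, "outside the simple rule"

# Approximate ionic radii in picometres (illustrative values)
ratio, cn, geom = predicted_coordination(102, 181)   # Na+ / Cl-
print(f"Na+/Cl- ratio ≈ {ratio:.2f} -> {cn}-coordination, {geom}")
```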
References
External links
Art of the States: Anea musical work inspired by ionic crystals
Crystals | Ionic crystal | [
"Chemistry",
"Materials_science"
] | 205 | [
"Crystallography stubs",
"Materials science stubs",
"Crystallography",
"Crystals"
] |
42,567 | https://en.wikipedia.org/wiki/Spleen | The spleen (, from Ancient Greek σπλήν, splḗn) is an organ found in almost all vertebrates. Similar in structure to a large lymph node, it acts primarily as a blood filter.
The spleen plays important roles in regard to red blood cells (erythrocytes) and the immune system. It removes old red blood cells and holds a reserve of blood, which can be valuable in case of hemorrhagic shock, and also recycles iron. As a part of the mononuclear phagocyte system, it metabolizes hemoglobin removed from senescent red blood cells. The globin portion of hemoglobin is degraded to its constitutive amino acids, and the heme portion is metabolized to bilirubin, which is removed in the liver.
The spleen houses antibody-producing lymphocytes in its white pulp and monocytes which remove antibody-coated bacteria and antibody-coated blood cells by way of blood and lymph node circulation. These monocytes, upon moving to injured tissue (such as the heart after myocardial infarction), turn into dendritic cells and macrophages while promoting tissue healing. The spleen is a center of activity of the mononuclear phagocyte system and is analogous to a large lymph node, as its absence causes a predisposition to certain infections.
In humans, the spleen is purple in color and is in the left upper quadrant of the abdomen. The surgical process to remove the spleen is known as a splenectomy.
Structure
In humans, the spleen is underneath the left part of the diaphragm, and has a smooth, convex surface that faces the diaphragm. It is underneath the ninth, tenth, and eleventh ribs. The other side of the spleen is divided by a ridge into two regions: an anterior gastric portion, and a posterior renal portion. The gastric surface is directed forward, upward, and toward the middle, is broad and concave, and is in contact with the posterior wall of the stomach. Below this it is in contact with the tail of the pancreas. The renal surface is directed medialward and downward. It is somewhat flattened, considerably narrower than the gastric surface, and is in relation with the upper part of the anterior surface of the left kidney and occasionally with the left adrenal gland.
There are four ligaments attached to the spleen: gastrosplenic ligament, splenorenal ligament, colicosplenic ligament, and phrenocolic ligament.
Measurements
The spleen, in healthy adult humans, is approximately in length.
An easy way to remember the anatomy of the spleen is the 1×3×5×7×9×10×11 rule. The spleen is about 1 inch by 3 inches by 5 inches (roughly 3 cm × 8 cm × 13 cm), weighs approximately 7 oz (about 200 g), and lies between the ninth and eleventh ribs on the left-hand side and along the axis of the tenth rib. The weight varies between and (standard reference range), correlating mainly to height, body weight and degree of acute congestion but not to sex or age.
Blood supply
Near the middle of the spleen is a long fissure, the hilum, which is the point of attachment for the gastrosplenic ligament and the point of insertion for the splenic artery and splenic vein. There are other openings present for lymphatic vessels and nerves. In addition to the splenic artery, collateral blood supply is provided by the adjacent short gastric arteries.
Like the thymus, the spleen possesses only efferent lymphatic vessels. The spleen is part of the lymphatic system. Both the short gastric arteries and the splenic artery supply it with blood.
The germinal centers are supplied by arterioles called penicilliary radicles.
Nerve supply
The spleen is innervated by the splenic plexus, which connects a branch of the celiac ganglia to the vagus nerve.
The underlying central nervous processes coordinating the spleen's function seem to be embedded in the hypothalamic-pituitary-adrenal axis and the brainstem, especially the subfornical organ.
Development
The spleen is unique in respect to its development within the gut. While most of the gut organs are endodermally derived, the spleen is derived from mesenchymal tissue. Specifically, the spleen forms within, and from, the dorsal mesentery. However, it still shares the same blood supply—the celiac trunk—as the foregut organs.
Function
Pulp
Other
Other functions of the spleen are less prominent, especially in the healthy adult:
Production of all types of blood cells during fetal life.
Production of opsonins, properdin, and tuftsin.
Release of neutrophils following myocardial infarction.
Creation of red blood cells. While the bone marrow is the primary site of hematopoiesis in the adult, the spleen has important hematopoietic functions up until the fifth month of gestation. After birth, erythropoietic functions cease, except in some hematologic disorders. As a major lymphoid organ and a central player in the reticuloendothelial system, the spleen retains the ability to produce lymphocytes and, as such, remains a hematopoietic organ.
Storage of red blood cells, lymphocytes and other formed elements. The spleen of horses stores roughly 30 percent of the red blood cells and can release them when needed. In humans, up to a cup (240 ml) of red blood cells is held within the spleen and released in cases of hypovolemia and hypoxia. It can store platelets in case of an emergency and also clears old platelets from the circulation. Up to a quarter of lymphocytes are stored in the spleen at any one time.
Clinical significance
Enlarged spleen
Enlargement of the spleen is known as splenomegaly. It may be caused by sickle cell anemia, sarcoidosis, malaria, bacterial endocarditis, leukemia, polycythemia vera, pernicious anemia, Gaucher's disease, leishmaniasis, Hodgkin's disease, Banti's disease, hereditary spherocytosis, cysts, glandular fever (including mononucleosis or 'Mono' caused by the Epstein–Barr virus and infection from cytomegalovirus), and tumours. Primary tumors of the spleen include hemangiomas and hemangiosarcomas. Marked splenomegaly may result in the spleen occupying a large portion of the left side of the abdomen.
The spleen is the largest collection of lymphoid tissue in the body. It is normally palpable in preterm infants, in 30% of normal, full-term neonates, and in 5% to 10% of infants and toddlers. A spleen easily palpable below the costal margin in any child over the age of three to four years should be considered abnormal until proven otherwise.
Splenomegaly can result from antigenic stimulation (e.g., infection), obstruction of blood flow (e.g., portal vein obstruction), underlying functional abnormality (e.g., hemolytic anemia), or infiltration (e.g., leukemia or storage disease, such as Gaucher's disease). The most common cause of acute splenomegaly in children is viral infection, which is transient and usually moderate. Basic work-up for acute splenomegaly includes a complete blood count with differential, platelet count, and reticulocyte and atypical lymphocyte counts to exclude hemolytic anemia and leukemia. Assessment of IgM antibodies to viral capsid antigen (a rising titer) is indicated to confirm Epstein–Barr virus or cytomegalovirus. Other infections should be excluded if these tests are negative.
Calculators have been developed for measurements of spleen size based on CT, US, and MRI findings.
Splenic injury
Trauma, such as a road traffic collision, can cause rupture of the spleen, which is a situation requiring immediate medical attention.
Asplenia
Asplenia refers to a non-functioning spleen, which may be congenital, or caused by traumatic injury, surgical resection (splenectomy) or a disease such as sickle cell anaemia. Hyposplenia refers to a partially functioning spleen. These conditions may cause a modest increase in circulating white blood cells and platelets, a diminished response to some vaccines, and an increased susceptibility to infection. In particular, there is an increased risk of sepsis from polysaccharide encapsulated bacteria. Encapsulated bacteria inhibit binding of complement or prevent complement assembled on the capsule from interacting with macrophage receptors. Phagocytosis needs natural antibodies, which are immunoglobulins that facilitate phagocytosis either directly or by complement deposition on the capsule. They are produced by IgM memory B cells (a subtype of B cells) in the marginal zone of the spleen.
A splenectomy (removal of the spleen) results in a greatly diminished frequency of memory B cells. A 28-year follow-up of 740 World War II veterans whose spleens were removed on the battlefield showed a significant increase in the usual death rate from pneumonia (6 rather than the expected 1.3) and an increase in the death rate from ischemic heart disease (41 rather than the expected 30), but not from other conditions.
Accessory spleen
An accessory spleen is a small splenic nodule extra to the spleen usually formed in early embryogenesis. Accessory spleens are found in approximately 10 percent of the population and are typically around 1 centimeter in diameter. Splenosis is a condition where displaced pieces of splenic tissue (often following trauma or splenectomy) autotransplant in the abdominal cavity as accessory spleens.
Polysplenia is a congenital disease manifested by multiple small accessory spleens, rather than a single, full-sized, normal spleen. Polysplenia sometimes occurs alone, but it is often accompanied by other developmental abnormalities such as intestinal malrotation or biliary atresia, or cardiac abnormalities, such as dextrocardia. These accessory spleens are non-functional.
Infarction
Splenic infarction is a condition in which blood flow supply to the spleen is compromised, leading to partial or complete infarction (tissue death due to oxygen shortage) in the organ.
Splenic infarction occurs when the splenic artery or one of its branches are occluded, for example by a blood clot. Although it can occur asymptomatically, the typical symptom is severe pain in the left upper quadrant of the abdomen, sometimes radiating to the left shoulder. Fever and chills develop in some cases. It has to be differentiated from other causes of acute abdomen.
Hyaloserositis
The spleen may be affected by hyaloserositis, in which it is coated with fibrous hyaline.
Society and culture
There has been a long and varied history of misconceptions regarding the physiological role of the spleen, and it has often been seen as a reservoir for juices closely linked to digestion. In various cultures, the organ has been linked to melancholia, due to the influence of ancient Greek medicine and the associated doctrine of humourism, in which the spleen was believed to be a reservoir for an elusive fluid known as "black bile" (one of the four humours). The spleen also plays an important role in traditional Chinese medicine, where it is considered to be a key organ that displays the Yin aspect of the Earth element (its Yang counterpart is the stomach). In contrast, the Talmud (tractate Berachoth 61b) refers to the spleen as the organ of laughter while possibly suggesting a link with the humoral view of the organ.
Etymologically, spleen comes from the Ancient Greek (splḗn), where it was the idiomatic equivalent of the heart in modern English. Persius, in his satires, associated spleen with immoderate laughter. The native Old English word for it is , now primarily used for animals; a loanword from Latin is .
In English, William Shakespeare frequently used the word spleen to signify melancholy, but also caprice and merriment. In Julius Caesar, he uses the spleen to describe Cassius's irritable nature:
Must I observe you? must I stand and crouch
Under your testy humour? By the gods
You shall digest the venom of your spleen,
Though it do split you; for, from this day forth,
I'll use you for my mirth, yea, for my laughter,
When you are waspish.
The spleen, as a byword for melancholy, has also been considered an actual disease. In the early 18th century, the physician Richard Blackmore considered it to be one of the two most prevalent diseases in England (along with consumption). In 1701, Anne Finch (later, Countess of Winchilsea) had published a Pindaric ode, The Spleen, drawing on her first-hand experiences of an affliction which, at the time, also had a reputation of being a fashionably upper-class disease of the English. Both Blackmore and George Cheyne treated this malady as the male equivalent of "the vapours", while preferring the more learned terms "hypochondriasis" and "hysteria". In the late 18th century, the German word Spleen came to denote eccentric and hypochondriac tendencies that were thought to be characteristic of English people.
In French, "splénétique" refers to a state of pensive sadness or melancholy. This usage was popularised by the poems of Charles Baudelaire (1821–1867) and his collection Le Spleen de Paris, but it was also present in earlier 19th-century Romantic literature.
Food
The spleen is one of the many organs that may be included in offal. It is not widely eaten as a principal ingredient, but cow spleen sandwiches are eaten in Sicilian cuisine. Chicken spleen is one of the main ingredients of Jerusalem mixed grill.
Other animals
In cartilaginous and ray-finned fish, the spleen consists primarily of red pulp and is normally somewhat elongated, as it lies inside the serosal lining of the intestine. In many amphibians, especially frogs, it has the more rounded form and there is often a greater quantity of white pulp.
In reptiles, birds, and mammals, white pulp is always relatively plentiful, and in birds and some mammals the spleen is typically rounded, but it adjusts its shape somewhat to the arrangement of the surrounding organs. In most vertebrates, the spleen continues to produce red blood cells throughout life; only in mammals is this function lost in middle-aged adults. Many mammals have tiny spleen-like structures known as haemal nodes throughout the body that are presumed to have the same function as the spleen. The spleens of aquatic mammals differ in some ways from those of fully land-dwelling mammals; in general they are bluish in colour. In cetaceans and manatees, they tend to be quite small, but in deep diving pinnipeds, they can be massive, due to their function of storing red blood cells.
Marsupials have Y-shaped spleens, which develop postnatally.
The only vertebrates lacking a spleen are the lampreys and hagfishes (the early-branching Cyclostomata, or jawless fishes). Even in these animals, there is a diffuse layer of haematopoeitic tissue within the gut wall, which has a similar structure to red pulp and is presumed homologous with the spleen of higher vertebrates.
In mice, the spleen stores half the body's monocytes so that, upon injury, they can migrate to the injured tissue and transform into dendritic cells and macrophages to assist wound healing.
Additional images
See also
References
External links
– "The visceral surface of the spleen."
"spleen" from Encyclopædia Britannica Online
"The Spleen (for Parents)", from KidsHealth.org
"Spleen Diseases" from MedlinePlus
"Finally, the Spleen Gets Some Respect" – The New York Times
Normal range of spleen size for a given age in children
Abdomen
Glands
Immune system
Lymphatic system
Lymphatics of the torso
Lymphoid organ
Organs (anatomy) | Spleen | [
"Biology"
] | 3,541 | [
"Immune system",
"Organ systems"
] |
42,666 | https://en.wikipedia.org/wiki/Zerg | The Zerg are a fictional race of insectoid aliens obsessed with assimilating other races into their swarm in pursuit of genetic perfection, and the overriding antagonists for much of the StarCraft franchise. Unlike the fictional universe's other primary races, the Protoss and Terrans, the Zerg lack technological inclination. Instead, they "force-evolve" genetic traits by directed mutation in order to match such technology. Operating as a hive mind-linked "chain of command", the Zerg strive for "genetic perfection" by assimilating the unique genetic code of advanced species deemed "worthy" into their own gene pool, creating numerous variations of specialized strains of Zerg gifted with unique adaptations. Despite being notoriously cunning and ruthlessly efficient, the majority of Zerg species have low intelligence, becoming mindless beasts if not connected to a "hive-cluster" or a "command entity".
As with the other races, the Zerg are the subject of a full single-player campaign in each of the series' real-time strategy video games. Zerg units are designed to be cost-efficient and fast to produce, encouraging players to overwhelm their opponents with sheer numerical advantage. Since the release of StarCraft, the Zerg have become a gaming icon, described by PC Gamer UK as "the best race in strategy history". The term "Zerg Rush", or "zerging", is now commonly used to describe sacrificing economic development in favor of using many cheap, yet weak units to overwhelm an enemy by attrition or sheer numbers. The tactic is infamous; most experienced real-time strategy players are familiar with the tactic in one form or another.
Attributes
Biology
Zerg have two cell types at birth — one which creates random mutations, and another that hunts these mutations — the result being that any mutations that survive are the strongest of all. Despite being a hive-minded species, the Zerg understand the principles of evolution and incorporate this into the development of their species. Zerg purposely situate themselves in harsh climates in order to further their own evolution through natural selection. Only the strongest with the best mutations survive, and they assimilate only the strongest species into their gene pool.
Society
It was stated in an interview by Blizzard employees that the Zerg Swarm is universally feared, hated, and hunted by all the sapient species of the Milky Way. The Zerg are a collective consciousness of a variety of different species assimilated into the Zerg genome. The Swarm's organization, psychological and economic, is more like that of ants or termites: they are communal entities, a people efficiently adapted by evolution. Every thousand Zerg warriors killed at a cost of one Terran soldier was a net victory for the Zerg; the Zerg Swarm didn't care any more about expending warriors than the Terrans cared about expending ammunition.
The Zerg were originally commanded by and unified by their absolute obedience to the Zerg collective sentience known as the Zerg Overmind, a manifestation of this hive mind, and under the Overmind's control the Zerg strove for genetic perfection by assimilating the favorable traits of other species. Zerg creatures are rapidly and selectively evolved into deadly and efficient killers to further the driving Zerg imperative of achieving absolute domination. After a species has been assimilated into the Swarm, it is mutated toward a different function within its hierarchy, from being a hive worker to a warrior strain. StarCraft's manual notes that some species bear little resemblance to their original forms after just a short time into assimilation (an example would be the formerly peaceful Slothien species, which was assimilated and mutated into the vicious Hydralisk strain and so on). The Overmind controls the Swarm through secondary agents called cerebrates. Cerebrates command an individual brood of Zerg, each with a distinct tactical role within the hierarchy. Cerebrates further delegate power through the use of overlords for battlefield direction and queens for hive watch.
The quest for 'genetic perfection' is a pseudo-religious concept to the Zerg that keeps them in a constant state of evolution and conflict; the Zerg believed there was a state they could reach where they no longer needed to evolve, where their evolutionary form would never have to change again because they could already adapt to any situation. Abathur, an evolution master, doubted that this was possible, but reasoned that "chasing the illusion of perfection" was, regardless, tactically sound.
The vast majority of the Zerg do not have any free will as they are genetically forced to obey the commands of those further up the Zerg hierarchy, although they are sufficiently intelligent to form strategies and work as a team on the battlefield. Despite this, the average Zerg has no sense of self preservation. Along with the Overmind, the cerebrates are the only Zerg with full sapience, each with its own personality and methods, although they too are genetically incapable of disobeying the Overmind. The Overmind also possesses the ability to reincarnate its cerebrates should their bodies be killed, although Protoss dark templar energies are capable of disrupting this process. If a cerebrate is completely dead and cannot be reincarnated, the Overmind loses control of the cerebrate's brood, causing it to mindlessly rampage and attack anything. As a result of the Overmind's death in StarCraft and the subsequent destruction of a new Overmind in Brood War, the remaining cerebrates perished, as they could not survive without an Overmind. Sarah Kerrigan replaced the cerebrates with "brood mothers". These creatures fulfil much the same purpose, but are loyal to Kerrigan and could survive her temporary departure during the events of Starcraft 2.
An exception to all of this would be the Primal Zerg, who inhabit the original Zerg homeworld of Zerus (as seen in Heart of the Swarm). The Zerg Hive Mind was created to control the Zerg, and eventually put them under the control of the main antagonist of the series, the fallen Xel'Naga Amon. Some Zerg, however, managed to avoid being subsumed. These are the Primal Zerg, who have much the same genetic abilities but are not bound to the Overmind. These creatures are each independently sapient, and if they follow a leader it is because they choose to. Their lack of a Hive Mind also shields them from specific psionic attacks engineered to counter the Zerg Hive Mind.
Zerg worlds
Zerg have conquered and/or infested many worlds, but only two of them are important:
Zerus: Jungle world. Located in the galactic core's Theta quadrant of the Milky Way. It is the birthworld of the Zerg. The position of Zerus is known from the manual: it is situated near the galactic core. The galactic disc has a radius of about 49,000 light-years, and the Zerg covered some 30,000–40,000 light-years during their search for the Protoss homeworld.
Char: Volcanic world. Current Zerg capital world and former Terran Dominion world. The new Overmind grew here, but was later enslaved by the United Earth Directorate (UED); eventually the planet was retaken by the Zerg under Kerrigan's control. Char lies on the border of the Terran sector, and the Zerg went on to infest other Terran worlds during their search. Because the Zerg encountered the Terrans first, this suggests that the Koprulu sector lies approximately on the line between Zerus, Char and Aiur.
Depiction
The Zerg were created from the native lifeforms of Zerus, who had the natural ability to absorb the "essence" of creatures they killed, transforming their bodies to gain new adaptations. The Xel'Naga created the Overmind and bound the primal Zerg to its will. They gave the Overmind a powerful desire to travel across the stars and absorb useful lifeforms into the Swarm, particularly the Protoss, their previous creation, so as to become the ultimate lifeform.
The Zerg are a completely organic race, making no use of lifeless technology and instead using specialized organisms for every function efficiently fulfilled through biological adaptation and planned mutation of the Zerg strains. Their buildings are specialized organs within the living, growing organism of a Zerg nest, as are the Leviathans "space ships" that carry them across space. Zerg colonies produce a carpet of bio-matter referred to as the "creep", which essentially provides nourishment for Zerg structures and creatures. The visual aesthetic of the Zerg greatly resembles that of invertebrates such as crustaceans and insects (and certainly draws inspiration from the creatures from the Alien movies). The Zerg are shown to be highly dependent on their command structure: if a Zerg should lose its connection to the hive mind, it may turn passive and incapable of action, or become completely uncontrollable and attack allies and enemies alike.
Zerg buildings and units are entirely organic in-game, and all Zerg can regenerate slowly without assistance (though not as quickly as Protoss shields or Terran medivac). Zerg production is far more centralized than with the Terrans and Protoss; a central hatchery must be utilized to create new Zerg, with other structures providing the necessary technology tree assets, whereas the other two races can produce units from several structures. Zerg units tend to be weaker than those of the other two races, but are also cheaper, allowing for rush tactics to be used. Some Zerg units are capable of infesting enemies with various parasites that range from being able to see what an enemy unit sees to spawning Zerg inside an enemy unit. In addition, Zerg can infest some Terran buildings, allowing for the production of special infested Terran units.
Appearances
In StarCraft, the Zerg are obsessed with the pursuit of genetic purity, and are the focus of the game's second episode. With the Xel'Naga–empowered Protoss targeted as the ultimate lifeform, the Zerg invade the Terran colonies in the Koprulu Sector to assimilate the Terrans' psionic potential and give the Zerg an edge over the Protoss. Through the actions of the Sons of Korhal, the Zerg are lured to the Confederate capital Tarsonis, where they capture the psionic ghost agent Sarah Kerrigan and infest her. Returning to the Zerg base of operations on Char, the Zerg are attacked by the dark templar Zeratul, who accidentally gives the location of the Protoss homeworld Aiur to the Zerg Overmind. With victory in sight, the Overmind launches an invasion of Aiur and manifests itself on the planet. However, at the end of the game, the Protoss high templar Tassadar sacrifices himself to destroy the Overmind, leaving the Zerg to run rampant and leaderless across the planet.
The Zerg return in Brood War initially as uncontrolled indiscriminate killers without the will of the Overmind to guide them. Through the early portions of Brood War, Sarah Kerrigan is at odds with the surviving cerebrates, who have formed a new Overmind to restore control of the Swarm. Through allying herself with the Protoss, Kerrigan strikes at the cerebrates, causing disruption of their plans. Eventually, the UED fleet takes control of Char and pacifies the new Overmind with drugs, putting the cerebrates and most of the Zerg under their control. Kerrigan retaliates by forming a tenuous alliance with the remnants of the Dominion and the forces of Jim Raynor and Fenix, their subsequent victories turning the tide against the UED. However, she later betrays the alliance by dealing long-term damage to the infrastructures of her allies and killing Fenix. Proceeding to blackmail Zeratul into killing the new Overmind, Kerrigan's forces destroy the remnants of the UED fleet, giving her full control of the Zerg and establishing the Swarm as the most powerful faction in the sector.
In StarCraft II: Wings of Liberty, Jim Raynor and the rebel forces who oppose both the Dominion and the Zerg, manage to secure an ancient Xel'Naga artifact and after successfully infiltrating Char, they use it to subjugate the Zerg and restore Kerrigan's human form. Once again without a unified leadership, the Zerg get divided into multiple broods feuding over control of the Swarm. This situation persists until the events of StarCraft II: Heart of the Swarm. Kerrigan, believing Raynor to have been killed in a Dominion surprise attack, enters the original Zerg spawning pool to become the Queen of Blades again. This time she is no longer motivated to destroy humanity, having kept more of her original mindset due to the non-interference of the Zerg Hive Mind, and by extension, the Dark Voice, Amon.
Kerrigan is the protagonist and player character of StarCraft II: Heart of the Swarm. After being de-infested, she is kept in Valerian Mengsk's hideout for study until the Dominion attacks the facility; she escapes along with the rest of its personnel except Raynor, who is captured by Nova. Believing that Raynor has been executed, she seeks revenge on Arcturus. Boarding a Leviathan, she takes control of the local Swarm inside and begins rebuilding her forces from scratch. A confrontation with Zeratul leads her to the origins of the Zerg, and she evolves into a Primal Zerg after absorbing the original spawning pool and killing Primal leaders to collect their essence. With her newfound power, she initially takes the fight to the Dominion after subduing countless Queens. She is later shocked to learn that Raynor survived and is being held by the Dominion as a bargaining chip; she organizes a raid and rescues him, but Raynor is dismayed that the woman he once saved has returned to being a monster. At the behest of her former rival Alexei Stukov, she also confronts an ancient Shapeshifter creating Hybrids. She then prepares to end Arcturus Mengsk's reign by killing him in his palace on Korhal. She later leaves to confront the Shapeshifter's master, who is defeated only through an allied effort, and Kerrigan leaves the Swarm under the control of her broodmother Zagara.
Critical reception
One of the main factors responsible for StarCraft's positive reception is the attention paid to the three unique playable races, for each of which Blizzard developed completely different characteristics, graphics, backstories, and styles of gameplay, while keeping them balanced in performance against each other. Previous to this, most real-time strategy games consisted of factions and races with the same basic "chess" play styles and units with only superficial differences. The use of unique sides and asymmetric warfare in StarCraft has been credited with popularizing the concept within the real-time strategy genre. Contemporary reviews of the game have mostly praised the attention to the gameplay balance between the species, as well as the fictional narratives built around them.
In their review for StarCraft, IGN's Tom Chick stated that the balance and difference between the races was "remarkable", continuing to praise the game's "radical" approach to different races and its high degree of success when compared with other games in the genre. IGN was also positive about the unit arrangements for the three races, crediting Blizzard Entertainment for not letting units become obsolete during extended play and for showing an "extraordinary amount of patience in balancing them." GameSpot was complimentary of the species in its review for StarCraft, describing the races as being full of personality. Stating that the use of distinct races allowed for the game "to avoid the problem [of equal sides] that has plagued every other game in the genre", GameSpot praised Blizzard Entertainment for keeping it "well balanced despite the great diversity."
Other reviews have echoed much of this positive reception. The site The Gamers' Temple described the species as "very diverse but well-balanced," stating that this allowed for "a challenging and fun gaming experience." Allgame stated that the inclusion of three "dynamic" species "raises the bar" for real-time strategy games, complimenting the game for forcing the player to "learn how [the aliens'] minds work and not think like a human". Commentators have also praised the aesthetic design of the three races; in particular, the powered armor worn by the Terran Marine was rated eleventh in a Maxim feature on the top armor suits in video games, and ninth in a similar feature by Machinima.com.
This positive view, however, is not universally held. For example, Computer and Video Games, while describing the game as "highly playable," nevertheless described a "slight feeling of déjà vu" between the three races.
References
Fictional extraterrestrial species and races
Fictional superorganisms
StarCraft characters
Video game species and races
Video game characters introduced in 1998 | Zerg | [
"Biology"
] | 3,518 | [
"Superorganisms",
"Fictional superorganisms"
] |
42,676 | https://en.wikipedia.org/wiki/Mold%20health%20issues | Mold health issues refer to the harmful health effects of molds ("moulds" in British English) and their mycotoxins.
Molds are ubiquitous in the biosphere, and mold spores are a common component of household and workplace dust. The vast majority of molds are not hazardous to humans, and reaction to molds can vary between individuals, with relatively minor allergic reactions being the most common. The United States Centers for Disease Control and Prevention (CDC) reported in its June 2006 report, 'Mold Prevention Strategies and Possible Health Effects in the Aftermath of Hurricanes and Major Floods,' that "excessive exposure to mold-contaminated materials can cause adverse health effects in susceptible persons regardless of the type of mold or the extent of contamination." When mold spores are present in abnormally high quantities, they can present especially hazardous health risks to humans after prolonged exposure, including allergic reactions or poisoning by mycotoxins, or causing fungal infection (mycosis).
Health effects
People who are atopic (sensitive), already have allergies, asthma, or compromised immune systems and occupy damp or moldy buildings are at an increased risk of health problems such as inflammatory responses to mold spores, metabolites such as mycotoxins, and other components. Other problems are respiratory and/or immune system responses including respiratory symptoms, respiratory infections, exacerbation of asthma, and rarely hypersensitivity pneumonitis, allergic alveolitis, chronic rhinosinusitis and allergic fungal sinusitis. A person's reaction to mold depends on their sensitivity and other health conditions, the amount of mold present, length of exposure, and the type of mold or mold products.
The five most common genera of indoor molds are Cladosporium, Penicillium, Aspergillus, Alternaria, and Trichoderma.
Damp environments that allow mold to grow can also allow the proliferation of bacteria and release volatile organic compounds.
Symptoms of mold exposure
Symptoms of mold exposure can include:
Nasal and sinus congestion, runny nose
Respiratory problems, such as wheezing and difficulty breathing, chest tightness
Cough
Throat irritation
Sneezing
Health effects linking to asthma
Adverse respiratory health effects are associated with occupancy in buildings with moisture and mold damage. Infants in homes with mold have a much greater risk of developing asthma and allergic rhinitis. Infants may develop respiratory symptoms due to exposure to a specific type of fungal mold, called Penicillium. Signs that an infant may have mold-related respiratory problems include (but are not limited to) a persistent cough and wheeze. Increased exposure increases the probability of developing respiratory symptoms during their first year of life. As many as 21% of asthma cases may result from exposure to mold.
Mold exposures have a variety of health effects depending on the person. Some people are more sensitive to mold than others. Exposure to mold can cause several health issues such as; throat irritation, nasal stuffiness, eye irritation, cough, and wheezing, as well as skin irritation in some cases. Exposure to mold may also cause heightened sensitivity depending on the time and nature of exposure. People at higher risk for mold allergies are people with chronic lung illnesses and weak immune systems, which can often result in more severe reactions when exposed to mold.
There has been sufficient evidence that damp indoor environments are correlated with upper respiratory tract symptoms such as coughing, and wheezing in people with asthma.
Flood-specific mold health effects
Among children and adolescents, the most common health effect post-flooding was lower respiratory tract symptoms, though there was a lack of association with measurements of total fungi. Another study found that these respiratory symptoms were positively associated with exposure to water damaged homes, exposure included being inside without participating in clean up. Despite lower respiratory effects among all children, there was a significant difference in health outcomes between children with pre-existing conditions and children without. Children with pre-existing conditions were at greater risk that can likely be attributed to the greater disruption of care in the face of flooding and natural disaster.
Although mold is the primary focus post flooding for residents, the effects of dampness alone must also be considered. According to the Institute of Medicine, there is a significant association between dampness in the home and wheeze, cough, and upper respiratory symptoms. A later analysis determined that 30% to 50% of asthma-related health outcomes are associated with not only mold, but also dampness in buildings.
While there is a proven correlation between mold exposure and the development of upper and lower respiratory syndromes, there are still fewer incidences of negative health effects than one might expect. Barbeau and colleagues suggested that studies do not show a greater impact from mold exposure for several reasons: 1) the types of health effects are not severe and are therefore not caught; 2) people whose homes have flooded find alternative housing to prevent exposure; 3) self-selection, the healthier people participated in mold clean-up and were less likely to get sick; 4) exposures were time-limited as result of remediation efforts and; 5) the lack of access to health care post-flooding may result in fewer illnesses being discovered and reported for their association with mold. There are also certain notable scientific limitations in studying the exposure effects of dampness and molds on individuals because there are currently no known biomarkers that can prove that a person was exclusively exposed to molds. Thus, it is currently impossible to prove correlation between mold exposure and symptoms.
Mold-associated conditions
Health problems associated with high levels of airborne mold spores include allergic reactions, asthma episodes, irritations of the eye, nose and throat, sinus congestion, and other respiratory problems. Several studies and reviews have suggested that childhood exposure to dampness and mold might contribute to the development of asthma. For example, residents of homes with mold are at an elevated risk for both respiratory infections and bronchitis. When mold spores are inhaled by an immunocompromised individual, some mold spores may begin to grow on living tissue, attaching to cells along the respiratory tract and causing further problems. Generally, when this occurs, the illness is an epiphenomenon and not the primary pathology. Also, mold may produce mycotoxins, either before or after exposure to humans, potentially causing toxicity.
Fungal infection
A serious health threat from mold exposure for immunocompromised individuals is systemic fungal infection (systemic mycosis). Immunocompromised individuals exposed to high levels of mold, or individuals with chronic exposure may become infected. Sinuses and digestive tract infections are most common; lung and skin infections are also possible. Mycotoxins may or may not be produced by the invading mold.
Dermatophytes are the parasitic fungi that cause skin infections such as athlete's foot and tinea cruris. Most dermatophyte fungi take the form of mold, as opposed to a yeast, with an appearance (when cultured) that is similar to other molds.
Opportunistic infection by molds such as Talaromyces marneffei and Aspergillus fumigatus is a common cause of illness and death among immunocompromised people, including people with AIDS or asthma.
Mold-induced hypersensitivity
The most common form of hypersensitivity is caused by the direct exposure to inhaled mold spores that can be dead or alive or hyphal fragments which can lead to allergic asthma or allergic rhinitis. The most common effects are rhinorrhea (runny nose), watery eyes, coughing and asthma attacks. Another form of hypersensitivity is hypersensitivity pneumonitis. Exposure can occur at home, at work or in other settings. It is predicted that about 5% of people have some airway symptoms due to allergic reactions to molds in their lifetimes.
Hypersensitivity may also be a reaction toward an established fungal infection in allergic bronchopulmonary aspergillosis.
Mycotoxin toxicity
Some molds excrete toxic compounds called mycotoxins, secondary metabolites produced by fungi under certain environmental conditions. These environmental conditions affect the production of mycotoxins at the transcription level. Temperature, water activity and pH, strongly influence mycotoxin biosynthesis by increasing the level of transcription within the fungal spore. It has also been found that low levels of fungicides can boost mycotoxin synthesis. Certain mycotoxins can be harmful or lethal to humans and animals when exposure is high enough.
Extreme exposure to very high levels of mycotoxins can lead to neurological problems and, in some cases, death; fortunately, such exposures rarely, if ever, occur in normal exposure scenarios, even in residences with serious mold problems. Prolonged exposure, such as daily workplace exposure, can be particularly harmful.
It is thought that all molds may produce mycotoxins, and thus all molds may be potentially toxic if large enough quantities are ingested, or the human becomes exposed to extreme quantities of mold. Mycotoxins are not produced all the time, but only under specific growing conditions. Mycotoxins are harmful or lethal to humans and animals only when exposure is high enough.
Mycotoxins can be found on the mold spore and mold fragments, and therefore they can also be found on the substrate upon which the mold grows. Routes of entry for these insults can include ingestion, dermal exposure, and inhalation.
Aflatoxin is an example of a mycotoxin. It is a cancer-causing poison produced by certain fungi in or on foods and feeds, especially in field corn and peanuts.
Exposure sources and prevention
The primary sources of mold exposure are from the indoor air in buildings with substantial mold growth and the ingestion of food with mold growths.
Air
While mold and related microbial agents can be found both inside and outside, specific factors can lead to significantly higher levels of these microbes, creating a potential health hazard. Several notable factors are water damage in buildings, the use of building materials which provide a suitable substrate and source of food to amplify mold growth, relative humidity, and energy-efficient building designs, which can prevent proper circulation of outside air and create a unique ecology in the built environment. A common issue with mold hazards in the household can be the placement of furniture, resulting in a lack of ventilation of the nearby wall. The simplest method of avoiding mold in a home so affected is to move the furniture in question.
More than half of adult workers in moldy/humid buildings suffer from nasal or sinus symptoms due to mold exposure.
Prevention of mold exposure and its ensuing health issues begins with the prevention of mold growth in the first place by avoiding a mold-supporting environment. Extensive flooding and water damage can support extensive mold growth. Following hurricanes, homes with greater flood damage, especially those with more than of indoor flooding, demonstrated far higher levels of mold growth compared with homes with little or no flooding.
It is useful to perform an assessment of the location and extent of the mold hazard in a structure. Various practices of remediation can be followed to mitigate mold issues in buildings, the most important of which is to reduce moisture levels. Removal of affected materials after the source of moisture has been reduced and/or eliminated may be necessary, as some materials cannot be remediated. Thus, the concept of mold growth, assessment, and remediation is essential in preventing health issues arising due to the presence of dampness and mold.
Molds may excrete liquids or low-volatility gases, but the concentrations are so low that frequently they cannot be detected even with sensitive analytical sampling techniques. Sometimes, these by-products are detectable by odor, in which case they are referred to as "ergonomic odors", meaning the odors are noticeable but do not indicate toxicologically significant exposures.
Food
Molds that are often found on meat and poultry include members of the genera Alternaria, Aspergillus, Botrytis, Cladosporium, Fusarium, Geotrichum, Mortierella, Mucor, Neurospora, Paecilomyces, Penicillium, and Rhizopus. Grain crops in particular incur considerable losses both in field and storage due to pathogens, post-harvest spoilage, and insect damage. A number of common microfungi are important agents of post-harvest spoilage, notably members of the genera Aspergillus, Fusarium, and Penicillium. A number of these produce mycotoxins (soluble, non-volatile toxins produced by a range of microfungi that demonstrate specific and potent toxic properties on human and animal cells) that can render foods unfit for consumption. When ingested, inhaled, or absorbed through skin, mycotoxins may cause or contribute to a range of effects from reduced appetite and general malaise to acute illness or death in rare cases. Mycotoxins may also contribute to cancer. Dietary exposure to the mycotoxin aflatoxin B1, commonly produced by growth of the fungus Aspergillus flavus on improperly stored ground nuts in many areas of the developing world, is known to independently (and synergistically with Hepatitis B virus) induce liver cancer. Mycotoxin-contaminated grain and other food products have a significant impact on human and animal health globally. According to the World Health Organization, roughly 25% of the world's food may be contaminated by mycotoxins.
Prevention of mold exposure from food is generally to consume food that has no mold growths on it. Also, mold growth in the first place can be prevented by the same concept of mold growth, assessment, and remediation that prevents air exposure. Also, it is especially useful to clean the inside of the refrigerator and to ensure dishcloths, towels, sponges, and mops are clean.
Ruminants are considered to have increased resistance to some mycotoxins, presumably due to the superior mycotoxin-degrading capabilities of their gut microbiota. The passage of mycotoxins through the food chain may also have important consequences on human health. For example, in China in December 2011, high levels of carcinogen aflatoxin M1 in Mengniu brand milk were found to be associated with the consumption of mold-contaminated feed by dairy cattle.
Bedding
Bacteria, fungi, allergens, and particle-bound semi-volatile organic compounds (SVOCs) can all be found in bedding and pillows with possible consequences for human health given the high amount of exposure each day. Over 47 species of fungi have been identified in pillows, although the typical range of species found in a single pillow varied between four and sixteen. Compared to feather pillows, synthetic pillows typically display a slightly greater variety of fungal species and significantly higher levels of β‐(1,3)‐glucan, which can cause inflammatory responses. The authors concluded that these and related results suggest feather bedding might be a more appropriate choice for asthmatics than synthetics. Some newer bedding products incorporate silver nanoparticles due to their antibacterial, antifungal, and antiviral properties; however, the long-term safety of this additional exposure to these nanoparticles is relatively unknown, and a conservative approach to the use of these products is recommended.
Flooding
Flooding in houses creates a unique opportunity for mold growth, which may contribute to adverse health effects in people exposed to the mold, especially children and adolescents. In a study on the health effects of mold exposure after hurricanes Katrina and Rita, the predominant types of mold were Aspergillus, Penicillium, and Cladosporium, with indoor spore counts ranging from 6,142 to 735,123 spores/m³. Molds isolated following flooding were different from molds previously reported for non-water-damaged homes in the area. Further research found that homes with greater than three feet of indoor flooding demonstrated significantly higher levels of mold than those with little or no flooding.
Mitigation
Recommended strategies to prevent mold include avoiding mold-contamination; utilization of environmental controls; the use of personal protective equipment (PPE), including skin and eye protection and respiratory protection; and environmental controls such as ventilation and suppression of dust. When mold cannot be prevented, the CDC recommends clean-up protocol including first taking emergency action to stop water intrusion. Second, they recommend determining the extent of water damage and mold contamination. And third, they recommend planning remediation activities such as establishing containment and protection for workers and occupants; eliminating water or moisture sources if possible; decontaminating or removing damaged materials and drying any wet materials; evaluating whether space has been successfully remediated; and reassembling the space to control sources of moisture.
History
In 1698, the physician Sir John Floyer published the first edition of A Treatise of the Asthma, the first English textbook on the malady. In it, he describes how dampness and mold could trigger an asthmatic attack, specifically, "damp houses and fenny [boggy] countries". He also writes of an asthmatic "who fell into a violent fit by going into a Wine-Cellar", presumably due to the "fumes" in the air.
In the 1930s, mold was identified as the cause behind the mysterious deaths of farm animals in Russia and other countries. Stachybotrys chartarum was found growing on the wet grain used for animal feed. Illness and death also occurred in humans when starving peasants ate large quantities of rotten food grains and cereals heavily overgrown with the Stachybotrys mold.
In the 1970s, building construction techniques changed in response to changing economic realities, including the energy crisis. As a result, homes, and buildings became more airtight. Also, cheaper materials such as drywall came into common use. The newer building materials reduced the drying potential of the structures, making moisture problems more prevalent. This combination of increased moisture and suitable substrates contributed to increased mold growth inside buildings.
Today, the US Food and Drug Administration and the agriculture industry closely monitor mold and mycotoxin levels in grains and foodstuffs to keep the contamination of animal feed and human food supplies below specific levels. In 2005, Diamond Pet Foods, a US pet food manufacturer, experienced a significant rise in the number of corn shipments containing elevated levels of aflatoxin. This mold toxin eventually made it into the pet food supply, and dozens of dogs and cats died before the company was forced to recall affected products.
In November 2022, a UK coroner recorded that a two-year-old child, Awaab Ishak from Rochdale, England, died in 2020 of "acute airway oedema with severe granulomatous tracheobronchitis due to environmental mould exposure" in his home. While not specified in the coroner's report or outputs from official proceedings, the death was widely reported as due to specifically 'toxic' or 'toxic black' mold. The finding led to a 2023 change in UK law, known as Awaab's Law, which will require social housing providers to remedy reported damp and mould within certain time limits.
See also
Environmental engineering
Environmental health
Occupational asthma
Occupational safety and health
References
Further reading
External links
CDC.gov Mold
US EPA: Mold Information – U.S. Environmental Protection Agency
US EPA: EPA Publication #402-K-02-003 "A Brief Guide to Mold, Moisture, and Your Home"
NIBS: Whole Building Design Guide: Air Decontamination
NPIC: Mold Pest Control Information – National Pesticide Information Center
Mycotoxins in grains and the food supply:
indianacrop.org
cropwatch.unl.edu
agbiopubs.sdstate.edu (PDF)
Building biology
Fungi and humans
Environmental engineering
Toxic effects of substances chiefly nonmedicinal as to source
Industrial hygiene
Building defects
Environmental law
Product liability
Occupational safety and health
Indoor air pollution | Mold health issues | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology",
"Environmental_science"
] | 4,100 | [
"Humans and other species",
"Fungi",
"Toxicology",
"Building engineering",
"Chemical engineering",
"Environmental engineering",
"Fungi and humans",
"Civil engineering",
"Building defects",
"Toxic effects of substances chiefly nonmedicinal as to source",
"Mechanical failure",
"Building biology"... |
42,693 | https://en.wikipedia.org/wiki/Upper%20and%20lower%20bounds | In mathematics, particularly in order theory, an upper bound or majorant of a subset of some preordered set is an element of that is greater than or equal to every element of .
Dually, a lower bound or minorant of is defined to be an element of that is less than or equal to every element of .
A set with an upper (respectively, lower) bound is said to be bounded from above or majorized (respectively bounded from below or minorized) by that bound.
The terms bounded above (bounded below) are also used in the mathematical literature for sets that have upper (respectively lower) bounds.
Examples
For example, is a lower bound for the set (as a subset of the integers or of the real numbers, etc.), and so is . On the other hand, is not a lower bound for since it is not smaller than every element in . and other numbers x such that would be an upper bound for S.
The set has as both an upper bound and a lower bound; all other numbers are either an upper bound or a lower bound for that .
Every subset of the natural numbers has a lower bound since the natural numbers have a least element (0 or 1, depending on convention). An infinite subset of the natural numbers cannot be bounded from above. An infinite subset of the integers may be bounded from below or bounded from above, but not both. An infinite subset of the rational numbers may or may not be bounded from below, and may or may not be bounded from above.
Every finite subset of a non-empty totally ordered set has both upper and lower bounds.
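The definitions above can be checked mechanically for finite sets. The following is a minimal illustrative sketch (not part of the original article) in Python; the set s and the candidate values are hypothetical and stand in for any finite set of numbers under the usual ordering:

def is_upper_bound(candidate, subset):
    # An upper bound is greater than or equal to every element of the subset.
    return all(x <= candidate for x in subset)

def is_lower_bound(candidate, subset):
    # A lower bound is less than or equal to every element of the subset.
    return all(candidate <= x for x in subset)

s = {3, 7, 12}                  # hypothetical example set
print(is_lower_bound(2, s))     # True: 2 is <= every element of s
print(is_lower_bound(5, s))     # False: 5 > 3
print(is_upper_bound(12, s))    # True: a bound may itself belong to the set
print(is_upper_bound(100, s))   # True: bounds are generally not unique

As the article's own examples note, an infinite set such as the natural numbers has lower bounds but no upper bound, so an element-by-element check like this only terminates for finite sets.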
Bounds of functions
The definitions can be generalized to functions and even to sets of functions.
Given a function with domain and a preordered set as codomain, an element of is an upper bound of if for each in . The upper bound is called sharp if equality holds for at least one value of . It indicates that the constraint is optimal, and thus cannot be further reduced without invalidating the inequality.
Similarly, a function defined on domain and having the same codomain is an upper bound of , if for each in . The function is further said to be an upper bound of a set of functions, if it is an upper bound of each function in that set.
The notion of lower bound for (sets of) functions is defined analogously, by replacing ≥ with ≤.
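As a concrete illustration of a bound on a function (an assumed example, not taken from the article): with domain and codomain the real numbers, the sine function satisfies

\[ \sin x \le 1 \quad \text{for all } x \in \mathbb{R}, \]

so the constant 1 is an upper bound of \( f(x) = \sin x \). This bound is sharp, since equality holds at \( x = \pi/2 \); any constant \( c > 1 \) is also an upper bound, but not a sharp one.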
Tight bounds
An upper bound is said to be a tight upper bound, a least upper bound, or a supremum, if no smaller value is an upper bound. Similarly, a lower bound is said to be a tight lower bound, a greatest lower bound, or an infimum, if no greater value is a lower bound.
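A standard illustration of the distinction (again not from the article itself): for the open interval

\[ S = (0, 1) = \{ x \in \mathbb{R} : 0 < x < 1 \}, \]

every real number \( b \ge 1 \) is an upper bound of \( S \), and \( 1 \) is the least upper bound (supremum) even though \( 1 \notin S \); dually, \( 0 \) is the infimum. A tight bound therefore need not belong to the set, which is what separates suprema and infima from maxima and minima.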
Exact upper bounds
An upper bound of a subset of a preordered set is said to be an exact upper bound for if every element of that is strictly majorized by is also majorized by some element of . Exact upper bounds of reduced products of linear orders play an important role in PCF theory.
See also
Greatest element and least element
Infimum and supremum
Maximal and minimal elements
References
Mathematical terminology
Order theory
Real analysis | Upper and lower bounds | [
"Mathematics"
] | 659 | [
"Order theory",
"nan"
] |
42,719 | https://en.wikipedia.org/wiki/Do%20it%20yourself | "Do it yourself" ("DIY") is the method of building, modifying, or repairing things by oneself without the direct aid of professionals or certified experts. Academic research has described DIY as behaviors where "individuals use raw and semi-raw materials and parts to produce, transform, or reconstruct material possessions, including those drawn from the natural environment (e.g., landscaping)". DIY behavior can be triggered by various motivations previously categorized as marketplace motivations (economic benefits, lack of product availability, lack of product quality, need for customization), and identity enhancement (craftsmanship, empowerment, community seeking, uniqueness).
The term "do-it-yourself" has been associated with consumers since at least 1912 primarily in the domain of home improvement and maintenance activities. The phrase "do it yourself" had come into common usage (in standard English) by the 1950s, in reference to the emergence of a trend of people undertaking home improvement and various other small craft and construction projects as both a creative-recreational and cost-saving activity.
Subsequently, the term DIY has taken on a broader meaning that covers a wide range of skill sets. DIY has been described as a "self-made-culture"; one of designing, creating, customizing and repairing items or things without any special training. DIY has grown to become a social concept with people sharing ideas, designs, techniques, methods and finished projects with one another either online or in person.
DIY can be seen as a cultural reaction in modern technological society to increasing academic specialization and economic specialization which brings people into contact with only a tiny focus area within the larger context, positioning DIY as a venue for holistic engagement. DIY ethic is the ethic of self-sufficiency through completing tasks without the aid of a paid expert. The DIY ethic promotes the idea that anyone is capable of performing a variety of tasks rather than relying on paid specialists.
History
Italian archaeologists have unearthed the ruins of a 6th-century BC Greek structure in southern Italy. The ruins appeared to come with detailed assembly instructions and are being called an "ancient IKEA building". The structure was a temple-like building discovered at Torre Satriano, near the southern city of Potenza, in Basilicata. This region was recognized as a place where local people mingled with Greeks who had settled along the southern coast known as Magna Graecia and in Sicily from the 8th century BC onwards. Christopher Smith, director of the British School at Rome, said that the discovery was, "the clearest example yet found of mason's marks of the time. It looks as if someone was instructing others how to mass-produce components and put them together in this way." Much like our modern instruction booklets, various sections of the luxury building were inscribed with coded symbols showing how the pieces slotted together. The characteristics of these inscriptions indicate they date back to around the 6th century BC, which tallies with the architectural evidence suggested by the decoration. The building was built by Greek artisans coming from the Spartan colony of Taranto in Apulia.
In North America, there was a DIY magazine publishing niche in the first half of the twentieth century. Magazines such as Popular Mechanics (founded in 1902) and Mechanix Illustrated (founded in 1928) offered a way for readers to keep current on useful practical skills, techniques, tools, and materials. As many readers lived in rural or semi-rural regions, initially much of the material related to their needs on the farm or in a small town. In addition, authors such as F. J. Christopher began to become heavy advocates for do-it-yourself projects.
By the 1950s, DIY became common usage with the emergence of people undertaking home improvement projects, construction projects and smaller crafts. Artists began to fight against mass production and mass culture by claiming to be self-made. However, DIY practices also responded to geopolitical tensions, such as in the form of home-made Cold War nuclear fallout shelters, and the dark aesthetics and nihilist discourse in punk fanzines in the 1970s and onwards in the shadow of rising unemployment and social tensions. In the 1960s and 1970s, books and TV shows about the DIY movement and techniques on building and home decoration began appearing. By the 1990s, the DIY movement felt the impact of the digital age with the rise of the internet. With computers and the internet becoming mainstream, increased accessibility to the internet has led to more households undertaking DIY methods. Platforms, such as YouTube or Instagram, provide people the opportunity to share their creations and instruct others on how to replicate DIY techniques in their own home.
The DIY movement is a re-introduction (often to urban and suburban dwellers) of the old pattern of personal involvement and use of skills in the upkeep of a house or apartment, making clothes; maintenance of cars, computers, websites; or any material aspect of living. The philosopher Alan Watts (from the "Houseboat Summit" panel discussion in a 1967 edition of the San Francisco Oracle) reflected a growing sentiment.
In the 1970s, DIY spread through the North American population of college and recent-college-graduate age groups. In part, this movement involved the renovation of affordable, rundown older homes. But, it also related to various projects expressing the social and environmental vision of the 1960s and early 1970s. The young visionary Stewart Brand, working with friends and family, and initially using the most basic of typesetting and page-layout tools, published the first edition of The Whole Earth Catalog (subtitled Access to Tools) in late 1968.
The first Catalog, and its successors, used a broad definition of the term "tools." There were informational tools, such as books (often technical in nature), professional journals, courses and classes. There were specialized, designed items, such as carpentry and stonemasonry tools, garden tools, welding equipment, chainsaws, fiberglass materials and so on – even early personal computers. The designer J. Baldwin served as technology editor and wrote many of the reviews of fabrication tools, tools for working soil, etc. The Catalog publication both emerged from and spurred the great wave of experimentalism, convention-breaking, and do-it-yourself attitude of the late 1960s. Often copied, the Catalog appealed to a wide cross-section of people in North America and had a broad influence.
DIY home improvement books burgeoned in the 1970s, first created as collections of magazine articles. An early, extensive line of DIY how-to books was created by Sunset Books, based upon previously published articles from their California-based magazine, Sunset. Time-Life, Better Homes and Gardens, Balcony Garden Web and other publishers soon followed suit.
In the mid-1990s, DIY home-improvement content began to find its way onto the World Wide Web. HouseNet was the earliest bulletin-board style site where users could share information. Since the late 1990s, DIY has exploded on the Web through thousands of sites.
In the 1970s, when home video (VCRs) came along, DIY instructors quickly grasped its potential for demonstrating processes by audio-visual means. In 1979, the PBS television series This Old House, starring Bob Vila, premiered and spurred a DIY television revolution. The show was immensely popular, educating people on how to improve their living conditions (and the value of their house) without the expense of paying someone else to do (as much of) the work. In 1994, the HGTV Network cable television channel was launched in the United States and Canada, followed in 1999 by the DIY Network cable television channel. Both were launched to appeal to the growing percentage of North Americans interested in DIY topics, from home improvement to knitting. Such channels have multiple shows revealing how to stretch one's budget to achieve professional-looking results (Design Cents, Design on a Dime, etc.) while doing the work yourself. Toolbelt Diva specifically caters to female DIYers.
Beyond magazines and television, the scope of home improvement DIY continues to grow online where most mainstream media outlets now have extensive DIY-focused informational websites such as This Old House, Martha Stewart, Hometalk, and the DIY Network. These are often extensions of their magazine or television brand. The growth of independent online DIY resources is also spiking. The number of homeowners who blog about their experiences continues to grow, along with DIY websites from smaller organizations.
Adverse effects of power tool use
Use of power tools can cause adverse effects on people living nearby. Power tools can produce large amounts of particulates, including ultrafine particles.
Particulates, and ultrafine particles in particular, are among the most harmful forms of air pollution. Exposure to particulate matter, especially PM2.5 and ultrafine particles (PM0.1), has serious health implications. According to the World Health Organization, there is no safe level of particulate exposure, with these emissions linked to increased risks of respiratory and cardiovascular diseases.
Many tasks create dust. High dust levels are caused by one or more of the following:
equipment – high energy tools, such as cut-off saws, grinders, wall chasers and grit blasters produce a lot of dust in a very short time
work method – dry sweeping can make a lot of dust compared to vacuuming or wet brushing
work area – the more enclosed a space, the more the dust will build up
time – the longer you work, the more dust there will be
Examples of high dust level tasks include:
using power tools to cut, grind, drill or prepare a surface
sanding taped plaster board joints
dry sweeping
Some power tools are equipped with a dust collection system (e.g. a HEPA vacuum) or an integrated water delivery system that captures the dust as it is produced.
While the type of material used will determine the composition of the dust generated, the size and amount of particulates produced are mainly determined by the type of tool used. Implementation of effective dust control measures may also play a role.
Use of an angle grinder is not preferred, as large amounts of harmful sparks, fumes and particulates are generated compared with using a reciprocating saw or band saw. Angle grinders produce sparks when cutting ferrous metals, and they produce shards when cutting other materials. The blades themselves may also break. This is a great hazard to the face and eyes especially, as well as to other parts of the body.
Many modern power tools are equipped with dust control systems, such as HEPA-certified dust extractors and integrated water delivery systems, to mitigate the release of harmful particulates. The Occupational Safety and Health Administration (OSHA) mandates the use of such control measures in environments with high dust levels.
Fashion
DIY is prevalent amongst the fashion community, with ideas being shared on social media, such as YouTube, about clothing, jewellery, makeup, and hairstyles. Techniques include distressing and bleaching jeans, redesigning old shirts, and studding denim.
The concept of DIY has also emerged within the art and design community. The terms Hacktivist, Craftivist, or maker have been used to describe creatives working within a DIY framework (Busch). Otto von Busch describes 'Hacktivism' as "[including] the participant in the process of making, [to give] rise to new attitudes within the 'maker' or collaborator" (Busch 49). Busch suggests that by engaging in participatory forms of fashion, consumers are able to step away from the idea of "mass-homogenized 'Mc-Fashion (Lee 2003)", as fashion Hacktivism allows consumers to play a more active role in engaging with the clothes they wear (Busch 32).
Subculture
DIY as a subculture was brought forward by the punk movement of the 1970s. Instead of traditional means of bands reaching their audiences through large music labels, bands began recording, manufacturing albums and merchandise, booking their own tours, and creating opportunities for smaller bands to get wider recognition through repetitive low-cost DIY touring. The burgeoning zine movement took up coverage of and promotion of the underground punk scenes, and significantly altered the way fans interacted with musicians. Zines quickly branched off from being hand-made music magazines to become more personal; they quickly became one of the youth culture's gateways to DIY culture. This led to tutorial zines showing others how to make their own shirts, posters, zines, books, food, etc.
The terms "DIY" and "do-it-yourself" are also used to describe:
Self-publishing books, zines, doujin, and alternative comics
Bands or solo artists releasing their music on self-funded record labels.
Trading of mixtapes as part of cassette culture
The international mail art network, which circumvents galleries and official art institutions and served as a precursor to social networking.
Homemade items based on the principles of "Recycle, Reuse & Reduce" (the 3R's), a common theme in many environmental movements encouraging people to reuse old objects found in their homes and to recycle simple materials like paper.
Crafts such as knitting, crochet, sewing, handmade jewelry, ceramics
Designing business cards, invitations and so on
Creating punk or indie musical merchandise by recycling thrift-store or discarded materials, usually decorated with art applied by silk screen.
Independent game development and game modding
Contemporary roller derby
Skateparks built by skateboarders without paid professional assistance
Building musical electronic circuits such as the Atari Punk Console, and creating circuit-bending noise machines from vintage children's toys.
Modifying ("modding") common products to allow extended or unintended uses, commonly referred to by the internet term, "life-hacking". Related to jury-rigging i.e. sloppy/ unlikely mods
Building hobby electronics or amateur radio equipment.
DIY science: using open-source hardware to make scientific equipment to conduct citizen science or simply low-cost traditional science
Using low-cost single-board computers, such as Arduino and Raspberry Pi, as embedded systems with various applications
DIY bio
Use of a custom Linux distribution tailored to a specific purpose.
Building a custom synthesizer.
Use of FPGAs.
Privately made firearms
Taxidermy of trophies from hunting or fishing expeditions.
Music
Much contemporary DIY music has its origins in the late 1970s punk rock subculture. It developed as a way to circumvent the corporate mainstream music industry. By controlling the entire production and distribution chain, DIY bands attempt to develop a closer relationship between artists and fans. The DIY ethic gives total control over the final product without the need to compromise with major record labels.
According to the punk aesthetic, one can express oneself and produce moving and serious works with limited means. Arguably, the earliest example of this attitude was the punk music scene of the 1970s.
More recently, the orthodox understanding that DIY originates in 1970s punk, with its clearest practices being in the self-produced 7" single and self-published fanzines, has been challenged. As George McKay asks in the title of his 2023 article: 'Was punk DIY? Is DIY punk?' McKay argues instead for what he terms a 'depunking' of DIY.
Riot grrrl, associated with third-wave feminism, also adopted the core values of the DIY punk ethic by leveraging creative ways of communication through zines and other projects.
Adherents of the DIY punk ethic also work collectively. For example, punk impresario David Ferguson's CD Presents was a DIY concert production, recording studio, and record label network.
Film
A form of independent filmmaking characterized by low budgets, skeleton crews, and simple props using whatever is available.
By country
Cuba
As a means of adaptation during the economic crisis of the Cuban Special Period, resolver ("to resolve") became an important part of Cuban culture. Resolver refers to a spirit of resourcefulness and do-it-yourself problem solving.
India
Jugaad is a colloquial Hindi, Bengali, Marathi, Punjabi, Sindhi and Urdu word, which refers to a non-conventional, frugal innovation, often termed a "hack". It could also refer to an innovative fix or a simple work-around, a solution that bends the rules, or a resource that can be used in such a way. It is also often used to signify creativity: to make existing things work, or to create new things with meager resources.
United States
Rasquache is the English form of the Spanish term rascuache. Originally carrying a negative connotation in Mexico, it was recontextualized by the Mexican and Chicano arts movement to describe a specific artistic aesthetic, Rasquachismo, suited to overcoming the material and professional limitations faced by artists in the movement.
See also
Air pollution
Bricolage
Circuit bending
Edupunk
Hackerspace
Handyman
Instructables
Junk box
Kludge
Mail art
Maker culture
Number 8 wire
Particulate
Open design
Power tool
Prosumer
Ready-to-assemble furniture
Sawdust
3D printing
Subculture links
Punk subculture
Basement show
Bricolage
Cassette culture
Circuit bending
Critical making
D.I.Y. or Die: How to Survive as an Independent Artist
Edupunk
Guerrilla gig
Hackerspace
Homebuilt aircraft
Individualism
Infoshops
Maker culture
Mumblecore
Off-the-grid
Remodernist Film
Self-publishing
Underground comix
White box (computer hardware)
Solarpunk
References
Further reading
Bailey, Thomas Bey William (2012) Unofficial Release: Self-Released And Handmade Audio In Post-Industrial Society, Belsona Books.
DIY, Alternative Cultures and Society journal
McKay, George. (2023). 'Was punk DIY? Is DIY punk? Interrogating the DIY/punk nexus, with particular reference to the early UK punk scene, c. 1976-1984.' DIY, Alternative Cultures and Society online first.
Smith, G. and Gillett, A. G., (2015). "Creativities, innovation, and networks in garage punk rock: A case study of the Eruptörs". Artivate: A Journal of Entrepreneurship in the Arts, 9–24
Do It Yourself: Democracy and Design'' by Paul Atkinson, Journal of Design History, March 2006
Building
Handbooks and manuals
Skills
Cassette culture 1970s–1990s
Simple living
Articles containing video clips | Do it yourself | [
"Engineering"
] | 3,785 | [
"Construction",
"Building"
] |
42,728 | https://en.wikipedia.org/wiki/Signal%20reflection | In telecommunications, signal reflection occurs when a signal is transmitted along a transmission medium, such as a copper cable or an optical fiber. Some of the signal power may be reflected back to its origin rather than being carried all the way along the cable to the far end. This happens because imperfections in the cable cause impedance mismatches and non-linear changes in the cable characteristics. These abrupt changes in characteristics cause some of the transmitted signal to be reflected. In radio frequency (RF) practice this is often measured in a dimensionless ratio known as voltage standing wave ratio (VSWR) with a VSWR bridge. The ratio of energy bounced back depends on the impedance mismatch. Mathematically, it is defined using the reflection coefficient.
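The reflection coefficient and VSWR mentioned above follow directly from the line and load impedances. As a rough illustration (a minimal sketch, not taken from any particular standard or instrument), the code below computes the reflection coefficient, VSWR and return loss for an assumed 50-ohm line terminated by a mismatched load; the impedance values are purely illustrative:

    import math

    def reflection_coefficient(z_load: complex, z_line: complex = 50.0) -> complex:
        # Voltage reflection coefficient at a load terminating a line of
        # characteristic impedance z_line (50 ohms is a common RF default).
        return (z_load - z_line) / (z_load + z_line)

    def vswr(gamma: complex) -> float:
        # Voltage standing wave ratio from the magnitude of the reflection coefficient.
        mag = abs(gamma)
        return float("inf") if mag >= 1.0 else (1 + mag) / (1 - mag)

    def return_loss_db(gamma: complex) -> float:
        # Return loss in dB; larger values mean less power is reflected.
        return -20.0 * math.log10(abs(gamma))

    # Example: a 75-ohm load on a 50-ohm line.
    g = reflection_coefficient(75.0)
    print(abs(g))                       # 0.2 (about 4% of the power is reflected)
    print(round(vswr(g), 3))            # 1.5
    print(round(return_loss_db(g), 1))  # ~14.0 dB

A perfectly matched load gives a reflection coefficient of zero and a VSWR of 1, while a short or open circuit reflects all of the incident power and drives the VSWR toward infinity.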
Because the principles are the same, this concept is perhaps easiest to understand when considering an optical fiber. Imperfections in the glass create mirrors that reflect the light back along the fiber.
Impedance discontinuities cause attenuation, attenuation distortion, standing waves, ringing and other effects because a portion of a transmitted signal will be reflected back to the transmitting device rather than continuing to the receiver, much like an echo. This effect is compounded if multiple discontinuities cause additional portions of the remaining signal to be reflected back to the transmitter. This is a fundamental problem with the daisy chain method of connecting electronic components.
When a returning reflection strikes another discontinuity, some of the signal rebounds in the original signal direction, creating multiple echo effects. These forward echoes strike the receiver at different intervals making it difficult for the receiver to accurately detect data values on the signal. The effects can resemble those of jitter.
Because damage to the cable can cause reflections, an instrument called an electrical time-domain reflectometer (ETDR; for electrical cables) or an optical time-domain reflectometer (OTDR; for optical cables) can be used to locate the damaged part of a cable. These instruments work by sending a short pulsed signal into the cable and measuring how long the reflection takes to return. If only reflection magnitudes are desired, however, and exact fault locations are not required, VSWR bridges perform a similar but lesser function for RF cables.
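The time-domain reflectometry principle described above reduces to simple round-trip arithmetic once the propagation velocity of the cable is known. The following sketch assumes an illustrative velocity factor (a cable-specific fraction of the speed of light) and timing value; it is not the procedure of any particular instrument:

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def fault_distance_m(round_trip_s: float, velocity_factor: float = 0.66) -> float:
        # The pulse travels to the discontinuity and back, so the one-way
        # distance is half of the round-trip propagation distance.
        return velocity_factor * C * round_trip_s / 2.0

    # Example: a reflection arriving 1.0 microsecond after the pulse was launched,
    # on coax with an assumed velocity factor of 0.66 (typical of solid polyethylene).
    print(round(fault_distance_m(1.0e-6), 1))  # ~98.9 m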
The combination of the effects of signal attenuation and impedance discontinuities on a communications link is called insertion loss. Proper network operation depends on constant characteristic impedance in all cables and connectors, with no impedance discontinuities in the entire cable system. When a sufficient degree of impedance matching is not practical, echo suppressors or echo cancellers, or both, can sometimes reduce the problems.
The Bergeron diagram method, valid for both linear and non-linear models, evaluates the reflection's effects in an electric line.
See also
Crosstalk (electronics)
Digital subscriber line
Project Echo
Fresnel reflection
Ground-penetrating radar
Impedance matching
Signal integrity
Reflections of signals on conducting lines
Reflection phase change
References
Radio electronics
Electricity
Geometrical optics
Electronic design
Electrical engineering
Physical optics | Signal reflection | [
"Engineering"
] | 609 | [
"Radio electronics",
"Electronic design",
"Electronic engineering",
"Electrical engineering",
"Design"
] |
42,739 | https://en.wikipedia.org/wiki/Bubble%20fusion | Bubble fusion is the non-technical name for a nuclear fusion reaction hypothesized to occur inside extraordinarily large collapsing gas bubbles created in a liquid during acoustic cavitation. The more technical name is sonofusion.
The term was coined in 2002 with the release of a report by Rusi Taleyarkhan and collaborators that claimed to have observed evidence of sonofusion. The claim was quickly surrounded by controversy, including allegations ranging from experimental error to academic fraud. Subsequent publications claiming independent verification of sonofusion were also highly controversial.
Eventually, an investigation by Purdue University found that Taleyarkhan had engaged in falsification of independent verification, and had included a student as an author on a paper even though the student had not participated in the research. He was subsequently stripped of his professorship. One of his funders, the Office of Naval Research, reviewed the Purdue report and barred him from federal funding for 28 months.
Original experiments
US patent 4,333,796, filed by Hugh Flynn in 1978, appears to be the earliest documented reference to a sonofusion-type reaction.
In the March 8, 2002 issue of the peer-reviewed journal Science, Rusi P. Taleyarkhan and colleagues at the Oak Ridge National Laboratory (ORNL) reported that acoustic cavitation experiments conducted with deuterated acetone () showed measurements of tritium and neutron output consistent with the occurrence of fusion. The neutron emission was also reported to be coincident with the sonoluminescence pulse, a key indicator that its source was fusion caused by the heat and pressure inside the collapsing bubbles.
Oak Ridge failed replication
The results were so startling that the Oak Ridge National Laboratory asked two independent researchers, D. Shapira and M. J. Saltmarsh, to repeat the experiment using more sophisticated neutron detection equipment. They reported that the neutron release was consistent with random coincidence. A rebuttal by Taleyarkhan and the other authors of the original report argued that the Shapira and Saltmarsh report failed to account for significant differences in experimental setup, including over an inch of shielding between the neutron detector and the sonoluminescing acetone. According to Taleyarkhan et al., when properly considering those differences, the results were consistent with fusion.
As early as 2002, while experimental work was still in progress, Aaron Galonsky of Michigan State University, in a letter to the journal Science, expressed doubts about the claim made by the Taleyarkhan team. In Galonsky's opinion, the observed neutrons were too high in energy to be from a deuterium-deuterium (d-d) fusion reaction. In their response (published on the same page), the Taleyarkhan team provided detailed counter-arguments and concluded that the energy was "reasonably close" to that which was expected from a fusion reaction.
In February 2005 the documentary series Horizon commissioned two leading sonoluminescence researchers, Seth Putterman and Kenneth S. Suslick, to reproduce Taleyarkhan's work. Using similar acoustic parameters, deuterated acetone, similar bubble nucleation, and a much more sophisticated neutron detection device, the researchers could find no evidence of a fusion reaction.
Subsequent reports of replication
In 2004, new reports of bubble fusion were published by the Taleyarkhan group, claiming that the results of previous experiments had been replicated under more stringent experimental conditions. These results differed from the original results in that fusion was claimed to occur over longer times than previously reported. The original report only claimed neutron emission from the initial bubble collapse following bubble nucleation, whereas this report claimed neutron emission many acoustic cycles later.
In July 2005, two of Taleyarkhan's students at Purdue University published evidence confirming the previous result. They used the same acoustic chamber, the same deuterated acetone fluid and a similar bubble nucleation system. In this report, no neutron-sonoluminescence coincidence was attempted. An article in Nature raised issues about the validity of the research and reported complaints from his Purdue colleagues (see the full analysis elsewhere on this page). Charges of misconduct were raised, and Purdue University opened an investigation. It concluded in 2008 that Taleyarkhan's name should have appeared in the author list because of his deep involvement in many steps of the research, that he had added an author who had not really participated in the work just to overcome the criticism of one reviewer, and that this was part of "an effort to falsify the scientific record by assertion of independent confirmation". The investigation did not address the validity of the experimental results.
In January 2006, a paper published in the journal Physical Review Letters by Taleyarkhan in collaboration with researchers from Rensselaer Polytechnic Institute reported statistically significant evidence of fusion.
In November 2006, in the midst of accusations concerning Taleyarkhan's research standards, two different scientists visited the meta-stable fluids research lab at Purdue University to measure neutrons, using Taleyarkhan's equipment. Dr. Edward R. Forringer and undergraduates David Robbins and Jonathan Martin of LeTourneau University presented two papers at the American Nuclear Society Winter Meeting that reported replication of neutron emission. Their experimental setup was similar to previous experiments in that it used a mixture of deuterated acetone, deuterated benzene, tetrachloroethylene and uranyl nitrate. Notably, however, it operated without an external neutron source and used two types of neutron detectors. They claimed a liquid scintillation detector measured neutron levels at 8 standard deviations above the background level, while plastic detectors measured levels at 3.8 standard deviations above the background. When the same experiment was performed with non-deuterated control liquid, the measurements were within one standard deviation of background, indicating that the neutron production had only occurred during cavitation of the deuterated liquid. William M. Bugg, emeritus physics professor at the University of Tennessee also traveled to Taleyarkhan's lab to repeat the experiment with his equipment. He also reported neutron emission, using plastic neutron detectors. Taleyarkhan claimed these visits counted as independent replications by experts, but Forringer later recognized that he was not an expert, and Bugg later said that Taleyarkhan performed the experiments and he had only watched.
Nature report
In March 2006, Nature published a special report that called into question the validity of the results of the Purdue experiments. The report quotes Brian Naranjo of the University of California, Los Angeles to the effect that the neutron energy spectrum reported in the 2006 paper by Taleyarkhan et al. was statistically inconsistent with neutrons produced by the proposed fusion reaction and instead highly consistent with neutrons produced by the radioactive decay of californium-252, an isotope commonly used as a laboratory neutron source.
The response of Taleyarkhan et al., published in Physical Review Letters, attempts to refute Naranjo's hypothesis as to the cause of the neutrons detected.
Tsoukalas, head of the School of Nuclear Engineering at Purdue, and several of his colleagues at Purdue, had convinced Taleyarkhan to move to Purdue and attempt a joint replication. In the 2006 Nature report they detail several troubling issues when trying to collaborate with Taleyarkhan. He reported positive results from certain set of raw data, but his colleagues had also examined that set and it only contained negative results. He never showed his colleagues the raw data corresponding to the positive results, despite several requests. He moved the equipment from a shared laboratory to his own laboratory, thus impeding review by his colleagues, and he did not give any advance warning or explanation for the move. Taleyarkhan convinced his colleagues that they should not publish a paper with their negative results. Taleyarkhan then insisted that the university's press release present his experiment as "peer-reviewed" and "independent", when the co-authors were working in his laboratory under his supervision, and his peers in the faculty were not allowed to review the data. In summary, Taleyarkhan's colleagues at Purdue said he placed obstacles to peer review of his experiments, and they had serious doubts about the validity of the research.
Nature also revealed that the process of anonymous peer-review had not been followed, and that the journal Nuclear Engineering and Design was not independent from the authors. Taleyarkhan was co-editor of the journal, and the paper was only peer-reviewed by his co-editor, with Taleyarkhan's knowledge.
In 2002, Taleyarkhan filed a patent application on behalf of the United States Department of Energy, while working in Oak Ridge. Nature reported that the patent had been rejected in 2005 by the US Patent Office. The examiner called the experiment a variation of discredited cold fusion, found that there was "no reputable evidence of record to support any allegations or claims that the invention is capable of operating as indicated", and found that there was not enough detail for others to replicate the invention. The field of fusion suffered from many flawed claims, thus the examiner asked for additional proof that the radiation was generated from fusion and not from other sources. An appeal was not filed because the Department of Energy had dropped the claim in December 2005.
Doubts prompt investigation
Doubts among Purdue University's Nuclear Engineering faculty as to whether the positive results reported from sonofusion experiments conducted there were truthful prompted the university to initiate a review of the research, conducted by Purdue's Office of the Vice President for Research. In a March 9, 2006 article entitled "Evidence for bubble fusion called into question", Nature interviewed several of Taleyarkhan's colleagues who suspected something was amiss.
On February 7, 2007, the Purdue University administration determined that "the evidence does not support the allegations of research misconduct and that no further investigation of the allegations is warranted". Their report also stated that "vigorous, open debate of the scientific merits of this new technology is the most appropriate focus going forward." In order to verify that the investigation was properly conducted, House Representative Brad Miller requested full copies of its documents and reports by March 30, 2007. His congressional report concluded that "Purdue deviated from its own procedures in investigating this case and did not conduct a thorough investigation"; in response, Purdue announced that it would re-open its investigation.
In June 2008, a multi-institutional team including Taleyarkhan published a paper in Nuclear Engineering and Design to "clear up misconceptions generated by a webposting of UCLA which served as the basis for the Nature article of March 2006", according to a press release.
On July 18, 2008, Purdue University announced that a committee with members from five institutions had investigated 12 allegations of research misconduct against Rusi Taleyarkhan. It concluded that two allegations were founded—that Taleyarkhan had claimed independent confirmation of his work when in reality the apparent confirmations were done by Taleyarkhan's former students and was not as "independent" as Taleyarkhan implied, and that Taleyarkhan had included a colleague's name on one of his papers who had not actually been involved in the research ("the sole apparent motivation for the addition of Mr. Bugg was a desire to overcome a reviewer's criticism", the report concluded).
Taleyarkhan's appeal of the report's conclusions was rejected. He said the two allegations of misconduct were trivial administrative issues and had nothing to do with the discovery of bubble nuclear fusion or the underlying science, and that "all allegations of fraud and fabrication have been dismissed as invalid and without merit — thereby supporting the underlying science and experimental data as being on solid ground". A researcher questioned by the LA Times said that the report had not clarified whether bubble fusion was real or not, but that the low quality of the papers and the doubts cast by the report had destroyed Taleyarkhan's credibility with the scientific community.
On August 27, 2008, he was stripped of his named Arden Bement Jr. Professorship, and forbidden to be a thesis advisor for graduate students for at least the next 3 years.
Despite the findings against him, Taleyarkhan received a $185,000 grant from the National Science Foundation between September 2008 and August 2009 to investigate bubble fusion. In 2009 the Office of Naval Research debarred him for 28 months, until September 2011, from receiving U.S. Federal Funding. During that period his name was listed in the 'Excluded Parties List' to prevent him from receiving further grants from any government agency.
See also
Cold fusion
List of energy topics
Mechanism of sonoluminescence
References
Further reading
"Bubble Fusion Research Under Scrutiny", IEEE Spectrum, May 2006
"Sonofusion Experiment Produces Results Without External Neutron Source" PhysOrg.com January 27, 2006
"Bubble fusion: silencing the hype", Nature online, March 8, 2006 — Nature reveals serious doubts over reports of fusion in collapsing bubbles (subscription required)
"Fusion controversy rekindled" BBC News, March 5, 2002
"Fusion experiment disappoints" BBC News, July 2, 2002
What's New, March 10, 2006 – failed replications
"Practical Fusion, or Just a Bubble?", Kenneth Chang, The New York Times, February 27, 2007
Cold fusion
Bubbles (physics)
Scientific misconduct incidents
2002 in science
2006 in science
de:Kalte Fusion#Sonofusion | Bubble fusion | [
"Physics",
"Chemistry"
] | 2,717 | [
"Bubbles (physics)",
"Foams",
"Cold fusion",
"Nuclear physics",
"Nuclear fusion",
"Fluid dynamics"
] |
42,752 | https://en.wikipedia.org/wiki/Sonoluminescence | Sonoluminescence is the emission of light from imploding bubbles in a liquid when excited by sound.
Sonoluminescence was first discovered in 1934 at the University of Cologne. It occurs when a sound wave of sufficient intensity induces a gaseous cavity within a liquid to collapse quickly, emitting a burst of light. The phenomenon can be observed in stable single-bubble sonoluminescence (SBSL) and multi-bubble sonoluminescence (MBSL). In 1960, Peter Jarman proposed that sonoluminescence is thermal in origin and might arise from microshocks within collapsing cavities. Later experiments revealed that the temperature inside the bubble during SBSL could reach up to . The exact mechanism behind sonoluminescence remains unknown, with various hypotheses including hotspot, bremsstrahlung, and collision-induced radiation. Some researchers have even speculated that temperatures in sonoluminescing systems could reach millions of kelvins, potentially causing thermonuclear fusion; this idea, however, has been met with skepticism by other researchers. The phenomenon has also been observed in nature, with the pistol shrimp being the first known instance of an animal producing light through sonoluminescence.
History
The sonoluminescence effect was first discovered at the University of Cologne in 1934 as a result of work on sonar. Hermann Frenzel and H. Schultes put an ultrasound transducer in a tank of photographic developer fluid. They hoped to speed up the development process. Instead, they noticed tiny dots on the film after developing and realized that the bubbles in the fluid were emitting light with the ultrasound turned on. It was too difficult to analyze the effect in early experiments because of the complex environment of a large number of short-lived bubbles. This phenomenon is now referred to as multi-bubble sonoluminescence (MBSL).
In 1960, Peter Jarman of Imperial College London proposed the most reliable theory of the sonoluminescence phenomenon. He concluded that sonoluminescence is basically thermal in origin and that it might possibly arise from microshocks within the collapsing cavities.
In 1990, an experimental advance was reported by Gaitan and Crum, who produced stable single-bubble sonoluminescence (SBSL). In SBSL, a single bubble trapped in an acoustic standing wave emits a pulse of light with each compression of the bubble within the standing wave. This technique allowed a more systematic study of the phenomenon because it isolated the complex effects into one stable, predictable bubble. It was realized that the temperature inside the bubble was hot enough to melt steel, as seen in an experiment done in 2012; the temperature inside the bubble as it collapsed reached about . Interest in sonoluminescence was renewed when an inner temperature of such a bubble well above was postulated. This temperature is thus far not conclusively proven; rather, recent experiments indicate temperatures around .
Properties
Sonoluminescence can occur when a sound wave of sufficient intensity induces a gaseous cavity within a liquid to collapse quickly. This cavity may take the form of a preexisting bubble or may be generated through a process known as cavitation. Sonoluminescence in the laboratory can be made to be stable so that a single bubble will expand and collapse over and over again in a periodic fashion, emitting a burst of light each time it collapses. For this to occur, a standing acoustic wave is set up within a liquid, and the bubble will sit at a pressure antinode of the standing wave. The frequencies of resonance depend on the shape and size of the container in which the bubble is contained.
Some facts about sonoluminescence:
The light flashes from the bubbles last between 35 and a few hundred picoseconds, with peak intensities of the order of .
The bubbles are very small when they emit light—about in diameter—depending on the ambient fluid (e.g., water) and the gas content of the bubble (e.g., atmospheric air).
SBSL pulses can have very stable periods and positions. In fact, the frequency of light flashes can be more stable than the rated frequency stability of the oscillator making the sound waves driving them. The stability analyses of the bubble, however, show that the bubble itself undergoes significant geometric instabilities due to, for example, the Bjerknes forces and Rayleigh–Taylor instabilities.
The addition of a small amount of noble gas (such as helium, argon, or xenon) to the gas in the bubble increases the intensity of the emitted light.
Spectral measurements have given bubble temperatures in the range from , the exact temperatures depending on experimental conditions including the composition of the liquid and gas. Detection of very high bubble temperatures by spectral methods is limited due to the opacity of liquids to short wavelength light characteristic of very high temperatures.
A study describes a method of determining temperatures based on the formation of plasmas. Using argon bubbles in sulfuric acid, the data shows the presence of ionized molecular oxygen , sulfur monoxide, and atomic argon populating high-energy excited states, which confirms a hypothesis that the bubbles have a hot plasma core. The ionization and excitation energy of dioxygenyl cations, which they observed, is . From this observation, they conclude the core temperatures reach at least —hotter than the surface of the Sun.
Rayleigh–Plesset equation
The dynamics of the motion of the bubble is characterized to a first approximation by the Rayleigh–Plesset equation (named after Lord Rayleigh and Milton Plesset):

R\ddot{R} + \frac{3}{2}\dot{R}^2 = \frac{1}{\rho}\left( p_B - p_\infty - \frac{4\mu\dot{R}}{R} - \frac{2\gamma}{R} \right)

This is an approximate equation that is derived from the Navier–Stokes equations (written in spherical coordinate system) and describes the motion of the radius of the bubble R as a function of time t. Here, μ is the viscosity, p_∞ is the external pressure infinitely far from the bubble, p_B is the internal pressure of the bubble, ρ is the liquid density, and γ is the surface tension. The over-dots represent time derivatives. This equation, though approximate, has been shown to give good estimates on the motion of the bubble under the acoustically driven field except during the final stages of collapse. Both simulation and experimental measurement show that during the critical final stages of collapse, the bubble wall velocity exceeds the speed of sound of the gas inside the bubble. Thus a more detailed analysis of the bubble's motion is needed beyond Rayleigh–Plesset to explore the additional energy focusing that an internally formed shock wave might produce. In the static case, the Rayleigh–Plesset equation simplifies, yielding the Young–Laplace equation.
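To make the dynamics concrete, the following is a minimal numerical sketch that integrates the Rayleigh–Plesset equation above for an acoustically driven bubble. The polytropic model used for the internal gas pressure, and all parameter values (bubble size, drive amplitude and frequency), are illustrative assumptions for demonstration, not figures given in this article:

    import numpy as np
    from scipy.integrate import solve_ivp

    rho = 998.0        # liquid density (water), kg/m^3
    mu = 1.0e-3        # dynamic viscosity, Pa*s
    sigma = 0.0725     # surface tension (gamma in the text), N/m
    p0 = 101325.0      # ambient pressure, Pa
    R0 = 5.0e-6        # assumed equilibrium bubble radius, m
    kappa = 1.4        # assumed polytropic exponent of the gas in the bubble
    pa, f = 0.9 * p0, 26.5e3  # assumed acoustic drive amplitude and frequency

    def p_bubble(R):
        # Polytropic gas pressure, chosen to balance p0 plus the Laplace pressure at R0.
        return (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)

    def rhs(t, y):
        R, Rdot = y
        p_inf = p0 - pa * np.sin(2 * np.pi * f * t)  # driving pressure far from the bubble
        # Rayleigh-Plesset: R*Rddot + 1.5*Rdot**2 = (p_B - p_inf - 4*mu*Rdot/R - 2*sigma/R)/rho
        Rddot = ((p_bubble(R) - p_inf - 4 * mu * Rdot / R - 2 * sigma / R) / rho
                 - 1.5 * Rdot ** 2) / R
        return [Rdot, Rddot]

    # Integrate over three acoustic cycles, starting from rest at the equilibrium radius.
    sol = solve_ivp(rhs, (0.0, 3 / f), [R0, 0.0], method="LSODA", max_step=2e-9)
    print("max radius (um):", round(1e6 * sol.y[0].max(), 2))
    print("min radius (um):", round(1e6 * sol.y[0].min(), 2))

Even this simplified model reproduces the qualitative expansion-and-collapse cycle described above; resolving the violent final stages of collapse, where the bubble wall speed approaches the sound speed of the gas, requires more elaborate models.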
Mechanism of phenomena
The mechanism of the phenomenon of sonoluminescence is unknown. Hypotheses include: hotspot, bremsstrahlung radiation, collision-induced radiation and corona discharges, nonclassical light, proton tunneling, electrodynamic jets and fractoluminescent jets (now largely discredited due to contrary experimental evidence).
In 2002, M. Brenner, S. Hilgenfeldt, and D. Lohse published a 60-page review that contains a detailed explanation of the mechanism. An important factor is that the bubble contains mainly inert noble gas such as argon or xenon (air contains about 1% argon, and the amount dissolved in water is too great; for sonoluminescence to occur, the concentration must be reduced to 20–40% of its equilibrium value) and varying amounts of water vapor. Chemical reactions cause nitrogen and oxygen to be removed from the bubble after about one hundred expansion-collapse cycles. The bubble will then begin to emit light. The light emission of highly compressed noble gas is exploited technologically in the argon flash devices.
During bubble collapse, the inertia of the surrounding water causes high pressure and high temperature, reaching around 10,000 kelvins in the interior of the bubble, causing the ionization of a small fraction of the noble gas present. The amount ionized is small enough for the bubble to remain transparent, allowing volume emission; surface emission would produce more intense light of longer duration, dependent on wavelength, contradicting experimental results. Electrons from ionized atoms interact mainly with neutral atoms, causing thermal bremsstrahlung radiation. As the wave hits a low energy trough, the pressure drops, allowing electrons to recombine with atoms and light emission to cease due to this lack of free electrons. This makes for a 160-picosecond light pulse for argon (even a small drop in temperature causes a large drop in ionization, due to the large ionization energy relative to photon energy). This description is simplified from the literature above, which details various steps of differing duration from 15 microseconds (expansion) to 100 picoseconds (emission).
Computations based on the theory presented in the review produce radiation parameters (intensity and duration time versus wavelength) that match experimental results with errors no larger than expected due to some simplifications (e.g., assuming a uniform temperature in the entire bubble), so it seems the phenomenon of sonoluminescence is at least roughly explained, although some details of the process remain obscure.
Any discussion of sonoluminescence must include a detailed analysis of metastability. Sonoluminescence in this respect is what is physically termed a bounded phenomenon meaning that the sonoluminescence exists in a bounded region of parameter space for the bubble; a coupled magnetic field being one such parameter. The magnetic aspects of sonoluminescence are very well documented.
Other proposals
Quantum explanations
An unusually exotic hypothesis of sonoluminescence, which has received much popular attention, is the Casimir energy hypothesis suggested by noted physicist Julian Schwinger and more thoroughly considered in a paper by Claudia Eberlein of the University of Sussex. Eberlein's paper suggests that the light in sonoluminescence is generated by the vacuum within the bubble in a process similar to Hawking radiation, the radiation generated at the event horizon of black holes. According to this vacuum energy explanation, since quantum theory holds that vacuum contains virtual particles, the rapidly moving interface between water and gas converts virtual photons into real photons. This is related to the Unruh effect or the Casimir effect. The argument has been made that sonoluminescence releases too large an amount of energy and releases the energy on too short a time scale to be consistent with the vacuum energy explanation, although other credible sources argue the vacuum energy explanation might yet prove to be correct.
Nuclear reactions
Some have argued that the Rayleigh–Plesset equation described above is unreliable for predicting bubble temperatures and that actual temperatures in sonoluminescing systems can be far higher than 20,000 kelvins. Some research claims to have measured temperatures as high as 100,000 kelvins and speculates temperatures could reach into the millions of kelvins. Temperatures this high could cause thermonuclear fusion. This possibility is sometimes referred to as bubble fusion and is likened to the implosion design used in the fusion component of thermonuclear weapons.
Experiments in 2002 and 2005 by R. P. Taleyarkhan using deuterated acetone showed measurements of tritium and neutron output consistent with fusion. However, the papers were considered to be of low quality, and a report on the author's scientific misconduct cast further doubt on them, so the claims lost credibility within the scientific community.
On January 27, 2006, researchers at Rensselaer Polytechnic Institute claimed to have produced fusion in sonoluminescence experiments.
Biological sonoluminescence
Pistol shrimp (also called snapping shrimp) produce a type of cavitation luminescence from a collapsing bubble caused by quickly snapping its claw. The animal snaps a specialized claw shut to create a cavitation bubble that generates acoustic pressures of up to 80 kPa at a distance of 4 cm from the claw. As it extends out from the claw, the bubble reaches speeds of 60 miles per hour (97 km/h) and releases a sound reaching 218 decibels. The pressure is strong enough to kill small fish. The light produced is of lower intensity than the light produced by typical sonoluminescence and is not visible to the naked eye. The light and heat produced by the bubble may have no direct significance, as it is the shockwave produced by the rapidly collapsing bubble which these shrimp use to stun or kill prey. However, it is the first known instance of an animal producing light by this effect and was whimsically dubbed "shrimpoluminescence" upon its discovery in 2001. It has subsequently been discovered that another group of crustaceans, the mantis shrimp, contains species whose club-like forelimbs can strike so quickly and with such force as to induce sonoluminescent cavitation bubbles upon impact.
A mechanical device with a 3D-printed snapper claw at five times the actual size was also reported to emit light in a similar fashion; this bioinspired design was based on a snapper-claw molt shed by an Alpheus formosus, the striped snapping shrimp.
See also
List of light sources
Seth Putterman
Sonochemistry
Triboluminescence
References
Further reading
For a "How to" guide for student science projects see:
This article was created in 1996 together with the alternative theory; both were seen by Ms Eberlein. It contains many references to the crucial experimental results in this field.
External links
Detailed description of a sonoluminescence experiment
A description of the effect and experiment, with a diagram of the apparatus
An mpg video of the collapsing bubble (934 kB)
Shrimpoluminescence
Impulse Devices
Applications of sonochemistry
Sound waves size up sonoluminescence
Sonoluminescence: Sound into light
Luminescence
Ultrasound
Light sources
Physical phenomena
Unsolved problems in physics
Articles containing video clips
1934 in science
Bubbles (physics)
Acoustics | Sonoluminescence | [
"Physics",
"Chemistry"
] | 2,890 | [
"Physical phenomena",
"Luminescence",
"Molecular physics",
"Bubbles (physics)",
"Foams",
"Unsolved problems in physics",
"Classical mechanics",
"Acoustics",
"Fluid dynamics"
] |
42,764 | https://en.wikipedia.org/wiki/Hagia%20Sophia | Hagia Sophia (; ; ; ), officially the Hagia Sophia Grand Mosque,(; ), is a mosque and former church serving as a major cultural and historical site in Istanbul, Turkey. The last of three church buildings to be successively erected on the site by the Eastern Roman Empire, it was completed in AD 537, becoming the world's largest interior space and among the first to employ a fully pendentive dome. It is considered the epitome of Byzantine architecture and is said to have "changed the history of architecture". The site was an Eastern rite church from AD 360 to 1453, except for a brief time as a Latin Catholic church between the Fourth Crusade in 1204 and 1261. After the fall of Constantinople in 1453, it served as a mosque until 1935, when it became an interfaith museum, until being controversially reclassified solely as a mosque in 2020.
The current structure was built by the Byzantine emperor Justinian I as the Christian cathedral of Constantinople for the Byzantine Empire between 532 and 537, and was designed by the Greek geometers Isidore of Miletus and Anthemius of Tralles. It was formally called the Church of God's Holy Wisdom, () the third church of the same name to occupy the site, as the prior one had been destroyed in the Nika riots. As the episcopal see of the ecumenical patriarch of Constantinople, it remained the world's largest cathedral for nearly a thousand years, until the Seville Cathedral was completed in 1520.
Hagia Sophia became the paradigmatic Orthodox church form, and its architectural style was emulated by Ottoman mosques a thousand years later. The Hagia Sophia served as an architectural inspiration for many other religious buildings including the Hagia Sophia in Thessaloniki, Panagia Ekatontapiliani, the Şehzade Mosque, the Süleymaniye Mosque, the Rüstem Pasha Mosque and the Kılıç Ali Pasha Complex.
The religious and spiritual centre of the Eastern Orthodox Church for nearly one thousand years, the church was dedicated to the Holy Wisdom. The church has been described as "holding a unique position in the Christian world", and as "an architectural and cultural icon of Byzantine and Eastern Orthodox civilization". It was where the excommunication of Patriarch Michael I Cerularius was officially delivered by Humbert of Silva Candida, the envoy of Pope Leo IX in 1054, an act considered the start of the East–West Schism. In 1204, it was converted during the Fourth Crusade into a Catholic cathedral under the Latin Empire, before being returned to the Eastern Orthodox Church upon the restoration of the Byzantine Empire in 1261. Enrico Dandolo, the doge of Venice who led the Fourth Crusade and the 1204 Sack of Constantinople, was buried in the church.
After the fall of Constantinople to the Ottoman Empire in 1453, it was converted to a mosque by Mehmed the Conqueror and became the principal mosque of Istanbul until the 1616 construction of the Sultan Ahmed Mosque. Upon its conversion, the bells, altar, iconostasis, ambo, and baptistery were removed, while iconography, such as the mosaic depictions of Jesus, Mary, Christian saints and angels were removed or plastered over. Islamic architectural additions included four minarets, a minbar and a mihrab. The patriarchate moved to the Church of the Holy Apostles, which became the city's cathedral.
The complex remained a mosque until 1931, when it was closed to the public for four years. It was re-opened in 1935 as a museum under the secular Republic of Turkey, and the building was Turkey's most visited tourist attraction .
In July 2020, the Council of State annulled the 1934 decision to establish the museum, and the Hagia Sophia was reclassified as a mosque. The 1934 decree was ruled to be unlawful under both Ottoman and Turkish law as Hagia Sophia's , endowed by Sultan Mehmed, had designated the site a mosque. Proponents of the decision argued the Hagia Sophia was the personal property of the sultan. The decision to designate Hagia Sophia as a mosque was highly controversial. It resulted in divided opinions and drew condemnation from the Turkish opposition, UNESCO, the World Council of Churches and the International Association of Byzantine Studies, as well as numerous international leaders, while several Muslim leaders in Turkey and other countries welcomed its conversion into a mosque.
History
Church of Constantius II
The first church on the site was known as the () because of its size compared to the sizes of the contemporary churches in the city. According to the Chronicon Paschale, the church was consecrated on 15 February 360, during the reign of the emperor Constantius II () by the Arian bishop Eudoxius of Antioch. It was built next to the area where the Great Palace was being developed. According to the 5th-century ecclesiastical historian Socrates of Constantinople, the emperor Constantius had "constructed the Great Church alongside that called Irene which because it was too small, the emperor's father [Constantine] had enlarged and beautified". A tradition which is not older than the 7th or 8th century reports that the edifice was built by Constantius' father, Constantine the Great (). Hesychius of Miletus wrote that Constantine built Hagia Sophia with a wooden roof and removed 427 (mostly pagan) statues from the site. The 12th-century chronicler Joannes Zonaras reconciles the two opinions, writing that Constantius had repaired the edifice consecrated by Eusebius of Nicomedia, after it had collapsed. Since Eusebius was the bishop of Constantinople from 339 to 341, and Constantine died in 337, it seems that the first church was erected by Constantius.
The nearby Hagia Irene ("Holy Peace") church was completed earlier and served as cathedral until the Great Church was completed. Besides Hagia Irene, there is no record of major churches in the city-centre before the late 4th century. Rowland Mainstone argued the 4th-century church was not yet known as Hagia Sophia. Though its name as the 'Great Church' implies that it was larger than other Constantinopolitan churches, the only other major churches of the 4th century were the Church of St Mocius, which lay outside the Constantinian walls and was perhaps attached to a cemetery, and the Church of the Holy Apostles.
The church itself is known to have had a timber roof, curtains, columns, and an entrance that faced west. It likely had a narthex and is described as being shaped like a Roman circus. This may mean that it had a U-shaped plan like the basilicas of San Marcellino e Pietro and Sant'Agnese fuori le mura in Rome. However, it may also have been a more conventional three-, four-, or five-aisled basilica, perhaps resembling the original Church of the Holy Sepulchre in Jerusalem or the Church of the Nativity in Bethlehem. The building was likely preceded by an atrium, as in the later churches on the site.
According to Ken Dark and Jan Kostenec, a further remnant of the 4th century basilica may exist in a wall of alternating brick and stone banded masonry immediately to the west of the Justinianic church. The top part of the wall is constructed with bricks stamped with brick-stamps dating from the 5th century, but the lower part is constructed with bricks typical of the 4th century. This wall was probably part of the propylaeum at the west front of both the Constantinian and Theodosian Great Churches.
The building was accompanied by a baptistery and a skeuophylakion. A hypogeum, perhaps with a martyrium above it, was discovered before 1946, and the remnants of a brick wall with traces of marble revetment were identified in 2004. The hypogeum was a tomb which may have been part of the 4th-century church or may have been from the pre-Constantinian city of Byzantium. The skeuophylakion is said by Palladius to have had a circular floor plan, and since some U-shaped basilicas in Rome were funerary churches with attached circular mausolea (the Mausoleum of Constantina and the Mausoleum of Helena), it is possible it originally had a funerary function, though by 405 its use had changed. A later account credited a woman called Anna with donating the land on which the church was built in return for the right to be buried there.
Excavations on the western side of the site of the first church under the propylaeum wall reveal that the first church was built atop a road about wide. According to early accounts, the first Hagia Sophia was built on the site of an ancient pagan temple, although there are no artefacts to confirm this.
The Patriarch of Constantinople John Chrysostom came into a conflict with Empress Aelia Eudoxia, wife of the emperor Arcadius (), and was sent into exile on 20 June 404. During the subsequent riots, this first church was largely burnt down. Palladius noted that the 4th-century skeuophylakion survived the fire. According to Dark and Kostenec, the fire may only have affected the main basilica, leaving the surrounding ancillary buildings intact.
Church of Theodosius II
A second church on the site was ordered by Theodosius II (), who inaugurated it on 10 October 415. The Notitia Urbis Constantinopolitanae, a fifth-century list of monuments, names Hagia Sophia as , while the former cathedral Hagia Irene is referred to as . At the time of Socrates of Constantinople around 440, "both churches [were] enclosed by a single wall and served by the same clergy". Thus, the complex would have encompassed a large area including the future site of the Hospital of Samson. If the fire of 404 destroyed only the 4th-century main basilica church, then the 5th century Theodosian basilica could have been built surrounded by a complex constructed primarily during the fourth century.
During the reign of Theodosius II, the emperor's elder sister, the Augusta Pulcheria () was challenged by the patriarch Nestorius (). The patriarch denied the Augusta access to the sanctuary of the "Great Church", likely on 15 April 428. According to the anonymous Letter to Cosmas, the virgin empress, a promoter of the cult of the Virgin Mary who habitually partook in the Eucharist at the sanctuary of Nestorius's predecessors, claimed right of entry because of her equivalent position to the Theotokos – the Virgin Mary – "having given birth to God". Their theological differences were part of the controversy over the title theotokos that resulted in the Council of Ephesus and the stimulation of Monophysitism and Nestorianism, a doctrine, which like Nestorius, rejects the use of the title. Pulcheria along with Pope Celestine I and Patriarch Cyril of Alexandria had Nestorius overthrown, condemned at the ecumenical council, and exiled.
The area of the western entrance to the Justinianic Hagia Sophia revealed the western remains of its Theodosian predecessor, as well as some fragments of the Constantinian church. German archaeologist Alfons Maria Schneider began conducting archaeological excavations during the mid-1930s, publishing his final report in 1941. Excavations in the area that had once been the 6th-century atrium of the Justinianic church revealed the monumental western entrance and atrium, along with columns and sculptural fragments from both 4th- and 5th-century churches. Further digging was abandoned for fear of harming the structural integrity of the Justinianic building, but parts of the excavation trenches remain uncovered, laying bare the foundations of the Theodosian building.
The basilica was built by architect Rufinus. The church's main entrance, which may have had gilded doors, faced west, and there was an additional entrance to the east. There was a central pulpit and likely an upper gallery, possibly employed as a matroneum (women's section). The exterior was decorated with elaborate carvings of rich Theodosian-era designs, fragments of which have survived, while the floor just inside the portico was embellished with polychrome mosaics. The surviving carved gable end from the centre of the western façade is decorated with a cross-roundel. Fragments of a frieze of reliefs with 12 lambs representing the 12 apostles also remain; unlike Justinian's 6th-century church, the Theodosian Hagia Sophia had both colourful floor mosaics and external decorative sculpture.
Surviving stone fragments of the structure show that there was vaulting, at least at the western end. The Theodosian building had a monumental propylaeum hall with a portico that may account for this vaulting, which was thought by the original excavators in the 1930s to be part of the western entrance of the church itself. The propylaeum opened onto an atrium which lay in front of the basilica church itself. Preceding the propylaeum was a steep monumental staircase following the contours of the ground as it sloped away westwards in the direction of the Strategion, the Basilica, and the harbours of the Golden Horn. This arrangement would have resembled the steps outside the atrium of the Constantinian Old St Peter's Basilica in Rome. Near the staircase, there was a cistern, perhaps to supply a fountain in the atrium or for worshippers to wash with before entering.
The 4th-century skeuophylakion was replaced in the 5th century by the present-day structure, a rotunda constructed of banded masonry in the lower two levels and of plain brick masonry in the third. Originally this rotunda, probably employed as a treasury for liturgical objects, had a second-floor internal gallery accessed by an external spiral staircase and two levels of niches for storage. A further row of windows with marble window frames on the third level remains bricked up. The gallery was supported on monumental consoles with carved acanthus designs, similar to those used on the late 5th-century Column of Leo. A large lintel of the skeuophylakion's western entrance – bricked up during the Ottoman era – was discovered inside the rotunda when it was archaeologically cleared to its foundations in 1979, during which time the brickwork was also repointed. The skeuophylakion was again restored in 2014 by the Vakıflar.
A fire started during the tumult of the Nika Revolt, which had begun nearby in the Hippodrome of Constantinople, and the second Hagia Sophia was burnt to the ground on 13–14 January 532. The court historian Procopius wrote:
Church of Justinian I (current structure)
On 23 February 532, only a few weeks after the destruction of the second basilica, Emperor Justinian I inaugurated the construction of a third and entirely different basilica, larger and more majestic than its predecessors. Justinian appointed two architects, mathematician Anthemius of Tralles and geometer and engineer Isidore of Miletus, to design the building.
Construction of the church began in 532 during the short tenure of Phocas as praetorian prefect. Although Phocas had been arrested in 529 as a suspected practitioner of paganism, he replaced John the Cappadocian after the Nika Riots saw the destruction of the Theodosian church. According to John the Lydian, Phocas was responsible for funding the initial construction of the building with 4,000 Roman pounds of gold, but he was dismissed from office in October 532. John the Lydian wrote that Phocas had acquired the funds by moral means, but Evagrius Scholasticus later wrote that the money had been obtained unjustly.
According to Anthony Kaldellis, both of Hagia Sophia's architects named by Procopius were associated with the school of the pagan philosopher Ammonius of Alexandria. It is possible that both they and John the Lydian considered Hagia Sophia a great temple for the supreme Neoplatonist deity who manifested through light and the sun. John the Lydian describes the church as the "temenos of the Great God".
Originally the exterior of the church was covered with marble veneer, as indicated by remaining pieces of marble and surviving attachments for lost panels on the building's western face. The white marble cladding of much of the church, together with gilding of some parts, would have given Hagia Sophia a shimmering appearance quite different from the brick- and plaster-work of the modern period, and would have significantly increased its visibility from the sea. The cathedral's interior surfaces were sheathed with polychrome marbles, green and white with purple porphyry, and gold mosaics. The exterior was clad in stucco that was tinted yellow and red during the 19th-century restorations by the Fossati architects.
The construction is described by Procopius in On Buildings. Columns and other marble elements were imported from throughout the Mediterranean, although the columns were once thought to be spoils from cities such as Rome and Ephesus. Even though they were made specifically for Hagia Sophia, they vary in size. More than ten thousand people were employed during the construction process. This new church was contemporaneously recognized as a major work of architecture. Outside the church was an elaborate array of monuments around the bronze-plated Column of Justinian, topped by an equestrian statue of the emperor which dominated the Augustaeum, the open square outside the church which connected it with the Great Palace complex through the Chalke Gate. At the edge of the Augustaeum was the Milion and the Regia, the first stretch of Constantinople's main thoroughfare, the Mese. Also facing the Augustaeum were the enormous Constantinian thermae, the Baths of Zeuxippus, and the Justinianic civic basilica under which was the vast cistern known as the Basilica Cistern. On the opposite side of Hagia Sophia was the former cathedral, Hagia Irene.
Referring to the destruction of the Theodosian Hagia Sophia and comparing the new church with the old, Procopius lauded the Justinianic building, writing in De aedificiis:
Upon seeing the finished building, the Emperor reportedly said: "Solomon, I have surpassed thee".
Justinian and Patriarch Menas inaugurated the new basilica on 27 December 537, 5 years and 10 months after construction started, with much pomp. Hagia Sophia was the seat of the Patriarchate of Constantinople and a principal setting for Byzantine imperial ceremonies, such as coronations. The basilica offered sanctuary from persecution to criminals, although there was disagreement about whether Justinian had intended for murderers to be eligible for asylum.
Earthquakes in August 553 and on 14 December 557 caused cracks in the main dome and eastern semi-dome. According to the Chronicle of John Malalas, during a subsequent earthquake on 7 May 558, the eastern semi-dome collapsed, destroying the ambon, altar, and ciborium. The collapse was due mainly to the excessive bearing load and to the enormous shear load of the dome, which was too flat. These caused the deformation of the piers which sustained the dome. Justinian ordered an immediate restoration. He entrusted it to Isidorus the Younger, nephew of Isidore of Miletus, who used lighter materials. The entire vault had to be taken down and rebuilt 20 Byzantine feet higher than before, giving the building its current interior height. Moreover, Isidorus changed the dome type, erecting a ribbed dome with pendentives whose diameter was between 32.7 and 33.5 m. Under Justinian's orders, eight Corinthian columns were disassembled from Baalbek, Lebanon and shipped to Constantinople around 560. This reconstruction, which gave the church its present 6th-century form, was completed in 562. The poet Paul the Silentiary composed an ekphrasis, or long visual poem, for the re-dedication of the basilica presided over by Patriarch Eutychius on 24 December 562. Paul the Silentiary's poem is conventionally known under the Latin title Descriptio Sanctae Sophiae, and he was also author of another ekphrasis on the ambon of the church, the Descriptio Ambonis.
According to the history of the patriarch Nicephorus I and the chronicler Theophanes the Confessor, various liturgical vessels of the cathedral were melted down on the order of the emperor Heraclius after the capture of Alexandria and Roman Egypt by the Sasanian Empire during the Byzantine–Sasanian War of 602–628. Theophanes states that these were made into gold and silver coins, and a tribute was paid to the Avars. The Avars attacked the extramural areas of Constantinople in 623, causing the Byzantines to move the "garment" relic of Mary, mother of Jesus to Hagia Sophia from its usual shrine of the Church of the Theotokos at Blachernae just outside the Theodosian Walls. On 14 May 626, the Scholae Palatinae, an elite body of soldiers, protested in Hagia Sophia against a planned increase in bread prices, after a stoppage of the Cura Annonae rations resulting from the loss of the grain supply from Egypt. The Persians under Shahrbaraz and the Avars together laid the siege of Constantinople in 626; according to the Chronicon Paschale, on 2 August 626, Theodore Syncellus, a deacon and presbyter of Hagia Sophia, was among those who negotiated unsuccessfully with the khagan of the Avars. A homily, attributed by existing manuscripts to Theodore Syncellus and possibly delivered on the anniversary of the event, describes the translation of the Virgin's garment and its ceremonial re-translation to Blachernae by the patriarch Sergius I after the threat had passed. Another eyewitness account of the Avar–Persian siege was written by George of Pisidia, a deacon of Hagia Sophia and an administrative official for the patriarchate, originally from Antioch in Pisidia. Both George and Theodore, likely members of Sergius's literary circle, attribute the defeat of the Avars to the intervention of the Theotokos, a belief that strengthened in the following centuries.
In 726, the emperor Leo the Isaurian issued a series of edicts against the veneration of images, ordering the army to destroy all icons – ushering in the period of Byzantine iconoclasm. At that time, all religious pictures and statues were removed from the Hagia Sophia. Following a brief hiatus during the reign of Empress Irene (797–802), the iconoclasts returned. Emperor Theophilus had two-winged bronze doors with his monograms installed at the southern entrance of the church.
The basilica suffered damage, first in a great fire in 859, and again in an earthquake on 8 January 869 that caused the collapse of one of the half-domes. Emperor Basil I ordered repair of the tympana, arches, and vaults.
In his book De caerimoniis aulae Byzantinae ("Book of Ceremonies"), the emperor Constantine VII wrote a detailed account of the ceremonies held in the Hagia Sophia by the emperor and the patriarch.
Early in the 10th century, the pagan ruler of the Kievan Rus' sent emissaries to his neighbors to learn about Judaism, Islam, and Roman and Orthodox Christianity. After visiting Hagia Sophia his emissaries reported back: "We were led into a place where they serve their God, and we did not know where we were, in heaven or on earth."
In the 940s or 950s, probably around 954 or 955, after the Rus'–Byzantine War of 941 and the death of the Grand Prince of Kiev, Igor I, his widow Olga of Kiev – regent for her infant son Sviatoslav I – visited the emperor Constantine VII and was received as queen of the Rus' in Constantinople. She was probably baptized in Hagia Sophia's baptistery, taking the name of the reigning augusta, Helena Lecapena, and receiving the titles zōstē patrikía and the styles of archontissa and hegemon of the Rus'. Her baptism was an important step towards the Christianization of the Kievan Rus', though the emperor's treatment of her visit in De caerimoniis does not mention baptism. Olga is deemed a saint and equal-to-the-apostles in the Eastern Orthodox Church. According to an early 14th-century source, the second church in Kiev, Saint Sophia's, was founded in anno mundi 6460 in the Byzantine calendar, or about AD 952. The name of this future cathedral of Kiev probably commemorates Olga's baptism at Hagia Sophia.
After the great earthquake of 25 October 989, which collapsed the western dome arch, Emperor Basil II called on the Armenian architect Trdat, creator of the Cathedral of Ani, to direct the repairs. He re-erected and reinforced the fallen dome arch, and rebuilt the west side of the dome with 15 dome ribs. The extent of the damage required six years of repair and reconstruction; the church was re-opened on 13 May 994. At the end of the reconstruction, the church's decorations were renovated, including the addition of four immense paintings of cherubs; a new depiction of Christ on the dome; a burial cloth of Christ shown on Fridays, and on the apse a new depiction of the Virgin Mary holding Jesus, between the apostles Peter and Paul. On the great side arches were painted the prophets and the teachers of the church.
According to the 13th-century Greek historian Niketas Choniates, the emperor John II Comnenus celebrated a revived Roman triumph after his victory over the Danishmendids at the siege of Kastamon in 1133. After proceeding through the streets on foot carrying a cross, with a silver quadriga bearing the icon of the Virgin Mary, the emperor participated in a ceremony at the cathedral before entering the imperial palace. In 1168, another triumph was held by the emperor Manuel I Comnenus, again proceeding with a gilded silver quadriga bearing the icon of the Virgin from the now-demolished East Gate (or Gate of St Barbara) in the Propontis Wall, to Hagia Sophia for a thanksgiving service, and then to the imperial palace.
In 1181, the daughter of the emperor Manuel I, Maria Comnena, and her husband, the caesar Renier of Montferrat, fled to Hagia Sophia at the culmination of their dispute with the empress Maria of Antioch, regent for her son, the emperor Alexius II Comnenus. Maria Comnena and Renier occupied the cathedral with the support of the patriarch, refusing the imperial administration's demands for a peaceful departure. According to Niketas Choniates, they "transformed the sacred courtyard into a military camp", garrisoned the entrances to the complex with locals and mercenaries, and despite the strong opposition of the patriarch, made the "house of prayer into a den of thieves or a well-fortified and precipitous stronghold, impregnable to assault", while "all the dwellings adjacent to Hagia Sophia and adjoining the Augusteion were demolished by [Maria's] men". A battle ensued in the Augustaion and around the Milion, during which the defenders fought from the "gallery of the Catechumeneia (also called the Makron)" facing the Augusteion, from which they eventually retreated and took up positions in the exonarthex of Hagia Sophia itself. At this point, "the patriarch was anxious lest the enemy troops enter the temple, with unholy feet trample the holy floor, and with hands defiled and dripping with blood still warm plunder the all-holy dedicatory offerings". After a successful sally by Renier and his knights, Maria requested a truce, the imperial assault ceased, and an amnesty was negotiated by the megas doux Andronikos Kontostephanos and the megas hetaireiarches John Doukas. Choniates compared the preservation of the cathedral to the efforts made by the 1st-century emperor Titus to avoid the destruction of the Second Temple during the siege of Jerusalem in the First Jewish–Roman War. Choniates also reports that in 1182, a white hawk wearing jesses was seen to fly from the east to Hagia Sophia, flying three times from the "building of the Thōmaitēs" (a basilica erected on the southeastern side of the Augustaion) to the Palace of the Kathisma in the Great Palace, where new emperors were acclaimed. This was supposed to presage the end of the reign of Andronicus I Comnenus.
Choniates further writes that in 1203, during the Fourth Crusade, the emperors Isaac II Angelus and Alexius IV Angelus stripped Hagia Sophia of all gold ornaments and silver oil-lamps in order to pay off the Crusaders who had ousted Alexius III Angelus and helped Isaac return to the throne. Upon the subsequent Sack of Constantinople in 1204, the church was further ransacked and desecrated by the Crusaders, as described by Choniates, though he did not witness the events in person. According to his account, composed at the court of the rump Empire of Nicaea, Hagia Sophia was stripped of its remaining metal ornaments, its altar was smashed into pieces, and a "woman laden with sins" sang and danced on the synthronon. He adds that mules and donkeys were brought into the cathedral's sanctuary to carry away the gilded silver plating of the bema, the ambo, and the doors and other furnishings, and that one of them slipped on the marble floor and was accidentally disembowelled, further contaminating the place. According to Ali ibn al-Athir, whose treatment of the Sack of Constantinople was probably dependent on a Christian source, the Crusaders massacred some clerics who had surrendered to them. Much of the interior was damaged and would not be repaired until its return to Orthodox control in 1261. The sack of Hagia Sophia, and Constantinople in general, remained a sore point in Catholic–Eastern Orthodox relations.
During the Latin occupation of Constantinople (1204–1261), the church became a Latin Catholic cathedral. Baldwin I of Constantinople was crowned emperor on 16 May 1204 in Hagia Sophia in a ceremony which closely followed Byzantine practices. Enrico Dandolo, the Doge of Venice who commanded the sack and invasion of the city by the Latin Crusaders in 1204, is buried inside the church, probably in the upper eastern gallery. In the 19th century, an Italian restoration team placed a cenotaph marker, frequently mistaken for a medieval artifact, near the probable location; it is still visible today. The original tomb was destroyed by the Ottomans during the conversion of the church into a mosque.
Upon the capture of Constantinople in 1261 by the Empire of Nicaea and the emperor Michael VIII Palaeologus, the church was in a dilapidated state. In 1317, emperor Andronicus II Palaeologus ordered four new buttresses to be built in the eastern and northern parts of the church, financing them with the inheritance of his late wife, Irene of Montferrat (1314). New cracks developed in the dome after the earthquake of October 1344, and several parts of the building collapsed on 19 May 1346. Repairs by architects Astras and Peralta began in 1354.
On 12 December 1452, Isidore of Kiev proclaimed in Hagia Sophia the long-anticipated ecclesiastical union between the western Catholic and eastern Orthodox Churches as decided at the Council of Florence and decreed by the papal bull Laetentur Caeli, though it would be short-lived. The union was unpopular among the Byzantines, who had already expelled the Patriarch of Constantinople, Gregory III, for his pro-union stance. A new patriarch was not installed until after the Ottoman conquest. According to the Greek historian Doukas, the Hagia Sophia was tainted by these Catholic associations, and the anti-union Orthodox faithful avoided the cathedral, considering it to be a haunt of demons and a "Hellenic" temple of Roman paganism. Doukas also notes that after the Laetentur Caeli was proclaimed, the Byzantines dispersed discontentedly to nearby venues where they drank toasts to the Hodegetria icon, which had, according to late Byzantine tradition, interceded to save them in the former sieges of Constantinople by the Avar Khaganate and the Umayyad Caliphate.
According to Nestor Iskander's Tale on the Taking of Tsargrad, the Hagia Sophia was the focus of an alarming omen interpreted as the Holy Spirit abandoning Constantinople on 21 May 1453, in the final days of the Siege of Constantinople. The sky lit up, illuminating the city, and "many people gathered and saw on the Church of the Wisdom, at the top of the window, a large flame of fire issuing forth. It encircled the entire neck of the church for a long time. The flame gathered into one; its flame altered, and there was an indescribable light. At once it took to the sky. ... The light itself has gone up to heaven; the gates of heaven were opened; the light was received; and again they were closed." This phenomenon was perhaps St Elmo's fire induced by gunpowder smoke and unusual weather. The author relates that the fall of the city to "Mohammadenism" was foretold in an omen seen by Constantine the Great – an eagle fighting with a snake – which also signified that "in the end Christianity will overpower Mohammedanism, will receive the Seven Hills, and will be enthroned in it".
The eventual fall of Constantinople had long been predicted in apocalyptic literature. A reference to the destruction of a city founded on seven hills in the Book of Revelation was frequently understood to be about Constantinople, and the Apocalypse of Pseudo-Methodius had predicted an "Ishmaelite" conquest of the Roman Empire. In this text, the Muslim armies reach the Forum Bovis before being turned back by divine intervention; in later apocalyptic texts, the climactic turn takes place at the Column of Theodosius closer to Hagia Sophia; in others, it occurs at the Column of Constantine, which is closer still. Hagia Sophia is mentioned in a hagiography of uncertain date detailing the life of the Eastern Orthodox saint Andrew the Fool. The text is self-attributed to Nicephorus, a priest of Hagia Sophia, and contains a description of the end time in the form of a dialogue, in which the interlocutor, upon being told by the saint that Constantinople will be sunk in a flood and that "the waters as they gush forth will irresistibly deluge her and cover her and surrender her to the terrifying and immense sea of the abyss", says "some people say that the Great Church of God will not be submerged with the city but will be suspended in the air by an invisible power". The reply is given that "When the whole city sinks into the sea, how can the Great Church remain? Who will need her? Do you think God dwells in temples made with hands?" The Column of Constantine, however, is prophesied to endure.
From the time of Procopius in the reign of Justinian, the equestrian imperial statue on the Column of Justinian in the Augustaion beside Hagia Sophia, which gestured towards Asia with its right hand, was understood to represent the emperor holding back the threat to the Romans from the Sasanian Empire in the Roman–Persian Wars, while the orb or globus cruciger held in the statue's left hand was an expression of the global power of the Roman emperor. Subsequently, in the Arab–Byzantine wars, the threat held back by the statue became the Umayyad Caliphate, and later, the statue was thought to be fending off the advance of the Turks. The identity of the emperor was often confused with that of other famous saint-emperors like Theodosius I and Heraclius. The orb was frequently referred to as an apple in foreigners' accounts of the city, and it was interpreted in Greek folklore as a symbol of the Turks' mythological homeland in Central Asia, the "Lone Apple Tree". The orb fell to the ground in 1316 and was replaced by 1325, but while it was still in place around 1412, by the time Johann Schiltberger saw the statue in 1427, the "empire-apple" had fallen to the earth. An attempt to raise it again in 1435 failed, and this amplified the prophecies of the city's fall. For the Turks, the "red apple" came to symbolize Constantinople itself and subsequently the military supremacy of the Islamic caliphate over the Christian empire. In Niccolò Barbaro's account of the fall of the city in 1453, the Justinianic monument was interpreted in the last days of the siege as representing the city's founder Constantine the Great, indicating "this is the way my conqueror will come".
According to Laonicus Chalcocondyles, Hagia Sophia was a refuge for the population during the city's capture. Despite the ill-repute and empty state of Hagia Sophia after December 1452, Doukas writes that after the Theodosian Walls were breached, the Byzantines took refuge there as the Turks advanced through the city: "All the women and men, monks, and nuns ran to the Great Church. They, both men and women, were holding in their arms their infants. What a spectacle! That street was crowded, full of human beings." He attributes their change of heart to a prophecy.
In accordance with the traditional custom of the time, Sultan Mehmed II allowed his troops and his entourage three full days of unbridled pillage and looting in the city shortly after it was captured. This period saw the destruction of many Orthodox churches; Hagia Sophia itself was looted as the invaders believed it to contain the greatest treasures of the city. Shortly after the defence of the Walls of Constantinople collapsed and the victorious Ottoman troops entered the city, the pillagers and looters made their way to the Hagia Sophia and battered down its doors before storming inside. Once the three days passed, Mehmed was to claim the city's remaining contents for himself. However, by the end of the first day, he proclaimed that the looting should cease as he felt profound sadness when he toured the looted and enslaved city.
Throughout the siege of Constantinople, the trapped people of the city participated in the Divine Liturgy and the Prayer of the Hours at the Hagia Sophia, and the church was a safe-haven and a refuge for many of those who were unable to contribute to the city's defence, including women, children, elderly, the sick and the wounded. As they were trapped in the church, the many congregants and other refugees inside became spoils-of-war to be divided amongst the triumphant invaders. The building was desecrated and looted, and those who sought shelter within the church were enslaved. While most of the elderly and the infirm, injured, and sick were killed, the remainder (mainly teenage males and young boys) were chained and sold into slavery.
Mosque (1453–1935)
Constantinople fell to the attacking Ottoman forces on 29 May 1453. Sultan Mehmed II entered the city and performed the Friday prayer and khutbah (sermon) in Hagia Sophia, and this action marked the official conversion of Hagia Sophia into a mosque. The church's priests and religious personnel continued to perform Christian rites, prayers, and ceremonies until they were compelled to stop by the invaders. When Mehmed and his entourage entered the church, he ordered that it be converted into a mosque immediately. One of the ʿulamāʾ (Islamic scholars) present climbed onto the church's ambo and recited the shahada ("There is no god but Allah, and Muhammad is his messenger"), thus marking the beginning of the conversion of the church into a mosque. Mehmed is reported to have taken a sword to a soldier who tried to pry up one of the paving slabs of the Proconnesian marble floor.
As described by Western visitors before 1453, such as the Córdoban nobleman Pero Tafur and the Florentine geographer Cristoforo Buondelmonti, the church was in a dilapidated state, with several of its doors fallen from their hinges. Mehmed II ordered a renovation of the building. Mehmed attended the first Friday prayer in the mosque on 1 June 1453. Aya Sofya became the first imperial mosque of Istanbul. Most of the existing houses in the city and the area of the future Topkapı Palace were endowed to the corresponding waqf. From 1478, 2,360 shops, 1,300 houses, 4 caravanserais, 30 boza shops, and 23 shops of sheep heads and trotters gave their income to the foundation. Through the imperial charters of 1520 (AH 926) and 1547 (AH 954), shops and parts of the Grand Bazaar and other markets were added to the foundation.
Before 1481, a small minaret was erected on the southwest corner of the building, above the stair tower. Mehmed's successor Bayezid II () later built another minaret at the northeast corner. One of the minarets collapsed after the earthquake of 1509, and around the middle of the 16th century they were both replaced by two diagonally opposite minarets built at the east and west corners of the edifice. In 1498, Bernardo Bonsignori was the last Western visitor to Hagia Sophia to report seeing the ancient Justinianic floor; shortly afterwards the floor was covered over with carpet and not seen again until the 19th century.
In the 16th century, Sultan Suleiman the Magnificent brought two colossal candlesticks from his conquest of the Kingdom of Hungary and placed them on either side of the mihrab. During Suleiman's reign, the mosaics above the narthex and imperial gates depicting Jesus, Mary, and various Byzantine emperors were covered by whitewash and plaster, which were removed in 1930 under the Turkish Republic.
During the reign of Selim II, the building started showing signs of fatigue and was extensively strengthened with the addition of structural supports to its exterior by Ottoman architect Mimar Sinan, who was also an earthquake engineer. In addition to strengthening the historic Byzantine structure, Sinan built two additional large minarets at the western end of the building, the original sultan's lodge, and the türbe (mausoleum) of Selim II to the southeast of the building in 1576–1577 (AH 984). In order to do that, parts of the Patriarchate at the south corner of the building were pulled down the previous year. Moreover, the golden crescent was mounted on the top of the dome, and a respect zone 35 arşın (about 24 m) wide was imposed around the building, leading to the demolition of all houses within the perimeter. The türbe became the location of the tombs of 43 Ottoman princes. Murad III imported two large alabaster Hellenistic urns from Pergamon (Bergama) and placed them on two sides of the nave.
In 1594 (AH 1004) Mimar (court architect) Davud Ağa built the türbe of Murad III, where the Sultan and his valide, Safiye Sultan, were buried. The octagonal mausoleum of their son Mehmed III and his valide was built next to it in 1608 (AH 1017) by royal architect Dalgiç Mehmet Ağa. His son Mustafa I converted the baptistery into his türbe.
In 1717, under the reign of Sultan Ahmed III, the crumbling plaster of the interior was renovated, contributing indirectly to the preservation of many mosaics, which otherwise would have been destroyed by mosque workers. In fact, it was usual for the mosaic's tesserae—believed to be talismans—to be sold to visitors. Sultan Mahmud I ordered the restoration of the building in 1739 and added a medrese (a Koranic school, subsequently the library of the museum), an imaret (soup kitchen for distribution to the poor) and a library, and in 1740 he added a Şadirvan (fountain for ritual ablutions), thus transforming it into a külliye, or social complex. At the same time, a new sultan's lodge and a new mihrab were built inside.
Renovation of 1847–1849
The 19th-century restoration of the Hagia Sophia was ordered by Sultan Abdulmejid I and completed between 1847 and 1849 by eight hundred workers under the supervision of the Swiss-Italian architect brothers Gaspare and Giuseppe Fossati. The brothers consolidated the dome with a restraining iron chain, strengthened the vaults, straightened the columns, and revised the decoration of the exterior and the interior of the building. The mosaics in the upper gallery were exposed and cleaned, although many were re-covered "for protection against further damage".
Eight new gigantic circular-framed discs or medallions were hung from the cornice, on each of the four piers and at either side of the apse and the west doors. These were designed by the calligrapher Kazasker Mustafa Izzet Efendi (1801–1877) and painted with the names of Allah, Muhammad, the Rashidun (the first four caliphs: Abu Bakr, Umar, Uthman and Ali), and the two grandsons of Muhammad: Hasan and Husayn, the sons of Ali. In 1850, the Fossati architects built a new maqsura or caliphal loge, with Neo-Byzantine columns and an Ottoman–Rococo style marble grille, connecting to the royal pavilion behind the mosque. The new maqsura was built at the extreme east end of the northern aisle, next to the north-eastern pier. The existing maqsura in the apse, near the mihrab, was demolished. A new entrance was constructed for the sultan. The Fossati brothers also renovated the minbar and mihrab.
Outside the main building, the minarets were repaired and altered so that they were of equal height. A clock building was built by the Fossatis for use by the muwaqqit (the mosque timekeeper), and a new madrasa (Islamic school) was also constructed under their direction. When the restoration was finished, the mosque was re-opened with a ceremony on 13 July 1849. An edition of lithographs from drawings made during the Fossatis' work on Hagia Sophia was published in London in 1852, entitled: Aya Sophia of Constantinople as Recently Restored by Order of H.M. The Sultan Abdulmejid.
Occupation of Istanbul (1918–1923)
In the aftermath of the defeat of the Ottoman Empire in World War I, Constantinople was occupied by British, French, Italian, and Greek forces. The Greek Orthodox Christian military priest Eleftherios Noufrakis performed an unauthorized Divine Liturgy in the Hagia Sophia during the occupation, the only such instance since the 1453 fall of Constantinople. The anti-occupation Sultanahmet demonstrations were held next to Hagia Sophia from March to May 1919. In Greece, the 500 drachma banknotes issued in 1923 featured Hagia Sophia.
Museum (1935–2020)
In 1935, the first Turkish President and founder of the Republic of Turkey, Mustafa Kemal Atatürk, transformed the building into a museum. During the Second World War, the minarets of the museum housed MG 08 machine guns. The carpet and the layer of mortar underneath were removed and marble floor decorations such as the omphalion appeared for the first time since the Fossatis' restoration, when the white plaster covering many of the mosaics had been removed. Due to neglect, the condition of the structure continued to deteriorate, prompting the World Monuments Fund (WMF) to include the Hagia Sophia in their 1996 and 1998 Watch Lists. During this time period, the building's copper roof had cracked, causing water to leak down over the fragile frescoes and mosaics. Moisture entered from below as well. Rising ground water increased the level of humidity within the monument, creating an unstable environment for stone and paint. The WMF secured a series of grants from 1997 to 2002 for the restoration of the dome. The first stage of work involved the structural stabilization and repair of the cracked roof, which was undertaken with the participation of the Turkish Ministry of Culture and Tourism. The second phase, the preservation of the dome's interior, afforded the opportunity to employ and train young Turkish conservators in the care of mosaics. By 2006, the WMF project was complete, though many areas of Hagia Sophia continue to require significant stability improvement, restoration, and conservation.
In 2014, Hagia Sophia was the second most visited museum in Turkey, attracting almost 3.3 million visitors annually.
While use of the complex as a place of worship (mosque or church) was strictly prohibited, in 1991 the Turkish government allowed the allocation of a pavilion in the museum complex (Ayasofya Müzesi Hünkar Kasrı) for use as a prayer room, and, since 2013, two of the museum's minarets had been used for voicing the call to prayer (the ezan) regularly.
From the early 2010s, several campaigns and government high officials, notably Turkey's deputy prime minister Bülent Arınç in November 2013, demanded the Hagia Sophia be converted back into a mosque. In 2015, Pope Francis publicly acknowledged the Armenian genocide, which is officially denied in Turkey. In response, the mufti of Ankara, Mefail Hızlı, said he believed the Pope's remarks would accelerate the conversion of Hagia Sophia into a mosque.
On 1 July 2016, Muslim prayers were held again in the Hagia Sophia for the first time in 85 years. That November, a Turkish NGO, the Association for the Protection of Historic Monuments and the Environment, filed a lawsuit seeking to convert the museum into a mosque. The court decided it should stay as a 'monument museum'. In October 2016, Turkey's Directorate of Religious Affairs (Diyanet) appointed, for the first time in 81 years, a designated imam, Önder Soy, to the Hagia Sophia mosque (Ayasofya Camii Hünkar Kasrı), located at the Hünkar Kasrı, a pavilion for the sultans' private ablutions. Since then, the adhan has been regularly called out from all four of the Hagia Sophia's minarets five times a day.
On 13 May 2017, a large group of people, organized by the Anatolia Youth Association (AGD), gathered in front of Hagia Sophia and prayed the morning prayer with a call for the re-conversion of the museum into a mosque. On 21 June 2017 the Directorate of Religious Affairs organized a special programme, broadcast live by state-run television TRT, which included the recitation of the Quran and prayers in Hagia Sophia, to mark the Laylat al-Qadr.
Reversion to mosque (2018–present)
Since 2018, Turkish president Recep Tayyip Erdoğan had talked of reverting the status of the Hagia Sophia back to a mosque, as a populist gesture. On 31 March 2018 Erdoğan recited the first verse of the Quran in the Hagia Sophia, dedicating the prayer to the "souls of all who left us this work as inheritance, especially Istanbul's conqueror", strengthening the political movement to make the Hagia Sophia a mosque once again, reversing Atatürk's measure of turning the Hagia Sophia into a secular museum. In March 2019 Erdoğan said that he would change the status of Hagia Sophia from a museum to a mosque, adding that it had been a "very big mistake" to turn it into a museum. Because Hagia Sophia is a UNESCO World Heritage site, such a change would require approval from UNESCO's World Heritage Committee. In late 2019 Erdoğan's office took over the administration and upkeep of the nearby Topkapı Palace Museum, transferring responsibility for the site from the Ministry of Culture and Tourism by presidential decree.
In 2020, Turkey's government celebrated the 567th anniversary of the Conquest of Constantinople with an Islamic prayer in Hagia Sophia. Erdoğan said during a televised broadcast "Al-Fath surah will be recited and prayers will be done at Hagia Sophia as part of conquest festival". In May, during the anniversary events, passages from the Quran were read in the Hagia Sophia. Greece condemned this action, while Turkey in response accused Greece of making "futile and ineffective statements".
In June, the head of Turkey's Directorate of Religious Affairs said that "we would be very happy to open Hagia Sophia for worship" and that if it happened "we will provide our religious services as we do in all our mosques". On 25 June, John Haldon, president of the International Association of Byzantine Studies, wrote an open letter to Erdoğan asking that he "consider the value of keeping the Aya Sofya as a museum".
On 10 July 2020, the decision of the Council of Ministers from 1935 to transform the Hagia Sophia into a museum was annulled by the Council of State, which decreed that Hagia Sophia could not be used "for any other purpose" than being a mosque and that the building was the property of the Fatih Sultan Mehmet Han Foundation. The council reasoned that the Ottoman sultan Mehmed II, who conquered Istanbul, had endowed the property for public use as a mosque free of charge, and that its status was not within the jurisdiction of the Parliament or a ministerial council. Despite secular and global criticism, Erdoğan signed a decree annulling the Hagia Sophia's museum status, reverting it to a mosque. The call to prayer was broadcast from the minarets shortly after the announcement of the change and rebroadcast by major Turkish news networks. The Hagia Sophia Museum's social media channels were taken down the same day, with Erdoğan announcing at a press conference that prayers would be held there from 24 July. A presidential spokesperson said it would become a working mosque, open to anyone, similar to the Parisian churches of Sacré-Cœur and Notre-Dame. The spokesperson also said that the change would not affect the status of the Hagia Sophia as a UNESCO World Heritage site, and that "Christian icons" within it would continue to be protected. Earlier the same day, before the final decision, the Turkish Finance and Treasury Minister Berat Albayrak and the Justice Minister Abdulhamit Gül had expressed their expectations of opening the Hagia Sophia to worship for Muslims. Mustafa Şentop, Speaker of Turkey's Grand National Assembly, said "a longing in the heart of our nation has ended".
A presidential spokesperson claimed that all political parties in Turkey supported Erdoğan's decision, but the Peoples' Democratic Party had previously released a statement denouncing the decision, saying "decisions on human heritage cannot be made on the basis of political games played by the government". The mayor of Istanbul, Ekrem İmamoğlu, said that he supports the conversion "as long as it benefits Turkey", adding that he felt that Hagia Sophia has been a mosque since 1453. Ali Babacan attacked the policy of his former ally Erdoğan, saying the Hagia Sophia issue "has come to the agenda now only to cover up other problems". Orhan Pamuk, Turkish novelist and Nobel laureate, publicly denounced the move, saying "Kemal Atatürk changed... Hagia Sophia from a mosque to a museum, honouring all previous Greek Orthodox and Latin Catholic history, making it as a sign of Turkish modern secularism".
On 17 July, Erdoğan announced that the first prayers in the Hagia Sophia would be open to between 1,000 and 1,500 worshippers, stating that Turkey had sovereign power over Hagia Sophia and was not obligated to bend to international opinion.
While the Hagia Sophia has now been rehallowed as a mosque, the place remains open for visitors outside of prayer times. Entrance was initially free, but starting from 15 January 2024, foreign nationals have to pay an entrance fee.
On 22 July, a turquoise-coloured carpet was laid to prepare the mosque for worshippers; Ali Erbaş, head of the Diyanet, attended its laying. The omphalion was left exposed. Due to the COVID-19 pandemic, Erbaş said Hagia Sophia would accommodate up to 1,000 worshippers at a time and asked that they bring "masks, a prayer rug, patience and understanding". The mosque opened for Friday prayers on 24 July, the 97th anniversary of the signature of the Treaty of Lausanne, which established the borders of the modern Turkish Republic. The mosaics of the Virgin and Child in the apse were covered by white drapes. There had been proposals to conceal the mosaics with lasers during prayer times, but this idea was ultimately shelved. Erbaş proclaimed during his sermon, "Sultan Mehmet the Conqueror dedicated this magnificent construction to believers to remain a mosque until the Day of Resurrection". Erdoğan and some government ministers attended the midday prayers as many worshippers prayed outside; at one point the security cordon was breached and dozens of people broke through police lines. Turkey invited foreign leaders and officials, including Pope Francis, for the prayers. It was the fourth Byzantine church converted from a museum to a mosque during Erdoğan's rule.
In April 2022, the Hagia Sophia held its first Ramadan tarawih prayer in 88 years.
International reaction and discussions
Days before the final decision on the conversion was made, Ecumenical Patriarch Bartholomew I of Constantinople stated in a sermon that "the conversion of Hagia Sophia into a mosque would disappoint millions of Christians around the world". He also said that Hagia Sophia, which was "a vital center where East is embraced with the West", would "fracture these two worlds" in the event of conversion. The proposed conversion was decried by other Orthodox Christian leaders, with the Russian Orthodox Church's Patriarch Kirill of Moscow stating that "a threat to Hagia Sophia [wa]s a threat to all of Christian civilization".
Following the Turkish government's decision, UNESCO announced it "deeply regret[ted]" the conversion "made without prior discussion", and asked Turkey to "open a dialogue without delay", stating that the lack of negotiation was "regrettable". UNESCO further announced that the "state of conservation" of Hagia Sophia would be "examined" at the next session of the World Heritage Committee, urging Turkey "to initiate dialogue without delay, in order to prevent any detrimental effect on the universal value of this exceptional heritage". Ernesto Ottone, UNESCO's Assistant Director-General for Culture said "It is important to avoid any implementing measure, without prior discussion with UNESCO, that would affect physical access to the site, the structure of the buildings, the site's moveable property, or the site's management". UNESCO's statement of 10 July said "these concerns were shared with the Republic of Turkey in several letters, and again yesterday evening with the representative of the Turkish Delegation" without a response.
The World Council of Churches, which claims to represent 500 million Christians of 350 denominations, condemned the decision to convert the building into a mosque, saying that it would "inevitably create uncertainties, suspicions and mistrust"; the World Council of Churches urged Turkey's president Erdoğan "to reconsider and reverse" his decision "in the interests of promoting mutual understanding, respect, dialogue and cooperation, and avoiding cultivating old animosities and divisions". At the recitation of the Sunday Angelus prayer at St Peter's Square on 12 July, Pope Francis said, "My thoughts go to Istanbul. I think of Santa Sophia and I am very pained". The International Association of Byzantine Studies announced that its 21st International Congress, due to be held in Istanbul in 2021, would no longer be held there and was postponed to 2022.
Josep Borrell, the European Union's High Representative for Foreign Affairs and Vice-President of the European Commission, released a statement calling the decisions by the Council of State and Erdoğan "regrettable" and pointing out that "as a founding member of the Alliance of Civilisations, Turkey has committed to the promotion of inter-religious and inter-cultural dialogue and to fostering of tolerance and co-existence." According to Borrell, the European Union member states' twenty-seven foreign ministers "condemned the Turkish decision to convert such an emblematic monument as the Hagia Sophia" at a meeting on 13 July, saying it "will inevitably fuel the mistrust, promote renewed division between religious communities and undermine our efforts at dialog and cooperation" and that "there was a broad support to call on the Turkish authorities to urgently reconsider and reverse this decision".
Greece denounced the conversion and considered it a breach of the UNESCO World Heritage titling. Greek culture minister Lina Mendoni called it an "open provocation to the civilised world" which "absolutely confirms that there is no independent justice" in Erdoğan's Turkey, and that his Turkish nationalism "takes his country back six centuries". Greece and Cyprus called for EU sanctions on Turkey. Morgan Ortagus, the spokesperson for the United States Department of State, noted: "We are disappointed by the decision by the government of Turkey to change the status of the Hagia Sophia." Jean-Yves Le Drian, foreign minister of France, said his country "deplores" the move, saying "these decisions cast doubt on one of the most symbolic acts of modern and secular Turkey".
Vladimir Dzhabarov, deputy head of the foreign affairs committee of the Russian Federation Council, said that it "will not do anything for the Muslim world. It does not bring nations together, but on the contrary brings them into collision", and called the move a "mistake". The former deputy prime minister of Italy, Matteo Salvini, held a demonstration in protest outside the Turkish consulate in Milan, calling for all plans for accession of Turkey to the European Union to be terminated "once and for all". In East Jerusalem, a protest was held outside the Turkish consulate on 13 July, with the burning of a Turkish flag and the display of the Greek flag and the flag of the Greek Orthodox Church. In a statement the Turkish foreign ministry condemned the burning of the flag, saying "nobody can disrespect or encroach our glorious flag".
Ersin Tatar, prime minister of the Turkish Republic of Northern Cyprus, which is recognized only by Turkey, welcomed the decision, calling it "sound" and "pleasing". He further criticized the government of Cyprus, claiming that "the Greek Cypriot administration, who burned down our mosques, should not have a say in this". Through a spokesman the Foreign Ministry of Iran welcomed the change, saying the decision was an "issue that should be considered as part of Turkey's national sovereignty" and "Turkey's internal affair". Sergei Vershinin, deputy foreign minister of Russia, said that the matter was of one of "internal affairs, in which, of course, neither we nor others should interfere." The Arab Maghreb Union was supportive.
Ekrema Sabri, imam of the al-Aqsa Mosque, and Ahmed bin Hamad al-Khalili, grand mufti of Oman, both congratulated Turkey on the move. The Muslim Brotherhood also welcomed the news. A spokesman for the Palestinian Islamist movement Hamas called the verdict "a proud moment for all Muslims". Pakistani politician Chaudhry Pervaiz Elahi of the Pakistan Muslim League (Q) welcomed the ruling, claiming it was "not only in accordance with the wishes of the people of Turkey but the entire Muslim world". The Muslim Judicial Council group in South Africa praised the move, calling it "a historic turning point". In Nouakchott, capital of Mauritania, there were prayers and celebrations topped by the sacrifice of a camel. On the other hand, Shawki Allam, grand mufti of Egypt, ruled that conversion of the Hagia Sophia to a mosque is "impermissible".
When President Erdoğan announced that the first Muslim prayers would be held inside the building on 24 July, he added that "like all our mosques, the doors of Hagia Sophia will be wide open to locals and foreigners, Muslims and non-Muslims." Presidential spokesman İbrahim Kalın said that the icons and mosaics of the building would be preserved, and that "in regards to the arguments of secularism, religious tolerance and coexistence, there are more than four hundred churches and synagogues open in Turkey today." Ömer Çelik, spokesman for the ruling Justice and Development Party (AKP), announced on 13 July that entry to Hagia Sophia would be free of charge and open to all visitors outside prayer times, during which Christian imagery in the building's mosaics would be covered by curtains or lasers. The Turkish foreign minister, Mevlüt Çavuşoğlu, told TRT Haber on 13 July that the government was surprised at the reaction of UNESCO, saying that "We have to protect our ancestors' heritage. The function can be this way or that way – it does not matter".
On 14 July the prime minister of Greece, Kyriakos Mitsotakis, said his government was "considering its response at all levels" to what he called Turkey's "unnecessary, petty initiative", and that "with this backward action, Turkey is opting to sever links with western world and its values". In relation to both Hagia Sophia and the Cyprus–Turkey maritime zones dispute, Mitsotakis called for European sanctions against Turkey, referring to it as "a regional troublemaker, and which is evolving into a threat to the stability of the whole south-east Mediterranean region". Dora Bakoyannis, Greek former foreign minister, said Turkey's actions had "crossed the Rubicon", distancing itself from the West. On the day of the building's re-opening, Mitsotakis called the re-conversion evidence of Turkey's weakness rather than a show of power.
Armenia's Foreign Ministry expressed "deep concern" about the move, adding that it brought to a close Hagia Sophia's symbolism of "cooperation and unity of humankind instead of clash of civilizations." Catholicos Karekin II, the head of the Armenian Apostolic Church, said the move "violat[ed] the rights of national religious minorities in Turkey." Sahak II Mashalian, the Armenian Patriarch of Constantinople, perceived as loyal to the Turkish government, endorsed the decision to convert the museum into a mosque. He said, "I believe that believers' praying suits better the spirit of the temple instead of curious tourists running around to take pictures."
In July 2021, UNESCO asked for an updated report on the state of conservation and expressed "grave concern". There were also some concerns about the future of its World Heritage status. Turkey responded that the changes had "no negative impact" on UNESCO standards and that the criticism was "biased and political".
Architecture
Hagia Sophia is one of the greatest surviving examples of Byzantine architecture. Its interior is decorated with mosaics, marble pillars, and coverings of great artistic value. Justinian had overseen the completion of the greatest cathedral ever built up to that time, and it was to remain the largest cathedral for 1,000 years until the completion of the cathedral in Seville in Spain.
The Hagia Sophia uses masonry construction. The structure has brick and mortar joints that are 1.5 times the width of the bricks. The mortar joints are composed of a combination of sand and minute ceramic pieces distributed evenly throughout. This combination of sand and potsherds was often used in Roman concrete, a predecessor to modern concrete. A considerable amount of iron was used as well, in the form of cramps and ties.
Justinian's basilica was at once the culminating architectural achievement of late antiquity and the first masterpiece of Byzantine architecture. Its influence, both architecturally and liturgically, was widespread and enduring in Eastern Christianity, Western Christianity, and Islam alike.
The vast interior has a complex structure. The nave is covered by a central dome which at its maximum is about 55.6 m from floor level and rests on an arcade of 40 arched windows. Repairs to its structure have left the dome somewhat elliptical, with its diameter varying slightly along perpendicular axes.
At the western entrance and eastern liturgical side, there are arched openings extended by half domes of identical diameter to the central dome, carried on smaller semi-domed exedrae: a hierarchy of dome-headed elements built up to create a vast oblong interior crowned by the central dome. The theories of Hero of Alexandria, a Hellenistic mathematician of the 1st century AD, may have been utilized to address the challenges presented by building such an expansive dome over so large a space. Svenshon and Stiffel proposed that the architects used Hero's recommended values for constructing vaults. The square measurements were calculated using the side-and-diagonal number progression, which results in squares defined by the numbers 12 and 17, wherein 12 defines the side of the square and 17 its diagonal; these had been used as standard values since as early as the Babylonian cuneiform texts.
Each of the four sides of the great central square of Hagia Sophia is approximately 31 m long, and it was previously thought that this was the equivalent of 100 Byzantine feet. Svenshon suggested that the size of the side of the central square of Hagia Sophia is not 100 Byzantine feet but instead 99 feet. This measurement is not only rational, but it is also embedded in the system of the side-and-diagonal number progression (70/99) and therefore a usable value in the applied mathematics of antiquity. It gives a diagonal of 140, which is manageable for constructing a huge dome like that of the Hagia Sophia.
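To make the arithmetic behind these figures concrete, the following is an illustrative sketch, not taken from the sources cited above, of the standard side-and-diagonal recurrence and of why a 99-foot side pairs so neatly with a 140-foot diagonal:

```latex
% Side-and-diagonal numbers: starting from s_1 = d_1 = 1, iterate
%   s_{n+1} = s_n + d_n,   d_{n+1} = 2 s_n + d_n,
% which yields the pairs (1,1), (2,3), (5,7), (12,17), (29,41), (70,99), ...
% The ratio d_n / s_n converges on the square root of 2:
\[
  \frac{17}{12} \approx 1.4167, \qquad
  \frac{99}{70} \approx 1.41429, \qquad
  \sqrt{2} \approx 1.41421.
\]
% Hence a square with a side of 99 feet has a diagonal of
\[
  99\sqrt{2} \approx 140.007 \approx 140 \text{ feet},
\]
% an almost exact whole number, since 140^2 = 19600 differs from 2 * 99^2 = 19602 by only 2.
```

On this reading, a 99-foot square is attractive to a practical builder precisely because both its side and its diagonal come out as whole numbers of feet, the diagonal being accurate to within a hundredth of a foot.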
Floor
The stone floor of Hagia Sophia dates from the 6th century. After the first collapse of the vault, the broken dome was left in situ on the original Justinianic floor and a new floor was laid above the rubble when the dome was rebuilt in 558. From the installation of this second Justinianic floor, the floor became part of the liturgy, with significant locations and spaces demarcated in various ways using different-coloured stones and marbles.
The floor is predominantly made up of Proconnesian marble, quarried on Proconnesus (Marmara Island) in the Propontis (Sea of Marmara). This was the main white marble used in the monuments of Constantinople. Other parts of the floor, like the Thessalian verd antique "marble", were quarried in Thessaly in Roman Greece. The Thessalian verd antique bands across the nave floor were often likened to rivers.
The floor was praised by numerous authors and repeatedly compared to a sea. The Justinianic poet Paul the Silentiary likened the ambo and the solea connecting it to the sanctuary to an island in a sea, with the sanctuary itself a harbour. The 9th-century Narratio writes of it as "like the sea or the flowing waters of a river". Michael the Deacon in the 12th century also described the floor as a sea in which the ambo and other liturgical furniture stood as islands. During the 15th-century conquest of Constantinople, the Ottoman sultan Mehmed is said to have ascended to the dome and the galleries in order to admire the floor, which according to Tursun Beg resembled "a sea in a storm" or a "petrified sea". Other Ottoman-era authors also praised the floor; Tâcîzâde Cafer Çelebi compared it to waves of marble. The floor was hidden beneath a carpet on 22 July 2020.
Narthex and portals
The Imperial Gate, or Imperial Door, was the main entrance between the exo- and esonarthex, and it was originally exclusively used by the emperor. A long ramp from the northern part of the outer narthex leads up to the upper gallery.
Upper gallery
The upper gallery, or matroneum, is horseshoe-shaped; it encloses the nave on three sides and is interrupted by the apse. Several mosaics are preserved in the upper gallery, an area traditionally reserved for the Empress and her court. The best-preserved mosaics are located in the southern part of the gallery.
The northern first-floor gallery contains runic graffiti believed to have been left by members of the Varangian Guard. Structural damage caused by natural disasters is visible on the Hagia Sophia's exterior surface. To check whether the interior of the building had sustained similar damage, studies have been conducted using ground-penetrating radar (GPR) within the gallery of the Hagia Sophia. These surveys identified weak zones within the gallery and also concluded that the curvature of the vault has shifted away from its original alignment.
Dome
The dome of Hagia Sophia has spurred particular interest for many art historians, architects, and engineers because of the innovative way the original architects envisioned it. The dome is carried on four spherical triangular pendentives, making the Hagia Sophia one of the first large-scale uses of this element. The pendentives are the corners of the square base of the dome, and they curve upwards into the dome to support it, thus restraining the lateral forces of the dome and allowing its weight to flow downwards. The main dome of the Hagia Sophia was the largest pendentive dome in the world until the completion of St Peter's Basilica, and it has a much lower height than any other dome of such a large diameter.
The great dome at the Hagia Sophia is 32.6 meters (one hundred and seven feet) in diameter and is only 0.61 meters (two feet) thick. The main building materials for the original Hagia Sophia were brick and mortar. Brick aggregate was used to make roofs easier to construct. The aggregate weighs 2402.77 kilograms per cubic meter (150 pounds per cubic foot), an average weight of masonry construction at the time. Because of the material's plasticity, it was chosen over cut stone, since aggregate can be used over a longer distance. According to Rowland Mainstone, "it is unlikely that the vaulting-shell is anywhere more than one normal brick in thickness".
The weight of the dome remained a problem for most of the building's existence. The original cupola collapsed entirely after the earthquake of 558; in 563 a new dome was built by Isidore the Younger, a nephew of Isidore of Miletus. Unlike the original, this included 40 ribs and was raised 6.1 meters (20 feet), in order to lower the lateral forces on the church walls. A larger section of the second dome collapsed as well, over two episodes, so that as of 2021, only two sections of the present dome, the north and south sides, are from the 562 reconstructions. Of the whole dome's 40 ribs, the surviving north section contains eight ribs, while the south section includes six ribs.
Although this design stabilizes the dome and the surrounding walls and arches, the actual construction of the walls of Hagia Sophia weakened the overall structure. The bricklayers used more mortar than brick, which would have been more effective had the mortar been allowed to settle, as the building would then have been more flexible; however, the builders did not allow the mortar to cure before laying the next course. When the dome was erected, its weight caused the walls to lean outward because of the wet mortar underneath. When Isidore the Younger rebuilt the fallen cupola, he had first to build up the interior of the walls to make them vertical again. Additionally, the architect raised the height of the rebuilt dome by approximately so that the lateral forces would not be as strong and its weight would be transmitted more effectively down into the walls. Moreover, he shaped the new cupola like a scalloped shell or the inside of an umbrella, with ribs that extend from the top down to the base. These ribs allow the weight of the dome to flow between the windows, down the pendentives, and ultimately to the foundation.
Hagia Sophia is famous for the light that reflects everywhere in the interior of the nave, giving the dome the appearance of hovering above. This effect was achieved by inserting forty windows around the base of the original structure. Moreover, the insertion of the windows in the dome structure reduced its weight.
Buttresses
Numerous buttresses have been added throughout the centuries. The flying buttresses to the west of the building, although thought to have been constructed by the Crusaders upon their visit to Constantinople, were actually built during the Byzantine era. This shows that the Romans had prior knowledge of flying buttresses, which can also be seen in Greece at the Rotunda of Galerius in Thessaloniki and at the monastery of Hosios Loukas in Boeotia, and in Italy at the octagonal basilica of San Vitale in Ravenna. Other buttresses were constructed during the Ottoman times under the guidance of the architect Sinan. A total of 24 buttresses were added.
Minarets
The minarets were an Ottoman addition and not part of the original church's Byzantine design. They were built to give the call to prayer (adhan) and make announcements. Mehmed had built a wooden minaret over one of the half domes soon after Hagia Sophia's conversion from a cathedral to a mosque. This minaret does not exist today. One of the minarets (at southeast) was built from red brick and can be dated to the reign of Mehmed or his successor Bayezid II. The other three were built from white limestone and sandstone, of which the slender northeast column was erected by Bayezid II and the two identical, larger minarets to the west were erected by Selim II and designed by the famous Ottoman architect Mimar Sinan. Both are in height, and their thick and massive patterns complete Hagia Sophia's main structure. Many ornaments and details were added to these minarets in repairs during the 15th, 16th, and 19th centuries, which reflect each period's characteristics and ideals.
Notable elements and decorations
Originally, under Justinian's reign, the interior decorations consisted of abstract designs on marble slabs on the walls and floors as well as mosaics on the curving vaults. Of these mosaics, the two archangels Gabriel and Michael are still visible in the spandrels (corners) of the bema. There were already a few figurative decorations, as attested by the late 6th-century ekphrasis of Paul the Silentiary, the Description of Hagia Sophia. The spandrels of the gallery are faced in inlaid thin slabs (opus sectile), showing patterns and figures of flowers and birds in precisely cut pieces of white marble set against a background of black marble. In later stages, figurative mosaics were added, which were destroyed during the iconoclastic controversy (726–843). Present mosaics are from the post-iconoclastic period.
Apart from the mosaics, many figurative decorations were added during the second half of the 9th century: an image of Christ in the central dome; Eastern Orthodox saints, prophets and Church Fathers in the tympana below; historical figures connected with this church, such as Patriarch Ignatius; and some scenes from the Gospels in the galleries. Basil II let artists paint a giant six-winged seraph on each of the four pendentives. The Ottomans covered their faces with golden stars, but in 2009, one of them was restored to its original state.
Loggia of the Empress
The loggia of the empress is located in the centre of the gallery of the Hagia Sophia, above the Imperial Gate and directly opposite the apse. From this matroneum (women's gallery), the empress and the court-ladies would watch the proceedings down below. A green stone disc of verd antique marks the spot where the throne of the empress stood.
Lustration urns
Two huge marble lustration (ritual purification) urns were brought from Pergamon during the reign of Sultan Murad III. They are from the Hellenistic period and carved from single blocks of marble.
Marble Door
The Marble Door inside the Hagia Sophia is located in the southern upper enclosure or gallery. It was used by the participants in synods, who entered and left the meeting chamber through this door. It is said that each side is symbolic and that one side represents heaven while the other represents hell. Its panels are covered in fruits and fish motifs. The door opens into a space that was used as a venue for solemn meetings and important resolutions of patriarchate officials.
The Nice Door
The Nice Door is the oldest architectural element found in the Hagia Sophia, dating back to the 2nd century BC. The decorations are reliefs of geometric shapes as well as plants and are believed to have come from a pagan temple in Tarsus in Cilicia, part of the Cibyrrhaeot Theme in modern-day Mersin Province in south-eastern Turkey. It was incorporated into the building by Emperor Theophilos in 838 and is placed at the south exit in the inner narthex.
Imperial Gate
The Imperial Gate is the door that was used solely by the Emperor and his personal bodyguard and retinue. It is the largest door in the Hagia Sophia and has been dated to the 6th century. It is about 7 meters long and Byzantine sources say it was made with wood from Noah's Ark.
In April 2022, the door was vandalised by an unknown assailant or assailants. The incident became known after the Association of Art Historians published a photo of the damage. The Greek Foreign Ministry condemned the incident, while Turkish officials claimed that "a citizen has taken a piece of the door" and started an investigation.
Wishing column
At the northwest of the building, there is a column with a hole in the middle covered by bronze plates. This column goes by different names; the "perspiring" or "sweating column", the "crying column", or the "wishing column". Legend states that it has been moist since the appearance of Gregory Thaumaturgus near the column in 1200. It is believed that touching the moisture cures many illnesses.
The Viking Inscription
In the southern section of Hagia Sophia, a 9th-century Viking inscription has been discovered, which reads, "Halvdan was here." It is theorized that the inscription was created by a Viking soldier serving as a mercenary in the Eastern Roman Empire.
Mosaics
The first mosaics which adorned the church were completed during the reign of Justin II. Many of the non-figurative mosaics in the church come from this period. Most of the mosaics, however, were created in the 10th and 12th centuries, following the periods of Byzantine Iconoclasm.
During the Sack of Constantinople in 1204, the Latin Crusaders vandalized valuable items in every important Byzantine structure of the city, including the golden mosaics of the Hagia Sophia. Many of these items were shipped to Venice, whose Doge Enrico Dandolo had organized the invasion and sack of Constantinople after an agreement with Prince Alexios Angelos, the son of a deposed Byzantine emperor.
19th-century restoration
Following the building's conversion into a mosque in 1453, many of its mosaics were covered with plaster, due to Islam's ban on representational imagery. This process was not completed at once, and reports exist from the 17th century in which travellers note that they could still see Christian images in the former church. In 1847–1849, the building was restored by the two Swiss-Italian Fossati brothers, Gaspare and Giuseppe, and Sultan Abdulmejid I allowed them to also document any mosaics they might discover during this process, which were later archived in Swiss libraries. This work did not include repairing the mosaics, and after recording the details about an image, the Fossatis painted it over again. The Fossatis restored the mosaics of the two hexapteryga (singular hexapterygon, a six-winged angel; it is uncertain whether they are seraphim or cherubim) located on the two east pendentives, and covered their faces again before the end of the restoration. The other two mosaics, placed on the west pendentives, are copies in paint created by the Fossatis since they could find no surviving remains of them. As in this case, the architects reproduced in paint damaged decorative mosaic patterns, sometimes redesigning them in the process. The Fossati records are the primary sources about a number of mosaic images now believed to have been completely or partially destroyed in the 1894 Istanbul earthquake. These include a mosaic over a now-unidentified Door of the Poor, a large image of a jewel-encrusted cross, and many images of angels, saints, patriarchs, and church fathers. Most of the missing images were located in the building's two tympana.
One mosaic they documented is Christ Pantocrator in a circle, which would indicate it to be a ceiling mosaic, possibly even of the main dome, which was later covered and painted over with Islamic calligraphy that expounds God as the light of the universe. The Fossatis' drawings of the Hagia Sophia mosaics are today kept in the Archive of the Canton of Ticino.
20th-century restoration
Many mosaics were uncovered in the 1930s by a team from the Byzantine Institute of America led by Thomas Whittemore. The team chose to let a number of simple cross images remain covered by plaster but uncovered all major mosaics found.
Because of its long history as both a church and a mosque, a particular challenge arises in the restoration process. Christian iconographic mosaics can be uncovered, but often at the expense of important and historic Islamic art. Restorers have attempted to maintain a balance between both Christian and Islamic cultures. In particular, much controversy rests upon whether the Islamic calligraphy on the dome of the cathedral should be removed, in order to permit the underlying Pantocrator mosaic of Christ as Master of the World to be exhibited (assuming the mosaic still exists).
The Hagia Sophia has been a victim of natural disasters that have caused deterioration to the building's structure and walls. The deterioration of the Hagia Sophia's walls can be directly attributed to salt crystallization, which is caused by the intrusion of rainwater into the inner and outer walls. Diverting excess rainwater is the main solution to the deteriorating walls at the Hagia Sophia.
A subsurface structure under the Hagia Sophia, built between 532 and 537, has been under investigation using LaCoste-Romberg gravimeters to determine the depth of the subsurface structure and to discover other hidden cavities beneath the Hagia Sophia. The hidden cavities have also acted as a support system against earthquakes. With these findings using the LaCoste-Romberg gravimeters, it was also discovered that the Hagia Sophia's foundation is built on a slope of natural rock.
Imperial Gate mosaic
The Imperial Gate mosaic is located in the tympanum above that gate, which was used only by the emperors when entering the church. Based on style analysis, it has been dated to the late 9th or early 10th century. The emperor with a nimbus or halo could possibly represent emperor Leo VI the Wise or his son Constantine VII Porphyrogenitus bowing down before Christ Pantocrator, seated on a jewelled throne, giving his blessing and holding in his left hand an open book. The text on the book reads: "Peace be with you" (John 20) and "I am the light of the world" (John 8). On each side of Christ's shoulders is a circular medallion with busts: on his left the Archangel Gabriel, holding a staff, on his right his mother Mary.
Southwestern entrance mosaic
The southwestern entrance mosaic, situated in the tympanum of the southwestern entrance, dates from the reign of Basil II. It was rediscovered during the restorations of 1849 by the Fossatis. The Virgin sits on a throne without a back, her feet resting on a pedestal, embellished with precious stones. The Christ Child sits on her lap, giving his blessing and holding a scroll in his left hand. On her left side stands emperor Constantine in ceremonial attire, presenting a model of the city to Mary. The inscription next to him says: "Great emperor Constantine of the Saints". On her right side stands emperor Justinian I, offering a model of the Hagia Sophia. The medallions on both sides of the Virgin's head carry her nomina sacra. The composition of the figure of the Virgin enthroned was probably copied from the mosaic inside the semi-dome of the apse inside the liturgical space.
Apse mosaics
The mosaic in the semi-dome above the apse at the east end shows Mary, mother of Jesus, holding the Christ Child and seated on a jewelled backless throne (thokos). Since its rediscovery after a period of concealment in the Ottoman era, it "has become one of the foremost monuments of Byzantium". The infant Jesus's garment is depicted with golden tesserae.
Guillaume-Joseph Grelot, who had travelled to Constantinople, in 1672 engraved and in 1680 published in Paris an image of the interior of Hagia Sophia which shows the apse mosaic indistinctly. Together with a picture by Cornelius Loos drawn in 1710, these images are early attestations of the mosaic before it was covered towards the end of the 18th century. The mosaic of the Virgin and Child was rediscovered during the restorations of the Fossati brothers in 1847–1848 and revealed by the restoration of Thomas Whittemore in 1935–1939. It was studied again in 1964 with the aid of scaffolding.
It is not known when this mosaic was installed. According to Cyril Mango, the mosaic is "a curious reflection on how little we know about Byzantine art". The work is generally believed to date from after the end of Byzantine Iconoclasm and usually dated to the patriarchate of Photius I and the time of the emperors Michael III and Basil I. Most specifically, the mosaic has been connected with a surviving homily known to have been written and delivered by Photius in the cathedral on 29 March 867.
Other scholars have favoured earlier or later dates for the present mosaic or its composition. Nikolaos Oikonomides pointed out that Photius's homily refers to a standing portrait of the Theotokos – a Hodegetria – while the present mosaic shows her seated. Likewise, a biography of the patriarch Isidore I by his successor Philotheus I composed before 1363 describes Isidore seeing a standing image of the Virgin at Epiphany in 1347. Serious damage was done to the building by earthquakes in the 14th century, and it is possible that a standing image of the Virgin that existed in Photius's time was lost in the earthquake of 1346, in which the eastern end of Hagia Sophia was partly destroyed. This interpretation supposes that the present mosaic of the Virgin and Child enthroned is of the late 14th century, a time in which, beginning with Nilus of Constantinople, the patriarchs of Constantinople began to have official seals depicting the Theotokos enthroned on a thokos.
Still other scholars have proposed an earlier date than the later 9th century. According to George Galavaris, the mosaic seen by Photius was a Hodegetria portrait which after the earthquake of 989 was replaced by the present image not later than the early 11th century. According to Oikonomides however, the image in fact dates to before the Triumph of Orthodoxy, having been completed during the iconodule interlude between the First Iconoclast (726–787) and the Second Iconoclast (814–842) periods. Having been plastered over in the Second Iconoclasm, Oikonomides argues a new, standing image of the Virgin Hodegetria was created above the older mosaic in 867, which then fell off in the earthquakes of the 1340s and revealed again the late 8th-century image of the Virgin enthroned.
More recently, analysis of a hexaptych menologion icon panel from Saint Catherine's Monastery at Mount Sinai has determined that the panel, showing numerous scenes from the life of the Virgin and other theologically significant iconic representations, contains an image at the centre very similar to that in Hagia Sophia. The image bears only a brief label in Greek, but the inscription in the Georgian language reveals the image is labelled "of the semi-dome of Hagia Sophia". This image is therefore the oldest depiction of the apse mosaic known and demonstrates that the apse mosaic's appearance was similar to the present-day mosaic in the late 11th or early 12th centuries, when the hexaptych was inscribed in Georgian by a Georgian monk, which rules out a 14th-century date for the mosaic.
The portraits of the archangels Gabriel and Michael (largely destroyed) in the bema of the arch also date from the 9th century. The mosaics are set against the original golden background of the 6th century. These mosaics were believed to be a reconstruction of the mosaics of the 6th century that were previously destroyed during the iconoclastic era by the Byzantines of that time, as represented in the inaugural sermon by the patriarch Photios. However, no record of figurative decoration of Hagia Sophia exists before this time.
Emperor Alexander mosaic
The Emperor Alexander mosaic is not easy to find for the first-time visitor, as it is located on the second floor in a dark corner of the ceiling. It depicts the emperor Alexander in full regalia, holding a scroll in his right hand and a globus cruciger in his left. A drawing by the Fossatis showed that the mosaic had survived until 1849, but Thomas Whittemore, founder of the Byzantine Institute of America, who was granted permission to preserve the mosaics, assumed that it had been destroyed in the earthquake of 1894. Eight years after Whittemore's death, the mosaic was rediscovered in 1958, largely through the researches of Robert Van Nice. Unlike most of the other mosaics in Hagia Sophia, which had been covered over by ordinary plaster, the Alexander mosaic was simply painted over in a way that mirrored the surrounding mosaic patterns, and thus was well hidden. It was duly cleaned by Whittemore's successor at the Byzantine Institute, Paul A. Underwood.
Empress Zoe mosaic
The Empress Zoe mosaic on the eastern wall of the southern gallery dates from the 11th century. Christ Pantocrator, clad in the dark blue robe (as is the custom in Byzantine art), is seated in the middle against a golden background, giving his blessing with the right hand and holding the Bible in his left hand. On either side of his head are his nomina sacra, abbreviating Iēsous Christos. He is flanked by Constantine IX Monomachus and Empress Zoe, both in ceremonial costumes. He is offering a purse, as a symbol of the donation he made to the church, while she is holding a scroll, a symbol of the donations she made. The inscription over the head of the emperor says: "Constantine, pious emperor in Christ the God, king of the Romans, Monomachus". The inscription over the head of the empress reads as follows: "Zoë, the very pious Augusta". The previous heads have been scraped off and replaced by the three present ones. Perhaps the earlier mosaic showed her first husband Romanus III Argyrus or her second husband Michael IV. Another theory is that this mosaic was made for an earlier emperor and empress, with their heads changed into the present ones.
Comnenus mosaic
The Comnenus mosaic, also located on the eastern wall of the southern gallery, dates from 1122. The Virgin Mary is standing in the middle, depicted, as usual in Byzantine art, in a dark blue gown. She holds the Christ Child on her lap. He gives his blessing with his right hand while holding a scroll in his left hand. On her right side stands emperor John II Comnenus, represented in a garb embellished with precious stones. He holds a purse, symbol of an imperial donation to the church. His wife, the empress Irene of Hungary, stands on the left side of the Virgin, wearing ceremonial garments and offering a document. Their eldest son Alexius Comnenus is represented on an adjacent pilaster. He is shown as a beardless youth, probably representing his appearance at his coronation aged seventeen. In this panel, one can already see a difference from the Empress Zoe mosaic, which is one century older. There is a more realistic expression in the portraits instead of an idealized representation. The Empress Irene (born Piroska), daughter of Ladislaus I of Hungary, is shown with plaited blond hair, rosy cheeks, and grey eyes, revealing her Hungarian descent. The emperor is depicted in a dignified manner.
Deësis mosaic
The Deësis mosaic (, "Entreaty") probably dates from 1261. It was commissioned to mark the end of 57 years of Latin Catholic use and the return to the Eastern Orthodox faith. It is the third panel situated in the imperial enclosure of the upper galleries. It is widely considered the finest in Hagia Sophia, because of the softness of the features, the humane expressions and the tones of the mosaic. The style is close to that of the Italian painters of the late 13th or early 14th century, such as Duccio. In this panel the Virgin Mary and John the Baptist (Ioannes Prodromos), both shown in three-quarters profile, are imploring the intercession of Christ Pantocrator for humanity on Judgment Day. The bottom part of this mosaic is badly deteriorated. This mosaic is considered as the beginning of a renaissance in Byzantine pictorial art.
Northern tympanum mosaics
The northern tympanum mosaics feature various saints. They have been able to survive due to their high and inaccessible location. They depict Patriarchs of Constantinople John Chrysostom and Ignatios of Constantinople standing, clothed in white robes with crosses, and holding richly jewelled Bibles. The figures of each patriarch, revered as saints, are identifiable by labels in Greek. The other mosaics in the other tympana have not survived probably due to the frequent earthquakes, as opposed to any deliberate destruction by the Ottoman conquerors.
Dome mosaic
The dome was decorated with four non-identical figures of the six-winged angels which protect the Throne of God; it is uncertain whether they are seraphim or cherubim. The mosaics survive in the eastern part of the dome, but since the ones on the western side were damaged during the Byzantine period, they have been renewed as frescoes. During the Ottoman period each seraph's (or cherub's) face was covered with metallic lids in the shape of stars, but these were removed to reveal the faces during renovations in 2009.
Other burials
Selim II (1524 – 15 December 1574)
Murad III (1546–1595)
Mustafa I ( – 20 January 1639), in the courtyard.
Enrico Dandolo ( – June 1205), in the east gallery.
Gli ( – 7 November 2020), in the garden.
Works influenced by the Hagia Sophia
Many buildings have been modeled on the Hagia Sophia's core structure of a large central dome resting on pendentives and buttressed by two semi-domes.
Byzantine churches influenced by the Hagia Sophia include the Hagia Sophia in Thessaloniki, and the Hagia Irene. The latter was remodeled to have a dome similar to the Hagia Sophia's during the reign of Justinian.
Several mosques commissioned by the Ottoman dynasty have plans based on the Hagia Sophia, including the Süleymaniye Mosque and the Bayezid II Mosque. Ottoman architects preferred to surround the central dome with four semi-domes rather than two. There are four semi-domes on the Sultan Ahmed Mosque, the Fatih Mosque, and the New Mosque (Istanbul). As with the original plan of the Hagia Sophia, these mosques are entered through colonnaded courtyards. However, the courtyard of the Hagia Sophia no longer exists.
Neo-Byzantine churches modeled on the Hagia Sophia include the Kronstadt Naval Cathedral, Holy Trinity Cathedral, Sibiu and Poti Cathedral. Each closely replicates the internal geometry of the Hagia Sophia. The layout of the Kronstadt Naval Cathedral is nearly identical to the Hagia Sophia in size and geometry. Its marble revetment also mimics the style of the Hagia Sophia.
As with Ottoman mosques, several churches based on the Hagia Sophia include four semi-domes rather than two, such as the Church of Saint Sava in Belgrade. The Catedral Metropolitana Ortodoxa in São Paulo and the Église du Saint-Esprit (Paris) both replace the two large tympanums beneath the main dome with two shallow semi-domes. The Église du Saint-Esprit is two thirds the size of the Hagia Sophia.
Several churches combine elements of the Hagia Sophia with a Latin cross plan. For instance, the transept of the Cathedral Basilica of Saint Louis (St. Louis) is formed by two semi-domes surrounding the main dome. The church's column capitals and mosaics also emulate the style of the Hagia Sophia. Other examples include the Alexander Nevsky Cathedral, Sofia, St Sophia's Cathedral, London, Saint Clement Catholic Church, Chicago, and the Basilica of the National Shrine of the Immaculate Conception.
Synagogues based on the Hagia Sophia include the Congregation Emanu-El (San Francisco), Great Synagogue of Florence, and Hurva Synagogue.
Gallery
See also
Runic inscriptions in Hagia Sophia
List of Byzantine inventions
List of tallest domes
List of largest monoliths
List of oldest church buildings
List of tallest structures built before the 20th century
List of Turkish Grand Mosques
Conversion of non-Islamic places of worship into mosques
Notes
References
Citations
Sources
Hagia Sophia. Accessed 23 September 2014.
Runciman, Steven (1965). The Fall of Constantinople, 1453. Cambridge: Cambridge University Press. p. 145.
Further reading
See also the thematically organised full bibliography in Stroth (2021), pp. 137–183.
Harris, Jonathan, Constantinople: Capital of Byzantium. Hambledon/Continuum (2007).
Scharf, Joachim: "Der Kaiser in Proskynese. Bemerkungen zur Deutung des Kaisermosaiks im Narthex der Hagia Sophia von Konstantinopel". In: Festschrift Percy Ernst Schramm zu seinem siebzigsten Geburtstag von Schülern und Freunden zugeeignet, Wiesbaden 1964, pp. 27–35.
Weitzmann, Kurt, ed., Age of spirituality: late antique and early Christian art, third to seventh century, no. 592, 1979, Metropolitan Museum of Art, New York.
Articles
Bordewich, Fergus M., "A Monumental Struggle to Preserve Hagia Sophia", Smithsonian magazine, December 2008
Calian, Florian, The Hagia Sophia and Turkey's Neo-Ottomanism, Armenian Weekly.
Ousterhout, Robert G. "Museum or Mosque? Istanbul's Hagia Sophia has been a monument to selective readings of history ." History Today (Sept 2020).
Suchkov, Maxim, Why did Moscow call Ankara's Hagia Sophia decision "Turkey's internal affair"?, Middle East Institute.
Mosaics
Hagia Sophia, hagiasophia.com: Mosaics.
External links
360 Degree Virtual Tour of Hagia Sophia Mosque Museum
Gigapixel of Hagia Sophia Dome (214 Billion Pixel)
Hagia Sophia Museum, Republic of Turkey, Ministry of Culture & Tourism
The Most Visited Museums of Turkey: Hagia Sophia Museum, Governorship of Istanbul | Hagia Sophia | [
"Engineering"
] | 21,686 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
42,766 | https://en.wikipedia.org/wiki/Climbing%20wall | A climbing wall is an artificially constructed wall with manufactured grips (or "holds") for the hands and feet. Most walls are located indoors, and climbing on such walls is often termed indoor climbing. Some walls are brick or wooden constructions but on modern walls, the material most often used is a thick multiplex board with holes drilled into it. Recently, manufactured steel and aluminum have also been used. The wall may have places to attach belay ropes, but may also be used to practice lead climbing or bouldering.
Each hole contains a specially formed t-nut to allow modular climbing holds to be screwed onto the wall. With manufactured steel or aluminum walls, an engineered industrial fastener is used to secure climbing holds. The face of the multiplex board climbing surface is covered with textured products including concrete and paint or polyurethane loaded with sand. In addition to the textured surface and hand holds, the wall may contain surface structures such as indentations (in-cuts) and protrusions (bulges), or take the form of an overhang, underhang or crack. Some grips or handholds are formed to mimic the conditions of outdoor rock, including some that are oversized and can have other grips bolted onto them.
History
The earliest artificial climbing walls were typically small concrete faces with protrusions made of medium-sized rocks for hand holds. Schurman Rock in Seattle, WA is believed by some to be the first artificial climbing structure in the United States, constructed in 1939.
The first artificial climbing wall in the world appears to be the one commissioned by King Leopold III of Belgium in 1937 near his palace. This was documented by Mark Sebille in his book Bel'Wall.
The modern artificial climbing wall began in the UK. The first wall was created in 1964 by Don Robinson, a lecturer in Physical Education at the University of Leeds, by inserting pieces of rock into a corridor wall. The first commercial wall in the UK, The Foundry Climbing Centre, was built in Sheffield in 1991, traditionally England's centre for climbing due to its proximity to the Peak District. The first indoor climbing gym in the United States was established by Vertical World in Seattle in 1987. Terre Neuve in the heart of Brussels (Belgium) was opened in 1987 as well. It is not clear which gym was opened first.
Types
The simplest type of wall is of plywood construction, known colloquially in the climbing community as a 'woody', with a combination of either bolt-on holds or screw-on holds. Bolt-on holds are fixed to a wall with iron bolts that are inserted through the hold, which will have specific bolt points, and then fixed into pre-allocated screw-threaded holes in the wall. Screw-on holds are, by contrast, usually much smaller, owing to the nature of their fixing. These holds are connected to the wall by screws, which may be fastened anywhere on the wall's surface.
Some other types of walls include slabs of granite, concrete sprayed onto a wire mesh, pre-made fiberglass panels, large trees, manufactured steel and aluminum panels, textured fiberglass walls, and inflatables. A newer innovation is the rotating climbing wall: a mechanical, mobile wall that rotates like a treadmill to match the climber's rate of ascent.
The most common construction method involves bolting resin hand and foot holds onto wooden boards. The boards can be of varying height and steepness (from completely horizontal 'roofs' to near-vertical 'slabs') with a mixture of holds attached. These can vary from very small 'crimps', 'pinches', and slanted-surfaced 'slopers', to 'jugs', which are often large and easy to hold. This variety, coupled with the ability for the climbs to be changed by moving the holds to new positions on the wall, has resulted in indoor climbing becoming a very popular sport.
Equipment
Proper climbing equipment must be used during indoor climbing.
Most climbing gyms lend harnesses, ropes and belay devices. Some also lend climbing shoes and chalk bags. Some climbing gyms require use of chalk balls (as opposed to loose chalk) to reduce chalk dust in the air and chalk spills when a chalk bag is tipped over or stepped on. Reducing chalk in the air helps to avoid clogging ventilation systems and reduces the dust that accumulates on less-than-vertical surfaces.
Indoor climbing
Indoor climbing is an increasingly popular form of rock climbing performed on artificial structures that attempt to mimic the experience of outdoor rock. The first indoor climbing gym in North America, Vertical World in Seattle, was established in 1987. The first indoor climbing hall in the world was inaugurated in Brussels, Belgium on May 16, 1987, by Isabelle Dorsimond and Marc Bott.
Terres Neuves integrated the concept of pre-drilled plywood walls fitted with T-nuts, as developed in 1986 by the Brussels-based firm Alpi'In. Pierre d'Haenens is the inventor of this system, which is now used worldwide by all climbing wall manufacturers. Terres Neuves still exists today in an almost unchanged form.
The first indoor walls tended to be made primarily of brick, which limited the steepness of the wall and variety of the hand holds. More recently, indoor climbing terrain is constructed of plywood over a metal frame, with bolted-on plastic hand and footholds, and sometimes spray-coated with texture to simulate a rock face.
Most climbing competitions are held in climbing gyms, making them a part of indoor climbing.
Compared to outdoor climbing
Indoor and outdoor climbing can differ in techniques, style, and equipment. Climbing artificial walls, especially indoors, is much safer because anchor points and holds are able to be more firmly fixed, and environmental conditions can be controlled. During indoor climbing, holds are easily visible in contrast with natural walls where finding a good hold or foothold may be a challenge. Climbers on artificial walls are somewhat restricted to the holds prepared by the route setter, whereas on natural walls they can use every slope or crack in the surface of the wall. Some typical rock formations can be difficult to emulate on climbing walls.
Routes and grading
Holds come in different colours, those of the same colour often being used to denote a route, allowing routes of different difficulty levels to be overlaid on one another. Coloured tape placed under climbing holds is another way that is often used to mark different climbing routes. In attempting a given route, a climber is only allowed to use grips of the designated colour as handholds, but is usually allowed to use both handholds and footholds of the designated colour and surface structures and textures of the "rockface" as footholds.
The grade (difficulty) of the route is usually a consensus decision between the setter of the route and the first few people who climb the route. Many indoor climbing walls have people who are assigned to set these different climbing routes. These people are called route setters or course setters. As indoor climbing walls are often used to check the development of climbers' abilities, climbs are color-coded.
Route-setting is the design of routes by placing climbing holds in a strategic, technical, and fun way that sets how the route will flow. There are many different techniques involved with setting, and up to 5 levels of certifications are awarded to those qualified. Route setting can be defined as the backbone of indoor climbing; without a great set of routes, a gym cannot easily hope to retain a loyal base of climbers.
Gallery
See also
Climbing
Climbing gym
Speed climbing wall
Rock-climbing equipment
References
External links
UK Climbing Wall Manufacturers Association
Climbing Wall Association, a US non-profit, industry trade association
Odin - UK Mobile Climbing Wall Specialists
Indoorclimbing.com - a worldwide indoor climbing gym list.
The British Mountaineering Council
The Mountaineering Council of Scotland - indoor climbing page.
Types of wall | Climbing wall | [
"Engineering"
] | 1,612 | [
"Structural engineering",
"Types of wall"
] |
42,799 | https://en.wikipedia.org/wiki/Speech%20synthesis | Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. The reverse process is speech recognition.
Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units; a system that stores phones or diphones provides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output.
The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood clearly. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written words on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s.
A text-to-speech system (or "engine") is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words. This process is often called text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units, like phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end—often referred to as the synthesizer—then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the target prosody (pitch contour, phoneme durations), which is then imposed on the output speech.
History
Long before the invention of electronic signal processing, some people tried to build machines to emulate human speech. There were also legends of the existence of "Brazen Heads", such as those involving Pope Silvester II (d. 1003 AD), Albertus Magnus (1198–1280), and Roger Bacon (1214–1294).
In 1779, the German-Danish scientist Christian Gottlieb Kratzenstein won the first prize in a competition announced by the Russian Imperial Academy of Sciences and Arts for models he built of the human vocal tract that could produce the five long vowel sounds. There followed the bellows-operated "acoustic-mechanical speech machine" of Wolfgang von Kempelen of Pressburg, Hungary, described in a 1791 paper. This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837, Charles Wheatstone produced a "speaking machine" based on von Kempelen's design, and in 1846, Joseph Faber exhibited the "Euphonia". In 1923, Paget resurrected Wheatstone's design.
In the 1930s, Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tones and resonances. From his work on the vocoder, Homer Dudley developed a keyboard-operated voice-synthesizer called The Voder (Voice Demonstrator), which he exhibited at the 1939 New York World's Fair.
Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories built the Pattern playback in the late 1940s and completed it in 1950. There were several different versions of this hardware device; only one currently survives. The machine converts pictures of the acoustic patterns of speech in the form of a spectrogram back into sound. Using this device, Alvin Liberman and colleagues discovered acoustic cues for the perception of phonetic segments (consonants and vowels).
Electronic devices
The first computer-based speech-synthesis systems originated in the late 1950s. Noriko Umeda et al. developed the first general English text-to-speech system in 1968, at the Electrotechnical Laboratory in Japan. In 1961, physicist John Larry Kelly, Jr and his colleague Louis Gerstman used an IBM 704 computer to synthesize speech, an event among the most prominent in the history of Bell Labs. Kelly's voice recorder synthesizer (vocoder) recreated the song "Daisy Bell", with musical accompaniment from Max Mathews. Coincidentally, Arthur C. Clarke was visiting his friend and colleague John Pierce at the Bell Labs Murray Hill facility. Clarke was so impressed by the demonstration that he used it in the climactic scene of his screenplay for his novel 2001: A Space Odyssey, where the HAL 9000 computer sings the same song as astronaut Dave Bowman puts it to sleep. Despite the success of purely electronic speech synthesis, research into mechanical speech-synthesizers continues.
Linear predictive coding (LPC), a form of speech coding, began development with the work of Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. Further developments in LPC technology were made by Bishnu S. Atal and Manfred R. Schroeder at Bell Labs during the 1970s. LPC was later the basis for early speech synthesizer chips, such as the Texas Instruments LPC Speech Chips used in the Speak & Spell toys from 1978.
In 1975, Fumitada Itakura developed the line spectral pairs (LSP) method for high-compression speech coding, while at NTT. From 1975 to 1981, Itakura studied problems in speech analysis and synthesis based on the LSP method. In 1980, his team developed an LSP-based speech synthesizer chip. LSP is an important technology for speech synthesis and coding, and in the 1990s was adopted by almost all international speech coding standards as an essential component, contributing to the enhancement of digital speech communication over mobile channels and the internet.
In 1975, MUSA was released, and was one of the first speech synthesis systems. It consisted of stand-alone computer hardware and specialized software that enabled it to read Italian. A second version, released in 1978, was also able to sing Italian in an "a cappella" style.
Dominant systems in the 1980s and 1990s were the DECtalk system, based largely on the work of Dennis Klatt at MIT, and the Bell Labs system; the latter was one of the first multilingual language-independent systems, making extensive use of natural language processing methods.
Handheld electronics featuring speech synthesis began emerging in the 1970s. One of the first was the Telesensory Systems Inc. (TSI) Speech+ portable calculator for the blind in 1976. Other devices had primarily educational purposes, such as the Speak & Spell toy produced by Texas Instruments in 1978. Fidelity released a speaking version of its electronic chess computer in 1979. The first video game to feature speech synthesis was the 1980 shoot 'em up arcade game, Stratovox (known in Japan as Speak & Rescue), from Sun Electronics. The first personal computer game with speech synthesis was Manbiki Shoujo (Shoplifting Girl), released in 1980 for the PET 2001, for which the game's developer, Hiroshi Suzuki, developed a "zero cross" programming technique to produce a synthesized speech waveform. Another early example, the arcade version of Berzerk, also dates from 1980. The Milton Bradley Company produced the first multi-player electronic game using voice synthesis, Milton, in the same year.
In 1976, Computalker Consultants released their CT-1 Speech Synthesizer. Designed by D. Lloyd Rice and Jim Cooper, it was an analog synthesizer built to work with microcomputers using the S-100 bus standard.
Early electronic speech-synthesizers sounded robotic and were often barely intelligible. The quality of synthesized speech has steadily improved, but output from contemporary speech synthesis systems remains clearly distinguishable from actual human speech.
Synthesized voices typically sounded male until 1990, when Ann Syrdal, at AT&T Bell Laboratories, created a female voice.
Kurzweil predicted in 2005 that as the cost-performance ratio caused speech synthesizers to become cheaper and more accessible, more people would benefit from the use of text-to-speech programs.
Synthesizer technologies
The most important qualities of a speech synthesis system are naturalness and intelligibility. Naturalness describes how closely the output sounds like human speech, while intelligibility is the ease with which the output is understood. The ideal speech synthesizer is both natural and intelligible. Speech synthesis systems usually try to maximize both characteristics.
The two primary technologies generating synthetic speech waveforms are concatenative synthesis and formant synthesis. Each technology has strengths and weaknesses, and the intended uses of a synthesis system will typically determine which approach is used.
Concatenation synthesis
Concatenative synthesis is based on the concatenation (stringing together) of segments of recorded speech. Generally, concatenative synthesis produces the most natural-sounding synthesized speech. However, differences between natural variations in speech and the nature of the automated techniques for segmenting the waveforms sometimes result in audible glitches in the output. There are three main sub-types of concatenative synthesis.
Unit selection synthesis
Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, diphones, half-phones, syllables, morphemes, words, phrases, and sentences. Typically, the division into segments is done using a specially modified speech recognizer set to a "forced alignment" mode with some manual correction afterward, using visual representations such as the waveform and spectrogram. An index of the units in the speech database is then created based on the segmentation and acoustic parameters like the fundamental frequency (pitch), duration, position in the syllable, and neighboring phones. At run time, the desired target utterance is created by determining the best chain of candidate units from the database (unit selection). This process is typically achieved using a specially weighted decision tree.
Unit selection provides the greatest naturalness, because it applies only a small amount of digital signal processing (DSP) to the recorded speech. DSP often makes recorded speech sound less natural, although some systems use a small amount of signal processing at the point of concatenation to smooth the waveform. The output from the best unit-selection systems is often indistinguishable from real human voices, especially in contexts for which the TTS system has been tuned. However, maximum naturalness typically requires unit-selection speech databases to be very large, in some systems ranging into the gigabytes of recorded data, representing dozens of hours of speech. Also, unit selection algorithms have been known to select segments from a place that results in less than ideal synthesis (e.g. minor words become unclear) even when a better choice exists in the database. Recently, researchers have proposed various automated methods to detect unnatural segments in unit-selection speech synthesis systems.
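The search for the best chain of candidate units is essentially a least-cost path problem. The following Python sketch shows the idea with a toy target cost (prosodic mismatch with the specification) and join cost (pitch mismatch at the concatenation point); the cost functions and the miniature candidate lists are illustrative assumptions, not those of any actual synthesizer.

```python
# Toy unit selection by dynamic programming over target cost + join cost.
def target_cost(target, unit):
    return abs(target["pitch"] - unit["pitch"]) + abs(target["dur"] - unit["dur"])

def join_cost(prev_unit, unit):
    return abs(prev_unit["end_pitch"] - unit["start_pitch"])

def select_units(targets, candidates):
    # best[i] = (accumulated cost, unit path) for a path ending in the i-th candidate
    best = [(target_cost(targets[0], u), [u]) for u in candidates[0]]
    for tgt, cands in zip(targets[1:], candidates[1:]):
        new_best = []
        for u in cands:
            cost, path = min(((c + join_cost(p[-1], u), p) for c, p in best),
                             key=lambda cp: cp[0])
            new_best.append((cost + target_cost(tgt, u), path + [u]))
        best = new_best
    return min(best, key=lambda cp: cp[0])          # lowest-cost unit sequence

targets = [{"pitch": 120, "dur": 80}, {"pitch": 110, "dur": 60}]
candidates = [                                       # invented database entries
    [{"pitch": 118, "dur": 82, "start_pitch": 118, "end_pitch": 117},
     {"pitch": 140, "dur": 70, "start_pitch": 140, "end_pitch": 135}],
    [{"pitch": 112, "dur": 64, "start_pitch": 116, "end_pitch": 108}],
]
print(select_units(targets, candidates))
```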
Diphone synthesis
Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones, and German about 2500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA or MBROLA, or more recent techniques such as pitch modification in the source domain using the discrete cosine transform. Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining, although it continues to be used in research because there are a number of freely available software implementations. An early example of diphone synthesis is a teaching robot, Leachim, that was invented by Michael J. Freeman. Leachim contained information regarding class curricula and certain biographical information about the students whom it was programmed to teach. It was tested in a fourth-grade classroom in the Bronx, New York.
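A diphone inventory is indexed by sound-to-sound transitions, so synthesis amounts to converting a phoneme string into diphone keys and joining the stored snippets. The Python sketch below shows only this lookup-and-concatenate step, with an invented key scheme and database; prosody modification (for example with PSOLA) is omitted.

```python
# Toy diphone lookup: phoneme string -> diphone keys -> concatenated waveform.
import numpy as np

def to_diphones(phonemes):
    padded = ["_"] + phonemes + ["_"]                 # "_" marks silence at the edges
    return [f"{a}-{b}" for a, b in zip(padded, padded[1:])]

def synthesize(phonemes, diphone_db):
    # diphone_db maps keys like "h-e" to numpy arrays of samples (invented here)
    return np.concatenate([diphone_db[d] for d in to_diphones(phonemes)])

print(to_diphones(["h", "e", "l", "o"]))
# ['_-h', 'h-e', 'e-l', 'l-o', 'o-_']
```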
Domain-specific synthesis
Domain-specific synthesis concatenates prerecorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports. The technology is very simple to implement, and has been in commercial use for a long time, in devices like talking clocks and calculators. The level of naturalness of these systems can be very high because the variety of sentence types is limited, and they closely match the prosody and intonation of the original recordings.
Because these systems are limited by the words and phrases in their databases, they are not general-purpose and can only synthesize the combinations of words and phrases with which they have been preprogrammed. The blending of words within naturally spoken language, however, can still cause problems unless the many variations are taken into account. For example, in non-rhotic dialects of English the "r" in words like "clear" is usually only pronounced when the following word has a vowel as its first letter (e.g. in "clear out" the "r" is pronounced). Likewise in French, many final consonants are no longer silent if followed by a word that begins with a vowel, an effect called liaison. This alternation cannot be reproduced by a simple word-concatenation system, which would require additional complexity to be context-sensitive.
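A context-sensitive word-concatenation system therefore has to store or derive more than one variant per word. The following Python sketch illustrates the idea with a single invented rule for a non-rhotic pronunciation of "clear": the variant with the final "r" is chosen only when the next word begins with a vowel letter.

```python
# Toy context-sensitive variant selection (the transcriptions are illustrative).
VARIANTS = {"clear": {"before_vowel": "K L IH R", "otherwise": "K L IH"}}

def pronounce(words):
    out = []
    for i, w in enumerate(words):
        if w in VARIANTS:
            nxt = words[i + 1] if i + 1 < len(words) else ""
            key = "before_vowel" if nxt.startswith(("a", "e", "i", "o", "u")) else "otherwise"
            out.append(VARIANTS[w][key])
        else:
            out.append(w.upper())                    # placeholder for other words
    return out

print(pronounce(["clear", "out"]))   # ['K L IH R', 'OUT']  -> final "r" kept
print(pronounce(["clear", "sky"]))   # ['K L IH', 'SKY']    -> final "r" dropped
```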
Formant synthesis
Formant synthesis does not use human speech samples at runtime. Instead, the synthesized speech output is created using additive synthesis and an acoustic model (physical modelling synthesis). Parameters such as fundamental frequency, voicing, and noise levels are varied over time to create a waveform of artificial speech. This method is sometimes called rules-based synthesis; however, many concatenative systems also have rules-based components.
Many systems based on formant synthesis technology generate artificial, robotic-sounding speech that would never be mistaken for human speech. However, maximum naturalness is not always the goal of a speech synthesis system, and formant synthesis systems have advantages over concatenative systems. Formant-synthesized speech can be reliably intelligible, even at very high speeds, avoiding the acoustic glitches that commonly plague concatenative systems. High-speed synthesized speech is used by the visually impaired to quickly navigate computers using a screen reader. Formant synthesizers are usually smaller programs than concatenative systems because they do not have a database of speech samples. They can therefore be used in embedded systems, where memory and microprocessor power are especially limited. Because formant-based systems have complete control of all aspects of the output speech, a wide variety of prosodies and intonations can be output, conveying not just questions and statements, but a variety of emotions and tones of voice.
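The source-filter idea behind formant synthesis can be sketched in a few lines of Python: a periodic source signal is passed through a cascade of resonators centred on formant frequencies. The pitch, formant frequencies and bandwidths below are illustrative values only, roughly corresponding to an /a/-like vowel, and the gain handling is deliberately crude.

```python
# Minimal cascade formant synthesis sketch: impulse-train source -> resonators.
import numpy as np
from scipy.signal import lfilter

def resonator(signal, freq, bandwidth, sr):
    """Second-order IIR resonator implementing one formant."""
    r = np.exp(-np.pi * bandwidth / sr)
    theta = 2 * np.pi * freq / sr
    a = [1.0, -2.0 * r * np.cos(theta), r * r]    # poles at the formant frequency
    b = [1.0 - r]                                  # rough gain normalization
    return lfilter(b, a, signal)

sr, f0, dur = 16000, 120, 0.5                      # sample rate, pitch (Hz), seconds
source = np.zeros(int(sr * dur))
source[::sr // f0] = 1.0                           # impulse train as a voiced source

speech = source
for freq, bw in [(700, 130), (1220, 70), (2600, 160)]:   # F1-F3 for an /a/-like vowel
    speech = resonator(speech, freq, bw, sr)
speech /= np.max(np.abs(speech))                   # normalize amplitude to +/-1
```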
Examples of non-real-time but highly accurate intonation control in formant synthesis include the work done in the late 1970s for the Texas Instruments toy Speak & Spell, and in the early 1980s Sega arcade machines and in many Atari, Inc. arcade games using the TMS5220 LPC Chips. Creating proper intonation for these projects was painstaking, and the results have yet to be matched by real-time text-to-speech interfaces.
Articulatory synthesis
Articulatory synthesis consists of computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there. The first articulatory synthesizer regularly used for laboratory experiments was developed at Haskins Laboratories in the mid-1970s by Philip Rubin, Tom Baer, and Paul Mermelstein. This synthesizer, known as ASY, was based on vocal tract models developed at Bell Laboratories in the 1960s and 1970s by Paul Mermelstein, Cecil Coker, and colleagues.
Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems. A notable exception is the NeXT-based system originally developed and marketed by Trillium Sound Research, a spin-off company of the University of Calgary, where much of the original research was conducted. Following the demise of the various incarnations of NeXT (started by Steve Jobs in the late 1980s and merged with Apple Computer in 1997), the Trillium software was published under the GNU General Public License, with work continuing as gnuspeech. The system, first marketed in 1994, provides full articulatory-based text-to-speech conversion using a waveguide or transmission-line analog of the human oral and nasal tracts controlled by Carré's "distinctive region model".
More recent synthesizers, developed by Jorge C. Lucero and colleagues, incorporate models of vocal fold biomechanics, glottal aerodynamics and acoustic wave propagation in the bronchi, trachea, nasal and oral cavities, and thus constitute full systems of physics-based speech simulation.
HMM-based synthesis
HMM-based synthesis is a synthesis method based on hidden Markov models, also called Statistical Parametric Synthesis. In this system, the frequency spectrum (vocal tract), fundamental frequency (voice source), and duration (prosody) of speech are modeled simultaneously by HMMs. Speech waveforms are generated from HMMs themselves based on the maximum likelihood criterion.
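In the usual formulation (stated here generically, not as the exact equations of any particular system), synthesis selects the parameter sequence o, covering spectrum, fundamental frequency and duration, that maximizes its likelihood given the input word or phoneme sequence w and the trained model set λ:

    \hat{o} = \arg\max_{o} \; p(o \mid w, \lambda)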
Sinewave synthesis
Sinewave synthesis is a technique for synthesizing speech by replacing the formants (main bands of energy) with pure tone whistles.
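A minimal sketch of the idea (with made-up, static formant values; actual sinewave synthesis uses time-varying formant tracks measured from natural speech) simply sums a few pure tones placed at the formant frequencies:

    import numpy as np

    sample_rate, duration = 16000, 0.5
    t = np.arange(int(sample_rate * duration)) / sample_rate

    # Assumed, static formant frequencies (Hz); real systems vary these over time.
    formants = [700.0, 1220.0, 2600.0]
    amplitudes = [1.0, 0.5, 0.25]

    signal = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(formants, amplitudes))
    signal /= np.abs(signal).max()  # the result is a whistling, vaguely speech-like tone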
Deep learning-based synthesis
Deep learning speech synthesis uses deep neural networks (DNN) to produce artificial speech from text (text-to-speech) or spectrum (vocoder).
The deep neural networks are trained using a large amount of recorded speech and, in the case of a text-to-speech system, the associated labels and/or input text.
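As a purely illustrative sketch (written in PyTorch with arbitrary layer sizes, and deliberately omitting duration modelling, attention and other components that practical systems need), many neural text-to-speech systems share a two-stage structure: an acoustic model maps a symbol sequence to acoustic frames such as a mel-spectrogram, and a separate vocoder, not shown here, converts those frames into a waveform.

    import torch
    import torch.nn as nn

    class ToyAcousticModel(nn.Module):
        # Maps a sequence of phoneme or character IDs to mel-spectrogram frames
        # (one output frame per input symbol here, purely for simplicity).
        def __init__(self, n_symbols=64, n_mels=80, hidden=256):
            super().__init__()
            self.embed = nn.Embedding(n_symbols, hidden)
            self.encoder = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
            self.to_mel = nn.Linear(2 * hidden, n_mels)

        def forward(self, symbol_ids):
            x = self.embed(symbol_ids)
            x, _ = self.encoder(x)
            return self.to_mel(x)  # shape: (batch, time, n_mels)

    model = ToyAcousticModel()
    dummy_input = torch.randint(0, 64, (1, 12))  # a made-up 12-symbol utterance
    mel_frames = model(dummy_input)              # would then be passed to a vocoder

Training such a model on recorded speech paired with its transcription, and coupling it with a neural vocoder, is what the systems described in this section do at much larger scale.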
15.ai uses a multi-speaker model: hundreds of voices are trained concurrently rather than sequentially, decreasing the required training time and enabling the model to learn and generalize shared emotional context, even for voices with no exposure to such emotional context. The deep learning model used by the application is nondeterministic: each time that speech is generated from the same string of text, the intonation of the speech will be slightly different. The application also supports manually altering the emotion of a generated line using emotional contextualizers (a term coined by this project): a sentence or phrase that conveys the emotion of the take and serves as a guide for the model during inference.
ElevenLabs is primarily known for its browser-based, AI-assisted text-to-speech software, Speech Synthesis, which can produce lifelike speech by synthesizing vocal emotion and intonation. The company states its software is built to adjust the intonation and pacing of delivery based on the context of language input used. It uses advanced algorithms to analyze the contextual aspects of text, aiming to detect emotions like anger, sadness, happiness, or alarm, which enables the system to understand the user's sentiment, resulting in a more realistic and human-like inflection. Other features include multilingual speech generation and long-form content creation with contextually-aware voices.
DNN-based speech synthesizers are approaching the naturalness of the human voice.
Disadvantages of the method include low robustness when the training data are insufficient, a lack of controllability, and low performance in auto-regressive models.
For tonal languages, such as Chinese or Taiwanese, different levels of tone sandhi are required, and the output of a speech synthesizer may sometimes contain tone sandhi errors.
Audio deepfakes
In 2023, VICE reporter Joseph Cox published findings that he had recorded five minutes of himself talking and then used a tool developed by ElevenLabs to create voice deepfakes that defeated a bank's voice-authentication system.
Challenges
Text normalization challenges
The process of normalizing text is rarely straightforward. Texts are full of heteronyms, numbers, and abbreviations that all require expansion into a phonetic representation. There are many spellings in English which are pronounced differently based on context. For example, "My latest project is to learn how to better project my voice" contains two pronunciations of "project".
Most text-to-speech (TTS) systems do not generate semantic representations of their input texts, as processes for doing so are unreliable, poorly understood, and computationally ineffective. As a result, various heuristic techniques are used to guess the proper way to disambiguate homographs, like examining neighboring words and using statistics about frequency of occurrence.
Recently TTS systems have begun to use HMMs (discussed above) to generate "parts of speech" to aid in disambiguating homographs. This technique is quite successful for many cases such as whether "read" should be pronounced as "red" implying past tense, or as "reed" implying present tense. Typical error rates when using HMMs in this fashion are usually below five percent. These techniques also work well for most European languages, although access to required training corpora is frequently difficult in these languages.
Deciding how to convert numbers is another problem that TTS systems have to address. It is a simple programming challenge to convert a number into words (at least in English), like "1325" becoming "one thousand three hundred twenty-five". However, numbers occur in many different contexts; "1325" may also be read as "one three two five", "thirteen twenty-five" or "thirteen hundred and twenty five". A TTS system can often infer how to expand a number based on surrounding words, numbers, and punctuation, and sometimes the system provides a way to specify the context if it is ambiguous. Roman numerals can also be read differently depending on context. For example, "Henry VIII" reads as "Henry the Eighth", while "Chapter VIII" reads as "Chapter Eight".
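The following toy sketch (not taken from any production TTS front end; the word lists are deliberately minimal) shows how the same digit string can be verbalized as a cardinal number, a year, or a digit sequence depending on the reading chosen for it:

    UNITS = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
    TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
             "sixteen", "seventeen", "eighteen", "nineteen"]
    TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]

    def two_digits(n):
        if n < 10:
            return UNITS[n]
        if n < 20:
            return TEENS[n - 10]
        return TENS[n // 10] + ("-" + UNITS[n % 10] if n % 10 else "")

    def as_cardinal(n):  # handles 0..9999, enough for this example
        words = []
        if n >= 1000:
            words.append(UNITS[n // 1000] + " thousand")
            n %= 1000
        if n >= 100:
            words.append(UNITS[n // 100] + " hundred")
            n %= 100
        if n:
            words.append(two_digits(n))
        return " ".join(words) or "zero"

    def as_year(n):  # "1325" -> "thirteen twenty-five"
        return two_digits(n // 100) + " " + two_digits(n % 100)

    def as_digits(s):  # "1325" -> "one three two five"
        return " ".join(UNITS[int(c)] for c in s)

    print(as_cardinal(1325))  # one thousand three hundred twenty-five
    print(as_year(1325))      # thirteen twenty-five
    print(as_digits("1325"))  # one three two five

The hard part for a real front end is not producing these readings but choosing between them, which is where the surrounding words, numbers, punctuation and markup come in.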
Similarly, abbreviations can be ambiguous. For example, the abbreviation "in" for "inches" must be differentiated from the word "in", and the address "12 St John St." uses the same abbreviation for both "Saint" and "Street". TTS systems with intelligent front ends can make educated guesses about ambiguous abbreviations, while others provide the same result in all cases, resulting in nonsensical (and sometimes comical) outputs, such as "Ulysses S. Grant" being rendered as "Ulysses South Grant".
Text-to-phoneme challenges
Speech synthesis systems use two basic approaches to determine the pronunciation of a word based on its spelling, a process which is often called text-to-phoneme or grapheme-to-phoneme conversion (phoneme is the term used by linguists to describe distinctive sounds in a language). The simplest approach to text-to-phoneme conversion is the dictionary-based approach, where a large dictionary containing all the words of a language and their correct pronunciations is stored by the program. Determining the correct pronunciation of each word is a matter of looking up each word in the dictionary and replacing the spelling with the pronunciation specified in the dictionary. The other approach is rule-based, in which pronunciation rules are applied to words to determine their pronunciations based on their spellings. This is similar to the "sounding out", or synthetic phonics, approach to learning reading.
Each approach has advantages and drawbacks. The dictionary-based approach is quick and accurate, but completely fails if it is given a word which is not in its dictionary. As dictionary size grows, so too do the memory space requirements of the synthesis system. On the other hand, the rule-based approach works on any input, but the complexity of the rules grows substantially as the system takes into account irregular spellings or pronunciations. (Consider that the word "of" is very common in English, yet is the only word in which the letter "f" is pronounced as a "v".) As a result, nearly all speech synthesis systems use a combination of these approaches.
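A minimal sketch of such a combined approach (the tiny lexicon and one-letter rules below are illustrative stand-ins, not a real pronunciation dictionary or rule set): words are looked up in a dictionary first, and out-of-vocabulary words fall back to letter-to-sound rules.

    LEXICON = {
        "of": "AH V",  # the irregular case noted above: the "f" is pronounced as a "v"
        "speech": "S P IY CH",
        "the": "DH AH",
    }
    # Naive one-letter-to-one-phoneme rules; real rule sets are context-sensitive.
    LETTER_RULES = {"a": "AE", "b": "B", "c": "K", "d": "D", "e": "EH", "f": "F",
                    "g": "G", "h": "HH", "i": "IH", "k": "K", "l": "L", "m": "M",
                    "n": "N", "o": "AA", "p": "P", "r": "R", "s": "S", "t": "T",
                    "u": "AH", "v": "V", "w": "W", "y": "Y", "z": "Z"}

    def to_phonemes(word):
        word = word.lower()
        if word in LEXICON:  # dictionary-based lookup
            return LEXICON[word]
        phones = (LETTER_RULES.get(ch) for ch in word)  # rule-based fallback
        return " ".join(p for p in phones if p)

    print(to_phonemes("of"))     # AH V (from the dictionary)
    print(to_phonemes("blorp"))  # B L AA R P (from the rules)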
Languages with a phonemic orthography have a very regular writing system, and the prediction of the pronunciation of words based on their spellings is quite successful. Speech synthesis systems for such languages often use the rule-based method extensively, resorting to dictionaries only for those few words, like foreign names and loanwords, whose pronunciations are not obvious from their spellings. On the other hand, speech synthesis systems for languages like English, which have extremely irregular spelling systems, are more likely to rely on dictionaries, and to use rule-based methods only for unusual words, or words that are not in their dictionaries.
Evaluation challenges
The consistent evaluation of speech synthesis systems may be difficult because of a lack of universally agreed objective evaluation criteria. Different organizations often use different speech data. The quality of speech synthesis systems also depends on the quality of the production technique (which may involve analogue or digital recording) and on the facilities used to replay the speech. Evaluating speech synthesis systems has therefore often been compromised by differences between production techniques and replay facilities.
Since 2005, however, some researchers have started to evaluate speech synthesis systems using a common speech dataset.
Prosodics and emotional content
A study in the journal Speech Communication by Amy Drahota and colleagues at the University of Portsmouth, UK, reported that listeners to voice recordings could determine, at better than chance levels, whether or not the speaker was smiling. It was suggested that identification of the vocal features that signal emotional content may be used to help make synthesized speech sound more natural. One of the related issues is modification of the pitch contour of the sentence, depending upon whether it is an affirmative, interrogative or exclamatory sentence. One of the techniques for pitch modification uses discrete cosine transform in the source domain (linear prediction residual). Such pitch synchronous pitch modification techniques need a priori pitch marking of the synthesis speech database using techniques such as epoch extraction using dynamic plosion index applied on the integrated linear prediction residual of the voiced regions of speech. In general, prosody remains a challenge for speech synthesizers, and is an active research topic.
Dedicated hardware
Icophone
General Instrument SP0256-AL2
National Semiconductor DT1050 Digitalker (Mozer – Forrest Mozer)
Texas Instruments LPC Speech Chips
Hardware and software systems
Popular systems offering speech synthesis as a built-in capability.
Texas Instruments
In the early 1980s, TI was known as a pioneer in speech synthesis, and a highly popular plug-in speech synthesizer module was available for the TI-99/4 and 4A. Speech synthesizers were offered free with the purchase of a number of cartridges and were used by many TI-written video games (games offered with speech during this promotion included Alpiner and Parsec). The synthesizer uses a variant of linear predictive coding and has a small in-built vocabulary. The original intent was to release small cartridges that plugged directly into the synthesizer unit, which would increase the device's built-in vocabulary. However, the success of software text-to-speech in the Terminal Emulator II cartridge canceled that plan.
Mattel
The Mattel Intellivision game console offered the Intellivoice Voice Synthesis module in 1982. It included the SP0256 Narrator speech synthesizer chip on a removable cartridge. The Narrator had 2 kB of read-only memory (ROM), which was used to store a database of generic words that could be combined to make phrases in Intellivision games. Since the Narrator chip could also accept speech data from external memory, any additional words or phrases needed could be stored inside the cartridge itself. The data consisted of strings of analog-filter coefficients that modified the behavior of the chip's synthetic vocal-tract model, rather than simple digitized samples.
SAM
Also released in 1982, Software Automatic Mouth was the first commercial all-software voice synthesis program. It was later used as the basis for MacinTalk. The program was available for non-Macintosh Apple computers (including the Apple II and the Lisa), various Atari models, and the Commodore 64. The Apple version preferred additional hardware that contained DACs, although it could instead use the computer's one-bit audio output (with the addition of much distortion) if the card was not present. The Atari made use of the embedded POKEY audio chip. Speech playback on the Atari normally disabled interrupt requests and shut down the ANTIC chip during vocal output; the audible output was extremely distorted speech while the screen was on. The Commodore 64 made use of the 64's embedded SID audio chip.
Atari
Arguably, the first speech system integrated into an operating system was that of the unreleased Atari 1400XL/1450XL computers of circa 1983. These used the Votrax SC01 chip and a finite-state machine to enable World English Spelling text-to-speech synthesis.
The Atari ST computers were sold with "stspeech.tos" on floppy disk.
Apple
The first speech system integrated into an operating system that shipped in quantity was Apple Computer's MacinTalk. The software was licensed from third-party developers Joseph Katz and Mark Barton (later, SoftVoice, Inc.) and was featured during the 1984 introduction of the Macintosh computer. This January demo required 512 kilobytes of RAM. As a result, it could not run in the 128 kilobytes of RAM the first Mac actually shipped with, so the demo was accomplished with a prototype 512k Mac, although those in attendance were not told of this and the synthesis demo created considerable excitement for the Macintosh. In the early 1990s Apple expanded its capabilities, offering system-wide text-to-speech support. With the introduction of faster PowerPC-based computers, they included higher-quality voice sampling. Apple also introduced speech recognition into its systems, which provided a fluid command set. More recently, Apple has added sample-based voices. Starting as a curiosity, the speech system of Apple Macintosh has evolved into a fully supported program, PlainTalk, for people with vision problems. VoiceOver was first featured in 2005 in Mac OS X Tiger (10.4). During 10.4 (Tiger) and the first releases of 10.5 (Leopard) there was only one standard voice shipping with Mac OS X. Starting with 10.6 (Snow Leopard), the user can choose from a wide range of voices. VoiceOver voices feature the taking of realistic-sounding breaths between sentences, as well as improved clarity at high read rates over PlainTalk. Mac OS X also includes say, a command-line application that converts text to audible speech. The AppleScript Standard Additions include a say verb that allows a script to use any of the installed voices and to control the pitch, speaking rate and modulation of the spoken text.
Amazon
Amazon's text-to-speech technology is used in Alexa and offered as Software as a Service in AWS (from 2017).
AmigaOS
The second operating system to feature advanced speech synthesis capabilities was AmigaOS, introduced in 1985. The voice synthesis was licensed by Commodore International from SoftVoice, Inc., who also developed the original MacinTalk text-to-speech system. It featured a complete system of voice emulation for American English, with both male and female voices and "stress" indicator markers, made possible through the Amiga's audio chipset. The synthesis system was divided into a translator library which converted unrestricted English text into a standard set of phonetic codes and a narrator device which implemented a formant model of speech generation. AmigaOS also featured a high-level "Speak Handler", which allowed command-line users to redirect text output to speech. Speech synthesis was occasionally used in third-party programs, particularly word processors and educational software. The synthesis software remained largely unchanged from the first AmigaOS release and Commodore eventually removed speech synthesis support from AmigaOS 2.1 onward.
Despite the American English phoneme limitation, an unofficial version with multilingual speech synthesis was developed. This made use of an enhanced version of the translator library which could translate a number of languages, given a set of rules for each language.
Microsoft Windows
Modern Windows desktop systems can use SAPI 4 and SAPI 5 components to support speech synthesis and speech recognition. SAPI 4.0 was available as an optional add-on for Windows 95 and Windows 98. Windows 2000 added Narrator, a text-to-speech utility for people who have visual impairment. Third-party programs such as JAWS for Windows, Window-Eyes, Non-visual Desktop Access, Supernova and System Access can perform various text-to-speech tasks such as reading text aloud from a specified website, email account, text document, the Windows clipboard, the user's keyboard typing, etc. Not all programs can use speech synthesis directly. Some programs can use plug-ins, extensions or add-ons to read text aloud. Third-party programs are available that can read text from the system clipboard.
Microsoft Speech Server is a server-based package for voice synthesis and recognition. It is designed for network use with web applications and call centers.
Votrax
From 1971 to 1996, Votrax produced a number of commercial speech synthesizer components. A Votrax synthesizer was included in the first generation Kurzweil Reading Machine for the Blind.
Text-to-speech systems
Text-to-speech (TTS) refers to the ability of computers to read text aloud. A TTS engine converts written text to a phonemic representation, then converts the phonemic representation to waveforms that can be output as sound. TTS engines with different languages, dialects and specialized vocabularies are available through third-party publishers.
Android
Version 1.6 of Android added support for speech synthesis (TTS).
Internet
Currently, there are a number of applications, plugins and gadgets that can read messages directly from an e-mail client and web pages from a web browser or Google Toolbar. Some specialized software can narrate RSS feeds. On one hand, online RSS narrators simplify information delivery by allowing users to listen to their favourite news sources and to convert them to podcasts. On the other hand, online RSS readers are available on almost any personal computer connected to the Internet. Users can download generated audio files to portable devices, e.g. with the help of a podcast receiver, and listen to them while walking, jogging or commuting to work.
A growing field in Internet-based TTS is web-based assistive technology, e.g. 'Browsealoud' from a UK company and Readspeaker. Such services can deliver TTS functionality to anyone (for reasons of accessibility, convenience, entertainment or information) with access to a web browser. The non-profit project Pediaphon was created in 2006 to provide a similar web-based TTS interface to Wikipedia.
Other work is being done in the context of the W3C through the W3C Audio Incubator Group with the involvement of The BBC and Google Inc.
Open source
Some open-source software systems are available, such as:
eSpeak which supports a broad range of languages.
Festival Speech Synthesis System which uses diphone-based synthesis, as well as more modern and better-sounding techniques.
gnuspeech which uses articulatory synthesis from the Free Software Foundation.
Others
Following the commercial failure of the hardware-based Intellivoice, gaming developers sparingly used software synthesis in later games. Earlier systems from Atari, such as the Atari 5200 (Baseball) and the Atari 2600 (Quadrun and Open Sesame), also had games utilizing software synthesis.
Some e-book readers include text-to-speech, such as the Amazon Kindle, Samsung E6, PocketBook eReader Pro, enTourage eDGe, and the Bebook Neo.
The BBC Micro incorporated the Texas Instruments TMS5220 speech synthesis chip.
Some models of Texas Instruments home computers produced in 1979 and 1981 (Texas Instruments TI-99/4 and TI-99/4A) were capable of text-to-phoneme synthesis or reciting complete words and phrases (text-to-dictionary), using a very popular Speech Synthesizer peripheral. TI used a proprietary codec to embed complete spoken phrases into applications, primarily video games.
IBM's OS/2 Warp 4 included VoiceType, a precursor to IBM ViaVoice.
GPS Navigation units produced by Garmin, Magellan, TomTom and others use speech synthesis for automobile navigation.
Yamaha produced a music synthesizer in 1999, the Yamaha FS1R which included a Formant synthesis capability. Sequences of up to 512 individual vowel and consonant formants could be stored and replayed, allowing short vocal phrases to be synthesized.
Digital sound-alikes
At the 2018 Conference on Neural Information Processing Systems (NeurIPS), researchers from Google presented the work 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis', which transfers learning from speaker verification to text-to-speech synthesis and can be made to sound almost like anybody from a speech sample of only 5 seconds.
Also researchers from Baidu Research presented a voice cloning system with similar aims at the 2018 NeurIPS conference, though the result is rather unconvincing.
By 2019, digital sound-alikes had found their way into the hands of criminals, as Symantec researchers knew of three cases where digital sound-alike technology had been used for crime.
This increases the strain on the disinformation situation, coupled with the facts that
Human image synthesis since the early 2000s has improved to the point that humans can no longer reliably distinguish a real human imaged with a real camera from a simulation of a human imaged with a simulation of a camera.
2D video forgery techniques were presented in 2016 that allow near real-time counterfeiting of facial expressions in existing 2D video.
At SIGGRAPH 2017, an audio-driven digital look-alike of the upper torso of Barack Obama was presented by researchers from the University of Washington. It was driven only by a voice track as source data for the animation, after the training phase to acquire lip sync and wider facial information from training material consisting of 2D videos with audio had been completed.
In March 2020, a freeware web application called 15.ai that generates high-quality voices from an assortment of fictional characters from a variety of media sources was released. Initial characters included GLaDOS from Portal, Twilight Sparkle and Fluttershy from the show My Little Pony: Friendship Is Magic, and the Tenth Doctor from Doctor Who.
Speech synthesis markup languages
A number of markup languages have been established for the rendition of text as speech in an XML-compliant format. The most recent is Speech Synthesis Markup Language (SSML), which became a W3C recommendation in 2004. Older speech synthesis markup languages include Java Speech Markup Language (JSML) and SABLE. Although each of these was proposed as a standard, none of them have been widely adopted.
Speech synthesis markup languages are distinguished from dialogue markup languages. VoiceXML, for example, includes tags related to speech recognition, dialogue management and touchtone dialing, in addition to text-to-speech markup.
Applications
Speech synthesis has long been a vital assistive technology tool and its application in this area is significant and widespread. It allows environmental barriers to be removed for people with a wide range of disabilities. The longest application has been in the use of screen readers for people with visual impairment, but text-to-speech systems are now commonly used by people with dyslexia and other reading disabilities as well as by pre-literate children. They are also frequently employed to aid those with severe speech impairment, usually through a dedicated voice output communication aid. Work to personalize a synthetic voice to better match a person's personality or historical voice is becoming available. A noted application of speech synthesis was the Kurzweil Reading Machine for the Blind, which incorporated text-to-phonetics software based on work from Haskins Laboratories and a black-box synthesizer built by Votrax.
Speech synthesis techniques are also used in entertainment productions such as games and animations. In 2007, Animo Limited announced the development of a software application package based on its speech synthesis software FineSpeech, explicitly geared towards customers in the entertainment industries, able to generate narration and lines of dialogue according to user specifications. The application reached maturity in 2008, when NEC Biglobe announced a web service that allows users to create phrases from the voices of characters from the Japanese anime series Code Geass: Lelouch of the Rebellion R2. 15.ai has been frequently used for content creation in various fandoms, including the My Little Pony: Friendship Is Magic fandom, the Team Fortress 2 fandom, the Portal fandom, and the SpongeBob SquarePants fandom.
Text-to-speech for disability and impaired communication aids have become widely available. Text-to-speech is also finding new applications; for example, speech synthesis combined with speech recognition allows for interaction with mobile devices via natural language processing interfaces. Some users have also created AI virtual assistants using 15.ai and external voice control software.
Text-to-speech is also used in second language acquisition. Voki, for instance, is an educational tool created by Oddcast that allows users to create their own talking avatar, using different accents. They can be emailed, embedded on websites or shared on social media.
Content creators have used voice cloning tools to recreate their voices for podcasts, narration, and comedy shows. Publishers and authors have also used such software to narrate audiobooks and newsletters. Another area of application is AI video creation with talking heads. Webapps and video editors like Elai.io or Synthesia allow users to create video content involving AI avatars, who are made to speak using text-to-speech technology.
Speech synthesis is a valuable computational aid for the analysis and assessment of speech disorders. A voice quality synthesizer, developed by Jorge C. Lucero et al. at the University of Brasília, simulates the physics of phonation and includes models of vocal frequency jitter and tremor, airflow noise and laryngeal asymmetries. The synthesizer has been used to mimic the timbre of dysphonic speakers with controlled levels of roughness, breathiness and strain.
Singing synthesis
See also
References
External links
Simulated singing with the singing robot Pavarobotti or a description from the BBC on how the robot synthesized the singing.
Applications of artificial intelligence
Assistive technology
Auditory displays
Computational linguistics
History of human–computer interaction | Speech synthesis | [
"Technology"
] | 8,837 | [
"History of human–computer interaction",
"Natural language and computing",
"Computational linguistics",
"History of computing"
] |
42,806 | https://en.wikipedia.org/wiki/Cyclone | In meteorology, a cyclone is a large air mass that rotates around a strong center of low atmospheric pressure, counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere as viewed from above (opposite to an anticyclone). Cyclones are characterized by inward-spiraling winds that rotate about a zone of low pressure. The largest low-pressure systems are polar vortices and extratropical cyclones of the largest scale (the synoptic scale). Warm-core cyclones such as tropical cyclones and subtropical cyclones also lie within the synoptic scale. Mesocyclones, tornadoes, and dust devils lie within the smaller mesoscale.
Upper level cyclones can exist without the presence of a surface low, and can pinch off from the base of the tropical upper tropospheric trough during the summer months in the Northern Hemisphere. Cyclones have also been seen on extraterrestrial planets, such as Mars, Jupiter, and Neptune. Cyclogenesis is the process of cyclone formation and intensification. Extratropical cyclones begin as waves in large regions of enhanced mid-latitude temperature contrasts called baroclinic zones. These zones contract and form weather fronts as the cyclonic circulation closes and intensifies. Later in their life cycle, extratropical cyclones occlude as cold air masses undercut the warmer air and become cold core systems. A cyclone's track is guided over the course of its 2 to 6 day life cycle by the steering flow of the subtropical jet stream.
Weather fronts mark the boundary between two masses of air of different temperature, humidity, and densities, and are associated with the most prominent meteorological phenomena. Strong cold fronts typically feature narrow bands of thunderstorms and severe weather, and may on occasion be preceded by squall lines or dry lines. Such fronts form west of the circulation center and generally move from west to east; warm fronts form east of the cyclone center and are usually preceded by stratiform precipitation and fog. Warm fronts move poleward ahead of the cyclone path. Occluded fronts form late in the cyclone life cycle near the center of the cyclone and often wrap around the storm center.
Tropical cyclogenesis describes the process of development of tropical cyclones. Tropical cyclones form due to latent heat driven by significant thunderstorm activity, and are warm core. Cyclones can transition between extratropical, subtropical, and tropical phases. Mesocyclones form as warm core cyclones over land, and can lead to tornado formation. Waterspouts can also form from mesocyclones, but more often develop from environments of high instability and low vertical wind shear. In the Atlantic and the northeastern Pacific oceans, a tropical cyclone is generally referred to as a hurricane (from the name of the ancient Central American deity of wind, Huracan), in the Indian and south Pacific oceans it is called a cyclone, and in the northwestern Pacific it is called a typhoon.
The growth of instability in the vortices is not universal. For example, the size, the intensity, the moist convection, the surface evaporation, and the value of potential temperature at each height can all affect the nonlinear evolution of a vortex.
Nomenclature
Henry Piddington published 40 papers dealing with tropical storms from Calcutta between 1836 and 1855 in The Journal of the Asiatic Society. He also coined the term cyclone, meaning the coil of a snake. In 1842, he published his landmark thesis, Laws of the Storms.
Structure
There are a number of structural characteristics common to all cyclones. A cyclone is a low-pressure area. A cyclone's center (often known in a mature tropical cyclone as the eye), is the area of lowest atmospheric pressure in the region. Near the center, the pressure gradient force (from the pressure in the center of the cyclone compared to the pressure outside the cyclone) and the force from the Coriolis effect must be in an approximate balance, or the cyclone would collapse on itself as a result of the difference in pressure.
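This balance can be written compactly. For steady cyclonic flow around a low-pressure center, the standard gradient wind balance (a textbook relation stated here in general terms, not a formula quoted from this article's sources) is

    \frac{v^2}{r} + f v = \frac{1}{\rho}\,\frac{\partial p}{\partial r}

where v is the wind speed, r the distance from the cyclone center, f the Coriolis parameter, ρ the air density and p the pressure: the inward pressure gradient force on the right is balanced by the Coriolis and centrifugal terms on the left.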
Because of the Coriolis effect, the wind flow around a large cyclone is counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. In the Northern Hemisphere, the fastest winds relative to the surface of the Earth therefore occur on the eastern side of a northward-moving cyclone and on the northern side of a westward-moving one; the opposite occurs in the Southern Hemisphere. In contrast to low-pressure systems, the wind flow around high-pressure systems are clockwise (anticyclonic) in the northern hemisphere, and counterclockwise in the southern hemisphere.
Formation
Cyclogenesis is the development or strengthening of cyclonic circulation in the atmosphere. Cyclogenesis is an umbrella term for several different processes that all result in the development of some sort of cyclone. It can occur at various scales, from the microscale to the synoptic scale.
Extratropical cyclones begin as waves along weather fronts before occluding later in their life cycle as cold-core systems. However, some intense extratropical cyclones can become warm-core systems when a warm seclusion occurs.
Tropical cyclones form as a result of significant convective activity, and are warm core. Mesocyclones form as warm core cyclones over land, and can lead to tornado formation. Waterspouts can also form from mesocyclones, but more often develop from environments of high instability and low vertical wind shear. Cyclolysis, the weakening or dissipation of cyclonic circulation, is the opposite of cyclogenesis; the high-pressure equivalent, dealing with the formation of high-pressure areas, is anticyclogenesis.
A surface low can form in a variety of ways. Topography can create a surface low. Mesoscale convective systems can spawn surface lows that are initially warm-core. The disturbance can grow into a wave-like formation along the front and the low is positioned at the crest. Around the low, the flow becomes cyclonic. This rotational flow moves polar air towards the equator on the west side of the low, while warm air moves towards the pole on the east side. A cold front appears on the west side, while a warm front forms on the east side. Usually, the cold front moves at a quicker pace than the warm front and "catches up" with it due to the slow erosion of the higher density air mass ahead of the cyclone. In addition, the higher density air mass sweeping in behind the cyclone strengthens the higher pressure, denser cold air mass. The cold front overtakes the warm front, reducing the length of the warm front. At this point an occluded front forms, where the warm air mass is pushed upwards into a trough of warm air aloft, which is also known as a trowal.
Tropical cyclogenesis is the development and strengthening of a tropical cyclone. The mechanisms by which tropical cyclogenesis occurs are distinctly different from those that produce mid-latitude cyclones. Tropical cyclogenesis, the development of a warm-core cyclone, begins with significant convection in a favorable atmospheric environment. There are six main requirements for tropical cyclogenesis:
sufficiently warm sea surface temperatures,
atmospheric instability,
high humidity in the lower to middle levels of the troposphere,
enough Coriolis force to develop a low-pressure center,
a preexisting low-level focus or disturbance, and
low vertical wind shear.
An average of 86 tropical cyclones of tropical storm intensity form annually worldwide, with 47 reaching hurricane/typhoon strength, and 20 becoming intense tropical cyclones (at least Category 3 intensity on the Saffir–Simpson hurricane scale).
Synoptic scale
The following types of cyclones are identifiable in synoptic charts.
Surface-based types
There are three main types of surface-based cyclones: extratropical cyclones, subtropical cyclones, and tropical cyclones.
Extratropical cyclone
An extratropical cyclone is a synoptic scale low-pressure weather system that does not have tropical characteristics, as it is connected with fronts and horizontal gradients (rather than vertical) in temperature and dew point otherwise known as "baroclinic zones".
"Extratropical" is applied to cyclones outside the tropics, in the middle latitudes. These systems may also be described as "mid-latitude cyclones" due to their area of formation, or "post-tropical cyclones" when a tropical cyclone has moved (extratropical transition) beyond the tropics. They are often described as "depressions" or "lows" by weather forecasters and the general public. These are the everyday phenomena that, along with anticyclones, drive weather over much of the Earth.
Although extratropical cyclones are almost always classified as baroclinic since they form along zones of temperature and dewpoint gradient within the westerlies, they can sometimes become barotropic late in their life cycle when the temperature distribution around the cyclone becomes fairly uniform with radius. An extratropical cyclone can transform into a subtropical storm, and from there into a tropical cyclone, if it dwells over warm waters sufficient to warm its core, and as a result develops central convection. A particularly intense type of extratropical cyclone that strikes during winter is known colloquially as a nor'easter.
Polar low
A polar low is a small-scale, short-lived atmospheric low-pressure system (depression) that is found over the ocean areas poleward of the main polar front in both the Northern and Southern Hemispheres. Polar lows were first identified on the meteorological satellite imagery that became available in the 1960s, which revealed many small-scale cloud vortices at high latitudes. The most active polar lows are found over certain ice-free maritime areas in or near the Arctic during the winter, such as the Norwegian Sea, Barents Sea, Labrador Sea and Gulf of Alaska. Polar lows dissipate rapidly when they make landfall. Antarctic systems tend to be weaker than their northern counterparts since the air-sea temperature differences around the continent are generally smaller. However, vigorous polar lows can be found over the Southern Ocean. During winter, when cold-core lows with very cold mid-tropospheric temperatures move over open waters, deep convection forms, which makes polar low development possible. The systems are usually small in horizontal extent and exist for no more than a couple of days. They are part of the larger class of mesoscale weather systems. Polar lows can be difficult to detect using conventional weather reports and are a hazard to high-latitude operations, such as shipping and gas and oil platforms. Polar lows have been referred to by many other terms, such as polar mesoscale vortex, Arctic hurricane, Arctic low, and cold air depression. Today the term is usually reserved for the more vigorous systems that have near-surface winds of at least 17 m/s.
Subtropical
A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. They can form between the equator and the 50th parallel. As early as the 1950s, meteorologists were unclear whether they should be characterized as tropical cyclones or extratropical cyclones, and used terms such as quasi-tropical and semi-tropical to describe the cyclone hybrids. By 1972, the National Hurricane Center officially recognized this cyclone category. Subtropical cyclones began to receive names off the official tropical cyclone list in the Atlantic Basin in 2002. They have broad wind patterns with maximum sustained winds located farther from the center than typical tropical cyclones, and exist in areas of weak to moderate temperature gradient.
Since they form from extratropical cyclones, which have colder temperatures aloft than normally found in the tropics, the sea surface temperature required for their formation is around 23 degrees Celsius (73 °F), which is three degrees Celsius (5 °F) lower than for tropical cyclones. This means that subtropical cyclones are more likely to form outside the traditional bounds of the hurricane season. Although subtropical storms rarely have hurricane-force winds, they may become tropical in nature as their cores warm.
Tropical
A tropical cyclone is a storm system characterized by a low-pressure center and numerous thunderstorms that produce strong winds and flooding rain. A tropical cyclone feeds on heat released when moist air rises, resulting in condensation of water vapour contained in the moist air. They are fueled by a different heat mechanism than other cyclonic windstorms such as nor'easters, European windstorms, and polar lows, leading to their classification as "warm core" storm systems.
The term "tropical" refers to both the geographic origin of these systems, which form almost exclusively in tropical regions of the globe, and their dependence on Maritime Tropical air masses for their formation. The term "cyclone" refers to the storms' cyclonic nature, with counterclockwise rotation in the Northern Hemisphere and clockwise rotation in the Southern Hemisphere. Depending on their location and strength, tropical cyclones are referred to by other names, such as hurricane, typhoon, tropical storm, cyclonic storm, tropical depression, or simply as a cyclone.
While tropical cyclones can produce extremely powerful winds and torrential rain, they are also able to produce high waves and a damaging storm surge. Their winds increase the wave size, and in so doing they draw more heat and moisture into their system, thereby increasing their strength. They develop over large bodies of warm water, and hence lose their strength if they move over land. This is the reason coastal regions can receive significant damage from a tropical cyclone, while inland regions are relatively safe from strong winds. Heavy rains, however, can produce significant flooding inland. Storm surges are rises in sea level caused by the reduced pressure of the core that in effect "sucks" the water upward and from winds that in effect "pile" the water up. Storm surges can produce extensive coastal flooding that extends well inland from the coastline. Although their effects on human populations can be devastating, tropical cyclones can also relieve drought conditions. They also carry heat and energy away from the tropics and transport it toward temperate latitudes, which makes them an important part of the global atmospheric circulation mechanism. As a result, tropical cyclones help to maintain equilibrium in the Earth's troposphere.
Many tropical cyclones develop when the atmospheric conditions around a weak disturbance in the atmosphere are favorable. Others form when other types of cyclones acquire tropical characteristics. Tropical systems are then moved by steering winds in the troposphere; if the conditions remain favorable, the tropical disturbance intensifies, and can even develop an eye. On the other end of the spectrum, if the conditions around the system deteriorate or the tropical cyclone makes landfall, the system weakens and eventually dissipates. A tropical cyclone can become extratropical as it moves toward higher latitudes if its energy source changes from heat released by condensation to differences in temperature between air masses. A tropical cyclone is usually not considered to become subtropical during its extratropical transition.
Upper level types
Polar cyclone
A polar, sub-polar, or Arctic cyclone (also known as a polar vortex) is a vast area of low pressure that strengthens in the winter and weakens in the summer. A polar cyclone is a very large low-pressure weather system in which the air circulates in a counterclockwise direction in the northern hemisphere, and a clockwise direction in the southern hemisphere. The Coriolis acceleration acting on the air masses moving poleward at high altitude causes a counterclockwise circulation at high altitude. The poleward movement of air originates from the air circulation of the Polar cell. The polar low is not driven by convection as are tropical cyclones, nor the cold and warm air mass interactions as are extratropical cyclones, but is an artifact of the global air movement of the Polar cell. The base of the polar low is in the mid to upper troposphere. In the Northern Hemisphere, the polar cyclone has two centers on average. One center lies near Baffin Island and the other over northeast Siberia. In the southern hemisphere, it tends to be located near the edge of the Ross Ice Shelf near 160° west longitude. When the polar vortex is strong, its effect can be felt at the surface as a westerly wind (toward the east). When the polar cyclone is weak, significant cold outbreaks occur.
TUTT cell
Under specific circumstances, upper level cold lows can break off from the base of the tropical upper tropospheric trough (TUTT), which is located mid-ocean in the Northern Hemisphere during the summer months. These upper tropospheric cyclonic vortices, also known as TUTT cells or TUTT lows, usually move slowly from east-northeast to west-southwest, and their bases generally remain in the middle and upper troposphere. A weak inverted surface trough within the trade winds is generally found underneath them, and they may also be associated with broad areas of high-level clouds. Downward development results in an increase of cumulus clouds and the appearance of a surface vortex. In rare cases, they become warm-core tropical cyclones. Upper cyclones and the upper troughs that trail tropical cyclones can cause additional outflow channels and aid in their intensification. Developing tropical disturbances can help create or deepen upper troughs or upper lows in their wake due to the outflow jet emanating from the developing tropical disturbance/cyclone.
Mesoscale
The following types of cyclones are not identifiable in synoptic charts.
Mesocyclone
A mesocyclone is a vortex of air within a convective storm, with a diameter on the mesoscale of meteorology. Air rises and rotates around a vertical axis, usually in the same direction as low-pressure systems in both northern and southern hemisphere. They are most often cyclonic, that is, associated with a localized low-pressure region within a supercell. Such storms can feature strong surface winds and severe hail. Mesocyclones often occur together with updrafts in supercells, where tornadoes may form. About 1,700 mesocyclones form annually across the United States, but only half produce tornadoes.
Tornado
A tornado is a violently rotating column of air that is in contact with both the surface of the earth and a cumulonimbus cloud or, in rare cases, the base of a cumulus cloud. Tornadoes are also referred to as twisters, a colloquial term in America, or informally as cyclones, although in meteorology the word cyclone is used in a wider sense to name any closed low-pressure circulation.
Dust devil
A dust devil is a strong, well-formed, and relatively long-lived whirlwind, ranging from small (half a metre wide and a few metres tall) to large (more than 10 metres wide and more than 1000 metres tall). The primary vertical motion is upward. Dust devils are usually harmless, but can on rare occasions grow large enough to pose a threat to both people and property.
Waterspout
A waterspout is a columnar vortex forming over water that is, in its most common form, a non-supercell tornado over water that is connected to a cumuliform cloud. While it is often weaker than most of its land counterparts, stronger versions spawned by mesocyclones do occur.
Steam devil
A steam devil is a gentle vortex over calm water or wet land made visible by rising water vapour.
Fire whirl
A fire whirl – also colloquially known as a fire devil, fire tornado, firenado, or fire twister – is a whirlwind induced by a fire and often made up of flame or ash.
Other planets
Cyclones are not unique to Earth. Cyclonic storms are common on giant planets, one example being the Small Dark Spot on Neptune. It is about one third the diameter of the Great Dark Spot and received the nickname "Wizard's Eye" because it looks like an eye. This appearance is caused by a white cloud in the middle of the Wizard's Eye. Mars has also exhibited cyclonic storms. Jovian storms like the Great Red Spot are often mistakenly described as giant hurricanes or cyclonic storms; this is inaccurate, as the Great Red Spot is in fact the inverse phenomenon, an anticyclone.
See also
Tropical cyclone
Subtropical cyclone
Extratropical cyclone
Tornado
Storm
Atlantic hurricane
Australian region tropical cyclone
Space hurricane
Space tornado
References
External links
Current map of global mean sea-level pressure
Meteorological phenomena
Tropical cyclone meteorology
Cyclone
Weather hazards
Vortices | Cyclone | [
"Physics",
"Chemistry",
"Mathematics"
] | 4,172 | [
"Physical phenomena",
"Earth phenomena",
"Vortices",
"Weather hazards",
"Weather",
"Meteorological phenomena",
"Dynamical systems",
"Fluid dynamics"
] |
42,808 | https://en.wikipedia.org/wiki/Lignite | Lignite (derived from Latin lignum meaning 'wood'), often referred to as brown coal, is a soft, brown, combustible sedimentary rock formed from naturally compressed peat. It has a carbon content around 25–35% and is considered the lowest rank of coal due to its relatively low heat content. When removed from the ground, it contains a very high amount of moisture, which partially explains its low carbon content. Lignite is mined all around the world and is used almost exclusively as a fuel for steam-electric power generation.
Lignite combustion produces less heat for the amount of carbon dioxide and sulfur released than other ranks of coal. As a result, lignite is the most harmful coal to human health. Depending on the source, various toxic heavy metals, including naturally occurring radioactive materials, may be present in lignite and left over in the coal fly ash produced from its combustion, further increasing health risks.
Characteristics
Lignite is brownish-black in color and has a carbon content of 60–70 percent on a dry ash-free basis. However, its inherent moisture content is sometimes as high as 75 percent and its ash content ranges from 6–19 percent, compared with 6–12 percent for bituminous coal. As a result, its carbon content on the as-received basis (i.e., containing both inherent moisture and mineral matter) is typically just 25-35 percent.
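A simple worked example (with assumed, illustrative figures rather than values from the sources above) shows why the as-received carbon content is so much lower than the dry, ash-free figure: the as-received value is roughly the dry-ash-free value scaled by the fraction of the sample that is neither moisture nor mineral matter.

    carbon_daf = 0.65  # 65% carbon on a dry, ash-free basis (assumed)
    moisture = 0.50    # 50% inherent moisture, as received (assumed)
    ash = 0.08         # 8% ash, as received (assumed)

    carbon_as_received = carbon_daf * (1 - moisture - ash)
    print(f"{carbon_as_received:.0%}")  # about 27%, within the 25-35% range quoted above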
The energy content of lignite ranges from 10 to 20 MJ/kg (9–17 million BTU per short ton) on a moist, mineral-matter-free basis. The energy content of lignite consumed in the United States averages about 13 million BTU per ton on the as-received basis. The energy content of lignite consumed in Victoria, Australia, averages about 8.2 million BTU per ton on a net wet basis.
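As a quick sanity check on these figures (using standard unit conversions only, not additional data from the cited sources), the MJ/kg range converts to millions of BTU per short ton as follows:

    KG_PER_SHORT_TON = 907.18474
    JOULES_PER_BTU = 1055.05585

    def mj_per_kg_to_mmbtu_per_short_ton(mj_per_kg):
        joules_per_short_ton = mj_per_kg * 1e6 * KG_PER_SHORT_TON
        return joules_per_short_ton / JOULES_PER_BTU / 1e6

    print(round(mj_per_kg_to_mmbtu_per_short_ton(10), 1))  # about 8.6 million BTU per short ton
    print(round(mj_per_kg_to_mmbtu_per_short_ton(20), 1))  # about 17.2 million BTU per short ton

These values are broadly consistent with the roughly 9–17 million BTU per short ton range quoted above.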
Lignite has a high content of volatile matter which makes it easier to convert into gas and liquid petroleum products than higher-ranking coals. Its high moisture content and susceptibility to spontaneous combustion can cause problems in transportation and storage. Processes which remove water from brown coal reduce the risk of spontaneous combustion to the same level as black coal, increase the calorific value of brown coal to a black coal equivalent fuel, and significantly reduce the emissions profile of 'densified' brown coal to a level similar to or better than most black coals. However, removing the moisture increases the cost of the final lignite fuel.
Lignite rapidly degrades when exposed to air. This process is called slacking or slackening.
Uses
Most lignite is used to generate electricity. However, small amounts are used in agriculture, in industry, and even, as jet, in jewelry. Its historical use as fuel for home heating has continuously declined and is now of lower importance than its use to generate electricity.
As fuel
Lignite is often found in thick beds located near the surface, making it inexpensive to mine. However, because of its low energy density, tendency to crumble, and typically high moisture content, brown coal is inefficient to transport and is not traded extensively on the world market compared with higher coal grades. It is often burned in power stations near the mines, such as in Poland's Bełchatów plant and Turów plant, Australia's Latrobe Valley and Luminant's Monticello plant and Martin Lake plant in Texas. Primarily because of latent high moisture content and low energy density of brown coal, carbon dioxide emissions from traditional brown-coal-fired plants are generally much higher per megawatt-hour generated than for comparable black-coal plants, with the world's highest-emitting plant being Australia's Hazelwood Power Station until its closure in March 2017. The operation of traditional brown-coal plants, particularly in combination with strip mining, is politically contentious due to environmental concerns.
The German Democratic Republic relied extensively on lignite to become energy self-sufficient, and eventually obtained 70% of its energy requirements from lignite. Lignite was also an important chemical industry feedstock via the Bergius process or Fischer-Tropsch synthesis in lieu of petroleum, which had to be imported for hard currency following a change in policy by the Soviet Union in the 1970s, which had previously delivered petroleum at below market rates. East German scientists even converted lignite into coke suitable for metallurgical uses (high temperature lignite coke), and much of the railway network was dependent on lignite, either through steam trains or electrified lines mostly fed with lignite-derived power. East Germany was the largest producer of lignite for much of its existence as an independent state.
In 2014, about 12 percent of Germany's energy and, specifically, 27 percent of Germany's electricity came from lignite power plants, while in 2014 in Greece, lignite provided about 50 percent of its power needs. Germany has announced plans to phase out lignite by 2038 at the latest. Greece has confirmed that the last coal plant will be shut in 2025 after receiving pressure from the European Union and plans to heavily invest in renewable energy.
Home heating
Lignite was and is used as a replacement for or in combination with firewood for home heating. It is usually pressed into briquettes for that use. Due to the smell it gives off when burned, lignite was often seen as a fuel for poor people compared to higher value hard coals. In Germany, briquettes are still readily available to end consumers in home improvement stores and supermarkets.
In agriculture
An environmentally beneficial use of lignite is in agriculture. Lignite may have value as an environmentally benign soil amendment, improving cation exchange and phosphorus availability in soils while reducing availability of heavy metals, and may be superior to commercial K humates. Lignite fly ash produced by combustion of lignite in power plants may also be valuable as a soil amendment and fertilizer. However, rigorous studies of the long-term benefits of lignite products in agriculture are lacking.
Lignite may also be used for the cultivation and distribution of biological control microbes that suppress plant pests. The carbon increases the organic matter in the soil while the biological control microbes provide an alternative to chemical pesticides.
Leonardite is a soil conditioner rich in humic acids that is formed by natural oxidation when lignite comes in contact with air. The process can be replicated artificially on a large scale. The less matured xyloid (wood-shaped) lignite also contains high amounts of humic acid.
In drilling mud
Reaction with quaternary amine forms a product called amine-treated lignite (ATL), which is used in drilling mud to reduce fluid loss during drilling.
As an industrial adsorbent
Lignite may have potential uses as an industrial adsorbent. Experiments show that its adsorption of methylene blue falls within the range of activated carbons currently used by industry.
In jewellery
Jet is a form of lignite that has been used as a gemstone. The earliest jet artifacts date to 10,000 BCE and jet was used extensively in necklaces and other ornamentation in Britain from the Neolithic until the end of Roman Britain. Jet experienced a brief revival in Victorian Britain.
Geology
Lignite begins as partially decayed plant material, or peat. Peat tends to accumulate in areas with high moisture, slow land subsidence, and no disturbance by rivers or oceans – under these conditions, the area remains saturated with water, which covers dead vegetation and protects it from atmospheric oxygen. Otherwise, peat swamps are found in a variety of climates and geographical settings. Anaerobic bacteria may contribute to the degradation of peat, but this process takes a long time, particularly in acidic water. Burial by other sediments further slows biological degradation, and subsequent transformations are a result of increased temperatures and pressures underground.
Lignite forms from peat that has not been subjected to deep burial and heating. It forms at relatively low temperatures, primarily by biochemical degradation. This includes the process of humification, in which microorganisms extract hydrocarbons from peat and form humic acids, which decrease the rate of bacterial decay. In lignite, humification is partial, coming to completion only when the coal reaches sub-bituminous rank. The most characteristic chemical change in the organic material during formation of lignite is the sharp reduction in the number of C=O and C-O-R functional groups.
Lignite deposits are typically younger than higher-ranked coals, with the majority of them having formed during the Tertiary period.
Extraction
Lignite is often found in thick beds located near the surface. These are inexpensive to extract using various forms of surface mining, though this can result in serious environmental damage. Regulations in the United States and other countries require that land that is surface mined must be restored to its original productivity once mining is complete.
Strip mining of lignite in the United States begins with drilling to establish the extent of the subsurface beds. Topsoil and subsoil must be properly removed and either used to reclaim previously mined-out areas or stored for future reclamation. Excavator and truck overburden removal prepares the area for dragline overburden removal to expose the lignite beds. These are broken up using specially equipped tractors (coal ripping) and then loaded into bottom dump trucks using front loaders.
Once the lignite is removed, restoration involves grading the mine spoil to as close an approximation as practical of the original ground surface (Approximate Original Contour or AOC). Subsoil and topsoil are restored and the land reseeded with various grasses. In North Dakota, a performance bond is held against the mining company for at least ten years after the end of mining operations to guarantee that the land has been restored to full productivity. A bond for mine reclamation (though not necessarily in this form) is required in the US by the Surface Mining Control and Reclamation Act of 1977.
Resources and reserves
List of countries by lignite reserves
Australia
The Latrobe Valley in Victoria, Australia, contains estimated reserves of some 65 billion tonnes of brown coal. The deposit is equivalent to 25 percent of known world reserves. The coal seams are up to thick, with multiple coal seams often giving virtually continuous brown coal thickness of up to . Seams are covered by very little overburden ().
A partnership led by Kawasaki Heavy Industries and backed by the governments of Japan and Australia has begun extracting hydrogen from brown coal. The liquefied hydrogen will be shipped via the transporter Suiso Frontier to Japan.
North America
The largest lignite deposits in North America are the Gulf Coast lignites and the Fort Union lignite field. The Gulf Coast lignites are located in a band running from Texas to Alabama roughly parallel to the Gulf Coast. The Fort Union lignite field stretches from North Dakota to Saskatchewan. Both are important commercial sources of lignite.
Types
Lignite can be separated into two types: xyloid lignite or fossil wood, and compact lignite or perfect lignite.
Although xyloid lignite may sometimes have the tenacity and the appearance of ordinary wood, it can be seen that the combustible woody tissue has experienced a great modification. It is reducible to a fine powder by trituration, and if submitted to the action of a weak solution of potash, it yields a considerable quantity of humic acid. Leonardite is an oxidized form of lignite, which also contains high levels of humic acid.
Jet is a hardened, gem-like form of lignite used in various types of jewelry.
Production
Germany is the largest producer of lignite, followed by China, Russia, and the United States. Lignite accounted for 8% of all U.S. coal production in 2019.
Gallery
See also
Rheinisches Braunkohlerevier
NLC India Limited
References
External links
Geography in action – an Irish case study
Photograph of lignite
Coldry:Lignite Dewatering Process
Coal
Organic minerals | Lignite | [
"Chemistry"
] | 2,511 | [
"Organic compounds",
"Organic minerals"
] |
42,852 | https://en.wikipedia.org/wiki/Radio%20frequency | Radio frequency (RF) is the oscillation rate of an alternating electric current or voltage or of a magnetic, electric or electromagnetic field or mechanical system in the frequency range from around 20 kHz to around 300 GHz. This is roughly between the upper limit of audio frequencies and the lower limit of infrared frequencies, and also encompasses the microwave range. These are the frequencies at which energy from an oscillating current can radiate off a conductor into space as radio waves, so they are used in radio technology, among other uses. Different sources specify different upper and lower bounds for the frequency range.
Electric current
Electric currents that oscillate at radio frequencies (RF currents) have special properties not shared by direct current or lower audio frequency alternating current, such as the 50 or 60 Hz current used in electrical power distribution.
Energy from RF currents in conductors can radiate into space as electromagnetic waves (radio waves). This is the basis of radio technology.
RF current does not penetrate deeply into electrical conductors but tends to flow along their surfaces; this is known as the skin effect.
RF currents applied to the body often do not cause the painful sensation and muscular contraction of electric shock that lower frequency currents produce. This is because the current changes direction too quickly to trigger depolarization of nerve membranes. However, this does not mean RF currents are harmless; they can cause internal injury as well as serious superficial burns called RF burns.
RF current can ionize air, creating a conductive path through it. This property is exploited by "high frequency" units used in electric arc welding, which use currents at higher frequencies than power distribution uses.
Another property is the ability to appear to flow through paths that contain insulating material, like the dielectric insulator of a capacitor. This is because capacitive reactance in a circuit decreases with increasing frequency.
In contrast, RF current can be blocked by a coil of wire, or even a single turn or bend in a wire. This is because the inductive reactance of a circuit increases with increasing frequency.
When conducted by an ordinary electric cable, RF current has a tendency to reflect from discontinuities in the cable, such as connectors, and travel back down the cable toward the source, causing a condition called standing waves. RF current may be carried efficiently over transmission lines such as coaxial cables.
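The two frequency-dependent reactances described above can be made concrete with a short calculation. The sketch below is only an illustration (the component values and the class name are arbitrary assumptions, not taken from the article): it evaluates the capacitive reactance X_C = 1/(2πfC) and the inductive reactance X_L = 2πfL at mains frequency and at a radio frequency, showing the capacitive path opening up and the inductive path closing off as the frequency rises.

public class Reactance {
    // Capacitive reactance in ohms: X_C = 1 / (2 * pi * f * C).
    static double capacitive(double frequencyHz, double capacitanceFarad) {
        return 1.0 / (2.0 * Math.PI * frequencyHz * capacitanceFarad);
    }

    // Inductive reactance in ohms: X_L = 2 * pi * f * L.
    static double inductive(double frequencyHz, double inductanceHenry) {
        return 2.0 * Math.PI * frequencyHz * inductanceHenry;
    }

    public static void main(String[] args) {
        double capacitance = 100e-12; // 100 pF capacitor (assumed value)
        double inductance = 1e-6;     // 1 microhenry coil (assumed value)
        for (double frequency : new double[] {50.0, 10e6}) { // 50 Hz mains versus 10 MHz RF
            System.out.printf("f = %.0f Hz: X_C = %.1f ohm, X_L = %.4f ohm%n",
                    frequency,
                    capacitive(frequency, capacitance),
                    inductive(frequency, inductance));
        }
    }
}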
Frequency bands
The radio spectrum of frequencies is divided into bands with conventional names designated by the International Telecommunication Union (ITU):
{| class="wikitable" style="text-align:right"
|-
! scope="col" rowspan="2" | Frequencyrange !! scope="col" rowspan="2" | Wavelengthrange !! scope="col" colspan="2" | ITU designation !! scope="col" rowspan="2" | IEEE bands
|-
! scope="col" | Full name
! scope="col" | Abbreviation
|-
! scope="row" | Below 3 Hz
| > 100,000 km || || style="text-align:center" | ||
|-
! scope="row" | 3–30 Hz
| 100,000–10,000 km|| Extremely low frequency || style="text-align:center" | ELF ||
|-
! scope="row" | 30–300 Hz
| 10,000–1,000 km|| Super low frequency || style="text-align:center" | SLF ||
|-
! scope="row" | 300–3000 Hz
| 1,000–100 km|| Ultra low frequency || style="text-align:center" | ULF ||
|-
! scope="row" | 3–30 kHz
| 100–10 km|| Very low frequency || style="text-align:center" | VLF ||
|-
! scope="row" | 30–300 kHz
| 10–1 km|| Low frequency || style="text-align:center" | LF ||
|-
! scope="row" | 300 kHz – 3 MHz
| 1 km – 100 m|| Medium frequency || style="text-align:center" | MF ||
|-
! scope="row" | 3–30 MHz
| 100–10 m|| High frequency || style="text-align:center" | HF || style="text-align:center" | HF
|-
! scope="row" | 30–300 MHz
| 10–1 m|| Very high frequency || style="text-align:center" | VHF || style="text-align:center" | VHF
|-
! scope="row" | 300 MHz – 3 GHz
| 1 m – 100 mm|| Ultra high frequency || style="text-align:center" | UHF || style="text-align:center" | UHF, L, S
|-
! scope="row" | 3–30 GHz
| 100–10 mm|| Super high frequency || style="text-align:center" | SHF || style="text-align:center" | S, C, X, Ku, K, Ka
|-
! scope="row" | 30–300 GHz
| 10–1 mm|| Extremely high frequency || style="text-align:center" | EHF || style="text-align:center" | Ka, V, W, mm
|-
! scope="row" | 300 GHz – 3 THz
| 1 mm – 0.1 mm|| Tremendously high frequency || style="text-align:center" | THF ||
|}
Frequencies of 1 GHz and above are conventionally called microwave, while frequencies of 30 GHz and above are designated millimeter wave.
More detailed band designations are given by the standard IEEE letter-band frequency designations and the EU/NATO frequency designations.
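As a rough companion to the table above, the following sketch (illustrative only; the class name is invented and the band boundaries are simply read off the ITU column) converts a frequency into its free-space wavelength via λ = c / f and reports the corresponding ITU designation.

public class RadioBands {
    private static final double C = 299_792_458.0; // speed of light in m/s

    // Free-space wavelength in metres for a frequency in hertz.
    static double wavelength(double frequencyHz) {
        return C / frequencyHz;
    }

    // Band names follow the ITU designations in the table above;
    // each band spans one decade starting at 3 Hz.
    static String ituBand(double frequencyHz) {
        if (frequencyHz < 3) return "below ITU designations";
        String[] names = {"ELF", "SLF", "ULF", "VLF", "LF", "MF",
                          "HF", "VHF", "UHF", "SHF", "EHF", "THF"};
        int index = (int) Math.floor(Math.log10(frequencyHz / 3.0));
        return index < names.length ? names[index] : "above THF";
    }

    public static void main(String[] args) {
        double frequency = 100e6; // 100 MHz, a typical FM broadcast frequency
        System.out.printf("%.0f Hz: wavelength %.2f m, ITU band %s%n",
                frequency, wavelength(frequency), ituBand(frequency));
    }
}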
Applications
Communications
Radio frequencies are used in communication devices such as transmitters, receivers, computers, televisions, and mobile phones, to name a few. Radio frequencies are also applied in carrier current systems including telephony and control circuits. The MOS integrated circuit is the technology behind the current proliferation of radio frequency wireless telecommunications devices such as cellphones.
Medicine
Medical applications of radio frequency (RF) energy, in the form of electromagnetic waves (radio waves) or electrical currents, have existed for over 125 years, and now include diathermy, hyperthermy treatment of cancer, electrosurgery scalpels used to cut and cauterize in operations, and radiofrequency ablation. Magnetic resonance imaging (MRI) uses radio frequency fields to generate images of the human body.
Non-surgical weight loss equipment
Radio frequency (RF) energy is also used in devices advertised for weight loss and fat removal. The possible effects RF might have on the body, and whether RF can lead to fat reduction, need further study. Devices such as trusculpt ID, Venus Bliss and many others use this type of energy alongside heat to target fat pockets in certain areas of the body; however, studies of how effective these devices are remain limited.
Measurement
Test apparatus for radio frequencies can include standard instruments at the lower end of the range, but at higher frequencies, the test equipment becomes more specialized.
Mechanical oscillations
While RF usually refers to electrical oscillations, mechanical RF systems are not uncommon: see mechanical filter and RF MEMS.
See also
Amplitude modulation (AM)
Bandwidth (signal processing)
Electromagnetic interference
Electromagnetic radiation
Electromagnetic spectrum
EMF measurement
Frequency allocation
Frequency modulation (FM)
Plastic welding
Pulsed electromagnetic field therapy
Radio astronomy
Spectrum management
References
External links
Analog, RF and EMC Considerations in Printed Wiring Board (PWB) Design
Definition of frequency bands (VLF, ELF ... etc.) IK1QFK Home Page (vlf.it)
Radio, light, and sound waves, conversion between wavelength and frequency
RF Terms Glossary
Radio spectrum
Radio waves
Radio waves
Television terminology | Radio frequency | [
"Physics",
"Technology",
"Engineering"
] | 1,673 | [
"Information and communications technology",
"Physical phenomena",
"Telecommunications engineering",
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Radio technology",
"Waves",
"Motion (physics)"
] |
42,869 | https://en.wikipedia.org/wiki/Jakarta%20EE | Jakarta EE, formerly Java Platform, Enterprise Edition (Java EE) and Java 2 Platform, Enterprise Edition (J2EE), is a set of specifications, extending Java SE with specifications for enterprise features such as distributed computing and web services. Jakarta EE applications are run on reference runtimes, which can be microservices or application servers, which handle transactions, security, scalability, concurrency and management of the components they are deploying.
Jakarta EE is defined by its specification. The specification defines APIs (application programming interface) and their interactions. As with other Java Community Process specifications, providers must meet certain conformance requirements in order to declare their products as Jakarta EE compliant.
Examples of contexts in which Jakarta EE referencing runtimes are used include e-commerce, accounting, and banking information systems.
History
The platform was known as Java 2 Platform, Enterprise Edition or J2EE from version 1.2, until the name was changed to Java Platform, Enterprise Edition or Java EE in version 1.5.
Java EE was maintained by Oracle under the Java Community Process. On September 12, 2017, Oracle Corporation announced that it would submit Java EE to the Eclipse Foundation. The Eclipse top-level project has been named Eclipse Enterprise for Java (EE4J). The Eclipse Foundation could not agree with Oracle over the use of javax and Java trademarks. Oracle owns the trademark for the name "Java" and the platform was renamed from Java EE to Jakarta EE. The name refers to the largest city on the island of Java and also the capital of Indonesia, Jakarta. The name should not be confused with the former Jakarta Project which fostered a number of current and former Java projects at the Apache Software Foundation.
Specifications
Jakarta EE includes several specifications that serve different purposes, like generating web pages, reading and writing from a database in a transactional way, managing distributed queues.
The Jakarta EE APIs include several technologies that extend the functionality of the base Java SE APIs, such as Jakarta Enterprise Beans, connectors, servlets, Jakarta Server Pages and several web service technologies.
Web specifications
Jakarta Servlet: defines how to manage HTTP requests, in a synchronous or asynchronous way. It is low level and other Jakarta EE specifications rely on it;
Jakarta WebSocket: API specification that defines a set of APIs to service WebSocket connections;
Jakarta Faces: a technology for constructing user interfaces out of components;
Jakarta Expression Language (EL) is a simple language originally designed to satisfy the specific needs of web application developers. It is used specifically in Jakarta Faces to bind components to (backing) beans and in Contexts and Dependency Injection to named beans, but can be used throughout the entire platform.
Web service specifications
Jakarta RESTful Web Services provides support in creating web services according to the Representational State Transfer (REST) architectural pattern;
Jakarta JSON Processing is a set of specifications to manage information encoded in JSON format;
Jakarta JSON Binding provides specifications to convert JSON information into or from Java classes;
Jakarta XML Binding allows mapping XML into Java objects;
Jakarta XML Web Services can be used to create SOAP web services.
Enterprise specifications
Jakarta Activation (JAF) specifies an architecture to extend component Beans by providing data typing and bindings of such types.
Jakarta Contexts and Dependency Injection (CDI) is a specification to provide a dependency injection container;
Jakarta Enterprise Beans (EJB) specification defines a set of lightweight APIs that an object container (the EJB container) will support in order to provide transactions (using JTA), remote procedure calls (using RMI or RMI-IIOP), concurrency control, dependency injection and access control for business objects. This package contains the Jakarta Enterprise Beans classes and interfaces that define the contracts between the enterprise bean and its clients and between the enterprise bean and the ejb container.
Jakarta Persistence (JPA) are specifications about object-relational mapping between relation database tables and Java classes.
Jakarta Transactions (JTA) contains the interfaces and annotations to interact with the transaction support offered by Jakarta EE. Even though this API abstracts from the really low-level details, the interfaces are also considered somewhat low-level and the average application developer in Jakarta EE is either assumed to be relying on transparent handling of transactions by the higher level EJB abstractions, or using the annotations provided by this API in combination with CDI managed beans.
Jakarta Messaging (JMS) provides a common way for Java programs to create, send, receive and read an enterprise messaging system's messages.
Other specifications
Jakarta Validation: This package contains the annotations and interfaces for the declarative validation support offered by the Jakarta Validation API. Jakarta Validation provides a unified way to provide constraints on beans (e.g. Jakarta Persistence model classes) that can be enforced cross-layer. In Jakarta EE, Jakarta Persistence honors bean validation constraints in the persistence layer, while JSF does so in the view layer.
Jakarta Batch provides the means for batch processing in applications to run long running background tasks that possibly involve a large volume of data and which may need to be periodically executed.
Jakarta Connectors is a Java-based tool for connecting application servers and enterprise information systems (EIS) as part of enterprise application integration (EAI). This is a low-level API aimed at vendors that the average application developer typically does not come in contact with.
Web profile
In an attempt to limit the footprint of web containers, both in physical and in conceptual terms, the web profile was created, a subset of the Jakarta EE specifications. The Jakarta EE web profile comprises the following:
Certified referencing runtimes
Although by definition all Jakarta EE implementations provide the same base level of technologies (namely, the Jakarta EE spec and the associated APIs), they can differ considerably with respect to extra features (like connectors, clustering, fault tolerance, high availability, security, etc.), installed size, memory footprint, startup time, etc.
Jakarta EE
Java EE
Code sample
The code sample shown below demonstrates how various technologies in Java EE 7 are used together to build a web form for editing a user.
In Jakarta EE a (web) UI can be built using Jakarta Servlet, Jakarta Server Pages (JSP), or Jakarta Faces (JSF) with Facelets. The example below uses Faces and Facelets. Not explicitly shown is that the input components use the Jakarta EE Bean Validation API under the covers to validate constraints.
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:h="http://xmlns.jcp.org/jsf/html" xmlns:f="http://xmlns.jcp.org/jsf/core">
<f:metadata>
<f:viewParam name="user_id" value="#{userEdit.user}" converter="#{userConvertor}" />
</f:metadata>
<h:body>
<h:messages />
<h:form>
<h:panelGrid columns="2">
<h:outputLabel for="firstName" value="First name" />
<h:inputText id="firstName" value="#{userEdit.user.firstName}" label="First name" />
<h:outputLabel for="lastName" value="Last name" />
<h:inputText id="lastName" value="#{userEdit.user.lastName}" label="Last name" />
<h:commandButton action="#{userEdit.saveUser}" value="Save" />
</h:panelGrid>
</h:form>
</h:body>
</html>
Example Backing Bean class
To assist the view, Jakarta EE uses a concept called a "Backing Bean". The example below uses Contexts and Dependency Injection (CDI) and Jakarta Enterprise Beans (EJB).
import java.io.Serializable;

import javax.faces.view.ViewScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@ViewScoped
public class UserEdit implements Serializable { // view-scoped beans should be serializable

    private User user;

    @Inject
    private UserDAO userDAO;

    public String saveUser() {
        userDAO.save(this.user);
        // addFlashMessage is assumed to be a helper method defined elsewhere in the application.
        addFlashMessage("User " + this.user.getId() + " saved");
        return "users.xhtml?faces-redirect=true";
    }

    public void setUser(User user) {
        this.user = user;
    }

    public User getUser() {
        return user;
    }
}
Example Data Access Object class
To implement business logic, Jakarta Enterprise Beans (EJB) is the dedicated technology in Jakarta EE. For the actual persistence, JDBC or Jakarta Persistence (JPA) can be used. The example below uses EJB and JPA. Not explicitly shown is that JTA is used under the covers by EJB to control transactional behavior.
import java.util.List;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class UserDAO {

    @PersistenceContext
    private EntityManager entityManager;

    public void save(User user) {
        entityManager.persist(user);
    }

    public void update(User user) {
        entityManager.merge(user);
    }

    public List<User> getAll() {
        // The named query "User.getAll" is declared on the User entity below.
        return entityManager.createNamedQuery("User.getAll", User.class)
                            .getResultList();
    }
}
Example Entity class
For defining entity/model classes Jakarta EE provides the Jakarta Persistence (JPA), and for expressing constraints on those entities it provides the Bean Validation API. The example below uses both these technologies.
import static javax.persistence.GenerationType.IDENTITY;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.validation.constraints.Size;

// The named query is the one referenced by UserDAO.getAll() above.
@NamedQuery(name = "User.getAll", query = "SELECT u FROM User u")
@Entity
public class User {

    @Id
    @GeneratedValue(strategy = IDENTITY)
    private Integer id;

    @Size(min = 2, message = "First name too short")
    private String firstName;

    @Size(min = 2, message = "Last name too short")
    private String lastName;

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}
See also
Canigó (framework)
Deployment descriptor
Java BluePrints
Java Research License
Sun Community Source License
Sun Java System Portal Server
Web container
J2ME
References
External links
Jakarta EE Compatible Products: Enterprise Java Application and Web Servers - Eclipse Foundation
The Jakarta EE Tutorial
First Cup of Jakarta EE Tutorial: An Introduction to Jakarta EE
Java Platform, Enterprise Edition (Java EE), Oracle Technology Network
Jakarta EE official YouTube channel
Articles with example Java code
Computing platforms
Platform, Enterprise Edition
Platform, Enterprise Edition
Web frameworks | Jakarta EE | [
"Technology"
] | 2,263 | [
"Computing platforms",
"Java platform"
] |
42,870 | https://en.wikipedia.org/wiki/Java%20Platform%2C%20Micro%20Edition | Java Platform, Micro Edition or Java ME is a computing platform for development and deployment of portable code for embedded and mobile devices (micro-controllers, sensors, gateways, mobile phones, personal digital assistants, TV set-top boxes, printers). Java ME was formerly known as Java 2 Platform, Micro Edition or J2ME.
The platform uses the object-oriented Java programming language, and is part of the Java software-platform family. It was designed by Sun Microsystems (now Oracle Corporation) and replaced a similar technology, PersonalJava.
In 2013, with more than 3 billion Java ME enabled mobile phones in the market, the platform was in continued decline as smartphones have overtaken feature phones.
History
The platform used to be popular in feature phones, such as Nokia's Series 40 models. It was also supported on the Bada operating system and on Symbian OS along with native software. Users of Windows CE, Windows Mobile, Maemo, MeeGo and Android could download Java ME for their respective environments ("proof-of-concept" for Android).
Originally developed under the Java Community Process as JSR 68, the different flavors of Java ME have evolved in separate JSRs. Java ME devices implement a profile. The most common of these are the Mobile Information Device Profile aimed at mobile devices such as cell phones, and the Personal Profile aimed at consumer products and embedded devices like set-top boxes and PDAs. Profiles are subsets of configurations, of which there are currently two: the Connected Limited Device Configuration (CLDC) and the Connected Device Configuration (CDC).
In 2008, Java ME platforms were restricted to JRE 1.3 features and use that version of the class file format (internally known as version 47.0).
Implementations
Oracle provides a reference implementation of the specification, and some configurations and profiles for MIDP and CDC. Starting with the JavaME 3.0 SDK, a NetBeans-based IDE supported them in a single IDE.
In contrast to the numerous binary implementations of the Java Platform built by Sun for servers and workstations, Sun tended not to provide binaries for the platforms of Java ME targets, and instead relied on third parties to provide their own.
The exception was an MIDP 1.0 JRE (JVM) for Palm OS. Sun provides no J2ME JRE for the Microsoft Windows Mobile (Pocket PC) based devices, despite an open-letter campaign to Sun to release a rumored internal implementation of PersonalJava known by the code name "Captain America". Third party implementations are widely used by Windows Mobile vendors.
At some point, Sun released a now-abandoned reference implementation under the name phoneME.
Operating systems targeting Java ME have been implemented by DoCoMo in the form of DoJa, and by SavaJe as SavaJe OS. The latter company was purchased by Sun in April 2007 and now forms the basis of Sun's JavaFX Mobile.
The open-source Mika VM aims to implement JavaME CDC/FP, but is not certified as such (certified implementations are required to charge royalties, which is impractical for an open-source project). Consequently, devices which use this implementation are not allowed to claim JavaME CDC compatibility.
The Linux-based Android operating system uses a proprietary version of Java that is similar in intent, but very different in many ways from Java ME.
Emulators
Sun Java Wireless Toolkit (WTK, for short) — is a proprietary Java ME emulator, originally provided by Sun Microsystems, and later by Oracle.
MicroEmulator (MicroEMU, for short) — is an open-source Java ME emulator.
There are other emulators, including emulators provided as part of development kits by phone manufacturers, such as Nokia, Sony-Ericsson, Siemens Mobile, etc.
Connected Limited Device Configuration
The Connected Limited Device Configuration (CLDC) contains a strict subset of the Java-class libraries, and is the minimum amount needed for a Java virtual machine to operate. CLDC is basically used for classifying myriad devices into a fixed configuration.
A configuration provides the most basic set of libraries and virtual-machine features that must be present in each implementation of a J2ME environment. When coupled with one or more profiles, the Connected Limited Device Configuration gives developers a solid Java platform for creating applications for consumer and embedded devices.
The configuration is designed for devices with 160 KB to 512 KB of total memory, including a minimum of 160 KB of ROM and 32 KB of RAM available for the Java platform.
Mobile Information Device Profile
Designed for mobile phones, the Mobile Information Device Profile includes a GUI, and a data storage API, and MIDP 2.0 includes a basic 2D gaming API. Applications written for this profile are called MIDlets.
JSR 271: Mobile Information Device Profile 3 (Final release on Dec 9, 2009) specified the 3rd generation Mobile Information Device Profile (MIDP3), expanding upon the functionality in all areas as well as improving interoperability across devices. A key design goal of MIDP3 is backward compatibility with MIDP2 content.
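As a concrete illustration of the profile, the sketch below shows a minimal MIDlet (the class name and form text are invented for the example, and this targets the MIDP 2.0 APIs): the application manager on the handset drives it through the startApp/pauseApp/destroyApp lifecycle, and the LCDUI classes from the profile render the user interface.

import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

public class HelloMIDlet extends MIDlet {
    protected void startApp() {
        // Build a simple screen and make it the current display.
        Form form = new Form("Hello");
        form.append("Hello from Java ME");
        Display.getDisplay(this).setCurrent(form);
    }

    protected void pauseApp() {
        // Nothing to release in this trivial example.
    }

    protected void destroyApp(boolean unconditional) {
        // Nothing to clean up.
    }
}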
Information Module Profile
The Information Module Profile (IMP) is a profile for embedded, "headless" devices such as vending machines, industrial embedded applications, security systems, and similar devices with either simple or no display and with some limited network connectivity.
Originally introduced by Siemens Mobile and Nokia as JSR-195, IMP 1.0 is a strict subset of MIDP 1.0 except that it does not include user interface APIs — in other words, it does not include support for the Java package javax.microedition.lcdui. JSR-228, also known as IMP-NG, is IMP's next generation that is based on MIDP 2.0, leveraging MIDP 2.0's new security and networking types and APIs, and other APIs such as PushRegistry and platformRequest(), but again it does not include UI APIs, nor the game API.
Connected Device Configuration
The Connected Device Configuration is a subset of Java SE, containing almost all the libraries that are not GUI related. It is richer than CLDC.
Foundation Profile
The Foundation Profile is a Java ME Connected Device Configuration (CDC) profile. This profile is intended to be used by devices requiring a complete implementation of the Java virtual machine up to and including the entire Java Platform, Standard Edition API. Typical implementations will use some subset of that API set depending on the additional profiles supported. This specification was developed under the Java Community Process.
Personal Basis Profile
The Personal Basis Profile extends the Foundation Profile to include lightweight GUI support in the form of an AWT subset. This is the platform that BD-J is built upon.
JSRs (Java Specification Requests)
Foundation
Main extensions
Future
ESR
The ESR consortium is devoted to standards for embedded Java, especially cost-effective standards.
Typical application domains are industrial control, machine-to-machine, medical, e-metering, home automation, consumer devices, and human-to-machine interfaces, among others.
See also
Android (operating system)
iOS
BlackBerry OS
Danger Hiptop
Embedded Java
JavaFX Mobile
Mobile development
Mobile games
Mobile learning
Qualcomm Brew
Smartphone
References
Notes
JSR 232: Mobile Operational Management an advanced OSGi technology based platform for mobile computing
JSR 291: Dynamic Component Support for Java SE symmetric programming model for Java SE to Java ME JSR 232
Bibliography
External links
Sun Developer Network, Java ME
Nokia's Developer Hub Java pages
Nokia S60 Java Runtime blogs
Sony Ericsson Developer World
Motorola Developer Network
J2ME Authoring Tool LMA Users Network
Samsung Mobile Developer's site
Sprint Application Developer's Website
Performance database of Java ME compatible devices
MicroEJ platforms for embedded systems
Book - Mobile Phone Programming using Java ME (J2ME)
Tutorial Master ng, J2ME
Computing platforms
Platform, Micro Edition
Platform, Micro Edition | Java Platform, Micro Edition | [
"Technology"
] | 1,619 | [
"Computing platforms",
"Java platform"
] |
42,871 | https://en.wikipedia.org/wiki/Java%20Platform%2C%20Standard%20Edition | Java Platform, Standard Edition (Java SE) is a computing platform for development and deployment of portable code for desktop and server environments. Java SE was formerly known as Java 2 Platform, Standard Edition (J2SE).
The platform uses the Java programming language and is part of the Java software-platform family. Java SE defines a range of general-purpose APIs—such as Java APIs for the Java Class Library—and also includes the Java Language Specification and the Java Virtual Machine Specification. OpenJDK is the official reference implementation since version 7.
Nomenclature, standards and specifications
The platform was known as Java 2 Platform, Standard Edition or J2SE from version 1.2, until the name was changed to Java Platform, Standard Edition or Java SE in version 1.5. The "SE" is used to distinguish the base platform from the Enterprise Edition (Java EE) and Micro Edition (Java ME) platforms. The "2" was originally intended to emphasize the major changes introduced in version 1.2, but was removed in version 1.6. The naming convention has been changed several times over the Java version history. Starting with J2SE 1.4 (Merlin), Java SE has been developed under the Java Community Process, which produces descriptions of proposed and final specifications for the Java platform called Java Specification Requests (JSR). JSR 59 was the umbrella specification for J2SE 1.4 and JSR 176 specified J2SE 5.0 (Tiger). Java SE 6 (Mustang) was released under JSR 270.
Java Platform, Enterprise Edition (Java EE) is a related specification that includes all the classes in Java SE, plus a number that are more useful to programs that run on servers as opposed to workstations.
Java Platform, Micro Edition (Java ME) is a related specification intended to provide a certified collection of Java APIs for the development of software for small, resource-constrained devices such as cell phones, PDAs and set-top boxes.
The Java Runtime Environment (JRE) and Java Development Kit (JDK) are the actual files downloaded and installed on a computer to run or develop Java programs, respectively.
General purpose packages
java.lang
The Java package java.lang contains fundamental classes and interfaces closely tied to the language and runtime system. This includes the root classes that form the class hierarchy, types tied to the language definition, basic exceptions, math functions, threading, security functions, as well as some information on the underlying native system. This package contains 22 of 32 Error classes provided in JDK 6.
The main classes and interfaces in java.lang are:
Object – the class that is the root of every class hierarchy.
Enum – the base class for enumeration classes (as of J2SE 5.0).
Class – the class that is the root of the Java reflection system.
Throwable – the class that is the base class of the exception class hierarchy.
Error, Exception, and RuntimeException – the base classes for each exception type.
Thread – the class that allows operations on threads.
String – the class for strings and string literals.
StringBuffer and StringBuilder – classes for performing string manipulation (StringBuilder as of J2SE 5.0).
Comparable – the interface that allows generic comparison and ordering of objects (as of J2SE 1.2).
Iterable – the interface that allows generic iteration using the enhanced for loop (as of J2SE 5.0).
ClassLoader, Process, Runtime, SecurityManager, and System – classes that provide "system operations" that manage the dynamic loading of classes, creation of external processes, host environment inquiries such as the time of day, and enforcement of security policies.
Math and StrictMath – classes that provide basic math functions such as sine, cosine, and square root (StrictMath as of J2SE 1.3).
The primitive wrapper classes that encapsulate primitive types as objects.
The basic exception classes thrown for language-level and other common exceptions.
Classes in java.lang are automatically imported into every source file.
java.lang.ref
The java.lang.ref package provides more flexible types of references than are otherwise available, permitting limited interaction between the application and the Java Virtual Machine (JVM) garbage collector. It is an important package, central enough to the language for the language designers to give it a name that starts with "java.lang", but it is somewhat special-purpose and not used by a lot of developers. This package was added in J2SE 1.2.
Java has an expressive system of references and allows for special behavior for garbage collection. A normal reference in Java is known as a "strong reference". The java.lang.ref package defines three other types of references—soft, weak, and phantom references. Each type of reference is designed for a specific use.
A SoftReference can be used to implement a cache. An object that is not reachable by a strong reference (that is, not strongly reachable), but is referenced by a soft reference is called "softly reachable". A softly reachable object may be garbage collected at the discretion of the garbage collector. This generally means that softly reachable objects are only garbage collected when free memory is low—but again, this is at the garbage collector's discretion. Semantically, a soft reference means, "Keep this object when nothing else references it, unless the memory is needed."
A WeakReference is used to implement weak maps. An object that is not strongly or softly reachable, but is referenced by a weak reference is called "weakly reachable". A weakly reachable object is garbage collected in the next collection cycle. This behavior is used in the java.util.WeakHashMap class. A weak map allows the programmer to put key/value pairs in the map and not worry about the objects taking up memory when the key is no longer reachable anywhere else. Another possible application of weak references is the string intern pool. Semantically, a weak reference means "get rid of this object when nothing else references it at the next garbage collection."
A PhantomReference is used to reference objects that have been marked for garbage collection and have been finalized, but have not yet been reclaimed. An object that is not strongly, softly or weakly reachable, but is referenced by a phantom reference is called "phantom reachable." This allows for more flexible cleanup than is possible with the finalization mechanism alone. Semantically, a phantom reference means "this object is no longer needed and has been finalized in preparation for being collected."
Each of these reference types extends the Reference class, which provides the get() method to return a strong reference to the referent object (or null if the reference has been cleared or if the reference type is phantom), and the clear() method to clear the reference.
The java.lang.ref package also defines the ReferenceQueue class, which can be used in each of the applications discussed above to keep track of objects that have changed reference type. When a Reference is created it is optionally registered with a reference queue. The application polls the reference queue to get references that have changed reachability state.
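A small sketch of the ideas above (illustrative only): a WeakReference registered with a ReferenceQueue loses its referent once no strong reference remains, and the queue then reports the cleared reference. Note that System.gc() is only a hint to the JVM, so the exact timing is not guaranteed.

import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class WeakReferenceDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object referent = new Object();
        WeakReference<Object> ref = new WeakReference<>(referent, queue);

        System.out.println("before: " + ref.get()); // the referent is still strongly reachable

        referent = null;  // drop the only strong reference
        System.gc();      // request a collection (a hint, not a command)

        Reference<?> cleared = queue.remove(1000);      // wait up to one second for enqueueing
        System.out.println("after:  " + ref.get());     // usually null once collected
        System.out.println("queued: " + (cleared == ref));
    }
}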
java.lang.reflect
Reflection is a constituent of the Java API that lets Java code examine and "reflect" on Java components at runtime and use the reflected members. Classes in the java.lang.reflect package, along with java.lang.Class and java.lang.Package, accommodate applications such as debuggers, interpreters, object inspectors, class browsers, and services such as object serialization and JavaBeans that need access to either the public members of a target object (based on its runtime class) or the members declared by a given class. This package was added in JDK 1.1.
Reflection is used to instantiate classes and invoke methods using their names, a concept that allows for dynamic programming. Classes, interfaces, methods, fields, and constructors can all be discovered and used at runtime. Reflection is supported by metadata that the JVM has about the program.
Techniques
There are basic techniques involved in reflection:
Discovery – this involves taking an object or class and discovering the members, superclasses, implemented interfaces, and then possibly using the discovered elements.
Use by name – involves starting with the symbolic name of an element and using the named element.
Discovery
Discovery typically starts with an object and calling the getClass() method to get the object's Class. The Class object has several methods for discovering the contents of the class, for example:
getMethods() – returns an array of Method objects representing all the public methods of the class or interface
getConstructors() – returns an array of Constructor objects representing all the public constructors of the class
getFields() – returns an array of Field objects representing all the public fields of the class or interface
getClasses() – returns an array of Class objects representing all the public classes and interfaces that are members (e.g. inner classes) of the class or interface
getSuperclass() – returns the Class object representing the superclass of the class or interface (null is returned for interfaces)
getInterfaces() – returns an array of Class objects representing all the interfaces that are implemented by the class or interface
Use by name
The Class object can be obtained either through discovery, by using the class literal (e.g. MyClass.class) or by using the name of the class (e.g. Class.forName("mypackage.MyClass")). With a Class object, member Method, Constructor, or Field objects can be obtained using the symbolic name of the member. For example:
getMethod(String methodName, Class... parameters) – returns the Method object representing the public method with the name "methodName" of the class or interface that accepts the parameters specified by the Class... parameters.
getConstructor(Class... parameters) – returns the Constructor object representing the public constructor of the class that accepts the parameters specified by the Class... parameters.
getField(String fieldName) – returns the Field object representing the public field with the name "fieldName" of the class or interface.
Method, Constructor, and Field objects can be used to dynamically access the represented member of the class. For example:
Field.get(Object object) – returns an Object containing the value of the field from the instance of the object passed to get(). (If the Field object represents a static field then the Object parameter is ignored and may be null.)
Method.invoke(Object object, Object... parameters) – returns an Object containing the result of invoking the method for the instance of the first Object parameter passed to invoke(). The remaining Object... parameters are passed to the method. (If the Method object represents a static method then the first Object parameter is ignored and may be null.)
Constructor.newInstance(Object... parameters) – returns the new Object instance from invoking the constructor. The Object... parameters are passed to the constructor. (Note that the parameterless constructor for a class can also be invoked by calling Class.newInstance().)
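Putting discovery and use-by-name together, the short sketch below (illustrative only; it simply reflects on java.lang.String) looks up a public method by its symbolic name and invokes it dynamically.

import java.lang.reflect.Method;

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // Obtain the Class object by name, as described above.
        Class<?> stringClass = Class.forName("java.lang.String");

        // Look up the public method String.substring(int) by its symbolic name.
        Method substring = stringClass.getMethod("substring", int.class);

        // Invoke it on an instance; the result is returned as an Object.
        Object result = substring.invoke("reflection", 2);
        System.out.println(result); // prints "flection"
    }
}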
Arrays and proxies
The java.lang.reflect package also provides an Array class that contains static methods for creating and manipulating array objects, and since J2SE 1.3, a Proxy class that supports dynamic creation of proxy classes that implement specified interfaces.
The implementation of a Proxy class is provided by a supplied object that implements the InvocationHandler interface. The InvocationHandler's invoke() method is called for each method invoked on the proxy object—the first parameter is the proxy object, the second parameter is the Method object representing the method from the interface implemented by the proxy, and the third parameter is the array of parameters passed to the interface method. The invoke() method returns an Object result that contains the result returned to the code that called the proxy interface method.
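A minimal sketch of a dynamic proxy (illustrative only; it uses Java 8 lambda syntax for the InvocationHandler and simply wraps an ArrayList): every call made through the proxied List interface is routed through the handler before being forwarded to the real object.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class ProxyDemo {
    public static void main(String[] args) {
        List<String> target = new ArrayList<>();

        // The handler receives every call made through the proxy's interface.
        InvocationHandler handler = (proxyInstance, method, methodArgs) -> {
            System.out.println("calling " + method.getName());
            return method.invoke(target, methodArgs);
        };

        @SuppressWarnings("unchecked")
        List<String> proxy = (List<String>) Proxy.newProxyInstance(
                ProxyDemo.class.getClassLoader(),
                new Class<?>[] { List.class },
                handler);

        proxy.add("hello");               // prints "calling add"
        System.out.println(proxy.size()); // prints "calling size" and then 1
    }
}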
java.io
The java.io package contains classes that support input and output. The classes in the package are primarily stream-oriented; however, a class for random access files is also provided. The central classes in the package are InputStream and OutputStream, which are abstract base classes for reading from and writing to byte streams, respectively. The related classes Reader and Writer are abstract base classes for reading from and writing to character streams, respectively. The package also has a few miscellaneous classes to support interactions with the host file system.
Streams
The stream classes follow the decorator pattern by extending the base subclass to add features to the stream classes. Subclasses of the base stream classes are typically named for one of the following attributes:
the source/destination of the stream data
the type of data written to/read from the stream
additional processing or filtering performed on the stream data
The stream subclasses are named using the naming pattern XxxStreamType where Xxx is the name describing the feature and StreamType is one of InputStream, OutputStream, Reader, or Writer.
The following table shows the sources/destinations supported directly by the java.io package:
Other standard library packages provide stream implementations for other destinations, such as the InputStream returned by the method or the Java EE class.
Data type handling and processing or filtering of stream data is accomplished through stream filters. The filter classes all accept another compatible stream object as a parameter to the constructor and decorate the enclosed stream with additional features. Filters are created by extending one of the base filter classes FilterInputStream, FilterOutputStream, FilterReader, or FilterWriter.
The Reader and Writer classes are really just byte streams with additional processing performed on the data stream to convert the bytes to characters. They use the default character encoding for the platform, which as of J2SE 5.0 is represented by the Charset returned by the static Charset.defaultCharset() method. The InputStreamReader class converts an InputStream to a Reader and the OutputStreamWriter class converts an OutputStream to a Writer. Both these classes have constructors that support specifying the character encoding to use. If no encoding is specified, the program uses the default encoding for the platform.
The following table shows the other processes and filters that the java.io package directly supports. All these classes extend the corresponding Filter class.
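The decorator pattern described above can be sketched in a few lines (the file name and the explicit UTF-8 charset are assumptions made for the example): each constructor wraps the previous stream, adding character decoding and then buffering on top of a raw byte stream.

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StreamDecorators {
    public static void main(String[] args) throws IOException {
        // FileInputStream: raw bytes from the file system.
        // InputStreamReader: decorates the byte stream, decoding bytes into characters.
        // BufferedReader: decorates the reader, adding buffering and readLine().
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(
                        new FileInputStream("example.txt"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}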
Random access
The RandomAccessFile class supports random access reading and writing of files. The class uses a file pointer that represents a byte-offset within the file for the next read or write operation. The file pointer is moved implicitly by reading or writing and explicitly by calling the seek(long) or skipBytes(int) methods. The current position of the file pointer is returned by the getFilePointer() method.
File system
The File class represents a file or directory path in a file system. File objects support the creation, deletion and renaming of files and directories and the manipulation of file attributes such as read-only and last modified timestamp. File objects that represent directories can be used to get a list of all the contained files and directories.
The FileDescriptor class is a file descriptor that represents a source or sink (destination) of bytes. Typically this is a file, but can also be a console or network socket. FileDescriptor objects are used to create File streams. They are obtained from File streams and java.net sockets and datagram sockets.
java.nio
In J2SE 1.4, the java.nio package (NIO or Non-blocking I/O) was added to support memory-mapped I/O, facilitating I/O operations closer to the underlying hardware with sometimes dramatically better performance. The java.nio package provides support for a number of buffer types. The java.nio.charset subpackage provides support for different character encodings for character data. The java.nio.channels subpackage provides support for channels, which represent connections to entities that are capable of performing I/O operations, such as files and sockets. The java.nio.channels package also provides support for fine-grained locking of files.
java.math
The java.math package supports multiprecision arithmetic (including modular arithmetic operations) and provides multiprecision prime number generators used for cryptographic key generation. The main classes of the package are:
BigDecimal – provides arbitrary-precision signed decimal numbers. BigDecimal gives the user control over rounding behavior through RoundingMode.
BigInteger – provides arbitrary-precision integers. Operations on BigInteger do not overflow or lose precision. In addition to standard arithmetic operations, it provides modular arithmetic, GCD calculation, primality testing, prime number generation, bit manipulation, and other miscellaneous operations.
MathContext – encapsulates the context settings that describe certain rules for numerical operators.
RoundingMode – an enumeration that provides eight rounding behaviors.
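A brief illustration of these classes (the values are arbitrary and chosen only for the example): a BigDecimal division controlled by a MathContext and RoundingMode, followed by a couple of the BigInteger operations mentioned above.

import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.MathContext;
import java.math.RoundingMode;

public class BigNumbers {
    public static void main(String[] args) {
        // BigDecimal division with an explicit precision and rounding behavior;
        // without a MathContext, 1/3 would throw ArithmeticException (non-terminating decimal).
        MathContext mc = new MathContext(10, RoundingMode.HALF_UP);
        BigDecimal third = BigDecimal.ONE.divide(new BigDecimal(3), mc);
        System.out.println(third); // 0.3333333333

        // BigInteger: arbitrary precision, modular arithmetic, and primality testing.
        BigInteger big = BigInteger.valueOf(2).pow(127).subtract(BigInteger.ONE);
        System.out.println(big.isProbablePrime(20));           // true (2^127 - 1 is a Mersenne prime)
        System.out.println(big.mod(BigInteger.valueOf(1000))); // last three decimal digits
    }
}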
java.net
The java.net package provides special IO routines for networks, allowing HTTP requests, as well as other common transactions.
java.text
The java.text package implements parsing routines for strings and supports various human-readable languages and locale-specific parsing.
java.util
Data structures that aggregate objects are the focus of the java.util package. Included in the package is the Collections API, an organized data structure hierarchy influenced heavily by the design patterns considerations.
Special purpose packages
java.applet
Created to support Java applet creation, the java.applet package lets applications be downloaded over a network and run within a guarded sandbox. Security restrictions are easily imposed on the sandbox. A developer, for example, may apply a digital signature to an applet, thereby labeling it as safe. Doing so allows the user to grant the applet permission to perform restricted operations (such as accessing the local hard drive), and removes some or all the sandbox restrictions. Digital certificates are issued by certificate authorities.
java.beans
Included in the java.beans package are various classes for developing and manipulating beans, reusable components defined by the JavaBeans architecture. The architecture provides mechanisms for manipulating properties of components and firing events when those properties change.
The APIs in java.beans are intended for use by a bean editing tool, in which beans can be combined, customized, and manipulated. One type of bean editor is a GUI designer in an integrated development environment.
java.awt
The java.awt package, or Abstract Window Toolkit, provides access to a basic set of GUI widgets based on the underlying native platform's widget set, the core of the GUI event subsystem, and the interface between the native windowing system and the Java application. It also provides several basic layout managers, a datatransfer package for use with the Clipboard and Drag and Drop, the interface to input devices such as mice and keyboards, as well as access to the system tray on supporting systems. This package, along with javax.swing contains the largest number of enums (7 in all) in JDK 6.
java.rmi
The java.rmi package provides Java remote method invocation to support remote procedure calls between two Java applications running in different JVMs.
java.security
Support for security, including the message digest algorithm, is included in the java.security package.
java.sql
An implementation of the JDBC API (used to access SQL databases) is grouped into the java.sql package.
javax.rmi
The javax.rmi package provided support for the remote communication between applications, using the RMI over IIOP protocol. This protocol combines RMI and CORBA features.
javax.swing
Swing is a collection of routines that build on java.awt to provide a platform-independent widget toolkit. The javax.swing package uses the 2D drawing routines to render the user interface components instead of relying on the underlying native operating system GUI support.
This package contains the largest number of classes (133 in all) in JDK 6. This package, along with java.awt also contains the largest number of enums (7 in all) in JDK 6. It supports pluggable looks and feels (PLAFs) so that widgets in the GUI can imitate those from the underlying native system. Design patterns permeate the system, especially a modification of the model–view–controller pattern, which loosens the coupling between function and appearance. One inconsistency is that (as of J2SE 1.3) fonts are drawn by the underlying native system, and not by Java, limiting text portability. Workarounds, such as using bitmap fonts, do exist. In general, "layouts" are used and keep elements within an aesthetically consistent GUI across platforms.
javax.swing.text.html.parser
The javax.swing.text.html.parser package provides the error-tolerant HTML parser that is used for writing various web browsers and web bots.
javax.xml.bind.annotation
The javax.xml.bind.annotation package contained the largest number of Annotation Types (30 in all) in JDK 6. It defines annotations for customizing Java program elements to XML Schema mapping.
OMG packages
org.omg.CORBA
The org.omg.CORBA package provided support for the remote communication between applications using the General Inter-ORB Protocol and supports other features of the common object request broker architecture. Same as RMI and RMI-IIOP, this package is for calling remote methods of objects on other virtual machines (usually via network).
This package contained the largest number of Exception classes (45 in all) in JDK 6. Of all the communication possibilities, CORBA is portable between various languages; however, this portability comes with more complexity.
These packages were deprecated in Java 9 and removed from Java 11.
org.omg.PortableInterceptor
The org.omg.PortableInterceptor package contained the largest number of interfaces (39 in all) in JDK 6. It provides a mechanism to register ORB hooks through which ORB services intercept the normal flow of execution of the ORB.
Security
Several critical security vulnerabilities have been reported. Security alerts from Oracle announce critical security-related patches to Java SE.
References
External links
Oracle Technology Network's Java SE
Java SE API documentation
JSR 270 (Java SE 6)
1.8
1.7
1.6
Computing platforms
Platform, Standard Edition
Platform, Standard Edition | Java Platform, Standard Edition | [
"Technology"
] | 4,327 | [
"Computing platforms",
"Java platform"
] |
42,882 | https://en.wikipedia.org/wiki/Cosmogony | Cosmogony is any model concerning the origin of the cosmos or the universe.
Overview
Scientific theories
In astronomy, cosmogony is the study of the origin of particular astrophysical objects or systems, and is most commonly used in reference to the origin of the universe, the Solar System, or the Earth–Moon system. The prevalent cosmological model of the early development of the universe is the Big Bang theory.
Sean M. Carroll, who specializes in theoretical cosmology and field theory, explains two competing explanations for the origins of the singularity, which is the center of a space in which a characteristic is limitless (one example is the singularity of a black hole, where gravity is the characteristic that becomes infinite).
It is generally thought that the universe began at a point of singularity, but among modern cosmologists and physicists a singularity usually represents a lack of understanding, and in the case of cosmology and cosmogony it requires a theory of quantum gravity to understand. When the universe started to expand, what is colloquially known as the Big Bang occurred, which evidently began the universe. The other explanation, held by proponents such as Stephen Hawking, asserts that time did not exist before it emerged along with the universe. This assertion implies that the universe does not have a beginning, as time did not exist "prior" to the universe. Hence, it is unclear whether properties such as space or time emerged with the singularity and the known universe.
Despite the research, there is currently no theoretical model that explains the earliest moments of the universe's existence (during the Planck epoch) due to a lack of a testable theory of quantum gravity. Nevertheless, researchers of string theory, its extensions (such as M-theory), and of loop quantum cosmology, like Barton Zwiebach and Washington Taylor, have proposed solutions to assist in the explanation of the universe's earliest moments. Cosmogonists have only tentative theories for the early stages of the universe and its beginning. The proposed theoretical scenarios include string theory, M-theory, the Hartle–Hawking initial state, emergent Universe, string landscape, cosmic inflation, the Big Bang, and the ekpyrotic universe. Some of these proposed scenarios, like the string theory, are compatible, whereas others are not.
Mythology
In mythology, creation or cosmogonic myths are narratives describing the beginning of the universe or cosmos.
Some methods of the creation of the universe in mythology include:
the will or action of a supreme being or beings,
the process of metamorphosis,
the copulation of female and male deities,
from chaos,
or via a cosmic egg.
Creation myths may be etiological, attempting to provide explanations for the origin of the universe. For instance, Eridu Genesis, the oldest known creation myth, contains an account of the creation of the world in which the universe was created out of a primeval sea (Abzu). Creation myths vary, but they may share similar deities or symbols. For instance, the ruler of the gods in Greek mythology, Zeus, is similar to the ruler of the gods in Roman mythology, Jupiter. Another example is the ruler of the gods in Tagalog mythology, Bathala, who is similar to various rulers of certain pantheons within Philippine mythology such as the Bisaya's Kaptan.
Compared with cosmology
In the humanities, the distinction between cosmogony and cosmology is blurred. For example, in theology, the cosmological argument for the existence of God (pre-cosmic cosmogonic bearer of personhood) is an appeal to ideas concerning the origin of the universe and is thus cosmogonical. Some religious cosmogonies have an impersonal first cause (for example Taoism).
However, in astronomy, cosmogony can be distinguished from cosmology, which studies the universe and its existence, but does not necessarily inquire into its origins. There is therefore a scientific distinction between cosmological and cosmogonical ideas. Physical cosmology is the science that attempts to explain all observations relevant to the development and characteristics of the universe on its largest scale. Some questions regarding the behaviour of the universe have been described by some physicists and cosmologists as being extra-scientific or metaphysical. Attempted solutions to such questions may include the extrapolation of scientific theories to untested regimes (such as the Planck epoch), or the inclusion of philosophical or religious ideas.
See also
Why there is anything at all
References
External links
Creation myths
Greek words and phrases
Natural philosophy
Origins
Physical cosmology
Concepts in astronomy | Cosmogony | [
"Physics",
"Astronomy"
] | 950 | [
"Cosmogony",
"Astronomical sub-disciplines",
"Concepts in astronomy",
"Theoretical physics",
"Astrophysics",
"Creation myths",
"Physical cosmology"
] |
42,889 | https://en.wikipedia.org/wiki/Fusor | A fusor is a device that uses an electric field to heat ions to a temperature at which they undergo nuclear fusion. The machine induces a potential difference between two metal cages, inside a vacuum. Positive ions fall down this voltage drop, building up speed. If they collide in the center, they can fuse. This is one kind of an inertial electrostatic confinement device – a branch of fusion research.
A Farnsworth–Hirsch fusor is the most common type of fusor. This design came from work by Philo T. Farnsworth in 1964 and Robert L. Hirsch in 1967. A variant type of fusor had been proposed previously by William Elmore, James L. Tuck, and Ken Watson at the Los Alamos National Laboratory though they never built the machine.
Fusors have been built by various institutions. These include academic institutions such as the University of Wisconsin–Madison and the Massachusetts Institute of Technology, and government entities such as the Atomic Energy Organization of Iran and the Turkish Atomic Energy Authority. Fusors have also been developed commercially, as sources for neutrons by DaimlerChrysler Aerospace and as a method for generating medical isotopes. Fusors have also become very popular with hobbyists and amateurs. A growing number of amateurs have performed nuclear fusion using simple fusor machines. However, fusors are not considered a viable concept for large-scale energy production by scientists.
Mechanism
Underlying physics
Fusion takes place when nuclei approach to a distance where the nuclear force can pull them together into a single larger nucleus. Opposing this close approach are the positive charges in the nuclei, which force them apart due to the electrostatic force. In order to produce fusion events, the nuclei must have enough initial energy to overcome this Coulomb barrier. As the nuclear force increases with the number of nucleons (protons and neutrons), while the electromagnetic force increases with the number of protons only, the easiest atoms to fuse are isotopes of hydrogen: deuterium, with one neutron, and tritium, with two. With hydrogen fuels, about 3 to 10 keV is needed to allow the reaction to take place.
Traditional approaches to fusion power have generally attempted to heat the fuel to temperatures where the Maxwell–Boltzmann distribution of the resulting energies is high enough that some of the particles in the long tail have the required energy. "High enough" here means that the rate of fusion reactions produces enough energy to offset energy losses to the environment, thus heating the surrounding fuel to the same temperatures and producing a self-sustaining reaction known as ignition. Calculations show this takes place at about 50 million kelvin (K), although higher values on the order of 100 million K are desirable in practical machines. Due to the extremely high temperatures, fusion reactions are also referred to as thermonuclear.
When atoms are heated to temperatures corresponding to thousands of degrees, the electrons become increasingly free of their nucleus. This leads to a gas-like state of matter known as a plasma, consisting of free nuclei known as ions, and their former electrons. As a plasma consists of free-moving charges, it can be controlled using magnetic and electrical fields. Fusion devices use this capability to retain the fuel at millions of degrees.
Fusor concept
The fusor is part of a broader class of devices that attempt to give the fuel fusion-relevant energies by directly accelerating the ions toward each other. In the case of the fusor, this is accomplished with electrostatic forces. For every volt that an ion of ±1 charge is accelerated across, it gains 1 electronvolt in energy. To reach the required ~10 keV, a voltage of 10 kV is required, applied to both particles. For comparison, the electron gun in a typical television cathode-ray tube operates on the order of 3 to 6 kV, so the complexity of such a device is fairly limited. For a variety of reasons, energies on the order of 15 keV are used. This corresponds to the average kinetic energy at a temperature of approximately 174 million kelvin, a typical magnetic confinement fusion plasma temperature.
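The energy–temperature correspondence used above can be sketched in a few lines of Python. This is an illustrative calculation only; it assumes the common plasma-physics convention of equating the particle energy E with k_B·T.

```python
# Minimal sketch: convert an ion energy in keV to the temperature whose
# characteristic thermal energy k_B * T equals it.
BOLTZMANN_EV_PER_K = 8.617333262e-5  # Boltzmann constant, eV per kelvin

def energy_to_temperature_kelvin(energy_keV: float) -> float:
    """Temperature T such that k_B * T equals the given energy."""
    return energy_keV * 1.0e3 / BOLTZMANN_EV_PER_K

print(f"{energy_to_temperature_kelvin(15):.3g} K")  # ~1.74e8 K, i.e. ~174 million K
print(f"{energy_to_temperature_kelvin(4):.3g} K")   # ~4.6e7 K, i.e. ~46 million K
```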
The problem with this colliding beam fusion approach, in general, is that the ions will most likely never hit each other no matter how precisely aimed. Even the most minor misalignment will cause the particles to scatter and thus fail to fuse. It is simple to demonstrate that the probability of scattering is many orders of magnitude higher than the probability of fusion, meaning that the vast majority of the energy supplied to the ions will go to waste and those fusion reactions that do occur cannot make up for these losses. To be energy positive, a fusion device must recycle these ions back into the fuel mass so that they have thousands or millions of such chances to fuse, and their energy must be retained as much as possible during this period.
The fusor attempts to meet this requirement through the spherical arrangement of its accelerator grid system. Ions that fail to fuse pass through the center of the device and back into the accelerator on the far side, where they are accelerated back into the center again. There is no energy lost in this action, and in theory, assuming infinitely thin grid wires, the ions can circulate forever with no additional energy needed. Even those that scatter will simply take on a new trajectory, exit the grid at some new point, and accelerate back into the center again, providing the circulation that is required for a fusion event to eventually take place.
Losses
It is important to consider the actual startup sequence of a fusor to understand the resulting operation. Normally the system is pumped down to a vacuum and then a small amount of gas is placed inside the vacuum chamber. This gas will spread out to fill the volume. When voltage is applied to the electrodes, the atoms between them will experience a field that will cause them to ionize and begin accelerating inward. As the atoms are randomly distributed to begin with, the amount of energy they will gain differs; atoms initially near the anode will gain some large portion of the applied voltage, say 15 keV. Those initially near the cathode will gain much less energy, possibly far too little to undergo fusion with their counterparts on the far side of the central reaction area.
The fuel atoms inside the inner area during the startup period are not ionized. The accelerated ions scatter off these and lose energy, while ionizing the formerly cold atoms. This process, and the scatterings off other ions, cause the ion energies to become randomly distributed, so the fuel rapidly loses its initially monoenergetic character. For this reason, the energy needed in a fusor system is higher than in one where the fuel is heated by some other method, as some will be "lost" during startup.
Real electrodes are not infinitely thin, and the potential for scattering off the wires or even capture of the ions within the electrodes is a significant issue that causes high conduction losses. These losses can be at least five orders of magnitude higher than the energy released from the fusion reaction, even when the fusor is in star mode, which minimizes these collisions.
There are numerous other loss mechanisms as well. These include charge exchange between high-energy ions and low-energy neutral particles, which causes the ion to capture the electron, become electrically neutral, and then leave the fusor as it is no longer accelerated back into the chamber. This leaves behind a newly ionized atom of lower energy and thus cools the plasma. Scatterings may also increase the energy of an ion which allows it to move past the anode and escape, in this example anything above 15 keV.
Additionally, scatterings of the ions, and especially off impurities left in the chamber, lead to significant bremsstrahlung, creating X-rays that carry energy out of the fuel. This effect grows with particle energy, meaning the problem becomes more pronounced as the system approaches fusion-relevant operating conditions.
As a result of these loss mechanisms, no fusor has ever come close to break-even energy output and it appears it is unable to ever do so.
Common sources of the high voltage are ZVS flyback HV supplies and neon-sign transformers. A fusor can also be regarded as an electrostatic particle accelerator.
History
The fusor was originally conceived by Philo T. Farnsworth, better known for his pioneering work in television. In the early 1930s, he investigated a number of vacuum tube designs for use in television, and found one that led to an interesting effect. In this design, which he called the "multipactor", electrons moving from one electrode to another were stopped in mid-flight with the proper application of a high-frequency magnetic field. The charge would then accumulate in the center of the tube, leading to high amplification. Unfortunately it also led to high erosion on the electrodes when the electrons eventually hit them, and today the multipactor effect is generally considered a problem to be avoided.
What particularly interested Farnsworth about the device was its ability to focus electrons at a particular point. One of the biggest problems in fusion research is to keep the hot fuel from hitting the walls of the container. If this is allowed to happen, the fuel cannot be kept hot enough for the fusion reaction to occur. Farnsworth reasoned that he could build an electrostatic plasma confinement system in which the "wall" fields of the reactor were electrons or ions being held in place by the multipactor. Fuel could then be injected through the wall, and once inside it would be unable to escape. He called this concept a virtual electrode, and the system as a whole the fusor.
Design
Farnsworth's original fusor designs were based on cylindrical arrangements of electrodes, like the original multipactors. Fuel was ionized and then fired from small accelerators through holes in the outer (physical) electrodes. Once through the hole they were accelerated towards the inner reaction area at high velocity. Electrostatic pressure from the positively charged electrodes would keep the fuel as a whole off the walls of the chamber, and impacts from new ions would keep the hottest plasma in the center. He referred to this as inertial electrostatic confinement, a term that continues to be used to this day. The voltage between the electrodes needs to be at least 25 kV for fusion to occur.
Work at Farnsworth Television labs
All of this work had taken place at the Farnsworth Television labs, which had been purchased in 1949 by ITT Corporation, as part of its plan to become the next RCA. However, a fusion research project was not regarded as immediately profitable. In 1965, the board of directors started asking Harold Geneen to sell off the Farnsworth division, but he had his 1966 budget approved with funding until the middle of 1967. Further funding was refused, and that ended ITT's experiments with fusion.
Things changed dramatically with the arrival of Robert Hirsch, and the introduction of the modified Hirsch–Meeks fusor patent. New fusors based on Hirsch's design were first constructed between 1964 and 1967. Hirsch published his design in a paper in 1967. His design included ion beams to shoot ions into the vacuum chamber.
The team then turned to the AEC, then in charge of fusion research funding, and provided them with a demonstration device mounted on a serving cart that produced more fusion than any existing "classical" device. The observers were startled, but the timing was bad; Hirsch himself had recently revealed the great progress being made by the Soviets using the tokamak. In response to this surprising development, the AEC decided to concentrate funding on large tokamak projects, and reduce backing for alternative concepts.
Recent developments
George H. Miley at the University of Illinois reexamined the fusor and re-introduced it into the field. A low but steady interest in the fusor has persisted since. An important development was the successful commercial introduction of a fusor-based neutron generator. From 2006 until his death in 2007, Robert W. Bussard gave talks on a reactor similar in design to the fusor, now called the polywell, that he stated would be capable of useful power generation. Most recently, the fusor has gained popularity among amateurs, who choose them as home projects due to their relatively low space, money, and power requirements. An online community of "fusioneers", The Open Source Fusor Research Consortium, or Fusor.net, is dedicated to reporting developments in the world of fusors and aiding other amateurs in their projects. The site includes forums, articles and papers done on the fusor, including Farnsworth's original patent, as well as Hirsch's patent of his version of the invention.
Fusion in fusors
Basic fusion
Nuclear fusion refers to reactions in which lighter nuclei are combined to become heavier nuclei. This process changes mass into energy which in turn may be captured to provide fusion power. Many types of atoms can be fused. The easiest to fuse are deuterium and tritium. For fusion to occur the ions must be at a temperature of at least 4 keV (kiloelectronvolts), or about 45 million kelvins. The second easiest reaction is fusing deuterium with itself. Because this gas is cheaper, it is the fuel commonly used by amateurs. The ease of doing a fusion reaction is measured by its cross section.
Net power
At such conditions, the atoms are ionized and form a plasma. The energy generated by fusion inside a hot plasma cloud can be found with the following equation.
where
is the fusion power density (energy per time per volume),
n is the number density of species A or B (particles per volume),
is the product of the collision cross-section σ (which depends on the relative velocity) and the relative velocity v of the two species, averaged over all the particle velocities in the system,
is the energy released by a single fusion reaction.
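A standard form of the volumetric fusion power, consistent with the quantities listed above, is shown below; the symbol names are illustrative, and for a single-species fuel such as D–D the product of the two densities is conventionally replaced by n²/2.

```latex
P_{\text{fusion}} = n_A \, n_B \, \langle \sigma v \rangle \, E_{\text{fusion}}
```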
This equation shows that energy varies with the temperature, density, speed of collision, and fuel used. To reach net power, fusion reactions have to occur fast enough to make up for energy losses. Any power plant using fusion must hold in this hot cloud. Plasma clouds lose energy through conduction and radiation. Conduction is when ions, electrons or neutrals touch a surface and leak out; energy is lost with the particle. Radiation is when energy leaves the cloud as light, and it increases as the temperature rises. To get net power from fusion, it is necessary to overcome these losses. This leads to an equation for power output.
where:
η is the efficiency,
is the power of conduction losses as energy-laden mass leaves,
is the power of radiation losses as energy leaves as light,
is the net power from fusion.
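A standard power-balance form consistent with the quantities listed above is the following (symbol names here are illustrative):

```latex
P_{\text{net}} = \eta \left( P_{\text{fusion}} - P_{\text{conduction}} - P_{\text{radiation}} \right)
```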
John Lawson used this equation to estimate some conditions for net power based on a Maxwellian cloud. This became the Lawson criterion. Fusors typically suffer from conduction losses due to the wire cage being in the path of the recirculating plasma.
In fusors
In the original fusor design, several small particle accelerators, essentially TV tubes with the ends removed, inject ions at a relatively low voltage into a vacuum chamber. In the Hirsch version of the fusor, the ions are produced by ionizing a dilute gas in the chamber. In either version there are two concentric spherical electrodes, the inner one being charged negatively with respect to the outer one (to about 80 kV). Once the ions enter the region between the electrodes, they are accelerated towards the center.
In the fusor, the ions are accelerated to several keV by the electrodes, so heating as such is not necessary (as long as the ions fuse before losing their energy by any process). Whereas 45 megakelvins is a very high temperature by any standard, the corresponding voltage is only 4 kV, a level commonly found in such devices as neon signs and CRT televisions. To the extent that the ions remain at their initial energy, the energy can be tuned to take advantage of the peak of the reaction cross section or to avoid disadvantageous (for example neutron-producing) reactions that might occur at higher energies.
Various attempts have been made at increasing the deuterium ionization rate, including heaters within "ion guns" (similar to the "electron gun" which forms the basis of old-style television display tubes), as well as magnetron-type devices (the power sources for microwave ovens), which can enhance ion formation using high-voltage electromagnetic fields. Any method which increases ion density (within limits which preserve the ion mean free path) or ion energy can be expected to enhance the fusion yield, typically measured as the number of neutrons produced per second.
The ease with which the ion energy can be increased appears to be particularly useful when "high temperature" fusion reactions are considered, such as proton-boron fusion, which has plentiful fuel, requires no radioactive tritium, and produces no neutrons in the primary reaction.
Common considerations
Modes of operation
Fusors have at least two modes of operation (possibly more): star mode and halo mode. Halo mode is characterized by a broad symmetric glow, with one or two electron beams exiting the structure. There is little fusion. The halo mode occurs in higher pressure tanks, and as the vacuum improves, the device transitions to star mode. Star mode appears as bright beams of light emanating from the device center.
Power density
Because the electric field made by the cages is negative, it cannot simultaneously trap both positively charged ions and negative electrons. Hence, there must be some regions of charge accumulation, which will result in an upper limit on the achievable density. This could place an upper limit on the machine's power density, which may keep it too low for power production.
Thermalization of the ion velocities
When they first fall into the center of the fusor, the ions will all have the same energy, but the velocity distribution will rapidly approach a Maxwell–Boltzmann distribution. This would occur through simple Coulomb collisions in a matter of milliseconds, but beam-beam instabilities will occur orders of magnitude faster still. In comparison, any given ion will require a few minutes before undergoing a fusion reaction, so that the monoenergetic picture of the fusor, at least for power production, is not appropriate. One consequence of the thermalization is that some of the ions will gain enough energy to leave the potential well, taking their energy with them, without having undergone a fusion reaction.
Electrodes
There are a number of unsolved challenges with the electrodes in a fusor power system. To begin with, the electrodes cannot influence the potential within themselves, so it would seem at first glance that the fusion plasma would be in more or less direct contact with the inner electrode, resulting in contamination of the plasma and destruction of the electrode. However, the majority of the fusion tends to occur in microchannels formed in areas of minimum electric potential, seen as visible "rays" penetrating the core. These form because the forces within the region correspond to roughly stable "orbits". Approximately 40% of the high energy ions in a typical grid operating in star mode may be within these microchannels. Nonetheless, grid collisions remain the primary energy loss mechanism for Farnsworth–Hirsch fusors. Complicating issues is the challenge in cooling the central electrode; any fusor producing enough power to run a power plant seems destined to also destroy its inner electrode. As one fundamental limitation, any method which produces a neutron flux that is captured to heat a working fluid will also bombard its electrodes with that flux, heating them as well.
Attempts to resolve these problems include Bussard's Polywell system, D. C. Barnes' modified Penning trap approach, and the University of Illinois's fusor which retains grids but attempts to more tightly focus the ions into microchannels to attempt to avoid losses. While all three are Inertial electrostatic confinement (IEC) devices, only the last is actually a "fusor".
Radiation
Charged particles will radiate energy as light when they change velocity. This loss rate can be estimated for nonrelativistic particles using the Larmor formula. Inside a fusor there is a cloud of ions and electrons. These particles will accelerate or decelerate as they move about. These changes in speed make the cloud lose energy as light. The radiation from a fusor can (at least) be in the visible, ultraviolet and X-ray spectrum, depending on the type of fusor used. These changes in speed can be due to electrostatic interactions between particles (ion to ion, ion to electron, electron to electron). This is referred to as bremsstrahlung radiation, and is common in fusors. Changes in speed can also be due to interactions between the particle and the electric field. Since there are no magnetic fields, fusors emit no cyclotron radiation at slow speeds, or synchrotron radiation at high speeds.
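For nonrelativistic particles, the Larmor formula mentioned above takes the standard form for a particle of charge q undergoing acceleration a:

```latex
P = \frac{q^{2} a^{2}}{6 \pi \varepsilon_{0} c^{3}}
```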
In Fundamental limitations on plasma fusion systems not in thermodynamic equilibrium, Todd Rider argues that a quasineutral isotropic plasma will lose energy due to Bremsstrahlung at a rate prohibitive for any fuel other than D-T (or possibly D-D or D-He3). This paper is not applicable to IEC fusion, as a quasineutral plasma cannot be contained by an electric field, which is a fundamental part of IEC fusion. However, in an earlier paper, "A general critique of inertial-electrostatic confinement fusion systems", Rider addresses the common IEC devices directly, including the fusor. In the case of the fusor the electrons are generally separated from the mass of the fuel isolated near the electrodes, which limits the loss rate. However, Rider demonstrates that practical fusors operate in a range of modes that either lead to significant electron mixing and losses, or alternately lower power densities. This appears to be a sort of catch-22 that limits the output of any fusor-like system.
Safety
There are several key safety considerations involved in the building and operation of a fusor. First, there is the high voltage involved. Second, there are the X-ray and neutron emissions that are possible. There are also publicity and misinformation considerations to address with local and regulatory authorities.
Commercial applications
Neutron source
The fusor has been demonstrated as a viable neutron source. Typical fusors cannot reach fluxes as high as nuclear reactor or particle accelerator sources, but are sufficient for many uses. Importantly, the neutron generator easily sits on a benchtop, and can be turned off at the flick of a switch. A commercial fusor was developed as a non-core business within DaimlerChrysler Aerospace – Space Infrastructure, Bremen between 1996 and early 2001. After the project was effectively ended, the former project manager established a company which is called NSD-Fusion. To date, the highest neutron flux achieved by a fusor-like device has been 3 × 10¹¹ neutrons per second with the deuterium-deuterium fusion reaction.
Medical isotopes
Commercial startups have used the neutron fluxes generated by fusors to generate Mo-99, a precursor to Technetium-99m, an isotope used for medical care.
Patents
Bennett, W. H., , February 1964. (Thermonuclear power).
P. T. Farnsworth, , June 1966 (Electric discharge — Nuclear interaction).
P. T. Farnsworth, . June 1968 (Method and apparatus).
Hirsch, Robert, . September 1970 (Apparatus).
Hirsch, Robert, . September 1970 (Generating apparatus — Hirsch/Meeks).
Hirsch, Robert, . October 1970 (Lithium-Ion source).
Hirsch, Robert, . April 1972 (Reduce plasma leakage).
P. T. Farnsworth, . May 1972 (Electrostatic containment).
R. W. Bussard, "Method and apparatus for controlling charged particles", , May 1989 (Method and apparatus — Magnetic grid fields).
R. W. Bussard, "Method and apparatus for creating and controlling nuclear fusion reactions", , November 1992 (Method and apparatus — Ion acoustic waves).
See also
Coulomb barrier
Helium-3 – possible fuel
List of Fusor examples
Polywell
References
Further reading
Reducing the Barriers to Fusion Electric Power; G. L. Kulcinski and J. F. Santarius, October 1997 Presented at "Pathways to Fusion Power", submitted to Journal of Fusion Energy, vol. 17, No. 1, 1998. (Abstract in PDF)
Robert L. Hirsch, "Inertial-Electrostatic Confinement of Ionized Fusion Gases", Journal of Applied Physics, v. 38, no. 7, October 1967
Irving Langmuir, Katharine B. Blodgett, "Currents limited by space charge between concentric spheres" Physical Review, vol. 24, No. 1, pp49–59, 1924
R. A. Anderl, J. K. Hartwell, J. H. Nadler, J. M. DeMora, R. A. Stubbers, and G. H. Miley, Development of an IEC Neutron Source for NDE, 16th Symposium on Fusion Engineering, eds. G. H. Miley and C. M. Elliott, IEEE Conf. Proc. 95CH35852, IEEE Piscataway, New Jersey, 1482–1485 (1996).
"On the Inertial-Electrostatic Confinement of a Plasma" William C. Elmore, James L. Tuck, Kenneth M. Watson, The Physics of Fluids v. 2, no 3, May–June, 1959
; R. P. Ashley, G. L. Kulcinski, J.F. Santarius, S. Krupakar Murali, G. Piefer; IEEE Publication 99CH37050, pp. 35–37, 18th Symposium on Fusion Engineering, Albuquerque NM, 25–29 October 1999.
G. L. Kulcinski, Progress in Steady State Fusion of Advanced Fuels in the University of Wisconsin IEC Device, March 2001
Fusion Reactivity Characterization of a Spherically Convergent Ion Focus, T.A. Thorson, R.D. Durst, R.J. Fonck, A.C. Sontag, Nuclear Fusion, Vol. 38, No. 4. p. 495, April 1998. (abstract)
Convergence, Electrostatic Potential, and Density Measurements in a Spherically Convergent Ion Focus, T. A. Thorson, R. D. Durst, R. J. Fonck, and L. P. Wainwright, Phys. Plasma, 4:1, January 1997.
R. W. Bussard and L. W. Jameson, "Inertial-Electrostatic Propulsion Spectrum: Airbreathing to Interstellar Flight", Journal of Propulsion and Power, v 11, no 2. The authors describe the proton — Boron 11 reaction and its application to ionic electrostatic confinement.
R. W. Bussard and L. W. Jameson, "Fusion as Electric Propulsion", Journal of Propulsion and Power, v 6, no 5, September–October, 1990
Todd H. Rider, "A general critique of inertial-electrostatic confinement fusion systems", M.S. thesis at MIT, 1994.
Todd H. Rider, "Fundamental limitations on plasma fusion systems not in thermodynamic equilibrium", Ph.D. thesis at MIT, 1995.
Todd H. Rider, "Fundamental limitations on plasma fusion systems not in thermodynamic equilibrium" Physics of Plasmas, April 1997, Volume 4, Issue 4, pp. 1039–1046.
Could Advanced Fusion Fuels Be Used with Today's Technology?; J.F. Santarius, G.L. Kulcinski, L.A. El-Guebaly, H.Y. Khater, January 1998 [presented at Fusion Power Associates Annual Meeting, 27–29 August 1997, Aspen CO; Journal of Fusion Energy, Vol. 17, No. 1, 1998, p. 33].
R. W. Bussard and L. W. Jameson, "From SSTO to Saturn's Moons, Superperformance Fusion Propulsion for Practical Spaceflight", 30th AIAA/ASME/SAE/ASEE Joint Propulsion Conference, 27–29 June 1994, AIAA-94-3269
External links
David, Schneider, "Fusion from Television?". American Scientist, July–August
RTFTechnologies.org IEC Fusion Reactor Detailed IEC reactor construction information
Neutrons for sale — New Scientist article
Fusion Experiments Show Nuclear Power's Softer Side — Wired article
Various Patents and Articles Related to Fusion, IEC, ICC and Plasma Physics
How a Small Vacuum System and a Bit of Basketweaving Will Get You a Working Inertial-Electrostatic Confinement Neutron Source
Description of Bussard's "aneutronic" boron version
Fusor.net Forum for hobbyist fusor builders
NSD-Fusion
Teaches high school students fusors.
The Farnsworth Fusor at the Farnsworth Chronicles (farnovision.com)
How-to: Making A Fusor in 60 minutes
Neutron sources
Fusion reactors
American inventions | Fusor | [
"Chemistry"
] | 6,022 | [
"Nuclear fusion",
"Fusion reactors"
] |
42,896 | https://en.wikipedia.org/wiki/Viscometer | A viscometer (also called viscosimeter) is an instrument used to measure the viscosity of a fluid. For liquids with viscosities which vary with flow conditions, an instrument called a rheometer is used. Thus, a rheometer can be considered as a special type of viscometer. Viscometers can measure only constant viscosity, that is, viscosity that does not change with flow conditions.
In general, either the fluid remains stationary and an object moves through it, or the object is stationary and the fluid moves past it. The drag caused by relative motion of the fluid and a surface is a measure of the viscosity. The flow conditions must have a sufficiently small value of Reynolds number for there to be laminar flow.
At 20 °C, the dynamic viscosity (kinematic viscosity × density) of water is 1.0038 mPa·s and its kinematic viscosity (product of flow time × factor) is 1.0022 mm²/s. These values are used for calibrating certain types of viscometers.
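The relation between the two viscosities quoted above is simply kinematic viscosity = dynamic viscosity / density; the short sketch below illustrates it. The density value is an assumed reference figure, which is why the result differs slightly from the quoted calibration value.

```python
# Minimal sketch: kinematic viscosity = dynamic viscosity / density.
mu_water = 1.0038e-3   # dynamic viscosity of water at 20 °C, Pa·s (value from the text)
rho_water = 998.2      # density of water at 20 °C, kg/m^3 (assumed reference value)
nu_water = mu_water / rho_water        # kinematic viscosity, m^2/s
print(f"{nu_water * 1e6:.4f} mm^2/s")  # ~1.0056 mm^2/s, close to the quoted 1.0022 mm^2/s
```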
Standard laboratory viscometers for liquids
U-tube viscometers
These devices are also known as glass capillary viscometers or Ostwald viscometers, named after Wilhelm Ostwald. Another version is the Ubbelohde viscometer, which consists of a U-shaped glass tube held vertically in a controlled temperature bath. In one arm of the U is a vertical section of precise narrow bore (the capillary). Above this is a bulb, and another bulb sits lower down on the other arm. In use, liquid is drawn into the upper bulb by suction, then allowed to flow down through the capillary into the lower bulb. Two marks (one above and one below the upper bulb) indicate a known volume. The time taken for the level of the liquid to pass between these marks is proportional to the kinematic viscosity. The calibration can be done using a fluid of known properties. Most commercial units are provided with a conversion factor.
The time required for the test liquid to flow through a capillary of known diameter between two marked points is measured. By multiplying the time taken by the factor of the viscometer, the kinematic viscosity is obtained.
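A sketch of that calculation follows; all numbers are hypothetical and chosen only for illustration.

```python
# Hypothetical U-tube viscometer reading: kinematic viscosity = flow time × tube factor.
tube_factor = 0.01      # mm^2/s per second; a made-up calibration constant for one tube
flow_time_s = 100.2     # seconds for the meniscus to pass between the two marks (made up)
kinematic_viscosity = tube_factor * flow_time_s   # mm^2/s
print(f"{kinematic_viscosity:.3f} mm^2/s")        # 1.002 mm^2/s
```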
Such viscometers can be classified as direct-flow or reverse-flow. Reverse-flow viscometers have the reservoir above the markings, and direct-flow are those with the reservoir below the markings. Such classifications exist so that the level can be determined even when opaque or staining liquids are measured; otherwise the liquid will cover the markings and make it impossible to gauge the time at which the level passes the mark. This also allows the viscometer to have more than one set of marks, yielding two timings and allowing subsequent calculation of determinability to ensure accurate results. The use of two timings in one viscometer in a single run is only possible if the sample being measured has Newtonian properties. Otherwise the change in driving head, which in turn changes the shear rate, will produce a different viscosity for the two bulbs.
Falling-sphere viscometers
Stokes' law is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid. If correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube. Electronic sensing can be used for opaque fluids. Knowing the terminal velocity, the size and density of the sphere, and the density of the liquid, Stokes' law can be used to calculate the viscosity of the fluid. A series of steel ball bearings of different diameters are normally used in the classic experiment to improve the accuracy of the calculation. The school experiment uses glycerol as the fluid, and the technique is used industrially to check the viscosity of fluids used in processes. These include many different oils and polymer liquids.
In 1851, George Gabriel Stokes derived an expression for the frictional force (also called drag force) exerted on spherical objects with very small Reynolds numbers (e.g., very small particles) in a continuous viscous fluid by solving the small fluid-mass limit of the generally unsolvable Navier–Stokes equations:
where
is the frictional force,
is the radius of the spherical object,
is the fluid viscosity,
is the particle velocity.
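The standard Stokes drag law, with R the sphere radius, μ the fluid viscosity and v the particle velocity as described above, is:

```latex
F_d = 6 \pi \mu R v
```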
If the particles are falling in the viscous fluid by their own weight, then a terminal velocity, also known as the settling velocity, is reached when this frictional force combined with the buoyant force exactly balance the gravitational force. The resulting settling velocity (or terminal velocity) is given by
where:
is the particle settling velocity (m/s), vertically downwards if , upwards if ,
is the Stokes radius of the particle (m),
is the gravitational acceleration (m/s²),
is the density of the particles (kg/m³),
is the density of the fluid (kg/m³),
is the (dynamic) fluid viscosity (Pa·s).
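The standard form of the settling velocity, with the symbols as described in the list above (sphere radius R, particle density ρp, fluid density ρf, gravitational acceleration g, dynamic viscosity μ), is:

```latex
v_s = \frac{2}{9}\,\frac{\left(\rho_p - \rho_f\right)\, g\, R^{2}}{\mu}
```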
Note that Stokes flow is assumed, so the Reynolds number must be small.
A limiting factor on the validity of this result is the roughness of the sphere being used.
A modification of the straight falling-sphere viscometer is a rolling-ball viscometer, which times a ball rolling down a slope whilst immersed in the test fluid. This can be further improved by using a patented V plate, which increases the number of rotations to distance traveled, allowing smaller, more portable devices. The controlled rolling motion of the ball avoids turbulence in the fluid, which would otherwise occur with a falling ball. This type of device is also suitable for shipboard use.
Falling-piston viscometer
Also known as the Norcross viscometer after its inventor, Austin Norcross. The principle of viscosity measurement in this rugged and sensitive industrial device is based on a piston and cylinder assembly. The piston is periodically raised by an air lifting mechanism, drawing the material being measured down through the clearance (gap) between the piston and the wall of the cylinder into the space formed below the piston as it is raised. The assembly is then typically held up for a few seconds, then allowed to fall by gravity, expelling the sample out through the same path that it entered, creating a shearing effect on the measured liquid, which makes this viscometer particularly sensitive and good for measuring certain thixotropic liquids. The time of fall is a measure of viscosity, with the clearance between the piston and inside of the cylinder forming the measuring orifice. The viscosity controller measures the time of fall (time-of-fall seconds being the measure of viscosity) and displays the resulting viscosity value. The controller can calibrate the time-of-fall value to cup seconds (known as efflux cup), Saybolt universal second (SUS) or centipoise.
Industrial use is popular due to simplicity, repeatability, low maintenance and longevity. This type of measurement is not affected by flow rate or external vibrations. The principle of operation can be adapted for many different conditions, making it ideal for process control environments.
Oscillating-piston viscometer
Sometimes referred to as the electromagnetic viscometer or EMV viscometer, this device was invented at Cambridge Viscosity (formerly Cambridge Applied Systems) in 1986. The sensor (see figure below) comprises a measurement chamber and magnetically influenced piston. Measurements are taken whereby a sample is first introduced into the thermally controlled measurement chamber where the piston resides. Electronics drive the piston into oscillatory motion within the measurement chamber with a controlled magnetic field. A shear stress is imposed on the liquid (or gas) due to the piston travel, and the viscosity is determined by measuring the travel time of the piston. The construction parameters for the annular spacing between the piston and measurement chamber, the strength of the electromagnetic field, and the travel distance of the piston are used to calculate the viscosity according to Newton's law of viscosity.
The oscillating-piston viscometer technology has been adapted for small-sample viscosity and micro-sample viscosity testing in laboratory applications. It has also been adapted to high-pressure viscosity and high-temperature viscosity measurements in both laboratory and process environments. The viscosity sensors have been scaled for a wide range of industrial applications, such as small-size viscometers for use in compressors and engines, flow-through viscometers for dip coating processes, in-line viscometers for use in refineries, and hundreds of other applications. Improvements in sensitivity from modern electronics are stimulating a growth in oscillating-piston viscometer popularity among academic laboratories exploring gas viscosity.
Vibrational viscometers
Vibrational viscometers date back to the 1950s Bendix instrument, which is of a class that operates by measuring the damping of an oscillating electromechanical resonator immersed in a fluid whose viscosity is to be determined. The resonator generally oscillates in torsion or transversely (as a cantilever beam or tuning fork). The higher the viscosity, the larger the damping imposed on the resonator. The resonator's damping may be measured by one of several methods:
Measuring the power input necessary to keep the oscillator vibrating at a constant amplitude. The higher the viscosity, the more power is needed to maintain the amplitude of oscillation.
Measuring the decay time of the oscillation once the excitation is switched off. The higher the viscosity, the faster the signal decays.
Measuring the frequency of the resonator as a function of phase angle between excitation and response waveforms. The higher the viscosity, the larger the frequency change for a given phase change.
The vibrational instrument also suffers from a lack of a defined shear field, which makes it unsuited to measuring the viscosity of a fluid whose flow behaviour is not known beforehand.
Vibrating viscometers are rugged industrial systems used to measure viscosity in the process condition. The active part of the sensor is a vibrating rod. The vibration amplitude varies according to the viscosity of the fluid in which the rod is immersed. These viscosity meters are suitable for measuring clogging fluid and high-viscosity fluids, including those with fibers (up to 1000 Pa·s). Currently, many industries around the world consider these viscometers to be the most efficient system with which to measure the viscosities of a wide range of fluids; by contrast, rotational viscometers require more maintenance, are unable to measure clogging fluid, and require frequent calibration after intensive use. Vibrating viscometers have no moving parts, no weak parts and the sensitive part is typically small. Even very basic or acidic fluids can be measured by adding a protective coating, such as enamel, or by changing the material of the sensor to a material such as 316L stainless steel. Vibrating viscometers are the most widely used inline instrument to monitor the viscosity of the process fluid in tanks, and pipes.
Quartz viscometer
The quartz viscometer is a special type of vibrational viscometer. Here, an oscillating quartz crystal is immersed into a fluid and the specific influence on the oscillating behavior defines the viscosity. The principle of quartz viscosimetry is based on the idea of W. P. Mason. The basic concept is the application of a piezoelectric crystal for the determination of viscosity. The high-frequency electric field that is applied to the oscillator causes a movement of the sensor and results in the shearing of the fluid. The movement of the sensor is then influenced by the external forces (the shear stress) of the fluid, which affects the electrical response of the sensor. The calibration procedure as a pre-condition of viscosity determination by means of a quartz crystal goes back to B. Bode, who facilitated the detailed analysis of the electrical and mechanical transmission behavior of the oscillating system. On the basis of this calibration, the quartz viscosimeter was developed which allows continuous viscosity determination in resting and flowing liquids.
Quartz crystal microbalance
The quartz crystal microbalance functions as a vibrational viscometer by using the piezoelectric properties inherent in quartz to perform measurements of conductance spectra of liquids and thin films exposed to the surface of the crystal. From these spectra, frequency shifts and a broadening of the peaks for the resonant and overtone frequencies of the quartz crystal are tracked and used to determine changes in mass as well as the viscosity, shear modulus, and other viscoelastic properties of the liquid or thin film. One benefit of using the quartz crystal microbalance to measure viscosity is the small amount of sample required for obtaining an accurate measurement. However, due to the dependence of the viscoelastic properties on the sample preparation techniques and the thickness of the film or bulk liquid, there can be errors of up to 10% in viscosity measurements between samples.
One technique for measuring the viscosity of a liquid with a quartz crystal microbalance, which improves the consistency of measurements, is the drop method. Instead of creating a thin film or submerging the quartz crystal in a liquid, a single drop of the fluid of interest is deposited on the surface of the crystal. The viscosity is extracted from the shift in the frequency data using the following equation
where is the resonant frequency, is the density of the fluid, is the shear modulus of the quartz, and is the density of the quartz. An extension of this technique corrects the shift in the resonant frequency by the size of the drop deposited on the quartz crystal.
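A widely used relation of this kind is the Kanazawa–Gordon equation, assumed here to be the intended expression. With f₀ the resonant frequency, Δf the frequency shift, ρL and ηL the density and viscosity of the liquid, and ρq and μq the density and shear modulus of quartz, it reads:

```latex
\Delta f = - f_0^{3/2} \sqrt{\frac{\rho_L \, \eta_L}{\pi \, \rho_q \, \mu_q}}
\quad\Longrightarrow\quad
\eta_L = \frac{\pi \, \rho_q \, \mu_q \, (\Delta f)^{2}}{f_0^{3} \, \rho_L}
```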
Rotational viscometers
Rotational viscometers use the idea that the torque required to rotate an object in a fluid is a function of the viscosity of that fluid. They measure the torque required to rotate a disk or bob in a fluid at a known speed.
"Cup and bob" viscometers work by defining the exact volume of a sample to be sheared within a test cell; the torque required to achieve a certain rotational speed is measured and plotted. There are two classical geometries in "cup and bob" viscometers, known as either the "Couette" or "Searle" systems, distinguished by whether the cup or bob rotates. The rotating cup is preferred in some cases because it reduces the onset of Taylor vortices at very high shear rates, but the rotating bob is more commonly used, as the instrument design can be more flexible for other geometries as well.
"Cone and plate" viscometers use a narrow-angled cone in close proximity to a flat plate. With this system, the shear rate between the geometries is constant at any given rotational speed. The viscosity can easily be calculated from shear stress (from the torque) and shear rate (from the angular velocity).
If a test with any geometries runs through a table of several shear rates or stresses, the data can be used to plot a flow curve, that is a graph of viscosity vs shear rate. If the above test is carried out slowly enough for the measured value (shear stress if rate is being controlled, or conversely) to reach a steady value at each step, the data is said to be at "equilibrium", and the graph is then an "equilibrium flow curve". This is preferable over non-equilibrium measurements, as the data can usually be replicated across multiple other instruments or with other geometries.
Calculation of shear rate and shear stress form factors
Rheometers and viscometers work with torque and angular velocity. Since viscosity is normally considered in terms of shear stress and shear rates, a method is needed to convert from "instrument numbers" to "rheology numbers". Each measuring system used in an instrument has its associated "form factors" to convert torque to shear stress and to convert angular velocity to shear rate.
We will call the shear stress form factor C1 and the shear rate form factor C2.
shear stress = torque ÷ C1.
shear rate = C2 × angular velocity.
For some measuring systems such as parallel plates, the user can set the gap between the measuring systems. In this case the equation used is
shear rate = C2 × angular velocity / gap.
viscosity = shear stress / shear rate.
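A minimal sketch of how these form factors are used in practice follows; all numerical values are hypothetical.

```python
# Hypothetical conversion from instrument readings (torque, angular velocity)
# to rheology quantities using the form factors C1 and C2 described above.
def viscosity_from_readings(torque, angular_velocity, c1, c2, gap=None):
    """Return (shear_stress, shear_rate, viscosity). Torque in N·m, angular
    velocity in rad/s; if a gap (m) is given, as for parallel plates, the
    shear rate is divided by it."""
    shear_stress = torque / c1
    shear_rate = c2 * angular_velocity
    if gap is not None:
        shear_rate /= gap
    return shear_stress, shear_rate, shear_stress / shear_rate

# Made-up cone-and-plate reading:
stress, rate, eta = viscosity_from_readings(torque=2.0e-4, angular_velocity=10.0,
                                            c1=5.0e-5, c2=25.0)
print(f"{stress:.1f} Pa, {rate:.0f} 1/s, {eta:.4f} Pa·s")  # 4.0 Pa, 250 1/s, 0.0160 Pa·s
```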
The following sections show how the form factors are calculated for each measuring system.
Cone and plate
where
is the radius of the cone,
is the cone angle in radians.
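Commonly used cone-and-plate form factors, with r the cone radius and θ the cone angle in radians as listed above (given here as typical textbook values rather than a transcription of the original figures), are:

```latex
C_1 = \frac{2 \pi r^{3}}{3}, \qquad C_2 = \frac{1}{\theta}
```

so that the shear stress is 3 × torque / (2πr³) and the shear rate is the angular velocity divided by θ.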
Parallel plates
where is the radius of the plate.
Note: The shear stress varies across the radius for a parallel plate. The above formula refers to the 3/4 radius position if the test sample is Newtonian.
Coaxial cylinders
where:
is the average radius,
is the inner radius,
is the outer radius,
is the height of cylinder.
Note: takes the shear stress as that occurring at an average radius .
Electromagnetically spinning-sphere viscometer (EMS viscometer)
The EMS viscometer measures the viscosity of liquids through observation of the rotation of a sphere driven by electromagnetic interaction: Two magnets attached to a rotor create a rotating magnetic field. The sample ③ to be measured is in a small test tube ②. Inside the tube is an aluminium sphere ④. The tube is located in a temperature-controlled chamber ① and set such that the sphere is situated in the centre of the two magnets.
The rotating magnetic field induces eddy currents in the sphere. The resulting Lorentz interaction between the magnetic field and these eddy currents generates a torque that rotates the sphere. The rotational speed of the sphere depends on the rotational velocity of the magnetic field, the magnitude of the magnetic field and the viscosity of the sample around the sphere. The motion of the sphere is monitored by a video camera ⑤ located below the cell. The torque applied to the sphere is proportional to the difference between the angular velocity of the magnetic field and that of the sphere. There is thus a linear relationship between the relative difference in angular velocities and the viscosity of the liquid.
This new measuring principle was developed by Sakai et al. at the University of Tokyo. The EMS viscometer distinguishes itself from other rotational viscometers by three main characteristics:
All parts of the viscometer that come in direct contact with the sample are disposable and inexpensive.
The measurements are performed in a sealed sample vessel.
The EMS viscometer requires only very small sample quantities (0.3 mL).
Stabinger viscometer
By modifying the classic Couette-type rotational viscometer, it is possible to combine the accuracy of kinematic viscosity determination with a wide measuring range.
The outer cylinder of the Stabinger viscometer is a sample-filled tube that rotates at constant speed in a temperature-controlled copper housing. The hollow internal cylinder – shaped as a conical rotor – is centered within the sample by hydrodynamic lubrication effects and centrifugal forces. In this way all bearing friction, an inevitable factor in most rotational devices, is fully avoided. The rotating fluid's shear forces drive the rotor, while a magnet inside the rotor forms an eddy current brake with the surrounding copper housing. An equilibrium rotor speed is established between driving and retarding forces, which is an unambiguous measure of the dynamic viscosity. The speed and torque measurement is implemented without direct contact by a Hall-effect sensor counting the frequency of the rotating magnetic field. This allows a highly precise torque resolution of 50 pN·m and a wide measuring range from 0.2 to 30,000 mPa·s with a single measuring system. A built-in density measurement based on the oscillating U-tube principle allows the determination of kinematic viscosity from the measured dynamic viscosity employing the relation
where:
is the kinematic viscosity (mm²/s),
is the dynamic viscosity (mPa·s),
is the density (g/cm³).
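The relation referred to above is the definition of kinematic viscosity; with the units listed (mPa·s and g/cm³) it yields the result directly in mm²/s:

```latex
\nu = \frac{\eta}{\rho}
```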
Bubble viscometer
Bubble viscometers are used to quickly determine kinematic viscosity of known liquids such as resins and varnishes. The time required for an air bubble to rise is directly proportional to the viscosity of the liquid, so the faster the bubble rises, the lower the viscosity. The alphabetical-comparison method uses 4 sets of lettered reference tubes, A5 through Z10, of known viscosity to cover a viscosity range from 0.005 to 1,000 stokes. The direct-time method uses a single 3-line times tube for determining the "bubble seconds", which may then be converted to stokes.
This method is reasonably accurate, but the measurements can vary due to variations in buoyancy caused by changes in the shape of the bubble in the tube. However, this does not cause any serious miscalculation.
Rectangular-slit viscometer
The basic design of a rectangular-slit viscometer/rheometer consists of a rectangular-slit channel with uniform cross-sectional area. A test liquid is pumped at a constant flow rate through this channel. Multiple pressure sensors flush-mounted at linear distances along the stream-wise direction measure pressure drop as depicted in the figure:
Measuring principle: The slit viscometer/rheometer is based on the fundamental principle that a viscous liquid resists flow, exhibiting a decreasing pressure along the length of the slit. The pressure decrease or drop () is correlated with the shear stress at the wall boundary. The apparent shear rate is directly related to the flow rate and the dimension of the slit. The apparent shear rate, the shear stress, and the apparent viscosity are calculated:
where
is the apparent shear rate (s⁻¹),
is the shear stress (Pa),
is the apparent viscosity (Pa·s),
is the pressure difference between the leading pressure sensor and the last pressure sensor (Pa),
is the flow rate (ml/s),
is the width of the flow channel (mm),
is the depth of the flow channel (mm),
is the distance between the leading pressure sensor and the last pressure sensor (mm).
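For a slit whose width greatly exceeds its depth, commonly used forms of these quantities (with Q the flow rate, w the width, h the depth, l the sensor spacing and ΔP the pressure drop, matching the quantities listed above; symbol names are ours) are:

```latex
\dot{\gamma}_a = \frac{6\,Q}{w\,h^{2}}, \qquad
\tau = \frac{h\,\Delta P}{2\,l}, \qquad
\eta_a = \frac{\tau}{\dot{\gamma}_a}
```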
To determine the viscosity of a liquid, the liquid sample is pumped through the slit channel at a constant flow rate, and the pressure drop is measured. Following these equations, the apparent viscosity is calculated for the apparent shear rate. For a Newtonian liquid, the apparent viscosity is the same as the true viscosity, and the single shear-rate measurement is sufficient. For non-Newtonian liquids, the apparent viscosity is not true viscosity. In order to obtain true viscosity, the apparent viscosities are measured at multiple apparent shear rates. Then true viscosities at various shear rates are calculated using Weissenberg–Rabinowitsch–Mooney correction factor:
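The usual Weissenberg–Rabinowitsch–Mooney correction for slit flow (assumed here to be the intended form) gives the true wall shear rate from the apparent one, with the true viscosity then taken as the wall shear stress over this corrected rate:

```latex
\dot{\gamma}_w = \frac{\dot{\gamma}_a}{3}\left(2 + \frac{\mathrm{d}\,\ln \dot{\gamma}_a}{\mathrm{d}\,\ln \tau}\right),
\qquad \eta = \frac{\tau}{\dot{\gamma}_w}
```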
The calculated true viscosity is the same as the cone and plate values at the same shear rate.
A modified version of the rectangular-slit viscometer/rheometer can also be used to determine apparent extensional viscosity.
Krebs Viscometer
The Krebs Viscometer uses a digital graph and a small sidearm spindle to measure the viscosity of a fluid. It is mostly used in the paint industry.
Miscellaneous viscometer types
Other viscometer types use balls or other objects. Viscometers that can characterize non-Newtonian fluids are usually called rheometers or plastometers. Some instruments like capillary or VROC® viscometers can measure both Newtonian and non-Newtonian fluids.
In the I.C.I "Oscar" viscometer, a sealed can of fluid was oscillated torsionally, and by clever measurement techniques it was possible to measure both viscosity and elasticity in the sample.
The Marsh funnel viscometer measures viscosity from the time (efflux time) it takes a known volume of liquid to flow from the base of a cone through a short tube. This is similar in principle to the flow cups (efflux cups) like the Ford, Zahn and Shell cups which use different shapes to the cone and various nozzle sizes. The measurements can be done according to ISO 2431, ASTM D1200 - 10 or DIN 53411.
The flexible-blade rheometer improves the accuracy of measurements for the lower-viscosity liquids utilizing the subtle changes in the flow field due to the flexibility of the moving or stationary blade (sometimes called wing or single-side-clamped cantilever).
A rotating disk viscometer is the standard viscometer for measuring material viscosity and scorch time for rubber before vulcanization.
See also
Flow measurement
Poiseuille equation
Viscotherm
References
British Standards Institute BS ISO/TR 3666:1998 Viscosity of water
British Standards Institute BS 188:1977 Methods for Determination of the viscosity of liquids
External links
RHEOTEST Medingen GmbH - History and Collection of rheological instruments from the time of Fritz Höppler
ASTM International (ASTM D7042)
Viscosity conversion tables
Home - Alpha Technologies (formerly Monsanto Instruments and Equipment) - Akron, Ohio USA
Anton Paar - Basics of viscometry
Polymers | Viscometer | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 5,323 | [
"Polymers",
"Viscosity meters",
"Polymer chemistry",
"Measuring instruments"
] |
42,909 | https://en.wikipedia.org/wiki/Jacobus%20Henricus%20van%20%27t%20Hoff | Jacobus Henricus van 't Hoff Jr. (; 30 August 1852 – 1 March 1911) was a Dutch physical chemist. A highly influential theoretical chemist of his time, van 't Hoff was the first winner of the Nobel Prize in Chemistry. His pioneering work helped found the modern theory of chemical affinity, chemical equilibrium, chemical kinetics, and chemical thermodynamics. In his 1874 pamphlet, Van 't Hoff formulated the theory of the tetrahedral carbon atom and laid the foundations of stereochemistry. In 1875, he predicted the correct structures of allenes and cumulenes as well as their axial chirality. He is also widely considered one of the founders of physical chemistry as the discipline is known today.
Biography
The third of seven children, van 't Hoff was born in Rotterdam, Netherlands, 30 August 1852. His father was Jacobus Henricus van 't Hoff Sr., a physician, and his mother was Alida Kolff van 't Hoff. From a young age, he was interested in science and nature, and frequently took part in botanical excursions. In his early school years, he showed a strong interest in poetry and philosophy. He considered Lord Byron to be his idol.
Against the wishes of his father, van 't Hoff chose to study chemistry. First, he enrolled at Delft University of Technology in September 1869, and studied until 1871, when he passed his final exam on 8 July and obtained a degree of chemical technologist. He passed all his courses in two years, although the time assigned to study was three years. Then he enrolled at University of Leiden to study chemistry. He then studied in Bonn, Germany, with August Kekulé and in Paris with Adolphe Wurtz. He received his doctorate under Eduard Mulder at the University of Utrecht in 1874.
In 1878, van 't Hoff married Johanna Francina Mees. They had two daughters, Johanna Francina (1880–1964) and Aleida Jacoba (1882–1971), and two sons, Jacobus Henricus van 't Hoff III (1883–1943) and Govert Jacob (1889–1918). Van 't Hoff died at the age of 58, on 1 March 1911, at Steglitz, near Berlin, of tuberculosis.
Career
Organic chemistry
Van 't Hoff earned his earliest reputation in the field of organic chemistry. In 1874, he accounted for the phenomenon of optical activity by assuming that the chemical bonds between carbon atoms and their neighbors were directed towards the corners of a regular tetrahedron. This three-dimensional structure accounted for the isomers found in nature. He shares credit for this with the French chemist Joseph Le Bel, who independently came up with the same idea.
Three months before his doctoral degree was awarded, van 't Hoff published this theory, which today is regarded as the foundation of stereochemistry, first in a Dutch pamphlet in the fall of 1874, and then in the following May in a small French book entitled La chimie dans l'espace. A German translation appeared in 1877, at a time when the only job van 't Hoff could find was at the Veterinary School in Utrecht. In these early years his theory was largely ignored by the scientific community, and was sharply criticized by one prominent chemist, Hermann Kolbe. Kolbe wrote:
"A Dr. J. H. van 't Hoff of the Veterinary School at Utrecht has no liking, apparently, for exact chemical investigation. He has considered it more convenient to mount Pegasus (apparently borrowed from the Veterinary School) and to proclaim in his ‘La chimie dans l’espace’ how, in his bold flight to the top of the chemical Parnassus, the atoms appeared to him to be arranged in cosmic space." However, by about 1880, support for van 't Hoff's theory by such important chemists as Johannes Wislicenus and Viktor Meyer brought recognition.
Physical chemistry
In 1884, Van 't Hoff published his research on chemical kinetics, titled Études de Dynamique chimique ("Studies in Chemical Dynamics"), in which he described a new method for determining the order of a reaction using graphics and applied the laws of thermodynamics to chemical equilibria. He also introduced the modern concept of chemical affinity. In 1886, he showed a similarity between the behaviour of dilute solutions and gases. In 1887, he and German chemist Wilhelm Ostwald founded an influential scientific magazine named Zeitschrift für physikalische Chemie ("Journal of Physical Chemistry"). He worked on Svante Arrhenius's theory of the dissociation of electrolytes and in 1889 provided physical justification for the Arrhenius equation. In 1896, he became a professor at the Prussian Academy of Sciences in Berlin. His studies of the salt deposits at Stassfurt were an important contribution to Prussia's chemical industry.
Van 't Hoff became a lecturer in chemistry and physics at the Veterinary College in Utrecht. He then worked as a professor of chemistry, mineralogy, and geology at the University of Amsterdam for almost 18 years before eventually becoming the chairman of the chemistry department. In 1896, van 't Hoff moved to Germany, where he finished his career at the University of Berlin in 1911. In 1901, he received the first Nobel Prize in Chemistry for his work with solutions. His work showed that very dilute solutions follow mathematical laws that closely resemble the laws describing the behavior of gases.
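As an illustration of that resemblance (again in modern textbook notation), the osmotic pressure \(\Pi\) of a dilute solution follows a law with the same form as the ideal gas law:

\[
\Pi V = i\, n R T \qquad \text{or equivalently} \qquad \Pi = i\, c R T,
\]

where n is the amount of dissolved solute in volume V, c is its molar concentration, and i is the van 't Hoff factor, which accounts for dissociation of the solute into several particles.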
Honours and awards
In 1885, Van 't Hoff was appointed as a Member of the Royal Netherlands Academy of Arts and Sciences. In 1904, he was elected a member of the American Philosophical Society. Other distinctions include honorary doctorates from Harvard and Yale (1901), Victoria University, the University of Manchester (1903), and the University of Heidelberg (1908). He was awarded the Davy Medal of the Royal Society in 1893 (along with Le Bel), and elected a Foreign Member of the Royal Society (ForMemRS) in 1897. He was awarded the Helmholtz Medal of the Prussian Academy of Sciences (1911), and appointed Knight of the French Legion of Honour (1894) and Senator in the Kaiser-Wilhelm-Gesellschaft (1911). Van 't Hoff became an Honorary Member of the British Chemical Society in London, the Royal Netherlands Academy of Arts and Sciences (1892), the American Chemical Society (1898), the Académie des Sciences in Paris (1905), and the Netherlands Chemical Society (1908). Of his numerous distinctions, van 't Hoff regarded winning the first Nobel Prize in Chemistry as the culmination of his career. The following are named after him:
Van 't Hoff factor
Van 't Hoff equation
Le Bel–Van 't Hoff rule
On 14 May 2021, asteroid 34978 van 't Hoff, discovered by astronomers with the Palomar–Leiden survey in 1977, was named in his memory.
Works
References
Further reading
Patrick Coffey, Cathedrals of Science: The Personalities and Rivalries That Made Modern Chemistry, Oxford University Press, 2008.
Hornix WJ, Mannaerts SHWM, Van 't Hoff and the Emergence of Chemical Thermodynamics, Delft University Press, 2001.
External links
Nobel Lecture, 13 December 1901: Osmotic Pressure and Chemical Equilibrium
20th-century deaths from tuberculosis
1852 births
1911 deaths
Delft University of Technology alumni
Dutch expatriates in Germany
Dutch Nobel laureates
Dutch physical chemists
20th-century Dutch chemists
Foreign associates of the National Academy of Sciences
Foreign members of the Royal Society
Academic staff of the Humboldt University of Berlin
Leiden University alumni
Members of the Royal Netherlands Academy of Arts and Sciences
Nobel laureates in Chemistry
Dutch organic chemists
Recipients of the Pour le Mérite (civil class)
Scientists from Rotterdam
Stereochemists
Theoretical chemists
Tuberculosis deaths in Germany
Academic staff of the University of Amsterdam
University of Bonn alumni
Utrecht University alumni
Academic staff of Utrecht University
Members of the American Philosophical Society | Jacobus Henricus van 't Hoff | [
"Chemistry"
] | 1,645 | [
"Quantum chemistry",
"Physical chemists",
"Dutch organic chemists",
"Organic chemists",
"Stereochemistry",
"Theoretical chemistry",
"Theoretical chemists",
"Stereochemists"
] |
42,934 | https://en.wikipedia.org/wiki/Cryostasis%20%28clathrate%20hydrates%29 | The term cryostasis was introduced to describe a reversible preservation technology for living biological objects that is based on the use of clathrate-forming gaseous substances under increased hydrostatic pressure and hypothermic temperatures.
Living tissues cooled below the freezing point of water are damaged by the dehydration of the cells as ice is formed between the cells. The mechanism of freezing damage in living biological tissues has been elucidated by Renfret.
The vapor pressure of the ice is lower than the vapor pressure of the liquid water in the solution within the surrounding cells, and as heat is removed at the freezing point of the solutions, the ice crystals grow between the cells, extracting water from them. As the ice crystals grow, the volume of the cells shrinks, and the cells are crushed between the ice crystals. Additionally, as the cells shrink, the solutes inside the cells are concentrated in the remaining water, increasing the intracellular ionic strength and interfering with the organization of the proteins and other organized intracellular structures. Eventually, the solute concentration inside the cells reaches the eutectic composition and freezes. The final state of frozen tissue is pure ice in the former extracellular spaces and, inside the cell membranes, a mixture of concentrated cellular components in ice and bound water. In general, this process is not reversible to the point of restoring the tissues to life.
Cryostasis utilizes clathrate-forming gases that penetrate and saturate the biological tissues causing clathrate hydrates formation (under specific pressure-temperature conditions) inside the cells and in the extracellular matrix. Clathrate hydrates are a class of solids in which gas molecules occupy "cages" made up of hydrogen-bonded water molecules. These "cages" are unstable when empty, collapsing into conventional ice crystal structure, but they are stabilised by the inclusion of the gas molecule within them. Most low molecular weight gases (including CH4, H2S, Ar, Kr, and Xe) will form a hydrate under some pressure-temperature conditions.
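As a concrete example (using the approximate ideal composition of structure I methane hydrate, a standard textbook value rather than a figure taken from the text above), hydrate formation can be written as an equilibrium that is driven to the right by high pressure and low temperature:

\[
\mathrm{CH_{4}(g)} + n\,\mathrm{H_{2}O(l)} \rightleftharpoons \mathrm{CH_{4}\cdot n\,H_{2}O(s)}, \qquad n \approx 5.75 .
\]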
Clathrate formation prevents dehydration of the biological tissues, which would otherwise cause irreversible inactivation of intracellular enzymes.
See also
Cryopreservation
Cryoprotectant
Hibernation
References
Cryobiology | Cryostasis (clathrate hydrates) | [
"Physics",
"Chemistry",
"Biology"
] | 465 | [
"Physical phenomena",
"Phase transitions",
"Biotechnology stubs",
"Cryobiology",
"Biochemistry"
] |
42,935 | https://en.wikipedia.org/wiki/Detection | In general, detection is the action of accessing information without specific cooperation from the sender.
In the history of radio communications, the term "detector" was first used for a device that detected the simple presence or absence of a radio signal, since all communications were in Morse code. The term is still in use today to describe a component that extracts a particular signal from all of the electromagnetic waves present. Detection is usually based on the frequency of the carrier signal, as in the familiar frequencies of radio broadcasting, but it may also involve filtering a faint signal from noise, as in radio astronomy, or reconstructing a hidden signal, as in steganography.
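As a rough sketch of carrier-based detection, the following Python example (all signal parameters are assumed for illustration and are not taken from this article) recovers a slow message signal from a fast carrier by rectification and low-pass filtering, the same principle as the envelope detectors used in early radio receivers.

```python
# Envelope detection sketch: recover a slow message signal riding on a fast
# carrier, as an amplitude-detecting radio receiver would.
import numpy as np

fs = 100_000                                         # sample rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)                       # 50 ms of signal
message = 0.5 * (1 + np.sin(2 * np.pi * 100 * t))    # 100 Hz "information" signal
carrier = np.sin(2 * np.pi * 10_000 * t)             # 10 kHz carrier
received = message * carrier                         # amplitude-modulated signal

rectified = np.abs(received)                         # diode-like rectification

# Moving-average low-pass filter spanning a few carrier periods: it smooths
# away the carrier and keeps only the slowly varying envelope.
window = 4 * int(fs / 10_000)
kernel = np.ones(window) / window
envelope = np.convolve(rectified, kernel, mode="same")

# The recovered envelope tracks the original message up to a scale factor.
trim = slice(window, -window)
print(np.corrcoef(envelope[trim], message[trim])[0, 1])
```

The correlation printed at the end should be close to 1, showing that the detector has extracted the low-frequency information carried on the high-frequency carrier.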
In optoelectronics, "detection" means converting a received optical input to an electrical output. For example, the light signal received through an optical fiber is converted to an electrical signal in a detector such as a photodiode.
In steganography, attempts to detect hidden signals in suspected carrier material is referred to as steganalysis. Steganalysis has an interesting difference from most other types of detection, in that it can often only determine the probability that a hidden message exists; this is in contrast to the detection of signals which are simply encrypted, as the ciphertext can often be identified with certainty, even if it cannot be decoded.
In the military, detection refers to the special discipline of reconnaissance, with the aim of recognizing the presence of an object in a location or environment.
Finally, the art of detection, also known as following clues, is the work of a detective in attempting to reconstruct a sequence of events by identifying the relevant information in a situation.
See also
Object detection
Signal detection theory
Communication
Wireless locating | Detection | [
"Technology"
] | 352 | [
"Wireless locating"
] |
42,940 | https://en.wikipedia.org/wiki/Biostasis | Biostasis is the ability of an organism to tolerate environmental changes without having to actively adapt to them. Biostasis is found in organisms that live in habitats likely to encounter unfavorable living conditions, such as drought, freezing temperatures, or changes in pH, pressure, or temperature. Insects survive these conditions through a type of dormancy called diapause, which may be obligatory for their survival. The insect may also be able to undergo change prior to the arrival of the initiating event.
Microorganisms
Biostasis in this context is also synonymous with the viable but nonculturable (VBNC) state. In the past, when bacteria no longer grew on culture media it was assumed that they were dead. It is now understood that there are many instances where bacterial cells may go into biostasis or suspended animation, fail to grow on media, and on resuscitation become culturable again. The VBNC state differs from the 'starvation survival state' (in which a cell merely reduces its metabolism significantly). Bacterial cells may enter the VBNC state as a result of an outside stressor such as "starvation, incubation outside the temperature range of growth, elevated osmotic concentrations (seawater), oxygen concentrations, or exposure to white light". Any of these conditions could easily mean death for the bacteria if they were unable to enter this state of dormancy. It has also been observed that, in many instances, bacteria thought to have been destroyed (for example by pasteurization of milk) later caused spoilage or harmful effects to consumers because they had entered the VBNC state.
Effects on cells entering the VBNC state include "dwarfing, changes in metabolic activity, reduced nutrient transport, respiration rates and macromolecular synthesis". Yet biosynthesis continues, and shock proteins are made. Most importantly, it has been observed that ATP levels and ATP generation remain high, in complete contrast to dying cells, which show rapid decreases in both generation and retention. Changes to the cell walls of bacteria in the VBNC state have also been observed. In Escherichia coli, a large amount of cross-linking was observed in the peptidoglycan. The autolytic capability was also observed to be much higher in VBNC cells than in cells in the growth state.
It is far easier to induce bacteria into the VBNC state than to return them to a culturable state once they have entered it: "They examined nonculturability and resuscitation in Legionella pneumophila and while entry into this state was easily induced by nutrient starvation, resuscitation could only be demonstrated following co-incubation of the VBNC cells with the amoeba, Acanthamoeba Castellani".
Fungistasis or mycostasis is a naturally occurring VBNC (viable but nonculturable) state found in fungi in soil. Watson and Ford defined fungistasis as "when viable fungal propagules, which are not subject to endogenous or constitutive dormancy do not germinate in soil at their favorable temperature or moisture conditions or growth of fungal hyphae is retarded or terminated by conditions of the soil environment other than temperature or moisture". Essentially, several types of fungi (mostly observed in soil) have been found to enter the VBNC state as a result of outside stressors (temperature, available nutrients, oxygen availability, etc.) or of no observable stressors at all.
Current research
On March 1, 2018, the Defense Advanced Research Projects Agency (DARPA) announced their new Biostasis program under the direction of Dr. Tristan McClure-Begley. The aim of the Biostasis program is to develop new possibilities for extending the golden hour in patients who suffered a traumatic injury by slowing down the human body at the cellular level, addressing the need for additional time in continuously operating biological systems faced with catastrophic, life-threatening events. By leveraging molecular biology, the program aims to control the speed at which living systems operate and figure out a way to "slow life to save life."
On March 20, 2018, the Biostasis team held a Webinar which, along with a Broad Agency Announcement (BAA), solicited five-year research proposals from outside organizations. The full proposals were due on May 22, 2018.
Possible approaches
In their Webinar, DARPA outlined a number of possible research approaches for the Biostasis project. These approaches are based on research into diapause in tardigrades and wood frogs which suggests that selective stabilization of intracellular machinery occurs at the protein level.
Protein chaperoning
In molecular biology, molecular chaperones are proteins that assist in the folding, unfolding, assembly, or disassembly of other macromolecular structures. Under typical conditions, molecular chaperones facilitate changes in shape (conformational change) of macromolecules in response to changes in environmental factors like temperature, pH, and voltage. By reducing conformational flexibility, scientists can constrain the function of certain proteins. Recent research has shown that proteins are promiscuous, or able to do jobs in addition to the ones they evolved to carry out. Additionally, protein promiscuity plays a key role in the adaptation of species to new environments. It is possible that finding a way to control conformational change in promiscuous proteins could allow scientists to induce biostasis in living organisms.
Intracellular crowding
The crowdedness of cells is a critical aspect of biological systems. Intracellular crowding refers to the fact that protein function and interaction with water are constrained when the interior of the cell is overcrowded. Intracellular organelles are either membrane-bound vesicles or membrane-less compartments that compartmentalize the cell and enable spatiotemporal control of biological reactions. By introducing such intracellular polymers to a biological system and manipulating the crowdedness of a cell, scientists may be able to slow down the rate of biological reactions in the system.
Tardigrade-disordered proteins
Tardigrades are microscopic animals that are able to enter a state of diapause and survive a remarkable array of environmental stressors, including freezing and desiccation. Research has shown that intrinsically disordered proteins in these organisms may work to stabilize cell function and protect against these extreme environmental stressors. By using peptide engineering, it is possible that scientists may be able to introduce intrinsically disordered proteins to the biological systems of larger animal organisms. This could allow larger animals to enter a state of biostasis similar to that of tardigrades under extreme biological stress.
References
Oliver, James D. (2005). "The viable but nonculturable state in bacteria." The Journal of Microbiology 43 (1): 93–100.
Garbeva, Paolina; Hol, W. H. Gera; Termorshuizen, Aad J.; Kowalchuk, George A.; de Boer, Wietse. "Fungistasis and general soil biostasis: a new synthesis."
Watson, A. G.; Ford, E. J. (1972). "Soil fungistasis—a reappraisal." Annual Review of Phytopathology 10: 327.
Ecology
Physiology | Biostasis | [
"Biology"
] | 1,505 | [
"Ecology",
"Physiology"
] |