41,494
https://en.wikipedia.org/wiki/Path%20quality%20analysis
Path quality analysis: In a communications path, an analysis that (a) includes the overall evaluation of the component quality measures, the individual link quality measures, and the aggregate path quality measures, and (b) is performed by evaluating communications parameters, such as bit error ratio, signal-plus-noise-plus-distortion to noise-plus-distortion ratio, and spectral distortion. References Radio frequency propagation
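Of the communications parameters listed above, the bit error ratio is the simplest to compute directly; a minimal sketch (the bit streams are hypothetical, Python):

```python
def bit_error_ratio(sent, received):
    """Fraction of bits that differ between two equal-length bit sequences."""
    if len(sent) != len(received):
        raise ValueError("sequences must be the same length")
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

sent     = [1, 0, 1, 1, 0, 0, 1, 0]
received = [1, 0, 0, 1, 0, 1, 1, 0]  # two bits flipped in transit
ber = bit_error_ratio(sent, received)  # 2 errors out of 8 bits = 0.25
```

In a real path quality analysis this measurement would be aggregated per link and then across the whole path, alongside the SINAD and spectral distortion measures mentioned above.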
Path quality analysis
[ "Physics" ]
82
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
41,496
https://en.wikipedia.org/wiki/Pseudo%20bit%20error%20ratio
Pseudo bit error ratio (PBER) in adaptive high-frequency (HF) radio, is a bit error ratio derived by a majority decoder that processes redundant transmissions. Note: In adaptive HF radio automatic link establishment, PBER is determined by the extent of error correction, such as by using the fraction of non-unanimous votes in the 2-of-3 majority decoder. Engineering ratios Error detection and correction
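The 2-of-3 majority decoding described above can be sketched as follows; the bit patterns are hypothetical, and the PBER here is taken, as the note suggests, simply as the fraction of non-unanimous votes:

```python
def majority_decode(a, b, c):
    """Decode three redundant bit streams with a 2-of-3 majority vote.

    Returns the decoded bits and the pseudo bit error ratio (PBER),
    estimated as the fraction of bit positions where the vote was
    not unanimous.
    """
    decoded, non_unanimous = [], 0
    for x, y, z in zip(a, b, c):
        decoded.append(1 if x + y + z >= 2 else 0)  # majority wins
        if not (x == y == z):
            non_unanimous += 1
    return decoded, non_unanimous / len(decoded)

# Three redundant transmissions of the same 8-bit word, with isolated errors:
a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 0, 1, 0, 0, 0, 1, 0]
c = [1, 1, 1, 1, 0, 0, 1, 0]
bits, pber = majority_decode(a, b, c)  # votes disagree at positions 1 and 3
```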
Pseudo bit error ratio
[ "Mathematics", "Engineering" ]
87
[ "Metrics", "Reliability engineering", "Engineering ratios", "Quantity", "Error detection and correction" ]
41,506
https://en.wikipedia.org/wiki/Personal%20mobility
In Universal Personal Telecommunications (UPT), personal mobility is the ability of a user to access telecommunication services at any UPT terminal on the basis of a personal identifier, and the capability of the network to provide those services in accord with the user's service profile. Personal mobility involves the network's capability to locate the terminal associated with the user for the purposes of addressing, routing, and charging the user for calls. "Access" is intended to convey the concepts of both originating and terminating services. Management of the service profile by the user is not part of personal mobility. The personal mobility aspects of personal communications are based on the UPT number. References Telephone numbers
Personal mobility
[ "Mathematics" ]
138
[ "Mathematical objects", "Numbers", "Telephone numbers" ]
41,507
https://en.wikipedia.org/wiki/Phantom%20circuit
In telecommunications and electrical engineering, a phantom circuit is an electrical circuit derived from suitably arranged wires with one or more conductive paths being a circuit in itself and at the same time acting as one conductor of another circuit. Phantom group A phantom group is composed of three circuits that are derived from two single-channel circuits to form a phantom circuit. Here the phantom circuit is a third circuit derived from two suitably arranged pairs of wires, called side circuits, with each pair of wires being a circuit in itself and at the same time acting as one conductor of the third circuit. The "side circuits" within phantom circuits can be coupled to their respective voltage drops by center-tapped transformers, usually called "repeating coils". The center taps are on the line side of the side circuits. Current from the phantom circuit is split evenly by the center taps. This cancels crosstalk from the phantom circuit to the side circuits. Phantom working increased the number of circuits on long-distance routes in the early 20th century without putting up more wires. Phantoming declined with the adoption of carrier systems. It is theoretically possible to create a phantom circuit from two other phantom circuits and so on up in a pyramid with a maximum 2n-1 circuits being derived from n original circuits. However, more than one level of phantoming is usually impractical. Isolation between the phantom circuit and the side circuits relies on accurate balance of the line and transformers. Imperfect balance results in crosstalk between the phantom and side circuits and this effect accumulates as each level of phantoms is added. Even small levels of crosstalk are unacceptable on analogue telecommunications circuits since speech crosstalk is still intelligible down to quite low levels. 
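The 2n−1 limit mentioned above follows from repeatedly pairing circuits into phantoms; a small counting sketch (illustrative only):

```python
def max_phantom_circuits(n_pairs):
    """Maximum circuits derivable from n physical pairs by repeated phantoming.

    Each level of the pyramid pairs up the circuits of the level below to
    derive new phantom circuits on top of them; summing every level gives
    2n - 1 circuits when n is a power of two.
    """
    total, level = n_pairs, n_pairs
    while level >= 2:
        level //= 2          # each new phantom needs two circuits below it
        total += level
    return total

# 2 pairs -> 2 side circuits + 1 phantom = 3; 4 pairs -> 4 + 2 + 1 = 7
```

As the article notes, more than one level of this pyramid is usually impractical because of accumulating crosstalk.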
Phantom microphone powering Condenser microphones have impedance converter (current amplifier) circuitry that requires powering; in addition, the capsule of any non-electret, non-RF condenser microphone requires a polarizing voltage to be applied. Since the mid- to late 1960s most balanced, professional condenser microphones for recording and broadcast have used phantom powering. It can be provided by outboard AC or battery supplies, but nowadays is most often built into the mixing console, recorder or microphone preamplifier to which the microphones are connected. By far the most common circuit uses +48 V DC fed through a matched pair of 6.8 kΩ resistors for each input channel. This arrangement has been standardized by the IEC and ISO, along with a less-commonly-used arrangement with +12 V DC and 680 Ω feed resistors. As a practical matter, phantom powering allows the same two-conductor shielded cables to be used for both dynamic microphones and condenser microphones, while being harmless to balanced microphones that aren't designed to consume it, since the circuit balance prevents any substantial DC from flowing through the output circuit of those microphones. DC phantom Simple DC signalling can be achieved on a telecommunications line in a similar way to phantom powering of microphones. A switch connected to the transformer centre-tap at one end of the line can operate a similarly connected relay at the other end. The return path is through the ground connection. This arrangement can be used for remotely controlling equipment. Carrier circuit phantoms From the 1950s to around the 1980s, using phantoms on star-quad trunk carrier circuits was a popular method of deriving a high quality broadcast audio circuit. The multiplexed FDM telecommunications carrier system usually did not use the baseband of the cable because it was inconvenient to separate low frequencies with filters. 
On the other hand, a one-way audio phantom could be formed from the two pairs (go and return signals) making up the star-quad cable. Unloaded phantom Unloaded phantom is a phantom configuration of loaded lines (a circuit fitted with loading coils). The idea here is not to create additional circuits. Rather, the purpose is to cancel or greatly reduce the effect of the loading coils fitted to a line. The reason for doing this is that loaded lines have a definite cut-off frequency and it may be desired to equalise the line to a frequency which is higher than this, for example to make a circuit suitable for use by a broadcaster. Ideally, the loading would be removed or reduced for a permanent connection, but this is not feasible for temporary arrangements such as a requirement for outside broadcast. Instead, two circuits in a phantom configuration can be used to greatly reduce the inductance being inserted by the loading coils, and hence the loading effect. It works because the loading coils used on balanced lines have two windings, one for each leg of the circuit. They are both wound on a common core and the windings are so arranged that the magnetic flux induced by both of them is in the same direction. Both windings induce an emf in each other as well as their own self-induction. This effect greatly increases the inductance of the coil and hence its loading effectiveness. By contrast, when the circuit is in the phantom configuration the currents in the two wires of each pair are in the same direction and the magnetic flux is being cancelled. This has precisely the opposite effect and the inductance is greatly reduced. This configuration is most commonly used on the two pairs of a star-quad cable. It is not so successful with other pairs of wires. The difference in the path of the two pairs can easily destroy the balance and results in crosstalk and interference. This configuration can also be called "bunched pairs". 
However, "bunched pairs" can also refer to the straightforward connection of two lines in parallel which is not a phantom circuit and will not reduce the loading. See also Bridge circuit - a closely related concept; the operation of a phantom circuit depends on it being a kind of balanced bridge Single-wire earth return - power transmission using one wire and the Earth as a return conductor Sources and references AT&T: 'Principles of Electricity Applied to Telephone and Telegraph Work', 1953 (PDF-File, 39MB) Communication circuits Telecommunications techniques
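The +48 V / 6.8 kΩ phantom-powering arrangement described in the microphone section above also bounds the current available to a microphone; a back-of-the-envelope sketch (idealized, ignoring cable and output-stage resistance):

```python
# Phantom power: +48 V fed through a matched pair of 6.8 kOhm resistors,
# one per signal leg. Seen from the microphone, the two feed resistors
# act in parallel.
V_SUPPLY = 48.0          # volts, per the IEC arrangement described above
R_FEED = 6800.0          # ohms, each leg
r_source = R_FEED / 2    # 3.4 kOhm effective source resistance

def voltage_at_mic(current_ma):
    """Voltage remaining at the microphone when it draws the given current."""
    return V_SUPPLY - (current_ma / 1000.0) * r_source

# Absolute maximum current (microphone shorting the supply): ~14.1 mA.
short_circuit_ma = V_SUPPLY / r_source * 1000.0
```

The steep trade-off between current draw and remaining voltage is why phantom-powered microphone electronics are designed for only a few milliamps.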
Phantom circuit
[ "Engineering" ]
1,230
[ "Telecommunications engineering", "Communication circuits" ]
41,509
https://en.wikipedia.org/wiki/Phased%20array
In antenna theory, a phased array consists of many small antennas, positioned in an array, next to each other. Instead of sending a radio wave with one antenna, a computer system can shift the phase of the original wave by fractions of a wavelength for each individual antenna in the array. The frequency of the signal is not changed, only the phase, by precisely delaying the signal for each antenna in the array, resulting in a shift of the wave crest and trough for each antenna. This allows the radio wave being sent out to be concentrated in a specific direction by creating a constructive interference pattern from the shifted phases of each individual wave. The resulting signal looks like any other radio wave, being strongest in the desired direction, where the phases of all the individual waves align and constructively interfere. Phased arrays are mainly used at the high-frequency end of the radio spectrum. Smaller wavelengths can be sent with smaller antennas, allowing the total size of the array to be practical. Description A phased array is an electronically scanned array, a computer-controlled array of antennas which creates a beam of radio waves that can be electronically steered to point in different directions without moving the antennas. The general theory of an electromagnetic phased array also finds applications in ultrasonics and medical imaging (phased array ultrasonics) and in optics (optical phased arrays). In a simple array antenna, the radio frequency current from the transmitter is fed to multiple individual antenna elements with the proper phase relationship so that the radio waves from the separate elements combine (superpose) to form beams, to increase power radiated in desired directions and suppress radiation in undesired directions. 
In a phased array, the power from the transmitter is fed to the radiating elements through devices called phase shifters, controlled by a computer system, which can alter the phase or signal delay electronically, thus steering the beam of radio waves in a different direction. Since the size of an antenna array must extend many wavelengths to achieve the high gain needed for narrow beam-width, phased arrays are mainly practical at the high-frequency end of the radio spectrum, in the UHF and microwave bands, in which the operating wavelengths are conveniently small. Phased arrays were originally conceived for use in military radar systems, to steer a beam of radio waves quickly across the sky to detect planes and missiles. These systems are now widely used and have spread to civilian applications such as 5G MIMO for cell phones. The phased array principle is also used in acoustics, and phased arrays of acoustic transducers are used in medical ultrasound imaging scanners (phased array ultrasonics), oil and gas prospecting (reflection seismology), and military sonar systems. The term "phased array" is also used to a lesser extent for non-steerable array antennas in which the phase of the feed power, and thus the radiation pattern of the antenna array, is fixed. For example, AM broadcast radio antennas consisting of multiple mast radiators fed so as to create a specific radiation pattern are also called "phased arrays". Types Phased arrays take multiple forms. However, the four most common are the passive electronically scanned array (PESA), active electronically scanned array (AESA), hybrid beam forming phased array, and digital beam forming (DBF) array. A passive phased array or passive electronically scanned array (PESA) is a phased array in which the antenna elements are connected to a single transmitter and/or receiver, as shown in the first animation at top. PESAs are the most common type of phased array. 
Generally speaking, a PESA uses one receiver/exciter for the entire array. An active phased array or active electronically scanned array (AESA) is a phased array in which each antenna element has an analog transmitter/receiver (T/R) module which creates the phase shifting required to electronically steer the antenna beam. Active arrays are a more advanced, second-generation phased-array technology that are used in military applications; unlike PESAs they can radiate several beams of radio waves at multiple frequencies in different directions simultaneously. However, the number of simultaneous beams is limited by practical reasons of electronic packaging of the beam formers to approximately three simultaneous beams for an AESA. Each beam former has a receiver/exciter connected to it. A digital beam forming (DBF) phased array has a digital receiver/exciter at each element in the array. The signal at each element is digitized by the receiver/exciter. This means that antenna beams can be formed digitally in a field programmable gate array (FPGA) or the array computer. This approach allows for multiple simultaneous antenna beams to be formed. A hybrid beam forming phased array can be thought of as a combination of an AESA and a digital beam forming phased array. It uses subarrays that are active phased arrays (for instance, a subarray may be 64, 128 or 256 elements and the number of elements depends upon system requirements). The subarrays are combined to form the full array. Each subarray has its own digital receiver/exciter. This approach allows clusters of simultaneous beams to be created. A conformal antenna is a phased array in which the individual antennas, instead of being arranged in a flat plane, are mounted on a curved surface. The phase shifters compensate for the different path lengths of the waves due to the antenna elements' varying position on the surface, allowing the array to radiate a plane wave. 
Conformal antennas are used in aircraft and missiles, to integrate the antenna into the curving surface of the aircraft to reduce aerodynamic drag. History Phased array transmission was originally shown in 1905 by Nobel laureate Karl Ferdinand Braun, who demonstrated enhanced transmission of radio waves in one direction. During World War II, Nobel laureate Luis Alvarez used phased array transmission in a rapidly steerable radar system for "ground-controlled approach", a system to aid in the landing of aircraft. At the same time, GEMA in Germany built the Mammut 1. The technique was later adapted for radio astronomy, leading to Nobel Prizes in Physics for Antony Hewish and Martin Ryle after several large phased arrays were developed at the University of Cambridge, such as the Interplanetary Scintillation Array. This design is also used for radar, and is generalized in interferometric radio antennas. By 1966, most phased-array radars used ferrite phase shifters or traveling-wave tubes to dynamically adjust the phase. The AN/SPS-33, installed on the nuclear-powered ships Long Beach and Enterprise around 1961, was claimed to be the only operational 3-D phased array in the world in 1966. The AN/SPG-59 was designed to generate multiple tracking beams from the transmitting array and simultaneously program independent receiving arrays. The first civilian 3D phased array was built in 1960 at the National Aviation Facilities Experimental Center, but was abandoned in 1961. In 2004, Caltech researchers demonstrated the first integrated silicon-based phased array receiver at 24 GHz with 8 elements. This was followed by their demonstration of a CMOS 24 GHz phased array transmitter in 2005 and a fully integrated 77 GHz phased array transceiver with integrated antennas in 2006. In 2007, DARPA researchers announced a 16-element phased-array radar antenna which was also integrated with all the necessary circuits on a single silicon chip and operated at 30–50 GHz. 
The relative amplitudes of—and constructive and destructive interference effects among—the signals radiated by the individual antennas determine the effective radiation pattern of the array. A phased array may be used to point a fixed radiation pattern, or to scan rapidly in azimuth or elevation. Simultaneous electrical scanning in both azimuth and elevation was first demonstrated in a phased array antenna at Hughes Aircraft Company, California in 1957. Applications Broadcasting In broadcast engineering, the term 'phased array' has a meaning different from its normal one: it means an ordinary array antenna, an array of multiple mast radiators designed to radiate a directional radiation pattern, as opposed to a single mast which radiates an omnidirectional pattern. Broadcast phased arrays have fixed radiation patterns and are not 'steered' during operation as other phased arrays are. Phased arrays are used by many AM broadcast radio stations to enhance signal strength and therefore coverage in the city of license, while minimizing interference to other areas. Due to the differences between daytime and nighttime ionospheric propagation at mediumwave frequencies, it is common for AM broadcast stations to change between day (groundwave) and night (skywave) radiation patterns by switching the phase and power levels supplied to the individual antenna elements (mast radiators) daily at sunrise and sunset. For shortwave broadcasts many stations use arrays of horizontal dipoles. A common arrangement uses 16 dipoles in a 4×4 array. Usually this is in front of a wire grid reflector. The phasing is often switchable to allow beam steering in azimuth and sometimes elevation. Radar Phased arrays were invented for radar tracking of ballistic missiles, and because of their fast tracking abilities phased array radars are widely used in military applications. 
For example, because of the rapidity with which the beam can be steered, phased array radars allow a warship to use one radar system for surface detection and tracking (finding ships), air detection and tracking (finding aircraft and missiles) and missile uplink capabilities. Before using these systems, each surface-to-air missile in flight required a dedicated fire-control radar, which meant that radar-guided weapons could only engage a small number of simultaneous targets. Phased array systems can be used to control missiles during the mid-course phase of the missile's flight. During the terminal portion of the flight, continuous-wave fire control directors provide the final guidance to the target. Because the antenna pattern is electronically steered, phased array systems can direct radar beams fast enough to maintain a fire control quality track on many targets simultaneously while also controlling several in-flight missiles. The AN/SPY-1 phased array radar, part of the Aegis Combat System deployed on modern U.S. cruisers and destroyers, "is able to perform search, track and missile guidance functions simultaneously with a capability of over 100 targets." Likewise, the Thales Herakles phased array multi-function radar used in service with France and Singapore has a track capacity of 200 targets and is able to achieve automatic target detection, confirmation and track initiation in a single scan, while simultaneously providing mid-course guidance updates to the MBDA Aster missiles launched from the ship. The German Navy and the Royal Dutch Navy have developed the Active Phased Array Radar System (APAR). The MIM-104 Patriot and other ground-based antiaircraft systems use phased array radar for similar benefits. Phased arrays are used in naval sonar, in active (transmit and receive) and passive (receive only) and hull-mounted and towed array sonar. Space probe communication The MESSENGER spacecraft was a space probe mission to the planet Mercury (2011–2015). 
This was the first deep-space mission to use a phased-array antenna for communications. The radiating elements are circularly polarized, slotted waveguides. The antenna, which uses the X band, has 26 radiating elements and can degrade gracefully. Weather research usage The National Severe Storms Laboratory has been using a SPY-1A phased array antenna, provided by the US Navy, for weather research at its Norman, Oklahoma facility since April 23, 2003. It is hoped that research will lead to a better understanding of thunderstorms and tornadoes, eventually leading to increased warning times and enhanced prediction of tornadoes. Current project participants include the National Severe Storms Laboratory and National Weather Service Radar Operations Center, Lockheed Martin, United States Navy, University of Oklahoma School of Meteorology, School of Electrical and Computer Engineering, and Atmospheric Radar Research Center, Oklahoma State Regents for Higher Education, the Federal Aviation Administration, and Basic Commerce and Industries. The project includes research and development, future technology transfer and potential deployment of the system throughout the United States. It is expected to take 10 to 15 years to complete, and initial construction cost approximately $25 million. A team from Japan's RIKEN Advanced Institute for Computational Science (AICS) has begun experimental work on using phased-array radar with a new algorithm for instant weather forecasts. Optics Within the visible or infrared spectrum of electromagnetic waves it is possible to construct optical phased arrays. They are used in wavelength multiplexers and filters for telecommunication purposes, laser beam steering, and holography. Synthetic array heterodyne detection is an efficient method for multiplexing an entire phased array onto a single element photodetector. 
The dynamic beam forming in an optical phased array transmitter can be used to electronically raster or vector scan images without using lenses or mechanically moving parts in a lensless projector. Optical phased array receivers have been demonstrated to be able to act as lensless cameras by selectively looking at different directions. Satellite broadband internet transceivers Starlink is a low Earth orbit satellite constellation that is under construction. It is designed to provide broadband internet connectivity to consumers; the user terminals of the system will use phased array antennas. Radio-frequency identification (RFID) By 2014, phased array antennas were integrated into RFID systems to increase the area of coverage of a single system by 100% to while still using traditional passive UHF tags. Human-machine interfaces (HMI) A phased array of acoustic transducers, called an airborne ultrasound tactile display (AUTD), was developed in 2008 at the University of Tokyo's Shinoda Lab to induce tactile feedback. This system was demonstrated to enable a user to interactively manipulate virtual holographic objects. Radio astronomy Phased array feeds (PAFs) have recently been used at the focus of radio telescopes to provide many beams, giving the radio telescope a very wide field of view. Three examples are the ASKAP telescope in Australia, the Apertif upgrade to the Westerbork Synthesis Radio Telescope in the Netherlands, and the Florida Space Institute in the United States. Critical theory and arithmetic Array factor The total directivity of a phased array is a result of the gain of the individual array elements and the directivity due to their positioning in the array. This latter component is closely tied to (but not equal to) the array factor. 
In a rectangular planar phased array of dimensions M × N, with inter-element spacings d_x and d_y respectively, the array factor can be calculated as: AF(θ, φ) = Σ_{m=1..M} Σ_{n=1..N} a_mn · e^{j(m−1)(k·d_x·sinθ·cosφ + β_x)} · e^{j(n−1)(k·d_y·sinθ·sinφ + β_y)}. Here, θ and φ are the spherical angles at which the array factor is evaluated, in the coordinate frame of the array. The factors β_x and β_y are the progressive phase shifts used to steer the beam electronically, and the factors a_mn are the excitation coefficients of the individual elements. The steering direction is given in the same coordinate frame by the angles θ_0 and φ_0, which are used in the calculation of the progressive phase: β_x = −k·d_x·sinθ_0·cosφ_0 and β_y = −k·d_y·sinθ_0·sinφ_0. In all of the above equations, k = 2π/λ is the wavenumber of the frequency used in transmission. These equations can be solved to predict the nulls, main lobe, and grating lobes of the array. Referring to the exponents in the array factor equation, the main lobe and grating lobes occur at the integer solutions (p, q) of: k·d_x·(sinθ·cosφ − sinθ_0·cosφ_0) = 2πp and k·d_y·(sinθ·sinφ − sinθ_0·sinφ_0) = 2πq. Worked example It is common in engineering to express phased array values in decibels through AF_dB = 20·log10|AF|. Recalling the complex exponential in the array factor equation above, what is usually meant by "array factor" is the magnitude of the summed phasor produced at the end of the array factor calculation. For ease of visualization, the array factor can be analyzed for a given input azimuth and elevation, mapped to the array angles θ and φ. When such a pattern is computed for an example array steered to boresight (θ_0 = 0°), the plotted values are typically clipped at a minimum of −50 dB; in reality, null points in the array factor pattern have values significantly smaller than this. Different types of phased arrays There are two main types of beamformers. 
These are time domain beamformers and frequency domain beamformers. From a theoretical point of view, both are in principle the same operation, with just a Fourier transform allowing conversion from one type to the other. A graduated attenuation window is sometimes applied across the face of the array to improve side-lobe suppression performance, in addition to the phase shift. A time domain beamformer works by introducing time delays. The basic operation is called "delay and sum". It delays the incoming signal from each array element by a certain amount of time, and then adds them together. A Butler matrix allows several beams to be formed simultaneously, or one beam to be scanned through an arc. The most common kind of time domain beamformer is the serpentine waveguide. Active phased array designs use individual delay lines that are switched on and off. Yttrium iron garnet phase shifters vary the phase delay using the strength of a magnetic field. There are two different types of frequency domain beamformers. The first type separates the different frequency components that are present in the received signal into multiple frequency bins (using either a discrete Fourier transform (DFT) or a filterbank). When different delay-and-sum beamformers are applied to each frequency bin, the result is that the main lobe simultaneously points in multiple different directions at each of the different frequencies. This can be an advantage for communication links, and is used with the SPS-48 radar. The other type of frequency domain beamformer makes use of spatial frequency. Discrete samples are taken from each of the individual array elements. The samples are processed using a DFT. The DFT introduces multiple different discrete phase shifts during processing. The outputs of the DFT are individual channels that correspond with evenly spaced beams formed simultaneously. A 1-dimensional DFT produces a fan of different beams. 
A 2-dimensional DFT produces beams with a pineapple configuration. These techniques are used to create two kinds of phased array. Dynamic: an array of variable phase shifters is used to move the beam. Fixed: the beam position is stationary with respect to the array face, and the whole antenna is moved. There are two further sub-categories that modify the kind of dynamic array or fixed array. Active: amplifiers or processors are in each phase shifter element. Passive: a large central amplifier drives attenuating phase shifters. Dynamic phased array Each array element incorporates an adjustable phase shifter. These are collectively used to move the beam with respect to the array face. Dynamic phased arrays require no physical movement to aim the beam. The beam is moved electronically. This can produce antenna motion fast enough to use a small pencil beam to simultaneously track multiple targets while searching for new targets using just one radar set, a capability known as track-while-search. As an example, an antenna with a 2-degree beam and a pulse rate of 1 kHz will require approximately 8 seconds to cover an entire hemisphere consisting of 8,000 pointing positions. This configuration provides 12 opportunities to detect a vehicle over a range of , which is suitable for military applications. The position of mechanically steered antennas can be predicted, which can be used to create electronic countermeasures that interfere with radar operation. The flexibility resulting from phased array operation allows beams to be aimed at random locations, which eliminates this vulnerability. This is also desirable for military applications. Fixed phased array Fixed phased array antennas are typically used to create an antenna with a more desirable form factor than the conventional parabolic reflector or Cassegrain reflector. Fixed phased arrays incorporate fixed phase shifters. 
For example, most commercial FM radio and TV antenna towers use a collinear antenna array, which is a fixed phased array of dipole elements. In radar applications, this kind of phased array is physically moved during the track and scan process. There are two configurations: multiple frequencies with a delay line, and multiple adjacent beams. The SPS-48 radar uses multiple transmit frequencies with a serpentine delay line along the left side of the array to produce a vertical fan of stacked beams. Each frequency experiences a different phase shift as it propagates down the serpentine delay line, which forms different beams. A filter bank is used to split apart the individual receive beams. The antenna is mechanically rotated. Semi-active radar homing uses monopulse radar that relies on a fixed phased array to produce multiple adjacent beams that measure angle errors. This form factor is suitable for gimbal mounting in missile seekers. Active phased array Active electronically scanned array (AESA) elements incorporate transmit amplification with phase shift in each antenna element (or group of elements). Each element also includes receive pre-amplification. The phase shifter setting is the same for transmit and receive. Active phased arrays do not require phase reset after the end of the transmit pulse, which makes them compatible with Doppler radar and pulse-Doppler radar. Passive phased array Passive phased arrays typically use large amplifiers that produce all of the microwave transmit signal for the antenna. Phase shifters typically consist of waveguide elements controlled by a magnetic field, voltage gradient, or equivalent technology. The phase shift process used with passive phased arrays typically puts the receive beam and transmit beam into diagonally opposite quadrants. The sign of the phase shift must be inverted after the transmit pulse is finished and before the receive period begins to place the receive beam into the same location as the transmit beam. 
That requires a phase impulse that degrades sub-clutter visibility performance on Doppler radar and Pulse-Doppler radar. As an example, Yttrium iron garnet phase shifters must be changed after transmit pulse quench and before receiver processing starts to align transmit and receive beams. That impulse introduces FM noise that degrades clutter performance. Passive phased array design is used in the AEGIS Combat System for direction-of-arrival estimation. See also Aperture synthesis History of smart antennas Huygens–Fresnel principle Interferometric synthetic-aperture radar Inverse synthetic-aperture radar Multi-user MIMO Optical heterodyne detection Radar MASINT Reconfigurable antenna Side-scan sonar Single-frequency network Smart antenna Standard linear array Synthetic-aperture radar Synthetic aperture sonar Synthetically thinned aperture radar Thinned-array curse Wave field synthesis References External links Radar Research and Development - Phased Array Radar—National Severe Storms Laboratory Shipboard Phased Array Radars NASA Report: MMICs For Multiple Scanning Beam Antennas for Space Applications Principle of Phased Array 'Phased Array' microphone system of Tony Faulkner Principles of Phased Array systems - Tutorial 1 Antennas (radio) Broadcast engineering Domes Radar Radio frequency antenna types Wireless locating
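The planar array factor described under "Critical theory and arithmetic" can be evaluated numerically. A minimal numpy sketch, assuming uniform (unit) excitation, half-wavelength element spacing, and the standard progressive-phase steering rule; the 8×8 size is purely illustrative:

```python
import numpy as np

def array_factor_db(M, N, dx, dy, theta, phi, theta0=0.0, phi0=0.0):
    """Magnitude of the planar array factor in dB, uniform excitation.

    dx, dy are element spacings in wavelengths; theta/phi the observation
    angles and theta0/phi0 the steering angles (radians, array frame).
    """
    k = 2 * np.pi  # wavenumber times wavelength (spacings are in wavelengths)
    bx = -k * dx * np.sin(theta0) * np.cos(phi0)   # progressive phase, x
    by = -k * dy * np.sin(theta0) * np.sin(phi0)   # progressive phase, y
    m = np.arange(M)[:, None]
    n = np.arange(N)[None, :]
    phase = (m * (k * dx * np.sin(theta) * np.cos(phi) + bx)
             + n * (k * dy * np.sin(theta) * np.sin(phi) + by))
    af = np.abs(np.exp(1j * phase).sum())
    return 20 * np.log10(max(af, 1e-30))  # floor avoids log of exact nulls

# Steered to boresight, an 8x8 half-wavelength array peaks at broadside,
# where all 64 element phasors add in phase: 20*log10(64) dB.
peak = array_factor_db(8, 8, 0.5, 0.5, theta=0.0, phi=0.0)
```

Sweeping theta and phi over a hemisphere with this function reproduces the main lobe, nulls, and side lobes that the equations above predict.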
Phased array
[ "Technology", "Engineering" ]
4,612
[ "Broadcast engineering", "Electronic engineering", "Wireless locating" ]
41,510
https://en.wikipedia.org/wiki/Phase%20distortion
In signal processing, phase distortion or phase-frequency distortion is distortion, that is, change in the shape of the waveform, that occurs when (a) a filter's phase response is not linear over the frequency range of interest, that is, the phase shift introduced by a circuit or device is not directly proportional to frequency, or (b) the zero-frequency intercept of the phase-frequency characteristic is not 0 or an integral multiple of 2π radians. Audibility of phase distortion Grossly changed phase relationships, without changing amplitudes, can be audible but the degree of audibility of the type of phase shifts expected from typical sound systems remains debated. See also Audio system measurements Phase noise References Electrical parameters Audio amplifier specifications
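The definition above — phase shift not directly proportional to frequency — can be illustrated numerically. A minimal sketch, assuming a simple two-component test signal: a phase shift proportional to frequency is just a time delay and leaves the waveform shape unchanged, while the same phase shift applied to every frequency regardless of its value changes the shape even though component amplitudes are untouched.

```python
import math

N = 256
t = [n / N for n in range(N)]

def two_tone(phi1, phi2):
    # a fundamental plus its 3rd harmonic, with the given component phases
    return [math.sin(2 * math.pi * 1 * x + phi1)
            + 0.5 * math.sin(2 * math.pi * 3 * x + phi2) for x in t]

ref = two_tone(0.0, 0.0)
delay = 2 * math.pi / 8  # a delay of 1/8 of the fundamental period

# Linear phase: shift proportional to frequency (1x for the fundamental, 3x for the harmonic)
linear = two_tone(-1 * delay, -3 * delay)
# Nonlinear phase: the same shift applied to both components
nonlinear = two_tone(-delay, -delay)

# A linear-phase shift is a pure time delay: circularly shifting ref by 1/8
# of the record reproduces it exactly, so the waveform shape is preserved.
shifted_ref = ref[-N // 8:] + ref[:-N // 8]
print(max(abs(a - b) for a, b in zip(linear, shifted_ref)))     # ~0: shape unchanged
print(max(abs(a - b) for a, b in zip(nonlinear, shifted_ref)))  # clearly nonzero: shape distorted
```

In the nonlinear case both components keep their amplitudes, so the amplitude spectrum is identical; only the waveform shape differs — exactly the situation the audibility debate concerns.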
Phase distortion
[ "Engineering" ]
150
[ "Electronic engineering", "Electrical engineering", "Audio engineering", "Audio amplifier specifications", "Electrical parameters" ]
41,519
https://en.wikipedia.org/wiki/Photic%20zone
The photic zone (or euphotic zone, epipelagic zone, or sunlight zone) is the uppermost layer of a body of water that receives sunlight, allowing phytoplankton to perform photosynthesis. It undergoes a series of physical, chemical, and biological processes that supply nutrients into the upper water column. The photic zone is home to the majority of aquatic life due to the activity (primary production) of the phytoplankton. The thicknesses of the photic and euphotic zones vary with the intensity of sunlight as a function of season and latitude, and with the degree of water turbidity. The bottommost, or aphotic, zone is the region of perpetual darkness that lies beneath the photic zone and includes most of the ocean waters. Photosynthesis in the photic zone In the photic zone, the photosynthesis rate exceeds the respiration rate. This is due to the abundant solar energy, which is used as an energy source for photosynthesis by primary producers such as phytoplankton. Abundant sunlight lets these phytoplankton grow extremely quickly. In fact, ninety-five percent of photosynthesis in the ocean occurs in the photic zone. Deeper down, below the compensation point, there is little to no phytoplankton because of insufficient sunlight. The zone which extends from the base of the euphotic zone to the aphotic zone is sometimes called the dysphotic zone. Life in the photic zone Ninety percent of marine life lives in the photic zone, which is approximately two hundred meters deep. This includes phytoplankton, such as dinoflagellates, diatoms, cyanobacteria, coccolithophores, and cryptomonads. It also includes zooplankton, the consumers in the photic zone, which comprise both carnivores and herbivores. Copepods, small crustaceans, are distributed throughout the photic zone. 
Finally, there are nekton (animals that can propel themselves, like fish, squids, and crabs), which are the largest and the most obvious animals in the photic zone, though their numbers are the smallest among all the groups. Phytoplankton are microscopic plants living suspended in the water column that have little or no means of motility. They are primary producers that use solar energy as a food source. Detritivores and scavengers are rare in the photic zone. Microbial decomposition of dead organisms begins here and continues once the bodies sink to the aphotic zone, where they form the most important source of nutrients for deep-sea organisms. The depth of the photic zone depends on the transparency of the water. If the water is very clear, the photic zone can become very deep; if it is very murky, it can be only fifty feet (fifteen meters) deep. Animals within the photic zone use the cycle of light and dark as an important environmental cue: migration is directly linked to it, with fishes using dusk and dawn as signals for when to migrate, so the light cycle of the photic zone provides them with a sense of time. Such animals include herrings, sardines, and other fishes that consistently live within the photic zone. Nutrient uptake in the photic zone Due to biological uptake, the photic zone has relatively low levels of nutrient concentrations. As a result, phytoplankton do not receive enough nutrients when there is high water-column stability. The spatial distribution of organisms can be controlled by a number of factors. Physical factors include temperature, hydrostatic pressure, and turbulent mixing such as the upward turbulent flux of inorganic nitrogen across the nutricline. Chemical factors include oxygen and trace elements. Biological factors include grazing and migrations. Upwelling carries nutrients from the deep waters into the photic zone, strengthening phytoplankton growth. 
The remixing and upwelling eventually bring nutrient-rich wastes back into the photic zone. The Ekman transport additionally brings more nutrients to the photic zone. The frequency of nutrient pulses affects competition among phytoplankton. Being the first link in the food chain, what happens to phytoplankton creates a rippling effect for other species. Besides phytoplankton, many other animals also live in this zone and utilize these nutrients. The majority of ocean life occurs in the photic zone, the smallest ocean zone by water volume. The photic zone, although small, has a large impact on those who reside in it. Photic zone depth The depth of the euphotic zone is, by definition, where light is attenuated to 1% of its surface intensity. Accordingly, its thickness depends on the extent of light attenuation in the water column. As incoming light at the surface can vary widely, this says little about the net growth of phytoplankton. Typical euphotic depths vary from only a few centimetres in highly turbid eutrophic lakes to around 200 meters in the open ocean. The euphotic depth also varies with seasonal changes in turbidity, which can be strongly driven by phytoplankton concentrations, such that the depth of the photic zone often decreases as primary production increases. Below the euphotic depth, the respiration rate is greater than the photosynthesis rate. Phytoplankton production is important because it plays a prominent role in supporting other food webs. Light attenuation Most of the solar energy reaching the Earth is in the range of visible light, with wavelengths between about 400 and 700 nm. Each colour of visible light has a unique wavelength, and together they make up white light. The shortest wavelengths are on the violet and ultraviolet end of the spectrum, while the longest wavelengths are at the red and infrared end. 
In between, the colours of the visible spectrum comprise the familiar “ROYGBIV”: red, orange, yellow, green, blue, indigo, and violet. Water is very effective at absorbing incoming light, so the amount of light penetrating the ocean declines rapidly (is attenuated) with depth. At one metre depth only 45% of the solar energy that falls on the ocean surface remains. At 10 metres depth only 16% of the light is still present, and only 1% of the original light is left at 100 metres. No light penetrates beyond 1000 metres. In addition to overall attenuation, the oceans absorb the different wavelengths of light at different rates. The wavelengths at the extreme ends of the visible spectrum are attenuated faster than those wavelengths in the middle. Longer wavelengths are absorbed first; red is absorbed in the upper 10 metres, orange by about 40 metres, and yellow disappears before 100 metres. Shorter wavelengths penetrate further, with blue and green light reaching the deepest depths. This is why things appear blue underwater. How colours are perceived by the eye depends on the wavelengths of light that are received by the eye. An object appears red to the eye because it reflects red light and absorbs other colours, so the only colour reaching the eye is red. Blue is the only colour of light available at depth underwater, so it is the only colour that can be reflected back to the eye, and everything has a blue tinge under water. A red object at depth will not appear red to us because there is no red light available to reflect off the object. Objects in water will only appear as their real colours near the surface, where all wavelengths of light are still available, or if the other wavelengths of light are provided artificially, such as by illuminating the object with a dive light. 
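The 1% euphotic-depth definition can be expressed with a simple exponential (Beer–Lambert) attenuation model. This is a sketch only: as the passage notes, the ocean attenuates each wavelength at a different rate, so a single attenuation coefficient k is an idealization, and the function names here are illustrative.

```python
import math

def light_fraction(depth_m, k_per_m):
    """Fraction of surface irradiance remaining at depth, assuming
    simple exponential (Beer-Lambert) attenuation with coefficient k."""
    return math.exp(-k_per_m * depth_m)

def euphotic_depth(k_per_m):
    """Depth (m) at which light falls to 1% of its surface value."""
    return math.log(100) / k_per_m

# Clear open-ocean water: a 1% light level near 100 m implies k of about 0.046 per metre.
k_open = math.log(100) / 100
print(round(euphotic_depth(k_open)))  # 100

# Turbid coastal water attenuates far faster; with k = 0.3 per metre the
# euphotic zone is only about 15 m deep, matching the "fifty feet" figure above.
print(round(euphotic_depth(0.3), 1))  # 15.4
```

The same model makes clear why euphotic depth shrinks as primary production rises: more phytoplankton means a larger k, and the 1% depth scales as 1/k.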
Water in the open ocean appears clear and blue because it contains much less particulate matter, such as phytoplankton or other suspended particles, and the clearer the water, the deeper the light penetration. Blue light penetrates deeply and is scattered by the water molecules, while all other colours are absorbed; thus the water appears blue. On the other hand, coastal water often appears greenish. Coastal water contains much more suspended silt and algae and microscopic organisms than the open ocean. Many of these organisms, such as phytoplankton, absorb light in the blue and red range through their photosynthetic pigments, leaving green as the dominant wavelength of reflected light. Therefore the higher the phytoplankton concentration in water, the greener it appears. Small silt particles may also absorb blue light, further shifting the colour of water away from blue when there are high concentrations of suspended particles. The ocean can be divided into depth layers depending on the amount of light penetration, as discussed in pelagic zone. The upper 200 metres is referred to as the photic or euphotic zone. This represents the region where enough light can penetrate to support photosynthesis, and it corresponds to the epipelagic zone. From 200 to 1000 metres lies the dysphotic zone, or the twilight zone (corresponding with the mesopelagic zone). There is still some light at these depths, but not enough to support photosynthesis. Below 1000 metres is the aphotic (or midnight) zone, where no light penetrates. This region includes the majority of the ocean volume, which exists in complete darkness. Paleoclimatology Phytoplankton are unicellular microorganisms which form the base of the ocean food chains. They are dominated by diatoms, which grow silicate shells called frustules. When diatoms die their shells can settle on the seafloor and become microfossils. Over time, these microfossils become buried as opal deposits in the marine sediment. 
Paleoclimatology is the study of past climates. Proxy data are used to relate elements collected in modern-day sedimentary samples to climatic and oceanic conditions in the past. Paleoclimate proxies refer to preserved or fossilized physical markers which serve as substitutes for direct meteorological or ocean measurements. An example of proxies is the use of diatom isotope records of δ13C, δ18O, and δ30Si (δ13Cdiatom, δ18Odiatom, and δ30Sidiatom). In 2015, Swann and Snelling used these isotope records to document historic changes in the photic zone conditions of the north-west Pacific Ocean, including nutrient supply and the efficiency of the soft-tissue biological pump, from the modern day back to marine isotope stage 5e, which coincides with the last interglacial period. Peaks in opal productivity in the marine isotope stage are associated with the breakdown of the regional halocline stratification and increased nutrient supply to the photic zone. The initial development of the halocline and stratified water column has been attributed to the onset of major Northern Hemisphere glaciation at 2.73 Ma, which increased the flux of freshwater to the region, via increased monsoonal rainfall and/or glacial meltwater, and sea surface temperatures. The decrease of abyssal water upwelling associated with this may have contributed to the establishment of globally cooler conditions and the expansion of glaciers across the Northern Hemisphere from 2.73 Ma. While the halocline appears to have prevailed through the late Pliocene and early Quaternary glacial–interglacial cycles, other studies have shown that the stratification boundary may have broken down in the late Quaternary at glacial terminations and during the early part of interglacials. Phytoplankton side notes Phytoplankton are restricted to the photic zone, as their growth is completely dependent upon photosynthesis; this confines them to roughly the upper 50–100 m of the ocean. 
Growth can also be supported by inputs from land, for example minerals dissolved from rocks and mineral nutrients from generations of plants and animals that have made their way into the photic zone. An increase in the amount of phytoplankton also creates an increase in zooplankton, which feed on the phytoplankton at the bottom of the food chain. Dimethylsulfide Dimethylsulfide loss within the photic zone is controlled by microbial uptake and photochemical degradation. The compound helps regulate the sulfur cycle and ecology within the ocean. Marine bacteria, algae, corals, and most other organisms within the ocean release it, via a range of gene families. However, the compound can be toxic to humans if swallowed, absorbed through the skin, or inhaled. Proteins within plants and animals depend on this compound, making it a significant part of the ecology of the photic zone. See also Electromagnetic absorption by water Epipelagic fish Mesophotic coral reef References Aquatic ecology Oceanographical terminology
Photic zone
[ "Biology" ]
2,709
[ "Aquatic ecology", "Ecosystems" ]
41,524
https://en.wikipedia.org/wiki/Renaissance%20architecture
Renaissance architecture is the European architecture of the period between the early 15th and early 16th centuries in different regions, demonstrating a conscious revival and development of certain elements of ancient Greek and Roman thought and material culture. Stylistically, Renaissance architecture followed Gothic architecture and was succeeded by Baroque architecture and neoclassical architecture. Developed first in Florence, with Filippo Brunelleschi as one of its innovators, the Renaissance style quickly spread to other Italian cities. The style was carried to other parts of Europe at different dates and with varying degrees of impact. Renaissance style places emphasis on symmetry, proportion, geometry and the regularity of parts, as demonstrated in the architecture of classical antiquity and in particular ancient Roman architecture, of which many examples remained. Orderly arrangements of columns, pilasters and lintels, as well as the use of semicircular arches, hemispherical domes, niches and aediculae replaced the more complex proportional systems and irregular profiles of medieval buildings. Historiography The word "Renaissance" derives from the term rinascita, meaning rebirth, which first appeared in Giorgio Vasari's Lives of the Most Excellent Painters, Sculptors, and Architects, 1550. Although the term Renaissance was used first by the French historian Jules Michelet, it was given its more lasting definition by the Swiss historian Jacob Burckhardt, whose book The Civilization of the Renaissance in Italy (Die Kultur der Renaissance in Italien, 1860; English translation by S. G. C. Middlemore, 2 vols., London, 1878) was influential in the development of the modern interpretation of the Italian Renaissance. 
The folio of measured drawings Édifices de Rome moderne; ou, Recueil des palais, maisons, églises, couvents et autres monuments (The Buildings of Modern Rome), first published in 1840 by Paul Letarouilly, also played an important part in the revival of interest in this period (see also Erwin Panofsky, Renaissance and Renascences in Western Art, New York: Harper and Row, 1960). The Renaissance style was recognized by contemporaries in the term "all'antica", or "in the ancient manner" (of the Romans). Principal phases Historians often divide the Renaissance in Italy into three phases. Whereas art historians might talk of an Early Renaissance period, in which they include developments in 14th-century painting and sculpture, this is usually not the case in architectural history. The bleak economic conditions of the late 14th century did not produce buildings that are considered to be part of the Renaissance. As a result, the word Renaissance among architectural historians usually applies to the period 1400 to , or later in the case of non-Italian Renaissances. Historians often use the following designations: Quattrocento () During the Quattrocento, sometimes known as the Early Renaissance, concepts of architectural order were explored and rules were formulated. The study of classical antiquity led in particular to the adoption of Classical detail and ornamentation. Space, as an element of architecture, was used differently than it was in the Middle Ages. Space was organised by proportional logic, its form and rhythm subject to geometry, rather than being created by intuition as in Medieval buildings. The prime example of this is the Basilica of San Lorenzo, Florence by Filippo Brunelleschi (1377–1446). High Renaissance () During the High Renaissance, concepts derived from classical antiquity were developed and used with greater confidence. 
The most representative architect is Donato Bramante (1444–1514), who expanded the applicability of classical architecture to contemporary buildings. His Tempietto di San Pietro in Montorio (1503) was directly inspired by circular Roman temples. He was, however, hardly a slave to the classical forms and it was his style that was to dominate Italian architecture in the 16th century. Mannerism () During the Mannerist period, architects experimented with using architectural forms to emphasize solid and spatial relationships. The Renaissance ideal of harmony gave way to freer and more imaginative rhythms. The best known architect associated with the Mannerist style was Michelangelo (1475–1564), who frequently used the giant order in his architecture, a large pilaster that stretches from the bottom to the top of a façade. He used this in his design for the Piazza del Campidoglio in Rome. Prior to the 20th century, the term Mannerism had negative connotations, but it is now used to describe the historical period in more general non-judgemental terms. From Renaissance to Baroque As the new style of architecture spread out from Italy, most other European countries developed a sort of Proto-Renaissance style, before the construction of fully formulated Renaissance buildings. Each country in turn then grafted its own architectural traditions to the new style, so that Renaissance buildings across Europe are diversified by region. Within Italy the evolution of Renaissance architecture into Mannerism, with widely diverging tendencies in the work of Michelangelo, Giulio Romano and Andrea Palladio, led to the Baroque style in which the same architectural vocabulary was used for very different rhetoric. Outside Italy, Baroque architecture was more widespread and fully developed than the Renaissance style, with significant buildings as far afield as Mexico and the Philippines. 
History Development in Italy Italy of the 15th century, and the city of Florence in particular, was home to the Renaissance. It is in Florence that the new architectural style had its beginning, not slowly evolving in the way that Gothic grew out of Romanesque, but consciously brought to being by particular architects who sought to revive the order of a past "Golden Age". The scholarly approach to the architecture of the ancients coincided with the general revival of learning. A number of factors were influential in bringing this about. Architectural Italian architects had always preferred forms that were clearly defined and structural members that expressed their purpose. Many Tuscan Romanesque buildings demonstrate these characteristics, as seen in the Florence Baptistery and Pisa Cathedral. Italy had never fully adopted the Gothic style of architecture. Apart from Milan Cathedral (influenced by French Rayonnant Gothic), few Italian churches show the emphasis on the vertical, the clustered shafts, ornate tracery and complex ribbed vaulting that characterise Gothic in other parts of Europe. The presence, particularly in Rome, of ancient architectural remains showing the ordered Classical style provided an inspiration to artists at a time when philosophy was also turning towards the Classical. Political In the 15th century, Florence and Venice extended their power through much of the area that surrounded them, making the movement of artists possible. This enabled Florence to have significant artistic influence in Milan, and through Milan, France. In 1377, the return of the Pope from the Avignon Papacy and the re-establishment of the Papal court in Rome brought wealth and importance to that city, as well as a renewal in the importance of the Pope in Italy, which was further strengthened by the Council of Constance in 1417. Successive Popes, especially Julius II, 1503–13, sought to extend the Papacy's temporal power throughout Italy. 
Commercial In the early Renaissance, Venice controlled sea trade over goods from the East. The large towns of Northern Italy were prosperous through trade with the rest of Europe, Genoa providing a seaport for the goods of France and Spain; Milan and Turin being centres of overland trade, and maintaining substantial metalworking industries. Trade brought wool from England to Florence, ideally located on the river for the production of fine cloth, the industry on which its wealth was founded. By dominating Pisa, Florence gained a seaport, and became the most powerful state in Tuscany. In this commercial climate, one family in particular turned their attention from trade to the lucrative business of money-lending. The Medici became the chief bankers to the princes of Europe, becoming virtually princes themselves as they did so, by reason of both wealth and influence. Along the trade routes, and thus offered some protection by commercial interest, moved not only goods but also artists, scientists and philosophers. Religious The return of the Pope Gregory XI from Avignon in September 1377 and the resultant new emphasis on Rome as the center of Christian spirituality, brought about a surge in the building of churches in Rome such as had not taken place for nearly a thousand years. This commenced in the mid 15th century and gained momentum in the 16th century, reaching its peak in the Baroque period. The construction of the Sistine Chapel with its uniquely important decorations and the entire rebuilding of St. Peter's Basilica, one of Christendom's most significant churches, were part of this process. In the wealthy Republic of Florence, the impetus for church-building was more civic than spiritual. The unfinished state of the enormous Florence Cathedral dedicated to the Blessed Virgin Mary did no honour to the city under her patronage. 
However, as the technology and finance were found to complete it, the rising dome did credit not only to the Virgin Mary, its architect and the Church but also to the Signoria, the Guilds and the sectors of the city from which the manpower to construct it was drawn. The dome inspired further religious works in Florence. Philosophic The development of printed books, the rediscovery of ancient writings, the expanding of political and trade contacts and the exploration of the world all increased knowledge and the desire for education. The reading of philosophies that were not based on Christian theology led to the development of humanism through which it was clear that while God had established and maintained order in the Universe, it was the role of Man to establish and maintain order in Society. Civil Through humanism, civic pride and the promotion of civil peace and order were seen as the marks of citizenship. This led to the building of structures such as Brunelleschi's Hospital of the Innocents with its elegant colonnade forming a link between the charitable building and the public square, and the Laurentian Library where the collection of books established by the Medici family could be consulted by scholars. Some major ecclesiastical building works were also commissioned, not by the church, but by guilds representing the wealth and power of the city. Brunelleschi's dome at Florence Cathedral, more than any other building, belonged to the populace because the construction of each of the eight segments was achieved by a different quarter of the city. Patronage As in the Platonic Academy of Athens, it was seen by those of Humanist understanding that those people who had the benefit of wealth and education ought to promote the pursuit of learning and the creation of that which was beautiful. 
To this end, wealthy families—the Medici of Florence, the Gonzaga of Mantua, the Farnese in Rome, the Sforzas in Milan—gathered around them people of learning and ability, promoting the skills and creating employment for the most talented artists and architects of their day. Rise of architectural theory During the Renaissance, architecture became not only a question of practice, but also a matter for theoretical discussion. Printing played a large role in the dissemination of ideas. The first treatise on architecture was De re aedificatoria ("On the Subject of Building") by Leon Battista Alberti in 1450. It was to some degree dependent on Vitruvius's De architectura, a manuscript of which was discovered in 1414 in a library in Switzerland. De re aedificatoria in 1485 became the first printed book on architecture. Sebastiano Serlio (1475 – c. 1554) produced the next important text, the first volume of which appeared in Venice in 1537; it was entitled Regole generali d'architettura ("General Rules of Architecture"). It is known as Serlio's "Fourth Book" since it was the fourth in Serlio's original plan of a treatise in seven books. In all, five books were published. In 1570, Andrea Palladio (1508–1580) published I quattro libri dell'architettura ("The Four Books of Architecture") in Venice. This book was widely printed and responsible to a great degree for spreading the ideas of the Renaissance through Europe. All these books were intended to be read and studied not only by architects, but also by patrons. Spread of the Renaissance in Italy In the 15th century the courts of certain other Italian states became centres for spreading of Renaissance philosophy, art and architecture. In Mantua at the court of the Gonzaga, Alberti designed two churches, the Basilica of Sant'Andrea and San Sebastiano. Urbino was an important centre with the Ducal Palace being constructed for Federico da Montefeltro in the mid 15th century. 
The Duke employed Luciano Laurana from Dalmatia, renowned for his expertise at fortification. The design incorporates much of the earlier medieval building and includes an unusual turreted three-storeyed façade. Laurana was assisted by Francesco di Giorgio Martini. Later parts of the building are clearly Florentine in style, particularly the inner courtyard, but it is not known who the designer was. Ferrara, under the Este, was expanded in the late 15th century, with several new palaces being built such as the Palazzo dei Diamanti and Palazzo Schifanoia for Borso d'Este. In Milan, under the Visconti, the Certosa di Pavia was completed, and then later under the Sforza, the Castello Sforzesco was built. Venetian Renaissance architecture developed a particularly distinctive character because of local conditions. San Zaccaria received its Renaissance façade at the hands of Antonio Gambello and Mauro Codussi, begun in the 1480s. Giovanni Maria Falconetto, the Veronese architect-sculptor, introduced Renaissance architecture to Padua with the Loggia and Odeo Cornaro in the garden of Alvise Cornaro. In southern Italy, Renaissance masters were called to Naples by Alfonso V of Aragon after his conquest of the Kingdom of Naples. The most notable examples of Renaissance architecture in that city are the Cappella Caracciolo, attributed to Bramante, and the Palazzo Orsini di Gravina, built by Gabriele d'Angelo between 1513 and 1549. Characteristics The Classical orders were analysed and reconstructed to serve new purposes. While the obvious distinguishing features of Classical Roman architecture were adopted by Renaissance architects, the forms and purposes of buildings had changed over time, as had the structure of cities. Among the earliest buildings of the reborn Classicism were the type of churches that the Romans had never constructed. Neither were there models for the type of large city dwellings required by wealthy merchants of the 15th century. 
Conversely, there was no call for enormous sporting fixtures and public bath houses such as the Romans had built. Plan The plans of Renaissance buildings have a square, symmetrical appearance in which proportions are usually based on a module. Within a church, the module is often the width of an aisle. The need to integrate the design of the plan with the façade was introduced as an issue in the work of Filippo Brunelleschi, but he was never able to carry this aspect of his work through to fruition. The first building to demonstrate this was the Basilica of Sant'Andrea, Mantua, by Leone Battista Alberti. The development of the plan in secular architecture was to take place in the 16th century and culminated with the work of Palladio. Façade Façades are symmetrical around their vertical axis. Church façades are generally surmounted by a pediment and organised by a system of pilasters, arches and entablatures. The columns and windows show a progression towards the centre. One of the first true Renaissance façades was Pienza Cathedral (1459–62), which has been attributed to the Florentine architect Bernardo Gambarelli (known as Rossellino) with Leone Battista Alberti perhaps having some responsibility in its design as well. Domestic buildings are often surmounted by a cornice. There is a regular repetition of openings on each floor, and the centrally placed door is marked by a feature such as a balcony, or rusticated surround. An early and much copied prototype was the façade for the Palazzo Rucellai (1446 and 1451) in Florence with its three registers of pilasters. Columns and pilasters Roman and Greek orders of columns are used: Tuscan, Doric, Ionic, Corinthian and Composite. The orders can either be structural, supporting an arcade or architrave, or purely decorative, set against a wall in the form of pilasters. During the Renaissance, architects aimed to use columns, pilasters, and entablatures as an integrated system. 
One of the first buildings to use pilasters as an integrated system was in the Old Sacristy (1421–1440) by Brunelleschi. Arches Arches are semi-circular or (in the Mannerist style) segmental. Arches are often used in arcades, supported on piers or columns with capitals. There may be a section of entablature between the capital and the springing of the arch. Alberti was one of the first to use the arch on a monumental scale at the Basilica of Sant'Andrea, Mantua. Vaults Vaults do not have ribs. They are semi-circular or segmental and on a square plan, unlike the Gothic vault which is frequently rectangular. The barrel vault is returned to architectural vocabulary as at St. Andrea in Mantua. Domes The dome is used frequently, both as a very large structural feature that is visible from the exterior, and also as a means of roofing smaller spaces where they are only visible internally. After the success of the dome in Brunelleschi's design for Florence Cathedral and its use in Bramante's plan for St. Peter's Basilica (1506) in Rome, the dome became an indispensable element in church architecture and later even for secular architecture, such as Palladio's Villa Rotonda. Ceilings Roofs are fitted with flat or coffered ceilings. They are not left open as in Medieval architecture. They are frequently painted or decorated. Doors Doors usually have square lintels. They may be set within an arch or surmounted by a triangular or segmental pediment. Openings that do not have doors are usually arched and frequently have a large or decorative keystone. Windows Windows may be paired and set within a semi-circular arch. They may have square lintels and triangular or segmental pediments, which are often used alternately. Emblematic in this respect is the Palazzo Farnese in Rome, begun in 1517. In the Mannerist period the Palladian arch was employed, using a motif of a high semi-circular topped opening flanked with two lower square-topped openings. 
Windows are used to bring light into the building and, in domestic architecture, to give views. Stained glass, although sometimes present, is not a feature.

Walls

External walls are generally constructed of brick, rendered, or faced with stone in highly finished ashlar masonry, laid in straight courses. The corners of buildings are often emphasized by rusticated quoins. Basements and ground floors were often rusticated, as at the Palazzo Medici Riccardi (1444–1460) in Florence. Internal walls are smoothly plastered and surfaced with lime wash. For more formal spaces, internal surfaces are decorated with frescoes.

Details

Courses, mouldings and all decorative details are carved with great precision. Studying and mastering the details of the ancient Romans was one of the important aspects of Renaissance theory. The different orders each required different sets of details. Some architects were stricter in their use of classical details than others, but there was also a good deal of innovation in solving problems, especially at corners. Mouldings stand out around doors and windows rather than being recessed, as in Gothic architecture. Sculptured figures may be set in niches or placed on plinths. They are not integral to the building as in Medieval architecture.

Early Renaissance

The leading architects of the Early Renaissance or Quattrocento were Filippo Brunelleschi, Michelozzo and Leon Battista Alberti.

Brunelleschi

The person generally credited with bringing about the Renaissance view of architecture is Filippo Brunelleschi (1377–1446). The underlying feature of the work of Brunelleschi was "order". In the early 15th century, Brunelleschi began to look at the world to see what the rules were that governed one's way of seeing. He observed that the way one sees regular structures such as the Florence Baptistery and the tiled pavement surrounding it follows a mathematical order – linear perspective.
The buildings remaining among the ruins of ancient Rome appeared to respect a simple mathematical order in the way that Gothic buildings did not. One incontrovertible rule governed all Ancient Roman architecture – a semi-circular arch is exactly twice as wide as it is high. A fixed proportion with implications of such magnitude occurred nowhere in Gothic architecture. A Gothic pointed arch could be extended upwards or flattened to any proportion that suited the location. Arches of differing angles frequently occurred within the same structure. No set rules of proportion applied. From the observation of the architecture of Rome came a desire for symmetry and careful proportion in which the form and composition of the building as a whole and all its subsidiary details have fixed relationships, each section in proportion to the next, and the architectural features serving to define exactly what those rules of proportion are. Brunelleschi gained the support of a number of wealthy Florentine patrons, including the Silk Guild and Cosimo de' Medici.

Florence Cathedral

Brunelleschi's first major architectural commission was for the enormous brick dome which covers the central space of Florence's cathedral, designed by Arnolfo di Cambio in the 14th century but left unroofed. While often described as the first building of the Renaissance, Brunelleschi's daring design utilises the pointed Gothic arch and Gothic ribs that were apparently planned by Arnolfo. It seems certain, however, that while stylistically Gothic, in keeping with the building it surmounts, the dome is in fact structurally influenced by the great dome of Ancient Rome, which Brunelleschi could hardly have ignored in seeking a solution. This is the dome of the Pantheon, a circular temple, now a church. Inside the Pantheon's single-shell concrete dome is coffering which greatly decreases the weight.
The vertical partitions of the coffering effectively serve as ribs, although this feature does not dominate visually. At the apex of the Pantheon's dome is an opening, 8 meters across. Brunelleschi was aware that a dome of enormous proportions could in fact be engineered without a keystone. The dome in Florence is supported by the eight large ribs and sixteen more internal ones holding a brick shell, with the bricks arranged in a herringbone manner. Although the techniques employed are different, in practice, both domes comprise a thick network of ribs supporting very much lighter and thinner infilling. And both have a large opening at the top.

San Lorenzo

The new architectural philosophy of the Renaissance is best demonstrated in the churches of San Lorenzo and Santo Spirito, Florence. Designed by Brunelleschi in about 1425 and 1428 respectively, both have the shape of the Latin cross. Each has a modular plan, each portion being a multiple of the square bay of the aisle. This same formula also controlled the vertical dimensions. In the case of Santo Spirito, which is entirely regular in plan, transepts and chancel are identical, while the nave is an extended version of these. In 1434 Brunelleschi designed the first Renaissance centrally planned building, Santa Maria degli Angeli, Florence. It is composed of a central octagon surrounded by a circuit of eight smaller chapels. From this date onwards numerous churches were built in variations of these designs.

Michelozzo

Michelozzo Michelozzi (1396–1472) was another architect under the patronage of the Medici family, his most famous work being the Palazzo Medici Riccardi, which he was commissioned to design for Cosimo de' Medici in 1444. A decade later he built the Villa Medici, Fiesole. Among his other works for Cosimo is the library at the Convent of San Marco, Florence. He went into exile in Venice for a time with his patron.
He was one of the first architects to work in the Renaissance style outside Italy, building a palace at Dubrovnik. The Palazzo Medici Riccardi is Classical in the details of its pedimented windows and recessed doors, but, unlike the works of Brunelleschi and Alberti, there are no classical orders of columns in evidence. Instead, Michelozzo has respected the Florentine liking for rusticated stone. He has seemingly created three orders out of the three defined rusticated levels, the whole being surmounted by an enormous Roman-style cornice which juts out over the street by 2.5 meters.

Alberti

Leon Battista Alberti, born in Genoa (1402–1472), was an important Humanist theoretician and designer whose book on architecture De re aedificatoria was to have a lasting effect. An aspect of Renaissance humanism was an emphasis on the anatomy of nature, in particular the human form, a science first studied by the Ancient Greeks. Humanism made man the measure of things. Alberti perceived the architect as a person with great social responsibilities. He designed a number of buildings, but unlike Brunelleschi, he did not see himself as a builder in a practical sense and so left the supervision of the work to others. Miraculously, one of his greatest designs, that of the Basilica of Sant'Andrea, Mantua, was brought to completion with its character essentially intact. Not so the Church of San Francesco in Rimini, a rebuilding of a Gothic structure, which, like Sant'Andrea, was to have a façade reminiscent of a Roman triumphal arch. This was left sadly incomplete. Sant'Andrea is an extremely dynamic building both without and within. Its triumphal façade is marked by extreme contrasts. The projection of the order of pilasters that define the architectural elements, but are essentially non-functional, is very shallow. This contrasts with the gaping deeply recessed arch which makes a huge portico before the main door.
The size of this arch is in direct contrast to the two low square-topped openings that frame it. The light and shade play dramatically over the surface of the building because of the shallowness of its mouldings and the depth of its porch. In the interior Alberti has dispensed with the traditional nave and aisles. Instead there is a slow and majestic progression of alternating tall arches and low square doorways, repeating the "triumphal arch" motif of the façade. Two of Alberti's best known buildings are in Florence, the Palazzo Rucellai and at Santa Maria Novella. For the palace, Alberti applied the classical orders of columns to the façade on the three levels, 1446–51. At Santa Maria Novella he was commissioned to finish the decoration of the façade. He completed the design in 1456 but the work was not finished until 1470. The lower section of the building had Gothic niches and typical polychrome marble decoration. There was a large ocular window in the end of the nave which had to be taken into account. Alberti simply respected what was already in place, and the Florentine tradition for polychrome that was well established at the Baptistery of San Giovanni, the most revered building in the city. The decoration, being mainly polychrome marble, is mostly very flat in nature, but a sort of order is established by the regular compartments and the circular motifs which repeat the shape of the round window. For the first time, Alberti linked the lower roofs of the aisles to the nave using two large scrolls. These were to become a standard Renaissance device for solving the problem of different roof heights and bridge the space between horizontal and vertical surfaces.
High Renaissance

In the late 15th century and early 16th century, architects such as Bramante, Antonio da Sangallo the Younger and others showed a mastery of the revived style and the ability to apply it to buildings such as churches and city palazzi which were quite different from the structures of ancient times. The style became more decorated and ornamental, with statuary, domes and cupolas becoming very evident. The architectural period is known as the "High Renaissance" and coincides with the age of Leonardo, Michelangelo and Raphael.

Bramante

Donato Bramante (1444–1514) was born in Urbino and turned from painting to architecture, finding his first important patronage under Ludovico Sforza, Duke of Milan, for whom he produced a number of buildings over 20 years. After the fall of Milan to the French in 1499, Bramante travelled to Rome where he achieved great success under papal patronage. Bramante's finest architectural achievement in Milan is his addition of crossing and choir to the abbey church of Santa Maria delle Grazie (Milan). This is a brick structure, the form of which owes much to the Northern Italian tradition of square domed baptisteries. The new building is almost centrally planned, except that, because of the site, the chancel extends further than the transept arms. The hemispherical dome, approximately 20 metres across, rises up hidden inside an octagonal drum pierced at the upper level with arched classical openings. The whole exterior has delineated details decorated with the local terracotta ornamentation. From 1488 to 1492 he worked for Ascanio Sforza on Pavia Cathedral, on which he imposed a central plan scheme and built some apses and the crypt, inspired by the thermal baths of the Roman age. In Rome Bramante created what has been described as "a perfect architectural gem", the Tempietto in the Cloister of San Pietro in Montorio. This small circular temple marks the spot where St Peter was martyred and is thus the most sacred site in Rome.
The building adapts the style apparent in the remains of the Temple of Vesta, the most sacred site of Ancient Rome. It is enclosed by and in spatial contrast with the cloister which surrounds it. As approached from the cloister, it is seen framed by an arch and columns, the shape of which is echoed in its free-standing form. Bramante went on to work on the Apostolic Palace, where he designed the Cortile del Belvedere. In 1506 his design for Pope Julius II's rebuilding of St. Peter's Basilica was selected, and the foundation stone laid. After Bramante's death and many changes of plan, Michelangelo, as chief architect, reverted to something closer to Bramante's original proposal.

Sangallo

Antonio da Sangallo the Younger (1485–1546) was one of a family of military engineers. His uncle, Giuliano da Sangallo, was one of those who submitted a plan for the rebuilding of St Peter's and was briefly a co-director of the project, with Raphael. Antonio da Sangallo also submitted a plan for St Peter's and became the chief architect after the death of Raphael, to be succeeded himself by Michelangelo. His fame does not rest upon his association with St Peter's but in his building of the Farnese Palace, "the grandest palace of this period", started in 1530. The impression of grandness lies in part in its sheer size (56 metres long by 29.5 metres high) and in its lofty location overlooking a broad piazza. Unusually for such a large and luxurious house of the time, it was built principally of stuccoed brick, rather than of stone. Against the smooth pink-washed walls the stone quoins of the corners, the massive rusticated portal and the repetition of finely detailed windows produce an elegant effect. The upper of the three equally sized floors was added by Michelangelo. The travertine for its architectural details came not from a quarry, but from the Colosseum.
Raphael

Raphael (1483–1520), born in Urbino and trained under Perugino in Perugia before moving to Florence, was for a time the chief architect for St. Peter's, working in conjunction with Antonio da Sangallo. He also designed a number of buildings, most of which were finished by others. His single most influential work is the Palazzo Pandolfini in Florence with its two stories of strongly articulated windows of a "tabernacle" type, each set around with ordered pilasters, cornice and alternate arched and triangular pediments.

Mannerism

Mannerism in architecture was marked by widely diverging tendencies in the work of Michelangelo, Giulio Romano, Baldassare Peruzzi and Andrea Palladio, which led to the Baroque style in which the same architectural vocabulary was used for very different rhetoric.

Peruzzi

Baldassare Peruzzi (1481–1536) was an architect born in Siena but working in Rome, whose work bridges the High Renaissance and the Mannerist period. His Villa Farnesina of 1509 is a very regular monumental cube of two equal stories, the bays being strongly articulated by orders of pilasters. The building is unusual for its frescoed walls. Peruzzi's most famous work is the Palazzo Massimo alle Colonne in Rome. The unusual feature of this building is that its façade curves gently around a curving street. It has in its ground floor a dark central portico running parallel to the street, but as a semi-enclosed space rather than an open loggia. Above this rise three undifferentiated floors, the upper two with identical small horizontal windows in thin flat frames which contrast strangely with the deep porch, which has served, from the time of its construction, as a refuge to the city's poor.

Giulio Romano

Giulio Romano (1499–1546) was a pupil of Raphael, assisting him on various works for the Vatican.
Romano was also a highly inventive designer, working for Federico II Gonzaga at Mantua on the Palazzo Te (1524–1534), a project which combined his skills as architect, sculptor and painter. In this work, incorporating garden grottoes and extensive frescoes, he uses illusionistic effects, surprising combinations of architectural form and texture, and the frequent use of features that seem somewhat disproportionate or out of alignment. The total effect is eerie and disturbing. Ilan Rachum cites Romano as "one of the first promoters of Mannerism".

Michelangelo

Michelangelo Buonarroti (1475–1564) was one of the creative giants whose achievements mark the High Renaissance. He excelled in each of the fields of painting, sculpture and architecture, and his achievements brought about significant changes in each area. His architectural fame lies chiefly in two buildings: the interiors of the Laurentian Library and its lobby at the monastery of San Lorenzo in Florence, and St Peter's Basilica in Rome. St. Peter's was "the greatest creation of the Renaissance", and a great number of architects contributed their skills to it. But at its completion, there was more of Michelangelo's design than of any other architect, before or after him.

St. Peter's

The plan that was accepted at the laying of the foundation stone in 1506 was that by Bramante. Various changes in plan occurred under the series of architects that succeeded him, but Michelangelo, when he took over the project in 1546, reverted to Bramante's Greek-cross plan and redesigned the piers, the walls and the dome, giving the lower weight-bearing members massive proportions and eliminating the encircling aisles from the chancel and identical transept arms. Helen Gardner says: "Michelangelo, with a few strokes of the pen, converted its snowflake complexity into a massive, cohesive unity."
Michelangelo's dome was a masterpiece of design using two masonry shells, one within the other, and crowned by a massive roof lantern supported, as at Florence, on ribs. For the exterior of the building he designed a giant order which defines every external bay, the whole lot being held together by a wide cornice which runs unbroken like a rippling ribbon around the entire building. There is a wooden model of the dome, showing its outer shell as hemispherical. When Michelangelo died in 1564, the building had reached the height of the drum. The architect who succeeded Michelangelo was Giacomo della Porta. The dome, as built, has a much steeper projection than the dome of the model. It is generally presumed that it was della Porta who made this change to the design, to lessen the outward thrust. But in fact it is unknown who made this change, and it is equally possible, and a stylistic likelihood, that the person who decided upon the more dynamic outline was Michelangelo himself at some time during the years that he supervised the project.

Laurentian Library

Michelangelo was at his most Mannerist in the design of the vestibule of the Laurentian Library, also built by him to house the Medici collection of books at the convent of San Lorenzo, Florence, the same San Lorenzo at which Brunelleschi had recast church architecture into a Classical mould and established a clear formula for the use of Classical orders and their various components. Michelangelo takes all Brunelleschi's components and bends them to his will. The Library is upstairs. It is a long low building with an ornate wooden ceiling, a matching floor and crowded with carrels finished by his successors to Michelangelo's design. But it is a light room, the natural lighting streaming through a long row of windows that appear positively crammed between the order of pilasters that march along the wall.
The vestibule, on the other hand, is tall, taller than it is wide, and is crowded by a large staircase that pours out of the library in what Nikolaus Pevsner refers to as a "flow of lava", and bursts in three directions when it meets the balustrade of the landing. It is an intimidating staircase, made all the more so because the rise of the stairs at the centre is steeper than at the two sides, fitting only eight steps into the space of nine. The space is crowded and it is to be expected that the wall spaces would be divided by pilasters of low projection. But Michelangelo has chosen to use paired columns, which, instead of standing out boldly from the wall, he has sunk deep into recesses within the wall itself. In the Basilica di San Lorenzo nearby, Brunelleschi used little scrolling console brackets to break the strongly horizontal line of the course above the arcade. Michelangelo has borrowed Brunelleschi's motifs and stood each pair of sunken columns on a pair of twin console brackets. Pevsner says the "Laurenziana [...] reveals Mannerism in its most sublime architectural form" (Ludwig Goldscheider, Michelangelo, Phaidon, 1964).

Giacomo della Porta

Giacomo della Porta (–1602) was famous as the architect who made the dome of St. Peter's Basilica a reality. The change in outline between the dome as it appears in the model and the dome as it was built has brought about speculation as to whether the changes originated with della Porta or with Michelangelo himself. Della Porta spent nearly all his working life in Rome, designing villas, palazzi and churches in the Mannerist style. One of his most famous works is the façade of the Church of the Gesù, a project that he inherited from his teacher Jacopo Barozzi da Vignola. Most characteristics of the original design are maintained, subtly transformed to give more weight to the central section, where della Porta uses, among other motifs, a low triangular pediment overlaid on a segmental one above the main door.
The upper storey and its pediment give the impression of compressing the lower one. The centre section, like that of Sant'Andrea at Mantua, is based on the triumphal arch, but has two clear horizontal divisions like Santa Maria Novella (see Alberti above). The problem of linking the aisles to the nave is solved using Alberti's scrolls, in contrast to Vignola's solution, which provided much smaller brackets and four statues to stand above the paired pilasters, visually weighing down the corners of the building. The influence of the design may be seen in Baroque churches throughout Europe.

Andrea Palladio

Andrea Palladio (1508–80), "the most influential architect of the whole Renaissance", was, as a stonemason, introduced to Humanism by the poet Giangiorgio Trissino. His first major architectural commission was the rebuilding of the Basilica Palladiana at Vicenza, in the Veneto, where he was to work most of his life. Palladio was to transform the architectural style of both palaces and churches by taking a different perspective on the notion of Classicism. While the architects of Florence and Rome looked to structures like the Colosseum and the Arch of Constantine to provide formulae, Palladio looked to classical temples with their simple peristyle form. When he used the triumphal arch motif of a large arched opening with a lower square-topped opening on either side, he invariably applied it on a small scale, such as windows, rather than on a large scale as Alberti used it at Sant'Andrea. This Ancient Roman motif is often referred to as the Palladian arch. The best known of Palladio's domestic buildings is the Villa Capra, otherwise known as "La Rotonda", a centrally planned house with a domed central hall and four identical façades, each with a temple-like portico like that of the Pantheon, Rome. At the Villa Cornaro, the projecting portico of the north façade and recessed loggia of the garden façade are of two ordered stories, the upper forming a balcony.
Like Alberti, della Porta and others, in the designing of a church façade, Palladio was confronted by the problem of visually linking the aisles to the nave while maintaining and defining the structure of the building. Palladio's solution was entirely different from that employed by della Porta. At the church of San Giorgio Maggiore in Venice he overlays a tall temple, its columns raised on high plinths, over another low wide temple façade, its columns rising from the basements and its narrow lintel and pilasters appearing behind the giant order of the central nave.

Progression from Early Renaissance through to Baroque

In Italy, there appears to be a seamless progression from Early Renaissance architecture through the High Renaissance and Mannerism to the Baroque style. Pevsner comments about the vestibule of the Laurentian Library that it "has often been said that the motifs of the walls show Michelangelo as the father of the Baroque". While continuity may be the case in Italy, it was not necessarily the case elsewhere. The adoption of the Renaissance style of architecture was slower in some areas than in others, as may be seen in England, for example. Indeed, as Pope Julius II was having the Old St. Peter's Basilica demolished to make way for the new, Henry VII of England was adding a glorious new chapel in the Perpendicular Gothic style to Westminster Abbey. Likewise, the style that was to become known as Baroque evolved in Italy in the early 17th century, at about the time that the first fully Renaissance buildings were constructed at Greenwich and Whitehall in England, after a prolonged period of experimentation with Classical motifs applied to local architectural forms, or conversely, the adoption of Renaissance structural forms in the broadest sense with an absence of the formulae that governed their use. While the English were just discovering what the rules of Classicism were, the Italians were experimenting with methods of breaking them.
In England, following the Restoration of the Monarchy in 1660, the architectural climate changed, and taste moved in the direction of the Baroque. Rather than evolving, as it did in Italy, it arrived fully fledged. In a similar way, in many parts of Europe that had few purely classical and ordered buildings like Brunelleschi's Santo Spirito and Michelozzo's Palazzo Medici Riccardi, Baroque architecture appeared almost unheralded, on the heels of a sort of Proto-Renaissance local style. The spread of the Baroque and its replacement of traditional and more conservative Renaissance architecture was particularly apparent in the building of churches as part of the Counter-Reformation.

Spread in Europe

The 16th century saw the economic and political ascendancy of France, Spain and Portugal, then later the rise of England, Poland, Russia and the Dutch Republic. The result was that these places began to import the Renaissance style as an indicator of their new cultural position. This also meant that it was not until about 1500 and later that signs of the Renaissance architectural style began to appear outside Italy. Though Italian architects were highly sought after, such as Sebastiano Serlio in France, Aristotile Fioravanti in Russia, and Francesco Fiorentino in Poland, soon non-Italians were studying Italian architecture and translating it into their own idiom. These included Philibert de l'Orme (1510–1570) in France, Juan Bautista de Toledo (died 1567) in Spain, Inigo Jones (1573–1652) in England and Elias Holl (1573–1646) in Germany. Books of ornament prints with engraved illustrations demonstrating plans and ornament were very important in spreading Renaissance styles in Northern Europe, with among the most important authors being Androuet du Cerceau in France, Hans Vredeman de Vries in the Netherlands, and Wendel Dietterlin, author of Architectura (1593–94), in Germany.
Baltic States

The Renaissance arrived late in what is today Estonia, Latvia and Lithuania, the so-called Baltic States, and did not make a great imprint architecturally. It was a politically tumultuous time, marked by the decline of the State of the Teutonic Order and the Livonian War. In Estonia, artistic influences came from Dutch, Swedish and Polish sources. The building of the Brotherhood of the Blackheads in Tallinn, with a façade designed by Arent Passer, is the only truly Renaissance building in the country that has survived more or less intact. Significantly for these troubled times, the only other examples are purely military buildings, such as the Fat Margaret cannon tower, also in Tallinn. Latvian Renaissance architecture was influenced by Polish-Lithuanian and Dutch styles, with Mannerism following from Gothic without intermediaries. St. John's Church in the Latvian capital of Riga is an example of an earlier Gothic church which was reconstructed in 1587–89 by the Dutch architect Gert Freze (Joris Phraeze). The prime example of Renaissance architecture in Latvia is the heavily decorated House of the Blackheads, rebuilt from an earlier Medieval structure into its present Mannerist forms as late as 1619–25 by the architects A. and L. Jansen. It was destroyed during World War II and rebuilt during the 1990s. Lithuania meanwhile formed a large dual state with Poland, known as the Polish–Lithuanian Commonwealth. Renaissance influences grew stronger during the reigns of Sigismund I the Old and Sigismund II Augustus. The Palace of the Grand Dukes of Lithuania (destroyed in 1801, a copy built in 2002–2009) shows Italian influences. Several architects of Italian origin were active in the country, including Bernardino Zanobi de Gianotis, Giovanni Cini and Giovanni Maria Mosca.

Bohemia

The Renaissance style first appeared in the Crown of Bohemia in the 1490s.
Bohemia, together with its incorporated lands, especially Moravia, thus ranked among the areas of the Holy Roman Empire with the earliest known examples of Renaissance architecture. The lands of the Bohemian Crown were never part of the ancient Roman Empire, and so lacked an ancient classical heritage of their own, depending instead primarily on Italian models. As in other Central European countries, the Gothic style kept its position, especially in church architecture. Traditional Gothic architecture was considered timeless and therefore able to express the sacred. Renaissance architecture coexisted with the Gothic style in Bohemia and Moravia until the late 16th century (e.g. the residential part of a palace might be built in the modern Renaissance style while its chapel was designed with Gothic elements). The façades of Czech Renaissance buildings were often decorated with sgraffito (figural or ornamental). During the reign of Rudolph II, Holy Roman Emperor and Bohemian king, the city of Prague became one of the most important European centres of late Renaissance art (so-called Mannerism). Nevertheless, not many architecturally significant buildings from that time have been preserved.

Croatia

In the 15th century, Croatia was divided into three states: the northern and central part of Croatia and Slavonia were in union with the Kingdom of Hungary, while Dalmatia, with the exception of the independent Republic of Ragusa, was under the rule of the Venetian Republic. The Cathedral of St James in Šibenik was begun in 1441 in the Gothic style by Giorgio da Sebenico (Juraj Dalmatinac). Its unusual construction does not use mortar, the stone blocks, pilasters and ribs being bonded with joints and slots in the way that was usual in wooden constructions.
In 1477 the work was unfinished, and continued under Niccolò di Giovanni Fiorentino, who respected the mode of construction and the plan of the former architect, but continued the work, which includes the upper windows, the vaults and the dome, in the Renaissance style. The combination of a high barrel vault with lower half-barrel vaults over the aisles gives the façade its distinctive trefoil shape, the first of this type in the region. The cathedral was inscribed on the UNESCO World Heritage List in 2001.

England

After some first efforts by kings and courtiers, most now vanished, like Henry VII's Richmond Palace, Henry VIII's Nonsuch Palace, and the first Somerset House in London, a local style of Renaissance architecture emerged in England during the reign of Elizabeth I, much influenced by the Low Countries, where among other features it acquired versions of the Dutch gable, and Flemish strapwork in geometric designs adorning the walls. The new style tended to manifest itself in large square tall prodigy houses such as Longleat House. The first great exponent of classicizing Italian Renaissance architecture in England was Inigo Jones (1573–1652), who had studied architecture in Italy, where the influence of Palladio was very strong. Jones returned to England full of enthusiasm for the new movement and immediately began to design such buildings as the Queen's House at Greenwich in 1616 and the Banqueting House, Whitehall, three years later. These works, with their clean lines and symmetry, were revolutionary in a country still enamoured with mullion windows, crenellations and turrets (John Summerson, Architecture in Britain 1530–1830, Pelican, 1977 ed.).

France

During the early years of the 16th century the French were involved in wars in northern Italy, bringing back to France not just Renaissance art treasures as their war booty, but also stylistic ideas.
In the Loire Valley a wave of building was carried out, and many Renaissance châteaux appeared at this time, the earliest example being the Château d'Amboise, in which Leonardo da Vinci spent his last years. The style became dominant under Francis I (see Châteaux of the Loire Valley). Germany The Renaissance in Germany was inspired first by German philosophers and artists such as Albrecht Dürer and Johannes Reuchlin, who visited Italy. Important early examples of this period are especially the Landshut Residence, Heidelberg Castle, Johannisburg Palace in Aschaffenburg, Schloss Weilburg, the City Hall and Fugger Houses in Augsburg and St. Michael's Church, Munich. A particular form of Renaissance architecture in Germany is the Weser Renaissance, with prominent examples such as Bremen City Hall and the Juleum in Helmstedt. In July 1567 the city council of Cologne approved a design in the Renaissance style by Wilhelm Vernukken for a two-storied loggia for Cologne City Hall. St Michael in Munich is the largest Renaissance church north of the Alps. It was built by William V, Duke of Bavaria, between 1583 and 1597 as a spiritual center for the Counter Reformation and was inspired by the Church of the Gesù in Rome. The architect is unknown. Many examples of Brick Renaissance buildings can be found in Hanseatic old towns, such as Stralsund, Wismar, Lübeck, Lüneburg, Friedrichstadt and Stade. Notable German Renaissance architects include Friedrich Sustris, Benedikt Rejt, Abraham van den Blocke, Elias Holl and Hans Krumpper. Hungary One of the earliest places to be influenced by the Renaissance style of architecture was the Kingdom of Hungary. The style appeared following the marriage of King Matthias Corvinus and Beatrice of Naples in 1476. Many Italian artists, craftsmen and masons arrived at Buda with the new queen. Important remains of the Early Renaissance summer palace of King Matthias can be found in Visegrád. 
The Ottoman conquest of Hungary after 1526 cut short the development of Renaissance architecture in the country and destroyed its most famous examples. Today, the only completely preserved work of Hungarian Renaissance architecture is the Bakócz Chapel (commissioned by the Hungarian cardinal Tamás Bakócz), now part of the Esztergom Basilica. Habsburg Netherlands As in painting, Renaissance architecture took some time to reach the Habsburg Netherlands and did not entirely supplant Gothic elements. An architect directly influenced by the Italian masters was Cornelis Floris de Vriendt, who designed Antwerp City Hall, finished in 1564. The style is sometimes called the Flemish-Italian Renaissance style and is also known as the Floris style. In this style the overall structure was similar to that of late-Gothic buildings, but with larger windows and much florid decoration and detailing in the Renaissance styles. This style became widely influential across Northern Europe, for example in Elizabethan architecture, and is part of the wider movement of Northern Mannerism. Dutch Republic In the Dutch Republic of the early 17th century, Hendrick de Keyser played an important role in developing the "Amsterdam Renaissance" style, which has local characteristics including the prevalence of tall, narrow town-houses, the trapgevel or Dutch gable and the employment of decorative triangular pediments over doors and windows in which the apex rises much more steeply than in most other Renaissance architecture, but in keeping with the profile of the gable. Carved stone details are often of low profile, in strapwork resembling leatherwork, a stylistic feature originating in the School of Fontainebleau. This feature was exported to England. 
Poland Polish Renaissance architecture is divided into three periods: The first period (1500–50) is the so-called "Italian" period, as most Renaissance buildings of this time were designed by Italian architects, mainly from Florence, including Francesco Fiorentino and Bartolomeo Berrecci. Renowned architects from Southern Europe became sought-after during the reign of Sigismund I the Old and his Italian-born wife, Queen Bona Sforza. Notable examples from this period include Wawel Castle Courtyard and Sigismund's Chapel. In the second period (1550–1600), Renaissance architecture became more common, with the beginnings of Mannerism under the influence of the Netherlands, particularly in northern Poland and Pomerania, but also in parts of Lesser Poland. Buildings of this kind include the Cloth Hall in Kraków and the city halls of Tarnów and Sandomierz. The most famous example is the 16th-century Poznań Town Hall, designed by Giovanni Battista di Quadro. In the third period (1600–50), the rising power of the sponsored Jesuits and the Counter Reformation gave impetus to the development of Mannerist and Baroque architecture. The most notable example of this period is Kalwaria Zebrzydowska, a Mannerist architectural and park landscape complex and pilgrimage park, which consists of the Basilica of St. Mary and 42 chapels modelled on and named after places in Jerusalem and the Holy Land. It is a UNESCO World Heritage Site. Another great example from this period is Krasiczyn Castle, a palazzo in fortezza with unique sgraffito wall decorations whose total area is about 7,000 square meters. Portugal The adoption of the Renaissance style in Portugal was gradual. The so-called Manueline style (–1535) married Renaissance elements to Gothic structures with the superficial application of exuberant ornament similar to the Isabelline Gothic of Spain. 
Examples of Manueline include the Belém Tower, a defensive building of Gothic form decorated with Renaissance-style loggias, and the Jerónimos Monastery, with Renaissance ornaments decorating portals, columns and cloisters. The first "pure" Renaissance structures appear under King John III, like the Chapel of Nossa Senhora da Conceição in Tomar (1532–40), the Porta Especiosa of Coimbra Cathedral and the Church of Nossa Senhora da Graça (Évora) (–1540), as well as the cloisters of Viseu Cathedral (–1534) and Convent of Christ in Tomar (John III Cloisters, 1557–1591). The Lisbon buildings of São Roque Church (1565–87) and the Mannerist Monastery of São Vicente de Fora (1582–1629) strongly influenced religious architecture in both Portugal and its colonies in the following centuries. Russia Prince Ivan III introduced Renaissance architecture to Russia by inviting a number of architects from Italy, who brought new construction techniques and some Renaissance style elements with them, while in general following the traditional designs of Russian architecture. In 1475 the Bolognese architect Aristotele Fioravanti came to rebuild the Cathedral of the Dormition in the Moscow Kremlin, damaged in an earthquake. Fioravanti was given the 12th-century Assumption Cathedral in Vladimir as a model, and produced a design combining traditional Russian style with a Renaissance sense of spaciousness, proportion and symmetry. In 1485, Ivan III commissioned the building of a royal Terem Palace within the Kremlin, with Aloisio da Milano being the architect of the first three floors. Aloisio da Milano, as well as the other Italian architects, also greatly contributed to the construction of the Kremlin walls and towers. The small banqueting hall of the Russian Tsars, called the Palace of Facets because of its facetted upper story, is the work of two Italians, Marco Ruffo and Pietro Solario, and shows a more Italian style. 
In 1505, an Italian known in Russia as Aleviz Novyi built twelve churches for Ivan III, including the Cathedral of the Archangel, a building remarkable for the successful blending of Russian tradition, Orthodox requirements and Renaissance style. Scandinavia The Renaissance architecture that found its way to Scandinavia was influenced by Flemish architecture, and included high gables and a castle air as demonstrated in the architecture of Frederiksborg Palace. Consequently, much of the Neo-Renaissance to be found in the Scandinavian countries is derived from this source. In Denmark, Renaissance architecture thrived during the reigns of Frederick II and especially Christian IV. Inspired by the French castles of the time, Flemish architects designed masterpieces such as Kronborg Castle in Helsingør and Frederiksborg Castle in Hillerød. The Frederiksborg Castle (1602–1620) is the largest Renaissance palace in Scandinavia. In Sweden, with Gustav Vasa's seizure of power and the onset of the Protestant Reformation, church construction and aristocratic building projects came to a near standstill. During this time period, several magnificent so-called "Vasa castles" appeared. They were erected at strategic locations to control the country as well as to accommodate the travelling royal court. Gripsholm Castle, Kalmar Castle and Vadstena Castle are known for their fusion of medieval elements with Renaissance architecture. The architecture of Norway was influenced partly by the occurrence of the plague during the Renaissance era. After the Black Death, monumental construction in Norway came to a standstill. There are few examples of Renaissance architecture in Norway, the most prominent being renovations to the medieval Rosenkrantz Tower in Bergen, Barony Rosendal in Hardanger, and the contemporary Austrat manor near Trondheim, and parts of Akershus Fortress. There is little evidence of Renaissance influence in Finnish architecture. 
Spain In Spain, the Renaissance began to be grafted onto Gothic forms in the last decades of the 15th century. The new style is called Plateresque, because of its extremely decorated façades, which brought to mind the decorative motifs of the intricately detailed work of silversmiths, the Plateros. Classical orders and candelabra motifs (a candelieri) combined freely. As decades passed, the Gothic influence disappeared and the pursuit of an orthodox classicism reached a high level. Although Plateresco is a commonly used term to define most of the architectural production of the late 15th and first half of the 16th century, some architects acquired a more sober personal style, like Diego Siloe and Andrés de Vandelvira in Andalusia, and Alonso de Covarrubias and Rodrigo Gil de Hontañón in Castile. This phase of the Spanish Renaissance is called Purism. From the mid-sixteenth century, under such architects as Pedro Machuca, Juan Bautista de Toledo and Juan de Herrera, there was a closer adherence to the art of ancient Rome, sometimes anticipating Mannerism, examples of which include the palace of Charles V in Granada and El Escorial. This Herrerian style, or arquitectura herreriana, was developed during the last third of the 16th century under the reign of Philip II (1556–1598), and continued in force in the 17th century, though transformed by the Baroque style of the time. Spread in the Colonial Americas Bolivia Renaissance architecture spread to Colonial Bolivia, examples being the Church of Curahuara de Carangas, built between 1587 and 1608 and known to Bolivians as the "Sistine Chapel of the Andes" for the rich Mannerist decoration of its interior, and the Basilica of Our Lady of Copacabana, built between 1601 and 1619 and designed by the Spanish architect Francisco Jiménez de Siguenza. 
Brazil The best-known examples of Renaissance architecture in Colonial Brazil are the Mannerist Cathedral Basilica of Salvador, built between 1657 and 1746, and the Franciscan Convent of Santo Antônio in João Pessoa, built between 1634 and 1779. Dominican Republic The House of the Five Medallions, built in 1540 and located in Santo Domingo, is a historic house that preserves a Plateresque Renaissance façade. Ecuador The large Basilica and Convent of San Francisco, Quito, built between 1535 and 1650, is of Mannerist Renaissance style. Mexico A notable example of Renaissance architecture in New Spain is the Cathedral of Mérida, Yucatán, one of the oldest cathedrals in the Americas, built between 1562 and 1598 and designed by Pedro de Aulestia and Juan Miguel de Agüero. Peru Several of the churches of the city of Cusco were begun during the Renaissance period, including Cusco Cathedral (1539). Many others are Baroque in style. Legacy Many styles of Late Renaissance and Mannerist architecture transitioned fairly easily into local styles of Baroque architecture; in other areas the change was more abrupt. Baroque and Neoclassical architecture dominated the later 17th and the 18th century in most areas, and persisted well into the 19th century in many places and individual buildings. During the 19th century there was a conscious revival of the style in Renaissance Revival architecture, which paralleled the Gothic Revival. Whereas the Gothic style was perceived by architectural theorists as being the most appropriate style for church building, the Renaissance palazzo was a good model for urban secular buildings requiring an appearance of dignity and reliability, such as banks, gentlemen's clubs and apartment blocks. Buildings that sought to impress, such as the Palais Garnier, were often of a more Mannerist or Baroque style. 
Architects of factories, office blocks and department stores continued to use the Renaissance palazzo form into the 20th century, in Mediterranean Revival Style architecture with an Italian Renaissance emphasis. Many of the concepts and forms of Renaissance architecture can be traced through subsequent architectural movements—from Renaissance to High-Renaissance, to Mannerism, to Baroque (or Rococo), to Neo-Classicism, and to Eclecticism. While Renaissance style and motifs were largely purged from Modernism, they have been reasserted in some Postmodern architecture. The influence of Renaissance architecture can still be seen in many of the modern styles and rules of architecture today. See also List of Renaissance structures Notes References Bibliography Christy Anderson. Renaissance Architecture. Oxford 2013. Sir Banister Fletcher; Cruickshank, Dan, Sir Banister Fletcher's a History of Architecture, Architectural Press, 20th edition, 1996 (first published 1896). Tadeusz Broniewski, Historia architektury dla wszystkich, Wydawnictwo Ossolineum, 1990. Arnaldo Bruschi, Bramante, London: Thames and Hudson, 1977. Harald Busch, Bernd Lohse, Hans Weigert, Baukunst der Renaissance in Europa. Von Spätgotik bis zum Manierismus, Frankfurt am Main, 1960. Trewin Cropplestone, World Architecture, 1963, Hamlyn. Giovanni Fanelli, Brunelleschi, 1980, Becocci editore Firenze. Christopher Luitpold Frommel, The Architecture of the Italian Renaissance, London: Thames and Hudson, 2007. 
Helen Gardner, Art through the Ages, 5th edition, Harcourt, Brace and World, Inc. Mieczysław Gębarowicz, Studia nad dziejami kultury artystycznej późnego renesansu w Polsce, Toruń 1962. Ludwig Goldscheider, Michelangelo, 1964, Phaidon. J. R. Hale, Renaissance Europe, 1480–1520, 1971, Fontana. Arnold Hauser, Mannerism: The Crisis of the Renaissance and the Origins of Modern Art, Cambridge: Harvard University Press, 1965. Brigitte Hintzen-Bohlen, Jurgen Sorges, Rome and the Vatican City, Konemann. Janson, H.W., Anthony F. Janson, History of Art, 1997, New York: Harry N. Abrams, Inc. Marion Kaminski, Art and Architecture of Venice, 1999, Könemann. Wilfried Koch, Style w architekturze, Warsaw 1996. Andrew Martindale, Man and the Renaissance, 1966, Paul Hamlyn. Anne Mueller von der Haegen, Ruth Strasser, Art and Architecture of Tuscany, 2000, Konemann. Nikolaus Pevsner, An Outline of European Architecture, Pelican, 1964. Ilan Rachum, The Renaissance, an Illustrated Encyclopedia, 1979, Octopus. Joseph Rykwert, Leonis Baptiste Alberti, Architectural Design, Vol 49 No 5–6, Holland St, London. Howard Saalman, Filippo Brunelleschi: The Buildings, London: Zwemmer, 1993. John Summerson, Architecture in Britain 1530–1830, 1977 ed., Pelican. Paolo Villa: Giardino Giusti 1993–94, pdf with maps and 200 photos. Robert Erich Wolf and Ronald Millen, Renaissance and Mannerist Art, 1968, Harry N. Abrams. Manfred Wundram, Thomas Pape, Paolo Marton, Andrea Palladio, Taschen. Further reading Alberti, Leon Battista. 1988. On the Art of Building in Ten Books. Translated by Joseph Rykwert. Cambridge, MA: MIT Press. Anderson, Christy. 2013. Renaissance Architecture. Oxford: Oxford Univ. Press. Buddensieg, Tilmann. 1976. "Criticism of Ancient Architecture in the Sixteenth and Seventeenth Centuries." In Classical Influences on European Culture A.D. 1500–1700, 335–348. Edited by R. R. Bolgar. Cambridge, UK: Cambridge Univ. Press. Hart, Vaughan, and Peter Hicks, eds. 1998. 
Paper Palaces: The Rise of the Architectural Treatise in the Renaissance. New Haven, CT: Yale Univ. Press. Jokilehto, Jukka. 2017. A History of Architectural Conservation. 2d ed. New York: Routledge. Koortbojian, Michael. 2011. "Renaissance Spolia and Renaissance Antiquity (One Neighborhood, Three Cases)." In Reuse Value: Spolia and Appropriation in Art and Architecture, from Constantine to Sherrie Levine. Edited by Richard Brilliant and Dale Kinney, 149–165. Farnham, UK: Ashgate. Serlio, Sebastiano. 1996–2001. Sebastiano Serlio on Architecture. 2 vols. Translated by Vaughan Hart and Peter Hicks. New Haven, CT: Yale Univ. Press. Smith, Christine. 1992. Architecture in the Culture of Early Humanism: Ethics, Aesthetics, and Eloquence 1400–1470. New York: Oxford Univ. Press. Waters, Michael J. 2012. "A Renaissance without Order: Ornament, Single-Sheet Engravings, and the Mutability of Architectural Prints." Journal of the Society of Architectural Historians 71:488–523. Tafuri, Manfredo. 2006. Interpreting the Renaissance: Princes, Cities, Architects. New Haven: Yale University Press. Wittkower, Rudolf. 1971. Architectural Principles in the Age of Humanism. New York: Norton. Yerkes, Carolyn. 2017. Drawing after Architecture: Renaissance Architectural Drawings and their Reception. Venice: Marsilio. External links Renaissance Architecture in Great Buildings Online Architecture in the Classical Tradition Architectural history Architectural styles European architecture Architecture in Italy Architecture 15th-century architecture 16th-century architecture 17th-century architecture
Renaissance architecture
[ "Engineering" ]
14,281
[ "Architectural history", "Architecture" ]
41,527
https://en.wikipedia.org/wiki/Contrapposto
Contrapposto is an Italian term that means "counterpoise". It is used in the visual arts to describe a human figure standing with most of its weight on one foot, so that its shoulders and arms twist off-axis from the hips and legs in the axial plane. First appearing in Ancient Greece in the early 5th century BCE, contrapposto is considered a crucial development in the history of Ancient Greek art (and, by extension, Western art), as it marks the first time in Western art that the human body is used to express a psychological disposition. The style was further developed and popularized by sculptors in the Hellenistic and Imperial Roman periods, fell out of use in the Middle Ages, and was later revived during the Renaissance. Michelangelo's statue of David, one of the most iconic sculptures in the world, is a famous example of contrapposto. Definition Contrapposto was historically an important sculptural development, for its appearance marks the first time in Western art that the human body is used to express a more relaxed psychological disposition. This gives the figure a more dynamic, or alternatively relaxed, appearance. In the frontal plane this also results in opposite levels of shoulders and hips: if, for example, the right hip is higher than the left, the right shoulder will correspondingly be lower than the left, and vice versa. It can further encompass the tension as a figure changes from resting on a given leg to walking or running upon it (so-called ponderation). The leg that carries the weight of the body is known as the engaged leg, and the relaxed leg is known as the free leg. Usually, the engaged leg is straight, or very slightly bent, and the free leg is slightly bent. Contrapposto is less emphasized than the more sinuous S-curve, and creates the illusion of past and future movement. 
A 2019 eye-tracking study, by showing that contrapposto acts as a supernormal stimulus and increases perceived attractiveness, provided evidence and insight as to why, in artistic presentation, goddesses of beauty and love are often depicted in a contrapposto pose. This was later supported by a neuroimaging study. The term contrapposto can also be used to refer to multiple figures which are in counter-pose (or opposite pose) to one another. History Classical The first known statue to use contrapposto is Kritios Boy, c. 480 BCE, so called because it was once attributed to the sculptor Kritios. It is possible, even likely, that earlier bronze statues had used the technique, but if they did, they have not survived, and Kenneth Clark called the statue "the first beautiful nude in art". The statue is a Greek marble original and not a Roman copy. Prior to the introduction of contrapposto, the statue types that dominated ancient Greece were the archaic kouros (male) and the kore (female). Contrapposto has been used since the dawn of classical Western sculpture. According to the canon of the Classical Greek sculptor Polykleitos in the 5th century BCE, it is one of the most important characteristics of his figurative works and those of his successors, Lysippos, Skopas, etc. The Polykleitan statues (Discophoros ("discus-bearer") and Doryphoros ("spear-bearer"), for example) are idealized athletic young men with a divine sense, captured in contrapposto. In these works, the pelvis is no longer axial with the vertical statue, as in the archaic style of earlier Greek sculpture before Kritios Boy. Contrapposto can be clearly seen in the Roman copies of the statues of Hermes and Heracles. A famous example is the marble statue of Hermes and the Infant Dionysus in Olympia by Praxiteles. It can also be seen in the Roman copies of Polyclitus's Amazon. Greek art emphasized humanism along with the human mind and the human body's beauty. 
Greek youths trained and competed in athletic contests in the nude. A great contribution to the contrapposto pose was the concept of a canon of proportions, in which mathematical properties are used to create proportions. Renaissance Classical contrapposto was revived in Renaissance art by the Italian artists Donatello and Leonardo da Vinci, followed by Michelangelo, Raphael and other artists of the High Renaissance. One of the achievements of the Italian Renaissance was the re-discovery of contrapposto. Modern times The technique continues to be widely employed in sculpture. Modern psychological research confirms the attractiveness of the pose. Examples See also Greek statue Tribhanga, an Indian stance References and sources References Sources Andrew Stewart, "Polykleitos of Argos", One Hundred Greek Sculptors: Their Careers and Extant Works, 16.72 Polykleitos, The J. Paul Getty Museum (archived) Understanding Contrapposto at Roberto Osti's Web Site External links Art history Composition in visual art History of sculpture Human positions Italian words and phrases Sculpture terms
Contrapposto
[ "Biology" ]
1,033
[ "Behavior", "Human positions", "Human behavior" ]
41,545
https://en.wikipedia.org/wiki/Avogadro%20constant
The Avogadro constant, commonly denoted N_A or L, is an SI defining constant with an exact value of 6.02214076×10²³ mol⁻¹ (reciprocal moles). It is this defined number of constituent particles (usually molecules, atoms, ions, or ion pairs—in general, entities) per mole (SI unit) and is used as a normalization factor in relating the amount of substance, n(X), in a sample of a substance X to the corresponding number of entities, N(X): n(X) = N(X)(1/N_A), an aggregate of N(X) reciprocal Avogadro constants. By setting N(X) = 1, a reciprocal Avogadro constant is seen to be equal to one entity, which means that n(X) is more easily interpreted as an aggregate of N(X) entities. In the SI dimensional analysis of measurement units, the dimension of the Avogadro constant is the reciprocal of amount of substance, denoted N⁻¹. The Avogadro number, sometimes denoted N₀, is the numeric value of the Avogadro constant (i.e., without a unit), namely the dimensionless number 6.02214076×10²³; the value was chosen based on the number of atoms in 12 grams of carbon-12, in alignment with the historical definition of a mole. The constant is named after the Italian physicist and chemist Amedeo Avogadro (1776–1856). The Avogadro constant is also the factor that converts the average mass (m) of one particle, in grams, to the molar mass (M) of the substance, in grams per mole (g/mol). That is, M = m·N_A. The constant also relates the molar volume (the volume per mole) of a substance to the average volume nominally occupied by one of its particles, when both are expressed in the same units of volume. For example, since the molar volume of water in ordinary conditions is about 18 cm³/mol, the volume occupied by one molecule of water is about 3.0×10⁻²³ cm³, or about 0.030 nm³ (cubic nanometres). For a crystalline substance, N_A relates the volume of a crystal with one mole worth of repeating unit cells, to the volume of a single cell (both in the same units). 
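The conversions described above (amount of substance from a particle count, per-particle mass from molar mass, and per-molecule volume from molar volume) can be sketched numerically. The water values used here (molar mass ≈ 18.0153 g/mol, molar volume ≈ 18.07 cm³/mol) are approximate illustrative figures, not taken from the text:

```python
# Numeric sketch of the Avogadro-constant conversions; water values are approximate.
NA = 6.02214076e23            # Avogadro constant, mol^-1 (exact since 2019)

M_water = 18.0153             # molar mass of water, g/mol (approximate)
m_molecule = M_water / NA     # average mass of one molecule, g (~2.99e-23 g)

N_entities = 100.0 / m_molecule   # number of molecules in a 100 g sample
n_moles = N_entities / NA         # amount of substance, mol (equals 100/M_water)

v_molecule_cm3 = 18.07 / NA             # volume per molecule, cm^3
v_molecule_nm3 = v_molecule_cm3 * 1e21  # 1 cm^3 = 1e21 nm^3, so ~0.030 nm^3
```

Note that dividing the entity count by N_A simply undoes the earlier division by the per-particle mass, so n_moles reduces to 100/M_water, as the definition n(X) = N(X)/N_A requires.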
Definition The Avogadro constant was historically derived from the old definition of the mole as the amount of substance in 12 grams of carbon-12 (12C); or, equivalently, the number of daltons in a gram, where the dalton is defined as 1/12 of the mass of a 12C atom. By this old definition, the numerical value of the Avogadro constant in mol⁻¹ (the Avogadro number) was a physical constant that had to be determined experimentally. The redefinition of the mole in 2019, as being the amount of substance containing exactly 6.02214076×10²³ particles, meant that the mass of 1 mole of a substance is now exactly the product of the Avogadro number and the average mass of its particles. The dalton, however, is still defined as 1/12 of the mass of a 12C atom, which must be determined experimentally and is known only with finite accuracy. The prior experiments that aimed to determine the Avogadro constant are now re-interpreted as measurements of the value in grams of the dalton. By the old definition of mole, the mass of one mole of a substance, expressed in grams, was precisely equal to the average mass of one particle in daltons. With the new definition, this numerical equivalence is no longer exact, as it is affected by the uncertainty of the value of the dalton in SI units. However, it is still applicable for all practical purposes. For example, the average mass of one molecule of water is about 18.0153 daltons, and the mass of one mole of water is about 18.0153 grams. Also, the Avogadro number is the approximate number of nucleons (protons and neutrons) in one gram of ordinary matter. In older literature, the Avogadro number was also denoted N, although that conflicts with the symbol for number of particles in statistical mechanics. 
History Origin of the concept The Avogadro constant is named after the Italian scientist Amedeo Avogadro (1776–1856), who, in 1811, first proposed that the volume of a gas (at a given pressure and temperature) is proportional to the number of atoms or molecules regardless of the nature of the gas. Avogadro's hypothesis was popularized four years after his death by Stanislao Cannizzaro, who advocated Avogadro's work at the Karlsruhe Congress in 1860. The name Avogadro's number was coined in 1909 by the physicist Jean Perrin, who defined it as the number of molecules in exactly 32 grams of oxygen gas. The goal of this definition was to make the mass of a mole of a substance, in grams, be numerically equal to the mass of one molecule relative to the mass of the hydrogen atom; which, because of the law of definite proportions, was the natural unit of atomic mass, and was assumed to be 1/16 of the atomic mass of oxygen. First measurements The value of Avogadro's number (not yet known by that name) was first obtained indirectly by Josef Loschmidt in 1865, by estimating the number of particles in a given volume of gas. This value, the number density n₀ of particles in an ideal gas, is now called the Loschmidt constant in his honor, and is related to the Avogadro constant, N_A, by n₀ = p₀N_A/(RT₀), where p₀ is the pressure, R is the gas constant, and T₀ is the absolute temperature. Because of this work, the symbol L is sometimes used for the Avogadro constant, and, in German literature, that name may be used for both constants, distinguished only by the units of measurement. (However, N_A should not be confused with the entirely different Loschmidt constant in English-language literature.) Perrin himself determined the Avogadro number by several different experimental methods. He was awarded the 1926 Nobel Prize in Physics, largely for this work. 
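The relation between the Loschmidt and Avogadro constants can be checked numerically. The reference pressure and temperature used here (101325 Pa, 273.15 K) are assumed standard conditions, not values given in the text:

```python
# Loschmidt constant from the Avogadro constant: n0 = p0 * NA / (R * T0).
NA = 6.02214076e23       # Avogadro constant, mol^-1 (exact)
R = 8.31446261815324     # molar gas constant, J/(mol*K) (exact in the 2019 SI)
p0 = 101325.0            # standard atmospheric pressure, Pa (assumed)
T0 = 273.15              # standard temperature, K (assumed)

n0 = p0 * NA / (R * T0)  # number density of an ideal gas, ~2.687e25 m^-3
```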
The electric charge per mole of electrons is a constant called the Faraday constant and has been known since 1834, when Michael Faraday published his works on electrolysis. In 1910, Robert Millikan with the help of Harvey Fletcher obtained the first measurement of the charge on an electron. Dividing the charge on a mole of electrons by the charge on a single electron provided a more accurate estimate of the Avogadro number. SI definition of 1971 In 1971, in its 14th conference, the International Bureau of Weights and Measures (BIPM) decided to regard the amount of substance as an independent dimension of measurement, with the mole as its base unit in the International System of Units (SI). Specifically, the mole was defined as an amount of a substance that contains as many elementary entities as there are atoms in 0.012 kilograms (12 g) of carbon-12 (12C). Thus, in particular, one mole of carbon-12 was exactly 12 grams of the element. By this definition, one mole of any substance contained exactly as many elementary entities as one mole of any other substance. However, this number was a physical constant that had to be experimentally determined since it depended on the mass (in grams) of one atom of 12C, and therefore, it was known only to a limited number of decimal digits. The common rule of thumb that "one gram of matter contains 6.022×10²³ nucleons" was exact for carbon-12, but slightly inexact for other elements and isotopes. In the same conference, the BIPM also named N_A (the factor that converted moles into number of particles) the "Avogadro constant". However, the term "Avogadro number" continued to be used, especially in introductory works. As a consequence of this definition, N_A was not a pure number, but had the metric dimension of reciprocal of amount of substance (mol⁻¹). 
SI redefinition of 2019 In its 26th Conference, the BIPM adopted a different approach: effective 20 May 2019, it defined the Avogadro constant as the exact value N_A = 6.02214076×10²³ mol⁻¹, thus redefining the mole as exactly 6.02214076×10²³ constituent particles of the substance under consideration. One consequence of this change is that the mass of a mole of 12C atoms is no longer exactly 0.012 kg. On the other hand, the dalton (the universal atomic mass unit) remains unchanged as 1/12 of the mass of 12C. Thus, the molar mass constant remains very close to but no longer exactly equal to 1 g/mol, although the difference (of the order of 10⁻¹⁰ in relative terms, as of March 2019) is insignificant for all practical purposes. Connection to other constants The Avogadro constant is related to other physical constants and properties. It relates the molar gas constant R and the Boltzmann constant k_B, which in the SI is defined to be exactly 1.380649×10⁻²³ J/K: R = N_A·k_B. It relates the Faraday constant F and the elementary charge e, which in the SI is defined as exactly 1.602176634×10⁻¹⁹ C: F = N_A·e. It relates the molar mass constant M_u and the atomic mass constant m_u, currently about 1.66054×10⁻²⁴ g: M_u = N_A·m_u ≈ 1 g/mol. See also CODATA 2018 List of scientists whose names are used in physical constants Mole Day References External links 1996 definition of the Avogadro constant from the IUPAC Compendium of Chemical Terminology ("Gold Book") Some Notes on Avogadro's Number, (historical notes) An Exact Value for Avogadro's Number – American Scientist Avogadro and molar Planck constants for the redefinition of the kilogram Scanned version of "Two hypothesis of Avogadro", 1811 Avogadro's article, on BibNum Amount of substance Fundamental constants Physical constants Units of amount
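The exact SI relations among the defining constants discussed in the "Connection to other constants" passage above can be verified directly; the numeric values are the ones fixed by the 2019 SI redefinition:

```python
# SI defining constants fixed in 2019:
NA = 6.02214076e23    # Avogadro constant, mol^-1
kB = 1.380649e-23     # Boltzmann constant, J/K
e = 1.602176634e-19   # elementary charge, C

R = NA * kB           # molar gas constant, ~8.3145 J/(mol*K)
F = NA * e            # Faraday constant, ~96485.332 C/mol
```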
Avogadro constant
[ "Physics", "Chemistry", "Mathematics" ]
1,944
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Chemical quantities", "Amount of substance", "Physical constants", "Wikipedia categories named after physical quantities", "Fundamental constants" ]
41,548
https://en.wikipedia.org/wiki/Phase-locked%20loop
A phase-locked loop or phase lock loop (PLL) is a control system that generates an output signal whose phase is fixed relative to the phase of an input signal. Keeping the input and output phase in lockstep also implies keeping the input and output frequencies the same, thus a phase-locked loop can also track an input frequency. And by incorporating a frequency divider, a PLL can generate a stable frequency that is a multiple of the input frequency. These properties are used for clock synchronization, demodulation, frequency synthesis, clock multipliers, and signal recovery from a noisy communication channel. Since 1969, a single integrated circuit has been able to provide a complete PLL building block, and such devices now offer output frequencies from a fraction of a hertz up to many gigahertz. Thus, PLLs are widely employed in radio, telecommunications, computers (e.g. to distribute precisely timed clock signals in microprocessors), grid-tie inverters (electronic power converters used to integrate DC renewable resources and storage elements such as photovoltaics and batteries with the power grid), and other electronic applications. Simple example A simple analog PLL is an electronic circuit consisting of a variable frequency oscillator and a phase detector in a feedback loop (Figure 1). The oscillator generates a periodic signal with frequency proportional to an applied voltage, hence the term voltage-controlled oscillator (VCO). The phase detector compares the phase of the VCO's output signal with the phase of a periodic input reference signal and outputs a voltage (stabilized by the filter) to adjust the oscillator's frequency so that the phase of the output matches the phase of the input. Clock analogy Phase can be proportional to time, so a phase difference can correspond to a time difference. Left alone, different clocks will mark time at slightly different rates. A mechanical clock, for example, might be fast or slow by a few seconds per hour compared to a reference atomic clock (such as the NIST-F2). 
That time difference becomes substantial over time. Instead, the owner can synchronize their mechanical clock (with varying degrees of accuracy) by phase-locking it to a reference clock. An inefficient synchronization method involves the owner resetting their clock to that more accurate clock's time every week. But, left alone, their clock will still continue to diverge from the reference clock at the same few seconds per hour rate. A more efficient synchronization method (analogous to the simple PLL in Figure 1) utilizes the fast-slow timing adjust control (analogous to how the VCO's frequency can be adjusted) available on some clocks. Analogously to the phase comparator, the owner could notice their clock's misalignment and turn its timing adjust a small proportional amount to make their clock's frequency a little slower (if their clock was fast) or faster (if their clock was slow). If they don't overcompensate, then their clock will be more accurate than before. Over a series of such weekly adjustments, their clock's notion of a second would agree close enough with the reference clock, so they could be said to be locked both in frequency and phase. An early electromechanical version of a phase-locked loop was used in 1921 in the Shortt-Synchronome clock. History Spontaneous synchronization of weakly coupled pendulum clocks was noted by the Dutch physicist Christiaan Huygens as early as 1673. Around the turn of the 19th century, Lord Rayleigh observed synchronization of weakly coupled organ pipes and tuning forks. In 1919, W. H. Eccles and J. H. Vincent found that two electronic oscillators that had been tuned to oscillate at slightly different frequencies but that were coupled to a resonant circuit would soon oscillate at the same frequency. Automatic synchronization of electronic oscillators was described in 1923 by Edward Victor Appleton. 
In 1925, David Robertson, first professor of electrical engineering at the University of Bristol, introduced phase locking in his clock design to control the striking of the bell Great George in the new Wills Memorial Building. Robertson's clock incorporated an electromechanical device that could vary the rate of oscillation of the pendulum, and derived correction signals from a circuit that compared the pendulum phase with that of an incoming telegraph pulse from Greenwich Observatory every morning at 10:00 GMT. Including equivalents of every element of a modern electronic PLL, Robertson's system was notably ahead of its time in that its phase detector was a relay logic implementation of the transistor circuits for phase/frequency detectors not seen until the 1970s.  Robertson's work predated research towards what was later named the phase-lock loop in 1932, when British researchers developed an alternative to Edwin Armstrong's superheterodyne receiver, the Homodyne or direct-conversion receiver. In the homodyne or synchrodyne system, a local oscillator was tuned to the desired input frequency and multiplied with the input signal. The resulting output signal included the original modulation information. The intent was to develop an alternative receiver circuit that required fewer tuned circuits than the superheterodyne receiver. Since the local oscillator would rapidly drift in frequency, an automatic correction signal was applied to the oscillator, maintaining it in the same phase and frequency of the desired signal. The technique was described in 1932, in a paper by Henri de Bellescize, in the French journal L'Onde Électrique. In analog television receivers since at least the late 1930s, phase-locked-loop horizontal and vertical sweep circuits are locked to synchronization pulses in the broadcast signal. 
In 1969, Signetics introduced a line of low-cost monolithic integrated circuits like the NE565 using bipolar transistors, that were complete phase-locked loop systems on a chip, and applications for the technique multiplied. A few years later, RCA introduced the CD4046 Micropower Phase-Locked Loop using CMOS, which also became a popular integrated circuit building block. Structure and function Phase-locked loop mechanisms may be implemented as either analog or digital circuits. Both implementations use the same basic structure. Analog PLL circuits include four basic elements: Phase detector Low-pass filter Voltage controlled oscillator Feedback path, which may include a frequency divider Variations There are several variations of PLLs. Some terms that are used are "analog phase-locked loop" (APLL), also referred to as a "linear phase-locked loop" (LPLL), "digital phase-locked loop" (DPLL), "all digital phase-locked loop" (ADPLL), and "software phase-locked loop" (SPLL). Analog or linear PLL (APLL): Phase detector is an analog multiplier. Loop filter is active or passive. Uses a voltage-controlled oscillator (VCO). APLL is said to be a type II if its loop filter has transfer function with exactly one pole at the origin (see also Egan's conjecture on the pull-in range of type II APLL). Digital PLL (DPLL): An analog PLL with a digital phase detector (such as XOR, edge-triggered JK flip flop, phase frequency detector). May have digital divider in the loop. All digital PLL (ADPLL): Phase detector, filter and oscillator are digital. Uses a numerically controlled oscillator (NCO). Neuronal PLL (NPLL): Phase detector is implemented by neuronal non-linearity, oscillator by rate-controlled oscillating neurons. Software PLL (SPLL): Functional blocks are implemented by software rather than specialized hardware. Charge-pump PLL (CP-PLL): CP-PLL is a modification of phase-locked loops with phase-frequency detector and square waveform signals. See also Gardner's conjecture on CP-PLL. 
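The analog loop's behavior can be illustrated with a minimal numerical sketch: Euler integration of a first-order APLL phase-error model with a sinusoidal phase-detector characteristic. The loop gain, frequency offset, step size, and initial phase below are illustrative assumptions, not values from the article.

```python
import math

# First-order analog PLL phase model: the phase error theta obeys
#   d(theta)/dt = omega_delta - 0.5 * K * sin(theta),
# where 0.5*sin(.) is the phase-detector characteristic for sinusoidal
# input and VCO signals. All numeric values are illustrative.
K = 200.0            # loop gain, rad/s (assumed)
omega_delta = 50.0   # frequency offset between input and free-running VCO, rad/s
dt = 1e-4            # Euler integration step, s
theta = 2.0          # initial phase error, rad

for _ in range(200_000):  # 20 s of simulated time
    theta += dt * (omega_delta - 0.5 * K * math.sin(theta))

# In lock, the loop settles to a static phase error of asin(2*omega_delta/K).
expected = math.asin(2 * omega_delta / K)
```

For offsets larger than K/2 this sketch has no equilibrium and cannot lock, mirroring the finite hold-in range discussed under performance parameters below.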
Performance parameters Type and order. Frequency ranges: hold-in range (tracking range), pull-in range (capture range, acquisition range), lock-in range. See also Gardner's problem on the lock-in range, Egan's conjecture on the pull-in range of type II APLL, Viterbi's problem on the PLL ranges coincidence. Loop bandwidth: Defining the speed of the control loop. Transient response: Like overshoot and settling time to a certain accuracy (like 50 ppm). Steady-state errors: Like remaining phase or timing error. Output spectrum purity: Like sidebands generated from a certain VCO tuning voltage ripple. Phase-noise: Defined by noise energy in a certain frequency band (like 10 kHz offset from carrier). Highly dependent on VCO phase-noise, PLL bandwidth, etc. General parameters: Such as power consumption, supply voltage range, output amplitude, etc. Applications Phase-locked loops are widely used for synchronization purposes; in space communications for coherent demodulation and threshold extension, bit synchronization, and symbol synchronization. Phase-locked loops can also be used to demodulate frequency-modulated signals. In radio transmitters, a PLL is used to synthesize new frequencies which are a multiple of a reference frequency, with the same stability as the reference frequency. Other applications include: Demodulation of frequency modulation (FM): If PLL is locked to an FM signal, the VCO tracks the instantaneous frequency of the input signal. The filtered error voltage which controls the VCO and maintains lock with the input signal is demodulated FM output. The VCO transfer characteristics determine the linearity of the demodulated out. Since the VCO used in an integrated-circuit PLL is highly linear, it is possible to realize highly linear FM demodulators. Demodulation of frequency-shift keying (FSK): In digital data communication and computer peripherals, binary data is transmitted by means of a carrier frequency which is shifted between two preset frequencies. 
Recovery of small signals that otherwise would be lost in noise (lock-in amplifier to track the reference frequency) Recovery of clock timing information from a data stream such as from a disk drive Clock multipliers in microprocessors that allow internal processor elements to run faster than external connections, while maintaining precise timing relationships Demodulation of modems and other tone signals for telecommunications and remote control. DSP of video signals; Phase-locked loops are also used to synchronize phase and frequency to the input analog video signal so it can be sampled and digitally processed Atomic force microscopy in frequency modulation mode, to detect changes of the cantilever resonance frequency due to tip–surface interactions DC motor drive Clock recovery Some data streams, especially high-speed serial data streams (such as the raw stream of data from the magnetic head of a disk drive), are sent without an accompanying clock. The receiver generates a clock from an approximate frequency reference, and then uses a PLL to phase-align it to the data stream's signal edges. This process is referred to as clock recovery. For this scheme to work, the data stream must have edges frequently enough to correct any drift in the PLL's oscillator. Thus a line code with a hard upper bound on the maximum time between edges (e.g. 8b/10b encoding) is typically used to encode the data. Deskewing If a clock is sent in parallel with data, that clock can be used to sample the data. Because the clock must be received and amplified before it can drive the flip-flops which sample the data, there will be a finite, and process-, temperature-, and voltage-dependent delay between the detected clock edge and the received data window. This delay limits the frequency at which data can be sent. One way of eliminating this delay is to include a deskew PLL on the receive side, so that the clock at each data flip-flop is phase-matched to the received clock. 
In that type of application, a special form of a PLL called a delay-locked loop (DLL) is frequently used. Clock generation Many electronic systems include processors of various sorts that operate at hundreds of megahertz to gigahertz, well above the practical frequencies of crystal oscillators. Typically, the clocks supplied to these processors come from clock generator PLLs, which multiply a lower-frequency reference clock (usually 50 or 100 MHz) up to the operating frequency of the processor. The multiplication factor can be quite large in cases where the operating frequency is multiple gigahertz and the reference crystal is just tens or hundreds of megahertz. Spread spectrum All electronic systems emit some unwanted radio frequency energy. Various regulatory agencies (such as the FCC in the United States) put limits on the emitted energy and any interference caused by it. The emitted noise generally appears at sharp spectral peaks (usually at the operating frequency of the device, and a few harmonics). A system designer can use a spread-spectrum PLL to reduce interference with high-Q receivers by spreading the energy over a larger portion of the spectrum. For example, by changing the operating frequency up and down by a small amount (about 1%), a device running at hundreds of megahertz can spread its interference evenly over a few megahertz of spectrum, which drastically reduces the amount of noise seen on broadcast FM radio channels, which have a bandwidth of several tens of kilohertz. Clock distribution Typically, the reference clock enters the chip and drives a phase locked loop (PLL), which then drives the system's clock distribution. The clock distribution is usually balanced so that the clock arrives at every endpoint simultaneously. One of those endpoints is the PLL's feedback input. 
The function of the PLL is to compare the distributed clock to the incoming reference clock, and vary the phase and frequency of its output until the reference and feedback clocks are phase and frequency matched. PLLs are ubiquitous—they tune clocks in systems several feet across, as well as clocks in small portions of individual chips. Sometimes the reference clock may not actually be a pure clock at all, but rather a data stream with enough transitions that the PLL is able to recover a regular clock from that stream. Sometimes the reference clock is the same frequency as the clock driven through the clock distribution, other times the distributed clock may be some rational multiple of the reference. AM detection A PLL may be used to synchronously demodulate amplitude modulated (AM) signals. The PLL recovers the phase and frequency of the incoming AM signal's carrier. The recovered phase at the VCO differs from the carrier's by 90°, so it is shifted in phase to match, and then fed to a multiplier. The output of the multiplier contains both the sum and the difference frequency signals, and the demodulated output is obtained by low-pass filtering. Since the PLL responds only to the carrier frequencies which are very close to the VCO output, a PLL AM detector exhibits a high degree of selectivity and noise immunity which is not possible with conventional peak type AM demodulators. However, the loop may lose lock where AM signals have 100% modulation depth. Jitter and noise reduction One desirable property of all PLLs is that the reference and feedback clock edges be brought into very close alignment. The average difference in time between the phases of the two signals when the PLL has achieved lock is called the static phase offset (also called the steady-state phase error). The variance between these phases is called tracking jitter. Ideally, the static phase offset should be zero, and the tracking jitter should be as low as possible. 
Phase noise is another type of jitter observed in PLLs, and is caused by the oscillator itself and by elements used in the oscillator's frequency control circuit. Some technologies are known to perform better than others in this regard. The best digital PLLs are constructed with emitter-coupled logic (ECL) elements, at the expense of high power consumption. To keep phase noise low in PLL circuits, it is best to avoid saturating logic families such as transistor-transistor logic (TTL) or CMOS. Another desirable property of all PLLs is that the phase and frequency of the generated clock be unaffected by rapid changes in the voltages of the power and ground supply lines, as well as the voltage of the substrate on which the PLL circuits are fabricated. This is called substrate and supply noise rejection. The higher the noise rejection, the better. To further improve the phase noise of the output, an injection locked oscillator can be employed following the VCO in the PLL. Frequency synthesis In digital wireless communication systems (GSM, CDMA etc.), PLLs are used to provide the local oscillator up-conversion during transmission and down-conversion during reception. In most cellular handsets this function has been largely integrated into a single integrated circuit to reduce the cost and size of the handset. However, due to the high performance required of base station terminals, the transmission and reception circuits are built with discrete components to achieve the levels of performance required. GSM local oscillator modules are typically built with a frequency synthesizer integrated circuit and discrete resonator VCOs. Phase angle reference Grid-tie inverters based on voltage source inverters source or sink real power into the AC electric grid as a function of the phase angle of the voltage they generate relative to the grid's voltage phase angle, which is measured using a PLL. 
In photovoltaic applications, the more the sine wave produced leads the grid voltage wave, the more power is injected into the grid. For battery applications, the more the sine wave produced lags the grid voltage wave, the more the battery charges from the grid, and the more the sine wave produced leads the grid voltage wave, the more the battery discharges into the grid. Block diagram The block diagram shown in the figure shows an input signal, FI, which is used to generate an output, FO. The input signal is often called the reference signal (also abbreviated FREF). At the input, a phase detector (shown as the Phase frequency detector and Charge pump blocks in the figure) compares two input signals, producing an error signal which is proportional to their phase difference. The error signal is then low-pass filtered and used to drive a VCO which creates an output phase. The output is fed through an optional divider back to the input of the system, producing a negative feedback loop. If the output phase drifts, the error signal will increase, driving the VCO phase in the opposite direction so as to reduce the error. Thus the output phase is locked to the phase of the input. Analog phase locked loops are generally built with an analog phase detector, low-pass filter and VCO placed in a negative feedback configuration. A digital phase locked loop uses a digital phase detector; it may also have a divider in the feedback path or in the reference path, or both, in order to make the PLL's output signal frequency a rational multiple of the reference frequency. A non-integer multiple of the reference frequency can also be created by replacing the simple divide-by-N counter in the feedback path with a programmable pulse swallowing counter. This technique is usually referred to as a fractional-N synthesizer or fractional-N PLL. The oscillator generates a periodic output signal. Assume that initially the oscillator is at nearly the same frequency as the reference signal. 
If the phase from the oscillator falls behind that of the reference, the phase detector changes the control voltage of the oscillator so that it speeds up. Likewise, if the phase creeps ahead of the reference, the phase detector changes the control voltage to slow down the oscillator. Since initially the oscillator may be far from the reference frequency, practical phase detectors may also respond to frequency differences, so as to increase the lock-in range of allowable inputs. Depending on the application, either the output of the controlled oscillator, or the control signal to the oscillator, provides the useful output of the PLL system. Elements Phase detector A phase detector (PD) generates a voltage, which represents the phase difference between two signals. In a PLL, the two inputs of the phase detector are the reference input and the feedback from the VCO. The PD output voltage is used to control the VCO such that the phase difference between the two inputs is held constant, making it a negative feedback system. Different types of phase detectors have different performance characteristics. For instance, the frequency mixer produces harmonics that add complexity in applications where spectral purity of the VCO signal is important. The resulting unwanted (spurious) sidebands, also called "reference spurs", can dominate the filter requirements and reduce the capture range well below or increase the lock time beyond the requirements. In these applications the more complex digital phase detectors are used which do not have as severe a reference spur component on their output. Also, when in lock, the steady-state phase difference at the inputs using this type of phase detector is near 90 degrees. In PLL applications it is frequently required to know when the loop is out of lock. The more complex digital phase-frequency detectors usually have an output that allows a reliable indication of an out of lock condition. 
An XOR gate is often used for digital PLLs as an effective yet simple phase detector. It can also be used in an analog sense with only slight modification to the circuitry. Filter The block commonly called the PLL loop filter (usually a low-pass filter) generally has two distinct functions. The primary function is to determine loop dynamics, also called stability. This is how the loop responds to disturbances, such as changes in the reference frequency, changes of the feedback divider, or at startup. Common considerations are the range over which the loop can achieve lock (pull-in range, lock range or capture range), how fast the loop achieves lock (lock time, lock-up time or settling time) and damping behavior. Depending on the application, this may require one or more of the following: a simple proportion (gain or attenuation), an integral (low-pass filter) and/or derivative (high-pass filter). Loop parameters commonly examined for this are the loop's gain margin and phase margin. Common concepts in control theory including the PID controller are used to design this function. The second common consideration is limiting the amount of reference frequency energy (ripple) appearing at the phase detector output that is then applied to the VCO control input. This frequency modulates the VCO and produces FM sidebands commonly called "reference spurs". The design of this block can be dominated by either of these considerations, or can be a complex process juggling the interactions of the two. The typical trade-off of increasing the bandwidth is degraded stability. Conversely, the tradeoff of extra damping for better stability is reduced speed and increased settling time. Often the phase-noise is also affected. Oscillator All phase-locked loops employ an oscillator element with variable frequency capability. 
This can be an analog VCO either driven by analog circuitry in the case of an APLL or driven digitally through the use of a digital-to-analog converter as is the case for some DPLL designs. Pure digital oscillators such as a numerically controlled oscillator are used in ADPLLs. Feedback path and optional divider PLLs may include a divider between the oscillator and the feedback input to the phase detector to produce a frequency synthesizer. A programmable divider is particularly useful in radio transmitter applications and for computer clocking, since a large number of frequencies can be produced from a single stable, accurate, quartz crystal–controlled reference oscillator (which were expensive before commercial-scale hydrothermal synthesis provided cheap synthetic quartz). Some PLLs also include a divider between the reference clock and the reference input to the phase detector. If the divider in the feedback path divides by and the reference input divider divides by , it allows the PLL to multiply the reference frequency by . It might seem simpler to just feed the PLL a lower frequency, but in some cases the reference frequency may be constrained by other issues, and then the reference divider is useful. Frequency multiplication can also be attained by locking the VCO output to the Nth harmonic of the reference signal. Instead of a simple phase detector, the design uses a harmonic mixer (sampling mixer). The harmonic mixer turns the reference signal into an impulse train that is rich in harmonics. The VCO output is coarse tuned to be close to one of those harmonics. Consequently, the desired harmonic mixer output (representing the difference between the N harmonic and the VCO output) falls within the loop filter passband. It should also be noted that the feedback is not limited to a frequency divider. This element can be other elements such as a frequency multiplier, or a mixer. 
The multiplier will make the VCO output a sub-multiple (rather than a multiple) of the reference frequency. A mixer can translate the VCO frequency by a fixed offset. It may also be a combination of these. For example, a divider following a mixer allows the divider to operate at a much lower frequency than the VCO without a loss in loop gain. Modeling Time domain model of APLL The equations governing a phase-locked loop with an analog multiplier as the phase detector and linear filter may be derived as follows. Let the input to the phase detector be f1(θ1(t)) and the output of the VCO be f2(θ2(t)), with phases θ1(t) and θ2(t). The functions f1(θ) and f2(θ) describe waveforms of signals. Then the output of the phase detector φ(t) is given by φ(t) = f1(θ1(t)) f2(θ2(t)). The VCO frequency is usually taken as a function of the VCO input g(t) as dθ2/dt = ωfree + gv g(t), where gv is the sensitivity of the VCO and is expressed in Hz / V; ωfree is a free-running frequency of VCO. The loop filter can be described by a system of linear differential equations dx/dt = Ax + bξ(t), σ(t) = c*x, x(0) = x0, where ξ(t) is an input of the filter, σ(t) is an output of the filter, A is an n-by-n matrix, and x is an n-dimensional state vector. x0 represents an initial state of the filter. The star symbol is a conjugate transpose. Hence the following system describes PLL: dx/dt = Ax + b f1(θ1(t)) f2(θ2(t)), dθ2/dt = ωfree + gv (c*x), θ2(0) = θ0, where θ0 is an initial phase shift. Phase domain model of APLL Consider that the input of the PLL and the VCO output are high-frequency signals. Then for any piecewise differentiable 2π-periodic functions f1(θ) and f2(θ) there is a function φ(θ) such that the output of the filter in the phase domain is asymptotically equal (the difference is small with respect to the frequencies) to the output of the filter in the time domain model. Here the function φ(θ) is a phase detector characteristic. Denote by θΔ(t) the phase difference θΔ = θ1 − θ2. Then the following dynamical system describes PLL behavior: dx/dt = Ax + b φ(θΔ), dθΔ/dt = ωΔ − gv (c*x). Here ωΔ = ω1 − ωfree; ω1 is the frequency of a reference oscillator (we assume that ω1 is constant). Example Consider sinusoidal signals f1(θ) = sin(θ) and f2(θ) = cos(θ), and a simple one-pole RC circuit as a filter. 
The time-domain model takes the form dx/dt = Ax + b sin(θ1(t)) cos(θ2(t)), dθ2/dt = ωfree + gv (c*x). The PD characteristic for these signals is equal to φ(θ) = (1/2) sin(θ). Hence the phase domain model takes the form dx/dt = Ax + (b/2) sin(θΔ), dθΔ/dt = ωΔ − gv (c*x). This system of equations is equivalent to the equation of a mathematical pendulum. Linearized phase domain model Phase locked loops can also be analyzed as control systems by applying the Laplace transform. The loop response can be written as θo/θi = Kp Kv F(s) / (s + Kp Kv F(s)), where θo is the output phase in radians, θi is the input phase in radians, Kp is the phase detector gain in volts per radian, Kv is the VCO gain in radians per volt-second, and F(s) is the loop filter transfer function (dimensionless). The loop characteristics can be controlled by inserting different types of loop filters. The simplest filter is a one-pole RC circuit. The loop transfer function in this case is F(s) = 1 / (1 + sRC). The loop response becomes θo/θi = (Kp Kv / RC) / (s^2 + s/RC + Kp Kv / RC). This is the form of a classic harmonic oscillator. The denominator can be related to that of a second order system s^2 + 2ζωn s + ωn^2, where ζ is the damping factor and ωn is the natural frequency of the loop. For the one-pole RC filter, ωn = sqrt(Kp Kv / RC) and ζ = 1 / (2 sqrt(Kp Kv RC)). The loop natural frequency is a measure of the response time of the loop, and the damping factor is a measure of the overshoot and ringing. Ideally, the natural frequency should be high and the damping factor should be near 0.707 (critical damping). With a single pole filter, it is not possible to control the loop frequency and damping factor independently. For the case of critical damping, RC = 1 / (2 Kp Kv) and ωn = Kp Kv sqrt(2). A slightly more effective filter, the lag-lead filter, includes one pole and one zero. This can be realized with two resistors and one capacitor. The transfer function for this filter is F(s) = (1 + s τ2) / (1 + s(τ1 + τ2)). This filter has two time constants, τ1 = R1 C and τ2 = R2 C. Substituting above yields the following natural frequency and damping factor: ωn = sqrt(Kp Kv / (τ1 + τ2)), ζ = (ωn τ2)/2 + ωn/(2 Kp Kv). The loop filter components can be calculated independently for a given natural frequency and damping factor: τ2 = 2ζ/ωn − 1/(Kp Kv), τ1 = Kp Kv/ωn^2 − τ2. Real world loop filter design can be much more complex e.g. using higher order filters to reduce various types or sources of phase noise. 
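The second-order loop relations above can be checked numerically. The gains Kp and Kv and the target loop parameters below are illustrative assumptions, not values from the article.

```python
import math

# Check the second-order relations for the two loop filters discussed above.
Kp = 0.8       # phase-detector gain, V/rad (assumed)
Kv = 1000.0    # VCO gain, rad/(V*s) (assumed)
K = Kp * Kv

# One-pole RC filter: critical damping (zeta = 1/sqrt(2)) at RC = 1/(2K).
RC = 1 / (2 * K)
omega_n = math.sqrt(K / RC)            # should equal K * sqrt(2)
zeta = 1 / (2 * math.sqrt(K * RC))     # should equal 1/sqrt(2)

# Lag-lead filter: choose omega_n and zeta, then solve for time constants.
wn_target, zeta_target = 500.0, 0.707
tau2 = 2 * zeta_target / wn_target - 1 / K
tau1 = K / wn_target ** 2 - tau2

# Round trip: substitute back into the natural-frequency/damping formulas.
wn_check = math.sqrt(K / (tau1 + tau2))
zeta_check = wn_check * tau2 / 2 + wn_check / (2 * K)
```

Unlike the single-pole case, the lag-lead solution lets the designer pick the natural frequency and damping independently, which is the point made in the text.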
(See the D Banerjee ref below) Implementing a digital phase-locked loop in software Digital phase locked loops can be implemented in hardware, using integrated circuits such as a CMOS 4046. However, with microcontrollers becoming faster, it may make sense to implement a phase locked loop in software for applications that do not require locking onto signals in the MHz range or faster, such as precisely controlling motor speeds. Software implementation has several advantages including easy customization of the feedback loop including changing the multiplication or division ratio between the signal being tracked and the output oscillator. Furthermore, a software implementation is useful to understand and experiment with. As an example, a phase-locked loop implemented using a phase frequency detector is presented in MATLAB, as this type of phase detector is robust and easy to implement.

% This example is written in MATLAB
% Initialize variables
vcofreq = zeros(1, numiterations);
ervec = zeros(1, numiterations);
% Keep track of last states of reference, signal, and error signal
qsig = 0; qref = 0; lref = 0; lsig = 0; lersig = 0;
phs = 0;
freq = 0;
% Loop filter constants (proportional and derivative)
% Currently powers of two to facilitate multiplication by shifts
prop = 1 / 128;
deriv = 64;
for it = 1:numiterations
    % Simulate a local oscillator using a 16-bit counter
    phs = mod(phs + floor(freq / 2 ^ 16), 2 ^ 16);
    ref = phs < 32768;
    % Get the next digital value (0 or 1) of the signal to track
    sig = tracksig(it);
    % Implement the phase-frequency detector
    rst = ~ (qsig & qref);                 % Reset the "flip-flops" of the phase-frequency
                                           % detector when both signal and reference are high
    qsig = (qsig | (sig & ~ lsig)) & rst;  % Trigger signal flip-flop on leading edge of signal
    qref = (qref | (ref & ~ lref)) & rst;  % Trigger reference flip-flop on leading edge of reference
    lref = ref; lsig = sig;                % Store these values for next iteration (for edge detection)
    ersig = qref - qsig;                   % Compute the error signal (whether frequency should increase or decrease)
                                           % Error signal is given by one or the other flip flop signal
    % Implement a pole-zero filter by proportional and derivative input to frequency
    filtered_ersig = ersig + (ersig - lersig) * deriv;
    % Keep error signal for proportional output
    lersig = ersig;
    % Integrate VCO frequency using the error signal
    freq = freq - 2 ^ 16 * filtered_ersig * prop;
    % Frequency is tracked as a fixed-point binary fraction
    % Store the current VCO frequency
    vcofreq(1, it) = freq / 2 ^ 16;
    % Store the error signal to show whether signal or reference is higher frequency
    ervec(1, it) = ersig;
end

In this example, an array tracksig is assumed to contain a reference signal to be tracked. The oscillator is implemented by a counter, with the most significant bit of the counter indicating the on/off status of the oscillator. This code simulates the two D-type flip-flops that comprise a phase-frequency comparator. When either the reference or signal has a positive edge, the corresponding flip-flop switches high. Once both reference and signal are high, both flip-flops are reset. Which flip-flop is high determines at that instant whether the reference or signal leads the other. The error signal is the difference between these two flip-flop values. The pole-zero filter is implemented by adding the error signal and its derivative to the filtered error signal. This in turn is integrated to find the oscillator frequency. In practice, one would likely insert other operations into the feedback of this phase-locked loop. For example, if the phase locked loop were to implement a frequency multiplier, the oscillator signal could be divided in frequency before it is compared to the reference signal. See also Frequency-locked loop Charge-pump phase-locked loop Carrier recovery Circle map – A simple mathematical model of the phase-locked loop showing both mode-locking and chaotic behavior. 
Costas loop Delay-locked loop (DLL) Direct conversion receiver Direct digital synthesizer Kalman filter PLL multibit Shortt–Synchronome clock – Slave pendulum phase-locked to master (ca 1921) Notes References Further reading . . (provides useful Matlab scripts for simulation) . (provides useful Matlab scripts for simulation) . (FM Demodulation) . An article on designing a standard PLL IC for Bluetooth applications. External links Phase locked loop primer – Includes embedded video Excel Unusual hosts an animated PLL model and the tutorials to code such a model. Articles with example MATLAB/Octave code Communication circuits Electronic design Electronic oscillators Radio electronics
Phase-locked loop
[ "Engineering" ]
7,126
[ "Radio electronics", "Telecommunications engineering", "Electronic design", "Electronic engineering", "Design", "Communication circuits" ]
41,549
https://en.wikipedia.org/wiki/Phase%20noise
In signal processing, phase noise is the frequency-domain representation of random fluctuations in the phase of a waveform, corresponding to time-domain deviations from perfect periodicity (jitter). Generally speaking, radio-frequency engineers speak of the phase noise of an oscillator, whereas digital-system engineers work with the jitter of a clock.

Definitions

An ideal oscillator would generate a pure sine wave. In the frequency domain, this would be represented as a single pair of Dirac delta functions (positive and negative conjugates) at the oscillator's frequency; i.e., all the signal's power is at a single frequency. All real oscillators have phase-modulated noise components. The phase noise components spread the power of a signal to adjacent frequencies, resulting in noise sidebands. Consider the following noise-free signal: v(t) = A cos(2πν0t). Phase noise is added to this signal by adding a stochastic process represented by φ(t) to the signal as follows: v(t) = A cos(2πν0t + φ(t)). Different phase noise processes φ(t) possess different power spectral densities (PSDs). For example, a white noise PSD follows an f^0 (flat) trend, a pink noise PSD follows a 1/f trend, and a brown noise PSD follows a 1/f^2 trend. Sφ(f) is the single-sided (f > 0) phase noise PSD, given by the Fourier transform of the autocorrelation of the phase noise. The noise can also be represented as the single-sided (f > 0) frequency noise PSD, Sν(f), or the fractional frequency stability PSD, Sy(f), which defines the frequency fluctuations in terms of the deviation from the carrier frequency ν0. The phase noise can also be given as the spectral purity, L(f), the single-sideband power in a 1 Hz bandwidth at a frequency offset f from the carrier frequency ν0, referenced to the carrier power.

Jitter conversions

Phase noise is sometimes also measured and expressed as a power obtained by integrating L(f) over a certain range of offset frequencies. For example, the phase noise may be −40 dBc integrated over the range of 1 kHz to 100 kHz.
This integrated phase noise (expressed in degrees) can be converted to jitter (expressed in seconds) using the following formula: jitter (seconds) = phase error (degrees) / (360° × fosc). In the absence of 1/f noise, in a region where the phase noise displays a −20 dBc/decade slope (Leeson's equation), the RMS cycle jitter can be related directly to the phase noise, and likewise the phase noise can be expressed in terms of the RMS cycle jitter.

Measurement

Phase noise can be measured using a spectrum analyzer if the phase noise of the device under test (DUT) is large with respect to that of the spectrum analyzer's local oscillator. Care should be taken that observed values are due to the measured signal and not the shape factor of the spectrum analyzer's filters. Spectrum-analyzer-based measurement can show the phase-noise power over many decades of frequency; e.g., 1 Hz to 10 MHz. The slope with offset frequency in various offset frequency regions can provide clues as to the source of the noise; e.g., low-frequency flicker noise decreasing at 30 dB per decade (= 9 dB per octave). Phase noise measurement systems are alternatives to spectrum analyzers. These systems may use internal and external references and allow measurement of both residual (additive) and absolute noise. Additionally, these systems can make low-noise, close-to-the-carrier measurements.

Linewidths

The sinusoidal output of an ideal oscillator is a Dirac delta function in the power spectral density centered at the frequency of the sinusoid. Such perfect spectral purity is not achievable in a practical oscillator. Spreading of the spectrum line caused by phase noise is characterized by the fundamental linewidth and the integral linewidth. The fundamental linewidth, also known as the white-noise-limited linewidth or the intrinsic linewidth, is the linewidth of an oscillator's PSD in the presence of only white noise sources (noise with a PSD that follows an f^0 trend, i.e., equal across all frequencies). The fundamental linewidth takes a Lorentzian spectral line shape. White noise produces a τ^(−1/2) slope on an Allan deviation plot at small averaging times.
The integral linewidth, also known as the effective linewidth or the total linewidth, is the linewidth of an oscillator's PSD in the presence of both white noise sources (noise with a PSD that follows an f^0 trend) and pink noise sources (noise with a PSD that follows a 1/f trend). Pink noise is sometimes called flicker noise, or simply 1/f noise. The integral linewidth takes a Voigt lineshape, a convolution of the white-noise-induced Lorentzian lineshape and the pink-noise-induced Gaussian lineshape. Pink noise produces a flat (τ^0) region on an Allan deviation plot at moderate averaging times. This flat line on the Allan deviation plot is also known as the flicker floor. Additionally, the oscillator might experience frequency drift over long periods of time, slowly moving the center frequency of the Voigt lineshape. This drift is a brown noise source (noise with a PSD that follows a 1/f^2 trend), and produces a rising slope on an Allan deviation plot at large averaging times.

Limiting System Performance

A laser is a common oscillator that is characterized by its noise, and thus its laser linewidth. The laser noise imposes fundamental limitations on the systems the laser is used in, such as loss of sensitivity in radar and communications systems, lack of definition in imaging systems, and a higher bit error rate in digital systems. Lasers with a near-infrared center wavelength are used in many atomic, molecular, and optical physics experiments to provide photons that interact with atoms. The requirements for the spectral purity at specific frequency offsets of the lasers used in qubit operation (such as clock-transition lasers and state-preparation lasers) are highly stringent because the coherence time of the qubit is directly related to the linewidth of the lasers.

See also Allan variance Flicker noise Leeson's equation Maximum time interval error Noise spectral density Spectral density Spectral phase Opto-electronic oscillator

References

Further reading Ulrich L.
Rohde, A New and Efficient Method of Designing Low Noise Microwave Oscillators, https://depositonce.tu-berlin.de/bitstream/11303/1306/1/Dokument_16.pdf Ajay Poddar, Ulrich Rohde, Anisha Apte, “ How Low Can They Go, Oscillator Phase noise model, Theoretical, Experimental Validation, and Phase Noise Measurements”, IEEE Microwave Magazine, Vol. 14, No. 6, pp. 50–72, September/October 2013. Ulrich Rohde, Ajay Poddar, Anisha Apte, “Getting Its Measure”, IEEE Microwave Magazine, Vol. 14, No. 6, pp. 73–86, September/October 2013 U. L. Rohde, A. K. Poddar, Anisha Apte, “Phase noise measurement and its limitations”, Microwave Journal, pp. 22–46, May 2013 A. K. Poddar, U.L. Rohde, “Technique to Minimize Phase Noise of Crystal Oscillators”, Microwave Journal, pp. 132–150, May 2013. A. K. Poddar, U. L. Rohde, and E. Rubiola, “Phase noise measurement: Challenges and uncertainty”, 2014 IEEE IMaRC, Bangalore, Dec 2014. Oscillators Frequency-domain analysis Telecommunication theory Noise (electronics)
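The jitter conversions described above can be sketched numerically. The two relations below are the standard textbook ones (jitter in seconds from an RMS phase deviation in degrees, and RMS phase jitter in radians from an integrated single-sideband power in dBc); they are offered as illustrations, since the article's own displayed formulas were lost in extraction.

```python
import math

def degrees_to_jitter_seconds(phase_deg_rms, carrier_hz):
    # One carrier period (1/f0) spans 360 degrees of phase
    return (phase_deg_rms / 360.0) / carrier_hz

def integrated_dbc_to_radians(integrated_dbc):
    # RMS phase jitter (rad) from integrated SSB phase noise power;
    # the factor of 2 accounts for both sidebands
    return math.sqrt(2.0 * 10.0 ** (integrated_dbc / 10.0))

# Example: -40 dBc integrated from 1 kHz to 100 kHz on a 100 MHz carrier
phi_rms_rad = integrated_dbc_to_radians(-40.0)   # ~0.0141 rad
phi_rms_deg = math.degrees(phi_rms_rad)          # ~0.81 degrees
jitter_s = degrees_to_jitter_seconds(phi_rms_deg, 100e6)  # ~22.5 ps
```

With these example numbers, the −40 dBc integrated phase noise corresponds to roughly 0.8 degrees RMS, or about 22.5 ps of jitter on a 100 MHz carrier.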
Phase noise
[ "Physics" ]
1,582
[ "Frequency-domain analysis", "Spectrum (physical sciences)" ]
41,550
https://en.wikipedia.org/wiki/Phase%20perturbation
Phase perturbation is the shifting, from whatever cause, of the phase of an electronic signal. The shifting is often quite rapid, and may appear to be random or cyclic. The phase departure in phase perturbation is usually larger, but less rapid, than that in phase jitter. Phase perturbation may be expressed in degrees, with any cyclic component expressed in hertz. References Frequency-domain analysis Telecommunication theory
Phase perturbation
[ "Physics" ]
86
[ "Frequency-domain analysis", "Spectrum (physical sciences)" ]
41,553
https://en.wikipedia.org/wiki/Photocurrent
Photocurrent is the electric current through a photosensitive device, such as a photodiode, as the result of exposure to radiant power. The photocurrent may occur as a result of the photoelectric, photoemissive, or photovoltaic effect. The photocurrent may be enhanced by internal gain caused by interaction among ions and photons under the influence of applied fields, such as occurs in an avalanche photodiode (APD). When suitable radiation is used, the photoelectric current is directly proportional to the intensity of the radiation and increases with the accelerating potential until it reaches a maximum, beyond which it does not increase with any further increase in accelerating potential. This highest (maximum) value of the photocurrent is called the saturation current. The value of the retarding potential at which the photocurrent becomes zero is called the cut-off voltage or stopping potential for the given frequency of the incident radiation. Photovoltaics The generation of a photocurrent forms the basis of the photovoltaic cell. Photocurrent spectroscopy A characterization technique called photocurrent spectroscopy (PCS), also known as photoconductivity spectroscopy, is widely used for studying the optoelectronic properties of semiconductors and other light-absorbing materials. In this technique, the semiconductor is contacted with electrodes that allow an electric bias to be applied, while a tunable light source of a given wavelength (energy) and power, usually pulsed by a mechanical chopper, illuminates the sample. The quantity measured is the electrical response of the circuit, coupled with the spectrograph obtained by varying the incident light energy with a monochromator. The circuit and optics are coupled by use of a lock-in amplifier. The measurements give information related to the band gap of the semiconductor, allowing for identification of various charge transitions such as exciton and trion energies.
This is highly relevant for studying semiconductor nanostructures such as quantum wells, and other nanomaterials such as transition metal dichalcogenide monolayers. Furthermore, by using a piezo stage to vary the lateral position of the semiconductor with micron precision, one can generate a false-color image of the spectra at different positions. This is called scanning photocurrent microscopy (SPCM). See also Photoconductivity Transient photocurrent (TPC) References Electromagnetism
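The stopping-potential behaviour described in the lead follows Einstein's photoelectric equation, e·V0 = h·f − φ, where φ is the material's work function. A minimal sketch (the 2.0 eV work function is a made-up example value, not from the article):

```python
# Physical constants: Planck constant (J*s) and elementary charge (C)
H = 6.62607015e-34
Q_E = 1.602176634e-19

def stopping_potential(frequency_hz, work_function_ev):
    """Stopping potential V0 in volts from Einstein's photoelectric equation,
    e*V0 = h*f - phi; returns 0 when the photon energy is below the
    work function (no photoemission, hence no current to cut off)."""
    photon_energy_ev = H * frequency_hz / Q_E
    return max(0.0, photon_energy_ev - work_function_ev)
```

For example, a 1×10^15 Hz photon carries about 4.14 eV, so against a 2.0 eV work function the stopping potential is about 2.14 V.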
Photocurrent
[ "Physics" ]
502
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
41,556
https://en.wikipedia.org/wiki/PIN%20diode
A PIN diode is a diode with a wide, undoped intrinsic semiconductor region between a p-type semiconductor and an n-type semiconductor region. The p-type and n-type regions are typically heavily doped because they are used for ohmic contacts. The wide intrinsic region is in contrast to an ordinary p–n diode. The wide intrinsic region makes the PIN diode an inferior rectifier (one typical function of a diode), but it makes it suitable for attenuators, fast switches, photodetectors, and high-voltage power electronics applications. The PIN photodiode was invented by Jun-ichi Nishizawa and his colleagues in 1950. Operation A PIN diode operates under what is known as high-level injection. In other words, the intrinsic "i" region is flooded with charge carriers from the "p" and "n" regions. Its function can be likened to filling up a water bucket with a hole on the side. Once the water reaches the hole's level it will begin to pour out. Similarly, the diode will conduct current once the flooded electrons and holes reach an equilibrium point, where the number of electrons is equal to the number of holes in the intrinsic region. When the diode is forward biased, the injected carrier concentration is typically several orders of magnitude higher than the intrinsic carrier concentration. Due to this high-level injection, which in turn is due to the depletion process, the electric field extends deeply (almost the entire length) into the intrinsic region. This electric field helps speed up the transport of charge carriers from the P to the N region, which results in faster operation of the diode, making it a suitable device for high-frequency operation. Characteristics The PIN diode obeys the standard diode equation for low-frequency signals. At higher frequencies, the diode looks like an almost perfect (very linear, even for large signals) resistor. The P-I-N diode has a relatively large stored charge adrift in a thick intrinsic region.
At a low-enough frequency, the stored charge can be fully swept and the diode turns off. At higher frequencies, there is not enough time to sweep the charge from the drift region, so the diode never turns off. The time required to sweep the stored charge from a diode junction is its reverse recovery time, and it is relatively long in a PIN diode. For a given semiconductor material, on-state impedance, and minimum usable RF frequency, the reverse recovery time is fixed. This property can be exploited; one variety of P-I-N diode, the step recovery diode, exploits the abrupt impedance change at the end of the reverse recovery to create a narrow impulse waveform useful for frequency multiplication with high multiples. The high-frequency resistance is inversely proportional to the DC bias current through the diode. A PIN diode, suitably biased, therefore acts as a variable resistor. This high-frequency resistance may vary over a wide range (from to in some cases; the useful range is smaller, though). The wide intrinsic region also means the diode will have a low capacitance when reverse-biased. In a PIN diode the depletion region exists almost completely within the intrinsic region. This depletion region is much larger than in a PN diode and almost constant-size, independent of the reverse bias applied to the diode. This increases the volume where electron-hole pairs can be generated by an incident photon. Some photodetector devices, such as PIN photodiodes and phototransistors (in which the base-collector junction is a PIN diode), use a PIN junction in their construction. The diode design has some design trade-offs. Increasing the cross-section area of the intrinsic region increases its stored charge reducing its RF on-state resistance while also increasing reverse bias capacitance and increasing the drive current required to remove the charge during a fixed switching time, with no effect on the minimum time required to sweep the charge from the I region. 
Increasing the thickness of the intrinsic region increases the total stored charge, decreases the minimum RF frequency, and decreases the reverse-bias capacitance, but doesn't decrease the forward-bias RF resistance and increases the minimum time required to sweep the drift charge and transition from low to high RF resistance. Diodes are sold commercially in a variety of geometries for specific RF bands and uses. Applications PIN diodes are useful as RF switches, attenuators, photodetectors, and phase shifters. RF and microwave switches Under zero- or reverse-bias (the "off" state), a PIN diode has a low capacitance. The low capacitance will not pass much of an RF signal. Under a forward bias of 1 mA (the "on" state), a typical PIN diode will have an RF resistance of about , making it a good conductor of RF. Consequently, the PIN diode makes a good RF switch. Although RF relays can be used as switches, they switch relatively slowly (on the order of ). A PIN diode switch can switch much more quickly (e.g., ), although at lower RF frequencies it isn't reasonable to expect switching times in the same order of magnitude as the RF period. For example, the capacitance of an "off"-state discrete PIN diode might be . At , the capacitive reactance of is : As a series element in a system, the off-state attenuation is: This attenuation may not be adequate. In applications where higher isolation is needed, both shunt and series elements may be used, with the shunt diodes biased in complementary fashion to the series elements. Adding shunt elements effectively reduces the source and load impedances, reducing the impedance ratio and increasing the off-state attenuation. However, in addition to the added complexity, the on-state attenuation is increased due to the series resistance of the on-state blocking element and the capacitance of the off-state shunt elements. PIN diode switches are used not only for signal selection, but also component selection. 
For example, some low-phase-noise oscillators use them to range-switch inductors. RF and microwave variable attenuators By changing the bias current through a PIN diode, it is possible to quickly change its RF resistance. At high frequencies, the PIN diode appears as a resistor whose resistance is an inverse function of its forward current. Consequently, PIN diode can be used in some variable attenuator designs as amplitude modulators or output leveling circuits. PIN diodes might be used, for example, as the bridge and shunt resistors in a bridged-T attenuator. Another common approach is to use PIN diodes as terminations connected to the 0 degree and -90 degree ports of a quadrature hybrid. The signal to be attenuated is applied to the input port, and the attenuated result is taken from the isolation port. The advantages of this approach over the bridged-T and pi approaches are (1) complementary PIN diode bias drives are not needed—the same bias is applied to both diodes—and (2) the loss in the attenuator equals the return loss of the terminations, which can be varied over a very wide range. Limiters PIN diodes are sometimes designed for use as input protection devices for high-frequency test probes and other circuits. If the input signal is small, the PIN diode has negligible impact, presenting only a small parasitic capacitance. Unlike a rectifier diode, it does not present a nonlinear resistance at RF frequencies, which would give rise to harmonics and intermodulation products. If the signal is large, then when the PIN diode starts to rectify the signal, the forward current charges the drift region and the device RF impedance is inversely proportional to the signal amplitude. That signal amplitude varying resistance can be used to terminate some predetermined portion of the signal in a resistive network dissipating the energy or to create an impedance mismatch that reflects the incident signal back toward the source. 
The latter may be combined with an isolator, a device containing a circulator which uses a permanent magnetic field to break reciprocity and a resistive load to separate and terminate the backward traveling wave. When used as a shunt limiter the PIN diode impedance is low over the entire RF cycle, unlike paired rectifier diodes that would swing from a high resistance to a low resistance during each RF cycle clamping the waveform and not reflecting it as completely. The ionization recovery time of gas molecules that permits the creation of the higher power spark gap input protection device ultimately relies on similar physics in a gas. Photodetector and photovoltaic cell The PIN photodiode was invented by Jun-ichi Nishizawa and his colleagues in 1950. PIN photodiodes are used in fibre optic network cards and switches. As a photodetector, the PIN diode is reverse-biased. Under reverse bias, the diode ordinarily does not conduct (save a small dark current or Is leakage). When a photon of sufficient energy enters the depletion region of the diode, it creates an electron-hole pair. The reverse-bias field sweeps the carriers out of the region, creating current. Some detectors can use avalanche multiplication. The same mechanism applies to the PIN structure, or p-i-n junction, of a solar cell. In this case, the advantage of using a PIN structure over conventional semiconductor p–n junction is better long-wavelength response of the former. In case of long wavelength irradiation, photons penetrate deep into the cell. But only those electron-hole pairs generated in and near the depletion region contribute to current generation. The depletion region of a PIN structure extends across the intrinsic region, deep into the device. This wider depletion width enables electron-hole pair generation deep within the device, which increases the quantum efficiency of the cell. 
Commercially available PIN photodiodes have quantum efficiencies above 80-90% in the telecom wavelength range (~1500 nm), and are typically made of germanium or InGaAs. They feature fast response times (higher than their p-n counterparts), running into several tens of gigahertz, making them ideal for high speed optical telecommunication applications. Similarly, silicon p-i-n photodiodes have even higher quantum efficiencies, but can only detect wavelengths below the bandgap of silicon, i.e. ~1100 nm. Typically, amorphous silicon thin-film cells use PIN structures. On the other hand, CdTe cells use NIP structure, a variation of the PIN structure. In a NIP structure, an intrinsic CdTe layer is sandwiched by n-doped CdS and p-doped ZnTe; the photons are incident on the n-doped layer, unlike in a PIN diode. A PIN photodiode can also detect ionizing radiation in case it is used as a semiconductor detector. In modern fiber-optical communications, the speed of optical transmitters and receivers is one of the most important parameters. Due to the small surface of the photodiode, its parasitic (unwanted) capacitance is reduced. The bandwidth of modern pin photodiodes is reaching the microwave and millimeter waves range. Example PIN photodiodes SFH203 and BPW34 are cheap general purpose PIN diodes in 5 mm clear plastic cases with bandwidths over 100 MHz. See also Fiber-optic cable Interconnect bottleneck Optical communication Optical interconnect Parallel optical interface Step recovery diode References External links The PIN Diode Designers' Handbook PIN Limiter Diodes in Receiver Protectors, Skyworks application note Diodes Microwave technology Optical diodes Power electronics Japanese inventions Photodetectors
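The off-state isolation of a series PIN diode switch, discussed in the RF and microwave switches section, can be estimated from the diode's off-state capacitance. The sketch below uses the standard insertion loss of a series impedance between matched source and load, 20·log10(|1 + Z/(2·Z0)|); the 1 pF, 100 MHz, 50 Ω values are illustrative, since the article's own example figures were lost in extraction.

```python
import math

def series_isolation_db(c_off_farads, freq_hz, z0=50.0):
    """Off-state isolation (insertion loss, dB) of a series element with
    capacitance C between matched source and load impedances z0:
    20*log10(|1 + Z/(2*z0)|) with Z = 1/(j*2*pi*f*C)."""
    z = 1.0 / (2j * math.pi * freq_hz * c_off_farads)  # capacitor impedance
    return 20.0 * math.log10(abs(1.0 + z / (2.0 * z0)))

# A 1 pF off-state diode in a 50-ohm system at 100 MHz: about 24 dB isolation
isolation = series_isolation_db(1.0e-12, 1.0e8)
```

As the text notes, this single-series-element isolation falls as frequency rises (the capacitive reactance drops), which is why higher-isolation designs add complementary shunt elements.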
PIN diode
[ "Engineering" ]
2,514
[ "Electronic engineering", "Power electronics" ]
41,559
https://en.wikipedia.org/wiki/Plane%20wave
In physics, a plane wave is a special case of a wave or field: a physical quantity whose value, at any given moment, is constant through any plane that is perpendicular to a fixed direction in space. For any position x in space and any time t, the value of such a field can be written as F(x, t) = G(x · n, t), where n is a unit-length vector, and G(d, t) is a function that gives the field's value as dependent on only two real parameters: the time t, and the scalar-valued displacement d = x · n of the point along the direction n. The displacement is constant over each plane perpendicular to n. The values of the field may be scalars, vectors, or any other physical or mathematical quantity. They can be complex numbers, as in a complex exponential plane wave. When the values of F are vectors, the wave is said to be a longitudinal wave if the vectors are always collinear with the vector n, and a transverse wave if they are always orthogonal (perpendicular) to it. Special types Traveling plane wave Often the term "plane wave" refers specifically to a traveling plane wave, whose evolution in time can be described as simple translation of the field at a constant wave speed c along the direction perpendicular to the wavefronts. Such a field can be written as F(x, t) = G(x · n − c t), where G(u) is now a function of a single real parameter u = d − c t, that describes the "profile" of the wave, namely the value of the field at time t = 0, for each displacement d = x · n. In that case, n is called the direction of propagation. For each displacement d, the moving plane perpendicular to n at distance d + c t from the origin is called a "wavefront". This plane travels along the direction of propagation n with velocity c; and the value of the field is then the same, and constant in time, at every one of its points. Sinusoidal plane wave The term is also used, even more specifically, to mean a "monochromatic" or sinusoidal plane wave: a travelling plane wave whose profile G(u) is a sinusoidal function.
That is, F(x, t) = A sin(2πν (x · n − c t) + φ). The parameter A, which may be a scalar or a vector, is called the amplitude of the wave; the scalar coefficient ν is its "spatial frequency"; and the scalar φ is its "phase shift". A true plane wave cannot physically exist, because it would have to fill all space. Nevertheless, the plane wave model is important and widely used in physics. The waves emitted by any source with finite extent into a large homogeneous region of space can be well approximated by plane waves when viewed over any part of that region that is sufficiently small compared to its distance from the source. That is the case, for example, of the light waves from a distant star that arrive at a telescope. Plane standing wave A standing wave is a field whose value can be expressed as the product of two functions, one depending only on position, the other only on time. A plane standing wave, in particular, can be expressed as F(x, t) = G(x · n) S(t), where G is a function of one scalar parameter (the displacement d = x · n) with scalar or vector values, and S is a scalar function of time. This representation is not unique, since the same field values are obtained if S and G are scaled by reciprocal factors. If S(t) is bounded in the time interval of interest (which is usually the case in physical contexts), S and G can be scaled so that the maximum value of S is 1. Then |G(x · n)| will be the maximum field magnitude seen at the point x. Properties A plane wave can be studied by ignoring the directions perpendicular to the direction vector n; that is, by considering the function G(d, t) as a wave in a one-dimensional medium. Any local operator, linear or not, applied to a plane wave yields a plane wave. Any linear combination of plane waves with the same normal vector n is also a plane wave. For a scalar plane wave in two or three dimensions, the gradient of the field is always collinear with the direction n; specifically, ∇F(x, t) = n ∂G(x · n, t)/∂d, where ∂G/∂d is the partial derivative of G with respect to the first argument.
The divergence of a vector-valued plane wave depends only on the projection of the vector G in the direction n. Specifically, ∇ · F(x, t) = n · ∂G(x · n, t)/∂d. In particular, a transverse planar wave satisfies ∇ · F = 0 for all x and t. See also Plane-wave expansion Rectilinear propagation Wave equation Weyl expansion References Wave mechanics Planes (geometry)
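A traveling sinusoidal plane wave can be evaluated directly from its definition. The sketch below (with arbitrary illustrative parameters; the function name `plane_wave` is not from the article) checks the two defining properties: the field is constant over every plane perpendicular to the propagation direction, and the profile translates rigidly at the wave speed.

```python
import math

def plane_wave(x, t, n=(0.0, 0.0, 1.0), amplitude=1.0,
               spatial_freq=2.0, speed=1.0, phase=0.0):
    """Traveling sinusoidal plane wave F(x, t) = A*sin(2*pi*nu*(x.n - c*t) + phi).

    n is the (unit) propagation direction, nu the spatial frequency
    (cycles per unit length, so the wavelength is 1/nu), c the wave speed.
    """
    d = sum(xi * ni for xi, ni in zip(x, n))  # displacement of x along n
    return amplitude * math.sin(
        2.0 * math.pi * spatial_freq * (d - speed * t) + phase)
```

With n along z, any two points sharing the same z coordinate lie on one wavefront and give the same field value; moving one wavelength (1/ν = 0.5 here) along n, or translating by c·Δt while advancing time by Δt, also reproduces the value.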
Plane wave
[ "Physics", "Mathematics" ]
847
[ "Physical phenomena", "Mathematical objects", "Classical mechanics", "Infinity", "Waves", "Wave mechanics", "Planes (geometry)" ]
41,564
https://en.wikipedia.org/wiki/Polarization%20%28waves%29
Polarization (also polarisation) is a property of transverse waves which specifies the geometrical orientation of the oscillations. In a transverse wave, the direction of the oscillation is perpendicular to the direction of motion of the wave. One example of a polarized transverse wave is vibrations traveling along a taut string, for example, in a musical instrument like a guitar string. Depending on how the string is plucked, the vibrations can be in a vertical direction, horizontal direction, or at any angle perpendicular to the string. In contrast, in longitudinal waves, such as sound waves in a liquid or gas, the displacement of the particles in the oscillation is always in the direction of propagation, so these waves do not exhibit polarization. Transverse waves that exhibit polarization include electromagnetic waves such as light and radio waves, gravitational waves, and transverse sound waves (shear waves) in solids. An electromagnetic wave such as light consists of a coupled oscillating electric field and magnetic field which are always perpendicular to each other. Different states of polarization correspond to different relationships between polarization and the direction of propagation. In linear polarization, the fields oscillate in a single direction. In circular or elliptical polarization, the fields rotate at a constant rate in a plane as the wave travels, either in the right-hand or in the left-hand direction. Light or other electromagnetic radiation from many sources, such as the sun, flames, and incandescent lamps, consists of short wave trains with an equal mixture of polarizations; this is called unpolarized light. Polarized light can be produced by passing unpolarized light through a polarizer, which allows waves of only one polarization to pass through.
The most common optical materials do not affect the polarization of light, but some materials—those that exhibit birefringence, dichroism, or optical activity—affect light differently depending on its polarization. Some of these are used to make polarizing filters. Light also becomes partially polarized when it reflects at an angle from a surface. According to quantum mechanics, electromagnetic waves can also be viewed as streams of particles called photons. When viewed in this way, the polarization of an electromagnetic wave is determined by a quantum mechanical property of photons called their spin. A photon has one of two possible spins: it can either spin in a right hand sense or a left hand sense about its direction of travel. Circularly polarized electromagnetic waves are composed of photons with only one type of spin, either right- or left-hand. Linearly polarized waves consist of photons that are in a superposition of right and left circularly polarized states, with equal amplitude and phases synchronized to give oscillation in a plane. Polarization is an important parameter in areas of science dealing with transverse waves, such as optics, seismology, radio, and microwaves. Especially impacted are technologies such as lasers, wireless and optical fiber telecommunications, and radar. Introduction Wave propagation and polarization Most sources of light are classified as incoherent and unpolarized (or only "partially polarized") because they consist of a random mixture of waves having different spatial characteristics, frequencies (wavelengths), phases, and polarization states. However, for understanding electromagnetic waves and polarization in particular, it is easier to just consider coherent plane waves; these are sinusoidal waves of one particular direction (or wavevector), frequency, phase, and polarization state. 
Characterizing an optical system in relation to a plane wave with those given parameters can then be used to predict its response to a more general case, since a wave with any specified spatial structure can be decomposed into a combination of plane waves (its so-called angular spectrum). Incoherent states can be modeled stochastically as a weighted combination of such uncorrelated waves with some distribution of frequencies (its spectrum), phases, and polarizations. Transverse electromagnetic waves Electromagnetic waves (such as light), traveling in free space or another homogeneous isotropic non-attenuating medium, are properly described as transverse waves, meaning that a plane wave's electric field vector E and magnetic field B are each in some direction perpendicular to (or "transverse" to) the direction of wave propagation; E and B are also perpendicular to each other. By convention, the "polarization" direction of an electromagnetic wave is given by its electric field vector. Considering a monochromatic plane wave of optical frequency ν (light of vacuum wavelength λ0 has a frequency of ν = c/λ0 where c is the speed of light), let us take the direction of propagation as the z axis. Being a transverse wave, the E and B fields must then contain components only in the x and y directions, whereas Ez = Bz = 0. Using complex (or phasor) notation, the instantaneous physical electric and magnetic fields are given by the real parts of the complex quantities occurring in the following equations. As a function of time t and spatial position z (since for a plane wave in the +z direction the fields have no dependence on x or y) these complex fields can be written as: E(z, t) = [ex, ey, 0] exp(i 2π (z/λ − t/T)) = [ex, ey, 0] exp(i (kz − ωt)) and B(z, t) = [bx, by, 0] exp(i 2π (z/λ − t/T)) = [bx, by, 0] exp(i (kz − ωt)), where λ = λ0/n is the wavelength in the medium (whose refractive index is n) and T is the period of the wave. Here ex, ey, bx, and by are complex numbers. In the second more compact form, as these equations are customarily expressed, these factors are described using the wavenumber k = 2πn/λ0 and angular frequency (or "radian frequency") ω = 2πν.
In a more general formulation with propagation restricted to the direction, then the spatial dependence is replaced by where is called the wave vector, the magnitude of which is the wavenumber. Thus the leading vectors and each contain up to two nonzero (complex) components describing the amplitude and phase of the wave's and polarization components (again, there can be no polarization component for a transverse wave in the direction). For a given medium with a characteristic impedance , is related to by: In a dielectric, is real and has the value , where is the refractive index and is the impedance of free space. The impedance will be complex in a conducting medium. Note that given that relationship, the dot product of and must be zero: indicating that these vectors are orthogonal (at right angles to each other), as expected. Knowing the propagation direction ( in this case) and , one can just as well specify the wave in terms of just and describing the electric field. The vector containing and (but without the component which is necessarily zero for a transverse wave) is known as a Jones vector. In addition to specifying the polarization state of the wave, a general Jones vector also specifies the overall magnitude and phase of that wave. Specifically, the intensity of the light wave is proportional to the sum of the squared magnitudes of the two electric field components: However, the wave's state of polarization is only dependent on the (complex) ratio of to . So let us just consider waves whose ; this happens to correspond to an intensity of about in free space (where ). And because the absolute phase of a wave is unimportant in discussing its polarization state, let us stipulate that the phase of is zero; in other words is a real number while may be complex. Under these restrictions, and can be represented as follows: where the polarization state is now fully parameterized by the value of (such that ) and the relative phase . 
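The normalization convention just described — unit intensity, with the x component taken as real so that only the amplitude ratio and relative phase remain — can be sketched in Python (function names are my own):

```python
import cmath

def intensity(ex, ey):
    """Intensity is proportional to the sum of the squared magnitudes of the
    two electric-field components."""
    return abs(ex) ** 2 + abs(ey) ** 2

def normalize_state(ex, ey):
    """Rescale a Jones vector to unit intensity and zero phase on the x
    component, leaving only the parameters that fix the polarization state."""
    ex, ey = complex(ex), complex(ey)
    mag = intensity(ex, ey) ** 0.5
    common = cmath.exp(-1j * cmath.phase(ex)) / mag
    return ex * common, ey * common

# Equal, in-phase components: linear polarization at 45 degrees.
ex45, ey45 = normalize_state(1, 1)
```

Multiplying both input components by any common complex factor leaves the normalized output unchanged, reflecting the fact that overall magnitude and absolute phase do not affect the polarization state.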
Non-transverse waves In addition to transverse waves, there are many wave motions where the oscillation is not limited to directions perpendicular to the direction of propagation. These cases are far beyond the scope of the current article which concentrates on transverse waves (such as most electromagnetic waves in bulk media), but one should be aware of cases where the polarization of a coherent wave cannot be described simply using a Jones vector, as we have just done. Just considering electromagnetic waves, we note that the preceding discussion strictly applies to plane waves in a homogeneous isotropic non-attenuating medium, whereas in an anisotropic medium (such as birefringent crystals as discussed below) the electric or magnetic field may have longitudinal as well as transverse components. In those cases the electric displacement and magnetic flux density still obey the above geometry but due to anisotropy in the electric susceptibility (or in the magnetic permeability), now given by a tensor, the direction of (or ) may differ from that of (or ). Even in isotropic media, so-called inhomogeneous waves can be launched into a medium whose refractive index has a significant imaginary part (or "extinction coefficient") such as metals; these fields are also not strictly transverse. Surface waves or waves propagating in a waveguide (such as an optical fiber) are generally transverse waves, but might be described as an electric or magnetic transverse mode, or a hybrid mode. Even in free space, longitudinal field components can be generated in focal regions, where the plane wave approximation breaks down. An extreme example is radially or tangentially polarized light, at the focus of which the electric or magnetic field respectively is longitudinal (along the direction of propagation). 
For longitudinal waves such as sound waves in fluids, the direction of oscillation is by definition along the direction of travel, so the issue of polarization is normally not even mentioned. On the other hand, sound waves in a bulk solid can be transverse as well as longitudinal, for a total of three polarization components. In this case, the transverse polarization is associated with the direction of the shear stress and displacement in directions perpendicular to the propagation direction, while the longitudinal polarization describes compression of the solid and vibration along the direction of propagation. The differential propagation of transverse and longitudinal polarizations is important in seismology. Polarization state Polarization can be defined in terms of pure polarization states with only a coherent sinusoidal wave at one optical frequency. The vector in the adjacent diagram might describe the oscillation of the electric field emitted by a single-mode laser (whose oscillation frequency would be typically times faster). The field oscillates in the -plane, along the page, with the wave propagating in the direction, perpendicular to the page. The first two diagrams below trace the electric field vector over a complete cycle for linear polarization at two different orientations; these are each considered a distinct state of polarization (SOP). The linear polarization at 45° can also be viewed as the addition of a horizontally linearly polarized wave (as in the leftmost figure) and a vertically polarized wave of the same amplitude . Now if one were to introduce a phase shift in between those horizontal and vertical polarization components, one would generally obtain elliptical polarization as is shown in the third figure. When the phase shift is exactly ±90°, and the amplitudes are the same, then circular polarization is produced (fourth and fifth figures). 
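The ±90° case can be checked with a small Jones-calculus sketch (Python; the phase convention and names are my own): a retarder that delays the y component by 90° relative to x turns 45° linear polarization into circular polarization.

```python
def mat_vec(m, v):
    """Apply a 2x2 complex Jones matrix to a Jones vector (ex, ey)."""
    (a, b), (c, d) = m
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

# Quarter-wave plate with fast axis along x (up to an overall phase):
# the y component is retarded by 90 degrees relative to x.
QWP = ((1, 0), (0, 1j))

# Linear polarization at 45 degrees: equal, in-phase components.
linear_45 = (2 ** -0.5, 2 ** -0.5)

circular = mat_vec(QWP, linear_45)
# Equal amplitudes with a 90-degree relative phase: circular polarization.
```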
Circular polarization can be created by sending linearly polarized light through a quarter-wave plate oriented at 45° to the linear polarization to create two components of the same amplitude with the required phase shift. The superposition of the original and phase-shifted components causes a rotating electric field vector, which is depicted in the animation on the right. Note that circular or elliptical polarization can involve either a clockwise or counterclockwise rotation of the field, depending on the relative phases of the components. These correspond to distinct polarization states, such as the two circular polarizations shown above. The orientation of the and axes used in this description is arbitrary. The choice of such a coordinate system and viewing the polarization ellipse in terms of the and polarization components, corresponds to the definition of the Jones vector (below) in terms of those basis polarizations. Axes are selected to suit a particular problem, such as being in the plane of incidence. Since there are separate reflection coefficients for the linear polarizations in and orthogonal to the plane of incidence (p and s polarizations, see below), that choice greatly simplifies the calculation of a wave's reflection from a surface. Any pair of orthogonal polarization states may be used as basis functions, not just linear polarizations. For instance, choosing right and left circular polarizations as basis functions simplifies the solution of problems involving circular birefringence (optical activity) or circular dichroism. Polarization ellipse For a purely polarized monochromatic wave the electric field vector over one cycle of oscillation traces out an ellipse. A polarization state can then be described in relation to the geometrical parameters of the ellipse, and its "handedness", that is, whether the rotation around the ellipse is clockwise or counter clockwise. 
One parameterization of the elliptical figure specifies the orientation angle , defined as the angle between the major axis of the ellipse and the -axis along with the ellipticity , the ratio of the ellipse's major to minor axis (also known as the axial ratio). The ellipticity parameter is an alternative parameterization of an ellipse's eccentricity or the ellipticity angle, as is shown in the figure. The angle is also significant in that the latitude (angle from the equator) of the polarization state as represented on the Poincaré sphere (see below) is equal to . The special cases of linear and circular polarization correspond to an ellipticity of infinity and unity (or of zero and 45°) respectively. Jones vector Full information on a completely polarized state is also provided by the amplitude and phase of oscillations in two components of the electric field vector in the plane of polarization. This representation was used above to show how different states of polarization are possible. The amplitude and phase information can be conveniently represented as a two-dimensional complex vector (the Jones vector): Here and denote the amplitude of the wave in the two components of the electric field vector, while and represent the phases. The product of a Jones vector with a complex number of unit modulus gives a different Jones vector representing the same ellipse, and thus the same state of polarization. The physical electric field, as the real part of the Jones vector, would be altered but the polarization state itself is independent of absolute phase. The basis vectors used to represent the Jones vector need not represent linear polarization states (i.e. be real). In general any two orthogonal states can be used, where an orthogonal vector pair is formally defined as one having a zero inner product.
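The orientation and ellipticity angles of the polarization ellipse can be recovered from the two Jones components; one common route goes through the Stokes parameters. A Python sketch (the sign convention for the fourth Stokes parameter varies between texts, so the sign of the ellipticity angle is convention-dependent):

```python
import math

def stokes_parameters(ex, ey):
    """Stokes parameters from the two Jones components (one common sign
    convention; the sign of s3 depends on the handedness convention)."""
    ex, ey = complex(ex), complex(ey)
    cross = ex * ey.conjugate()
    s0 = abs(ex) ** 2 + abs(ey) ** 2
    s1 = abs(ex) ** 2 - abs(ey) ** 2
    s2 = 2 * cross.real
    s3 = -2 * cross.imag
    return s0, s1, s2, s3

def ellipse_angles(ex, ey):
    """Orientation angle and ellipticity angle of the polarization ellipse,
    recovered from the Stokes parameters (both in radians)."""
    s0, s1, s2, s3 = stokes_parameters(ex, ey)
    orientation = 0.5 * math.atan2(s2, s1)
    ellipticity_angle = 0.5 * math.asin(s3 / s0)
    return orientation, ellipticity_angle
```

Linear polarization at 45° yields an orientation of 45° and zero ellipticity angle, while circular polarization yields an ellipticity angle of ±45°, matching the special cases listed above.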
A common choice is left and right circular polarizations, for example to model the different propagation of waves in two such components in circularly birefringent media (see below) or signal paths of coherent detectors sensitive to circular polarization. Coordinate frame Regardless of whether polarization state is represented using geometric parameters or Jones vectors, implicit in the parameterization is the orientation of the coordinate frame. This permits a degree of freedom, namely rotation about the propagation direction. When considering light that is propagating parallel to the surface of the Earth, the terms "horizontal" and "vertical" polarization are often used, with the former being associated with the first component of the Jones vector, or zero azimuth angle. On the other hand, in astronomy the equatorial coordinate system is generally used instead, with the zero azimuth (or position angle, as it is more commonly called in astronomy to avoid confusion with the horizontal coordinate system) corresponding to due north. s and p designations Another coordinate system frequently used relates to the plane of incidence. This is the plane made by the incoming propagation direction and the vector perpendicular to the plane of an interface, in other words, the plane in which the ray travels before and after reflection or refraction. The component of the electric field parallel to this plane is termed p-like (parallel) and the component perpendicular to this plane is termed s-like (from senkrecht, German for 'perpendicular'). Polarized light with its electric field along the plane of incidence is thus denoted p-polarized, while light whose electric field is normal to the plane of incidence is called s-polarized. P-polarization is commonly referred to as transverse-magnetic (TM), and has also been termed pi-polarized or π-polarized, or tangential plane polarized. S-polarization is also called transverse-electric (TE), as well as sigma-polarized or σ-polarized, or sagittal plane polarized.
Degree of polarization Degree of polarization (DOP) is a quantity used to describe the portion of an electromagnetic wave which is polarized. DOP can be calculated from the Stokes parameters. A perfectly polarized wave has a DOP of 100%, whereas an unpolarized wave has a DOP of 0%. A wave which is partially polarized, and therefore can be represented by a superposition of a polarized and unpolarized component, will have a DOP somewhere in between 0 and 100%. DOP is calculated as the fraction of the total power that is carried by the polarized component of the wave. DOP can be used to map the strain field in materials when considering the DOP of the photoluminescence. The polarization of the photoluminescence is related to the strain in a material by way of the given material's photoelasticity tensor. DOP is also visualized using the Poincaré sphere representation of a polarized beam. In this representation, DOP is equal to the length of the vector measured from the center of the sphere. Unpolarized and partially polarized light Implications for reflection and propagation Polarization in wave propagation In a vacuum, the components of the electric field propagate at the speed of light, so that the phase of the wave varies in space and time while the polarization state does not. That is, the electric field vector of a plane wave in the direction follows: where is the wavenumber. As noted above, the instantaneous electric field is the real part of the product of the Jones vector times the phase factor. When an electromagnetic wave interacts with matter, its propagation is altered according to the material's (complex) index of refraction. When the real or imaginary part of that refractive index is dependent on the polarization state of a wave, properties known as birefringence and polarization dichroism (or diattenuation) respectively, then the polarization state of a wave will generally be altered.
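The degree of polarization has a simple expression in terms of the Stokes parameters: the length of the (S1, S2, S3) vector divided by the total intensity S0 — equivalently, the radius of the state's representation inside the Poincaré sphere. A minimal sketch:

```python
import math

def degree_of_polarization(s0, s1, s2, s3):
    """Polarized fraction of the beam: the length of the (s1, s2, s3) vector
    divided by the total intensity s0 (the radius on the Poincare sphere)."""
    return math.sqrt(s1 ** 2 + s2 ** 2 + s3 ** 2) / s0

fully_polarized = degree_of_polarization(1.0, 1.0, 0.0, 0.0)   # DOP = 1
unpolarized = degree_of_polarization(1.0, 0.0, 0.0, 0.0)       # DOP = 0
partial = degree_of_polarization(2.0, 1.0, 0.0, 0.0)           # DOP = 0.5
```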
In such media, an electromagnetic wave with any given state of polarization may be decomposed into two orthogonally polarized components that encounter different propagation constants. The effect of propagation over a given path on those two components is most easily characterized in the form of a complex transformation matrix known as a Jones matrix: The Jones matrix due to passage through a transparent material is dependent on the propagation distance as well as the birefringence. The birefringence (as well as the average refractive index) will generally be dispersive, that is, it will vary as a function of optical frequency (wavelength). In the case of non-birefringent materials, however, the Jones matrix is the identity matrix (multiplied by a scalar phase factor and attenuation factor), implying no change in polarization during propagation. For propagation effects in two orthogonal modes, the Jones matrix can be written as where and are complex numbers describing the phase delay and possibly the amplitude attenuation due to propagation in each of the two polarization eigenmodes. is a unitary matrix representing a change of basis from these propagation modes to the linear system used for the Jones vectors; in the case of linear birefringence or diattenuation the modes are themselves linear polarization states so and can be omitted if the coordinate axes have been chosen appropriately. Birefringence In a birefringent substance, electromagnetic waves of different polarizations travel at different speeds (phase velocities). As a result, when unpolarized waves travel through a plate of birefringent material, one polarization component has a shorter wavelength than the other, resulting in a phase difference between the components which increases the further the waves travel through the material. Since such a material is lossless, its Jones matrix is a unitary matrix.
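The decomposition just described — diagonal propagation phases in the eigenmode basis, conjugated by a change of basis — can be sketched for a lossless linear retarder whose axes are rotated by some angle (Python; the phase convention and names are my own):

```python
import cmath
import math

def mat_mul(a, b):
    """Product of two 2x2 complex matrices."""
    return (
        (a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]),
        (a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]),
    )

def mat_vec(m, v):
    """Apply a 2x2 Jones matrix to a Jones vector (ex, ey)."""
    (a, b), (c, d) = m
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def retarder(retardance, axis_angle):
    """Jones matrix of a lossless linear retarder: diagonal phase delays in
    the eigenmode basis, conjugated by the rotation into the lab frame."""
    diag = ((cmath.exp(-1j * retardance / 2), 0),
            (0, cmath.exp(1j * retardance / 2)))
    return mat_mul(rotation(axis_angle), mat_mul(diag, rotation(-axis_angle)))

# A half-wave plate (180 degrees of retardance) with its axis at 45 degrees
# flips horizontal polarization to vertical (up to an overall phase).
hwp = retarder(math.pi, math.pi / 4)
out = mat_vec(hwp, (1, 0))
```

Because the retarder only shifts phases, the resulting matrix is unitary: total intensity is preserved for any input state.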
Media termed diattenuating (or dichroic in the sense of polarization), in which only the amplitudes of the two polarizations are affected differentially, may be described using a Hermitian matrix (generally multiplied by a common phase factor). In fact, since any matrix may be written as the product of a unitary matrix and a positive Hermitian matrix, light propagation through any sequence of polarization-dependent optical components can be written as the product of these two basic types of transformations. In birefringent media there is no attenuation, but two modes accrue a differential phase delay. Well known manifestations of linear birefringence (that is, in which the basis polarizations are orthogonal linear polarizations) appear in optical wave plates/retarders and many crystals. If linearly polarized light passes through a birefringent material, its state of polarization will generally change, unless its polarization direction is identical to one of those basis polarizations. Since the phase shift, and thus the change in polarization state, is usually wavelength-dependent, such objects viewed under white light in between two polarizers may give rise to colorful effects, as seen in the accompanying photograph. Circular birefringence is also termed optical activity, especially in chiral fluids, or Faraday rotation, when due to the presence of a magnetic field along the direction of propagation. When linearly polarized light is passed through such an object, it will exit still linearly polarized, but with the axis of polarization rotated. A combination of linear and circular birefringence will have as basis polarizations two orthogonal elliptical polarizations; however, the term "elliptical birefringence" is rarely used. One can visualize the case of linear birefringence (with two orthogonal linear propagation modes) with an incoming wave linearly polarized at a 45° angle to those modes.
As a differential phase starts to accrue, the polarization becomes elliptical, eventually changing to purely circular polarization (90° phase difference), then to elliptical and eventually linear polarization (180° phase) perpendicular to the original polarization, then through circular again (270° phase), then elliptical with the original azimuth angle, and finally back to the original linearly polarized state (360° phase) where the cycle begins anew. In general the situation is more complicated and can be characterized as a rotation in the Poincaré sphere about the axis defined by the propagation modes. Examples for linear (blue), circular (red), and elliptical (yellow) birefringence are shown in the figure on the left. The total intensity and degree of polarization are unaffected. If the path length in the birefringent medium is sufficient, the two polarization components of a collimated beam (or ray) can exit the material with a positional offset, even though their final propagation directions will be the same (assuming the entrance face and exit face are parallel). This is commonly viewed using calcite crystals, which present the viewer with two slightly offset images, in opposite polarizations, of an object behind the crystal. It was this effect that provided the first discovery of polarization, by Erasmus Bartholinus in 1669. Dichroism Media in which transmission of one polarization mode is preferentially reduced are called dichroic or diattenuating. Like birefringence, diattenuation can be with respect to linear polarization modes (in a crystal) or circular polarization modes (usually in a liquid). Devices that block nearly all of the radiation in one mode are known as polarizing filters or simply "polarizers". This corresponds to in the above representation of the Jones matrix. The output of an ideal polarizer is a specific polarization state (usually linear polarization) with an amplitude equal to the input wave's original amplitude in that polarization mode.
Power in the other polarization mode is eliminated. Thus if unpolarized light is passed through an ideal polarizer (where and ) exactly half of its initial power is retained. Practical polarizers, especially inexpensive sheet polarizers, have additional loss so that . However, in many instances the more relevant figure of merit is the polarizer's degree of polarization or extinction ratio, which involve a comparison of to . Since Jones vectors refer to waves' amplitudes (rather than intensity), when illuminated by unpolarized light the remaining power in the unwanted polarization will be of the power in the intended polarization. Specular reflection In addition to birefringence and dichroism in extended media, polarization effects describable using Jones matrices can also occur at (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected; for a given material those proportions (and also the phase of reflection) are dependent on the angle of incidence and are different for the s- and p-polarizations. Therefore, the polarization state of reflected light (even if initially unpolarized) is generally changed. Any light striking a surface at a special angle of incidence known as Brewster's angle, where the reflection coefficient for p-polarization is zero, will be reflected with only the s-polarization remaining. This principle is employed in the so-called "pile of plates polarizer" (see figure) in which part of the s-polarization is removed by reflection at each Brewster angle surface, leaving only the p-polarization after transmission through many such surfaces. The generally smaller reflection coefficient of the p-polarization is also the basis of polarized sunglasses; by blocking the s- (horizontal) polarization, most of the glare due to reflection from a wet street, for instance, is removed. 
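The behavior at Brewster's angle can be checked numerically with the Fresnel amplitude reflection coefficients for a simple dielectric interface (a Python sketch assuming non-absorbing media; the function names are mine):

```python
import math

def brewster_angle(n1, n2):
    """Incidence angle at which the p reflection coefficient vanishes:
    tan(theta_B) = n2 / n1."""
    return math.atan2(n2, n1)

def fresnel_amplitudes(n1, n2, theta_i):
    """Amplitude reflection coefficients (r_s, r_p) for light going from
    refractive index n1 into n2 at incidence angle theta_i (radians)."""
    theta_t = math.asin(n1 / n2 * math.sin(theta_i))  # Snell's law
    ci, ct = math.cos(theta_i), math.cos(theta_t)
    r_s = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)
    r_p = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)
    return r_s, r_p

# At Brewster's angle for an air-to-glass interface, only s-polarized
# light is reflected, as the text describes.
theta_b = brewster_angle(1.0, 1.5)
r_s, r_p = fresnel_amplitudes(1.0, 1.5, theta_b)
```

The nonzero s reflection combined with the vanishing p reflection is exactly what the pile-of-plates polarizer and polarized sunglasses exploit.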
In the important special case of reflection at normal incidence (not involving anisotropic materials) there is no particular s- or p-polarization. Both the and polarization components are reflected identically, and therefore the polarization of the reflected wave is identical to that of the incident wave. However, in the case of circular (or elliptical) polarization, the handedness of the polarization state is thereby reversed, since by convention this is specified relative to the direction of propagation. The circular rotation of the electric field that is called "right-handed" for a wave traveling in one direction is "left-handed" for a wave traveling in the opposite direction. But in the general case of reflection at a nonzero angle of incidence, no such generalization can be made. For instance, right-circularly polarized light reflected from a dielectric surface at a grazing angle will still be right-handed (but elliptically) polarized. Linearly polarized light reflected from a metal at non-normal incidence will generally become elliptically polarized. These cases are handled using Jones vectors acted upon by the different Fresnel coefficients for the s- and p-polarization components. Measurement techniques involving polarization Some optical measurement techniques are based on polarization. In many other optical techniques polarization is crucial or at least must be taken into account and controlled; such examples are too numerous to mention. Measurement of stress In engineering, the phenomenon of stress induced birefringence allows for stresses in transparent materials to be readily observed. As noted above and seen in the accompanying photograph, the chromaticity of birefringence typically creates colored patterns when viewed in between two polarizers. As external forces are applied, internal stress induced in the material is thereby observed. Additionally, birefringence is frequently observed due to stresses "frozen in" at the time of manufacture.
This is famously observed in cellophane tape whose birefringence is due to the stretching of the material during the manufacturing process. Ellipsometry Ellipsometry is a powerful technique for the measurement of the optical properties of a uniform surface. It involves measuring the polarization state of light following specular reflection from such a surface. This is typically done as a function of incidence angle or wavelength (or both). Since ellipsometry relies on reflection, it is not required for the sample to be transparent to light or for its back side to be accessible. Ellipsometry can be used to model the (complex) refractive index of a surface of a bulk material. It is also very useful in determining parameters of one or more thin film layers deposited on a substrate. Not only the predicted magnitudes of the p and s polarization components, but also their relative phase shifts upon reflection, are compared to measurements made using an ellipsometer. A normal ellipsometer does not measure the actual reflection coefficient (which requires careful photometric calibration of the illuminating beam) but the ratio of the p and s reflections, as well as the change of polarization ellipticity (hence the name) induced upon reflection by the surface being studied. In addition to use in science and research, ellipsometers are used in situ to control production processes, for instance. Geology The property of (linear) birefringence is widespread in crystalline minerals, and indeed was pivotal in the initial discovery of polarization. In mineralogy, this property is frequently exploited using polarization microscopes, for the purpose of identifying minerals. See optical mineralogy for more details. Sound waves in solid materials exhibit polarization. Differential propagation of the three polarizations through the earth is crucial in the field of seismology.
Horizontally and vertically polarized seismic waves (shear waves) are termed SH and SV, while waves with longitudinal polarization (compressional waves) are termed P-waves. Autopsy Similarly, polarization microscopes can be used to aid in the detection of foreign matter in biological tissue slices if it is birefringent; autopsies often mention (a lack of or presence of) "polarizable foreign debris." Chemistry We have seen (above) that the birefringence of a type of crystal is useful in identifying it, and thus detection of linear birefringence is especially useful in geology and mineralogy. Linearly polarized light generally has its polarization state altered upon transmission through such a crystal, making it stand out when viewed in between two crossed polarizers, as seen in the photograph, above. Likewise, in chemistry, rotation of polarization axes in a liquid solution can be a useful measurement. In a liquid, linear birefringence is impossible, but there may be circular birefringence when a chiral molecule is in solution. When the right and left handed enantiomers of such a molecule are present in equal numbers (a so-called racemic mixture) then their effects cancel out. However, when there is only one (or a preponderance of one), as is more often the case for organic molecules, a net circular birefringence (or optical activity) is observed, revealing the magnitude of that imbalance (or the concentration of the molecule itself, when it can be assumed that only one enantiomer is present). This is measured using a polarimeter in which polarized light is passed through a tube of the liquid, at the end of which is another polarizer which is rotated in order to null the transmission of light through it. Astronomy In many areas of astronomy, the study of polarized electromagnetic radiation from outer space is of great importance. 
Although not usually a factor in the thermal radiation of stars, polarization is also present in radiation from coherent astronomical sources (e.g. hydroxyl or methanol masers), and incoherent sources such as the large radio lobes in active galaxies, and pulsar radio radiation (which may, it is speculated, sometimes be coherent), and is also imposed upon starlight by scattering from interstellar dust. Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field via Faraday rotation. The polarization of the cosmic microwave background is being used to study the physics of the very early universe. Synchrotron radiation is inherently polarized. It has been suggested that astronomical sources caused the chirality of biological molecules on Earth, but chirality selection on inorganic crystals has been proposed as an alternative theory. Applications and examples Polarized sunglasses Unpolarized light, after being reflected by a specular (shiny) surface, generally obtains a degree of polarization. This phenomenon was observed in the early 1800s by the mathematician Étienne-Louis Malus, after whom Malus's law is named. Polarizing sunglasses exploit this effect to reduce glare from reflections by horizontal surfaces, notably the road ahead viewed at a grazing angle. Wearers of polarized sunglasses will occasionally observe inadvertent polarization effects such as color-dependent birefringent effects, for example in toughened glass (e.g., car windows) or items made from transparent plastics, in conjunction with natural polarization by reflection or scattering. The polarized light from LCD monitors (see below) is extremely conspicuous when these are worn. Sky polarization and photography Polarization is observed in the light of the sky, as this is due to sunlight scattered by aerosols as it passes through Earth's atmosphere. The scattered light produces the brightness and color in clear skies. 
This partial polarization of scattered light can be used to darken the sky in photographs, increasing the contrast. This effect is most strongly observed at points on the sky making a 90° angle to the Sun. Polarizing filters use these effects to optimize the results of photographing scenes in which reflection or scattering by the sky is involved. Sky polarization has been used for orientation in navigation. The Pfund sky compass was used in the 1950s when navigating near the poles of the Earth's magnetic field when neither the sun nor stars were visible (e.g., under daytime cloud or twilight). It has been suggested, controversially, that the Vikings exploited a similar device (the "sunstone") in their extensive expeditions across the North Atlantic in the 9th–11th centuries, before the arrival of the magnetic compass from Asia to Europe in the 12th century. Related to the sky compass is the "polar clock", invented by Charles Wheatstone in the late 19th century. Display technologies The principle of liquid-crystal display (LCD) technology relies on the rotation of the axis of linear polarization by the liquid crystal array. Light from the backlight (or the back reflective layer, in devices not including or requiring a backlight) first passes through a linear polarizing sheet. That polarized light passes through the actual liquid crystal layer which may be organized in pixels (for a TV or computer monitor) or in another format such as a seven-segment display or one with custom symbols for a particular product. The liquid crystal layer is produced with a consistent right (or left) handed chirality, essentially consisting of tiny helices. This causes circular birefringence, and is engineered so that there is a 90 degree rotation of the linear polarization state. However, when a voltage is applied across a cell, the molecules straighten out, lessening or totally losing the circular birefringence. 
On the viewing side of the display is another linear polarizing sheet, usually oriented at 90 degrees from the one behind the active layer. Therefore, when the circular birefringence is removed by the application of a sufficient voltage, the polarization of the transmitted light remains at right angles to the front polarizer, and the pixel appears dark. With no voltage, however, the 90 degree rotation of the polarization causes it to exactly match the axis of the front polarizer, allowing the light through. Intermediate voltages create intermediate rotation of the polarization axis and the pixel has an intermediate intensity. Displays based on this principle are widespread, and now are used in the vast majority of televisions, computer monitors and video projectors, rendering the previous CRT technology essentially obsolete. The use of polarization in the operation of LCD displays is immediately apparent to someone wearing polarized sunglasses, often making the display unreadable. In a totally different sense, polarization encoding has become the leading (but not sole) method for delivering separate images to the left and right eye in stereoscopic displays used for 3D movies. This involves separate images intended for each eye either projected from two different projectors with orthogonally oriented polarizing filters or, more typically, from a single projector with time multiplexed polarization (a fast alternating polarization device for successive frames). Polarized 3D glasses with suitable polarizing filters ensure that each eye receives only the intended image. Historically such systems used linear polarization encoding because it was inexpensive and offered good separation. However, circular polarization makes separation of the two images insensitive to tilting of the head, and is widely used in 3-D movie exhibition today, such as the system from RealD. 
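The LCD principle described above (rear polarizer → twist cell → crossed front analyzer) can be modeled with a toy Jones calculation, treating the undriven twisted-nematic cell as an ideal 90° polarization rotator. This is a strong simplification of a real cell, and the names are mine:

```python
import math

def mat_vec(m, v):
    """Apply a 2x2 Jones matrix to a Jones vector (ex, ey)."""
    (a, b), (c, d) = m
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

def rotator(theta):
    """Ideal polarization rotator through angle theta (toy model of the
    twisted-nematic cell)."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

VERTICAL_ANALYZER = ((0, 0), (0, 1))  # crossed with the horizontal rear polarizer

def pixel_intensity(twist):
    """Light leaves the rear polarizer horizontally polarized, is rotated by
    the cell, then meets the crossed front analyzer."""
    v = mat_vec(rotator(twist), (1, 0))
    out = mat_vec(VERTICAL_ANALYZER, v)
    return abs(out[0]) ** 2 + abs(out[1]) ** 2

bright = pixel_intensity(math.pi / 2)  # no voltage: full 90-degree twist
dark = pixel_intensity(0.0)            # full voltage: twist removed
```

Intermediate twist angles give intermediate intensities, matching the greyscale behavior described above.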
Projecting such images requires screens that maintain the polarization of the projected light when viewed in reflection (such as silver screens); a normal diffuse white projection screen causes depolarization of the projected images, making it unsuitable for this application. Although now obsolete, CRT computer displays suffered from reflection by the glass envelope, causing glare from room lights and consequently poor contrast. Several anti-reflection solutions were employed to ameliorate this problem. One solution utilized the principle of reflection of circularly polarized light. A circular polarizing filter in front of the screen allows for the transmission of (say) only right circularly polarized room light. Now, right circularly polarized light (depending on the convention used) has its electric (and magnetic) field direction rotating clockwise while propagating in the +z direction. Upon reflection, the field still has the same direction of rotation, but now propagation is in the −z direction making the reflected wave left circularly polarized. With the right circular polarization filter placed in front of the reflecting glass, the unwanted light reflected from the glass will thus be in the very polarization state that is blocked by that filter, eliminating the reflection problem. The reversal of circular polarization on reflection and elimination of reflections in this manner can be easily observed by looking in a mirror while wearing 3-D movie glasses which employ left- and right-handed circular polarization in the two lenses. Closing one eye, the other eye will see a reflection in which it cannot see itself; that lens appears black. However, the other lens (of the closed eye) will have the correct circular polarization allowing the closed eye to be easily seen by the open one. Radio transmission and reception All radio (and microwave) antennas used for transmitting or receiving are intrinsically polarized. 
They transmit in (or receive signals from) a particular polarization, being totally insensitive to the opposite polarization; in certain cases that polarization is a function of direction. Most antennas are nominally linearly polarized, but elliptical and circular polarization is a possibility. In the case of linear polarization, the same kind of filtering as described above is possible. In the case of elliptical polarization (circular polarization is in reality just a kind of elliptical polarization in which both axes are of equal length), filtering out a single angle (e.g. 90°) has virtually no effect, since the field orientation sweeps through all 360 degrees over each cycle. The vast majority of antennas are linearly polarized. In fact it can be shown from considerations of symmetry that an antenna that lies entirely in a plane which also includes the observer, can only have its polarization in the direction of that plane. This applies to many cases, allowing one to easily infer such an antenna's polarization at an intended direction of propagation. So a typical rooftop Yagi or log-periodic antenna with horizontal conductors, as viewed from a second station toward the horizon, is necessarily horizontally polarized. But a vertical "whip antenna" or AM broadcast tower used as an antenna element (again, for observers horizontally displaced from it) will transmit in the vertical polarization. A turnstile antenna with its four arms in the horizontal plane, likewise transmits horizontally polarized radiation toward the horizon. However, when that same turnstile antenna is used in the "axial mode" (upwards, for the same horizontally-oriented structure) its radiation is circularly polarized. At intermediate elevations it is elliptically polarized. 
Polarization is important in radio communications because, for instance, if one attempts to use a horizontally polarized antenna to receive a vertically polarized transmission, the signal strength will be substantially reduced (or under very controlled conditions, reduced to nothing). This principle is used in satellite television in order to double the channel capacity over a fixed frequency band. The same frequency channel can be used for two signals broadcast in opposite polarizations. By adjusting the receiving antenna for one or the other polarization, either signal can be selected without interference from the other. Especially due to the presence of the ground, there are some differences in propagation (and also in reflections responsible for TV ghosting) between horizontal and vertical polarizations. AM and FM broadcast radio usually use vertical polarization, while television uses horizontal polarization. At low frequencies especially, horizontal polarization is avoided. That is because the phase of a horizontally polarized wave is reversed upon reflection by the ground. A distant station in the horizontal direction will receive both the direct and reflected wave, which thus tend to cancel each other. This problem is avoided with vertical polarization. Polarization is also important in the transmission of radar pulses and reception of radar reflections by the same or a different antenna. For instance, back scattering of radar pulses by rain drops can be avoided by using circular polarization. Just as specular reflection of circularly polarized light reverses the handedness of the polarization, as discussed above, the same principle applies to scattering by objects much smaller than a wavelength such as rain drops. On the other hand, reflection of that wave by an irregular metal object (such as an airplane) will typically introduce a change in polarization and (partial) reception of the return wave by the same antenna. 
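The "substantially reduced" signal for mismatched linear polarizations follows a cos² law (the polarization loss factor). This relation is a standard antenna-engineering result assumed here rather than stated explicitly in the text; a minimal sketch:

```python
import math

def polarization_loss_factor(theta_deg):
    """Fraction of power coupled between a linearly polarized wave and a
    linear antenna misaligned by theta degrees (the standard cos^2 law;
    an assumption here, since the text only says 'substantially reduced')."""
    return math.cos(math.radians(theta_deg)) ** 2

print(polarization_loss_factor(0))             # 1.0 – polarizations matched
print(round(polarization_loss_factor(45), 2))  # 0.5 – half the power coupled
print(round(polarization_loss_factor(90), 12)) # 0.0 – crossed, essentially no signal
```

The 90° case corresponds to the horizontal-antenna-receiving-vertical-transmission scenario described above.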
The effect of free electrons in the ionosphere, in conjunction with the earth's magnetic field, causes Faraday rotation, a sort of circular birefringence. This is the same mechanism which can rotate the axis of linear polarization by electrons in interstellar space as mentioned below. The magnitude of Faraday rotation caused by such a plasma is greatly exaggerated at lower frequencies, so at the higher microwave frequencies used by satellites the effect is minimal. However, medium or short wave transmissions received following refraction by the ionosphere are strongly affected. Since a wave's path through the ionosphere and the earth's magnetic field vector along such a path are rather unpredictable, a wave transmitted with vertical (or horizontal) polarization will generally have a resulting polarization in an arbitrary orientation at the receiver. Polarization and vision Many animals are capable of perceiving some of the components of the polarization of light, e.g., linear horizontally polarized light. This is generally used for navigational purposes, since the linear polarization of sky light is always perpendicular to the direction of the sun. This ability is very common among the insects, including bees, which use this information to orient their communicative dances. Polarization sensitivity has also been observed in species of octopus, squid, cuttlefish, and mantis shrimp. In the latter case, one species measures all six orthogonal components of polarization, and is believed to have optimal polarization vision. The rapidly changing, vividly colored skin patterns of cuttlefish, used for communication, also incorporate polarization patterns, and mantis shrimp are known to have polarization selective reflective tissue. Sky polarization was thought to be perceived by pigeons, which was assumed to be one of their aids in homing, but research indicates this is a popular myth. 
The naked human eye is weakly sensitive to polarization, without the need for intervening filters. Polarized light creates a very faint pattern near the center of the visual field, called Haidinger's brush. This pattern is very difficult to see, but with practice one can learn to detect polarized light with the naked eye. Angular momentum using circular polarization It is well known that electromagnetic radiation carries a certain linear momentum in the direction of propagation. In addition, however, light carries a certain angular momentum if it is circularly polarized (or partially so). In comparison with lower frequencies such as microwaves, the amount of angular momentum in light, even of pure circular polarization, compared to the same wave's linear momentum (or radiation pressure) is very small and difficult to even measure. However, it was utilized in an experiment to achieve speeds of up to 600 million revolutions per minute. See also Plane of polarization Spin angular momentum of light Depolarizer (optics) Fluorescence anisotropy Glan–Taylor prism Kerr effect Nicol prism Pockels effect Polarization rotator Polarized light microscopy Polarizer Polaroid (polarizer) Radial polarization Rayleigh sky model Waveplate
Power factor
In electrical engineering, the power factor of an AC power system is defined as the ratio of the real power absorbed by the load to the apparent power flowing in the circuit. Real power is the average of the instantaneous product of voltage and current and represents the capacity of the electricity for performing work. Apparent power is the product of root mean square (RMS) current and voltage. Due to energy stored in the load and returned to the source, or due to a non-linear load that distorts the wave shape of the current drawn from the source, the apparent power may be greater than the real power, so more current flows in the circuit than would be required to transfer real power alone. A power factor magnitude of less than one indicates the voltage and current are not in phase, reducing the average product of the two. A negative power factor occurs when the device (normally the load) generates real power, which then flows back towards the source. In an electric power system, a load with a low power factor draws more current than a load with a high power factor for the same amount of useful power transferred. The larger currents increase the energy lost in the distribution system and require larger wires and other equipment. Because of the costs of larger equipment and wasted energy, electrical utilities will usually charge a higher cost to industrial or commercial customers with a low power factor. Power-factor correction increases the power factor of a load, improving efficiency for the distribution system to which it is attached. Linear loads with a low power factor (such as induction motors) can be corrected with a passive network of capacitors or inductors. Non-linear loads, such as rectifiers, distort the current drawn from the system. In such cases, active or passive power factor correction may be used to counteract the distortion and raise the power factor. 
The devices for correction of the power factor may be at a central substation, spread out over a distribution system, or built into power-consuming equipment. General case The general expression for power factor is given by PF = P / (V_rms × I_rms), where P is the real power measured by an ideal wattmeter, I_rms is the rms current measured by an ideal ammeter, and V_rms is the rms voltage measured by an ideal voltmeter. Apparent power, S = V_rms × I_rms, is the product of the rms current and the rms voltage. If the load is sourcing power back toward the generator, then P and PF will be negative. Periodic waveforms If the waveforms are periodic with the same fundamental period, then the power factor can be computed as PF = [(1/T) ∫ from t₀ to t₀+T of i(t) v(t) dt] / (V_rms × I_rms), with V_rms = √[(1/T) ∫ from t₀ to t₀+T of v(t)² dt] and I_rms defined analogously, where i(t) is the instantaneous current, v(t) is the instantaneous voltage, t₀ is an arbitrary starting time, and T is the period of the waveforms. Nonperiodic waveforms If the waveforms are not periodic and the physical meters have the same averaging time, then the equations for the periodic case can be used with the exception that T is the averaging time of the meters instead of the waveform period. Linear circuits In a linear circuit, consisting of combinations of resistors, inductors, and capacitors, current flow has a sinusoidal response to the sinusoidal line voltage. A linear load does not change the shape of the input waveform but may change the relative timing (phase) between voltage and current, due to its inductance or capacitance. In a purely resistive AC circuit, voltage and current waveforms are in step (or in phase), changing polarity at the same instant in each cycle. All the power entering the load is consumed (or dissipated). Where reactive loads are present, such as with capacitors or inductors, energy storage in the loads results in a phase difference between the current and voltage waveforms. 
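The periodic-waveform definition can be checked numerically. This sketch (illustrative 50 Hz values, not taken from the text) samples one period of a sinusoidal voltage and a lagging sinusoidal current and recovers PF = cos θ:

```python
import math

def power_factor(v, i):
    """PF = average(v*i) / (V_rms * I_rms) for uniformly sampled
    waveforms covering a whole number of periods."""
    n = len(v)
    p_avg = sum(vk * ik for vk, ik in zip(v, i)) / n
    v_rms = math.sqrt(sum(vk * vk for vk in v) / n)
    i_rms = math.sqrt(sum(ik * ik for ik in i) / n)
    return p_avg / (v_rms * i_rms)

# One period of a 50 Hz sinusoid with the current lagging by 30 degrees.
n = 10_000
phi = math.radians(30)
t = [k / n * 0.02 for k in range(n)]   # 0.02 s = one period at 50 Hz
v = [325.0 * math.sin(2 * math.pi * 50 * tk) for tk in t]
i = [10.0 * math.sin(2 * math.pi * 50 * tk - phi) for tk in t]

print(round(power_factor(v, i), 4))    # matches cos(30°) ≈ 0.866
```

For purely sinusoidal waveforms the sampled average reduces exactly to the displacement power factor, cos θ.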
During each cycle of the AC voltage, extra energy, in addition to any energy consumed in the load, is temporarily stored in the load in electric or magnetic fields then returned to the power grid a fraction of the period later. Electrical circuits containing predominantly resistive loads (incandescent lamps, devices using heating elements like electric toasters and ovens) have a power factor of almost 1, but circuits containing inductive or capacitive loads (electric motors, solenoid valves, transformers, fluorescent lamp ballasts, and others) can have a power factor well below 1. A circuit with a low power factor will use a greater amount of current to transfer a given quantity of real power than a circuit with a high power factor, thus causing increased losses due to resistive heating in power lines, and requiring the use of higher-rated conductors and transformers. Definition and calculation AC power has two components: Real power or active power (P) (sometimes called average power), expressed in watts (W) Reactive power (Q), usually expressed in reactive volt-amperes (var) Together, they form the complex power (S) expressed as volt-amperes (VA). The magnitude of the complex power is the apparent power (|S|), also expressed in volt-amperes (VA). The VA and var are non-SI units dimensionally similar to the watt but are used in engineering practice instead of the watt to state what quantity is being expressed. The SI explicitly disallows using units for this purpose or as the only source of information about a physical quantity as used. The power factor is defined as the ratio of real power to apparent power. As power is transferred along a transmission line, it does not consist purely of real power that can do work once transferred to the load, but rather consists of a combination of real and reactive power, called apparent power. 
The power factor describes the amount of real power transmitted along a transmission line relative to the total apparent power flowing in the line. The power factor can also be computed as the cosine of the angle θ by which the current waveform lags or leads the voltage waveform. Power triangle One can relate the various components of AC power by using the power triangle in vector space. Real power extends horizontally in the real axis and reactive power extends in the direction of the imaginary axis. Complex power (and its magnitude, apparent power) represents a combination of both real and reactive power, and therefore can be calculated by using the vector sum of these two components. We can conclude that the mathematical relationship between these components is: S = P + jQ, with apparent power |S| = √(P² + Q²). As the angle θ increases with fixed total apparent power, current and voltage are further out of phase with each other. Real power decreases, and reactive power increases. Lagging, leading and unity power factors Power factor is described as leading if the current waveform is advanced in phase with respect to the voltage, or lagging when the current waveform is behind the voltage waveform. A lagging power factor signifies that the load is inductive, as the load will consume reactive power. The reactive component is positive as reactive power travels through the circuit and is consumed by the inductive load. A leading power factor signifies that the load is capacitive, as the load supplies reactive power, and therefore the reactive component is negative as reactive power is being supplied to the circuit. If θ is the phase angle between the current and voltage, then the power factor is equal to the cosine of the angle: PF = cos θ. Since the units are consistent, the power factor is by definition a dimensionless number between −1 and 1. When the power factor is equal to 0, the energy flow is entirely reactive, and stored energy in the load returns to the source on each cycle. 
When the power factor is 1, referred to as the unity power factor, all the energy supplied by the source is consumed by the load. Power factors are usually stated as leading or lagging to show the sign of the phase angle. Capacitive loads are leading (current leads voltage), and inductive loads are lagging (current lags voltage). If a purely resistive load is connected to a power supply, current and voltage will change polarity in step, the power factor will be 1, and the electrical energy flows in a single direction across the network in each cycle. Inductive loads such as induction motors (any type of wound coil) consume reactive power with the current waveform lagging the voltage. Capacitive loads such as capacitor banks or buried cables generate reactive power with the current phase leading the voltage. Both types of loads will absorb energy during part of the AC cycle, which is stored in the device's magnetic or electric field, only to return this energy back to the source during the rest of the cycle. For example, to get 1 kW of real power, if the power factor is unity, 1 kVA of apparent power needs to be transferred (1 kW ÷ 1 = 1 kVA). At low values of power factor, more apparent power needs to be transferred to get the same real power. To get 1 kW of real power at 0.2 power factor, 5 kVA of apparent power needs to be transferred (1 kW ÷ 0.2 = 5 kVA). This apparent power must be produced and transmitted to the load and is subject to losses in the production and transmission processes. Electrical loads consuming alternating current power consume both real power and reactive power. The vector sum of real and reactive power is the complex power, and its magnitude is the apparent power. The presence of reactive power causes the real power to be less than the apparent power, and so, the electric load has a power factor of less than 1. 
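The 1 kW examples above can be expressed as a small helper; the 230 V supply voltage used below is an assumed illustrative value, not from the text:

```python
def apparent_power_kva(real_kw, pf):
    # S = P / PF: apparent power needed to deliver a given real power.
    return real_kw / pf

def line_current_a(real_kw, pf, volts):
    # I = S / V for a single-phase supply.
    return apparent_power_kva(real_kw, pf) * 1000.0 / volts

print(apparent_power_kva(1.0, 1.0))              # 1.0 kVA at unity power factor
print(apparent_power_kva(1.0, 0.2))              # 5.0 kVA at PF = 0.2
print(round(line_current_a(1.0, 0.2, 230.0), 1)) # 21.7 A, versus about 4.3 A at unity
```

The fivefold increase in line current at PF = 0.2 is what drives the extra production and transmission losses described above.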
A negative power factor (0 to −1) can result from returning active power to the source, such as in the case of a building fitted with solar panels when surplus power is fed back into the supply. Power factor correction of linear loads A high power factor is generally desirable in a power delivery system to reduce losses and improve voltage regulation at the load. Compensating elements near an electrical load will reduce the apparent power demand on the supply system. Power factor correction may be applied by an electric power transmission utility to improve the stability and efficiency of the network. Individual electrical customers who are charged by their utility for low power factor may install correction equipment to increase their power factor to reduce costs. Power factor correction brings the power factor of an AC power circuit closer to 1 by supplying or absorbing reactive power, adding capacitors or inductors that act to cancel the inductive or capacitive effects of the load, respectively. In the case of offsetting the inductive effect of motor loads, capacitors can be locally connected. These capacitors help to generate reactive power to meet the demand of the inductive loads. This will keep that reactive power from having to flow from the utility generator to the load. In the electricity industry, inductors are said to consume reactive power, and capacitors are said to supply it, even though reactive power is just energy moving back and forth on each AC cycle. The reactive elements in power factor correction devices can create voltage fluctuations and harmonic noise when switched on or off. They will supply or sink reactive power regardless of whether there is a corresponding load operating nearby, increasing the system's no-load losses. In the worst case, reactive elements can interact with the system and with each other to create resonant conditions, resulting in system instability and severe overvoltage fluctuations. 
As such, reactive elements cannot simply be applied without engineering analysis. An automatic power factor correction unit consists of some capacitors that are switched by means of contactors. These contactors are controlled by a regulator that measures power factor in an electrical network. Depending on the load and power factor of the network, the power factor controller will switch the necessary blocks of capacitors in steps to make sure the power factor stays above a selected value. In place of a set of switched capacitors, an unloaded synchronous motor can supply reactive power. The reactive power drawn by the synchronous motor is a function of its field excitation. It is referred to as a synchronous condenser. It is started and connected to the electrical network. It operates at a leading power factor and puts vars onto the network as required to support a system's voltage or to maintain the system power factor at a specified level. The synchronous condenser's installation and operation are identical to those of large electric motors. Its principal advantage is the ease with which the amount of correction can be adjusted; it behaves like a variable capacitor. Unlike with capacitors, the amount of reactive power furnished is proportional to voltage, not the square of voltage; this improves voltage stability on large networks. Synchronous condensers are often used in connection with high-voltage direct-current transmission projects or in large industrial plants such as steel mills. For power factor correction of high-voltage power systems or large, fluctuating industrial loads, power electronic devices such as the static VAR compensator or STATCOM are increasingly used. These systems are able to compensate sudden changes of power factor much more rapidly than contactor-switched capacitor banks and, being solid-state, require less maintenance than synchronous condensers. 
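Sizing the switched capacitor blocks described above typically uses the relation Qc = P·(tan θ₁ − tan θ₂). This is a standard textbook formula assumed here rather than one given in the text, and the load values below are illustrative:

```python
import math

def correction_kvar(p_kw, pf_old, pf_new):
    """kvar of capacitive compensation needed to raise a lagging power
    factor from pf_old to pf_new for a load drawing p_kw of real power."""
    theta1 = math.acos(pf_old)   # phase angle before correction
    theta2 = math.acos(pf_new)   # target phase angle after correction
    return p_kw * (math.tan(theta1) - math.tan(theta2))

# Illustrative: 100 kW of motor load at 0.70 lagging, corrected to 0.95.
print(round(correction_kvar(100.0, 0.70, 0.95), 1))   # ≈ 69.2 kvar
```

An automatic correction unit would switch in capacitor blocks approximating this total as the measured load varies.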
Non-linear loads Examples of non-linear loads on a power system are rectifiers (such as used in a power supply), and arc discharge devices such as fluorescent lamps, electric welding machines, or arc furnaces. Because current in these systems is interrupted by a switching action, the current contains frequency components that are multiples of the power system frequency. Distortion power factor is a measure of how much the harmonic distortion of a load current decreases the average power transferred to the load. Non-sinusoidal components In linear circuits having only sinusoidal currents and voltages of one frequency, the power factor arises only from the difference in phase between the current and voltage. This is displacement power factor. Non-linear loads change the shape of the current waveform from a sine wave to some other form. Non-linear loads create harmonic currents in addition to the original (fundamental frequency) AC current. This is of importance in practical power systems that contain non-linear loads such as rectifiers, some forms of electric lighting, electric arc furnaces, welding equipment, switched-mode power supplies, variable speed drives and other devices. Filters consisting of linear capacitors and inductors can prevent harmonic currents from entering the supplying system. To measure the real power or reactive power, a wattmeter designed to work properly with non-sinusoidal currents must be used. Distortion power factor The distortion power factor is the distortion component associated with the harmonic voltages and currents present in the system: distortion power factor = I₁/I = 1/√(1 + THD_i²). Here THD_i = √(I₂² + I₃² + I₄² + …)/I₁ is the total harmonic distortion of the load current, I₁ is the fundamental component of the current, I is the total current, and I_h is the current at the hth harmonic; all are root mean square values (distortion power factor can also be used to describe individual order harmonics, using the corresponding current in place of total current). 
This definition with respect to total harmonic distortion assumes that the voltage stays undistorted (sinusoidal, without harmonics). This simplification is often a good approximation for stiff voltage sources (not being affected by changes in load downstream in the distribution network). Total harmonic distortion of typical generators from current distortion in the network is on the order of 1–2%, which can have larger scale implications but can be ignored in common practice. The result when multiplied with the displacement power factor is the overall, true power factor or just power factor (PF): PF = (displacement power factor) × (distortion power factor) = cos θ / √(1 + THD_i²). Distortion in three-phase networks In practice, the local effects of distortion current on devices in a three-phase distribution network rely on the magnitude of certain order harmonics rather than the total harmonic distortion. For example, the triplen, or zero-sequence, harmonics (3rd, 9th, 15th, etc.) have the property of being in-phase when compared line-to-line. In a delta-wye transformer, these harmonics can result in circulating currents in the delta windings and result in greater resistive heating. In a wye-configuration of a transformer, triplen harmonics will not create these currents, but they will result in a non-zero current in the neutral wire. This could overload the neutral wire in some cases and create error in kilowatt-hour metering systems and billing revenue. The presence of current harmonics in a transformer also results in larger eddy currents in the magnetic core of the transformer. Eddy current losses generally increase as the square of the frequency, lowering the transformer's efficiency, dissipating additional heat, and reducing its service life. Negative-sequence harmonics (5th, 11th, 17th, etc.) combine 120 degrees out of phase, similarly to the fundamental harmonic but in a reversed sequence. 
In generators and motors, these currents produce magnetic fields which oppose the rotation of the shaft and sometimes result in damaging mechanical vibrations. Power factor correction (PFC) in non-linear loads Passive PFC The simplest way to control the harmonic current is to use a filter that passes current only at line frequency (50 or 60 Hz). The filter consists of capacitors or inductors and makes a non-linear device look more like a linear load. An example of passive PFC is a valley-fill circuit. A disadvantage of passive PFC is that it requires larger inductors or capacitors than an equivalent power active PFC circuit. Also, in practice, passive PFC is often less effective at improving the power factor. Active PFC Active PFC is the use of power electronics to change the waveform of current drawn by a load to improve the power factor. Some types of active PFC are buck, boost, buck-boost and synchronous condenser. Active power factor correction can be single-stage or multi-stage. In the case of a switched-mode power supply, a boost converter is inserted between the bridge rectifier and the main input capacitors. The boost converter attempts to maintain a constant voltage at its output while drawing a current that is always in phase with and at the same frequency as the line voltage. Another switched-mode converter inside the power supply produces the desired output voltage from the DC bus. This approach requires additional semiconductor switches and control electronics but permits cheaper and smaller passive components. It is frequently used in practice. For a three-phase SMPS, the Vienna rectifier configuration may be used to substantially improve the power factor. SMPSs with passive PFC can achieve a power factor of about 0.7–0.75, SMPSs with active PFC, up to 0.99 power factor, while an SMPS without any power factor correction has a power factor of only about 0.55–0.65. 
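The way displacement and distortion factors combine into the true power factor can be sketched as follows; the 80% THD figure is an illustrative value for an uncorrected rectifier load, not taken from the text:

```python
import math

def distortion_pf(thd):
    # Distortion power factor = 1 / sqrt(1 + THD^2), THD as a fraction.
    return 1.0 / math.sqrt(1.0 + thd * thd)

def true_pf(displacement_pf, thd):
    # True power factor = displacement PF x distortion PF.
    return displacement_pf * distortion_pf(thd)

# A rectifier drawing current at 80% THD with negligible phase shift:
print(round(true_pf(1.0, 0.80), 3))   # ≈ 0.781 despite unity displacement PF
```

This illustrates why an uncorrected SMPS can have a low power factor even though its current is nearly in phase with the voltage.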
Due to their very wide input voltage range, many power supplies with active PFC can automatically adjust to operate on AC power from about 100 V (Japan) to 240 V (Europe). That feature is particularly welcome in power supplies for laptops. Dynamic PFC Dynamic power factor correction (DPFC), sometimes referred to as real-time power factor correction, is used for electrical stabilization in cases of rapid load changes (e.g. at large manufacturing sites). DPFC is useful when standard power factor correction would cause over or under correction. DPFC uses semiconductor switches, typically thyristors, to quickly connect and disconnect capacitors or inductors to improve power factor. Importance in distribution systems Power factors below 1.0 require a utility to generate more than the minimum volt-amperes necessary to supply the real power (watts). This increases generation and transmission costs. For example, if the load power factor were as low as 0.7, the apparent power would be 1.4 times the real power used by the load. Line current in the circuit would also be 1.4 times the current required at 1.0 power factor, so the losses in the circuit would be doubled (since they are proportional to the square of the current). Alternatively, all components of the system such as generators, conductors, transformers, and switchgear would be increased in size (and cost) to carry the extra current. When the power factor is close to unity, for the same kVA rating of the transformer more load current can be supplied. Utilities typically charge additional costs to commercial customers who have a power factor below some limit, which is typically 0.9 to 0.95. Engineers are often interested in the power factor of a load as one of the factors that affect the efficiency of power transmission. With the rising cost of energy and concerns over the efficient delivery of power, active PFC has become more common in consumer electronics. 
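The doubling of losses at 0.7 power factor mentioned above follows from the current scaling as 1/PF at fixed real power and voltage, so resistive loss scales as 1/PF²; a quick check:

```python
def relative_line_loss(pf):
    # I^2 R loss relative to unity power factor, for fixed real power
    # and voltage: current scales as 1/PF, so loss scales as 1/PF^2.
    return 1.0 / pf ** 2

print(round(relative_line_loss(0.7), 2))    # ≈ 2.04: losses roughly doubled
print(round(relative_line_loss(0.95), 2))   # ≈ 1.11: near-unity PF, modest penalty
```

The 0.95 case corresponds to the typical utility billing threshold discussed above.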
Current Energy Star guidelines for computers call for a power factor of ≥ 0.9 at 100% of rated output in the PC's power supply. According to a white paper authored by Intel and the U.S. Environmental Protection Agency, PCs with internal power supplies will require the use of active power factor correction to meet the ENERGY STAR 5.0 Program Requirements for Computers. In Europe, EN 61000-3-2 requires power factor correction be incorporated into consumer products. Small customers, such as households, are not usually charged for reactive power and so power factor metering equipment for such customers will not be installed. Measurement techniques The power factor in a single-phase circuit (or balanced three-phase circuit) can be measured with the wattmeter-ammeter-voltmeter method, where the power in watts is divided by the product of measured voltage and current. The power factor of a balanced polyphase circuit is the same as that of any phase. The power factor of an unbalanced polyphase circuit is not uniquely defined. A direct reading power factor meter can be made with a moving coil meter of the electrodynamic type, carrying two perpendicular coils on the moving part of the instrument. The field of the instrument is energized by the circuit current flow. The two moving coils, A and B, are connected in parallel with the circuit load. One coil, A, will be connected through a resistor and the second coil, B, through an inductor, so that the current in coil B is delayed with respect to current in A. At unity power factor, the current in A is in phase with the circuit current, and coil A provides maximum torque, driving the instrument pointer toward the 1.0 mark on the scale. At zero power factor, the current in coil B is in phase with circuit current, and coil B provides torque to drive the pointer towards 0. At intermediate values of power factor, the torques provided by the two coils add and the pointer takes up intermediate positions. 
Another electromechanical instrument is the polarized-vane type. In this instrument a stationary field coil produces a rotating magnetic field, just like a polyphase motor. The field coils are connected either directly to polyphase voltage sources or to a phase-shifting reactor in a single-phase application. A second stationary field coil, perpendicular to the voltage coils, carries a current proportional to current in one phase of the circuit. The moving system of the instrument consists of two vanes that are magnetized by the current coil. In operation, the moving vanes take up a physical angle equivalent to the electrical angle between the voltage source and the current source. This type of instrument can be made to register for currents in both directions, giving a four-quadrant display of power factor or phase angle. Digital instruments exist that directly measure the time lag between voltage and current waveforms. Low-cost instruments of this type measure the peak of the waveforms. More sophisticated versions measure the peak of the fundamental harmonic only, thus giving a more accurate reading for phase angle on distorted waveforms. Calculating power factor from voltage and current phases is only accurate if both waveforms are sinusoidal. Power quality analyzers, often referred to as power analyzers, make a digital recording of the voltage and current waveform (typically either one phase or three phase) and accurately calculate true power (watts), apparent power (VA), power factor, AC voltage, AC current, DC voltage, DC current, frequency, IEC 61000-3-2/3-12 harmonic measurement, IEC 61000-3-3/3-11 flicker measurement, individual phase voltages in delta applications where there is no neutral line, total harmonic distortion, phase and amplitude of individual voltage or current harmonics, etc. Mnemonics Anglophone power engineering students are advised to remember: ELI the ICE man or ELI on ICE – the voltage E leads the current I in an inductor L. 
The current I leads the voltage E in a capacitor C. Another common mnemonic is CIVIL – in a capacitor (C) the current (I) leads voltage (V), voltage (V) leads current (I) in an inductor (L). References External links Electrical parameters AC power Electrical engineering Engineering ratios
Power factor
[ "Mathematics", "Engineering" ]
5,112
[ "Metrics", "Engineering ratios", "Quantity", "Electrical engineering", "Electrical parameters" ]
41,571
https://en.wikipedia.org/wiki/Power%20margin
In telecommunications, the power margin is the difference between available signal power and the minimum signal power needed to overcome system losses and still satisfy the minimum input requirements of the receiver for a given performance level. System power margin reflects the excess signal level, present at the input of the receiver, that is available to compensate for (a) the effects of component aging in the transmitter, receiver, or physical transmission medium, and (b) a deterioration in propagation conditions. Synonym system power margin. References Telecommunications engineering
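As a numeric sketch of the definition (all power levels below are invented for illustration), the margin is the simple difference between the available signal level and the minimum level the receiver requires:

```python
# Illustrative link-margin arithmetic (all power levels are made-up examples).
received_power_dbm = -62.0        # available signal level at the receiver input
receiver_sensitivity_dbm = -85.0  # minimum input for the required performance

power_margin_db = received_power_dbm - receiver_sensitivity_dbm  # 23.0 dB

# A positive margin is the headroom available to absorb component aging and
# deteriorating propagation conditions; a negative margin means the link
# misses its required performance level.
```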
Power margin
[ "Engineering" ]
101
[ "Electrical engineering", "Telecommunications engineering" ]
41,577
https://en.wikipedia.org/wiki/Primary%20channel
In telecommunications, the term primary channel has the following meanings: The communication channel that is designated as a prime transmission channel and is used as the first choice in restoring priority circuits. In a communications network, the channel that has the highest data rate of all the channels sharing a common interface. The main television channel among several operated by a broadcaster. A primary channel may support the transfer of information in one direction only, either direction alternately, or both directions simultaneously. On a computer motherboard, it is the first physical connector for an IDE or ATA drive. Sources Network architecture
Primary channel
[ "Engineering" ]
116
[ "Network architecture", "Computer networks engineering" ]
41,579
https://en.wikipedia.org/wiki/Primary%20station
In a data communication network, the primary station is the station responsible for unbalanced control of a data link. The primary station generates commands and interprets responses, and is responsible for initialization of data and control information interchange, organization and control of data flow, retransmission control, and all recovery functions at the link level. References Data transmission
Primary station
[ "Technology" ]
73
[ "Computing stubs", "Computer network stubs" ]
41,580
https://en.wikipedia.org/wiki/Primary%20time%20standard
In telecommunications, a primary time standard is a time standard that does not require calibration against another time standard. Examples of primary time (i.e., frequency) standards are caesium standards and hydrogen masers. The international second is based on the microwave frequency (9,192,631,770 Hz) associated with the atomic resonance of the hyperfine ground state levels of the caesium-133 atom in a magnetically neutral environment. Realizable caesium frequency standards use a strong electromagnet to deliberately introduce a magnetic field which overwhelms that of the Earth. The presence of this strong magnetic field introduces a slight, but known, increase in the atomic resonance frequency. However, very small variations in the calibration of the electric current in the electromagnet introduce minuscule frequency variations among different caesium oscillators. References Timekeeping Electronics standards Telecommunications standards
Primary time standard
[ "Physics" ]
190
[ "Spacetime", "Timekeeping", "Physical quantities", "Time" ]
41,584
https://en.wikipedia.org/wiki/Private%20line
In telecommunications, a private line is typically a telephone company service that uses a dedicated, usually unswitched point-to-point circuit, but it may involve private switching arrangements, or predefined transmission physical or virtual paths. Most private lines connect only two locations, but some have multiple drop points. If the circuit is used for interconnecting switching systems, including manual switchboards, it is often called a tie line. Among subscribers to the public switched telephone network, the term private line is often erroneously used to describe an individual telephone line for service for only one subscriber, as opposed to a party line with multiple stations connected. In radio or wireless telephony, Private Line is a term trademarked by Motorola to describe an implementation of a Continuous Tone-Coded Squelch System (CTCSS), a method of using low-frequency subaudible tones to share a single radio channel among multiple users. Each user group would use a different low frequency tone. Motorola's trade name, especially the abbreviation PL, has become a genericized trademark for the method. General Electric used the term Channel Guard to describe the same system and other manufacturers used other terms. A later digital version of a private line is called digital private line (DPL). Advantages of private lines are: High security as one dedicated link is provided Always available Instant data transmission Types PLNA - Private Line No Apparatus PLNE - Private Line No Equipment PLAR - Private Line Automatic Ringdown (think the "Red Phone") See also Leased line Dedicated line Party line Lec billing Interexchange carrier References Communication circuits
Private line
[ "Engineering" ]
330
[ "Telecommunications engineering", "Communication circuits" ]
41,586
https://en.wikipedia.org/wiki/Propagation%20constant
The propagation constant of a sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The quantity being measured can be the voltage, the current in a circuit, or a field vector such as electric field strength or flux density. The propagation constant itself measures the dimensionless change in magnitude or phase per unit length. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next. The propagation constant's value is expressed logarithmically, almost universally to the base e, rather than base 10 that is used in telecommunications in other situations. The quantity measured, such as voltage, is expressed as a sinusoidal phasor. The phase of the sinusoid varies with distance which results in the propagation constant being a complex number, the imaginary part being caused by the phase change. Alternative names The term "propagation constant" is somewhat of a misnomer as it usually varies strongly with ω. It is probably the most widely used term but there are a large variety of alternative names used by various authors for this quantity. These include transmission parameter, transmission function, propagation parameter, propagation coefficient and transmission constant. If the plural is used, it suggests that α and β are being referenced separately but collectively as in transmission parameters, propagation parameters, etc. In transmission line theory, α and β are counted among the "secondary coefficients", the term secondary being used to contrast to the primary line coefficients. The primary coefficients are the physical properties of the line, namely R,C,L and G, from which the secondary coefficients may be derived using the telegrapher's equation. 
In the field of transmission lines, the term transmission coefficient has a different meaning despite the similarity of name: it is the companion of the reflection coefficient. Definition The propagation constant, symbol γ, for a given system is defined by the ratio of the complex amplitude at the source of the wave, A(0), to the complex amplitude at some distance x, A(x), such that A(0)/A(x) = e^(γx). Inverting the above equation and isolating γ results in the quotient of the complex amplitude ratio's natural logarithm and the distance traveled: γ = (1/x) ln(A(0)/A(x)). Since the propagation constant is a complex quantity we can write γ = α + iβ, where α, the real part, is called the attenuation constant, and β, the imaginary part, is called the phase constant (in electrical circuits j is more often used in place of i). That β does indeed represent phase can be seen from Euler's formula, e^(iβ) = cos β + i sin β, which is a sinusoid that varies in phase as β varies but does not vary in amplitude, because |e^(iβ)| = √(cos²β + sin²β) = 1. The reason for the use of base e is also now made clear. The imaginary phase constant, iβ, can be added directly to the attenuation constant, α, to form a single complex number that can be handled in one mathematical operation provided they are to the same base. Angles measured in radians require base e, so the attenuation is likewise in base e. The propagation constant for conducting lines can be calculated from the primary line coefficients by means of the relationship γ = √(ZY), where Z = R + iωL, the series impedance of the line per unit length, and Y = G + iωC, the shunt admittance of the line per unit length. Plane wave The propagation factor of a plane wave traveling in a linear medium in the z direction is given by P = e^(−γz), where γ = α + iβ = √(iωμ(σ + iωε)) and: z = distance traveled in the z direction, α = attenuation constant in nepers/metre, β = phase constant in radians/metre, ω = angular frequency in radians/second, σ = conductivity of the medium, ε = complex permittivity of the medium, μ = complex permeability of the medium. The sign convention is chosen for consistency with propagation in lossy media. 
If the attenuation constant is positive, then the wave amplitude decreases as the wave propagates in the z direction. Wavelength, phase velocity, and skin depth have simple relationships to the components of the propagation constant: λ = 2π/β, v_p = ω/β, δ = 1/α. Attenuation constant In telecommunications, the term attenuation constant, also called attenuation parameter or attenuation coefficient, is the attenuation of an electromagnetic wave propagating through a medium per unit distance from the source. It is the real part of the propagation constant and is measured in nepers per metre. A neper is approximately 8.7 dB. The attenuation constant can be defined by the amplitude ratio |A(0)/A(x)| = e^(αx). The propagation constant per unit length is defined as the natural logarithm of the ratio of the sending end current or voltage to the receiving end current or voltage, divided by the distance x involved. Conductive lines The attenuation constant for conductive lines can be calculated from the primary line coefficients as shown above. For a line meeting the distortionless condition, with a conductance G in the insulator, the attenuation constant is given by α = √(RG); however, a real line is unlikely to meet this condition without the addition of loading coils and, furthermore, there are some frequency dependent effects operating on the primary "constants" which cause a frequency dependence of the loss. There are two main components to these losses, the metal loss and the dielectric loss. The loss of most transmission lines is dominated by the metal loss, which causes a frequency dependency due to the finite conductivity of metals and the skin effect inside a conductor. The skin effect causes R along the conductor to be approximately proportional to the square root of frequency, R ∝ √f. Losses in the dielectric depend on the loss tangent (tan δ) of the material divided by the wavelength of the signal. Thus they are directly proportional to the frequency. 
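The relationship between the primary line coefficients and the propagation constant, γ = √((R + jωL)(G + jωC)), lends itself to direct numerical evaluation with complex arithmetic. The coefficient values below are invented for illustration; the conversion factor 1 Np = 20/ln 10 ≈ 8.686 dB follows from the definitions above.

```python
import cmath
import math

# Propagation constant from (illustrative, made-up) primary line coefficients.
R = 0.1      # series resistance, ohms per metre
L = 0.5e-6   # series inductance, henries per metre
G = 1e-9     # shunt conductance, siemens per metre
C = 200e-12  # shunt capacitance, farads per metre
f = 1e6      # frequency, hertz
w = 2 * math.pi * f

Z = R + 1j * w * L          # series impedance per unit length
Y = G + 1j * w * C          # shunt admittance per unit length
gamma = cmath.sqrt(Z * Y)   # propagation constant, gamma = alpha + j*beta

alpha = gamma.real          # attenuation constant, nepers per metre
beta = gamma.imag           # phase constant, radians per metre

# A neper is 20/ln(10) ≈ 8.686 dB, so the attenuation in dB per metre is:
alpha_db_per_m = alpha * 20 / math.log(10)
```

For this low-loss example, β stays close to ω√(LC) and α close to R/(2Z₀) + GZ₀/2 with Z₀ = √(L/C), consistent with the distortionless-line discussion above.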
Optical fiber The attenuation constant for a particular propagation mode in an optical fiber is the real part of the axial propagation constant. Phase constant In electromagnetic theory, the phase constant, also called phase change constant, parameter or coefficient, is the imaginary component of the propagation constant for a plane wave. It represents the change in phase per unit length along the path traveled by the wave at any instant and is equal to the real part of the angular wavenumber of the wave. It is represented by the symbol β and is measured in units of radians per unit length. From the definition of (angular) wavenumber for transverse electromagnetic (TEM) waves in lossless media, β = k = 2π/λ = ω/v_p. For a transmission line, the telegrapher's equations tell us that the wavenumber must be proportional to frequency for the transmission of the wave to be undistorted in the time domain. This includes, but is not limited to, the ideal case of a lossless line. The reason for this condition can be seen by considering that a useful signal is composed of many different wavelengths in the frequency domain. For there to be no distortion of the waveform, all these waves must travel at the same velocity so that they arrive at the far end of the line at the same time as a group. Since the wave phase velocity is given by v_p = λf = ω/β, it is proved that β is required to be proportional to ω. In terms of the primary coefficients of the line, the telegrapher's equation for a distortionless line yields the condition β = ω√(LC), where L and C are, respectively, the inductance and capacitance per unit length of the line. However, practical lines can only be expected to approximately meet this condition over a limited frequency band. In particular, the phase constant β is not always equivalent to the wavenumber k. The relation β = k applies to the TEM wave, which travels in free space or in TEM devices such as the coaxial cable and two parallel wires transmission lines. 
Nevertheless, it does not apply to the TE wave (transverse electric wave) and TM wave (transverse magnetic wave). For example, in a hollow waveguide where the TEM wave cannot exist but TE and TM waves can propagate, β = (1/c)√(ω² − ω_c²). Here ω_c is the cutoff frequency. In a rectangular waveguide, the cutoff frequency is ω_c = c√((nπ/a)² + (mπ/b)²), where n and m are the mode numbers for the rectangle's sides of length a and b respectively. For TE modes, n, m ≥ 0 (but n = m = 0 is not allowed), while for TM modes, n, m ≥ 1. The phase velocity equals v_p = ω/β = c/√(1 − (ω_c/ω)²). Filters and two-port networks The term propagation constant or propagation function is applied to filters and other two-port networks used for signal processing. In these cases, however, the attenuation and phase coefficients are expressed in terms of nepers and radians per network section rather than per unit length. Some authors make a distinction between per unit length measures (for which "constant" is used) and per section measures (for which "function" is used). The propagation constant is a useful concept in filter design which invariably uses a cascaded section topology. In a cascaded topology, the propagation constant, attenuation constant and phase constant of individual sections may be simply added to find the total propagation constant etc. Cascaded networks The ratio of output to input voltage for each network is given by V_out/V_in = √(Z_I2/Z_I1) · e^(−γ). The √(Z_I2/Z_I1) terms are impedance scaling terms and their use is explained in the image impedance article. The overall voltage ratio is the product of the individual section ratios. Thus for n cascaded sections all having matching impedances facing each other, the overall propagation constant is given by γ_total = γ₁ + γ₂ + ⋯ + γ_n. See also The concept of penetration depth is one of many ways to describe the absorption of electromagnetic waves. For the others, and their interrelationships, see the article: Mathematical descriptions of opacity. Propagation speed Notes References Matthaei, Young, Jones, Microwave Filters, Impedance-Matching Networks, and Coupling Structures, McGraw-Hill, 1964. External links Free PDF download is available. 
There is an updated version dated August 6, 2002. Filter theory Physical quantities Telecommunication theory Electromagnetism Electromagnetic radiation Analog circuits Image impedance filters
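The rectangular-waveguide cutoff relations in the article above can be checked numerically. The sketch below assumes the standard textbook formulas ω_c = c√((nπ/a)² + (mπ/b)²) and β = (1/c)√(ω² − ω_c²), and uses the common WR-90 guide dimensions for the dominant TE10 mode.

```python
import math

c = 299792458.0   # speed of light in vacuum, m/s

def cutoff_angular_frequency(a, b, n, m):
    """omega_c = c * sqrt((n*pi/a)**2 + (m*pi/b)**2) for a rectangular guide."""
    return c * math.sqrt((n * math.pi / a) ** 2 + (m * math.pi / b) ** 2)

# WR-90 waveguide: a = 22.86 mm, b = 10.16 mm; dominant TE10 mode (n=1, m=0).
a, b = 22.86e-3, 10.16e-3
w_c = cutoff_angular_frequency(a, b, 1, 0)
f_c = w_c / (2 * math.pi)                 # ≈ 6.557 GHz cutoff

# Phase constant and phase velocity at 10 GHz (above cutoff).
w = 2 * math.pi * 10e9
beta = math.sqrt(w ** 2 - w_c ** 2) / c   # rad/m
v_p = w / beta                            # exceeds c inside a waveguide
```

The computed cutoff of roughly 6.56 GHz and a phase velocity above c at 10 GHz match the usual quoted behaviour of WR-90 X-band guide.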
Propagation constant
[ "Physics", "Mathematics", "Engineering" ]
1,932
[ "Physical phenomena", "Electromagnetism", "Telecommunications engineering", "Physical quantities", "Electromagnetic radiation", "Quantity", "Analog circuits", "Filter theory", "Electronic engineering", "Radiation", "Fundamental interactions", "Physical properties" ]
41,587
https://en.wikipedia.org/wiki/Propagation%20path%20obstruction
In telecommunications, a propagation path obstruction is a man-made or natural physical feature that lies near enough to a radio path to cause a measurable effect on path loss, exclusive of reflection effects. An obstruction may lie to the side, above, or below the path. Ridges, bridges, cliffs, buildings, and trees are examples of obstructions. If the clearance from the nearest anticipated path position, over the expected range of Earth radius k-factor, exceeds 0.6 of the first Fresnel zone radius, the feature is not normally considered an obstruction. References Radio frequency propagation
Propagation path obstruction
[ "Physics" ]
119
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
41,589
https://en.wikipedia.org/wiki/Protective%20distribution%20system
A protective distribution system (PDS), also called protected distribution system, is a US government term for a wireline or fiber-optic telecommunication system that includes terminals and adequate acoustical, electrical, electromagnetic, and physical safeguards to permit its use for the unencrypted transmission of classified information. At one time these systems were called "approved circuits". A complete protected distribution system includes the subscriber and terminal equipment and the interconnecting lines. Description The purpose of a PDS is to deter, detect and/or make difficult physical access to the communication lines carrying national security information. A specification called the National Security Telecommunications and Information Systems Security Instruction (NSTISSI) 7003 was issued in December 1996 by the Committee on National Security Systems. Approval authority, standards, and guidance for the design, installation, and maintenance of PDS are provided by NSTISSI 7003 to U.S. government departments and agencies and their contractors and vendors. This instruction describes the requirements for all PDS installations within the U.S. and for low and medium threat locations outside the U.S. PDS is commonly used to protect SIPRNet and JWICS networks. The document superseded one numbered NACSI 4009 on Protected Distribution Systems, dated December 30, 1981, and part of a document called NACSEM 5203, which covered guidelines for facility design, using the designations "red" and "black". There are two types of PDS: hardened distribution systems and simple distribution systems. Hardened distribution Hardened distribution PDSs provide significant physical protection and can be implemented in three forms: hardened carrier PDSs, alarmed carrier PDSs and continuously viewed carrier PDSs. 
Hardened carrier In a hardened carrier PDS, the data cables are installed in a carrier constructed of electrical metallic tubing (EMT), ferrous conduit or pipe, or rigid sheet steel ducting. All of the connections in a Hardened Carrier System are permanently sealed completely around all surfaces with welds, epoxy or other such sealants. If the hardened carrier is buried under ground, to secure cables running between buildings for example, the carrier containing the cables is encased in concrete. With a hardened carrier system, detection is accomplished via human inspections that are required to be performed periodically. Therefore, hardened carriers are installed below ceilings or above flooring so they can be visually inspected to ensure that no intrusions have occurred. These periodic visual inspections (PVIs) occur at a frequency dependent upon the level of threat to the environment, the security classification of the data, and the access control to the area. Alarmed carrier As an alternative to conducting human visual inspections, an alarmed carrier PDS may be constructed to automate the inspection process through electronic monitoring with an alarm system. In an Alarmed Carrier PDS, the carrier system is “alarmed” with specialized optical fibers deployed within the conduit for the purpose of sensing acoustic vibrations that usually occur when an intrusion is being attempted on the conduit in order to gain access to the cables. 
Alarmed carrier PDS offers several advantages over hardened carrier PDS: Provides continuous monitoring 24/7/365 Eliminates the requirement for periodic visual inspections Allows the carrier to be hidden above the ceiling or below the floor, since periodic visual inspections are not required Eliminates the need for the welding and epoxying of the connections Eliminates the requirement for concrete encasement outdoors Eliminates the need to lock down manhole covers Enables rapid redeployment for evolving network arrangements Legacy alarmed carrier systems monitor the carrier containing the cables being protected. More advanced systems monitor the fibers within, or intrinsic to, the cables being protected to turn those cables into sensors, which detect intrusion attempts. Depending on the government organization, utilizing an alarmed carrier PDS in conjunction with interlocking armored cable may, in some cases, allow for the elimination of the carrier systems altogether. In these instances, the cables being protected can be installed in existing conveyance (wire basket, ladder rack) or suspended cabling (on D-rings, J-Hooks, etc.). Continuously viewed carrier A Continuously Viewed Carrier PDS is one that is under continuous observation, 24 hours per day (including when operational). Such circuits may be grouped together, but should be separated from all non-continuously viewed circuits, ensuring an open field of view. Standing orders should include the requirement to investigate any attempt to disturb the PDS. Appropriate security personnel should investigate the area of attempted penetration within 15 minutes of discovery. This type of hardened carrier is not used for Top Secret or special category information in a non-U.S. Uncontrolled Access Area (UAA). Related definitions include Controlled Access Area (CAA) and Restricted Access Area (RAA). A Secure Room (SR) offers the highest degree of protection. 
Therefore, from the least protected (least secure) to the most protected, the order is as follows: UAA, RAA, CAA, SR. Simple distribution Simple distribution PDSs are afforded a reduced level of physical security protection as compared to a hardened distribution PDS. They use a simple carrier system and the following means are acceptable under NSTISSI 7003: The data cables should be installed in a carrier. The carrier can be constructed of any material (e.g., wood, PVC, EMT, ferrous conduit). The joints and access points should be secured and controlled by personnel cleared to the highest level of data handled by the PDS. The carrier is to be inspected in accordance with the requirements of NSTISSI 7003. See also National Information Systems Security Glossary References Classified information Telecommunications systems
Protective distribution system
[ "Technology" ]
1,144
[ "Telecommunications systems" ]
41,592
https://en.wikipedia.org/wiki/Provisioning%20%28technology%29
In telecommunications, provisioning involves the process of preparing and equipping a network to allow it to provide new services to its users. In National Security/Emergency Preparedness telecommunications services, "provisioning" equates to "initiation" and includes altering the state of an existing priority service or capability. The concept of network provisioning or service mediation, mostly used in the telecommunication industry, refers to the provisioning of the customer's services to the network elements, which are the various pieces of equipment connected in that network communication system. Generally in telephony provisioning this is accomplished with network management database table mappings. It requires the existence of networking equipment and depends on network planning and design. In a modern signal infrastructure employing information technology (IT) at all levels, there is no possible distinction between telecommunications services and "higher level" infrastructure. Accordingly, provisioning configures any required systems, provides users with access to data and technology resources, and refers to all enterprise-level information-resource management involved. Organizationally, a CIO typically manages provisioning, necessarily involving human resources and IT departments cooperating to: Give users access to data repositories or grant authorization to systems, network applications and databases based on a unique user identity. Appropriate hardware resources for their use, such as computers, mobile phones and pagers. At its core, the provisioning process monitors access rights and privileges to ensure the security of an enterprise's resources and user privacy. As a secondary responsibility, it ensures compliance and minimizes the vulnerability of systems to penetration and abuse. 
As a tertiary responsibility, it tries to reduce the amount of custom configuration using boot image control and other methods that radically reduce the number of different configurations involved. Discussion of provisioning often appears in the context of virtualization, orchestration, utility computing, cloud computing, and open-configuration concepts and projects. For instance, the OASIS Provisioning Services Technical Committee (PSTC) defines an XML-based framework for exchanging user, resource, and service-provisioning information - SPML (Service Provisioning Markup Language) for "managing the provisioning and allocation of identity information and system resources within and between organizations". Once provisioning has taken place, the process of SysOpping ensures the maintenance of services to the expected standards. Provisioning thus refers only to the setup or startup part of the service operation, and SysOpping to the ongoing support. Network provisioning Network provisioning is one type of provisioning. The services which are assigned to the customer in the customer relationship management (CRM) system have to be provisioned on the network element which enables the service and allows the customer to actually use it. The relation between a service configured in the CRM and a service on the network elements is not necessarily a one-to-one relationship; for example, services like Microsoft Media Server (mms://) can be enabled by more than one network element. During provisioning, the service mediation device translates the service and the corresponding parameters of the service to one or more services/parameters on the network elements involved. The algorithm used to translate a system service into network services is called provisioning logic. 
Electronic invoice feeds from carriers can be automatically downloaded directly into the core of the telecom expense management (TEM) software, which immediately audits each individual line-item charge all the way down to the User Support and Operations Center (USOC) level. The provisioning software captures each circuit number provided by the carriers, and if billing occurs outside of the contracted rate, an exception rule triggers a red flag and notifies a designated staff member to review the billing error. Server provisioning Server provisioning is a set of actions to prepare a server with appropriate systems, data and software, and make it ready for network operation. Typical tasks when provisioning a server are: select a server from a pool of available servers; load the appropriate software (operating system, device drivers, middleware, and applications); customize and configure the system and the software to create or change a boot image for this server; change its parameters, such as IP address and IP gateway, to find associated network and storage resources (sometimes separated as resource provisioning); and audit the system. By auditing the system, you can limit vulnerability (for example by checking OVAL compliance), ensure compliance, or install patches. After these actions, you restart the system and load the new software. This makes the system ready for operation. Typically an internet service provider (ISP) or network operations center will perform these tasks to a well-defined set of parameters, for example, a boot image that the organization has approved and which uses software it is licensed to use. Many instances of such a boot image create a virtual dedicated host. There are many software products available to automate the provisioning of servers, services and end-user devices. 
Examples: BMC Bladelogic Server Automation, HP Server Automation, IBM Tivoli Provisioning Manager, Redhat Kickstart, xCAT, HP Insight CMU, etc. Middleware and applications can be installed either when the operating system is installed or afterwards by using an Application Service Automation tool. Further questions, such as when provisioning should be issued and how many servers are needed in multi-tier or multi-service applications, are addressed in academia. In cloud computing, servers may be provisioned via a web user interface or an application programming interface (API). One of the unique things about cloud computing is how rapidly and easily this can be done. Monitoring software can be used to trigger automatic provisioning when existing resources become too heavily stressed. In short, server provisioning configures servers based on resource requirements. The use of a hardware or software component (e.g. single/dual processor, RAM, HDD, RAID controller, a number of LAN cards, applications, OS, etc.) depends on the functionality of the server, such as ISP, virtualization, NOS, or voice processing. Server redundancy depends on the availability of servers in the organization. Critical applications have less downtime when using cluster servers, RAID, or a mirroring system, and such redundancy is used by most larger-scale centers in part to avoid downtime. Additional resource provisioning may be done per service. There are several software products on the market for server provisioning, such as Cobbler or HP Intelligent Provisioning. User provisioning User provisioning refers to the creation, maintenance and deactivation of user objects and user attributes, as they exist in one or more systems, directories or applications, in response to automated or interactive business processes. 
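The create/maintain/deactivate lifecycle of user objects just described can be sketched as a minimal in-memory model. All identifiers and attributes below are invented for illustration; a real system would front a directory such as LDAP and drive the changes from business events.

```python
# Minimal sketch of a user-provisioning lifecycle (illustrative only).
class UserStore:
    def __init__(self):
        self.users = {}

    def provision(self, user_id, **attributes):
        """Create a user object with its attributes (e.g. mail, role)."""
        self.users[user_id] = {"active": True, **attributes}

    def update(self, user_id, **attributes):
        """Maintain: propagate attribute changes to the stored object."""
        self.users[user_id].update(attributes)

    def deprovision(self, user_id):
        """Deactivate rather than delete, preserving the audit trail."""
        self.users[user_id]["active"] = False

store = UserStore()
store.provision("jdoe", mail="jdoe@example.org", role="contractor")
store.update("jdoe", role="employee")   # e.g. triggered by an HR change
store.deprovision("jdoe")               # e.g. triggered by off-boarding
```

Keeping deactivated objects (rather than deleting them) mirrors the common practice of retaining identity records for audit and compliance.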
User provisioning software may include one or more of the following processes: change propagation, self-service workflow, consolidated user administration, delegated user administration, and federated change control. User objects may represent employees, contractors, vendors, partners, customers or other recipients of a service. Services may include electronic mail, inclusion in a published user directory, access to a database, access to a network or mainframe, etc. User provisioning is a type of identity management software, particularly useful within organizations, where users may be represented by multiple objects on multiple systems and multiple instances. Self-service provisioning for cloud computing services On-demand self-service is described by the National Institute of Standards and Technology (NIST) as an essential characteristic of cloud computing. The self-service nature of cloud computing lets end users obtain and remove cloud services (including applications, the infrastructure supporting the applications, and configuration) themselves without requiring the assistance of an IT staff member. The automatic self-servicing may target different application goals and constraints (e.g. deadlines and cost), as well as handling different application architectures (e.g., bags-of-tasks and workflows). Cloud users can obtain cloud services through a cloud service catalog or a self-service portal. Because business users can obtain and configure cloud services themselves, IT staff can be more productive and have more time to manage cloud infrastructures. One downside of cloud service provisioning is that it is not instantaneous. A cloud virtual machine (VM) can be acquired at any time by the user, but it may take up to several minutes for the acquired VM to be ready to use. The VM startup time depends on factors such as image size, VM type, data center location, and number of VMs. VM startup performance also differs between cloud providers.
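As a sketch of API-driven self-service provisioning, the snippet below assembles the kind of request payload a self-service portal might submit to a provider. The helper and its field names are invented for illustration; every real cloud provider defines its own API and request format.

```python
# Hypothetical VM-provisioning request payload; field names are invented.

import json

def build_vm_request(vm_type, image, region, count=1):
    """Assemble a payload a self-service portal might POST to a provider."""
    return {
        "vm_type": vm_type,  # machine size (CPU/RAM class) affects startup time
        "image": image,      # boot image; larger images start more slowly
        "region": region,    # data-center location also affects startup time
        "count": count,      # number of VMs requested
    }

payload = build_vm_request("small", "ubuntu-22.04", "eu-west", count=2)
print(json.dumps(payload, sort_keys=True))
```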
Mobile subscriber provisioning Mobile subscriber provisioning refers to the setting up of new services, such as GPRS, MMS and Instant Messaging, for an existing subscriber of a mobile phone network, and any gateways to standard Internet chat or mail services. The network operator typically sends these settings to the subscriber's handset using SMS text services or HTML, and less commonly WAP, depending on what the mobile operating systems can accept. A general example of provisioning is with data services. A mobile user who is using his or her device for voice calling may wish to switch to data services in order to read emails or browse the Internet. The mobile device's services are "provisioned" and thus the user is able to stay connected through push emails and other features of smartphone services. Device management systems can benefit end-users by incorporating plug-and-play data services, supporting whatever device the end-user is using. Such a platform can automatically detect devices in the network, sending them settings for immediate and continued usability. The process is fully automated, keeping a history of used devices and sending settings only to subscriber devices which were not previously set. One method of managing mobile updates is to filter IMEI/IMSI pairs. Some operators report activity of 50 over-the-air settings update files per second. Mobile content provisioning This refers to delivering mobile content, such as mobile internet, to a mobile phone, agnostic of the features of said device. These may include operating system type and version, Java version, browser version, screen form factors, audio capabilities, language settings and many other characteristics. As of April 2006, an estimated 5,000 permutations were relevant. Mobile content provisioning facilitates a common user experience, though delivered on widely different handsets.
Mobile device provisioning Provisioning devices involves delivering configuration data and policy settings to the mobile devices from a central point – Mobile device management system tools. Internet access provisioning When getting a customer online, the client system must be configured. Depending on the connection technology (e.g., DSL, Cable, Fibre), the client system configuration may include: Modem configuration Network authentication Installing drivers Setting up Wireless LAN Securing operating system (primarily for Windows) Configuring browser provider-specifics E-mail provisioning (create mailboxes and aliases) E-mail configuration in client systems Installing additional support software or add-on packages There are four approaches to provisioning internet access: Hand out manuals: Manuals are a great help for experienced users, but inexperienced users will need to call the support hotline several times until all internet services are accessible. Every unintended change in the configuration, by user mistake or due to a software error, results in additional calls. On-site setup by a technician: Sending a technician on-site is the most reliable approach from the provider's point of view, as the person ensures that the internet access is working before leaving the customer's premises. This advantage comes at high costs – either for the provider or the customer, depending on the business model. Furthermore, it is inconvenient for customers, as they have to wait until they get an installation appointment and because they need to take a day off from work. For repairing an internet connection, on-site or phone support will be needed again. Server-side remote setup: Server-side modem configuration uses a protocol called TR-069. It is widely established and reliable. At the current stage it can only be used for modem configuration.
Protocol extensions are discussed, but not yet practically implemented, particularly because most client devices and applications do not support them yet. All other steps of the provisioning process are left to the user, typically causing many rather long calls to the support hotline. Installation CD: Also called a "client-side self-service installation" CD, it can cover the entire process from modem configuration to setting up client applications, including home networking devices. The software typically acts autonomously, i.e., it doesn't need an online connection or an expensive backend infrastructure. During such an installation process the software usually also installs diagnostic and self-repair applications that support customers in case of problems, avoiding costly hotline calls. Such client-side applications also open completely new possibilities for marketing, cross-selling and upselling. Such solutions come from highly specialised companies or directly from the provider's development department. References Network access Operating system technology
Provisioning (technology)
[ "Engineering" ]
2,643
[ "Electronic engineering", "Network access" ]
41,595
https://en.wikipedia.org/wiki/Psophometer
In telecommunications, a psophometer is an instrument that measures the perceptible noise of a telephone circuit. The core of the meter is based on a true RMS voltmeter, which measures the level of the noise signal. This was used for the first psophometers, in the 1930s. As the human-perceived level of noise is more important for telephony than its raw voltage, a modern psophometer incorporates a weighting network to represent this perception. The characteristics of the weighting network depend on the type of circuit under investigation, such as whether the circuit is used for normal speech (300 Hz – 3.3 kHz) or for high-fidelity broadcast-quality sound (50 Hz – 15 kHz). Etymology The name was coined in the 1930s, from the Greek psóphos ("noise"). The '-meter' suffix was already widely used in English, but also derives originally from Greek. See also Psophometric voltage References Electronic test equipment Noise (electronics) Measuring instruments
Psophometer
[ "Technology", "Engineering" ]
217
[ "Electronic test equipment", "Measuring instruments" ]
41,596
https://en.wikipedia.org/wiki/Psophometric%20voltage
Psophometric voltage is a circuit noise voltage measured with a psophometer that includes a CCIF-1951 weighting network. "Psophometric voltage" should not be confused with "psophometric emf," i.e., the emf in a generator or line with 600 Ω internal resistance. For practical purposes, the psophometric emf is twice the corresponding psophometric voltage. Psophometric voltage readings, V, in millivolts, are commonly converted to dBm(psoph) by dBm(psoph) = 20 log10V – 57.78. References Electrical parameters Noise (electronics) Telecommunications engineering
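The conversion given above is straightforward to implement; this sketch assumes only the stated formula, with readings entered in millivolts.

```python
# Direct implementation of the conversion stated above:
# dBm(psoph) = 20*log10(V) - 57.78, with V in millivolts.

import math

def psoph_dbm(millivolts):
    return 20 * math.log10(millivolts) - 57.78

# A 1 mV psophometric reading corresponds to -57.78 dBm(psoph):
print(round(psoph_dbm(1.0), 2))   # -57.78
print(round(psoph_dbm(10.0), 2))  # -37.78
```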
Psophometric voltage
[ "Engineering" ]
149
[ "Electrical engineering", "Telecommunications engineering", "Electrical parameters" ]
41,598
https://en.wikipedia.org/wiki/Public%20land%20mobile%20network
In telecommunication, a public land mobile network (PLMN) is a combination of wireless communication services offered by a specific operator in a specific country. A PLMN typically consists of several cellular technologies like GSM/2G, UMTS/3G, LTE/4G and NR/5G, offered by a single operator within a given country, often referred to as a cellular network. PLMN code A PLMN is identified by a globally unique PLMN code, which consists of an MCC (Mobile Country Code) and MNC (Mobile Network Code). Hence, it is a five- to six-digit number identifying a country and a mobile network operator in that country, usually represented in the form 001-01 or 001-001. A PLMN is part of a: Location Area Identity (LAI) (PLMN and Location Area Code) Cell Global Identity (CGI) (LAI and Cell Identifier) IMSI (see PLMN code and IMSI) Leading zeros in PLMN codes Note that an MNC can take a two-digit form or a three-digit form with leading zeros. It is administered by the respective national numbering plan administrator. From PLMN assignments, it is apparent that dualities of two-digit and three-digit MNCs with the same number value are avoided (see the list of mobile country codes and mobile network codes). An example of actual three-digit and two-digit MNCs with leading zeros is found under the Bermuda MCC 350: 350-007 alongside 350-00 and 350-01. PLMN code and IMSI The IMSI, which identifies a SIM or USIM for one subscriber, typically starts with the PLMN code. For example, an IMSI belonging to the PLMN 262-33 would look like 262330000000001. Mobile phones use this to detect roaming, so that a mobile phone subscribed on a network whose PLMN code mismatches the start of the USIM's IMSI will typically display an "R" on the icon that indicates connection strength. PLMN services A PLMN typically offers the following services to a mobile subscriber: Emergency calls to local Fire/Ambulance/Police stations. Voice calls to/from any PLMN ("cellular network") or PSTN ("landline"/VoIP).
Short message service (SMS) services to/from any PLMN or SIP service (the original form of texting on a mobile phone, now often replaced by Messaging apps). Multimedia Messaging Service (MMS) services to/from any PLMN or SIP service. Unstructured Supplementary Service Data (USSD) for operator specific interactions (e.g. dialing "*#100#" to indicate the current balance). Internet data connectivity for arbitrary services, e.g. via GPRS in GSM, IuPS in UMTS, or LTE. The availability, quality and bandwidth of these services strongly depends on the particular technology used to implement a PLMN. References Mobile technology
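The roaming check described in the article, comparing the start of the IMSI against the serving network's PLMN code, can be sketched as follows; the helper name is invented for illustration.

```python
# Sketch of roaming detection from an IMSI prefix.
# MCC is always the first three digits; the MNC may be two or three digits,
# which is why the serving network's PLMN code is matched as a string prefix.

def is_roaming(imsi, serving_plmn):
    """serving_plmn is MCC+MNC with the dash removed, e.g. '26233'."""
    return not imsi.startswith(serving_plmn)

imsi = "262330000000001"          # subscriber of PLMN 262-33 (as in the article)
print(is_roaming(imsi, "26233"))  # False: home network
print(is_roaming(imsi, "26201"))  # True: same country, different operator
```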
Public land mobile network
[ "Technology" ]
649
[ "nan" ]
41,600
https://en.wikipedia.org/wiki/Pulse
In medicine, the pulse is the rhythmic throbbing of each artery in response to the cardiac cycle (heartbeat). The pulse may be palpated in any place that allows an artery to be compressed near the surface of the body, such as at the neck (carotid artery), wrist (radial artery or ulnar artery), at the groin (femoral artery), behind the knee (popliteal artery), near the ankle joint (posterior tibial artery), and on the foot (dorsalis pedis artery). The pulse is most commonly measured at the wrist or neck. A sphygmograph is an instrument for measuring the pulse. Physiology Claudius Galen was perhaps the first physiologist to describe the pulse. The pulse is an expedient tactile method of determination of systolic blood pressure to a trained observer. Diastolic blood pressure is non-palpable and unobservable by tactile methods, occurring between heartbeats. Pressure waves generated by the heart in systole move the arterial walls. Forward movement of blood occurs when the boundaries are pliable and compliant. Together, these properties are sufficient to create a palpable pressure wave. Pulse velocity, pulse deficits and much more physiologic data are readily and simply visualized by the use of one or more arterial catheters connected to a transducer and oscilloscope. This invasive technique has been commonly used in intensive care since the 1970s. The pulse may also be observed indirectly under light absorbances of varying wavelengths with assigned and inexpensively reproduced mathematical ratios. Applied capture of variances of light signal from the blood component hemoglobin under oxygenated vs. deoxygenated conditions allows the technology of pulse oximetry. Characteristics Rate The rate of the pulse can be observed and measured on the outside of an artery by tactile or visual means. It is recorded as arterial beats per minute or BPM. Although the pulse and heart beat are related, they are not the same.
For example, there is a delay between the onset of the heart beat and the onset of the pulse, known as the pulse transit time, which varies by site. Similarly, measurements of heart rate variability and pulse rate variability differ. In healthy people, the pulse rate is close to the heart rate, as measured by ECG. Measuring the pulse rate is therefore a convenient way to estimate the heart rate. Pulse deficit is a condition in which a person has a difference between their pulse rate and heart rate. It can be observed by simultaneous palpation at the radial artery and auscultation using a stethoscope at the PMI, near the heart apex, for example. Typically, in people with pulse deficit, some heart beats do not result in pulsations at the periphery, meaning the pulse rate is lower than the heart rate. Pulse deficit has been found to be significant in the context of premature ventricular contraction and atrial fibrillation. Rhythm A normal pulse is regular in rhythm and force. An irregular pulse may be due to sinus arrhythmia, ectopic beats, atrial fibrillation, paroxysmal atrial tachycardia, atrial flutter, partial heart block etc. Intermittent dropping out of beats at the pulse is called an "intermittent pulse". Examples of regular intermittent (regularly irregular) pulse include pulsus bigeminus and second-degree atrioventricular block. An example of irregular intermittent (irregularly irregular) pulse is atrial fibrillation. Volume The degree of expansion displayed by the artery during the diastolic and systolic states is called the volume. It is also known as the amplitude, expansion or size of the pulse. Hypokinetic pulse A weak pulse signifies narrow pulse pressure. It may be due to low cardiac output (as seen in shock, congestive cardiac failure), hypovolemia, valvular heart disease (such as aortic outflow tract obstruction, mitral stenosis, aortic arch syndrome) etc. Hyperkinetic pulse A bounding pulse signifies high pulse pressure.
It may be due to low peripheral resistance (as seen in fever, anemia, thyrotoxicosis, A-V fistula, Paget's disease, beriberi, liver cirrhosis), increased cardiac output, increased stroke volume (as seen in anxiety, exercise, complete heart block, aortic regurgitation), or decreased distensibility of the arterial system (as seen in atherosclerosis, hypertension and coarctation of the aorta). The strength of the pulse can also be reported: 0 = Absent 1 = Barely palpable 2 = Easily palpable 3 = Full 4 = Aneurysmal or bounding pulse Force Also known as the compressibility of the pulse. It is a rough indication of systolic blood pressure. Tension Determined mainly by mean arterial blood pressure, the tension corresponds to diastolic blood pressure. In a low tension pulse (pulsus mollis), the vessel is soft or impalpable between beats. In a high tension pulse (pulsus durus), vessels feel rigid even between pulse beats. Form The form or contour of a pulse is a palpatory estimation of the arteriogram. A quickly rising and quickly falling pulse (pulsus celer) is seen in aortic regurgitation. A slowly rising and slowly falling pulse (pulsus tardus) is seen in aortic stenosis. Equality Comparing pulses at different places gives valuable clinical information. A discrepant or unequal pulse between left and right radial arteries is observed in anomalous or aberrant course of the artery, coarctation of aorta, aortitis, dissecting aneurysm, peripheral embolism etc. An unequal pulse between upper and lower extremities is seen in coarctation of the aorta, aortitis, block at the bifurcation of the aorta, dissection of the aorta, iatrogenic trauma and arteriosclerotic obstruction. Condition of arterial wall A normal artery is not palpable after flattening by digital pressure. A thick radial artery which is palpable 7.5–10 cm up the forearm is suggestive of arteriosclerosis.
Radio-femoral delay In coarctation of the aorta, the femoral pulse may be significantly delayed as compared to the radial pulse (unless there is coexisting aortic regurgitation). The delay can also be observed in supravalvar aortic stenosis. Patterns Several pulse patterns can be of clinical significance. These include: Anacrotic pulse: notch on the upstroke of the carotid pulse. Two distinct waves (slow initial upstroke and delayed peak, which is close to S2). Present in AS. Dicrotic pulse: is characterized by two beats per cardiac cycle, one systolic and the other diastolic. Physiologically, the dicrotic wave is the result of reflected waves from the lower extremities and aorta. Conditions associated with low cardiac output and high systemic vascular resistance can produce a dicrotic pulse. Pulse deficit: difference in the heart rate by direct cardiac auscultation and by palpation of the peripheral arterial pulse rate when in atrial fibrillation (AF). Pulsus alternans: an ominous medical sign that indicates progressive systolic heart failure. To trained fingertips, the examiner notes a pattern of a strong pulse followed by a weak pulse over and over again. This pulse signals a flagging effort of the heart to sustain itself in systole. It also can be detected in HCM with obstruction. Pulsus bigeminus: indicates a pair of hoofbeats within each heartbeat. Concurrent auscultation of the heart may reveal a gallop rhythm of the native heartbeat. Pulsus bisferiens: is characterized by two beats per cardiac cycle, both systolic, unlike the dicrotic pulse. It is an unusual physical finding typically seen in patients with aortic valve diseases if the aortic valve does not open and close normally. Trained fingertips will observe two pulses to each heartbeat instead of one. Pulsus tardus et parvus, also pulsus parvus et tardus, slow-rising pulse and anacrotic pulse, is weak (parvus), and late (tardus) relative to its expected characteristics.
It is caused by a stiffened aortic valve that makes it progressively harder to open, thus requiring increased generation of blood pressure in the left ventricle. It is seen in aortic valve stenosis. Pulsus paradoxus: a condition in which some heartbeats cannot be detected at the radial artery during the inspiration phase of respiration. It is caused by an exaggerated decrease in blood pressure during this phase, and is diagnostic of a variety of cardiac and respiratory conditions of varying urgency, such as cardiac tamponade. Tachycardia: an elevated resting heart rate. In general an electrocardiogram (ECG) is required to identify the type of tachycardia. Pulsatile This description of the pulse implies the intrinsic physiology of systole and diastole. Scientifically, systole and diastole are forces that expand and contract the pulmonary and systemic circulations. A collapsing pulse is a sign of hyperdynamic circulation, which can be seen in AR or PDA. Common palpable sites Sites can be divided into peripheral pulses and central pulses. Central pulses include the carotid, femoral, and brachial pulses. Upper limb Axillary pulse: located inferiorly of the lateral wall of the axilla Brachial pulse: located on the inside of the upper arm near the elbow, frequently used in place of the carotid pulse in infants (brachial artery) Radial pulse: located on the lateral side of the wrist (radial artery). It can also be found in the anatomical snuff box. Commonly, the radial pulse is measured with three fingers. The finger closest to the heart is used to occlude the pulse pressure, the middle finger is used to get a crude estimate of the blood pressure, and the finger most distal to the heart (usually the ring finger) is used to nullify the effect of the ulnar pulse, as the two arteries are connected via the palmar arches (superficial and deep). Ulnar pulse: located on the medial side of the wrist (ulnar artery).
Lower limb Femoral pulse: located in the inner thigh, at the mid-inguinal point, halfway between the pubic symphysis and anterior superior iliac spine (femoral artery). Popliteal pulse: Above the knee in the popliteal fossa, found by holding the bent knee. The patient bends the knee at approximately 124°, and the health care provider holds it in both hands to find the popliteal artery in the pit behind the knee (popliteal artery). Dorsalis pedis pulse: located on top of the foot, immediately lateral to the extensor hallucis longus (dorsalis pedis artery). Tibialis posterior pulse: located on the medial side of the ankle, 2 cm inferior and 2 cm posterior to the medial malleolus (posterior tibial artery). It is easily palpable over Pimenta's Point. Head and neck Carotid pulse: located in the neck (carotid artery). The carotid artery should be palpated gently and while the patient is sitting or lying down. Stimulating its baroreceptors with even light palpation can provoke severe bradycardia or even stop the heart in some sensitive persons. Also, a person's two carotid arteries should not be palpated at the same time. Doing so may limit the flow of blood to the head, possibly leading to fainting or brain ischemia. It can be felt between the anterior border of the sternocleidomastoid muscle, above the hyoid bone and lateral to the thyroid cartilage. Facial pulse: located on the mandible (lower jawbone) on a line with the corners of the mouth (facial artery). Temporal pulse: located on the temple directly in front of the ear (superficial temporal artery). Although the pulse can be felt in multiple places in the head, people should not normally hear their heartbeats within the head. This is called pulsatile tinnitus, and it can indicate several medical disorders. Torso Apical pulse: located in the 5th left intercostal space, 1.25 cm lateral to the mid-clavicular line.
In contrast with other pulse sites, the apical pulse site is unilateral, and measured not under an artery, but below the heart itself (more specifically, the apex of the heart). See also apex beat. History Pulse rate was first measured by ancient Greek physicians and scientists. The first person to measure the heart beat was Herophilus of Alexandria, Egypt (c. 335–280 BC), who designed a water clock to time the pulse. Rumi has mentioned in a poem that "The wise physician measured the patient's pulse and became aware of his condition", showing that the practice was common in Rumi's era and region. The first person to accurately measure the pulse rate was Santorio Santorii, who invented the pulsilogium, a form of pendulum which was later studied by Galileo Galilei. A century later another physician, de Lacroix, used the pulsilogium to test cardiac function. See also Pulse meter Tempo Pulse diagnosis – a practice in various types of traditional medicine Pulse pressure References External links Measure Pulse Online Tap along with your pulse Cardiovascular physiology Mathematics in medicine
Pulse
[ "Mathematics" ]
2,863
[ "Applied mathematics", "Mathematics in medicine" ]
41,605
https://en.wikipedia.org/wiki/Pulse%20duration
In signal processing and telecommunications, pulse duration is the interval between the time, during the first transition, that the amplitude of the pulse reaches a specified fraction (level) of its final amplitude, and the time the pulse amplitude drops, on the last transition, to the same level. The interval between the 50% points of the final amplitude is usually used to determine or define pulse duration, and this is understood to be the case unless otherwise specified. Other fractions of the final amplitude, e.g., 90% or 1/e, may also be used, as may the root mean square (rms) value of the pulse amplitude. In radar, the pulse duration is the time the radar's transmitter is energized during each cycle. References Signal processing Telecommunication theory
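The 50% definition above can be applied directly to sampled data. This sketch assumes a uniformly sampled single pulse and measures the interval between the first and last samples at or above the chosen fraction of the peak amplitude; the function name is invented for illustration.

```python
# Minimal sketch: estimate pulse duration from uniformly sampled amplitudes
# as the interval between the first and last samples at or above a given
# fraction (default 50%) of the pulse's peak amplitude.

def pulse_duration(samples, dt, level=0.5):
    threshold = level * max(samples)
    above = [i for i, a in enumerate(samples) if a >= threshold]
    return (above[-1] - above[0]) * dt

# A trapezoidal pulse sampled at 1 ms intervals:
pulse = [0.0, 0.5, 1.0, 1.0, 1.0, 0.5, 0.0]
print(pulse_duration(pulse, dt=1e-3))  # 0.004 s between the 50% points
```

Other levels, such as 90% or 1/e, only require changing the `level` argument.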
Pulse duration
[ "Technology", "Engineering" ]
159
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
41,613
https://en.wikipedia.org/wiki/Quality%20control
Quality control (QC) is a process by which entities review the quality of all factors involved in production. ISO 9000 defines quality control as "a part of quality management focused on fulfilling quality requirements". This approach places emphasis on three aspects (enshrined in standards such as ISO 9001): Elements such as controls, job management, defined and well managed processes, performance and integrity criteria, and identification of records Competence, such as knowledge, skills, experience, and qualifications Soft elements, such as personnel, integrity, confidence, organizational culture, motivation, team spirit, and quality relationships. Inspection is a major component of quality control, where physical product is examined visually (or the end results of a service are analyzed). Product inspectors will be provided with lists and descriptions of unacceptable product defects, such as cracks or surface blemishes, for example. History and introduction Early stone tools such as anvils had no holes and were not designed as interchangeable parts. Mass production established processes for the creation of parts and systems with identical dimensions and design, but these processes were not uniform, and hence some customers were unsatisfied with the result. Quality control separates the act of testing products to uncover defects from the decision to allow or deny product release, which may be determined by fiscal constraints. For contract work, particularly work awarded by government agencies, quality control issues are among the top reasons for not renewing a contract. The simplest form of quality control was a sketch of the desired item. If the sketch did not match the item, it was rejected, in a simple go/no-go procedure. However, manufacturers soon found it was difficult and costly to make parts exactly like their depiction; hence around 1840 tolerance limits were introduced, wherein a design would function if its parts were measured to be within the limits.
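The tolerance-limit idea amounts to a go/no-go test: a part passes if its measured dimension falls within the design limits. A minimal sketch, with invented example values:

```python
# Illustrative go/no-go check against tolerance limits, in the spirit of the
# 1840s practice described above. The dimensions are invented example values.

def go_no_go(measured_mm, nominal_mm, tolerance_mm):
    lower, upper = nominal_mm - tolerance_mm, nominal_mm + tolerance_mm
    return "go" if lower <= measured_mm <= upper else "no go"

print(go_no_go(10.02, nominal_mm=10.0, tolerance_mm=0.05))  # go
print(go_no_go(10.08, nominal_mm=10.0, tolerance_mm=0.05))  # no go
```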
Quality was thus precisely defined using devices such as plug gauges and ring gauges. However, this did not address the problem of defective items; recycling or disposing of the waste adds to the cost of production, as does trying to reduce the defect rate. Various methods have been proposed to prioritize quality control issues and determine whether to leave them unaddressed or use quality assurance techniques to improve and stabilize production. Notable approaches There is a tendency for individual consultants and organizations to name their own unique approaches to quality control; a few of these have ended up in widespread use. In project management In project management, quality control requires the project manager and/or the project team to inspect the accomplished work to ensure its alignment with the project scope. In practice, projects typically have a dedicated quality control team which focuses on this area. See also Analytical quality control Corrective and preventive action (CAPA) Eight dimensions of quality First article inspection (FAI) Good automated manufacturing practice (GAMP) Good manufacturing practice Quality assurance Quality management framework Standard operating procedure (SOP) QA/QC References Further reading External links ASTM quality control standards Design for X Quality management Applied probability
Quality control
[ "Mathematics", "Engineering" ]
603
[ "Applied mathematics", "Design", "Applied probability", "Design for X" ]
41,615
https://en.wikipedia.org/wiki/Quasi-analog%20signal
In telecommunications, a quasi-analog signal is a digital signal that has been converted to a form suitable for transmission over a specified analog channel. The specification of the analog channel should include frequency range, bandwidth, signal-to-noise ratio, and envelope delay distortion. When quasi-analog form of signaling is used to convey message traffic over dial-up telephone systems, it is often referred to as voice-data. A modem may be used for the conversion process. References Signal processing
Quasi-analog signal
[ "Technology", "Engineering" ]
99
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
41,616
https://en.wikipedia.org/wiki/Queuing%20delay
In telecommunications and computer engineering, the queuing delay is the time a job waits in a queue until it can be executed. It is a key component of network delay. In a switched network, queuing delay is the time between the completion of signaling by the call originator and the arrival of a ringing signal at the call receiver. Queuing delay may be caused by delays at the originating switch, intermediate switches, or the call receiver servicing switch. In a data network, queuing delay is the sum of the delays between the request for service and the establishment of a circuit to the called data terminal equipment (DTE). In a packet-switched network, queuing delay is the sum of the delays encountered by a packet between the time of insertion into the network and the time of delivery to the address. Router processing This term is most often used in reference to routers. When packets arrive at a router, they have to be processed and transmitted. A router can only process one packet at a time. If packets arrive faster than the router can process them (such as in a burst transmission), the router puts them into the queue (also called the buffer) until it can get around to transmitting them. Delay can also vary from packet to packet, so averages and statistics are usually generated when measuring and evaluating queuing delay. As a queue begins to fill up due to traffic arriving faster than it can be processed, the amount of delay a packet experiences going through the queue increases. The speed at which the contents of a queue can be processed is a function of the transmission rate of the facility. This leads to the classic delay curve. The average delay any given packet is likely to experience is given by the formula 1/(μ-λ), where μ is the number of packets per second the facility can sustain and λ is the average rate at which packets are arriving to be serviced. This formula can be used when no packets are dropped from the queue.
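The formula above can be applied directly; this sketch adds only the guard that the arrival rate must stay below the service rate, the condition under which the M/M/1 average-delay formula is valid.

```python
# Direct use of the average-delay formula above: 1/(mu - lam),
# valid when no packets are dropped and lam < mu.

def average_delay(mu, lam):
    if lam >= mu:
        raise ValueError("arrival rate must be below service rate")
    return 1.0 / (mu - lam)

# A router that can service 1000 packets/s receiving 900 packets/s:
print(average_delay(mu=1000, lam=900))  # 0.01 s average delay per packet
```

As λ approaches μ the delay grows without bound, which is the classic delay curve mentioned above: `average_delay(1000, 999)` already gives a full second per packet.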
The maximum queuing delay is proportional to buffer size. The longer the line of packets waiting to be transmitted, the longer the average waiting time is. The router queue of packets waiting to be sent also introduces a potential cause of packet loss. Since the router has a finite amount of buffer memory to hold the queue, a router that receives packets at too high a rate may experience a full queue. In this case, the router has no other option than to simply discard excess packets. When the transmission protocol uses the dropped-packets symptom of filled buffers to regulate its transmit rate, as the Internet's TCP does, bandwidth is fairly shared at near theoretical capacity with minimal network congestion delays. Absent this feedback mechanism, the delays become both unpredictable and rise sharply, a symptom also seen as freeways approach capacity (metered onramps are the most effective solution there, just as TCP's self-regulation is the most effective solution when the traffic is packets instead of cars). This result is both hard to model mathematically and quite counterintuitive to people who lack experience with mathematics or real networks. Failing to drop packets, choosing instead to buffer an ever-increasing number of them, produces bufferbloat. Notation In Kendall's notation, the M/M/1/K queuing model, where K is the size of the buffer, may be used to analyze the queuing delay in a specific system. Kendall's notation should be used to calculate the queuing delay when packets are dropped from the queue. The M/M/1/K queuing model is the most basic and important queuing model for network analysis. See also Broadcast delay Delay encoding End-to-end delay Network latency Little's law – queueing formula Network delay Packet loss Processing delay Queueing theory Transmission delay References Wireless Communications; Theodore S. Rappaport Computer networks engineering Telecommunications engineering Computer engineering Queueing theory
Queuing delay
[ "Technology", "Engineering" ]
796
[ "Electrical engineering", "Telecommunications engineering", "Computer networks engineering", "Computer engineering" ]
41,625
https://en.wikipedia.org/wiki/Radiometry
Radiometry is a set of techniques for measuring electromagnetic radiation, including visible light. Radiometric techniques in optics characterize the distribution of the radiation's power in space, as opposed to photometric techniques, which characterize the light's interaction with the human eye. The fundamental difference between radiometry and photometry is that radiometry covers the entire optical radiation spectrum, while photometry is limited to the visible spectrum. Radiometry is distinct from quantum techniques such as photon counting. The use of radiometers to determine the temperature of objects and gases by measuring radiation flux is called pyrometry. Handheld pyrometer devices are often marketed as infrared thermometers. Radiometry is important in astronomy, especially radio astronomy, and plays a significant role in Earth remote sensing. The measurement techniques categorized as radiometry in optics are called photometry in some astronomical applications, contrary to the optics usage of the term. Spectroradiometry is the measurement of absolute radiometric quantities in narrow bands of wavelength. Radiometric quantities Integral and spectral radiometric quantities Integral quantities (like radiant flux) describe the total effect of radiation of all wavelengths or frequencies, while spectral quantities (like spectral power) describe the effect of radiation of a single wavelength λ or frequency ν. To each integral quantity there are corresponding spectral quantities, defined as the quotient of the integrated quantity by the range of frequency or wavelength considered. For example, the radiant flux Φe corresponds to the spectral fluxes Φe,λ and Φe,ν. Obtaining an integral quantity's spectral counterpart requires a limit transition, because the probability of a photon having exactly a requested wavelength is zero.
Let us show the relation between them using the radiant flux as an example. Integral flux, whose unit is W: Φe = ∫ Φe,λ dλ. Spectral flux by wavelength, whose unit is W/m (commonly quoted in W/nm): Φe,λ = dΦe/dλ, where dΦe is the radiant flux of the radiation in a small wavelength interval [λ, λ + dλ]. The area under a plot with wavelength as the horizontal axis equals the total radiant flux. Spectral flux by frequency, whose unit is W/Hz: Φe,ν = dΦe/dν, where dΦe is the radiant flux of the radiation in a small frequency interval [ν, ν + dν]. The area under a plot with frequency as the horizontal axis equals the total radiant flux. The spectral quantities by wavelength and frequency are related to each other, since the product of the two variables is the speed of light (λν = c): Φe,λ = (c/λ²) Φe,ν, or Φe,ν = (c/ν²) Φe,λ, or λΦe,λ = νΦe,ν. The integral quantity can be obtained by integration of the spectral quantity: Φe = ∫ Φe,λ dλ = ∫ Φe,ν dν. See also Reflectivity Microwave radiometer Measurement of ionizing radiation Radiometric calibration Radiometric resolution References External links Radiometry and photometry FAQ Professor Jim Palmer's Radiometry FAQ page (The University of Arizona College of Optical Sciences). Measurement Optical metrology Telecommunications engineering Observational astronomy Electromagnetic radiation
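The wavelength–frequency conversion of spectral flux can be checked numerically; a minimal sketch with illustrative names:

```python
# Convert spectral radiant flux between per-frequency (W/Hz) and per-wavelength
# (W/m) forms using lambda * nu = c, i.e. Phi_lambda = (c / lambda**2) * Phi_nu.

C = 299_792_458.0  # speed of light in vacuum, m/s

def flux_per_wavelength(phi_nu: float, wavelength_m: float) -> float:
    """Spectral flux in W/m from spectral flux in W/Hz at the given wavelength."""
    return phi_nu * C / wavelength_m**2

def flux_per_frequency(phi_lambda: float, wavelength_m: float) -> float:
    """Spectral flux in W/Hz from spectral flux in W/m at the given wavelength."""
    return phi_lambda * wavelength_m**2 / C

# At 500 nm, 1 W/Hz corresponds to roughly 1.2e21 W/m: per-wavelength values
# are numerically huge because a metre of wavelength spans enormous bandwidth.
print(flux_per_wavelength(1.0, 500e-9))
```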
Radiometry
[ "Physics", "Astronomy", "Mathematics", "Engineering" ]
541
[ "Physical phenomena", "Telecommunications engineering", "Physical quantities", "Electromagnetic radiation", "Quantity", "Observational astronomy", "Measurement", "Size", "Radiation", "Electrical engineering", "Astronomical sub-disciplines", "Radiometry" ]
41,627
https://en.wikipedia.org/wiki/Random%20number
A random number is generated by a random (stochastic) process such as throwing dice. Individual numbers cannot be predicted, but the likely result of generating a large quantity of numbers can be predicted by specific mathematical series and statistics. Algorithms and implementations Random numbers are frequently used in algorithms such as Knuth's 1964 algorithm for shuffling lists (popularly known as the Knuth shuffle or the Fisher–Yates shuffle, based on work Fisher and Yates published in 1938). In 1999, a new feature was added to the Pentium III: a hardware-based random number generator. It has been described as "several oscillators combine their outputs and that odd waveform is sampled asynchronously." These numbers, however, were only 32 bit, at a time when export controls were on 56 bits and higher, so they were not state of the art. Common understanding In common understanding, "1 2 3 4 5" is not as random as "3 5 2 1 4" and certainly not as random as "47 88 1 32 41", but "we can't say authoritatively that the first sequence is not random ... it could have been generated by chance." When a police officer claims to have done a "random .. door-to-door" search, there is a certain expectation that members of a jury will have. Real world consequences Flaws in randomness have real-world consequences. Researchers showed that a generator that was only 99.8% random negatively affected an estimated 27,000 customers of a large service, and that the problem was not limited to that situation. See also Algorithmically random sequence Quasi-random sequence Random number generation Random sequence Random variable Random variate Random real References Permutations
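The Fisher–Yates (Knuth) shuffle mentioned above is short enough to sketch in full; this is the standard algorithm, with Python's random module standing in for the random source:

```python
import random

def fisher_yates_shuffle(items: list, rng=random) -> list:
    """Unbiased in-place shuffle: each of the n! orderings is equally
    likely, provided rng.randrange draws uniformly."""
    for i in range(len(items) - 1, 0, -1):
        j = rng.randrange(i + 1)          # pick uniformly from positions 0..i
        items[i], items[j] = items[j], items[i]
    return items

print(fisher_yates_shuffle(list(range(10))))
```

The loop swaps each position with a uniformly chosen earlier (or same) position; naive variants that draw j from the whole range on every step are subtly biased.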
Random number
[ "Mathematics" ]
357
[ "Functions and mappings", "Permutations", "Mathematical objects", "Combinatorics", "Mathematical relations" ]
41,628
https://en.wikipedia.org/wiki/Receive-after-transmit%20time%20delay
In telecommunications, receive-after-transmit time delay is the time interval between (a) the instant of keying off the local transmitter to stop transmitting and (b) the instant the local receiver output has increased to 90% of its steady-state value in response to an RF signal from another transmitter. The RF signal from the distant transmitter must exist at the local receiver input prior to, or at the time of, keying off the local transmitter. Receive-after-transmit time delay applies only to half-duplex operation. See also Transmit-after-receive time delay Attack-time delay References Telecommunications engineering Radio technology
Receive-after-transmit time delay
[ "Technology", "Engineering" ]
127
[ "Information and communications technology", "Electrical engineering", "Telecommunications engineering", "Radio technology" ]
41,630
https://en.wikipedia.org/wiki/Attack-time%20delay
In telecommunications, attack-time delay is the time needed for a receiver or transmitter to respond to an incoming signal. For a receiver, the attack-time delay is defined as the time interval from the instant a step radio-frequency signal, at a level equal to the receiver's threshold of sensitivity, is applied to the receiver input, to the instant when the receiver's output amplitude reaches 90% of its steady-state value. If a squelch circuit is operating, the receiver attack-time delay includes the time for the receiver to break squelch. For a transmitter, the attack-time delay is defined as the interval from the instant the transmitter is keyed-on to the instant the transmitted radio-frequency signal amplitude has increased to a specified level, usually 90% of its key-on steady-state value. The transmitter attack-time delay excludes the time required for automatic antenna tuning. See also Transmit-after-receive time delay Receive-after-transmit time delay References Telecommunications engineering Radio technology
Attack-time delay
[ "Technology", "Engineering" ]
208
[ "Information and communications technology", "Electrical engineering", "Telecommunications engineering", "Radio technology" ]
41,635
https://en.wikipedia.org/wiki/Recovery%20procedure
In telecommunications, a recovery procedure is a process that attempts to bring a system back to a normal operating state. Examples: The actions necessary to restore an automated information system's data files and computational capability after a system failure. In data communications, a process whereby a data station attempts to resolve conflicting or erroneous conditions arising during the data transfer. See also Error detection and correction Fault-tolerant design Fault-tolerant system References Telecommunications techniques Fault tolerance
Recovery procedure
[ "Engineering" ]
91
[ "Reliability engineering", "Fault tolerance" ]
41,636
https://en.wikipedia.org/wiki/Reference%20circuit
A reference circuit is a hypothetical electric circuit of specified equivalent length and configuration, and having a defined transmission characteristic or characteristics, used primarily as a reference for measuring the performance of other, i.e., real, circuits or as a guide for planning and engineering of circuits and networks. Normally, several types of reference circuits are defined, with different configurations, because communications are required over a wide range of distances. Another type of reference circuit shows how to configure integrated circuits into function blocks, which Analog Devices provides for electrical design engineers. Analog Devices' Circuits from the Lab reference circuits are fully tested and come with the schematics, evaluation boards, and device drivers necessary for system integration. A group of related reference circuits is also called a reference system. References See also netlist SPICE Electronic circuits
Reference circuit
[ "Engineering" ]
163
[ "Electronic engineering", "Electronic circuits" ]
41,639
https://en.wikipedia.org/wiki/Reference%20noise
In telecommunications, reference noise is the magnitude of circuit noise chosen as a reference for measurement. Many different levels with a number of different weightings are in current use, and care must be taken to ensure that the proper parameters are stated. Specific ones include: dBa, dBa(F1A), dBa(HA1), dBa0, dBm, dBm(psoph), dBm0, dBrn, dBrnC, dBrnC0, dBrn(f1-f2), dBrn(144-line), dBx. References Noise (electronics) Communication circuits Telecommunications techniques
Reference noise
[ "Engineering" ]
130
[ "Telecommunications engineering", "Communication circuits" ]
41,641
https://en.wikipedia.org/wiki/Reflection%20coefficient
In physics and electrical engineering the reflection coefficient is a parameter that describes how much of a wave is reflected by an impedance discontinuity in the transmission medium. It is equal to the ratio of the amplitude of the reflected wave to the incident wave, with each expressed as phasors. For example, it is used in optics to calculate the amount of light that is reflected from a surface with a different index of refraction, such as a glass surface, or in an electrical transmission line to calculate how much of the electromagnetic wave is reflected by an impedance discontinuity. The reflection coefficient is closely related to the transmission coefficient. The reflectance of a system is also sometimes called a reflection coefficient. Different specialties have different applications for the term. Transmission lines In telecommunications and transmission line theory, the reflection coefficient is the ratio of the complex amplitude of the reflected wave to that of the incident wave. The voltage and current at any point along a transmission line can always be resolved into forward and reflected traveling waves given a specified reference impedance Z0. The reference impedance used is typically the characteristic impedance of a transmission line that's involved, but one can speak of reflection coefficient without any actual transmission line being present. In terms of the forward and reflected waves determined by the voltage and current, the reflection coefficient is defined as the complex ratio of the voltage of the reflected wave () to that of the incident wave (). 
This is typically represented with a Γ (capital gamma) and can be written as: Γ = V− / V+. It can as well be defined using the currents associated with the reflected and forward waves, but introducing a minus sign to account for the opposite orientations of the two currents: Γ = −I− / I+. The reflection coefficient may also be established using other field or circuit pairs of quantities whose product defines power resolvable into a forward and reverse wave. For instance, with electromagnetic plane waves, one uses the ratio of the electric fields of the reflected to that of the forward wave (or magnetic fields, again with a minus sign); the ratio of each wave's electric field E to its magnetic field H is again an impedance Z0 (equal to the impedance of free space in a vacuum). Similarly in acoustics one uses the acoustic pressure and velocity respectively. In the accompanying figure, a signal source with internal impedance ZS, possibly followed by a transmission line of characteristic impedance Z0, is represented by its Thévenin equivalent, driving the load ZL. For a real (resistive) source impedance ZS, if we define Γ using the reference impedance Z0 = ZS, then the source's maximum power is delivered to a load ZL = ZS, in which case Γ = 0, implying no reflected power. More generally, the squared magnitude |Γ|² of the reflection coefficient denotes the proportion of that power that is reflected back to the source, with the power actually delivered toward the load being the fraction 1 − |Γ|². Anywhere along an intervening (lossless) transmission line of characteristic impedance Z0, the magnitude of the reflection coefficient will remain the same (the powers of the forward and reflected waves stay the same) but with a different phase. In the case of a short circuited load (ZL = 0), one finds Γ = −1 at the load. This implies the reflected wave having a 180° phase shift (phase reversal) with the voltages of the two waves being opposite at that point and adding to zero (as a short circuit demands).
Relation to load impedance The reflection coefficient is determined by the load impedance at the end of the transmission line, as well as the characteristic impedance of the line. A load impedance of ZL terminating a line with a characteristic impedance of Z0 will have a reflection coefficient of Γ = (ZL − Z0) / (ZL + Z0). This is the coefficient at the load. The reflection coefficient can also be measured at other points on the line. The magnitude of the reflection coefficient in a lossless transmission line is constant along the line (as are the powers in the forward and reflected waves). However its phase will be shifted by an amount dependent on the electrical distance from the load. If the coefficient is measured at a point l meters from the load, so the electrical distance from the load is βl radians, the coefficient at that point will be Γ′ = Γ e^(−2jβl). Note that the phase of the reflection coefficient is changed by twice the phase length of the attached transmission line. That is to take into account not only the phase delay of the reflected wave, but the phase shift that had first been applied to the forward wave, with the reflection coefficient being the quotient of these. The reflection coefficient so measured, Γ′, corresponds to an impedance which is generally dissimilar to ZL present at the far side of the transmission line. The complex reflection coefficient (in the region |Γ| ≤ 1, corresponding to passive loads) may be displayed graphically using a Smith chart. The Smith chart is a polar plot of Γ, therefore the magnitude of Γ is given directly by the distance of a point to the center (with the edge of the Smith chart corresponding to |Γ| = 1). Its evolution along a transmission line is likewise described by a rotation around the chart's center. Using the scales on a Smith chart, the resulting impedance (normalized to Z0) can directly be read. Before the advent of modern electronic computers, the Smith chart was of particular use as a sort of analog computer for this purpose.
Standing wave ratio The standing wave ratio (SWR) is determined solely by the magnitude of the reflection coefficient: SWR = (1 + |Γ|) / (1 − |Γ|). Along a lossless transmission line of characteristic impedance Z0, the SWR signifies the ratio of the voltage (or current) maxima to minima (or what it would be if the transmission line were long enough to produce them). The above calculation assumes that Γ has been calculated using Z0 as the reference impedance. Since it uses only the magnitude of Γ, the SWR intentionally ignores the specific value of the load impedance ZL responsible for it, reflecting only the magnitude of the resulting impedance mismatch. That SWR remains the same wherever measured along a transmission line (looking towards the load) since the addition of a transmission line length to a load only changes the phase, not the magnitude, of Γ. While having a one-to-one correspondence with reflection coefficient, SWR is the most commonly used figure of merit in describing the mismatch affecting a radio antenna or antenna system. It is most often measured at the transmitter side of a transmission line, but has, as explained, the same value as would be measured at the antenna (load) itself. Seismology The reflection coefficient is used in feeder testing for reliability of the medium. Optics and microwaves In optics and electromagnetics in general, reflection coefficient can refer to either the amplitude reflection coefficient described here, or the reflectance, depending on context. Typically, the reflectance is represented by a capital R, while the amplitude reflection coefficient is represented by a lower-case r. These related concepts are covered by Fresnel equations in classical optics. Acoustics Acousticians use reflection coefficients to understand the effect of different materials on their acoustic environments.
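The load-impedance and SWR relations in this section can be sketched numerically (illustrative function names, 50 Ω reference impedance):

```python
def reflection_coefficient(z_load: complex, z0: complex = 50.0) -> complex:
    """Gamma = (ZL - Z0) / (ZL + Z0), evaluated at the load."""
    return (z_load - z0) / (z_load + z0)

def standing_wave_ratio(gamma: complex) -> float:
    """SWR = (1 + |Gamma|) / (1 - |Gamma|); infinite for total reflection."""
    mag = abs(gamma)
    return float("inf") if mag >= 1.0 else (1.0 + mag) / (1.0 - mag)

# Matched, short-circuit, and mismatched loads on a 50-ohm line:
for z in (50.0, 0.0, 100.0, 25 + 25j):
    g = reflection_coefficient(z)
    print(f"ZL = {z}: |Gamma| = {abs(g):.3f}, SWR = {standing_wave_ratio(g):.2f}")
```

A matched 50 Ω load gives Γ = 0 and SWR = 1, a short gives Γ = −1 (total reflection, infinite SWR), and a 100 Ω load gives Γ = 1/3 and SWR = 2.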
See also Microwave Mismatch loss Reflections of signals on conducting lines Scattering parameters Transmission coefficient Target strength Hagen–Rubens relation Reflection phase change References Figure 8-2 and Eqn. 8-1 Pg. 279 External links Flash tutorial for understanding reflection A flash program that shows how a reflected wave is generated, the reflection coefficient and VSWR Application for drawing standing wave diagrams including the reflection coefficient, input impedance, SWR, etc. Geometrical optics Electronic engineering Physical optics Seismology measurement Telecommunication theory Dimensionless numbers of physics
Reflection coefficient
[ "Technology", "Engineering" ]
1,499
[ "Electrical engineering", "Electronic engineering", "Computer engineering" ]
41,644
https://en.wikipedia.org/wiki/Reflectance
The reflectance of the surface of a material is its effectiveness in reflecting radiant energy. It is the fraction of incident electromagnetic power that is reflected at the boundary. Reflectance is a component of the response of the electronic structure of the material to the electromagnetic field of light, and is in general a function of the frequency, or wavelength, of the light, its polarization, and the angle of incidence. The dependence of reflectance on the wavelength is called a reflectance spectrum or spectral reflectance curve. Mathematical definitions Hemispherical reflectance The hemispherical reflectance of a surface, denoted R, is defined as R = Φe^r / Φe^i, where Φe^r is the radiant flux reflected by that surface and Φe^i is the radiant flux received by that surface. Spectral hemispherical reflectance The spectral hemispherical reflectance in frequency and spectral hemispherical reflectance in wavelength of a surface, denoted Rν and Rλ respectively, are defined as Rν = Φe,ν^r / Φe,ν^i and Rλ = Φe,λ^r / Φe,λ^i, where Φe,ν^r is the spectral radiant flux in frequency reflected by that surface; Φe,ν^i is the spectral radiant flux in frequency received by that surface; Φe,λ^r is the spectral radiant flux in wavelength reflected by that surface; Φe,λ^i is the spectral radiant flux in wavelength received by that surface. Directional reflectance The directional reflectance of a surface, denoted RΩ, is defined as RΩ = Le,Ω^r / Le,Ω^i, where Le,Ω^r is the radiance reflected by that surface and Le,Ω^i is the radiance received by that surface. This depends on both the reflected direction and the incoming direction. In other words, it has a value for every combination of incoming and outgoing directions. It is related to the bidirectional reflectance distribution function and its upper limit is 1. Another measure of reflectance, depending only on the outgoing direction, is I/F, where I is the radiance reflected in a given direction and F is the incoming radiance averaged over all directions, in other words, the total flux of radiation hitting the surface per unit area, divided by π.
This can be greater than 1 for a glossy surface illuminated by a source such as the sun, with the reflectance measured in the direction of maximum radiance (see also Seeliger effect). Spectral directional reflectance The spectral directional reflectance in frequency and spectral directional reflectance in wavelength of a surface, denoted RΩ,ν and RΩ,λ respectively, are defined as RΩ,ν = Le,Ω,ν^r / Le,Ω,ν^i and RΩ,λ = Le,Ω,λ^r / Le,Ω,λ^i, where Le,Ω,ν^r is the spectral radiance in frequency reflected by that surface; Le,Ω,ν^i is the spectral radiance in frequency received by that surface; Le,Ω,λ^r is the spectral radiance in wavelength reflected by that surface; Le,Ω,λ^i is the spectral radiance in wavelength received by that surface. Again, one can also define a value of I/F (see above) for a given wavelength. Reflectivity For homogeneous and semi-infinite (see halfspace) materials, reflectivity is the same as reflectance. Reflectivity is the square of the magnitude of the Fresnel reflection coefficient, which is the ratio of the reflected to incident electric field; as such the reflection coefficient can be expressed as a complex number as determined by the Fresnel equations for a single layer, whereas the reflectance is always a positive real number. For layered and finite media, according to the CIE, reflectivity is distinguished from reflectance by the fact that reflectivity is a value that applies to thick reflecting objects. When reflection occurs from thin layers of material, internal reflection effects can cause the reflectance to vary with surface thickness. Reflectivity is the limit value of reflectance as the sample becomes thick; it is the intrinsic reflectance of the surface, hence irrespective of other parameters such as the reflectance of the rear surface. Another way to interpret this is that the reflectance is the fraction of electromagnetic power reflected from a specific sample, while reflectivity is a property of the material itself, which would be measured on a perfect machine if the material filled half of all space.
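At normal incidence the Fresnel relation between the amplitude reflection coefficient and the reflectivity reduces to a one-liner; a sketch with illustrative names, assuming lossless (real-index) media:

```python
def fresnel_normal_incidence(n1: float, n2: float) -> tuple[float, float]:
    """Amplitude reflection coefficient r and reflectivity R = r**2 for light
    passing from index n1 into index n2 at normal incidence."""
    r = (n1 - n2) / (n1 + n2)
    return r, r * r

# Air (n = 1.0) to glass (n = 1.5): r = -0.2, so about 4% of the power reflects;
# the minus sign is the phase reversal at a boundary with a denser medium.
r, big_r = fresnel_normal_incidence(1.0, 1.5)
print(f"r = {r:+.2f}, R = {big_r:.1%}")
```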
Surface type Given that reflectance is a directional property, most surfaces can be divided into those that give specular reflection and those that give diffuse reflection. For specular surfaces, such as glass or polished metal, reflectance is nearly zero at all angles except at the appropriate reflected angle; that is the same angle with respect to the surface normal in the plane of incidence, but on the opposing side. When the radiation is incident normal to the surface, it is reflected back into the same direction. For diffuse surfaces, such as matte white paint, reflectance is uniform; radiation is reflected in all angles equally or near-equally. Such surfaces are said to be Lambertian. Most practical objects exhibit a combination of diffuse and specular reflective properties. Water reflectance Reflection occurs when light moves from a medium with one index of refraction into a second medium with a different index of refraction. Specular reflection from a body of water is calculated by the Fresnel equations (Ottaviani, M., Stamnes, K., Koskulics, J., Eide, H., Long, S.R., Su, W., and Wiscombe, W., 2008: "Light Reflection from Water Waves: Suitable Setup for a Polarimetric Investigation under Controlled Laboratory Conditions", Journal of Atmospheric and Oceanic Technology, 25 (5), 715–728). Fresnel reflection is directional and therefore does not contribute significantly to albedo, which is primarily due to diffuse reflection. A real water surface may be wavy. Reflectance, which assumes a flat surface as given by the Fresnel equations, can be adjusted to account for waviness. Grating efficiency The generalization of reflectance to a diffraction grating, which disperses light by wavelength, is called diffraction efficiency.
Other radiometric coefficients See also Bidirectional reflectance distribution function Colorimetry Emissivity Lambert's cosine law Transmittance Sun path Light Reflectance Value Albedo Reststrahlen effect Lyddane–Sachs–Teller relation References External links Reflectivity of metals . Reflectance Data. Physical quantities Radiometry Dimensionless numbers of physics
Reflectance
[ "Physics", "Mathematics", "Engineering" ]
1,222
[ "Physical phenomena", "Telecommunications engineering", "Physical quantities", "Quantity", "Physical properties", "Radiometry" ]
41,646
https://en.wikipedia.org/wiki/Refractive%20index%20contrast
Refractive index contrast, in an optical waveguide, such as an optical fiber, is a measure of the relative difference in refractive index of the core and cladding. The refractive index contrast, Δ, is often given by Δ = (n1² − n2²) / (2n1²), where n1 is the maximum refractive index in the core (or simply the core index for a step-index profile) and n2 is the refractive index of the cladding. The criterion n2 < n1 must be satisfied in order to sustain a guided mode by total internal reflection. Alternative formulations, such as the simple fractional index difference (n1 − n2)/n1, are also in use. Normal optical fibers, constructed of different glasses, have very low refractive index contrast (Δ ≪ 1) and hence are weakly guiding. The weak guiding causes a greater portion of the cross-sectional electric field profile to reside within the cladding (as evanescent tails of the guided mode) as compared to strongly guided waveguides. Integrated optics can make use of a higher core index to obtain a large Δ, allowing light to be efficiently guided around corners on the micro-scale; a popular high-Δ material platform is silicon-on-insulator. High Δ allows sub-wavelength core dimensions and so greater control over the size of the evanescent tails. The most efficient low-loss optical fibers require low Δ to minimise losses to light scattered outwards. References Fiber optics Refraction
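The contrast formula can be sketched directly; the silica and silicon index values below are typical textbook figures, not taken from this article:

```python
def refractive_index_contrast(n_core: float, n_clad: float) -> float:
    """Delta = (n1**2 - n2**2) / (2 * n1**2); guiding needs n_clad < n_core."""
    if n_clad >= n_core:
        raise ValueError("total internal reflection requires n_clad < n_core")
    return (n_core**2 - n_clad**2) / (2 * n_core**2)

# Weakly guiding silica fiber vs. a high-contrast silicon-on-insulator guide:
print(f"silica fiber: Delta = {refractive_index_contrast(1.4504, 1.4447):.4f}")
print(f"SOI:          Delta = {refractive_index_contrast(3.48, 1.444):.3f}")
```

The silica case lands near Δ ≈ 0.004 (weakly guiding), while silicon-on-insulator gives Δ on the order of 0.4, two orders of magnitude higher.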
Refractive index contrast
[ "Physics" ]
280
[ "Optical phenomena", "Physical phenomena", "Refraction" ]
41,649
https://en.wikipedia.org/wiki/Relative%20transmission%20level
In telecommunications, relative transmission level is the ratio of the signal power, at a given point in a transmission system, to a reference signal power. The ratio is usually determined by applying a standard test tone at zero transmission level point (or applying adjusted test tone power at any other point) and measuring the gain or loss to the location of interest. A distinction should be made between the standard test tone power and the expected median power of the actual signal required as the basis for the design of transmission systems. Radio frequency propagation
Relative transmission level
[ "Physics" ]
105
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
41,655
https://en.wikipedia.org/wiki/Repeater
In telecommunications, a repeater is an electronic device that receives a signal and retransmits it. Repeaters are used to extend transmissions so that the signal can cover longer distances or be received on the other side of an obstruction. Some types of repeaters broadcast an identical signal, but alter its method of transmission, for example, on another frequency or baud rate. There are several different types of repeaters; a telephone repeater is an amplifier in a telephone line, an optical repeater is an optoelectronic circuit that amplifies the light beam in an optical fiber cable; and a radio repeater is a radio receiver and transmitter that retransmits a radio signal. A broadcast relay station is a repeater used in broadcast radio and television. Overview When an information-bearing signal passes through a communication channel, it is progressively degraded due to loss of power. For example, when a telephone call passes through a wire telephone line, some of the power in the electric current which represents the audio signal is dissipated as heat in the resistance of the copper wire. The longer the wire, the more power is lost, and the smaller the amplitude of the signal at the far end. So with a long enough wire the call will not be audible at the other end. Similarly, the greater the distance between a radio station and a receiver, the weaker the radio signal, and the poorer the reception. A repeater is an electronic device in a communication channel that increases the power of a signal and retransmits it, allowing it to travel further. Since it amplifies the signal, it requires a source of electric power. The term "repeater" originated with telegraphy in the 19th century, and referred to an electromechanical device (a relay) used to regenerate telegraph signals. Use of the term has continued in telephony and data communications. 
In computer networking, because repeaters work with the actual physical signal, and do not attempt to interpret the data being transmitted, they operate on the physical layer, the first layer of the OSI model; a multiport Ethernet repeater is usually called a hub. Types Telephone repeater This is used to increase the range of telephone signals in a telephone line. Land line repeater They are most frequently used in trunklines that carry long distance calls. In an analog telephone line consisting of a pair of wires, it consists of an amplifier circuit made of transistors which use power from a DC current source to increase the power of the alternating current audio signal on the line. Since the telephone is a duplex (bidirectional) communication system, the wire pair carries two audio signals, one going in each direction. So telephone repeaters have to be bilateral, amplifying the signal in both directions without causing feedback, which complicates their design considerably. Telephone repeaters were the first type of repeater and were some of the first applications of amplification. The development of telephone repeaters between 1900 and 1915 made long-distance phone service possible. Now, most telecommunications cables are fiber-optic cables which use optical repeaters (below). Before the invention of electronic amplifiers, mechanically coupled carbon microphones were used as amplifiers in telephone repeaters. After the turn of the 20th century it was found that negative resistance mercury lamps could amplify, and they were used. The invention of audion tube repeaters around 1916 made transcontinental telephony practical. In the 1930s vacuum tube repeaters using hybrid coils became commonplace, allowing the use of thinner wires. 
In the 1950s negative impedance gain devices were more popular, and a transistorized version called the E6 repeater was the final major type used in the Bell System before the low cost of digital transmission made all voiceband repeaters obsolete. Frequency frogging repeaters were commonplace in frequency-division multiplexing systems from the middle to late 20th century. Submarine cable repeater This is a type of telephone repeater used in underwater submarine telecommunications cables. Optical communications repeater This is used to increase the range of signals in a fiber-optic cable. Digital information travels through a fiber-optic cable in the form of short pulses of light. The light is made up of particles called photons, which can be absorbed or scattered in the fiber. An optical communications repeater usually consists of a phototransistor which converts the light pulses to an electrical signal, an amplifier to increase the power of the signal, an electronic filter which reshapes the pulses, and a laser which converts the electrical signal to light again and sends it out the other fiber. However, optical amplifiers are being developed for repeaters to amplify the light itself without the need of converting it to an electric signal first. Radio repeater This is used to extend the range of coverage of a radio signal. The history of radio relay repeaters began in 1898 from the publication by Johann Mattausch in Austrian Journal Zeitschrift für Electrotechnik (v. 16, 35 - 36). But his proposal "Translator" was primitive and not suitable for use. The first relay system with radio repeaters, which really functioned, was that invented in 1899 by Emile Guarini-Foresio. A radio repeater usually consists of a radio receiver connected to a radio transmitter. The received signal is amplified and retransmitted, often on another frequency, to provide coverage beyond the obstruction. 
Usage of a duplexer can allow the repeater to use one antenna for both receive and transmit at the same time. Broadcast relay station, rebroadcaster or translator: This is a repeater used to extend the coverage of a radio or television broadcasting station. It consists of a secondary radio or television transmitter. The signal from the main transmitter often comes over leased telephone lines or by microwave relay. Microwave relay: This is a specialized point-to-point telecommunications link, consisting of a microwave receiver that receives information over a beam of microwaves from another relay station in line-of-sight distance, and a microwave transmitter which passes the information on to the next station over another beam of microwaves. Networks of microwave relay stations transmit telephone calls, television programs, and computer data from one city to another over continent-wide areas. Passive repeater: This is a microwave relay that simply consists of a flat metal surface to reflect the microwave beam in another direction. It is used to get microwave relay signals over hills and mountains when it is not necessary to amplify the signal. Cellular repeater: This is a radio repeater for boosting cell phone reception in a limited area. The device functions like a small cellular base station, with a directional antenna to receive the signal from the nearest cell tower, an amplifier, and a local antenna to rebroadcast the signal to nearby cell phones. It is often used in downtown office buildings. Digipeater: A repeater node in a packet radio network. It performs a store and forward function, passing on packets of information from one node to another. Amateur radio repeater: Used by amateur radio operators to enable two-way communication across an area which would otherwise be difficult by point-to-point on VHF and UHF. These repeaters are set up and maintained by individual operators or clubs, and are generally available for any licensed amateur to use. 
A hill or mountaintop location is a preferable location to construct a repeater, as it will maximize the usability across a large area. Radio repeaters improve communication coverage in systems using frequencies that typically have line-of-sight propagation. Without a repeater, these systems are limited in range by the curvature of the Earth and the blocking effect of terrain or high buildings. A repeater on a hilltop or tall building can allow stations that are out of each other's line-of-sight range to communicate reliably. Radio repeaters may also allow translation from one set of radio frequencies to another, for example to allow two different public service agencies to interoperate (say, police and fire services of a city, or neighboring police departments). They may provide links to the public switched telephone network as well, or satellite network (BGAN, INMARSAT, MSAT) as an alternative path from source to the destination. Typically a repeater station listens on one frequency, A, and transmits on a second, B. All mobile stations listen for signals on channel B and transmit on channel A. The difference between the two frequencies may be relatively small compared to the frequency of operation, say 1%. Often the repeater station will use the same antenna for transmission and reception; highly selective filters called "duplexers" separate the faint incoming received signal from the billions of times more powerful outbound transmitted signal. Sometimes separate transmitting and receiving locations are used, connected by a wire line or a radio link. While the repeater station is designed for simultaneous reception and transmission, mobile units need not be equipped with the bulky and costly duplexers, as they only transmit or receive at any time. Mobile units in a repeater system may be provided with a "talkaround" channel that allows direct mobile-to-mobile operation on a single channel. 
This may be used if out of reach of the repeater system, or for communications not requiring the attention of all mobiles. The "talkaround" channel may be the repeater output frequency; the repeater will not retransmit any signals on its output frequency. The designer of an engineered radio communication system will analyze the coverage area desired and select repeater locations, elevations, antennas, operating frequencies and power levels to permit a predictable level of reliable communication over the designed coverage area. Data handling Repeaters can be divided into two types depending on the type of data they handle: Analog repeater This type is used in channels that transmit data in the form of an analog signal in which the voltage or current is proportional to the amplitude of the signal, as in an audio signal. They are also used in trunklines that transmit multiple signals using frequency division multiplexing (FDM). Analog repeaters are composed of a linear amplifier, and may include electronic filters to compensate for frequency and phase distortion in the line. Digital repeater The digital repeater is used in channels that transmit data by binary digital signals, in which the data is in the form of pulses with only two possible values, representing the binary digits 1 and 0. A digital repeater amplifies the signal, and it also may retime, resynchronize, and reshape the pulses. A repeater that performs the retiming or resynchronizing functions may be called a regenerator. 
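The regeneration idea described above — re-decide each bit, then re-emit clean pulses — can be sketched in a few lines. This is an illustrative toy model, not from the article: the function name, the sampling scheme (one decision per bit at the bit centre), and the threshold value are all assumptions.

```python
# Toy sketch of a digital repeater's regenerator: sample a noisy,
# attenuated pulse train at the centre of each bit interval, threshold
# to re-decide the bit, and re-emit ideal full-amplitude pulses.
import random

def regenerate(samples, samples_per_bit, threshold=0.5):
    """Re-decide each bit from the sample at the centre of its interval,
    then re-emit clean pulses (the "reshape" and "retime" functions)."""
    bits = []
    for i in range(0, len(samples), samples_per_bit):
        centre = samples[i + samples_per_bit // 2]
        bits.append(1 if centre > threshold else 0)
    # Re-emit ideal pulses: amplitude 1.0 for a "1", 0.0 for a "0".
    return [float(b) for b in bits for _ in range(samples_per_bit)]

# A noisy, attenuated version of the bit pattern 1 0 1 1.
random.seed(0)
spb = 4
tx_bits = [1, 0, 1, 1]
noisy = [0.6 * b + random.uniform(-0.1, 0.1)
         for b in tx_bits for _ in range(spb)]
clean = regenerate(noisy, spb, threshold=0.3)
recovered = [int(clean[i]) for i in range(0, len(clean), spb)]
```

Because the decision is made before noise accumulates past the threshold margin, a chain of such regenerators does not compound noise the way a chain of analog amplifiers does.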
See also 12-channel carrier system ADSL loop extender Complementary ground component Fiber media converter PLC carrier repeating station Relay (disambiguation) Repeater insertion in integrated circuits Signal strength Transponder Wireless distribution system Wireless repeater References External links The Bell system technical journal: Repeaters and Equalizers for the SD Submarine Cable System Amateur Radio Repeaters in India Amateur Radio Repeaters in Europe Telecommunications equipment Mobile technology Radio electronics Physical layer protocols
Repeater
[ "Technology", "Engineering" ]
2,260
[ "Radio electronics", "nan" ]
41,658
https://en.wikipedia.org/wiki/Reradiation
In telecommunications, the term reradiation has the following meanings:
Electromagnetic radiation, at the same or different wavelengths, i.e., frequencies, of energy received from an incident wave.
Undesirable radiation of signals locally generated in a radio receiver. This type of radiation might cause interference or reveal the location of the device.
Near-field effects of an AM antenna may extend outward a considerable distance. Cellular and microwave towers within this radius can reflect the medium wave AM signal at a phase which cancels the main medium wave AM signal. This process results in an interfering signal called reradiation. Radio communications
Reradiation
[ "Engineering" ]
125
[ "Telecommunications engineering", "Radio communications" ]
41,660
https://en.wikipedia.org/wiki/Resonance
Resonance is a phenomenon that occurs when an object or system is subjected to an external force or vibration that matches its natural frequency. When this happens, the object or system absorbs energy from the external force and starts vibrating with a larger amplitude. Resonance can occur in various systems, such as mechanical, electrical, or acoustic systems, and it is often desirable in certain applications, such as musical instruments or radio receivers. However, resonance can also be detrimental, leading to excessive vibrations or even structural failure in some cases. All systems, including molecular systems and particles, tend to vibrate at a natural frequency depending upon their structure; this frequency is known as a resonant frequency or resonance frequency. When an oscillating force, an external vibration, is applied at a resonant frequency of a dynamic system, object, or particle, the outside vibration will cause the system to oscillate at a higher amplitude (with more force) than when the same force is applied at other, non-resonant frequencies. The resonant frequencies of a system can be identified when the response to an external vibration creates an amplitude that is a relative maximum within the system. Small periodic forces that are near a resonant frequency of the system have the ability to produce large amplitude oscillations in the system due to the storage of vibrational energy. Resonance phenomena occur with all types of vibrations or waves: there is mechanical resonance, orbital resonance, acoustic resonance, electromagnetic resonance, nuclear magnetic resonance (NMR), electron spin resonance (ESR) and resonance of quantum wave functions. Resonant systems can be used to generate vibrations of a specific frequency (e.g., musical instruments), or pick out specific frequencies from a complex vibration containing many frequencies (e.g., filters). 
The term resonance (from Latin resonantia, 'echo', from resonare, 'resound') originated from the field of acoustics, particularly the sympathetic resonance observed in musical instruments, e.g., when one string starts to vibrate and produce sound after a different one is struck. Overview Resonance occurs when a system is able to store and easily transfer energy between two or more different storage modes (such as kinetic energy and potential energy in the case of a simple pendulum). However, there are some losses from cycle to cycle, called damping. When damping is small, the resonant frequency is approximately equal to the natural frequency of the system, which is a frequency of unforced vibrations. Some systems have multiple and distinct resonant frequencies. Examples A familiar example is a playground swing, which acts as a pendulum. Pushing a person in a swing in time with the natural interval of the swing (its resonant frequency) makes the swing go higher and higher (maximum amplitude), while attempts to push the swing at a faster or slower tempo produce smaller arcs. This is because the energy the swing absorbs is maximized when the pushes match the swing's natural oscillations. Resonance occurs widely in nature, and is exploited in many devices. It is the mechanism by which virtually all sinusoidal waves and vibrations are generated. For example, when hard objects like metal, glass, or wood are struck, there are brief resonant vibrations in the object. Light and other short wavelength electromagnetic radiation is produced by resonance on an atomic scale, such as electrons in atoms. 
Other examples of resonance include:
Timekeeping mechanisms of modern clocks and watches, e.g., the balance wheel in a mechanical watch and the quartz crystal in a quartz watch
Tidal resonance of the Bay of Fundy
Acoustic resonances of musical instruments and the human vocal tract
Shattering of a crystal wineglass when exposed to a musical tone of the right pitch (its resonant frequency)
Friction idiophones, such as making a glass object (glass, bottle, vase) vibrate by rubbing around its rim with a fingertip
Electrical resonance of tuned circuits in radios and TVs that allow radio frequencies to be selectively received
Creation of coherent light by optical resonance in a laser cavity
Orbital resonance as exemplified by some moons of the Solar System's giant planets and resonant groups such as the plutinos
Material resonances in atomic scale, which are the basis of several spectroscopic techniques that are used in condensed matter physics:
Electron spin resonance
Mössbauer effect
Nuclear magnetic resonance
Linear systems Resonance manifests itself in many linear and nonlinear systems as oscillations around an equilibrium point. When the system is driven by a sinusoidal external input, a measured output of the system may oscillate in response. The ratio of the amplitude of the output's steady-state oscillations to the input's oscillations is called the gain, and the gain can be a function of the frequency of the sinusoidal external input. Peaks in the gain at certain frequencies correspond to resonances, where the amplitude of the measured output's oscillations are disproportionately large. Since many linear and nonlinear systems that oscillate are modeled as harmonic oscillators near their equilibria, a derivation of the resonant frequency for a driven, damped harmonic oscillator is shown. An RLC circuit is used to illustrate connections between resonance and a system's transfer function, frequency response, poles, and zeroes. 
Building off the RLC circuit example, these connections for higher-order linear systems with multiple inputs and outputs are generalized. The driven, damped harmonic oscillator Consider a damped mass on a spring driven by a sinusoidal, externally applied force. Newton's second law takes the form \(m\frac{d^2x}{dt^2} = F_0\sin(\omega t) - kx - c\frac{dx}{dt},\) where m is the mass, x is the displacement of the mass from the equilibrium point, F0 is the driving amplitude, ω is the driving angular frequency, k is the spring constant, and c is the viscous damping coefficient. This can be rewritten in the form \(\frac{d^2x}{dt^2} + 2\zeta\omega_0\frac{dx}{dt} + \omega_0^2 x = \frac{F_0}{m}\sin(\omega t),\) where \(\omega_0 = \sqrt{k/m}\) is called the undamped angular frequency of the oscillator or the natural frequency, and \(\zeta = \frac{c}{2\sqrt{mk}}\) is called the damping ratio. Many sources also refer to ω0 as the resonant frequency. However, as shown below, when analyzing oscillations of the displacement x(t), the resonant frequency is close to but not the same as ω0. In general the resonant frequency is close to but not necessarily the same as the natural frequency. The RLC circuit example in the next section gives examples of different resonant frequencies for the same system. The general solution of this equation is the sum of a transient solution that depends on initial conditions and a steady state solution that is independent of initial conditions and depends only on the driving amplitude F0, driving frequency ω, undamped angular frequency ω0, and the damping ratio ζ. The transient solution decays in a relatively short amount of time, so to study resonance it is sufficient to consider the steady state solution. It is possible to write the steady-state solution for x(t) as a function proportional to the driving force with an induced phase change φ, \(x(t) = \frac{F_0/m}{\sqrt{(2\omega_0\zeta\omega)^2 + (\omega_0^2 - \omega^2)^2}}\sin(\omega t + \varphi),\) where \(\varphi = \arctan\!\left(\frac{2\omega\omega_0\zeta}{\omega^2 - \omega_0^2}\right) + n\pi.\) The phase value is usually taken to be between −180° and 0 so it represents a phase lag for both positive and negative values of the arctan argument. Resonance occurs when, at certain driving frequencies, the steady-state amplitude of x(t) is large compared to its amplitude at other driving frequencies. 
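The location of the resonance peak of the steady-state amplitude can be checked numerically. This is an illustrative sketch, not from the article; the parameter values are assumptions. It scans driving frequencies and confirms that the peak sits at ω0·sqrt(1 − 2ζ²), slightly below the natural frequency.

```python
# Numerical check: the steady-state amplitude of the driven, damped
# oscillator, A(w) = (F0/m) / sqrt((2*zeta*w0*w)**2 + (w0**2 - w**2)**2),
# peaks at w_r = w0*sqrt(1 - 2*zeta**2) when the system is underdamped.
import math

def amplitude(w, w0=1.0, zeta=0.1, F0=1.0, m=1.0):
    return (F0 / m) / math.sqrt((2 * zeta * w0 * w) ** 2
                                + (w0 ** 2 - w ** 2) ** 2)

w0, zeta = 1.0, 0.1                      # illustrative values
ws = [i * 1e-4 for i in range(1, 20000)]  # scan of driving frequencies
w_peak = max(ws, key=lambda w: amplitude(w, w0, zeta))
w_r = w0 * math.sqrt(1 - 2 * zeta ** 2)   # analytic resonant frequency
```

With ζ = 0.1 the peak lands near 0.99·ω0; as ζ grows toward 1/√2 the peak slides further below ω0 and then disappears.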
For the mass on a spring, resonance corresponds physically to the mass's oscillations having large displacements from the spring's equilibrium position at certain driving frequencies. Looking at the amplitude of x(t) as a function of the driving frequency ω, the amplitude is maximal at the driving frequency \(\omega_r = \omega_0\sqrt{1 - 2\zeta^2}.\) ωr is the resonant frequency for this system. Again, the resonant frequency does not equal the undamped angular frequency ω0 of the oscillator. They are proportional, and if the damping ratio goes to zero they are the same, but for non-zero damping they are not the same frequency. Resonance may also occur at other frequencies near the resonant frequency, including ω0, but the maximum response is at the resonant frequency. Also, ωr is only real and non-zero if \(\zeta < 1/\sqrt{2}\), so this system can only resonate when the harmonic oscillator is significantly underdamped. For systems with a very small damping ratio and a driving frequency near the resonant frequency, the steady state oscillations can become very large. The pendulum For other driven, damped harmonic oscillators whose equations of motion do not look exactly like the mass on a spring example, the resonant frequency remains \(\omega_r = \omega_0\sqrt{1 - 2\zeta^2},\) but the definitions of ω0 and ζ change based on the physics of the system. For a pendulum of length ℓ and small displacement angle θ, Newton's second law becomes \(m\ell\frac{d^2\theta}{dt^2} = F_0\sin(\omega t) - mg\theta - c\ell\frac{d\theta}{dt},\) and therefore \(\omega_0 = \sqrt{\frac{g}{\ell}}\) and \(\zeta = \frac{c}{2m}\sqrt{\frac{\ell}{g}}.\) RLC series circuits Consider a circuit consisting of a resistor with resistance R, an inductor with inductance L, and a capacitor with capacitance C connected in series with current i(t) and driven by a voltage source with voltage vin(t). The voltage drop around the circuit is \(v_{in}(t) = L\frac{di}{dt} + Ri(t) + \frac{1}{C}\int i(t)\,dt.\) Rather than analyzing a candidate solution to this equation like in the mass on a spring example above, this section will analyze the frequency response of this circuit. 
Taking the Laplace transform of this equation gives \(sLI(s) + RI(s) + \frac{1}{sC}I(s) = V_{in}(s),\) where I(s) and Vin(s) are the Laplace transform of the current and input voltage, respectively, and s is a complex frequency parameter in the Laplace domain. Rearranging terms, \(I(s) = \frac{s}{L\left(s^2 + \frac{R}{L}s + \frac{1}{LC}\right)}V_{in}(s).\) Voltage across the capacitor An RLC circuit in series presents several options for where to measure an output voltage. Suppose the output voltage of interest is the voltage drop across the capacitor. As shown above, in the Laplace domain this voltage is \(V_{out}(s) = \frac{1}{sC}I(s),\) or \(V_{out}(s) = \frac{1}{LC\left(s^2 + \frac{R}{L}s + \frac{1}{LC}\right)}V_{in}(s).\) Define for this circuit a natural frequency and a damping ratio, \(\omega_0 = \frac{1}{\sqrt{LC}}, \qquad \zeta = \frac{R}{2}\sqrt{\frac{C}{L}}.\) The ratio of the output voltage to the input voltage becomes \(H(s) = \frac{V_{out}(s)}{V_{in}(s)} = \frac{\omega_0^2}{s^2 + 2\zeta\omega_0 s + \omega_0^2}.\) H(s) is the transfer function between the input voltage and the output voltage. This transfer function has two poles–roots of the polynomial in the transfer function's denominator–at \(s = -\zeta\omega_0 \pm i\omega_0\sqrt{1 - \zeta^2},\) and no zeros–roots of the polynomial in the transfer function's numerator. Moreover, for \(\zeta \le 1\), the magnitude of these poles is the natural frequency ω0, and for \(\zeta < 1/\sqrt{2}\), our condition for resonance in the harmonic oscillator example, the poles are closer to the imaginary axis than to the real axis. Evaluating H(s) along the imaginary axis \(s = i\omega\), the transfer function describes the frequency response of this circuit. Equivalently, the frequency response can be analyzed by taking the Fourier transform of the voltage equation instead of the Laplace transform. The transfer function, which is also complex, can be written as a gain and phase, \(H(i\omega) = G(\omega)e^{i\Phi(\omega)}.\) A sinusoidal input voltage at frequency ω results in an output voltage at the same frequency that has been scaled by G(ω) and has a phase shift Φ(ω). The gain and phase can be plotted versus frequency on a Bode plot. For the RLC circuit's capacitor voltage, the gain of the transfer function H(iω) is \(G(\omega) = \frac{\omega_0^2}{\sqrt{(2\omega\omega_0\zeta)^2 + (\omega_0^2 - \omega^2)^2}}.\) Note the similarity between the gain here and the amplitude of the harmonic oscillator's steady-state solution. 
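The capacitor-voltage gain can be evaluated directly from component values. A brief sketch follows; the R, L, C values are illustrative assumptions, not from the article. It computes the natural frequency and damping ratio from the components and confirms that the gain is 1 at DC and peaks just below ω0.

```python
# Capacitor-voltage gain of a series RLC circuit:
#   G(w) = w0**2 / sqrt((2*w*w0*zeta)**2 + (w0**2 - w**2)**2),
# with w0 = 1/sqrt(L*C) and zeta = (R/2)*sqrt(C/L).
import math

R, L, C = 10.0, 1e-3, 1e-6           # ohms, henries, farads (illustrative)
w0 = 1 / math.sqrt(L * C)            # natural frequency, rad/s
zeta = (R / 2) * math.sqrt(C / L)    # damping ratio (dimensionless)

def gain_capacitor(w):
    return w0 ** 2 / math.sqrt((2 * w * w0 * zeta) ** 2
                               + (w0 ** 2 - w ** 2) ** 2)

w_r = w0 * math.sqrt(1 - 2 * zeta ** 2)  # frequency of maximum gain
```

For these values ζ ≈ 0.16, well inside the underdamped regime, so the resonant peak exists and sits a few percent below ω0.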
Once again, the gain is maximized at the resonant frequency \(\omega_r = \omega_0\sqrt{1 - 2\zeta^2}.\) Here, the resonance corresponds physically to having a relatively large amplitude for the steady state oscillations of the voltage across the capacitor compared to its amplitude at other driving frequencies. Voltage across the inductor The resonant frequency need not always take the form given in the examples above. For the RLC circuit, suppose instead that the output voltage of interest is the voltage across the inductor. As shown above, in the Laplace domain the voltage across the inductor is \(V_{out}(s) = sLI(s),\) using the same definitions for ω0 and ζ as in the previous example. The transfer function between Vin(s) and this new Vout(s) across the inductor is \(H(s) = \frac{s^2}{s^2 + 2\zeta\omega_0 s + \omega_0^2}.\) This transfer function has the same poles as the transfer function in the previous example, but it also has two zeroes in the numerator at \(s = 0\). Evaluating H(s) along the imaginary axis, its gain becomes \(G(\omega) = \frac{\omega^2}{\sqrt{(2\omega\omega_0\zeta)^2 + (\omega_0^2 - \omega^2)^2}}.\) Compared to the gain using the capacitor voltage as the output, this gain has a factor of ω2 in the numerator and will therefore have a different resonant frequency that maximizes the gain. That frequency is \(\omega_r = \frac{\omega_0}{\sqrt{1 - 2\zeta^2}}.\) So for the same RLC circuit but with the voltage across the inductor as the output, the resonant frequency is now larger than the natural frequency, though it still tends towards the natural frequency as the damping ratio goes to zero. That the same circuit can have different resonant frequencies for different choices of output is not contradictory. As the voltage equation shows, the voltage drop across the circuit is divided among the three circuit elements, and each element has different dynamics. The capacitor's voltage grows slowly by integrating the current over time and is therefore more sensitive to lower frequencies, whereas the inductor's voltage grows when the current changes rapidly and is therefore more sensitive to higher frequencies. 
While the circuit as a whole has a natural frequency where it tends to oscillate, the different dynamics of each circuit element make each element resonate at a slightly different frequency. Voltage across the resistor Suppose that the output voltage of interest is the voltage across the resistor. In the Laplace domain the voltage across the resistor is \(V_{out}(s) = RI(s),\) and using the same natural frequency and damping ratio as in the capacitor example the transfer function is \(H(s) = \frac{2\zeta\omega_0 s}{s^2 + 2\zeta\omega_0 s + \omega_0^2}.\) This transfer function also has the same poles as the previous RLC circuit examples, but it only has one zero in the numerator at s = 0. For this transfer function, its gain is \(G(\omega) = \frac{2\zeta\omega_0\omega}{\sqrt{(2\omega\omega_0\zeta)^2 + (\omega_0^2 - \omega^2)^2}}.\) The resonant frequency that maximizes this gain is \(\omega_r = \omega_0,\) and the gain is one at this frequency, so the voltage across the resistor resonates at the circuit's natural frequency and at this frequency the amplitude of the voltage across the resistor equals the input voltage's amplitude. Antiresonance Some systems exhibit antiresonance that can be analyzed in the same way as resonance. For antiresonance, the amplitude of the response of the system at certain frequencies is disproportionately small rather than being disproportionately large. In the RLC circuit example, this phenomenon can be observed by analyzing both the inductor and the capacitor combined. Suppose that the output voltage of interest in the RLC circuit is the voltage across the inductor and the capacitor combined in series. The voltage equation above showed that the voltages across the three circuit elements sum to the input voltage, so measuring the output voltage as the sum of the inductor and capacitor voltages combined is the same as vin minus the voltage drop across the resistor. The previous example showed that at the natural frequency of the system, the amplitude of the voltage drop across the resistor equals the amplitude of vin, and therefore the voltage across the inductor and capacitor combined has zero amplitude. We can show this with the transfer function. 
The sum of the inductor and capacitor voltages is \(V_{out}(s) = \left(sL + \frac{1}{sC}\right)I(s).\) Using the same natural frequency and damping ratios as the previous examples, the transfer function is \(H(s) = \frac{s^2 + \omega_0^2}{s^2 + 2\zeta\omega_0 s + \omega_0^2}.\) This transfer function has the same poles as the previous examples but has zeroes at \(s = \pm i\omega_0.\) Evaluating the transfer function along the imaginary axis, its gain is \(G(\omega) = \frac{\left|\omega_0^2 - \omega^2\right|}{\sqrt{(2\omega\omega_0\zeta)^2 + (\omega_0^2 - \omega^2)^2}}.\) Rather than look for resonance, i.e., peaks of the gain, notice that the gain goes to zero at ω = ω0, which complements our analysis of the resistor's voltage. This is called antiresonance, which has the opposite effect of resonance. Rather than result in outputs that are disproportionately large at this frequency, this circuit with this choice of output has no response at all at this frequency. The frequency that is filtered out corresponds exactly to the zeroes of the transfer function, which lie on the imaginary axis. Relationships between resonance and frequency response in the RLC series circuit example These RLC circuit examples illustrate how resonance is related to the frequency response of the system. 
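The antiresonance claim above can be verified numerically. This is a sketch with illustrative parameter values: the combined inductor-and-capacitor gain has a notch at ω0 while the resistor gain equals 1 there, and at every frequency the two gains are complementary in quadrature.

```python
# Numerical check of antiresonance in the series RLC circuit.
#   G_lc(w) = |w0**2 - w**2| / D(w)   (inductor + capacitor output)
#   G_r(w)  = 2*zeta*w0*w   / D(w)   (resistor output)
# where D(w) = sqrt((2*w*w0*zeta)**2 + (w0**2 - w**2)**2).
import math

w0, zeta = 1.0, 0.2   # illustrative normalized values

def denom(w):
    return math.sqrt((2 * w * w0 * zeta) ** 2 + (w0 ** 2 - w ** 2) ** 2)

def gain_lc(w):   # inductor + capacitor combined: notch at w0
    return abs(w0 ** 2 - w ** 2) / denom(w)

def gain_r(w):    # resistor: unity gain at w0
    return 2 * zeta * w0 * w / denom(w)
```

Because the numerators of the two gains are the two quadrature components of the denominator, gain_lc(w)² + gain_r(w)² = 1 at every frequency, which is the "vin minus the resistor drop" statement in quantitative form.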
Specifically, these examples illustrate:
How resonant frequencies can be found by looking for peaks in the gain of the transfer function between the input and output of the system, for example in a Bode magnitude plot
How the resonant frequency for a single system can be different for different choices of system output
The connection between the system's natural frequency, the system's damping ratio, and the system's resonant frequency
The connection between the system's natural frequency and the magnitude of the transfer function's poles, and therefore a connection between the poles and the resonant frequency
A connection between the transfer function's zeroes and the shape of the gain as a function of frequency, and therefore a connection between the zeroes and the resonant frequency that maximizes gain
A connection between the transfer function's zeroes and antiresonance
The next section extends these concepts to resonance in a general linear system. Generalizing resonance and antiresonance for linear systems Next consider an arbitrary linear system with multiple inputs and outputs. For example, in state-space representation a third order linear time-invariant system with three inputs and two outputs might be written as \(\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t),\) where ui(t) are the inputs, xi(t) are the state variables, yi(t) are the outputs, and A, B, C, and D are matrices describing the dynamics between the variables. This system has a transfer function matrix \(H(s) = C(sI - A)^{-1}B + D,\) whose elements are the transfer functions between the various inputs and outputs. Each Hij(s) is a scalar transfer function linking one of the inputs to one of the outputs. The RLC circuit examples above had one input voltage and showed four possible output voltages–across the capacitor, across the inductor, across the resistor, and across the capacitor and inductor combined in series–each with its own transfer function. 
If the RLC circuit were set up to measure all four of these output voltages, that system would have a 4×1 transfer function matrix linking the single input to each of the four outputs. Evaluated along the imaginary axis, each Hij(iω) can be written as a gain and phase shift, \(H_{ij}(i\omega) = G_{ij}(\omega)e^{i\Phi_{ij}(\omega)}.\) Peaks in the gain at certain frequencies correspond to resonances between that transfer function's input and output, assuming the system is stable. Each transfer function Hij(s) can also be written as a fraction whose numerator and denominator are polynomials of s. The complex roots of the numerator are called zeroes, and the complex roots of the denominator are called poles. For a stable system, the positions of these poles and zeroes on the complex plane give some indication of whether the system can resonate or antiresonate and at which frequencies. In particular, any stable or marginally stable, complex conjugate pair of poles with imaginary components can be written in terms of a natural frequency and a damping ratio as \(s = -\zeta\omega_0 \pm i\omega_0\sqrt{1 - \zeta^2},\) as in the RLC circuit examples. The natural frequency ω0 of that pole is the magnitude of the position of the pole on the complex plane and the damping ratio of that pole determines how quickly that oscillation decays. In general,
Complex conjugate pairs of poles near the imaginary axis correspond to a peak or resonance in the frequency response in the vicinity of the pole's natural frequency. If the pair of poles is on the imaginary axis, the gain is infinite at that frequency.
Complex conjugate pairs of zeroes near the imaginary axis correspond to a notch or antiresonance in the frequency response in the vicinity of the zero's frequency, i.e., the frequency equal to the magnitude of the zero. If the pair of zeroes is on the imaginary axis, the gain is zero at that frequency.
In the RLC circuit example, the first generalization relating poles to resonance is observed in the capacitor voltage example. The second generalization relating zeroes to antiresonance is observed in the example of the combined inductor and capacitor voltage. 
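The pole-to-parameter correspondence just described is easy to make concrete: given one pole of a stable complex-conjugate pair, the natural frequency is its magnitude and the damping ratio is the negated real part over that magnitude. The following sketch (values are illustrative) recovers both from a constructed pole.

```python
# Recover (w0, zeta) from a stable complex-conjugate pole
#   s = -zeta*w0 + i*w0*sqrt(1 - zeta**2).
import math

def pole_characteristics(s):
    w0 = abs(s)              # natural frequency = distance from the origin
    zeta = -s.real / abs(s)  # damping ratio of the conjugate pair
    return w0, zeta

# Example pole with w0 = 2 rad/s and zeta = 0.25.
w0_true, zeta_true = 2.0, 0.25
s = complex(-zeta_true * w0_true,
            w0_true * math.sqrt(1 - zeta_true ** 2))
w0_est, zeta_est = pole_characteristics(s)
```

Poles hugging the imaginary axis give ζ near 0 (sharp resonance); poles far to the left give ζ near 1 (heavily damped, no peak).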
In the examples of the harmonic oscillator, the RLC circuit capacitor voltage, and the RLC circuit inductor voltage, "poles near the imaginary axis" corresponds to the significantly underdamped condition \(\zeta < 1/\sqrt{2}\). Standing waves A physical system can have as many natural frequencies as it has degrees of freedom and can resonate near each of those natural frequencies. A mass on a spring, which has one degree of freedom, has one natural frequency. A double pendulum, which has two degrees of freedom, can have two natural frequencies. As the number of coupled harmonic oscillators increases, the time it takes to transfer energy from one to the next becomes significant. Systems with very large numbers of degrees of freedom can be thought of as continuous rather than as having discrete oscillators. Energy transfers from one oscillator to the next in the form of waves. For example, the string of a guitar or the surface of water in a bowl can be modeled as a continuum of small coupled oscillators and waves can travel along them. In many cases these systems have the potential to resonate at certain frequencies, forming standing waves with large-amplitude oscillations at fixed positions. Resonance in the form of standing waves underlies many familiar phenomena, such as the sound produced by musical instruments, electromagnetic cavities used in lasers and microwave ovens, and energy levels of atoms. Standing waves on a string When a string of fixed length is driven at a particular frequency, a wave propagates along the string at the same frequency. The waves reflect off the ends of the string, and eventually a steady state is reached with waves traveling in both directions. The waveform is the superposition of the waves. At certain frequencies, the steady state waveform does not appear to travel along the string. At fixed positions called nodes, the string is never displaced. 
Between the nodes the string oscillates and exactly halfway between the nodes–at positions called anti-nodes–the oscillations have their largest amplitude. For a string of length L with fixed ends, the displacement y(x, t) of the string perpendicular to the x-axis at time t is \(y(x,t) = 2A\sin(kx)\cos(2\pi f t),\) where A is the amplitude of the left- and right-traveling waves interfering to form the standing wave, k is the wave number, and f is the frequency. The frequencies that resonate and form standing waves relate to the length of the string as \(f = \frac{nv}{2L}, \qquad n = 1, 2, 3, \dots,\) where v is the speed of the wave and the integer n denotes different modes or harmonics. The standing wave with n = 1 oscillates at the fundamental frequency and has a wavelength that is twice the length of the string. The possible modes of oscillation form a harmonic series. Resonance in complex networks A generalization to complex networks of coupled harmonic oscillators shows that such systems have a finite number of natural resonant frequencies, related to the topological structure of the network itself. In particular, such frequencies are related to the eigenvalues of the network's Laplacian matrix. Let \(A\) be the adjacency matrix describing the topological structure of the network and \(L = D - A\) the corresponding Laplacian matrix, where \(D\) is the diagonal matrix of the degrees of the network's nodes. Then, for a network of classical and identical harmonic oscillators, when a sinusoidal driving force is applied to a specific node, the global resonant frequencies of the network are given by \(\omega_i = \sqrt{\lambda_i}\) (in units where the oscillators' masses and coupling constants are one), where \(\lambda_i\) are the eigenvalues of the Laplacian \(L\). Types Mechanical Mechanical resonance is the tendency of a mechanical system to absorb more energy when the frequency of its oscillations matches the system's natural frequency of vibration than it does at other frequencies. It may cause violent swaying motions and even catastrophic failure in improperly constructed structures including bridges, buildings, trains, and aircraft. 
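The harmonic-series relation for a fixed-fixed string, f = n·v/(2L), can be sketched directly. The wave speed and string length below are illustrative assumptions chosen so the fundamental lands near 330 Hz, roughly the pitch of a guitar's high E string.

```python
# Resonant frequencies of a string with both ends fixed:
#   f_n = n * v / (2 * L),  n = 1, 2, 3, ...
# forming a harmonic series above the fundamental.
def string_harmonics(v, L, n_max):
    """Return the first n_max resonant frequencies (Hz) of the string."""
    return [n * v / (2 * L) for n in range(1, n_max + 1)]

# Illustrative values: 0.65 m string, 429 m/s transverse wave speed.
freqs = string_harmonics(v=429.0, L=0.65, n_max=4)
```

Every mode is an integer multiple of the fundamental, which is why a plucked string sounds a single pitch with overtones rather than an inharmonic clangor.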
When designing objects, engineers must ensure the mechanical resonance frequencies of the component parts do not match driving vibrational frequencies of motors or other oscillating parts, a phenomenon known as resonance disaster. Avoiding resonance disasters is a major concern in every building, tower, and bridge construction project. As a countermeasure, shock mounts can be installed to absorb resonant frequencies and thus dissipate the absorbed energy. The Taipei 101 building relies on a massive pendulum—a tuned mass damper—to cancel resonance. Furthermore, the structure is designed to resonate at a frequency that does not typically occur. Buildings in seismic zones are often constructed to take into account the oscillating frequencies of expected ground motion. In addition, engineers designing objects having engines must ensure that the mechanical resonant frequencies of the component parts do not match driving vibrational frequencies of the motors or other strongly oscillating parts. Clocks keep time by mechanical resonance in a balance wheel, pendulum, or quartz crystal. The cadence of runners has been hypothesized to be energetically favorable due to resonance between the elastic energy stored in the lower limb and the mass of the runner. International Space Station The rocket engines for the International Space Station (ISS) are controlled by an autopilot. Ordinarily, uploaded parameters for controlling the engine control system for the Zvezda module make the rocket engines boost the International Space Station to a higher orbit. The rocket engines are hinge-mounted, and ordinarily the crew does not notice the operation. On January 14, 2009, however, the uploaded parameters made the autopilot swing the rocket engines in larger and larger oscillations, at a frequency of 0.5 Hz. These oscillations were captured on video, and lasted for 142 seconds. 
Acoustic Acoustic resonance is a branch of mechanical resonance that is concerned with the mechanical vibrations across the frequency range of human hearing, in other words sound. For humans, hearing is normally limited to frequencies between about 20 Hz and 20,000 Hz (20 kHz). Many objects and materials act as resonators with resonant frequencies within this range, and when struck vibrate mechanically, pushing on the surrounding air to create sound waves. This is the source of many percussive sounds we hear. Acoustic resonance is an important consideration for instrument builders, as most acoustic instruments use resonators, such as the strings and body of a violin, the length of tube in a flute, and the shape of, and tension on, a drum membrane. Like mechanical resonance, acoustic resonance can result in catastrophic failure of the object at resonance. The classic example of this is breaking a wine glass with sound at the precise resonant frequency of the glass, although this is difficult in practice. Electrical Electrical resonance occurs in an electric circuit at a particular resonant frequency when the impedance of the circuit is at a minimum in a series circuit or at a maximum in a parallel circuit (usually when the transfer function peaks in absolute value). Resonance in circuits is used for both transmitting and receiving wireless communications such as television, cell phones and radio. Optical An optical cavity, also called an optical resonator, is an arrangement of mirrors that forms a standing wave cavity resonator for light waves. Optical cavities are a major component of lasers, surrounding the gain medium and providing feedback of the laser light. They are also used in optical parametric oscillators and some interferometers. Light confined in the cavity reflects multiple times producing standing waves for certain resonant frequencies. The standing wave patterns produced are called "modes".
Longitudinal modes differ only in frequency while transverse modes differ for different frequencies and have different intensity patterns across the cross-section of the beam. Ring resonators and whispering galleries are examples of optical resonators that do not form standing waves. Different resonator types are distinguished by the focal lengths of the two mirrors and the distance between them; flat mirrors are not often used because of the difficulty of aligning them precisely. The geometry (resonator type) must be chosen so the beam remains stable, i.e., the beam size does not continue to grow with each reflection. Resonator types are also designed to meet other criteria such as minimum beam waist or having no focal point (and therefore intense light at that point) inside the cavity. Optical cavities are designed to have a very large Q factor. A beam reflects a large number of times with little attenuation—therefore the frequency line width of the beam is small compared to the frequency of the laser. Additional optical resonances are guided-mode resonances and surface plasmon resonance, which result in anomalous reflection and high evanescent fields at resonance. In this case, the resonant modes are guided modes of a waveguide or surface plasmon modes of a dielectric-metallic interface. These modes are usually excited by a subwavelength grating. Orbital In celestial mechanics, an orbital resonance occurs when two orbiting bodies exert a regular, periodic gravitational influence on each other, usually due to their orbital periods being related by a ratio of two small integers. Orbital resonances greatly enhance the mutual gravitational influence of the bodies. In most cases, this results in an unstable interaction, in which the bodies exchange momentum and shift orbits until the resonance no longer exists. Under some circumstances, a resonant system can be stable and self-correcting, so that the bodies remain in resonance. 
Examples are the 1:2:4 resonance of Jupiter's moons Ganymede, Europa, and Io, and the 2:3 resonance between Pluto and Neptune. Unstable resonances with Saturn's inner moons give rise to gaps in the rings of Saturn. The special case of 1:1 resonance (between bodies with similar orbital radii) causes large Solar System bodies to clear the neighborhood around their orbits by ejecting nearly everything else around them; this effect is used in the current definition of a planet. Atomic, particle, and molecular Nuclear magnetic resonance (NMR) is the name given to a physical resonance phenomenon involving the observation of specific quantum mechanical magnetic properties of an atomic nucleus in the presence of an applied, external magnetic field. Many scientific techniques exploit NMR phenomena to study molecular physics, crystals, and non-crystalline materials through NMR spectroscopy. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI). All nuclei containing odd numbers of nucleons have an intrinsic magnetic moment and angular momentum. A key feature of NMR is that the resonant frequency of a particular substance is directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonant frequencies of the sample's nuclei depend on where in the field they are located. Therefore, the particle can be located quite precisely by its resonant frequency. Electron paramagnetic resonance, otherwise known as electron spin resonance (ESR), is a spectroscopic technique similar to NMR, but uses unpaired electrons instead. Materials for which this can be applied are much more limited since the material needs to both have an unpaired spin and be paramagnetic. The Mössbauer effect is the resonant and recoil-free emission and absorption of gamma ray photons by atoms bound in a solid form. 
Resonance in particle physics appears in similar circumstances to classical physics at the level of quantum mechanics and quantum field theory. Resonances can also be thought of as unstable particles, with the formula in the Universal resonance curve section of this article applying if Γ is the particle's decay rate and Ω is the particle's mass M. In that case, the formula comes from the particle's propagator, with its mass replaced by the complex number M + iΓ. The formula is further related to the particle's decay rate by the optical theorem. Disadvantages A column of soldiers marching in regular step on a narrow and structurally flexible bridge can set it into dangerously large amplitude oscillations. On April 12, 1831, the Broughton Suspension Bridge near Salford, England collapsed while a group of British soldiers were marching across. Since then, the British Army has had a standing order for soldiers to break stride when marching across bridges, to avoid resonance from their regular marching pattern affecting the bridge. Vibrations of a motor or engine can induce resonant vibration in its supporting structures if their natural frequency is close to that of the vibrations of the engine. A common example is the rattling sound of a bus body when the engine is left idling. Structural resonance of a suspension bridge induced by winds can lead to its catastrophic collapse. Several early suspension bridges in Europe and the United States were destroyed by structural resonance induced by modest winds. The collapse of the Tacoma Narrows Bridge on 7 November 1940 is characterized in physics as a classic example of resonance. It has been argued by Robert H. Scanlan and others that the destruction was instead caused by aeroelastic flutter, a complicated interaction between the bridge and the winds passing through it—an example of self-oscillation, or a kind of "self-sustaining vibration" as referred to in the nonlinear theory of vibrations.
Q factor The Q factor or quality factor is a dimensionless parameter that describes how under-damped an oscillator or resonator is, and characterizes the bandwidth of a resonator relative to its center frequency. A high value for Q indicates a lower rate of energy loss relative to the stored energy, i.e., the system is lightly damped. The parameter is defined by the equation $Q = \frac{f_r}{\Delta f}$, where $f_r$ is the resonant frequency and $\Delta f$ is the bandwidth over which resonance occurs. The higher the Q factor, the greater the amplitude at the resonant frequency, and the smaller the bandwidth, or range of frequencies around resonance. In electrical resonance, a high-Q circuit in a radio receiver is more difficult to tune, but has greater selectivity, and so would be better at filtering out signals from other stations. High Q oscillators are more stable. Examples that normally have a low Q factor include door closers (Q=0.5). Systems with high Q factors include tuning forks (Q=1000), atomic clocks and lasers (Q ≈ 10^11). Universal resonance curve The exact response of a resonance, especially for frequencies far from the resonant frequency, depends on the details of the physical system, and is usually not exactly symmetric about the resonant frequency, as illustrated for the simple harmonic oscillator above. For a lightly damped linear oscillator with a resonance frequency $\Omega$, the intensity of oscillations $I$ when the system is driven with a driving frequency $\omega$ is typically approximated by the following formula that is symmetric about the resonance frequency: $I(\omega) \equiv |\chi(\omega)|^2 \propto \frac{1}{(\omega - \Omega)^2 + (\Gamma/2)^2}$, where the susceptibility $\chi(\omega)$ links the amplitude of the oscillator to the driving force in frequency space: $a(\omega) = \chi(\omega) F(\omega)$. The intensity is defined as the square of the amplitude of the oscillations. This is a Lorentzian function, or Cauchy distribution, and this response is found in many physical situations involving resonant systems. $\Gamma$ is a parameter dependent on the damping of the oscillator, and is known as the linewidth of the resonance.
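The Lorentzian response described above is easy to explore numerically. In the sketch below the values of the resonance frequency and linewidth are arbitrary; it checks that the intensity falls to half its peak at half a linewidth from resonance, so the full width at half maximum equals the linewidth, and estimates Q as center frequency over bandwidth.

```python
# Lorentzian resonance curve: I(w) proportional to 1 / ((w - W)^2 + (G/2)^2).
W = 100.0   # resonance frequency Omega (arbitrary units)
G = 4.0     # linewidth Gamma (arbitrary units)

def intensity(w):
    return 1.0 / ((w - W) ** 2 + (G / 2) ** 2)

peak = intensity(W)
# The intensity drops to half its peak value at W +/- G/2, so FWHM = G.
assert abs(intensity(W + G / 2) - peak / 2) < 1e-12
assert abs(intensity(W - G / 2) - peak / 2) < 1e-12
Q = W / G   # Q factor as center frequency over bandwidth
print(Q)    # 25.0
```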
Heavily damped oscillators tend to have broad linewidths, and respond to a wider range of driving frequencies around the resonant frequency. The linewidth is inversely proportional to the Q factor, which is a measure of the sharpness of the resonance. In radio engineering and electronics engineering, this approximate symmetric response is known as the universal resonance curve, a concept introduced by Frederick E. Terman in 1932 to simplify the approximate analysis of radio circuits with a range of center frequencies and Q values. See also Cymatics Driven harmonic motion Earthquake engineering Electric dipole spin resonance Formant Limbic resonance Nonlinear resonance Normal mode Positive feedback Schumann resonance Simple harmonic motion Stochastic resonance Sympathetic string Resonance (chemistry) Fermi resonance Resonance (particle physics) Notes References External links The Feynman Lectures on Physics Vol. I Ch. 23: Resonance Resonance - a chapter from an online textbook Greene, Brian, "Resonance in strings". The Elegant Universe, NOVA (PBS) Hyperphysics section on resonance concepts Resonance versus resonant (usage of terms) Wood and Air Resonance in a Harpsichord Breaking glass with sound , including high-speed footage of glass breaking Antennas (radio) Oscillation
Resonance
[ "Physics", "Chemistry" ]
7,436
[ "Resonance", "Physical phenomena", "Waves", "Scattering", "Mechanics", "Oscillation" ]
41,662
https://en.wikipedia.org/wiki/Response%20time%20%28technology%29
In technology, response time is the time a system or functional unit takes to react to a given input. Computing In computing, the responsiveness of a service, how long a system takes to respond to a request for service, is measured through the response time. That service can be anything from a memory fetch, to a disk IO, to a complex database query, or loading a full web page. Ignoring transmission time for a moment, the response time is the sum of the service time and wait time. The service time is the time it takes to do the work you requested. For a given request the service time varies little as the workload increases – to do X amount of work it always takes X amount of time. The wait time is how long the request had to wait in a queue before being serviced and it varies from zero, when no waiting is required, to a large multiple of the service time, as many requests are already in the queue and have to be serviced first. With basic queueing theory math you can calculate how the average wait time increases as the device providing the service goes from 0-100% busy. As the device becomes busier, the average wait time increases in a non-linear fashion. The busier the device is, the more dramatic the response time increases will seem as you approach 100% busy; all of that increase is caused by increases in wait time, which is the result of all the requests waiting in queue that have to run first. Transmission time gets added to response time when your request and the resulting response have to travel over a network and it can be very significant. Transmission time can include propagation delays due to distance (the speed of light is finite), delays due to transmission errors, and data communication bandwidth limits (especially at the last mile) slowing the transmission speed of the request or the reply. Developers can reduce the response time of a system (for end users or not) using program optimization techniques.
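The non-linear growth of wait time can be illustrated with the simplest queueing model. The sketch below assumes an M/M/1 queue (a single server with random arrivals), for which the average response time is the service time divided by (1 − utilisation); the model choice and figures are illustrative, not specified in the text.

```python
# Average response time in an M/M/1 queue: R = S / (1 - rho),
# where S is the service time and rho the utilisation (0 <= rho < 1).
def avg_response_time(service_time, utilisation):
    if not 0.0 <= utilisation < 1.0:
        raise ValueError("utilisation must be in [0, 1)")
    return service_time / (1.0 - utilisation)

S = 10.0  # ms of service time (illustrative)
for rho in (0.0, 0.5, 0.9, 0.99):
    # Wait time = response time - service time; it explodes near 100% busy.
    print(rho, round(avg_response_time(S, rho), 1))
```

At 50% busy the request waits one extra service time (20 ms total); at 99% busy the same 10 ms of work takes about 1000 ms, all of the increase being wait time.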
Real-time systems In real-time systems the response time of a task or thread is defined as the time elapsed between the dispatch (time when task is ready to execute) to the time when it finishes its job (one dispatch). Response time is different from WCET which is the maximum time the task would take if it were to execute without interference. It is also different from deadline which is the length of time during which the task's output would be valid in the context of the specific system. And it has a relation to the TTFB, which is the time between the dispatch and the time when the response starts. Display technologies Response time is the amount of time a pixel in a display takes to change. It is measured in milliseconds (ms). Lower numbers mean faster transitions and therefore fewer visible image artifacts. Display monitors with long response times would create display motion blur around moving objects, making them unacceptable for rapidly moving images. Response times are usually measured from grey-to-grey transitions, based on a VESA industry standard from the 10% to the 90% points in the pixel response curve. In fast paced competitive games such as Counter-Strike, the response time of a display is crucial for optimal performance. Displays that have a lower response time are more responsive to player input and produce less visual errors when displaying a rapidly changing image, making low response time important for competitive gaming. Most modern monitors that are marketed for gaming have a response time of 1ms, although it is not uncommon to see <1ms response time in high end monitors, and >1ms response time on less expensive monitors or monitors that have a higher resolution. See also Latency (engineering) Interrupt latency Application Response Measurement References Television technology
Response time (technology)
[ "Technology" ]
748
[ "Information and communications technology", "Television technology" ]
41,663
https://en.wikipedia.org/wiki/Responsivity
Responsivity is a measure of the input–output gain of a detector system. In the specific case of a photodetector, it measures the electrical output per optical input. A photodetector's responsivity is usually expressed in units of amperes or volts per watt of incident radiant power. For a system that responds linearly to its input, there is a unique responsivity. For nonlinear systems, the responsivity is the local slope. Many common photodetectors respond linearly as a function of the incident power. Responsivity is a function of the wavelength of the incident radiation and of the sensor's properties, such as the bandgap of the material of which the photodetector is made. One simple expression for the responsivity $R$ of a photodetector in which an optical signal is converted into an electric current (known as a photocurrent) is $R = \frac{\eta q}{h\nu}$, where $\eta$ is the quantum efficiency (the conversion efficiency of photons to electrons) of the detector for a given wavelength, $q$ is the electron charge, $\nu$ is the frequency of the optical signal, and $h$ is the Planck constant. This expression is also given as $R = \frac{\eta q \lambda}{hc}$ in terms of $\lambda$, the wavelength of the optical signal (with $c$ the speed of light), and has the unit of amperes per watt (A/W). The term responsivity is also used to summarize the input–output relationship in non-electrical systems. For example, a neuroscientist may measure how neurons in the visual pathway respond to light. In this case, responsivity summarizes the change in the neural response per unit signal strength. The responsivity in these applications can have a variety of units. The signal strength typically is controlled by varying either intensity (intensity-response function) or contrast (contrast-response function). The neural response measure depends on the part of the nervous system under study. For example, at the level of the retinal cones, the response might be in photocurrent. In the central nervous system the response is usually spikes per second.
In functional neuroimaging, the response measure is usually BOLD contrast. The responsivity units reflect the relevant stimulus and physiological units. When describing an amplifier, the more common term is gain. A deprecated synonym is sensitivity. A system's sensitivity is the inverse of the stimulus level required to produce a threshold response, with the threshold typically chosen just above the noise level. See also Noise-equivalent power Responsiveness, a related concept from interaction design / HCI. Specific detectivity Spectral sensitivity References Electrical parameters
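As a worked example of the photodetector expression above (the quantum efficiency and wavelength below are illustrative choices, not values from the text), an ideal photodiode with 80% quantum efficiency at 1550 nm has a responsivity of about 1 A/W:

```python
# Responsivity of a photodiode: R = eta * q * lambda / (h * c), in A/W.
q = 1.602176634e-19   # electron charge, C
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s

def responsivity(eta, wavelength_m):
    return eta * q * wavelength_m / (h * c)

print(responsivity(0.8, 1550e-9))   # ~1.0 A/W at 1550 nm with eta = 0.8
```

Note the linear dependence on wavelength: for the same quantum efficiency, longer-wavelength (lower-energy) photons yield more amperes per watt.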
Responsivity
[ "Engineering" ]
532
[ "Electrical engineering", "Electrical parameters" ]
41,665
https://en.wikipedia.org/wiki/Return%20loss
In telecommunications, return loss is a measure in relative terms of the power of the signal reflected by a discontinuity in a transmission line or optical fiber. This discontinuity can be caused by a mismatch between the termination or load connected to the line and the characteristic impedance of the line. It is usually expressed as a ratio in decibels (dB): $RL(\text{dB}) = 10 \log_{10} \frac{P_i}{P_r}$, where $RL(\text{dB})$ is the return loss in dB, $P_i$ is the incident power and $P_r$ is the reflected power. Return loss is related to both standing wave ratio (SWR) and reflection coefficient (Γ). Increasing return loss corresponds to lower SWR. Return loss is a measure of how well devices or lines are matched. A match is good if the return loss is high. A high return loss is desirable and results in a lower insertion loss. From a certain perspective 'Return Loss' is a misnomer. The usual function of a transmission line is to convey power from a source to a load with minimal loss. If a transmission line is correctly matched to a load, the reflected power will be zero, no power will be lost due to reflection, and 'Return Loss' will be infinite. Conversely if the line is terminated in an open circuit, the reflected power will be equal to the incident power; all of the incident power will be lost in the sense that none of it will be transferred to a load, and RL will be zero. Thus the numerical values of RL tend in the opposite sense to that expected of a 'loss'. Sign As defined above, RL will always be positive, since Pr can never exceed Pi . However, return loss has historically been expressed as a negative number, and this convention is still widely found in the literature. Strictly speaking, if a negative sign is ascribed to RL, the ratio of reflected to incident power is implied: $RL'(\text{dB}) = 10 \log_{10} \frac{P_r}{P_i}$, where $RL'(\text{dB})$ is the negative of $RL(\text{dB})$. In practice, the sign ascribed to RL is largely immaterial.
If a transmission line includes several discontinuities along its length, the total return loss will be the sum of the RLs caused by each discontinuity, and provided all RLs are given the same sign, no error or ambiguity will result. Whichever convention is used, it will always be understood that Pr can never exceed Pi . Electrical In metallic conductor systems, reflections of a signal traveling down a conductor can occur at a discontinuity or impedance mismatch. The ratio of the amplitude of the reflected wave Vr to the amplitude of the incident wave Vi is known as the reflection coefficient $\Gamma = \frac{V_r}{V_i}$. Return loss is the negative of the magnitude of the reflection coefficient in dB. Since power is proportional to the square of the voltage, return loss is given by $RL(\text{dB}) = -20 \log_{10} |\Gamma|$, where the vertical bars indicate magnitude. Thus, a large positive return loss indicates the reflected power is small relative to the incident power, which indicates good impedance match between transmission line and load. If the incident power and the reflected power are expressed in 'absolute' decibel units, (e.g., dBm), then the return loss in dB can be calculated as the difference between the incident power Pi (in absolute dBm units) and the reflected power Pr (also in absolute dBm units): $RL(\text{dB}) = P_i(\text{dBm}) - P_r(\text{dBm})$. Optical In optics (particularly in fiber optics) a loss that takes place at discontinuities of refractive index, especially at an air-glass interface such as a fiber endface. At those interfaces, a fraction of the optical signal is reflected back toward the source. This reflection phenomenon is also called "Fresnel reflection loss," or simply "Fresnel loss." Fiber optic transmission systems use lasers to transmit signals over optical fiber, and a low optical return loss (ORL) can cause the laser to stop transmitting correctly. The measurement of ORL is becoming more important in the characterization of optical networks as the use of wavelength-division multiplexing increases.
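The decibel relations for return loss can be sketched in a few lines of code; the reflection coefficient value below is illustrative. The SWR formula used here, SWR = (1 + |Γ|)/(1 − |Γ|), is the standard relation behind the link between return loss and standing wave ratio mentioned earlier.

```python
import math

def return_loss_db(gamma_mag):
    # RL(dB) = -20 log10(|Gamma|); a matched load (Gamma -> 0) gives infinite RL.
    return -20.0 * math.log10(gamma_mag)

def swr(gamma_mag):
    # Standing wave ratio from the reflection coefficient magnitude.
    return (1.0 + gamma_mag) / (1.0 - gamma_mag)

g = 0.1                    # 10% of the incident voltage is reflected (illustrative)
print(return_loss_db(g))   # 20.0 dB
print(round(swr(g), 2))    # 1.22
```

The two functions move in opposite directions, as the text notes: a smaller reflection gives a higher return loss and a lower SWR.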
These systems use lasers that have a lower tolerance for ORL, and introduce elements into the network that are located in close proximity to the laser. Optical return loss is given by $ORL(\text{dB}) = 10 \log_{10} \frac{P_i}{P_r}$, where $P_r$ is the reflected power and $P_i$ is the incident, or input, power. See also Hybrid balance Mismatch loss Signal reflection Time-domain reflectometer Optical time domain reflectometer References Notes Bibliography Federal Standard 1037C and from MIL-STD-188 Optical Return Loss Testing—Ensuring High-Quality Transmission EXFO Application note #044 Wave mechanics Radio electronics Engineering ratios Electrical parameters Fiber optics de:Rückflussdämpfung
Return loss
[ "Physics", "Mathematics", "Engineering" ]
926
[ "Radio electronics", "Physical phenomena", "Metrics", "Engineering ratios", "Quantity", "Classical mechanics", "Waves", "Wave mechanics", "Electrical engineering", "Electrical parameters" ]
41,666
https://en.wikipedia.org/wiki/RF%20power%20margin
In telecommunications, the term RF power margin has the following meanings: The amount of transmitter power above that which is computed by the link designer as the minimum required to meet specified link performance. The RF power margin allows for uncertainties in (a) empirical components of the signal level prediction method, (b) terrain characteristics, (c) atmospheric conditions, and (d) equipment performance parameters. At any given time in an operational link, the reserve transmitter power over that which is required to maintain specified link performance. References Radio technology Radio transmission power
RF power margin
[ "Physics", "Technology", "Engineering" ]
110
[ "Information and communications technology", "Telecommunications engineering", "Physical quantities", "Radio transmission power", "Radio technology", "Power (physics)" ]
41,667
https://en.wikipedia.org/wiki/Ringaround
In telecommunications, the term ringaround has the following meanings: The improper routing of a call back through a switching center already engaged in attempting to complete the same call. In secondary surveillance radar, the presence of false targets declared as a result of transponder interrogation by side lobes of the interrogating antenna. References Telecommunications engineering
Ringaround
[ "Engineering" ]
67
[ "Electrical engineering", "Telecommunications engineering" ]
41,669
https://en.wikipedia.org/wiki/Ringdown
In telephony, ringdown is a method of signaling an operator in which telephone ringing current is sent over the line to operate a lamp or cause the operation of a self-locking relay known as a drop. Ringdown is used in manual operation, and is distinguished from automatic signaling by dialing a number. The signal consists of a continuous or pulsed alternating current (AC) signal transmitted over the line. It may be used with or without a telephone switchboard. The term originated in magneto telephone signaling in which cranking the magneto generator, either integrated into the telephone set or housed in a connected ringer box, would not only ring its bell but also cause a drop to fall down at the telephone exchange switchboard, marked with the number of the line to which the magneto telephone instrument was connected. At the end of the conversation, one participant would crank to ring off, signaling the operator to take down the connection. In modern British English, "ring off" still means ending a telephone conversation, though it is of course done by other means. Ring off is also used figuratively to indicate no longer communicating with a person. The last ringdown telephone exchange in the United States was located at Bryant Pond, Maine, had 400+ subscribers, and converted to dial service in October 1983. Ringdown operator In telephone systems where calls from distant automated exchanges arrive for manual subscribers or non-dialable points, there often would be a ringdown operator (reachable from the distant operator console by dialling NPA+181) who would manually ring the desired subscriber on a party line or toll station. On some systems, this function was carried out by the inward operator (NPA+121). In both cases, this is a telephone operator at the destination who provides assistance solely to other operators on inbound toll calls; the ringdown operator nominally cannot be dialed directly by the subscriber. 
Non-operator use In an application not involving a telephone operator, a two-point automatic ringdown circuit, or ringdown, has a telephone at each end. When the telephone at one end goes off-hook, the phone at the other end instantly rings. No dialing is involved and therefore telephone sets without dials are sometimes used. Many ringdown circuits work in both directions. In some cases a circuit is designed to work in one direction only. That is, going off-hook at one end (end A) rings the other (end B). Going off-hook at end B has no effect at end A. Ringdown features are often part of a key telephone system. In the wire spring relay key service units of the Bell System 1A2, a model 216 automatic ringdown was used to operate the circuit. In the 400-series units, a number of different KTUs operate (supervise) a ringdown, including the model 415. In other situations, the ringdown is powered and operated by equipment inside the telephone exchange. In the case of enterprises with a private branch exchange (PBX) switch, the ringdown can be operated by the PBX key. The switch is programmed to ring a specific extension (the called phone) when a defined extension (the calling phone) goes off-hook. The PBX does not offer dial tone to the calling extension: it only detects on-hook or off-hook status. Voice over IP adapters can be networked and configured to provide automatic ringdown by selecting a dial plan which replaces the empty string with a predefined number or SIP address, dialed immediately. (Some Cisco VoIP phones and analog adapters treat a dial plan of (S0 <:1234567890>) as a hotline configuration which dials 1-234-567890 zero seconds after the telephone is taken off-hook, for instance). These circuits are used: over high-volume routes where one site calls another very frequently. Example: an information desk and the information desk staff supervisor's desk. where a tamper-proof ability to call from one point to another is needed. 
Example: a phone used to summon a taxicab to an airport or hotel. where a limited ability to contact one entity (but no ability to make outside calls) is desired. Example: a "house phone" in a hotel lobby to the live operator at the hotel's switchboard where the public, or users that are not trained in using a specific office telephone system, must place calls. Example: the after-hours phone to reach the watchman from the front door at a warehouse. in locations where emergencies are handled and the time required to dial digits would cause an unacceptable delay in handling of an emergency. Example: an airport control tower to the airport's fire station or fire dispatch center. Example: Independent System Operator (ISO) communication to a power plant. in situations where the called party needs to be certain of who is calling. Example: a hospital emergency department and an ambulance dispatch center. In some cases, automatic ringdown circuits have one-to-many configurations. When one phone goes off-hook, a group of phones is made to ring simultaneously. In cases where one or both ends of the circuit terminate in a key telephone system, a well designed system will have no hold feature on the ringdown circuit unless supervision provides a Calling Party Control (CPC) signal. PLAR Private line automatic ringdown (PLAR) is a type of analog signaling often used in telephone-based systems. When a device is taken off-hook, ringing voltage is automatically applied to a circuit to alert other stations on the line. When answered on another station, a call is maintained over the circuit. The telephone company switch is not involved in the process, making this a private line. See also Courtesy phone Dedicated line References External links The Last Ringdown, 1980 documentary on the Bryant Pond Telephone Company PLAR Configuration Example, on Cisco Call Manager (CUCM) v.6 Communication circuits Telephony equipment
Ringdown
[ "Engineering" ]
1,220
[ "Telecommunications engineering", "Communication circuits" ]
41,670
https://en.wikipedia.org/wiki/Ringer%20equivalence%20number
The ringer equivalence number (REN) is a telecommunications measure that represents the electrical loading effect of a telephone ringer on a telephone line. In the United States, ringer equivalence was first defined by U.S. Code of Federal Regulations, Title 47, Part 68, based on the load that a standard Bell System model 500 telephone represented, and was later determined in accordance with specification ANSI/TIA-968-B (August 2009). Measurement systems analogous to the REN exist internationally. Definition The ringer equivalence of 1 represents the loading effect of a single traditional telephone ringing circuit, such as that within the Western Electric model 500 telephone. The ringer equivalence of modern telephone equipment may be significantly lower than 1. For example, externally powered electronic ringing telephones may have a value as low as 0.1, while modern line-powered telephones, in which the ringer is powered from the telephone line, typically have a REN of approximately 0.8. In the United States, the FCC Part 68 specification defined REN 1 as equivalent to a 6930 Ω resistor in series with a capacitor. The modern ANSI/TIA-968-B specification (August 2009) defines it in terms of specified impedance limits for type A and type B ringers. Maximum ringer equivalence The total ringer load on a subscriber line is the sum of the ringer equivalences of all devices (phone, fax, a separate answerphone, etc.) connected to the line. This represents the overall loading effect of the subscriber equipment on the central office ringing current source. Subscriber telephone lines are usually limited to support a ringer equivalence of 5, per the federal specifications. If the total allowable ringer load is exceeded, the phone circuit may fail to ring or otherwise malfunction. For example, call waiting, caller ID, and ADSL services are often affected by high ringer load.
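Because the total ringer load is just the sum of the individual RENs, checking a line against the limit is simple arithmetic; the devices and REN values below are illustrative.

```python
# Sum the REN of each device on the line and compare with the US limit of 5.
devices = {"phone": 0.8, "fax": 0.4, "answering machine": 0.2}
total_ren = round(sum(devices.values()), 2)
print(total_ren, total_ren <= 5.0)   # 1.4 True
```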
Some analog telephone adapters for Internet telephony require analog telephones with low REN, for example, the AT&T 210 is a basic phone which does not require an external electrical connection and has a REN of 0.9B. International specifications In the United Kingdom a maximum of 4 is allowed on any British Telecom (BT) line. In Australia a maximum of 3 is allowed on any Telstra or Optus line. In Canada it is called a load number (LN), which must not exceed 100. The LN of each device represents the percentage of total load allowed. In Europe 1 REN used to be equivalent to an 1800 Ω resistor in series with a 1 μF capacitor. The latest ETSI specification (2003–09) calls for 1 REN to be greater than 16 kΩ at 25 Hz and 50 Hz. References ANSI/TIA-968-B ETSI TS 103 021 Telephony equipment Equivalent units
Ringer equivalence number
[ "Mathematics" ]
601
[ "Equivalent units", "Quantity", "Equivalent quantities", "Units of measurement" ]
41,671
https://en.wikipedia.org/wiki/Ring%20latency
In a ring network, such as Token Ring, ring latency is the time required for a signal to propagate once around the ring. Ring latency may be measured in seconds or in bits at the data transmission rate. Ring latency includes signal propagation delays in the ring medium, the drop cables, and the data stations connected to the ring network. References Network protocols
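The two units of measurement mentioned above are related by the data transmission rate; a small sketch (the ring parameters are invented for illustration):

```python
# Sketch: the same ring latency expressed either in seconds or in bits
# at the data transmission rate. The bit measure is simply the number of
# bits "on the ring" (in flight) at any instant.

def latency_in_bits(latency_seconds: float, data_rate_bps: float) -> float:
    """Convert a ring latency in seconds to its equivalent in bits."""
    return latency_seconds * data_rate_bps

# e.g. a 4 Mbit/s token ring whose total propagation, drop-cable and
# station delay comes to 30 microseconds holds 120 bits at once:
print(latency_in_bits(30e-6, 4e6))
```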
Ring latency
[ "Technology" ]
77
[ "Computing stubs", "Computer network stubs" ]
41,672
https://en.wikipedia.org/wiki/Round-trip%20delay
In telecommunications, round-trip delay (RTD) or round-trip time (RTT) is the amount of time it takes for a signal to be sent plus the amount of time it takes for acknowledgement of that signal having been received. This time delay includes propagation times for the paths between the two communication endpoints. In the context of computer networks, the signal is typically a data packet. RTT is commonly used interchangeably with ping time, which can be determined with the ping command. However, ping time may differ from experienced RTT with other protocols since the payload and priority associated with ICMP messages used by ping may differ from that of other traffic. End-to-end delay is the length of time it takes for a signal to travel in one direction and is often approximated as half the RTT. Protocol design RTT is a measure of the amount of time taken for an entire message to be sent to a destination and for a reply to be sent back to the sender. The time to send the message to the destination in its entirety is known as the network latency, and thus RTT is twice the latency in the network plus a processing delay at the destination. The other sources of delay in a network that make up the network latency are processing delay in transmission, propagation time, transmission time and queueing time. Propagation time is dependent on distance. Transmission time for a message is proportional to the message size divided by the bandwidth. Thus higher bandwidth networks will have lower transmission time, but the propagation time will remain unchanged, and so RTT does fall with increased bandwidth, but the delay increasingly represents propagation time. Networks with both high bandwidth and a high RTT (and thus high bandwidth-delay product) can have large amounts of data in transit at any given time. Such long fat networks require a special protocol design. One example is the TCP window scale option. 
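The "large amounts of data in transit" figure is the bandwidth-delay product; a quick sketch with illustrative link figures:

```python
# Bandwidth-delay product: how much data can be "in flight" on a long
# fat network before the first acknowledgement returns. Link numbers
# below are illustrative, not from any particular deployment.

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bytes in transit = bandwidth * RTT, converted from bits to bytes."""
    return bandwidth_bps * rtt_seconds / 8

in_flight = bdp_bytes(1e9, 0.100)   # 1 Gbit/s path with 100 ms RTT
print(in_flight)                    # ~12.5 million bytes -- far beyond the
                                    # 65535-byte window of classic TCP,
                                    # hence the window scale option
```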
The RTT was originally estimated in TCP by: RTT = (α · old_RTT) + ((1 − α) · new_round_trip_sample), where α is a constant weighting factor (0 ≤ α < 1). Choosing a value for α close to 1 makes the weighted average immune to changes that last a short time (e.g., a single segment that encounters long delay). Choosing a value for α close to 0 makes the weighted average respond to changes in delay very quickly. This was improved by the Jacobson/Karels algorithm, which takes standard deviation into account as well. Once a new RTT is calculated, it is entered into the equation above to obtain an average RTT for that connection, and the procedure continues for every new calculation. Wi-Fi Accurate round-trip time measurements over Wi-Fi using IEEE 802.11mc are the basis for the Wi-Fi positioning system. See also Lag (video games) Latency (engineering) Minimum-Pairs Protocol Network delay Time of flight References Computer network technology Telecommunication theory Light
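The weighted-average update described above can be sketched in a few lines (α = 0.875 and the sample values are illustrative, not from any trace):

```python
# Exponentially weighted moving average RTT estimator, in the classic
# form new_RTT = alpha * old_RTT + (1 - alpha) * sample.

def update_rtt(old_rtt: float, sample: float, alpha: float = 0.875) -> float:
    """One smoothing step; 0 <= alpha < 1 weights history vs. the sample."""
    return alpha * old_rtt + (1 - alpha) * sample

rtt = 100.0                                # ms, initial estimate
for sample in (100, 100, 400, 100, 100):   # one segment hits a long delay
    rtt = update_rtt(rtt, sample)
print(rtt)  # with alpha near 1, the single 400 ms outlier moves the
            # estimate only modestly and it decays back toward 100 ms
```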
Round-trip delay
[ "Physics" ]
574
[ "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Waves", "Light" ]
41,675
https://en.wikipedia.org/wiki/Rural%20radio%20service
Rural radiotelephone service (RRTS) provides basic, analog communications service between locations deemed so remote that traditional wireline service or service by other means is not feasible. RRTS uses channelized radio to provide radiotelephone services such as Basic Exchange Telephone Radio Service between a fixed subscriber location and a remote central office, private line service between two fixed locations, or interconnection between two or more central offices. RRTS does not enable mobile communications. Licensing In the United States, the Federal Communications Commission issues initial rural radiotelephone service licenses on a site-by-site basis. Once a license is issued, the licensee can sell or lease the license to another party. The FCC service rules for rural radiotelephone are filed in 47 C.F.R. part 22 subpart F. Technical information In the United States, the ULS radio service code and description for rural radiotelephone licenses is CR – Rural Radiotelephone. The licensed spectrum is divided into 44 channels of 20 kHz each. References Telephone services Wireless
Rural radio service
[ "Engineering" ]
220
[ "Wireless", "Telecommunications engineering" ]
41,680
https://en.wikipedia.org/wiki/Scrambler
In telecommunications, a scrambler is a device that transposes or inverts signals or otherwise encodes a message at the sender's side to make the message unintelligible at a receiver not equipped with an appropriately set descrambling device. Whereas encryption usually refers to operations carried out in the digital domain, scrambling usually refers to operations carried out in the analog domain. Scrambling is accomplished by the addition of components to the original signal or the changing of some important component of the original signal in order to make extraction of the original signal difficult. Examples of the latter might include removing or changing vertical or horizontal sync pulses in television signals; televisions will not be able to display a picture from such a signal. Some modern scramblers are actually encryption devices, the name remaining due to the similarities in use, as opposed to internal operation. In telecommunications and recording, a scrambler (also referred to as a randomizer) is a device that manipulates a data stream before transmitting. The manipulations are reversed by a descrambler at the receiving side. Scrambling is widely used in satellite, radio relay communications and PSTN modems. A scrambler can be placed just before a FEC coder, or it can be placed after the FEC, just before the modulation or line code. A scrambler in this context has nothing to do with encrypting, as the intent is not to render the message unintelligible, but to give the transmitted data useful engineering properties. A scrambler replaces sequences (referred to as whitening sequences) with other sequences without removing undesirable sequences, and as a result it changes the probability of occurrence of vexatious sequences. Clearly it is not foolproof as there are input sequences that yield all-zeros, all-ones, or other undesirable periodic output sequences. 
A scrambler is therefore not a good substitute for a line code, which, through a coding step, removes unwanted sequences. Purposes of scrambling A scrambler (or randomizer) can be either: An algorithm that converts an input string into a seemingly random output string of the same length (e.g., by pseudo-randomly selecting bits to invert), thus avoiding long sequences of bits of the same value; in this context, a randomizer is also referred to as a scrambler. An analog or digital source of unpredictable (i.e., high entropy), unbiased, and usually independent (i.e., random) output bits. A "truly" random generator may be used to feed a (more practical) deterministic pseudo-random number generator, which extends the random seed value. There are two main reasons why scrambling is used: To enable accurate timing recovery on receiver equipment without resorting to redundant line coding. It facilitates the work of a timing recovery circuit (see also clock recovery), an automatic gain control and other adaptive circuits of the receiver (eliminating long sequences consisting of '0' or '1' only). For energy dispersal on the carrier, reducing inter-carrier signal interference. It eliminates the dependence of a signal's power spectrum upon the actual transmitted data, making it more dispersed to meet maximum power spectral density requirements (because if the power is concentrated in a narrow frequency band, it can interfere with adjacent channels due to the intermodulation (also known as cross-modulation) caused by non-linearities of the receiving tract). Scramblers are essential components of physical layer system standards besides interleaved coding and modulation. They are usually defined based on linear-feedback shift registers (LFSRs) due to their good statistical properties and ease of implementation in hardware. It is common for physical layer standards bodies to refer to lower-layer (physical layer and link layer) encryption as scrambling as well.
This may well be because (traditional) mechanisms employed are based on feedback shift registers as well. Some standards for digital television, such as DVB-CA and MPE, refer to encryption at the link layer as scrambling. Types of scramblers Additive (synchronous) scramblers Additive scramblers (they are also referred to as synchronous) transform the input data stream by applying a pseudo-random binary sequence (PRBS) (by modulo-two addition). Sometimes a pre-calculated PRBS stored in read-only memory is used, but more often it is generated by a linear-feedback shift register (LFSR). In order to assure synchronous operation of the transmitting and receiving LFSR (that is, scrambler and descrambler), a sync-word must be used. A sync-word is a pattern that is placed in the data stream at equal intervals (that is, in each frame). A receiver searches for a few sync-words in adjacent frames and hence determines the point at which its LFSR must be reloaded with a pre-defined initial state. The additive descrambler is just the same device as the additive scrambler. An additive scrambler/descrambler is defined by the polynomial of its LFSR and its initial state. Multiplicative (self-synchronizing) scramblers Multiplicative scramblers (also known as feed-through) are called so because they perform a multiplication of the input signal by the scrambler's transfer function in Z-space. They are discrete linear time-invariant systems. A multiplicative scrambler is recursive, and a multiplicative descrambler is non-recursive. Unlike additive scramblers, multiplicative scramblers do not need frame synchronization, which is why they are also called self-synchronizing. A multiplicative scrambler/descrambler is similarly defined by a polynomial, which is also the transfer function of the descrambler.
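Both types described above can be sketched in a few lines of Python. The polynomials and data here are chosen purely for illustration, not taken from any particular standard: the additive scrambler XORs the data with an LFSR-generated PRBS (the descrambler is the identical operation), while the multiplicative scrambler is recursive and its descrambler is non-recursive.

```python
# Toy additive and multiplicative scramblers (illustrative polynomials).

def prbs(nbits, state=0b1111111):
    """PRBS from a 7-bit Fibonacci LFSR, polynomial x^7 + x^6 + 1."""
    out = []
    for _ in range(nbits):
        fb = ((state >> 6) ^ (state >> 5)) & 1  # taps at bits 7 and 6
        out.append(fb)
        state = ((state << 1) | fb) & 0x7F
    return out

def additive(bits, seed=0b1111111):
    """Additive scrambler; applying it twice with the same seed descrambles."""
    return [b ^ p for b, p in zip(bits, prbs(len(bits), seed))]

def mult_scramble(bits, taps=(3, 5)):
    """Recursive multiplicative scrambler (feedback from past outputs)."""
    reg, out = [0] * max(taps), []
    for b in bits:
        s = b ^ reg[taps[0] - 1] ^ reg[taps[1] - 1]
        out.append(s)
        reg = [s] + reg[:-1]
    return out

def mult_descramble(bits, taps=(3, 5)):
    """Non-recursive descrambler; self-synchronizes from the line bits."""
    reg, out = [0] * max(taps), []
    for s in bits:
        out.append(s ^ reg[taps[0] - 1] ^ reg[taps[1] - 1])
        reg = [s] + reg[:-1]
    return out

run_of_zeros = [0] * 16                 # worst case for timing recovery
tx = additive(run_of_zeros)             # the run is broken up on the line
assert tx != run_of_zeros
assert additive(tx) == run_of_zeros     # additive round trip

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
assert mult_descramble(mult_scramble(msg)) == msg   # multiplicative round trip
```

Flipping one bit of `mult_scramble`'s output before descrambling would corrupt three output bits (the bit itself plus one per feedback tap), illustrating the error-multiplication drawback discussed for multiplicative scramblers.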
Comparison of scramblers Scramblers have certain drawbacks: Both types may fail to generate random sequences under worst-case input conditions. Multiplicative scramblers lead to error multiplication during descrambling (i.e. a single-bit error at the descrambler's input will result in w errors at its output, where w equals the number of the scrambler's feedback taps). Additive scramblers must be reset by the frame sync; if this fails, massive error propagation will result, as a complete frame cannot be descrambled. (Alternatively, if the transmitted data are known, the descrambler can be resynchronized from them.) The effective length of the random sequence of an additive scrambler is limited by the frame length, which is normally much shorter than the period of the PRBS. By adding frame numbers to the frame sync, it is possible to extend the length of the random sequence, by varying the random sequence in accordance with the frame number. Noise The first voice scramblers were invented at Bell Labs in the period just before World War II. These sets consisted of electronics that could mix two signals or alternatively "subtract" one signal back out again. The two signals were provided by a telephone and a record player. A matching pair of records was produced, each containing the same recording of noise. The recording was played into the telephone, and the mixed signal was sent over the wire. The noise was then subtracted out at the far end using the matching record, leaving the original voice signal intact. Eavesdroppers would hear only the noisy signal, unable to understand the voice. One of those, used (among other duties) for telephone conversations between Winston Churchill and Franklin D. Roosevelt, was intercepted and unscrambled by the Germans. At least one German engineer had worked at Bell Labs before the war and came up with a way to break them. Later versions were sufficiently different that the German team was unable to unscramble them.
Early versions were known as "A-3" (from AT&T Corporation). An unrelated device called SIGSALY was used for higher-level voice communications. The noise was provided on large shellac phonograph records made in pairs, shipped as needed, and destroyed after use. This worked, but was enormously awkward. Just achieving synchronization of the two records proved difficult. Post-war electronics made such systems much easier to work with by creating pseudo-random noise based on a short input tone. In use, the caller would play a tone into the phone, and both scrambler units would then listen to the signal and synchronize to it. This provided limited security, however, as any listener with a basic knowledge of the electronic circuitry could often produce a machine of similar-enough settings to break into the communications. Cryptographic It was the need to synchronize the scramblers that suggested to James H. Ellis the idea for non-secret encryption, which ultimately led to the invention of both the RSA encryption algorithm and Diffie–Hellman key exchange well before either was reinvented publicly by Rivest, Shamir, and Adleman, or by Diffie and Hellman. The latest scramblers are not scramblers in the truest sense of the word, but rather digitizers combined with encryption machines. In these systems the original signal is first converted into digital form, and then the digital data is encrypted and sent. Using modern public-key systems, these "scramblers" are much more secure than their earlier analog counterparts. Only these types of systems are considered secure enough for sensitive data. Voice inversion scrambling can be as simple as inverting the frequency bands around a static point to various complex methods of changing the inversion point randomly and in real time and using multiple bands. 
Voice inversion with a fixed frequency offers no security at all and software is available to restore the original voice, which is why it is no longer used to protect conversations today. However, voice inversion is still found in low-end Chinese walkie talkies. The "scramblers" used in cable television are designed to prevent casual signal theft, not to provide any real security. Early versions of these devices simply "inverted" one important component of the TV signal, re-inverting it at the client end for display. Later devices were only slightly more complex, filtering out that component entirely and then adding it by examining other portions of the signal. In both cases the circuitry could be easily built by any reasonably knowledgeable hobbyist. (see Television encryption.) Electronic kits for scrambling and descrambling are available from hobbyist suppliers. Scanner enthusiasts often use them to listen in to scrambled communications at car races and some public-service transmissions. It is also common in FRS radios. This is an easy way to learn about scrambling. The term "scrambling" is sometimes incorrectly used when jamming is meant. Descramble Descramble in cable television context is the act of taking a scrambled or encrypted video signal that has been provided by a cable television company for premium television services, processed by a scrambler and then supplied over a coaxial cable and delivered to the household where a set-top box reprocesses the signal, thus descrambling it and making it available for viewing on the television set. A descrambler is a device that restores the picture and sound of a scrambled channel. A descrambler must be used with a cable converter box to be able to unencrypt all of the premium & pay-per-view channels of a Cable Television System. 
See also Ciphony Cryptography Cryptochannel One-time pad Secure voice Secure telephone Satellite modem SIGSALY Voice inversion References External links and references DVB framing structure, channel coding and modulation for 11/12 GHz satellite services (EN 300 421) V.34 ITU-T recommendation Intelsat Earth Station Standard IESS-308 Cryptography Line codes Applications of randomness Satellite broadcasting Telecommunications equipment Television terminology
Scrambler
[ "Mathematics", "Engineering" ]
2,461
[ "Cybersecurity engineering", "Telecommunications engineering", "Cryptography", "Applied mathematics", "Satellite broadcasting" ]
41,685
https://en.wikipedia.org/wiki/Security%20kernel
In telecommunications, the term security kernel has the following meanings: In computer and communications security, the central part of a computer or communications system hardware, firmware, and software that implements the basic security procedures for controlling access to system resources. A self-contained usually small collection of key security-related statements that (a) works as a part of an operating system to prevent unauthorized access to, or use of, the system and (b) contains criteria that must be met before specified programs can be accessed. Hardware, firmware, and software elements of a trusted computing base that implement the reference monitor concept. References National Information Systems Security Glossary Computing terminology
Security kernel
[ "Technology" ]
132
[ "Computing terminology" ]
41,686
https://en.wikipedia.org/wiki/Security%20management
Security management is the identification of an organization's assets (including people, buildings, machines, systems and information assets), followed by the development, documentation, and implementation of policies and procedures for protecting those assets. An organization uses such security management procedures for information classification, threat assessment, risk assessment, and risk analysis to identify threats, categorize assets, and rate system vulnerabilities. Loss prevention Loss prevention focuses on identifying an organization's critical assets and how to protect them. A key component of loss prevention is assessing the potential threats to the successful achievement of the goal. This must include the potential opportunities that further the objective (why take the risk unless there's an upside?). Balancing probability against impact, one then determines and implements measures to minimize or eliminate those threats. Security management includes the theories, concepts, ideas, methods, procedures, and practices that are used to manage and control organizational resources in order to accomplish security goals. Policies, procedures, administration, operations, training, awareness campaigns, financial management, contracting, resource allocation, and dealing with problems like security degradation are all included in this vast sector. Security risk management The management of security risks applies the principles of risk management to the management of security threats. It consists of identifying threats (or risk causes), assessing the effectiveness of existing controls to face those threats, determining the risks' consequence(s), prioritizing the risks by rating the likelihood and impact, classifying the type of risk, and selecting an appropriate risk option or risk response. In 2016, a universal standard for managing risks was developed in The Netherlands. In 2017, it was updated and named the Universal Security Management Systems Standard 2017.
Types of risks External Strategic: Competition and customer demand. Operational: Regulations, suppliers, and contract. Financial: FX and credit. Hazard: Natural disasters, cyber, and external criminal acts. Compliance: New regulatory or legal requirements are introduced, or existing ones are changed, exposing the organization to a non-compliance risk if measures are not taken to ensure compliance. Internal Strategic: R&D. Operational: Systems and processes (HR, payroll). Financial: Liquidity and cash flow. Hazard: Safety and security; employees and equipment. Compliance: Concrete or potential changes in an organization's systems, processes, suppliers, etc. may create exposure to a legal or regulatory non-compliance. Risk options Risk avoidance The first choice to be considered is the possibility of eliminating the existence of criminal opportunity or avoiding the creation of such an opportunity, provided that this action does not create additional considerations or factors that would pose a greater risk. For example, removing all the cash flow from a retail outlet would eliminate the opportunity for stealing the money, but it would also eliminate the ability to conduct business. Risk reduction When avoiding or eliminating the criminal opportunity conflicts with the ability to conduct business, the next step is reducing the opportunity of potential loss to the lowest level consistent with the function of the business. In the example above, the application of risk reduction might result in the business keeping only enough cash on hand for one day's operation.
The idea is to reduce the time available for thieves to steal assets and escape without apprehension. Risk transfer The two primary methods of accomplishing risk transfer are to insure the assets or raise prices to cover the loss in the event of a criminal act. Generally speaking, when the first three steps have been properly applied, the cost of transferring risks is much lower. Risk acceptance All of the remaining risks must simply be assumed by the business as a part of doing business. Included with these accepted losses are deductibles, which have been made as part of the insurance coverage. Security policy implementations Intrusion detection Alarm device. Access control Locks, simple or sophisticated, such as biometric authentication and keycard locks. Physical security Environmental elements (ex. Mountains, Trees, etc.). Barricade. Security guards (armed or unarmed) with wireless communication devices (e.g., two-way radio). Security lighting (spotlight, etc.). Security Cameras. Motion Detectors. IBNS containers for cash in transit. Procedures Coordination with law enforcement agencies. Fraud management. Risk Management. CPTED. Risk Analysis. Risk Mitigation. Contingency Planning. See also Alarm management IT risk IT risk management ITIL security management, an information security management system standard based on ISO/IEC 27001 Physical security Retail loss prevention Security Security policy Gordon–Loeb model for cyber security investments References Further reading BBC NEWS | In Depth. BBC News - Home. Web. 18 Mar. 2011. <http://news.bbc.co.uk/2/shared/spl/hi/guides/456900/456993/html/>. Rattner, Daniel. "Loss Prevention & Risk Management Strategy." Security Management. Northeastern University, Boston. 5 Mar. 2010. Lecture. Rattner, Daniel. "Risk Assessments." Security Management. Northeastern University, Boston. 15 Mar. 2010. Lecture. Rattner, Daniel. "Internal & External Threats." Security Management. Northeastern University, Boston. 8 April. 2010.
Lecture. Asset Protection and Security Management Handbook, POA Publishing LLC, 2003, p. 358 ISO 31000 Risk management — Principles and guidelines, 2009, p. 7 Universal Security Management Systems Standard 2017 - Requirements and guidance for use, 2017, p. 50 Security Management Training & TSCM Training Network management Computer security procedures
Security management
[ "Engineering" ]
1,183
[ "Cybersecurity engineering", "Computer networks engineering", "Computer security procedures", "Network management" ]
41,687
https://en.wikipedia.org/wiki/Self-synchronizing%20code
In coding theory, especially in telecommunications, a self-synchronizing code is a uniquely decodable code in which the symbol stream formed by a portion of one code word, or by the overlapped portion of any two adjacent code words, is not a valid code word. Put another way, a set of strings (called "code words") over an alphabet is called a self-synchronizing code if for each string obtained by concatenating two code words, the substring starting at the second symbol and ending at the second-last symbol does not contain any code word as substring. Every self-synchronizing code is a prefix code, but not all prefix codes are self-synchronizing. Other terms for self-synchronizing code are synchronized code or, ambiguously, comma-free code. A self-synchronizing code permits the proper framing of transmitted code words provided that no uncorrected errors occur in the symbol stream; external synchronization is not required. Self-synchronizing codes also allow recovery from uncorrected errors in the stream; with most prefix codes, an uncorrected error in a single bit may propagate errors further in the stream and make the subsequent data corrupted. The importance of self-synchronizing codes is not limited to data transmission. Self-synchronization also facilitates some cases of data recovery, for example of a digitally encoded text. Examples UTF-8 is self-synchronizing because the leading byte (11xxxxxx) and subsequent bytes (10xxxxxx) of a multi-byte code point have different bit patterns. High Level Data Link Control (HDLC) Advanced Data Communication Control Procedures (ADCCP) Fibonacci coding Counterexamples: The prefix code {00, 11} is not self-synchronizing: the concatenation 0000 of 00 with itself contains the code word 00 as an interior substring (and likewise 1111 contains 11). The prefix code {ab,ba} is not self-synchronizing because abab contains ba. The prefix code b∗a (using the Kleene star) is not self-synchronizing (even though any new code word simply starts after a) because code word ba contains code word a.
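The definition above can be checked mechanically. A small sketch for finite codes (the single-word code in the last line is an invented example):

```python
# Direct test of the definition: a code is self-synchronizing if, for
# every concatenation of two code words, the interior substring (second
# through second-to-last symbol) contains no code word.

from itertools import product

def is_self_synchronizing(code):
    for u, v in product(code, repeat=2):
        interior = (u + v)[1:-1]
        if any(w in interior for w in code):
            return False
    return True

print(is_self_synchronizing({"00", "11"}))  # False: interior of 0000 is 00
print(is_self_synchronizing({"ab", "ba"}))  # False: abab contains ba
print(is_self_synchronizing({"101"}))       # True: 101101's interior 0110
                                            # contains no code word
```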
See also Bit slip Comma code Consistent overhead byte stuffing Dynkin sequence Kraus principle Kruskal's principle Overlapping instructions Pollard's lambda method Self-clocking signal Self-synchronizing block code References Further reading MIL-STD-188 Line codes Synchronization
Self-synchronizing code
[ "Engineering" ]
536
[ "Telecommunications engineering", "Synchronization" ]
41,695
https://en.wikipedia.org/wiki/Shadow%20loss
In telecommunications, the term shadow loss has the following meanings: The attenuation caused to a radio signal by obstructions in the propagation path. In a reflector antenna, the relative reduction in the effective aperture of the antenna caused by the masking effect of other antenna parts, such as a feed horn or a secondary reflector, which parts obstruct the radiation path. References Radio frequency propagation
Shadow loss
[ "Physics" ]
83
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
41,700
https://en.wikipedia.org/wiki/Shot%20noise
Shot noise or Poisson noise is a type of noise which can be modeled by a Poisson process. In electronics shot noise originates from the discrete nature of electric charge. Shot noise also occurs in photon counting in optical devices, where shot noise is associated with the particle nature of light. Origin In a statistical experiment such as tossing a fair coin and counting the occurrences of heads and tails, the numbers of heads and tails after many throws will differ by only a tiny percentage, while after only a few throws outcomes with a significant excess of heads over tails or vice versa are common; if an experiment with a few throws is repeated over and over, the outcomes will fluctuate a lot. From the law of large numbers, one can show that the relative fluctuations reduce as the reciprocal square root of the number of throws, a result valid for all statistical fluctuations, including shot noise. Shot noise exists because phenomena such as light and electric current consist of the movement of discrete (also called "quantized") 'packets'. Consider light—a stream of discrete photons—coming out of a laser pointer and hitting a wall to create a visible spot. The fundamental physical processes that govern light emission are such that these photons are emitted from the laser at random times; but the many billions of photons needed to create a spot are so many that the brightness, the number of photons per unit of time, varies only infinitesimally with time. However, if the laser brightness is reduced until only a handful of photons hit the wall every second, the relative fluctuations in number of photons, i.e., brightness, will be significant, just as when tossing a coin a few times. These fluctuations are shot noise. The concept of shot noise was first introduced in 1918 by Walter Schottky who studied fluctuations of current in vacuum tubes. 
Shot noise may be dominant when the finite number of particles that carry energy (such as electrons in an electronic circuit or photons in an optical device) is sufficiently small so that uncertainties due to the Poisson distribution, which describes the occurrence of independent random events, are significant. It is important in electronics, telecommunications, optical detection, and fundamental physics. The term can also be used to describe any noise source, even if solely mathematical, of similar origin. For instance, particle simulations may produce a certain amount of "noise", where because of the small number of particles simulated, the simulation exhibits undue statistical fluctuations which don't reflect the real-world system. The magnitude of shot noise increases according to the square root of the expected number of events, such as the electric current or intensity of light. But since the strength of the signal itself increases more rapidly, the relative proportion of shot noise decreases and the signal-to-noise ratio (considering only shot noise) increases anyway. Thus shot noise is most frequently observed with small currents or low light intensities that have been amplified. Signal-to-Noise For large numbers, the Poisson distribution approaches a normal distribution about its mean, and the elementary events (photons, electrons, etc.) are no longer individually observed, typically making shot noise in actual observations indistinguishable from true Gaussian noise. Since the standard deviation of shot noise is equal to the square root of the average number of events N, the signal-to-noise ratio (SNR) is given by: SNR = N/√N = √N. Thus when N is very large, the signal-to-noise ratio is very large as well, and any relative fluctuations in N due to other sources are more likely to dominate over shot noise. However, when the other noise source is at a fixed level, such as thermal noise, or grows slower than √N, increasing N (the DC current or light level, etc.)
can lead to dominance of shot noise. Properties Electronic devices Shot noise in electronic circuits consists of random fluctuations of DC current, which is due to electric current being the flow of discrete charges (electrons). Because the electron has such a tiny charge, however, shot noise is of relative insignificance in many (but not all) cases of electrical conduction. For instance 1 ampere of current consists of about electrons per second; even though this number will randomly vary by several billion in any given second, such a fluctuation is minuscule compared to the current itself. In addition, shot noise is often less significant as compared with two other noise sources in electronic circuits, flicker noise and Johnson–Nyquist noise. However, shot noise is temperature and frequency independent, in contrast to Johnson–Nyquist noise, which is proportional to temperature, and flicker noise, with the spectral density decreasing with increasing frequency. Therefore, at high frequencies and low temperatures shot noise may become the dominant source of noise. With very small currents and considering shorter time scales (thus wider bandwidths) shot noise can be significant. For instance, a microwave circuit operates on time scales of less than a nanosecond and if we were to have a current of 16 nanoamperes that would amount to only 100 electrons passing every nanosecond. According to Poisson statistics the actual number of electrons in any nanosecond would vary by 10 electrons rms, so that one sixth of the time less than 90 electrons would pass a point and one sixth of the time more than 110 electrons would be counted in a nanosecond. Now with this small current viewed on this time scale, the shot noise amounts to 1/10 of the DC current itself. 
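The square-root statistics described above are easy to reproduce numerically. A quick Monte Carlo sketch (Knuth's uniform-product sampling method, which is adequate for the modest means used here; the counts and seed are arbitrary):

```python
# Draw Poisson-distributed event counts with mean N and verify that the
# observed SNR (mean / standard deviation) is close to sqrt(N).

import math
import random

def poisson_sample(lam, rng):
    """One Poisson variate via Knuth's method: count uniform draws
    until their running product falls below exp(-lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(42)
for n in (25, 400):
    counts = [poisson_sample(n, rng) for _ in range(2000)]
    mean = sum(counts) / len(counts)
    std = math.sqrt(sum((c - mean) ** 2 for c in counts) / len(counts))
    print(f"N={n}: SNR = {mean / std:.1f}  (sqrt(N) = {math.sqrt(n):.1f})")
```

Quadrupling the mean count roughly doubles the SNR, which is the √N scaling: the fluctuations grow, but more slowly than the signal.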
The result by Schottky, based on the assumption that the statistics of electron passage is Poissonian, reads S(ω) = 2e⟨I⟩ for the spectral noise density at the frequency ω, where e is the electron charge, and ⟨I⟩ is the average current of the electron stream. The noise spectral power is frequency independent, which means the noise is white. This can be combined with the Landauer formula, which relates the average current with the transmission eigenvalues Tₙ of the contact through which the current is measured (n labels transport channels). In the simplest case, these transmission eigenvalues can be taken to be energy independent and so the Landauer formula is ⟨I⟩ = (2e²/h) V Σₙ Tₙ, where V is the applied voltage. This provides for S = 2e⟨I⟩ = (4e³/h) V Σₙ Tₙ, commonly referred to as the Poisson value of shot noise, S_P. This is a classical result in the sense that it does not take into account that electrons obey Fermi–Dirac statistics. The correct result takes into account the quantum statistics of electrons and reads (at zero temperature) S = (4e³/h) V Σₙ Tₙ(1 − Tₙ). It was obtained in the 1990s by Viktor Khlus, Gordey Lesovik (independently the single-channel case), and Markus Büttiker (multi-channel case). This noise is white and is always suppressed with respect to the Poisson value. The degree of suppression, F = S/S_P = Σₙ Tₙ(1 − Tₙ) / Σₙ Tₙ, is known as the Fano factor. Noises produced by different transport channels are independent. Fully open (Tₙ = 1) and fully closed (Tₙ = 0) channels produce no noise, since there are no irregularities in the electron stream. At finite temperature, a closed expression for noise can be written as well. It interpolates between shot noise (zero temperature) and Nyquist-Johnson noise (high temperature). Examples Tunnel junction is characterized by low transmission in all transport channels, therefore the electron flow is Poissonian, and the Fano factor equals one. Quantum point contact is characterized by an ideal transmission in all open channels, therefore it does not produce any noise, and the Fano factor equals zero.
The exception is the step between plateaus, when one of the channels is partially open and produces noise. A metallic diffusive wire has a Fano factor of 1/3 regardless of the geometry and the details of the material. In a 2DEG exhibiting the fractional quantum Hall effect, electric current is carried by quasiparticles moving at the sample edge whose charge is a rational fraction of the electron charge. The first direct measurement of their charge was through the shot noise in the current. Effects of interactions While this is the result when the electrons contributing to the current occur completely randomly, unaffected by each other, there are important cases in which these natural fluctuations are largely suppressed due to a charge build up. Take the previous example in which an average of 100 electrons go from point A to point B every nanosecond. During the first half of a nanosecond we would expect 50 electrons to arrive at point B on the average, but in a particular half nanosecond there might well be 60 electrons which arrive there. This will create a more negative electric charge at point B than average, and that extra charge will tend to oppose the further flow of electrons leaving point A during the remaining half nanosecond. Thus the net current integrated over a nanosecond will tend more to stay near its average value of 100 electrons rather than exhibiting the expected fluctuations (10 electrons rms) we calculated. This is the case in ordinary metallic wires and in metal film resistors, where shot noise is almost completely cancelled due to this anti-correlation between the motion of individual electrons, acting on each other through the coulomb force. However this reduction in shot noise does not apply when the current results from random events at a potential barrier which all the electrons must overcome due to a random excitation, such as by thermal activation. This is the situation in p-n junctions, for instance.
A semiconductor diode is thus commonly used as a noise source by passing a particular DC current through it. In other situations interactions can lead to an enhancement of shot noise, which is the result of super-Poissonian statistics. For example, in a resonant tunneling diode the interplay of electrostatic interaction and of the density of states in the quantum well leads to a strong enhancement of shot noise when the device is biased in the negative differential resistance region of the current-voltage characteristics. Shot noise is distinct from voltage and current fluctuations expected in thermal equilibrium; this occurs without any applied DC voltage or current flowing. These fluctuations are known as Johnson–Nyquist noise or thermal noise and increase in proportion to the Kelvin temperature of any resistive component. However both are instances of white noise and thus cannot be distinguished simply by observing them even though their origins are quite dissimilar. Since shot noise is a Poisson process due to the finite charge of an electron, one can compute the root mean square current fluctuations as being of a magnitude σᵢ = √(2qIΔf), where q is the elementary charge of an electron, Δf is the single-sided bandwidth in hertz over which the noise is considered, and I is the DC current flowing. For a current of 100 mA, measuring the current noise over a bandwidth of 1 Hz, we obtain σᵢ ≈ 0.18 nA. If this noise current is fed through a resistor R, a noise voltage of σᵥ = σᵢ·R would be generated. Coupling this noise through a capacitor, one could supply a noise power of P = σᵥ²/(4R) = qIΔf·R/2 to a matched load. Detectors The flux signal that is incident on a detector is calculated as follows, in units of photons: Φ = P_opt·λ/(hc), where P_opt is the incident optical power at wavelength λ, c is the speed of light, and h is the Planck constant.
Following Poisson statistics, the photon noise is calculated as the square root of the signal: Noise = √Φ. The SNR for a CCD camera can be calculated from the following equation: SNR = (I·QE·t) / √(I·QE·t + Nd·t + Nr²), where: I = photon flux (photons/pixel/second), QE = quantum efficiency, t = integration time (seconds), Nd = dark current (electrons/pixel/sec), Nr = read noise (electrons). Optics In optics, shot noise describes the fluctuations of the number of photons detected (or simply counted in the abstract) because they occur independently of each other. This is therefore another consequence of discretization, in this case of the energy in the electromagnetic field in terms of photons. In the case of photon detection, the relevant process is the random conversion of photons into photo-electrons for instance, thus leading to a larger effective shot noise level when using a detector with a quantum efficiency below unity. Only in an exotic squeezed coherent state can the number of photons measured per unit time have fluctuations smaller than the square root of the expected number of photons counted in that period of time. Of course there are other mechanisms of noise in optical signals which often dwarf the contribution of shot noise. When these are absent, however, optical detection is said to be "photon noise limited" as only the shot noise (also known as "quantum noise" or "photon noise" in this context) remains. Shot noise is easily observable in the case of photomultipliers and avalanche photodiodes used in the Geiger mode, where individual photon detections are observed. However the same noise source is present with higher light intensities measured by any photo detector, and is directly measurable when it dominates the noise of the subsequent electronic amplifier.
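The rms shot-noise current discussed in the electronics section and the CCD signal-to-noise expression from the detector discussion can both be sketched numerically. The helper names and the parameter values below are illustrative only:

```python
import math

Q_E = 1.602e-19  # elementary charge, coulombs

def shot_noise_current(i_dc: float, bandwidth: float) -> float:
    """RMS shot-noise current sqrt(2*q*I*df) for DC current i_dc over bandwidth df (Hz)."""
    return math.sqrt(2 * Q_E * i_dc * bandwidth)

def ccd_snr(flux: float, qe: float, t: float, dark: float, read: float) -> float:
    """CCD SNR: signal I*QE*t over the root sum of photon shot noise, dark current and read noise."""
    signal = flux * qe * t  # collected photo-electrons
    return signal / math.sqrt(signal + dark * t + read ** 2)

print(shot_noise_current(0.1, 1.0))  # ~1.8e-10 A, i.e. about 0.18 nA, matching the 100 mA example
```

With dark current and read noise set to zero, `ccd_snr` reduces to the square root of the collected electron count, i.e. the pure photon-noise limit.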
Just as with other forms of shot noise, the fluctuations in a photo-current due to shot noise scale as the square-root of the average intensity: ΔI ∝ √⟨I⟩. The shot noise of a coherent optical beam (having no other noise sources) is a fundamental physical phenomenon, reflecting quantum fluctuations in the electromagnetic field. In optical homodyne detection, the shot noise in the photodetector can be attributed to either the zero point fluctuations of the quantised electromagnetic field, or to the discrete nature of the photon absorption process. However, shot noise itself is not a distinctive feature of quantised field and can also be explained through semiclassical theory. What the semiclassical theory does not predict, however, is the squeezing of shot noise. Shot noise also sets a lower bound on the noise introduced by quantum amplifiers which preserve the phase of an optical signal. See also Johnson–Nyquist noise or thermal noise 1/f noise Burst noise Contact resistance Image noise Quantum efficiency References Electronics concepts Noise (electronics) Electrical parameters Quantum optics Poisson point processes Mesoscopic physics
Shot noise
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
2,805
[ "Point (geometry)", "Electrical parameters", "Quantum optics", "Quantum mechanics", "Point processes", "Condensed matter physics", "Electrical engineering", "Mesoscopic physics", "Poisson point processes" ]
41,702
https://en.wikipedia.org/wiki/Signal%20compression
Signal compression is the use of various techniques to increase the quality or quantity of signal parameters transmitted through a given telecommunications channel. Types of signal compression include: Bandwidth compression Data compression Dynamic range compression Gain compression Image compression Lossy compression One-way compression function Compression Telecommunications techniques
Signal compression
[ "Technology", "Engineering" ]
66
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
41,705
https://en.wikipedia.org/wiki/Signal-to-crosstalk%20ratio
The signal-to-crosstalk ratio at a specified point in a circuit is the ratio of the power of the wanted signal to the power of the unwanted signal from another channel. The signals are adjusted in each channel so that they are of equal power at the zero transmission level point in their respective channels. The signal-to-crosstalk ratio is usually expressed in dB. References Engineering ratios Telecommunications
Signal-to-crosstalk ratio
[ "Mathematics", "Technology", "Engineering" ]
81
[ "Information and communications technology", "Metrics", "Engineering ratios", "Quantity", "Telecommunications" ]
41,706
https://en.wikipedia.org/wiki/Signal-to-noise%20ratio
Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to noise power, often expressed in decibels. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise. SNR is an important parameter that affects the performance and quality of systems that process or transmit signals, such as communication systems, audio systems, radar systems, imaging systems, and data acquisition systems. A high SNR means that the signal is clear and easy to detect or interpret, while a low SNR means that the signal is corrupted or obscured by noise and may be difficult to distinguish or recover. SNR can be improved by various methods, such as increasing the signal strength, reducing the noise level, filtering out unwanted noise, or using error correction techniques. SNR also determines the maximum possible amount of data that can be transmitted reliably over a given channel, which depends on its bandwidth and SNR. This relationship is described by the Shannon–Hartley theorem, which is a fundamental law of information theory. SNR can be calculated using different formulas depending on how the signal and noise are measured and defined. The most common way to express SNR is in decibels, which is a logarithmic scale that makes it easier to compare large or small values. Other definitions of SNR may use different factors or bases for the logarithm, depending on the context and application. Definition One definition of signal-to-noise ratio is the ratio of the power of a signal (meaningful input) to the power of background noise (meaningless or unwanted input): SNR = P_signal / P_noise, where P is average power. Both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth.
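The power-ratio definition, and its usual decibel form (covered in detail later in the article), can be sketched as follows; the function names are illustrative:

```python
import math

def snr_power(p_signal: float, p_noise: float) -> float:
    """SNR as the ratio of average signal power to average noise power."""
    return p_signal / p_noise

def snr_db(p_signal: float, p_noise: float) -> float:
    """The same ratio expressed in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(p_signal / p_noise)

print(snr_power(2.0, 1.0))  # 2.0 -> more signal than noise
print(snr_db(2.0, 1.0))     # ~3.01 dB (a ratio above 1:1 is above 0 dB)
```

Note that both powers must be measured at the same point and within the same bandwidth for the ratio to be meaningful, as the definition above requires.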
The signal-to-noise ratio of a random variable S to random noise N is: SNR = E[S²] / E[N²], where E refers to the expected value, which in this case is the mean square of N. If the signal is simply a constant value of s, this equation simplifies to: SNR = s² / E[N²]. If the noise has expected value of zero, as is common, the denominator is its variance σ_N², the square of its standard deviation σ_N. The signal and the noise must be measured the same way, for example as voltages across the same impedance. Their root mean squares can alternatively be used according to: SNR = (A_signal / A_noise)², where A is root mean square (RMS) amplitude (for example, RMS voltage). Decibels Because many signals have a very wide dynamic range, signals are often expressed using the logarithmic decibel scale. Based upon the definition of decibel, signal and noise may be expressed in decibels (dB) as P_signal,dB = 10 log₁₀(P_signal) and P_noise,dB = 10 log₁₀(P_noise). In a similar manner, SNR may be expressed in decibels as SNR_dB = 10 log₁₀(SNR). Using the definition of SNR, SNR_dB = 10 log₁₀(P_signal / P_noise). Using the quotient rule for logarithms, SNR_dB = 10 log₁₀(P_signal) − 10 log₁₀(P_noise). Substituting the definitions of SNR, signal, and noise in decibels into the above equation results in an important formula for calculating the signal to noise ratio in decibels, when the signal and noise are also in decibels: SNR_dB = P_signal,dB − P_noise,dB. In the above formula, P is measured in units of power, such as watts (W) or milliwatts (mW), and the signal-to-noise ratio is a pure number. However, when the signal and noise are measured in volts (V) or amperes (A), which are measures of amplitude, they must first be squared to obtain a quantity proportional to power, as shown below: SNR_dB = 10 log₁₀[(A_signal / A_noise)²] = 20 log₁₀(A_signal / A_noise). Dynamic range The concepts of signal-to-noise ratio and dynamic range are closely related. Dynamic range measures the ratio between the strongest un-distorted signal on a channel and the minimum discernible signal, which for most purposes is the noise level. SNR measures the ratio between an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Measuring signal-to-noise ratios requires the selection of a representative or reference signal.
In audio engineering, the reference signal is usually a sine wave at a standardized nominal or alignment level, such as 1 kHz at +4 dBu (1.228 VRMS). SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that instantaneous signal-to-noise ratios will be considerably different. The concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal 'stands out'. Difference from conventional power In physics, the average power of an AC signal is defined as the average value of voltage times current; for resistive (non-reactive) circuits, where voltage and current are in phase, this is equivalent to the product of the rms voltage and current: P_avg = V_rms · I_rms = V_rms² / R. But in signal processing and communication, one usually assumes that R = 1 Ω, so that factor is usually not included while measuring power or energy of a signal. This may cause some confusion among readers, but the resistance factor is not significant for typical operations performed in signal processing, or for computing power ratios. For most cases, the power of a signal would be considered to be simply P = V_rms². Alternative definition An alternative definition of SNR is as the reciprocal of the coefficient of variation, i.e., the ratio of mean to standard deviation of a signal or measurement: SNR = μ/σ, where μ is the signal mean or expected value and σ is the standard deviation of the noise, or an estimate thereof. Notice that such an alternative definition is only useful for variables that are always non-negative (such as photon counts and luminance), and it is only an approximation, since the squared mean μ² equals the mean square signal power only when the signal's own fluctuations are negligible. It is commonly used in image processing, where the SNR of an image is usually calculated as the ratio of the mean pixel value to the standard deviation of the pixel values over a given neighborhood.
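The mean-over-standard-deviation definition used in image processing is straightforward to compute for a pixel neighborhood. The patch values below are hypothetical, and the helper uses only the Python standard library:

```python
import statistics

def snr_mean_std(pixels: list[float]) -> float:
    """Alternative SNR definition: mean of the values divided by their standard deviation."""
    mu = statistics.fmean(pixels)
    sigma = statistics.pstdev(pixels)  # population standard deviation of the neighborhood
    return mu / sigma

# Hypothetical 8-pixel neighborhood of a roughly uniform bright region:
patch = [98, 102, 100, 99, 101, 100, 97, 103]
print(snr_mean_std(patch))
```

Because the values are all non-negative and cluster around a large mean, the ratio is high; for a patch with zero mean this definition would be meaningless, which is why the article restricts it to non-negative quantities such as photon counts and luminance.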
Sometimes SNR is defined as the square of the alternative definition above, in which case it is equivalent to the more common definition: SNR = μ² / σ². This definition is closely related to the sensitivity index or d′, when assuming that the signal has two states separated by signal amplitude μ, and the noise standard deviation σ does not change between the two states. The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to be able to distinguish image features with certainty. An SNR less than 5 means less than 100% certainty in identifying image details. Yet another alternative, very specific, and distinct definition of SNR is employed to characterize sensitivity of imaging systems; see Signal-to-noise ratio (imaging). Related measures are the "contrast ratio" and the "contrast-to-noise ratio". Modulation system measurements Amplitude modulation Channel signal-to-noise ratio is given by (SNR)_C = A_c²(1 + k_a²P)/(2W N₀), where W is the bandwidth, k_a is the modulation index, A_c is the carrier amplitude, P is the power of the message signal, and N₀ is the noise power spectral density. Output signal-to-noise ratio (of AM receiver) is given by (SNR)_O = A_c² k_a² P/(2W N₀). Frequency modulation Channel signal-to-noise ratio is given by (SNR)_C = A_c²/(2W N₀). Output signal-to-noise ratio is given by (SNR)_O = 3A_c² k_f² P/(2N₀W³), where k_f is the frequency sensitivity. Noise reduction All real measurements are disturbed by noise. This includes electronic noise, but can also include external events that affect the measured phenomenon — wind, vibrations, the gravitational attraction of the moon, variations of temperature, variations of humidity, etc., depending on what is measured and of the sensitivity of the device. It is often possible to reduce the noise by controlling the environment. Internal electronic noise of measurement systems can be reduced through the use of low-noise amplifiers. When the characteristics of the noise are known and are different from the signal, it is possible to use a filter to reduce the noise. For example, a lock-in amplifier can extract a narrow bandwidth signal from broadband noise a million times stronger.
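Another standard noise-reduction technique is to average repeated measurements of a constant signal, in which case the random noise shrinks as the square root of the number of samples. A short simulation sketch (all parameter values hypothetical, Gaussian noise assumed, fixed seed for reproducibility):

```python
import random
import statistics

def averaged_measurement(true_value: float, noise_sigma: float,
                         n_samples: int, rng: random.Random) -> float:
    """Average n_samples noisy readings of a constant signal."""
    readings = [true_value + rng.gauss(0, noise_sigma) for _ in range(n_samples)]
    return statistics.fmean(readings)

rng = random.Random(42)
single = [averaged_measurement(1.0, 0.5, 1, rng) for _ in range(2000)]
avg100 = [averaged_measurement(1.0, 0.5, 100, rng) for _ in range(2000)]
print(statistics.pstdev(single))  # ~0.5 : the raw noise level
print(statistics.pstdev(avg100))  # ~0.05: a 100-sample average is ~10x less noisy
```

The residual spread of the 100-sample averages is roughly one tenth of the raw noise, consistent with the square-root law.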
When the signal is constant or periodic and the noise is random, it is possible to enhance the SNR by averaging the measurements. In this case the noise goes down as the square root of the number of averaged samples. Digital signals When a measurement is digitized, the number of bits used to represent the measurement determines the maximum possible signal-to-noise ratio. This is because the minimum possible noise level is the error caused by the quantization of the signal, sometimes called quantization noise. This noise level is non-linear and signal-dependent; different calculations exist for different signal models. Quantization noise is modeled as an analog error signal summed with the signal before quantization ("additive noise"). This theoretical maximum SNR assumes a perfect input signal. If the input signal is already noisy (as is usually the case), the signal's noise may be larger than the quantization noise. Real analog-to-digital converters also have other sources of noise that further decrease the SNR compared to the theoretical maximum from the idealized quantization noise, including the intentional addition of dither. Although noise levels in a digital system can be expressed using SNR, it is more common to use Eb/No, the energy per bit per noise power spectral density. The modulation error ratio (MER) is a measure of the SNR in a digitally modulated signal. Fixed point For n-bit integers with equal distance between quantization levels (uniform quantization) the dynamic range (DR) is also determined. Assuming a uniform distribution of input signal values, the quantization noise is a uniformly distributed random signal with a peak-to-peak amplitude of one quantization level, making the amplitude ratio 2n/1. The formula is then: This relationship is the origin of statements like "16-bit audio has a dynamic range of 96 dB". Each extra quantization bit increases the dynamic range by roughly 6 dB. 
Assuming a full-scale sine wave signal (that is, the quantizer is designed such that it has the same minimum and maximum values as the input signal), the quantization noise approximates a sawtooth wave with peak-to-peak amplitude of one quantization level and uniform distribution. In this case, the SNR is approximately SNR_dB ≈ 6.02 · n + 1.76 dB. Floating point Floating-point numbers provide a way to trade off signal-to-noise ratio for an increase in dynamic range. For n-bit floating-point numbers, with n−m bits in the mantissa and m bits in the exponent: DR_dB ≈ 6.02 · 2^m and SNR_dB ≈ 6.02 · (n − m). The dynamic range is much larger than fixed-point but at a cost of a worse signal-to-noise ratio. This makes floating-point preferable in situations where the dynamic range is large or unpredictable. Fixed-point's simpler implementations can be used with no signal quality disadvantage in systems where dynamic range is less than 6.02m. The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms. Optical signals Optical signals have a carrier frequency (about 200 THz and more) that is much higher than the modulation frequency. This way the noise covers a bandwidth that is much wider than the signal itself. The resulting signal influence relies mainly on the filtering of the noise. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. The OSNR is the ratio between the signal power and the noise power in a given bandwidth. Most commonly a reference bandwidth of 0.1 nm is used. This bandwidth is independent of the modulation format, the frequency and the receiver. For instance an OSNR of 20 dB/0.1 nm could be given, even though the signal of 40 GBit DPSK would not fit in this bandwidth. OSNR is measured with an optical spectrum analyzer. Types and abbreviations Signal to noise ratio may be abbreviated as SNR and less commonly as S/N. PSNR stands for peak signal-to-noise ratio. GSNR stands for geometric signal-to-noise ratio.
SINR is the signal-to-interference-plus-noise ratio. Other uses While SNR is commonly quoted for electrical signals, it can be applied to any form of signal, for example isotope levels in an ice core, biochemical signaling between cells, or financial trading signals. The term is sometimes used metaphorically to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as noise that interferes with the signal of appropriate discussion. SNR can also be applied in marketing and to how business professionals manage information overload. Managing a healthy signal to noise ratio can help business executives improve their KPIs (Key Performance Indicators). Similar concepts The signal-to-noise ratio is similar to Cohen's d given by the difference of estimated means divided by the standard deviation of the data and is related to the test statistic in the t-test. See also Audio system measurements Generation loss Matched filter Near–far problem Noise margin Omega ratio Pareidolia Peak signal-to-noise ratio Signal-to-noise statistic Signal-to-interference-plus-noise ratio SINAD SINADR Subjective video quality Total harmonic distortion Video quality Notes References External links ADC and DAC Glossary – Maxim Integrated Products Understand SINAD, ENOB, SNR, THD, THD + N, and SFDR so you don't get lost in the noise floor – Analog Devices The Relationship of dynamic range to data word size in digital audio processing Calculation of signal-to-noise ratio, noise voltage, and noise level Learning by simulations – a simulation showing the improvement of the SNR by time averaging Dynamic Performance Testing of Digital Audio D/A Converters Fundamental theorem of analog circuits: a minimum level of power must be dissipated to maintain a level of SNR Interactive webdemo of visualization of SNR in a QAM constellation diagram Institute of
Telecommunications, University of Stuttgart Quantization Noise Widrow & Kollár Quantization book page with sample chapters and additional material Signal-to-noise ratio online audio demonstrator - Virtual Communications Lab Engineering ratios Error measures Measurement Electrical parameters Audio amplifier specifications Noise (electronics) Statistical ratios Acoustics Sound
Signal-to-noise ratio
[ "Physics", "Mathematics", "Engineering" ]
2,852
[ "Physical quantities", "Metrics", "Engineering ratios", "Quantity", "Classical mechanics", "Measurement", "Acoustics", "Size", "Electronic engineering", "Electrical engineering", "Audio engineering", "Audio amplifier specifications", "Electrical parameters" ]
41,707
https://en.wikipedia.org/wiki/Signal%20transition
Signal transition, when referring to the modulation of a carrier signal, is a change from one significant condition to another. Examples of signal transitions are a change from one electric current, voltage, or power level to another; a change from one optical power level to another; a phase shift; or a change from one frequency or wavelength to another. Signal transitions are used to create signals that represent information, such as "0" and "1" or "mark" and "space". See also References Telecommunication theory Radio technology
Signal transition
[ "Technology", "Engineering" ]
106
[ "Information and communications technology", "Telecommunications engineering", "Radio technology" ]
41,710
https://en.wikipedia.org/wiki/Simple%20Network%20Management%20Protocol
Simple Network Management Protocol (SNMP) is an Internet Standard protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behavior. Devices that typically support SNMP include cable modems, routers, network switches, servers, workstations, printers, and more. SNMP is widely used in network management for network monitoring. SNMP exposes management data in the form of variables on the managed systems organized in a management information base (MIB), which describes the system status and configuration. These variables can then be remotely queried (and, in some circumstances, manipulated) by managing applications. Three significant versions of SNMP have been developed and deployed. SNMPv1 is the original version of the protocol. More recent versions, SNMPv2c and SNMPv3, feature improvements in performance, flexibility and security. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects. Overview and basic concepts In typical uses of SNMP, one or more administrative computers called managers have the task of monitoring or managing a group of hosts or devices on a computer network. Each managed system executes a software component called an agent that reports information via SNMP to the manager. An SNMP-managed network consists of three key components: Managed devices Agentsoftware that runs on managed devices Network management station (NMS)software that runs on the manager A managed device is a network node that implements an SNMP interface that allows unidirectional (read-only) or bidirectional (read and write) access to node-specific information. Managed devices exchange node-specific information with the NMSs. 
Sometimes called network elements, the managed devices can be any type of device, including, but not limited to, routers, access servers, switches, cable modems, bridges, hubs, IP telephones, IP video cameras, computer hosts, and printers. An agent is a network-management software module that resides on a managed device. An agent has local knowledge of management information and translates that information to or from an SNMP-specific form. A network management station executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network. Management information base SNMP agents expose management data on the managed systems as variables. The protocol also permits active management tasks, such as configuration changes, through remote modification of these variables. The variables accessible via SNMP are organized in hierarchies. SNMP itself does not define which variables a managed system should offer. Rather, SNMP uses an extensible design that allows applications to define their own hierarchies. These hierarchies are described as a management information base (MIB). MIBs describe the structure of the management data of a device subsystem; they use a hierarchical namespace containing object identifiers (OID). Each OID identifies a variable that can be read or set via SNMP. MIBs use the notation defined by Structure of Management Information Version 2.0 (SMIv2, ), a subset of ASN.1. Protocol details SNMP operates in the application layer of the Internet protocol suite. All SNMP messages are transported via User Datagram Protocol (UDP). The SNMP agent receives requests on UDP port 161. The manager may send requests from any available source port to port 161 in the agent. The agent response is sent back to the source port on the manager. The manager receives notifications (Traps and InformRequests) on port 162. 
The agent may generate notifications from any available port. When used with Transport Layer Security or Datagram Transport Layer Security, requests are received on port 10161 and notifications are sent to port 10162. SNMPv1 specifies five core protocol data units (PDUs). Two other PDUs, GetBulkRequest and InformRequest, were added in SNMPv2 and the Report PDU was added in SNMPv3. All SNMP PDUs are constructed as follows: IP header, UDP header, version, community, PDU-type, request-id, error-status, error-index, variable bindings. The seven SNMP PDU types as identified by the PDU-type field are as follows: GetRequest A manager-to-agent request to retrieve the value of a variable or list of variables. Desired variables are specified in variable bindings (the value field is not used). Retrieval of the specified variable values is to be done as an atomic operation by the agent. A Response with current values is returned. SetRequest A manager-to-agent request to change the value of a variable or list of variables. Variable bindings are specified in the body of the request. Changes to all specified variables are to be made as an atomic operation by the agent. A Response with (current) new values for the variables is returned. GetNextRequest A manager-to-agent request to discover available variables and their values. Returns a Response with variable binding for the lexicographically next variable in the MIB. The entire MIB of an agent can be walked by iterative application of GetNextRequest starting at OID 0. Rows of a table can be read by specifying column OIDs in the variable bindings of the request. GetBulkRequest A manager-to-agent request for multiple iterations of GetNextRequest. An optimized version of GetNextRequest. Returns a Response with multiple variable bindings walked from the variable binding or bindings in the request. PDU specific non-repeaters and max-repetitions fields are used to control response behavior. GetBulkRequest was introduced in SNMPv2.
Response Returns variable bindings and acknowledgement from agent to manager for GetRequest, SetRequest, GetNextRequest, GetBulkRequest and InformRequest. Error reporting is provided by error-status and error-index fields. Although it was used as a response to both gets and sets, this PDU was called GetResponse in SNMPv1. Trap Asynchronous notification from agent to manager. While in other SNMP communication, the manager actively requests information from the agent, these are PDUs that are sent from the agent to the manager without being explicitly requested. SNMP Traps enable an agent to notify the management station of significant events by way of an unsolicited SNMP message. Trap PDUs include current sysUpTime value, an OID identifying the type of trap and optional variable bindings. Destination addressing for traps is determined in an application-specific manner typically through trap configuration variables in the MIB. The format of the trap message was changed in SNMPv2 and the PDU was renamed SNMPv2-Trap. InformRequest Acknowledged asynchronous notification. This PDU was introduced in SNMPv2 and was originally defined as manager to manager communication. Later implementations have loosened the original definition to allow agent to manager communications. Manager-to-manager notifications were already possible in SNMPv1 using a Trap, but as SNMP commonly runs over UDP where delivery is not assured and dropped packets are not reported, delivery of a Trap was not guaranteed. InformRequest fixes this as an acknowledgement is returned on receipt. specifies that an SNMP implementation must accept a message of at least 484 bytes in length. In practice, SNMP implementations accept longer messages. If implemented correctly, an SNMP message is discarded if the decoding of the message fails and thus malformed SNMP requests are ignored. A successfully decoded SNMP request is then authenticated using the community string.
If the authentication fails, a trap is generated indicating an authentication failure and the message is dropped.

SNMPv1 and SNMPv2c use communities to establish trust between managers and agents. Most agents support three community names, one each for read-only, read-write and trap. These three community strings control different types of activities. The read-only community applies to get requests. The read-write community string applies to set requests. The trap community string applies to receipt of traps. SNMPv3 also uses community strings, but allows for secure authentication and communication between SNMP manager and agent.

Protocol versions

In practice, SNMP implementations often support multiple versions: typically SNMPv1, SNMPv2c, and SNMPv3.

Version 1

SNMP version 1 (SNMPv1) is the initial implementation of the SNMP protocol. The design of SNMPv1 was done in the 1980s by a group of collaborators who viewed the officially sponsored OSI/IETF/NSF (National Science Foundation) effort (HEMS/CMIS/CMIP) as both unimplementable on the computing platforms of the time and potentially unworkable. SNMP was approved based on a belief that it was an interim protocol needed for taking steps towards large-scale deployment of the Internet and its commercialization.
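The community model above amounts to a lookup from community string to permitted operations. A minimal sketch (the community names and the mapping are illustrative, not taken from any real device):

```python
# Sketch of SNMPv1/v2c community-based access control.
# Illustrative configuration: community string -> permitted operations.
COMMUNITIES = {
    "ro-secret": {"get"},            # read-only community: get requests
    "rw-secret": {"get", "set"},     # read-write community: set requests too
}

def authorize(community, operation):
    """Return True if the community string permits the operation.
    On a real agent, a failed match would also raise an
    authentication-failure trap and drop the message."""
    return operation in COMMUNITIES.get(community, set())

print(authorize("ro-secret", "get"))   # permitted
print(authorize("ro-secret", "set"))   # read-only community cannot set
print(authorize("wrong", "get"))       # unknown community: message dropped
```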
The first Requests for Comments (RFCs) for SNMP, now known as SNMPv1, appeared in 1988:
 — Structure and identification of management information for TCP/IP-based internets
 — Management information base for network management of TCP/IP-based internets
 — A simple network management protocol

In 1990, these documents were superseded by:
 — Structure and identification of management information for TCP/IP-based internets
 — Management information base for network management of TCP/IP-based internets
 — A simple network management protocol

In 1991, the first MIB (MIB-1) was replaced by the more often used:
 — Version 2 of management information base (MIB-2) for network management of TCP/IP-based internets

SNMPv1 is widely used and is the de facto network management protocol in the Internet community. It may be carried by transport layer protocols such as User Datagram Protocol (UDP), OSI Connectionless-mode Network Service (CLNS), AppleTalk Datagram Delivery Protocol (DDP), and Novell Internetwork Packet Exchange (IPX).

Version 1 has been criticized for its poor security. The specification does, in fact, allow room for custom authentication to be used, but widely used implementations "support only a trivial authentication service that identifies all SNMP messages as authentic SNMP messages." The security of the messages therefore becomes dependent on the security of the channels over which the messages are sent. For example, an organization may consider its internal network to be sufficiently secure that no encryption is necessary for its SNMP messages. In such cases, the community name, which is transmitted in cleartext, tends to be viewed as a de facto password, in spite of the original specification.

Version 2

SNMPv2, defined by and , revises version 1 and includes improvements in the areas of performance, security and manager-to-manager communications.
It introduced GetBulkRequest, an alternative to iterative GetNextRequests for retrieving large amounts of management data in a single request. The new party-based security system introduced in SNMPv2, viewed by many as overly complex, was not widely adopted. This version of SNMP reached the Proposed Standard level of maturity, but was deemed obsolete by later versions.

Community-Based Simple Network Management Protocol version 2, or SNMPv2c, is defined in –. SNMPv2c comprises SNMPv2 without the controversial new SNMPv2 security model, using instead the simple community-based security scheme of SNMPv1. This version is one of relatively few standards to meet the IETF's Draft Standard maturity level, and was widely considered the de facto SNMPv2 standard. It was later restated as part of SNMPv3.

User-Based Simple Network Management Protocol version 2, or SNMPv2u, is defined in –. This is a compromise that attempts to offer greater security than SNMPv1, but without incurring the high complexity of SNMPv2. A variant of this was commercialized as SNMP v2*, and the mechanism was eventually adopted as one of two security frameworks in SNMPv3.

64-bit counters

SNMP version 2 introduces the option of 64-bit data counters. Version 1 was designed only with 32-bit counters, which can store integer values from zero to 4.29 billion (precisely 4,294,967,295). A 32-bit version 1 counter cannot store the maximum speed of a 10 gigabit or larger interface, expressed in bits per second. Similarly, a 32-bit counter tracking statistics for a 10 gigabit or larger interface can roll over back to zero again in less than one minute, which may be a shorter time interval than a counter is polled to read its current state. This would result in lost or invalid data due to the undetected value rollover, and corruption of trend-tracking data.
The 64-bit version 2 counter can store values from zero to 18.4 quintillion (precisely 18,446,744,073,709,551,615) and so is currently unlikely to experience a counter rollover between polling events. For example, 1.6 terabit Ethernet is predicted to become available by 2025. A 64-bit counter incrementing at a rate of 1.6 trillion bits per second would be able to retain information for such an interface without rolling over for 133 days.

SNMPv1 and SNMPv2c interoperability

SNMPv2c is incompatible with SNMPv1 in two key areas: message formats and protocol operations. SNMPv2c messages use different header and protocol data unit (PDU) formats than SNMPv1 messages. SNMPv2c also uses two protocol operations that are not specified in SNMPv1. To overcome incompatibility, defines two SNMPv1/v2c coexistence strategies: proxy agents and bilingual network-management systems.

Proxy agents

An SNMPv2 agent can act as a proxy agent on behalf of SNMPv1-managed devices. When an SNMPv2 NMS issues a command intended for an SNMPv1 agent, it sends it to the SNMPv2 proxy agent instead. The proxy agent forwards Get, GetNext, and Set messages to the SNMPv1 agent unchanged. GetBulk messages are converted by the proxy agent to GetNext messages and then forwarded to the SNMPv1 agent. Additionally, the proxy agent receives and maps SNMPv1 trap messages to SNMPv2 trap messages and then forwards them to the NMS.

Bilingual network-management system

Bilingual SNMPv2 network-management systems support both SNMPv1 and SNMPv2. To support this dual-management environment, a management application examines information stored in a local database to determine whether the agent supports SNMPv1 or SNMPv2. Based on the information in the database, the NMS communicates with the agent using the appropriate version of SNMP.
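The rollover arithmetic for both counter widths is easy to check; the line rates below are the ones mentioned above:

```python
# Sketch: wrap times for 32-bit vs 64-bit counters at high line rates.
BITS_10G = 10 * 10**9          # 10 Gbit/s
BITS_1T6 = 16 * 10**11         # 1.6 Tbit/s

def seconds_to_wrap(counter_bits, rate_bits_per_s):
    """Time for a counter of the given width to reach its maximum value."""
    return (2**counter_bits - 1) / rate_bits_per_s

print(f"32-bit @ 10 Gbit/s : {seconds_to_wrap(32, BITS_10G):.2f} s")
print(f"64-bit @ 1.6 Tbit/s: {seconds_to_wrap(64, BITS_1T6) / 86400:.0f} days")
```

At 10 Gbit/s a 32-bit bit counter wraps in well under a minute, while the 64-bit counter at 1.6 Tbit/s lasts about 133 days, matching the figures above.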
Version 3

Although SNMPv3 makes no changes to the protocol aside from the addition of cryptographic security, it looks very different due to new textual conventions, concepts, and terminology. The most visible change was to define a secure version of SNMP, by adding security and remote-configuration enhancements. The security aspect is addressed by offering both strong authentication and data encryption for privacy. For the administration aspect, SNMPv3 focuses on two parts, namely notification originators and proxy forwarders. The changes also facilitate remote configuration and administration of the SNMP entities, as well as addressing issues related to large-scale deployment, accounting, and fault management.

Features and enhancements included:

Identification of SNMP entities to facilitate communication only between known SNMP entities – Each SNMP entity has an identifier called the SNMPEngineID, and SNMP communication is possible only if an SNMP entity knows the identity of its peer. Traps and Notifications are exceptions to this rule.
Support for security models – A security model may define the security policy within an administrative domain or an intranet. SNMPv3 contains the specifications for a user-based security model (USM).
Definition of security goals, where the goals of the message authentication service include protection against the following:
Modification of information – Protection against some unauthorized SNMP entity altering in-transit messages generated by an authorized principal.
Masquerade – Protection against attempting management operations not authorized for some principal by assuming the identity of another principal that has the appropriate authorizations.
Message stream modification – Protection against messages being maliciously re-ordered, delayed, or replayed to effect unauthorized management operations.
Disclosure – Protection against eavesdropping on the exchanges between SNMP engines.
Specification for USM – USM consists of the general definition of the following communication mechanisms:
Communication without authentication and privacy (NoAuthNoPriv).
Communication with authentication and without privacy (AuthNoPriv).
Communication with authentication and privacy (AuthPriv).
Definition of different authentication and privacy protocols – The MD5, SHA and HMAC-SHA-2 authentication protocols and the CBC_DES and CFB_AES_128 privacy protocols are supported in the USM.
Definition of a discovery procedure – To find the SNMPEngineID of an SNMP entity for a given transport address and transport endpoint address.
Definition of the time synchronization procedure – To facilitate authenticated communication between the SNMP entities.
Definition of the SNMP framework MIB – To facilitate remote configuration and administration of the SNMP entity.
Definition of the USM MIBs – To facilitate remote configuration and administration of the security module.
Definition of the view-based access control model (VACM) MIBs – To facilitate remote configuration and administration of the access control module.

Security was one of the biggest weaknesses of SNMP until v3. Authentication in SNMP versions 1 and 2 amounts to nothing more than a password (community string) sent in clear text between a manager and agent. Each SNMPv3 message contains security parameters that are encoded as an octet string. The meaning of these security parameters depends on the security model being used.

The security approach in v3 targets:
Confidentiality – Encryption of packets to prevent snooping by an unauthorized source.
Integrity – Message integrity to ensure that a packet has not been tampered with while in transit, including an optional packet replay protection mechanism.
Authentication – Verification that the message is from a valid source.
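The USM derives per-agent authentication keys from a user's password by stretching and localization (the password-to-key algorithm of the USM specification, RFC 3414, Appendix A). A sketch with Python's hashlib; the password and engine ID here are illustrative:

```python
# Sketch of the USM password-to-key algorithm (RFC 3414, A.2):
# the password is stretched to 1 MiB by repetition and hashed, then the
# digest is "localized" by hashing it together with the agent's engine ID,
# so each agent sees a different key even for the same password.
import hashlib

def password_to_key(password: bytes, engine_id: bytes, hash_name="md5"):
    # Digest exactly 1,048,576 bytes of the repeated password.
    chunk = (password * ((2**20 // len(password)) + 1))[:2**20]
    ku = hashlib.new(hash_name, chunk).digest()
    # Localize: the key becomes specific to one SNMP engine (agent).
    return hashlib.new(hash_name, ku + engine_id + ku).digest()

engine_id = bytes.fromhex("000000000000000000000002")  # illustrative engine ID
key = password_to_key(b"maplesyrup", engine_id)
print(key.hex())
```

The same function with `hash_name="sha1"` yields a 20-byte key for the SHA-based protocols.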
v3 also defines the USM and VACM, which were later followed by a transport security model (TSM) that provides support for SNMPv3 over SSH and SNMPv3 over TLS and DTLS.

USM (User-based Security Model) provides authentication and privacy (encryption) functions and operates at the message level.
VACM (View-based Access Control Model) determines whether a given principal is allowed access to a particular MIB object to perform specific functions, and operates at the PDU level.
TSM (Transport Security Model) provides a method for authenticating and encrypting messages over external security channels. Two transports, SSH and TLS/DTLS, have been defined that make use of the TSM specification.

The IETF recognizes Simple Network Management Protocol version 3 as defined by – (also known as STD0062) as the current standard version of SNMP. The IETF has designated SNMPv3 a full Internet Standard, the highest maturity level for an RFC. It considers earlier versions obsolete (designating them variously Historic or Obsolete).

Implementation issues

SNMP's powerful write capabilities, which would allow the configuration of network devices, are not fully utilized by many vendors, partly because of a lack of security in SNMP versions before SNMPv3, and partly because many devices simply are not capable of being configured via individual MIB object changes. Some SNMP values (especially tabular values) require specific knowledge of table indexing schemes, and these index values are not necessarily consistent across platforms. This can cause correlation issues when fetching information from multiple devices that may not employ the same table indexing scheme (for example, fetching disk utilization metrics, where a specific disk identifier differs across platforms). Some major equipment vendors tend to over-extend their proprietary command line interface (CLI) centric configuration and control systems.
In February 2002 the Carnegie Mellon Software Engineering Institute (CM-SEI) Computer Emergency Response Team Coordination Center (CERT-CC) issued an advisory on SNMPv1, after the Oulu University Secure Programming Group conducted a thorough analysis of SNMP message handling. Most SNMP implementations, regardless of which version of the protocol they support, use the same program code for decoding protocol data units (PDUs), and problems were identified in this code. Other problems were found with decoding SNMP trap messages received by the SNMP management station or requests received by the SNMP agent on the network device. Many vendors had to issue patches for their SNMP implementations.

Security implications

Using SNMP to attack a network

Because SNMP is designed to allow administrators to monitor and configure network devices remotely, it can also be used to penetrate a network. A significant number of software tools can scan the entire network using SNMP; therefore, mistakes in the configuration of the read-write mode can make a network susceptible to attacks.

In 2001, Cisco released information indicating that, even in read-only mode, the SNMP implementation of Cisco IOS is vulnerable to certain denial-of-service attacks. These security issues can be fixed through an IOS upgrade.

If SNMP is not used in a network it should be disabled in network devices. When configuring SNMP read-only mode, close attention should be paid to the configuration of the access control and from which IP addresses SNMP messages are accepted. If the SNMP servers are identified by their IP addresses, SNMP is only allowed to respond to these addresses, and SNMP messages from other IP addresses will be denied. However, IP address spoofing remains a security concern.

Authentication

SNMP is available in different versions, and each version has its own security issues. SNMPv1 sends passwords in plaintext over the network. Therefore, passwords can be read with packet sniffing.
SNMPv2 allows password hashing with MD5, but this has to be configured. Virtually all network management software supports SNMPv1, but not necessarily SNMPv2 or v3. SNMPv2 was specifically developed to provide data security, that is authentication, privacy and authorization, but only SNMP version 2c gained the endorsement of the Internet Engineering Task Force (IETF), while versions 2u and 2* failed to gain IETF approval due to security issues. SNMPv3 uses MD5, Secure Hash Algorithm (SHA) and keyed algorithms to offer protection against unauthorized data modification and spoofing attacks. If a higher level of security is needed, the Data Encryption Standard (DES) can optionally be used in the cipher block chaining mode. SNMPv3 is implemented on Cisco IOS since release 12.0(3)T.

SNMPv3 may be subject to brute force and dictionary attacks for guessing the authentication keys, or encryption keys, if these keys are generated from short (weak) passwords or passwords that can be found in a dictionary. SNMPv3 allows both providing random uniformly distributed cryptographic keys and generating cryptographic keys from a password supplied by the user. The risk of guessing authentication strings from hash values transmitted over the network depends on the cryptographic hash function used and the length of the hash value. SNMPv3 uses the HMAC-SHA-2 authentication protocol for the User-based Security Model (USM). SNMP does not use a more secure challenge-handshake authentication protocol. SNMPv3 (like other SNMP protocol versions) is a stateless protocol, and it has been designed with a minimal amount of interaction between agent and manager. Thus, introducing a challenge-response handshake for each command would impose a burden on the agent (and possibly on the network itself) that the protocol designers deemed excessive and unacceptable.

The security deficiencies of all SNMP versions can be mitigated by IPsec authentication and confidentiality mechanisms.
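USM authentication tags are truncated keyed hashes computed over the whole message: the classic HMAC-MD5-96 and HMAC-SHA-96 modes of RFC 3414 keep only the first 12 octets of the HMAC digest, and the HMAC-SHA-2 modes use longer truncations. A minimal sketch (the key and message below are dummies):

```python
# Sketch: computing a USM msgAuthenticationParameters value as a
# truncated HMAC (HMAC-MD5-96 keeps the first 12 octets, per RFC 3414).
import hashlib
import hmac

def auth_parameter(key: bytes, whole_msg: bytes, hash_name="md5", tag_len=12):
    """Truncated HMAC over the serialized message, using the localized key."""
    return hmac.new(key, whole_msg, hash_name).digest()[:tag_len]

tag = auth_parameter(b"\x00" * 16, b"dummy SNMPv3 message")
print(len(tag))   # 12 octets on the wire
```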
SNMP also may be carried securely over Datagram Transport Layer Security (DTLS).

Many SNMP implementations include a type of automatic discovery where a new network component, such as a switch or router, is discovered and polled automatically. In SNMPv1 and SNMPv2c this is done through a community string that is transmitted in clear text to other devices. Clear-text passwords are a significant security risk. Once the community string is known outside the organization, it can become the target of an attack. To alert administrators of attempts to glean community strings, SNMP can be configured to pass community-name authentication-failure traps. If SNMPv2 is used, the issue can be avoided by enabling password encryption on the SNMP agents of network devices.

The common default configuration for community strings is "public" for read-only access and "private" for read-write. Because of these well-known defaults, SNMP topped the list of the SANS Institute's Common Default Configuration Issues and was number ten on the SANS Top 10 Most Critical Internet Security Threats for the year 2000. System and network administrators frequently do not change these configurations.

Whether it runs over TCP or UDP, SNMPv1 and v2 are vulnerable to IP spoofing attacks. With spoofing, attackers may bypass device access lists in agents that are implemented to restrict SNMP access. SNMPv3 security mechanisms such as USM or TSM can prevent spoofing attacks.
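Flagging the well-known default community strings in a device's configuration can be sketched in a few lines (the device record below is illustrative):

```python
# Sketch: auditing a device configuration for well-known default
# SNMP community strings.
DEFAULTS = {"public", "private"}

def audit_communities(config):
    """Return the configured community strings that match common defaults."""
    return sorted(DEFAULTS & set(config.get("communities", [])))

device = {"communities": ["public", "n0c-r34d"]}   # illustrative config
print(audit_communities(device))
```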
See also

Agent Extensibility Protocol (AgentX) – Subagent protocol for SNMP
Common Management Information Protocol (CMIP) – Management protocol by ISO/OSI used by telecommunications devices
Common Management Information Service (CMIS)
Comparison of network monitoring systems
Net-SNMP – Open source reference implementation of SNMP
NETCONF – An XML-based configuration protocol for network equipment
Remote Network Monitoring (RMON)
Simple Gateway Monitoring Protocol (SGMP) – Obsolete protocol replaced by SNMP

References

Further reading

(STD 16) — Structure and Identification of Management Information for the TCP/IP-based Internets
(Historic) — Management Information Base for Network Management of TCP/IP-based internets
(Historic) — A Simple Network Management Protocol (SNMP)
(STD 17) — Management Information Base for Network Management of TCP/IP-based internets: MIB-II
(Informational) — Coexistence between version 1 and version 2 of the Internet-standard Network Management Framework (Obsoleted by )
(Experimental) — Introduction to Community-based SNMPv2
(Draft Standard) — Structure of Management Information for SNMPv2 (Obsoleted by )
(Standards Track) — Coexistence between Version 1 and Version 2 of the Internet-standard Network Management Framework
(Informational) — Introduction to Version 3 of the Internet-standard Network Management Framework (Obsoleted by )
(STD 58) — Structure of Management Information Version 2 (SMIv2)
(Informational) — Introduction and Applicability Statements for Internet Standard Management Framework

STD 62 contains the following RFCs:
 — An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks
 — Message Processing and Dispatching for the Simple Network Management Protocol (SNMP)
 — Simple Network Management Protocol (SNMP) Applications
 — User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)
 — View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP)
 — Version 2 of the Protocol Operations for the Simple Network Management Protocol (SNMP)
 — Transport Mappings for the Simple Network Management Protocol (SNMP)
 — Management Information Base (MIB) for the Simple Network Management Protocol (SNMP)

(Experimental) — Simple Network Management Protocol (SNMP) over Transmission Control Protocol (TCP) Transport Mapping
(BCP 74) — Coexistence between Version 1, Version 2, and Version 3 of the Internet-standard Network Management Framework
(Proposed) — The Advanced Encryption Standard (AES) Cipher Algorithm in the SNMP User-based Security Model
(Proposed) — Simple Network Management Protocol (SNMP) over IEEE 802 Networks
(STD 78) — Simple Network Management Protocol (SNMP) Context EngineID Discovery
(STD 78) — Transport Subsystem for the Simple Network Management Protocol (SNMP)
(STD 78) — Transport Security Model for the Simple Network Management Protocol (SNMP)
(Proposed) — Secure Shell Transport Model for the Simple Network Management Protocol (SNMP)
(Proposed) — Remote Authentication Dial-In User Service (RADIUS) Usage for Simple Network Management Protocol (SNMP) Transport Models
(STD 78) — Transport Layer Security (TLS) Transport Model for the Simple Network Management Protocol (SNMP)
(Proposed|Historic) — HMAC-SHA-2 Authentication Protocols in the User-based Security Model (USM) for SNMPv3
(Proposed) — HMAC-SHA-2 Authentication Protocols in User-Based Security Model (USM) for SNMPv3

External links

Application layer protocols Internet protocols Internet Standards Agent communications languages Network management System administration Management frameworks
Skip zone
https://en.wikipedia.org/wiki/Skip%20zone
A skip zone, also called a silent zone or zone of silence, is a region where a radio transmission cannot be received. The zone is located between regions, both closer to and farther from the transmitter, where reception is possible.

Cause

When using medium- to high-frequency radio telecommunication, there are radio waves which travel both parallel to the ground and towards the ionosphere, referred to as the ground wave and sky wave, respectively. A skip zone is an annular region between the farthest points at which the ground wave can be received and the nearest point at which the refracted sky waves can be received. Within this region, no signal can be received because, due to the conditions of the local ionosphere, the relevant sky waves are not reflected but penetrate the ionosphere.

The skip zone is a natural phenomenon that cannot be influenced by technical means. Its width depends on the height and shape of the ionosphere and, particularly, on the local ionospheric maximum electron density, characterized by the critical frequency foF2. It varies mainly with this parameter, being larger for low foF2. With a fixed working frequency it is large by night and may even disappear by day. Transmitting at night is most effective for long-distance communication, but the skip zone becomes significantly larger. Very high frequency waves and higher normally travel through the ionosphere, so communication via sky wave at these frequencies is exceptional. A highly ionized sporadic E layer (Es) that occasionally appears in summer may produce such propagation.

Avoidance

A method of decreasing the skip zone is to decrease the frequency of the radio waves. Decreasing the frequency is akin to increasing the ionospheric width. A point is eventually reached where decreasing the frequency results in a zero-distance skip zone. In other words, a frequency exists for which vertically incident radio waves will always be refracted back to the Earth.
This frequency is equivalent to the ionospheric plasma frequency and is also known as the ionospheric critical frequency, or foF2.

Other

The skip zone is the subject of the film SKIPZONE, made in 1992 by UK artist Peter Lee-Jones. It refers to areas in the Scottish Highlands where it is difficult to obtain radio and TV reception. In the episode "Short Wave" of Father Knows Best, the family hears a distress call from a small boat at sea. Jim explains that the reason they, and not the Coast Guard, can hear the transmission is because of a "skip".

See also
Shortwave

References

Sources

Radio frequency propagation
Slant range
https://en.wikipedia.org/wiki/Slant%20range
In radio electronics, especially radar terminology, slant range or slant distance is the distance along the relative direction between two points. If the two points are at the same level (relative to a specific datum), the slant distance equals the horizontal distance.

An example of slant range is the distance to an aircraft flying at high altitude with respect to that of the radar antenna. The slant range (1) is the hypotenuse of the triangle formed by the altitude of the aircraft and the distance between the radar antenna and the aircraft's ground track (point (3) on the earth directly below the aircraft). In the absence of altitude information, for example from a height finder, the aircraft location would be plotted farther (2) from the antenna than its actual ground track.

See also
Ranging
Spherical range
Line-of-sight propagation

References

Antennas
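The hypotenuse relationship described above is straightforward to evaluate (the distances are illustrative and must share one unit):

```python
# Sketch: slant range as the hypotenuse of the aircraft's altitude and the
# ground distance from the antenna to the aircraft's ground track.
import math

def slant_range(ground_distance, altitude):
    """Both inputs in the same length unit; result in that unit."""
    return math.hypot(ground_distance, altitude)

# An aircraft 40 km down-range at 9 km altitude:
print(round(slant_range(40.0, 9.0), 1))
```

Plotting the 41 km slant range instead of the 40 km ground distance is exactly the overestimate the text describes when altitude information is missing.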
Slave clock
https://en.wikipedia.org/wiki/Slave%20clock
In telecommunication and horology, a slave clock is a clock that depends on another clock, the master clock. Modern clocks are synchronized through the Internet or by radio time signals to Coordinated Universal Time (UTC). UTC is based on a network of atomic clocks in many countries. For scientific purposes, precision clocks can be synchronized to within nanoseconds by dedicated satellite channels.

Slave clock synchronization is usually achieved by phase-locking the slave clock signal to a signal received from the master clock. To adjust for the transit time of the signal from the master clock to the slave clock, the phase of the slave clock is adjusted so that both clocks are in phase. Thus, the time markers of both clocks, at the output of the clocks, occur simultaneously.

The predecessors of atomic clocks, computer clocks, and digital clocks were electric clocks synchronized by an electrical pulse, wired to a master clock in the same facility; hence the terms "master" and "slave". From the late 19th to the mid-20th centuries, electrical master/slave clock systems were installed with all clocks in a building or facility synchronized through electric wires to a central master clock. Slave clocks either kept time by themselves and were periodically corrected by the master clock, or required impulses from the master clock. Many slave clocks of these types were in operation, most commonly in schools, offices, military bases, hospitals, railway networks, telephone exchanges and factories the world over. School bells of elementary schools, high schools, and others could be synchronized across an entire campus when connected to the system. In schools, the master clock was in the principal's office, with slave units in classrooms in other buildings on campus. In factories, a system with a bell or horn could signal the end of a shift, lunchtime or break time. Very few relics of this electrical, analogue system operate in the 21st century.
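The transit-time compensation described above amounts to advancing the slave's phase by the propagation delay of the synchronization signal. A minimal sketch with illustrative numbers:

```python
# Sketch: a slave clock advances the phase of the received master signal by
# the one-way transit delay, so the output markers of both clocks coincide.
# All numbers are illustrative.

def slave_phase(received_phase, transit_delay, period):
    """Phase (seconds, within one period) the slave should output:
    the received signal is already transit_delay seconds old on arrival."""
    return (received_phase + transit_delay) % period

# A 1 Hz time marker arriving over a link with 20 ms transit time:
print(slave_phase(0.0, 0.020, 1.0))
```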
Most 21st-century systems of this type are digital.

[Pictures: mechanical slave clocks from the 1950s and 1960s era.]

See also
Clock network

References

External links
All about electric master and slave clocks
British GPO Clock Systems

Telecommunications equipment Clocks Digital electronics
Spatial application
https://en.wikipedia.org/wiki/Spatial%20application
A spatial application is a technological application (such as video) requiring high spatial resolution, possibly at the expense of reduced temporal positioning accuracy, such as increased jerkiness. Examples of spatial applications include the requirement to display small characters and to resolve fine detail in still video, or in motion video that contains very limited motion.

References

Display technology Data storage
Specific detectivity
https://en.wikipedia.org/wiki/Specific%20detectivity
Specific detectivity, or D*, for a photodetector is a figure of merit used to characterize performance, equal to the reciprocal of noise-equivalent power (NEP), normalized per square root of the sensor's area and frequency bandwidth (reciprocal of twice the integration time).

Specific detectivity is given by

    D* = sqrt(A · Δf) / NEP,

where A is the area of the photosensitive region of the detector, Δf is the bandwidth, and NEP is the noise-equivalent power in units of W. It is commonly expressed in Jones units (cm·Hz^1/2/W) in honor of Robert Clark Jones, who originally defined it.

Given that the noise-equivalent power can be expressed as a function of the responsivity R (in units of A/W or V/W) and the noise spectral density S_n (in units of A/Hz^1/2 or V/Hz^1/2) as NEP = S_n · sqrt(Δf) / R, it is common to see the specific detectivity expressed as

    D* = R · sqrt(A) / S_n.

It is often useful to express the specific detectivity in terms of the relative noise levels present in the device. A common expression, combining thermal (Johnson) noise and background shot noise, is

    D* = (q · λ · η / (h · c)) · [4kT/(R0A) + 2 · q² · η · ΦB]^(−1/2),

with q the electronic charge, λ the wavelength of interest, h the Planck constant, c the speed of light, k the Boltzmann constant, T the temperature of the detector, R0A the zero-bias dynamic resistance-area product (often measured experimentally, but also expressible in noise-level assumptions), η the quantum efficiency of the device, and ΦB the total flux of the source (often a blackbody) in photons/s/cm².

Detectivity measurement

Detectivity can be measured with a suitable optical setup using known parameters. A light source with known irradiance at a given standoff distance is required. The incoming light is chopped at a certain frequency, and each wavelength is integrated over a given time constant over a given number of frames.

In detail, the bandwidth is computed directly from the integration time constant t_c as Δf = 1/(2 · t_c). Next, an average signal and rms noise need to be measured from a set of frames. This is done either directly by the instrument or as post-processing.
Now the radiance of the source, in W/sr/cm² of emitting area, must be computed. Next, the emitting area must be converted into a projected area and a solid angle; this product is often called the etendue. This step can be obviated by the use of a calibrated source, where the exact number of photons/s/cm² at the detector is known. If this is unknown, it can be estimated using the blackbody radiation equation, the detector active area and the etendue. This ultimately converts the outgoing radiance of the blackbody, in W/sr/cm² of emitting area, into the wattage observed on the detector.

The broadband responsivity is then just the signal weighted by this wattage:

    R = S / (L · G),

where
R is the responsivity in units of signal/W (or sometimes V/W or A/W),
S is the measured signal,
L is the outgoing radiance from the blackbody (or light source) in W/sr/cm² of emitting area, and
G is the total integrated etendue between the emitting source and the detector surface, combining the detector area and the solid angle of the source projected along the line connecting it to the detector surface.

From this metric, the noise-equivalent power can be computed by dividing the noise level by the responsivity. Similarly, the noise-equivalent irradiance can be computed using the responsivity in units of photons/s/W instead of signal/W. The detectivity is then simply the noise-equivalent power normalized to the bandwidth and detector area.

See also
Sensitivity (electronics)

References

Physical quantities Infrared imaging
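The defining formula D* = sqrt(A · Δf) / NEP can be evaluated numerically; the detector values below are illustrative:

```python
# Sketch: evaluating D* = sqrt(A * df) / NEP with illustrative values.
# Area in cm^2, bandwidth in Hz and NEP in W give D* in Jones (cm*Hz^0.5/W).
import math

def specific_detectivity(area_cm2, bandwidth_hz, nep_w):
    return math.sqrt(area_cm2 * bandwidth_hz) / nep_w

# A 1 mm x 1 mm detector (0.01 cm^2), 1 Hz bandwidth, NEP of 1 pW:
dstar = specific_detectivity(0.01, 1.0, 1e-12)
print(f"{dstar:.1e} Jones")
```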
Specific detectivity
[ "Physics", "Mathematics" ]
756
[ "Physical phenomena", "Quantity", "Physical properties", "Physical quantities" ]
41,729
https://en.wikipedia.org/wiki/Spectral%20width
In telecommunications, spectral width is the width of a spectral band, i.e., the range of wavelengths or frequencies over which the magnitude of all spectral components is significant, that is, equal to or greater than a specified fraction of the largest magnitude. In fiber-optic communication applications, the usual method of specifying spectral width is the full width at half maximum (FWHM). This is the same convention used in bandwidth, defined as the frequency range where power drops by less than half (at most −3 dB). The FWHM method may be difficult to apply when the spectrum has a complex shape. Another method of specifying spectral width is a special case of root-mean-square deviation where the independent variable is wavelength, λ, and f(λ) is a suitable radiometric quantity. The relative spectral width, Δλ/λ, is frequently used, where Δλ is obtained by one of the methods above and λ is the center wavelength. See also Spectral linewidth in optics Spectral bandwidth References Telecommunication theory Optical communications Optical quantities Spectrum (physical sciences)
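The FWHM convention can be illustrated with a short calculation on a model spectrum. The Gaussian line shape, center wavelength, and grid below are illustrative choices, not values from any real source:

```python
import math

# Model spectrum: Gaussian line centred at 1550 nm with sigma = 2 nm (illustrative).
center, sigma = 1550.0, 2.0
wavelengths = [1540.0 + 0.01 * i for i in range(2001)]            # nm grid
power = [math.exp(-0.5 * ((w - center) / sigma) ** 2) for w in wavelengths]

half_max = max(power) / 2.0
above = [w for w, p in zip(wavelengths, power) if p >= half_max]  # >= -3 dB region
fwhm = above[-1] - above[0]          # full width at half maximum, nm
rel_width = fwhm / center            # relative spectral width, delta-lambda / lambda

print(f"FWHM = {fwhm:.2f} nm")       # close to 2*sqrt(2*ln 2)*sigma ~ 4.71 nm for a Gaussian
```

For complex, multi-peaked spectra this simple threshold scan can be ambiguous, which is why the RMS-deviation definition mentioned above is sometimes preferred.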
Spectral width
[ "Physics", "Mathematics", "Engineering" ]
222
[ "Optical communications", "Physical phenomena", "Telecommunications engineering", "Spectrum (physical sciences)", "Physical quantities", "Quantity", "Waves", "Optical quantities" ]
41,730
https://en.wikipedia.org/wiki/Speed%20of%20service
In telecommunication, speed of service is the time for a message to be received. For example: The time between release of a message by the originator to receipt of the message by the addressee, as perceived by the end user. (originator-to-recipient speed of service) The time between entry of a message into a communications system and receipt of the message at the terminating communications facility, i.e., the communications facility serving the addressee, as measured by the system. References Telecommunications engineering
Speed of service
[ "Engineering" ]
104
[ "Electrical engineering", "Telecommunications engineering" ]
41,734
https://en.wikipedia.org/wiki/Spread%20spectrum
In telecommunications, especially radio communication, spread spectrum is any of several techniques by which a signal (e.g., an electrical, electromagnetic, or acoustic signal) generated with a particular bandwidth is deliberately spread in the frequency domain over a wider frequency band. Spread-spectrum techniques are used to establish secure communications, to increase resistance to natural interference, noise, and jamming, to prevent detection, to limit power flux density (e.g., in satellite downlinks), and to enable multiple-access communications. Telecommunications Spread spectrum generally makes use of a sequential noise-like signal structure to spread the normally narrowband information signal over a relatively wideband (radio) band of frequencies. The receiver correlates the received signals to retrieve the original information signal. Originally there were two motivations: either to resist enemy efforts to jam the communications (anti-jam, or AJ), or to hide the fact that communication was even taking place, sometimes called low probability of intercept (LPI). Frequency-hopping spread spectrum (FHSS), direct-sequence spread spectrum (DSSS), time-hopping spread spectrum (THSS), chirp spread spectrum (CSS), and combinations of these techniques are forms of spread spectrum. The first two of these techniques employ pseudorandom number sequences—created using pseudorandom number generators—to determine and control the spreading pattern of the signal across the allocated bandwidth. Wireless standard IEEE 802.11 uses either FHSS or DSSS in its radio interface. Techniques known since the 1940s and used in military communication systems since the 1950s "spread" a radio signal over a wide frequency range several orders of magnitude wider than the minimum requirement. The core principle of spread spectrum is the use of noise-like carrier waves, and, as the name implies, bandwidths much wider than that required for simple point-to-point communication at the same data rate.
Resistance to jamming (interference). Direct sequence (DS) is good at resisting continuous-time narrowband jamming, while frequency hopping (FH) is better at resisting pulse jamming. In DS systems, narrowband jamming affects detection performance about as much as if the amount of jamming power is spread over the whole signal bandwidth, where it will often not be much stronger than background noise. By contrast, in narrowband systems where the signal bandwidth is low, the received signal quality will be severely lowered if the jamming power happens to be concentrated on the signal bandwidth. Resistance to eavesdropping. The spreading sequence (in DS systems) or the frequency-hopping pattern (in FH systems) is often unknown to anyone for whom the signal is unintended, in which case it obscures the signal and reduces the chance of an adversary making sense of it. Moreover, for a given noise power spectral density (PSD), spread-spectrum systems require the same amount of energy per bit before spreading as narrowband systems and therefore the same amount of power if the bitrate before spreading is the same, but since the signal power is spread over a large bandwidth, the signal PSD is much lower — often significantly lower than the noise PSD — so that the adversary may be unable to determine whether the signal exists at all. However, for mission-critical applications, particularly those employing commercially available radios, spread-spectrum radios do not provide adequate security unless, at a minimum, long nonlinear spreading sequences are used and the messages are encrypted. Resistance to fading. The high bandwidth occupied by spread-spectrum signals offers some frequency diversity; i.e., it is unlikely that the signal will encounter severe multipath fading over its whole bandwidth. In direct-sequence systems, the signal can be detected by using a rake receiver.
Multiple access capability, known as code-division multiple access (CDMA) or code-division multiplexing (CDM). Multiple users can transmit simultaneously in the same frequency band as long as they use different spreading sequences. Invention of frequency hopping The idea of protecting radio transmissions against interference dates back to the beginning of radio wave signaling. In 1899, Guglielmo Marconi experimented with frequency-selective reception in an attempt to minimize interference. The concept of frequency hopping was adopted by the German radio company Telefunken and also described in part of a 1903 US patent by Nikola Tesla. Radio pioneer Jonathan Zenneck's 1908 German book Wireless Telegraphy describes the process and notes that Telefunken was using it previously. It saw limited use by the German military in World War I, was put forward by Polish engineer Leonard Danilewicz in 1929, appeared in a 1930s patent by Willem Broertjes (issued Aug. 2, 1932), and in the top-secret US Army Signal Corps World War II communications system named SIGSALY. During World War II, Golden Age of Hollywood actress Hedy Lamarr and avant-garde composer George Antheil developed a jamming-resistant radio guidance system intended for use in Allied torpedoes, patenting the device under "Secret Communications System" on August 11, 1942. Their approach was unique in that frequency coordination was done with paper player piano rolls, an approach that was never put into practice.
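The despreading-by-correlation idea behind DSSS and CDMA can be illustrated with a toy baseband sketch. The chip sequence, lengths, and interference model below are arbitrary choices for illustration, not any standard's parameters:

```python
import random

rng = random.Random(42)
chips_per_bit = 16
spreading_code = [rng.choice([-1, 1]) for _ in range(chips_per_bit)]

def spread(bits):
    # Each +/-1 data bit is multiplied by the whole chip sequence.
    return [b * c for b in bits for c in spreading_code]

def despread(chips):
    # Correlate each group of chips with the code; the sign gives the bit.
    bits = []
    for i in range(0, len(chips), chips_per_bit):
        corr = sum(x * c for x, c in zip(chips[i:i + chips_per_bit], spreading_code))
        bits.append(1 if corr >= 0 else -1)
    return bits

data = [1, -1, -1, 1]
tx = spread(data)
# Model a narrowband (DC-like) interferer as a constant offset on every chip.
rx = [x + 0.8 for x in tx]
print(despread(rx) == data)   # True: correlation averages the interference away
```

A second user with a different (ideally orthogonal) spreading code would correlate near zero against this code, which is the essence of CDMA.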
In fact, a perfect clock signal would have all its energy concentrated at a single frequency (the desired clock frequency) and its harmonics. Background Practical synchronous digital systems radiate electromagnetic energy on a number of narrow bands spread around the clock frequency and its harmonics, resulting in a frequency spectrum that, at certain frequencies, can exceed the regulatory limits for electromagnetic interference (e.g. those of the FCC in the United States, JEITA in Japan and the IEC in Europe). Spread-spectrum clocking avoids this problem by reducing the peak radiated energy and, therefore, its electromagnetic emissions, thus complying with electromagnetic compatibility (EMC) regulations. It has become a popular technique to gain regulatory approval because it requires only simple equipment modification. It is even more popular in portable electronics devices because of faster clock speeds and increasing integration of high-resolution LCD displays into ever smaller devices. As these devices are designed to be lightweight and inexpensive, traditional passive, electronic measures to reduce EMI, such as capacitors or metal shielding, are not viable. Active EMI reduction techniques such as spread-spectrum clocking are needed in these cases. Method In PCIe, USB 3.0, and SATA systems, the most common technique is downspreading, via frequency modulation with a lower-frequency source. Spread-spectrum clocking, like other kinds of dynamic frequency change, can also create challenges for designers. Principal among these is clock/data misalignment, or clock skew. A phase-locked loop on the receiving side needs a high enough bandwidth to correctly track a spread-spectrum clock. Even though SSC compatibility is mandatory on SATA receivers, it is not uncommon to find expander chips having problems dealing with such a clock. Consequently, an ability to disable spread-spectrum clocking in computer systems is considered useful.
Effect Note that this method does not reduce total radiated energy, and therefore systems are not necessarily less likely to cause interference. Spreading energy over a larger bandwidth effectively reduces electrical and magnetic readings within narrow bandwidths. Typical measuring receivers used by EMC testing laboratories divide the electromagnetic spectrum into frequency bands approximately 120 kHz wide. If the system under test were to radiate all its energy in a narrow bandwidth, it would register a large peak. Distributing this same energy into a larger bandwidth prevents systems from putting enough energy into any one narrow band to exceed the statutory limits. The usefulness of this method as a means to reduce real-life interference problems is often debated, as it is perceived that spread-spectrum clocking hides rather than resolves higher radiated energy issues by simple exploitation of loopholes in EMC legislation or certification procedures. This situation results in electronic equipment sensitive to narrow bandwidth(s) experiencing much less interference, while those with broadband sensitivity, or even operated at other higher frequencies (such as a radio receiver tuned to a different station), will experience more interference. FCC certification testing is often completed with the spread-spectrum function enabled in order to reduce the measured emissions to within acceptable legal limits. However, the spread-spectrum functionality may be disabled by the user in some cases. As an example, in the area of personal computers, some BIOS writers include the ability to disable spread-spectrum clock generation as a user setting, thereby defeating the purpose of the EMI regulations. This might be considered a loophole, but is generally overlooked as long as spread-spectrum is enabled by default.
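The narrowband-measurement effect can be illustrated by comparing the energy a plain clock and a frequency-modulated clock place in the clock's own frequency bin, which stands in for an EMI receiver's narrow filter. This sketch uses sinusoidal phase modulation for simplicity (real SSCG typically uses a triangular down-spread profile), and all values are illustrative:

```python
import cmath, math

n = 2048
f_clk = 64 / n      # clock frequency in cycles/sample (an exact DFT bin)
f_mod = 2 / n       # slow modulation frequency
depth = 0.05        # peak relative frequency deviation (illustrative)

def dft_bin(samples, k):
    # Magnitude of a single DFT bin: what a narrow measuring filter sees.
    n = len(samples)
    return abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                   for i, x in enumerate(samples)))

# Unmodulated clock: all energy sits in bin 64.
plain = [math.cos(2 * math.pi * f_clk * i) for i in range(n)]

# Spread clock: sinusoidal phase modulation with modulation index beta.
beta = depth * f_clk / f_mod
spread = [math.cos(2 * math.pi * f_clk * i + beta * math.sin(2 * math.pi * f_mod * i))
          for i in range(n)]

plain_mag = dft_bin(plain, 64)
spread_mag = dft_bin(spread, 64)
print(spread_mag < 0.7 * plain_mag)   # True: less energy in any single bin
```

The total energy of the two signals is essentially the same; only its distribution across bins changes, which is exactly the debated point above.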
See also Direct-sequence spread spectrum Electromagnetic compatibility (EMC) Electromagnetic interference (EMI) Frequency allocation Frequency-hopping spread spectrum George Antheil HAVE QUICK military frequency-hopping UHF radio voice communication system Hedy Lamarr Open spectrum Orthogonal variable spreading factor (OVSF) Spread-spectrum time-domain reflectometry Time-hopping spread spectrum Ultra-wideband Notes Sources NTIA Manual of Regulations and Procedures for Federal Radio Frequency Management National Information Systems Security Glossary History on spread spectrum, as given in "Smart Mobs, The Next Social Revolution", Howard Rheingold, Władysław Kozaczuk, Enigma: How the German Machine Cipher Was Broken, and How It Was Read by the Allies in World War Two, edited and translated by Christopher Kasparek, Frederick, MD, University Publications of America, 1984, . Andrew S. Tanenbaum and David J. Wetherall, Computer Networks, Fifth Edition. External links A short history of spread spectrum CDMA and spread spectrum Spread Spectrum Scene newsletter Channel access methods Multiplexing Radio resource management Radio modulation modes Spectrum (physical sciences)
Spread spectrum
[ "Physics" ]
2,061
[ "Waves", "Physical phenomena", "Spectrum (physical sciences)" ]
41,735
https://en.wikipedia.org/wiki/Squelch
In telecommunications, squelch is a circuit function that acts to suppress the audio (or video) output of a receiver in the absence of a strong input signal. Essentially, squelch is a specialized type of noise gate designed to suppress weak signals. Squelch is used in two-way radios and VHF/UHF radio scanners to eliminate the sound of noise when the radio is not receiving a desired transmission. Squelch In some designs, the squelch threshold is preset. For example, television squelch settings are usually preset. Receivers in base stations, or repeaters at remote mountain top sites, are usually not adjustable remotely from the control point. In two-way radios (also known as radiotelephones), the received signal level required to unsquelch (un-mute) the receiver may be fixed or adjustable with a knob or a sequence of button presses. Typically the operator will adjust the control until noise is heard, and then adjust in the opposite direction until the noise is squelched. At this point, a weak signal will unsquelch the receiver and be heard by the operator. Further adjustment will increase the level of signal required to unsquelch the receiver. Some applications have the receiver tied to other equipment that uses the audio muting control voltage, as a "signal present" indication; for example, in a repeater the act of the receiver unmuting will switch on the transmitter. Squelch can be opened (turned off), which allows all signals to be heard, including radio frequency noise on the receiving frequency. This can be useful when trying to hear distant or otherwise weak signals, for example in DXing. Carrier squelch is the simplest variant of all. It functions strictly on the signal strength, such as when a television mutes the audio or blanks the video on "empty" channels, or when a walkie-talkie mutes the audio when no signal is present. Carrier squelch uses the receiver's automatic gain control (AGC) to determine the squelch threshold.
Single-sideband modulation (SSB) typically uses carrier squelch. Noise squelch is more reliable than carrier squelch. A noise squelch circuit is noise-operated; it can be used in AM or FM receivers and relies on the receiver quieting in the presence of an AM or FM carrier. To minimize the effects of voice audio on squelch operation, the audio from the receiver's detector is passed through a high-pass filter, typically passing 4,000 Hz (4 kHz) and above, leaving only high frequency noise. The squelch control adjusts the gain of an amplifier which varies the level of the noise coming out of the filter. This noise is rectified, producing a DC voltage when noise is present. The presence of continuous noise on an idle channel creates a DC voltage which turns the receiver audio off. When a signal with little or no noise is received, the noise-derived voltage is reduced and the receiver audio is unmuted. Noise squelch can be defeated by intermodulation present in the high-pass band. For this reason, many receivers with noise squelch will also use a carrier squelch set at a higher threshold than the noise squelch. Tone squelch and selective calling Tone squelch, or another form of selective calling, is sometimes used to solve interference problems. Where more than one user is on the same channel (co-channel users), selective calling addresses a subset of all receivers. Instead of turning on the receiver audio for any signal, the audio turns on only in the presence of the correct selective calling code. This is akin to the use of a lock on a door. A carrier squelch is unlocked and will let any signal in. Selective calling locks out all signals except ones with the correct key to the lock (the correct code). In non-critical uses, selective calling can also be used to hide the presence of interfering signals such as receiver-produced intermodulation.
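The noise-squelch chain (high-pass filter the detector audio, rectify, smooth into a DC-like level, and mute when that level is high) can be sketched as follows. The filter coefficient, time constants, and threshold are arbitrary illustrative choices, not values from any real receiver:

```python
import math, random

def noise_squelch(audio, alpha=0.9, smoothing=0.01, threshold=0.1):
    """Crude noise squelch: a one-pole high-pass isolates hiss; its rectified,
    smoothed level drives the mute decision. All constants are illustrative."""
    prev_in = prev_hp = 0.0
    noise_level = 0.0
    out = []
    for x in audio:
        hp = alpha * (prev_hp + x - prev_in)   # one-pole high-pass filter
        prev_in, prev_hp = x, hp
        # Rectify and smooth: a stand-in for the DC voltage in the text.
        noise_level = (1 - smoothing) * noise_level + smoothing * abs(hp)
        out.append(0.0 if noise_level > threshold else x)   # mute on noise
    return out

# Demo: a low-frequency tone passes; broadband hiss is muted once the level settles.
tone = [0.8 * math.sin(2 * math.pi * 0.002 * i) for i in range(3000)]
rng = random.Random(0)
hiss = [rng.uniform(-1.0, 1.0) for _ in range(3000)]
print(noise_squelch(tone) == tone)                           # True
print(all(v == 0.0 for v in noise_squelch(hiss)[-500:]))     # True
```

The smoothing constant plays the role of the attack/decay time of a real squelch circuit: a quiet carrier takes a moment to unmute, just as the rectified voltage in the hardware version takes time to decay.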
Receivers with poor specifications—such as inexpensive police scanners or low-cost mobile radios—cannot reject the strong signals present in urban environments. The interference will still be present, and will still degrade system performance, but by using selective calling the user will not have to hear the noises produced by receiving the interference. Four different techniques are commonly used. Selective calling can be regarded as a form of in-band signaling. CTCSS CTCSS (Continuous Tone-Coded Squelch System) continuously superimposes any one of about 50 low-pitch audio tones on the transmitted signal, ranging from 67 to 254 Hz. The original tone set had 10 tones, was later expanded to 32, and has been expanded even further over the years. CTCSS is often called PL tone (for Private Line, a trademark of Motorola), or simply tone squelch. General Electric's implementation of CTCSS is called Channel Guard (or CG). RCA Corporation used the name Quiet Channel, or QC. There are many other company-specific names used by radio vendors to describe compatible options. Any CTCSS system that has compatible tones is interchangeable. Old and new radios with CTCSS and radios across manufacturers are compatible. For those PMR446 radios with 38 codes, the codes 0 to 38 are CTCSS tones. SelCall Selcall (Selective Calling) transmits a burst of up to five in-band audio tones at the beginning of each transmission. This feature (sometimes called "tone burst") is common in European systems. Early systems used one tone (commonly called "Tone Burst"). Several tones were used, the most common being 1,750 Hz, which is still used in European amateur radio repeater systems. The addressing scheme provided by one tone was not enough, so a two-tone system was devised—one tone followed by a second tone (sometimes called a "1+1" system).
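A CTCSS-style decoder needs a narrowband detector for a single sub-audible tone; the Goertzel algorithm is a common way to measure the power at one frequency. This is an illustrative sketch, not any vendor's implementation (100.0 Hz and 127.3 Hz are standard CTCSS tone frequencies; the sample rate and block length are arbitrary):

```python
import math

def goertzel_power(samples, sample_rate, tone_hz):
    # Goertzel algorithm: squared magnitude of one frequency component.
    n = len(samples)
    k = round(n * tone_hz / sample_rate)     # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

fs = 8000                      # sample rate, Hz
n = 4000                       # 0.5 s block -> 2 Hz bin spacing
sig = [math.sin(2 * math.pi * 100.0 * i / fs) for i in range(n)]   # 100.0 Hz tone

p_100 = goertzel_power(sig, fs, 100.0)     # power at the transmitted tone
p_127 = goertzel_power(sig, fs, 127.3)     # power at a neighbouring tone
print(p_100 > 100 * p_127)                 # True: only the 100.0 Hz detector fires
```

A receiver would run one such detector per programmed tone and unmute only when the expected tone's power dominates, which is how a tone "unlocks" the squelch.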
Motorola later marketed a system called "Quik-Call" that used two simultaneous tones followed by two more simultaneous tones (sometimes called a "2+2" system), which was heavily used by fire department dispatch systems in the US. Later selective call systems used paging system technology that made use of a burst of five sequential tones. DCS DCS (Digital-Coded Squelch), generically known as CDCSS (Continuous Digital-Coded Squelch System), was designed as the digital replacement for CTCSS. In the same way that a single CTCSS tone would be used on an entire group of radios, the same DCS code is used in a group of radios. DCS is also referred to as Digital Private Line (or DPL), another trademark of Motorola, and likewise, General Electric's implementation of DCS is referred to as Digital Channel Guard (or DCG). Despite the fact that it is not a tone, DCS is also called DTCS (Digital Tone Code Squelch) by Icom, and other names by other manufacturers. Radios with DCS options are generally compatible, provided the radio's encoder-decoder will use the same code as radios in the existing system. DCS adds a 134.4 bit/s (sub-audible) bitstream to the transmitted audio. The code word is a 23-bit Golay (23,12) code which has the ability to detect and correct errors of 3 or fewer bits. The word consists of 12 data bits followed by 11 check bits. The last 3 data bits are a fixed '001'; this leaves 9 code bits (512 possibilities) which are conventionally represented as a 3-digit octal number. Note that the first bit transmitted is the LSB, so the code is "backwards" from the transmitted bit order. Only 83 of the 512 possible codes are available, to prevent falsing due to alignment collisions. DCS codes are standardized by the Telecommunications Industry Association, with 83 codes found in their most recent standard; however, some systems use non-standard codes.
For those PMR446 radios with 121 codes, the codes 39 to 121 are DCS codes. XTCSS XTCSS is the newest signalling technique, and provides 99 codes with the added advantage of "silent operation". XTCSS-fitted radios are purposed to enjoy more privacy and flexibility of operation. XTCSS is implemented as a combination of CTCSS and in-band signalling. Uses Squelch was invented first and is still in wide use in two-way radio. Squelch of any kind is used to indicate loss of signal, which is used to keep commercial and amateur radio repeaters from continually transmitting. Since a carrier squelch receiver cannot tell a valid carrier from a spurious signal (noise, etc.), CTCSS is often used as well, as it avoids false keyups. Use of CTCSS is especially helpful on congested frequencies or on frequency bands prone to skip and during band openings. Professional wireless microphones use squelch to avoid reproducing noise when the receiver does not receive enough signal from the microphone. Most professional models have adjustable squelch, usually set with a screwdriver adjustment or front-panel control on the receiver. See also Dynamic noise limiter Noise gate References Radio electronics Telecommunications equipment
Squelch
[ "Engineering" ]
1,924
[ "Radio electronics" ]
41,741
https://en.wikipedia.org/wiki/Standing%20wave
In physics, a standing wave, also known as a stationary wave, is a wave that oscillates in time but whose peak amplitude profile does not move in space. The peak amplitude of the wave oscillations at any point in space is constant with respect to time, and the oscillations at different points throughout the wave are in phase. The locations at which the absolute value of the amplitude is minimum are called nodes, and the locations where the absolute value of the amplitude is maximum are called antinodes. Standing waves were first described scientifically by Michael Faraday in 1831. Faraday observed standing waves on the surface of a liquid in a vibrating container. Franz Melde coined the term "standing wave" (German: stehende Welle or Stehwelle) around 1860 and demonstrated the phenomenon in his classic experiment with vibrating strings. This phenomenon can occur because the medium is moving in the direction opposite to the movement of the wave, or it can arise in a stationary medium as a result of interference between two waves traveling in opposite directions. The most common cause of standing waves is the phenomenon of resonance, in which standing waves occur inside a resonator due to interference between waves reflected back and forth at the resonator's resonant frequency. For waves of equal amplitude traveling in opposing directions, there is on average no net propagation of energy. Moving medium As an example of the first type, under certain meteorological conditions standing waves form in the atmosphere in the lee of mountain ranges. Such waves are often exploited by glider pilots. Standing waves and hydraulic jumps also form on fast flowing river rapids and tidal currents such as the Saltstraumen maelstrom. 
In river currents, this requires shallow, fast-flowing water in which the inertia of the water overcomes its gravity because of the supercritical flow speed (Froude number 1.7–4.5; surpassing 4.5 results in a direct standing wave), so that the flow is neither significantly slowed down by the obstacle nor pushed to the side. Many standing river waves are popular river surfing breaks. Opposing waves As an example of the second type, a standing wave in a transmission line is a wave in which the distribution of current, voltage, or field strength is formed by the superposition of two waves of the same frequency propagating in opposite directions. The effect is a series of nodes (zero displacement) and anti-nodes (maximum displacement) at fixed points along the transmission line. Such a standing wave may be formed when a wave is transmitted into one end of a transmission line and is reflected from the other end by an impedance mismatch, i.e., discontinuity, such as an open circuit or a short. The failure of the line to transfer power at the standing wave frequency will usually result in attenuation distortion. In practice, losses in the transmission line and other components mean that a perfect reflection and a pure standing wave are never achieved. The result is a partial standing wave, which is a superposition of a standing wave and a traveling wave. The degree to which the wave resembles either a pure standing wave or a pure traveling wave is measured by the standing wave ratio (SWR). Another example is standing waves in the open ocean formed by waves with the same wave period moving in opposite directions. These may form near storm centres, or from reflection of a swell at the shore, and are the source of microbaroms and microseisms. Mathematical description This section considers representative one- and two-dimensional cases of standing waves.
First, an example of an infinite length string shows how identical waves traveling in opposite directions interfere to produce standing waves. Next, two finite length string examples with different boundary conditions demonstrate how the boundary conditions restrict the frequencies that can form standing waves. Next, the example of sound waves in a pipe demonstrates how the same principles can be applied to longitudinal waves with analogous boundary conditions. Standing waves can also occur in two- or three-dimensional resonators. With standing waves on two-dimensional membranes such as drumheads, illustrated in the animations above, the nodes become nodal lines, lines on the surface at which there is no movement, that separate regions vibrating with opposite phase. These nodal line patterns are called Chladni figures. In three-dimensional resonators, such as musical instrument sound boxes and microwave cavity resonators, there are nodal surfaces. This section includes a two-dimensional standing wave example with a rectangular boundary to illustrate how to extend the concept to higher dimensions. Standing wave on an infinite length string To begin, consider a string of infinite length along the x-axis that is free to be stretched transversely in the y direction. For a harmonic wave traveling to the right along the string, the string's displacement in the y direction as a function of position x and time t is yR(x,t) = ymax sin(2πx/λ − ωt). The displacement in the y-direction for an identical harmonic wave traveling to the left is yL(x,t) = ymax sin(2πx/λ + ωt), where ymax is the amplitude of the displacement of the string for each wave, ω is the angular frequency or equivalently 2π times the frequency f, λ is the wavelength of the wave. For identical right- and left-traveling waves on the same string, the total displacement of the string is the sum of yR and yL: y(x,t) = ymax sin(2πx/λ − ωt) + ymax sin(2πx/λ + ωt). Using the trigonometric sum-to-product identity sin a + sin b = 2 sin((a + b)/2) cos((a − b)/2), this becomes the standing-wave solution y(x,t) = 2 ymax sin(2πx/λ) cos(ωt), which does not describe a traveling wave.
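The superposition result can be checked numerically: the sum of the two traveling waves matches the standing-wave form 2 ymax sin(2πx/λ) cos(ωt) at every position and time, and positions where sin(2πx/λ) = 0 never move. The parameter values below are arbitrary:

```python
import math

y_max, lam, freq = 1.0, 2.0, 3.0        # arbitrary amplitude, wavelength, frequency
k = 2 * math.pi / lam                   # wavenumber
omega = 2 * math.pi * freq              # angular frequency

def y_right(x, t): return y_max * math.sin(k * x - omega * t)
def y_left(x, t):  return y_max * math.sin(k * x + omega * t)
def y_standing(x, t): return 2 * y_max * math.sin(k * x) * math.cos(omega * t)

# The sum of the travelling waves equals the standing-wave form everywhere.
max_err = max(abs(y_right(x, t) + y_left(x, t) - y_standing(x, t))
              for x in (0.0, 0.3, 0.77, 1.5) for t in (0.0, 0.11, 0.4))
print(max_err < 1e-12)                           # True

# A node at x = lambda/2 (an even multiple of a quarter wavelength) never moves.
print(abs(y_standing(lam / 2, 0.123)) < 1e-12)   # True
```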
At any position x, y(x,t) simply oscillates in time with an amplitude that varies in the x-direction as 2 ymax sin(2πx/λ). The animation at the beginning of this article depicts what is happening. As the left-traveling blue wave and right-traveling green wave interfere, they form the standing red wave that does not travel and instead oscillates in place. Because the string is of infinite length, it has no boundary condition for its displacement at any point along the x-axis. As a result, a standing wave can form at any frequency. At locations on the x-axis that are even multiples of a quarter wavelength, the amplitude is always zero. These locations are called nodes. At locations on the x-axis that are odd multiples of a quarter wavelength the amplitude is maximal, with a value of twice the amplitude of the right- and left-traveling waves that interfere to produce this standing wave pattern. These locations are called anti-nodes. The distance between two consecutive nodes or anti-nodes is half the wavelength, λ/2. Standing wave on a string with two fixed ends Next, consider a string with fixed ends at x = 0 and x = L. The string will have some damping as it is stretched by traveling waves, but assume the damping is very small. Suppose that at the x = 0 fixed end a sinusoidal force is applied that drives the string up and down in the y-direction with a small amplitude at some frequency f. In this situation, the driving force produces a right-traveling wave. That wave reflects off the right fixed end and travels back to the left, reflects again off the left fixed end and travels back to the right, and so on. Eventually, a steady state is reached where the string has identical right- and left-traveling waves as in the infinite-length case and the power dissipated by damping in the string equals the power supplied by the driving force so the waves have constant amplitude.
The standing-wave solution y(x,t) = 2 ymax sin(2πx/λ) cos(ωt) still describes the standing wave pattern that can form on this string, but now it is subject to boundary conditions where y = 0 at x = 0 and at x = L, because the string is fixed at x = L and because we assume the driving force at the fixed x = 0 end has small amplitude. This boundary condition is in the form of the Sturm–Liouville formulation. Checking the values of y at the two ends, y(0,t) = 0 and y(L,t) = 2 ymax sin(2πL/λ) cos(ωt). The latter boundary condition is satisfied when sin(2πL/λ) = 0. L is given, so the boundary condition restricts the wavelength of the standing waves to λ = 2L/n, n = 1, 2, 3, .... Waves can only form standing waves on this string if they have a wavelength that satisfies this relationship with L. If waves travel with speed v along the string, then equivalently the frequency of the standing waves is restricted to f = v/λ = nv/(2L). The standing wave with n = 1 oscillates at the fundamental frequency and has a wavelength that is twice the length of the string. Higher integer values of n correspond to modes of oscillation called harmonics or overtones. Any standing wave on the string will have n + 1 nodes including the fixed ends and n anti-nodes. To compare this example's nodes to the description of nodes for standing waves in the infinite length string, the wavelength restriction can be rewritten as λ = 4L/n, n = 2, 4, 6, .... In this variation of the expression for the wavelength, n must be even. Cross multiplying we see that because L is a node, it is an even multiple of a quarter wavelength: L = nλ/4, n = 2, 4, 6, .... This example demonstrates a type of resonance and the frequencies that produce standing waves can be referred to as resonant frequencies. Standing wave on a string with one fixed end Next, consider the same string of length L, but this time it is only fixed at x = 0. At x = L, the string is free to move in the y direction. For example, the string might be tied at x = L to a ring that can slide freely up and down a pole. The string again has small damping and is driven by a small driving force at x = 0.
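For a string fixed at both ends, the restrictions λ = 2L/n and f = nv/(2L) can be tabulated directly. The length and wave speed below are illustrative values only:

```python
def string_harmonics(length, wave_speed, n_max=5):
    # For a string fixed at both ends: lambda_n = 2L/n, f_n = n*v/(2L).
    return [(2 * length / n, n * wave_speed / (2 * length))
            for n in range(1, n_max + 1)]

# Illustrative values: a 0.65 m string with v = 143 m/s has a 110 Hz fundamental.
modes = string_harmonics(0.65, 143.0)
for wavelength, frequency in modes:
    print(f"lambda = {wavelength:.3f} m, f = {frequency:.1f} Hz")
```

Each successive mode adds one node and one anti-node, and the frequencies form the integer-multiple harmonic series described above.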
In this case, the standing-wave solution y(x,t) = 2 ymax sin(2πx/λ) cos(ωt) still describes the standing wave pattern that can form on the string, and the string has the same boundary condition of y = 0 at x = 0. However, at x = L, where the string can move freely, there should be an anti-node with maximal amplitude of y. Equivalently, this boundary condition of the "free end" can be stated as ∂y/∂x = 0 at x = L, which is in the form of the Sturm–Liouville formulation. The intuition for this boundary condition at x = L is that the motion of the "free end" will follow that of the point to its left. Reviewing the standing-wave solution, for x = L the largest amplitude of y occurs when ∂y/∂x = 0, or cos(2πL/λ) = 0. This leads to a different set of wavelengths than in the two-fixed-ends example. Here, the wavelength of the standing waves is restricted to λ = 4L/n, n = 1, 3, 5, .... Equivalently, the frequency is restricted to f = nv/(4L). In this example n only takes odd values. Because L is an anti-node, it is an odd multiple of a quarter wavelength. Thus the fundamental mode in this example only has one quarter of a complete sine cycle: zero at x = 0 and the first peak at x = L; the first harmonic has three quarters of a complete sine cycle, and so on. This example also demonstrates a type of resonance and the frequencies that produce standing waves are called resonant frequencies. Standing wave in a pipe Consider a standing wave in a pipe of length L. The air inside the pipe serves as the medium for longitudinal sound waves traveling to the right or left through the pipe. While the transverse waves on the string from the previous examples vary in their displacement perpendicular to the direction of wave motion, the waves traveling through the air in the pipe vary in terms of their pressure and longitudinal displacement along the direction of wave motion. The wave propagates by alternately compressing and expanding air in segments of the pipe, which displaces the air slightly from its rest position and transfers energy to neighboring segments through the forces exerted by the alternating high and low air pressures.
Equations resembling those for the wave on a string can be written for the change in pressure Δp due to a right- or left-traveling wave in the pipe, Δp_R = p_max sin(2πx/λ − ωt) and Δp_L = p_max sin(2πx/λ + ωt), where p_max is the pressure amplitude or the maximum increase or decrease in air pressure due to each wave, ω is the angular frequency or equivalently 2π times the frequency f, λ is the wavelength of the wave. If identical right- and left-traveling waves travel through the pipe, the resulting superposition is described by the sum Δp = Δp_R + Δp_L = 2p_max sin(2πx/λ) cos(ωt). This formula for the pressure is of the same form as Equation (), so a stationary pressure wave forms that is fixed in space and oscillates in time. If the end of a pipe is closed, the pressure is maximal since the closed end of the pipe exerts a force that restricts the movement of air. This corresponds to a pressure anti-node (which is a node for molecular motions, because the molecules near the closed end cannot move). If the end of the pipe is open, the pressure variations are very small, corresponding to a pressure node (which is an anti-node for molecular motions, because the molecules near the open end can move freely). The exact location of the pressure node at an open end is actually slightly beyond the open end of the pipe, so the effective length of the pipe for the purpose of determining resonant frequencies is slightly longer than its physical length. This difference in length is ignored in this example. In terms of reflections, open ends partially reflect waves back into the pipe, allowing some energy to be released into the outside air. Ideally, closed ends reflect the entire wave back in the other direction. First consider a pipe that is open at both ends, for example an open organ pipe or a recorder. Given that the pressure must be zero at both open ends, the boundary conditions are analogous to the string with two fixed ends, which only occurs when the wavelength of standing waves is λ = 2L/n, n = 1, 2, 3, …, or equivalently when the frequency is f = nv/(2L), where v is the speed of sound.
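The open-pipe resonances just described can be tabulated with a short sketch. The pipe length is hypothetical; 343 m/s is the approximate speed of sound in room-temperature air:

```python
# Resonant frequencies of a pipe open at both ends: f_n = n * v / (2 * L),
# with every harmonic allowed. L is a hypothetical pipe length.
v = 343.0   # approximate speed of sound in air, m/s
L = 0.5     # pipe length in metres

def open_pipe_frequencies(count, L, v):
    # first `count` harmonics of the open-open pipe
    return [n * v / (2 * L) for n in range(1, count + 1)]

freqs = open_pipe_frequencies(3, L, v)
print(freqs)  # [343.0, 686.0, 1029.0]
```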
Next, consider a pipe that is open at x = 0 (and therefore has a pressure node) and closed at x = L (and therefore has a pressure anti-node). The closed "free end" boundary condition for the pressure at x = L can be stated as ∂(Δp)/∂x = 0, which is in the form of the Sturm–Liouville formulation. The intuition for this boundary condition at x = L is that the pressure of the closed end will follow that of the point to its left. Examples of this setup include a bottle and a clarinet. This pipe has boundary conditions analogous to the string with only one fixed end. Its standing waves have wavelengths restricted to λ = 4L/n, n = 1, 3, 5, …, or equivalently the frequency of standing waves is restricted to f = nv/(4L), n = 1, 3, 5, … For the case where one end is closed, n only takes odd values just like in the case of the string fixed at only one end. So far, the wave has been written in terms of its pressure as a function of position x and time. Alternatively, the wave can be written in terms of its longitudinal displacement of air, where air in a segment of the pipe moves back and forth slightly in the x-direction as the pressure varies and waves travel in either or both directions. The change in pressure Δp and longitudinal displacement s are related as Δp = −ρv² ∂s/∂x, where ρ is the density of the air. In terms of longitudinal displacement, closed ends of pipes correspond to nodes since air movement is restricted and open ends correspond to anti-nodes since the air is free to move. A similar, easier to visualize phenomenon occurs in longitudinal waves propagating along a spring. We can also consider a pipe that is closed at both ends. In this case, both ends will be pressure anti-nodes or equivalently both ends will be displacement nodes. This example is analogous to the case where both ends are open, except the standing wave pattern has a phase shift along the x-direction to shift the location of the nodes and anti-nodes.
For example, the longest wavelength that resonates–the fundamental mode–is again twice the length of the pipe, except that the ends of the pipe have pressure anti-nodes instead of pressure nodes. Between the ends there is one pressure node. In the case of two closed ends, the wavelength is again restricted to λ = 2L/n and the frequency is again restricted to f = nv/(2L). A Rubens tube provides a way to visualize the pressure variations of the standing waves in a tube with two closed ends. 2D standing wave with a rectangular boundary Next, consider transverse waves that can move along a two dimensional surface within a rectangular boundary of length Lx in the x-direction and length Ly in the y-direction. Examples of this type of wave are water waves in a pool or waves on a rectangular sheet that has been pulled taut. The waves displace the surface in the z-direction, with z = 0 defined as the height of the surface when it is still. In two dimensions and Cartesian coordinates, the wave equation is ∂²z/∂x² + ∂²z/∂y² = (1/c²) ∂²z/∂t², where z(x,y,t) is the displacement of the surface, c is the speed of the wave. To solve this differential equation, let's first solve for its Fourier transform, with Z(x, y, ω) = ∫ z(x, y, t) e^(−iωt) dt. Taking the Fourier transform of the wave equation, ∂²Z/∂x² + ∂²Z/∂y² = −(ω/c)² Z(x, y, ω). This is an eigenvalue problem where the frequencies correspond to eigenvalues that then correspond to frequency-specific modes or eigenfunctions. Specifically, this is a form of the Helmholtz equation and it can be solved using separation of variables. Assume Z(x, y, ω) = X(x)Y(y). Dividing the Helmholtz equation by Z, (1/X) d²X/dx² + (1/Y) d²Y/dy² + (ω/c)² = 0. This leads to two coupled ordinary differential equations. The x term equals a constant with respect to x that we can define as (1/X) d²X/dx² = −kx². Solving for X(x), X(x) = A_kx e^(i kx x) + B_kx e^(−i kx x). This x-dependence is sinusoidal–recalling Euler's formula–with constants A_kx and B_kx determined by the boundary conditions.
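As a quick sanity check on the separated x-equation, a finite-difference sketch confirms that a sinusoid sin(kx) does satisfy X″ = −k²X. The wavenumber, step size, and sample point below are arbitrary test values:

```python
import math

# Finite-difference check that X(x) = sin(k x) solves X'' = -k**2 * X.
# k, h, and the sample point x0 are arbitrary test values.
k, h, x0 = 3.0, 1e-5, 0.4

def X(x):
    return math.sin(k * x)

# central difference approximation to the second derivative at x0
second_deriv = (X(x0 + h) - 2 * X(x0) + X(x0 - h)) / h ** 2
print(second_deriv, -k ** 2 * X(x0))   # the two values agree closely
```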
Likewise, the y term equals a constant with respect to y that we can define as (1/Y) d²Y/dy² = −ky², and the dispersion relation for this wave is therefore ω = c√(kx² + ky²). Solving the differential equation for the y term, Y(y) = C_ky e^(i ky y) + D_ky e^(−i ky y). Multiplying these functions together and applying the inverse Fourier transform, z(x,y,t) is a superposition of modes where each mode is the product of sinusoidal functions for x, y, and t. The constants that determine the exact sinusoidal functions depend on the boundary conditions and initial conditions. To see how the boundary conditions apply, consider an example like the sheet that has been pulled taut where z(x,y,t) must be zero all around the rectangular boundary. For the x dependence, z(x,y,t) must vary in a way that it can be zero at both x = 0 and x = Lx for all values of y and t. As in the one dimensional example of the string fixed at both ends, the sinusoidal function that satisfies this boundary condition is sin(kx x), with kx restricted to kx = nπ/Lx, n = 1, 2, 3, … Likewise, the y dependence of z(x,y,t) must be zero at both y = 0 and y = Ly, which is satisfied by sin(ky y) with ky = mπ/Ly, m = 1, 2, 3, … Restricting the wave numbers to these values also restricts the frequencies that resonate to ω = cπ√((n/Lx)² + (m/Ly)²). If the initial conditions for z(x,y,0) and its time derivative ż(x,y,0) are chosen so the t-dependence is a cosine function, then standing waves for this system take the form z(x, y, t) = z_max sin(kx x) sin(ky y) cos(ωt). So, standing waves inside this fixed rectangular boundary oscillate in time at certain resonant frequencies parameterized by the integers n and m. As they oscillate in time, they do not travel and their spatial variation is sinusoidal in both the x- and y-directions such that they satisfy the boundary conditions. The fundamental mode, n = 1 and m = 1, has a single antinode in the middle of the rectangle. Varying n and m gives complicated but predictable two-dimensional patterns of nodes and antinodes inside the rectangle.
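The restricted mode frequencies can be enumerated with a short sketch. The wave speed and side lengths below are hypothetical, and a square boundary is chosen deliberately so that modes sharing a frequency become visible:

```python
import math
from collections import defaultdict

# Mode frequencies of the rectangular boundary:
# w_nm = c * pi * sqrt((n / Lx)**2 + (m / Ly)**2).
# c, Lx, and Ly are hypothetical; a square boundary exposes degeneracies.
c, Lx, Ly = 1.0, 1.0, 1.0

def omega(n, m):
    return c * math.pi * math.sqrt((n / Lx) ** 2 + (m / Ly) ** 2)

# On a square, the frequency depends only on n^2 + m^2, so grouping modes
# by that sum collects the modes that resonate together.
groups = defaultdict(list)
for n in range(1, 8):
    for m in range(1, 8):
        groups[n * n + m * m].append((n, m))

print(omega(1, 1))   # fundamental: c * pi * sqrt(2) / L
print(groups[50])    # (1, 7), (5, 5), (7, 1) share one frequency
```

The grouping by n² + m² anticipates the degeneracy discussed next: distinct mode shapes can share a single resonant frequency.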
From the dispersion relation, in certain situations different modes–meaning different combinations of n and m–may resonate at the same frequency even though they have different shapes for their x- and y-dependence. For example, if the boundary is square, Lx = Ly = L, the modes n = 1 and m = 7, n = 7 and m = 1, and n = 5 and m = 5 all resonate at ω = cπ√50 / L. Recalling that ω determines the eigenvalue in the Helmholtz equation above, the number of modes corresponding to each frequency relates to the frequency's multiplicity as an eigenvalue. Standing wave ratio, phase, and energy transfer If the two oppositely moving traveling waves are not of the same amplitude, they will not cancel completely at the nodes, the points where the waves are 180° out of phase, so the amplitude of the standing wave will not be zero at the nodes, but merely a minimum. Standing wave ratio (SWR) is the ratio of the amplitude at the antinode (maximum) to the amplitude at the node (minimum). A pure standing wave will have an infinite SWR. It will also have a constant phase at any point in space (but it may undergo a 180° inversion every half cycle). A finite, non-zero SWR indicates a wave that is partially stationary and partially travelling. Such waves can be decomposed into a superposition of two waves: a travelling wave component and a stationary wave component. An SWR of one indicates that the wave does not have a stationary component – it is purely a travelling wave, since the ratio of amplitudes is equal to 1. A pure standing wave does not transfer energy from the source to the destination. However, the wave is still subject to losses in the medium. Such losses will manifest as a finite SWR, indicating a travelling wave component leaving the source to supply the losses. Even though the SWR is now finite, it may still be the case that no energy reaches the destination because the travelling component is purely supplying the losses. However, in a lossless medium, a finite SWR implies a definite transfer of energy to the destination.
Examples One easy example to understand standing waves is two people shaking either end of a jump rope. If they shake in sync the rope can form a regular pattern of waves oscillating up and down, with stationary points along the rope where the rope is almost still (nodes) and points where the arc of the rope is maximum (antinodes). Acoustic resonance Standing waves are also observed in physical media such as strings and columns of air. Any waves traveling along the medium will reflect back when they reach the end. This effect is most noticeable in musical instruments where, at various multiples of a vibrating string or air column's natural frequency, a standing wave is created, allowing harmonics to be identified. Nodes occur at fixed ends and anti-nodes at open ends. If fixed at only one end, only odd-numbered harmonics are available. At the open end of a pipe the anti-node will not be exactly at the end as it is altered by its contact with the air and so end correction is used to place it exactly. The density of a string will affect the frequency at which harmonics will be produced; the greater the density the lower the frequency needs to be to produce a standing wave of the same harmonic. Visible light Standing waves are also observed in optical media such as optical waveguides and optical cavities. Lasers use optical cavities in the form of a pair of facing mirrors, which constitute a Fabry–Pérot interferometer. The gain medium in the cavity (such as a crystal) emits light coherently, exciting standing waves of light in the cavity. The wavelength of light is very short (in the range of nanometers, 10−9 m) so the standing waves are microscopic in size. One use for standing light waves is to measure small distances, using optical flats. X-rays Interference between X-ray beams can form an X-ray standing wave (XSW) field. 
Because of the short wavelength of X-rays (less than 1 nanometer), this phenomenon can be exploited for measuring atomic-scale events at material surfaces. The XSW is generated in the region where an X-ray beam interferes with a diffracted beam from a nearly perfect single crystal surface or a reflection from an X-ray mirror. By tuning the crystal geometry or X-ray wavelength, the XSW can be translated in space, causing a shift in the X-ray fluorescence or photoelectron yield from the atoms near the surface. This shift can be analyzed to pinpoint the location of a particular atomic species relative to the underlying crystal structure or mirror surface. The XSW method has been used to clarify the atomic-scale details of dopants in semiconductors, atomic and molecular adsorption on surfaces, and chemical transformations involved in catalysis. Mechanical waves Standing waves can be mechanically induced into a solid medium using resonance. One easy to understand example is two people shaking either end of a jump rope. If they shake in sync, the rope will form a regular pattern with nodes and antinodes and appear to be stationary, hence the name standing wave. Similarly a cantilever beam can have a standing wave imposed on it by applying a base excitation. In this case the free end moves the greatest distance laterally compared to any location along the beam. Such a device can be used as a sensor to track changes in frequency or phase of the resonance of the fiber. One application is as a measurement device for dimensional metrology. Seismic waves Standing surface waves on the Earth are observed as free oscillations of the Earth. Faraday waves The Faraday wave is a non-linear standing wave at the air-liquid interface induced by hydrodynamic instability. It can be used as a liquid-based template to assemble microscale materials. Seiches A seiche is an example of a standing wave in an enclosed body of water. 
It is characterised by the oscillatory behaviour of the water level at either end of the body and typically has a nodal point near the middle of the body where very little change in water level is observed. It should be distinguished from a simple storm surge where no oscillation is present. In sizeable lakes, the period of such oscillations may be between minutes and hours, for example Lake Geneva's longitudinal period is 73 minutes and its transversal seiche has a period of around 10 minutes, while Lake Huron can be seen to have resonances with periods between 1 and 2 hours. See Lake seiches. See also Waves Electronics Notes References External links 1831 introductions 1831 in science 1860s neologisms Michael Faraday Wave mechanics Articles containing video clips
Standing wave
[ "Physics" ]
5,212
[ "Wave mechanics", "Waves", "Physical phenomena", "Classical mechanics" ]
41,742
https://en.wikipedia.org/wiki/Standing%20wave%20ratio
In radio engineering and telecommunications, standing wave ratio (SWR) is a measure of impedance matching of loads to the characteristic impedance of a transmission line or waveguide. Impedance mismatches result in standing waves along the transmission line, and SWR is defined as the ratio of the partial standing wave's amplitude at an antinode (maximum) to the amplitude at a node (minimum) along the line. Voltage standing wave ratio (VSWR) (pronounced "vizwar") is the ratio of maximum to minimum voltage on a transmission line. For example, a VSWR of 1.2 means a peak voltage 1.2 times the minimum voltage along that line, if the line is at least one half wavelength long. An SWR can also be defined as the ratio of the maximum amplitude to minimum amplitude of the transmission line's currents, electric field strength, or the magnetic field strength. Neglecting transmission line loss, these ratios are identical. The power standing wave ratio (PSWR) is defined as the square of the VSWR; however, this deprecated term has no direct physical relation to power actually involved in transmission. SWR is usually measured using a dedicated instrument called an SWR meter. Since SWR is a measure of the load impedance relative to the characteristic impedance of the transmission line in use (which together determine the reflection coefficient as described below), a given SWR meter can interpret the impedance it sees in terms of SWR only if it has been designed for the same particular characteristic impedance as the line. In practice most transmission lines used in these applications are coaxial cables with an impedance of either 50 or 75 ohms, so most SWR meters correspond to one of these. Checking the SWR is a standard procedure in a radio station. Although the same information could be obtained by measuring the load's impedance with an impedance analyzer (or "impedance bridge"), the SWR meter is simpler and more robust for this purpose.
By measuring the magnitude of the impedance mismatch at the transmitter output it reveals problems due to either the antenna or the transmission line. Impedance matching SWR is used as a measure of impedance matching of a load to the characteristic impedance of a transmission line carrying radio frequency (RF) signals. This especially applies to transmission lines connecting radio transmitters and receivers with their antennas, as well as similar uses of RF cables such as cable television connections to TV receivers and distribution amplifiers. Impedance matching is achieved when the source impedance is the complex conjugate of the load impedance. The easiest way of achieving this, and the way that minimizes losses along the transmission line, is for the imaginary part of the complex impedance of both the source and load to be zero, that is, pure resistances, equal to the characteristic impedance of the transmission line. When there is a mismatch between the load impedance and the transmission line, part of the forward wave sent toward the load is reflected back along the transmission line towards the source. The source then sees a different impedance than it expects which can lead to lesser (or in some cases, more) power being supplied by it, the result being very sensitive to the electrical length of the transmission line. Such a mismatch is usually undesired and results in standing waves along the transmission line which magnifies transmission line losses (significant at higher frequencies and for longer cables). The SWR is a measure of the depth of those standing waves and is, therefore, a measure of the matching of the load to the transmission line. A matched load would result in an SWR of 1:1 implying no reflected wave. An infinite SWR represents complete reflection by a load unable to absorb electrical power, with all the incident power reflected back towards the source. 
It should be understood that the match of a load to the transmission line is different from the match of a source to the transmission line or the match of a source to the load seen through the transmission line. For instance, if there is a perfect match between the load impedance Z_load and the source impedance Z_source, that perfect match will remain if the source and load are connected through a transmission line with an electrical length of one half wavelength (or a multiple of one half wavelengths) using a transmission line of any characteristic impedance Z_0. However the SWR will generally not be 1:1, depending only on Z_load and Z_0. With a different length of transmission line, the source will see a different impedance than Z_load which may or may not be a good match to the source. Sometimes this is deliberate, as when a quarter-wave matching section is used to improve the match between an otherwise mismatched source and load. However typical RF sources such as transmitters and signal generators are designed to look into a purely resistive load impedance such as 50Ω or 75Ω, corresponding to common transmission lines' characteristic impedances. In those cases, matching the load to the transmission line, Z_load = Z_0, always ensures that the source will see the same load impedance as if the transmission line weren't there. This is identical to a 1:1 SWR. This condition (Z_load = Z_0) also means that the load seen by the source is independent of the transmission line's electrical length. Since the electrical length of a physical segment of transmission line depends on the signal frequency, violation of this condition means that the impedance seen by the source through the transmission line becomes a function of frequency (especially if the line is long), even if Z_load is frequency-independent. So in practice, a good SWR (near 1:1) implies a transmitter's output seeing the exact impedance it expects for optimum and safe operation.
Relationship to the reflection coefficient The voltage component of a standing wave in a uniform transmission line consists of the forward wave (with complex amplitude V_f) superimposed on the reflected wave (with complex amplitude V_r). A wave is partly reflected when a transmission line is terminated with an impedance unequal to its characteristic impedance. The reflection coefficient Γ can be defined as: Γ = V_r / V_f, or in terms of the load impedance Z_L and the line's characteristic impedance Z_0, Γ = (Z_L − Z_0) / (Z_L + Z_0). Γ is a complex number that describes both the magnitude and the phase shift of the reflection. The simplest cases with Γ measured at the load are: Γ = −1: complete negative reflection, when the line is short-circuited, Γ = 0: no reflection, when the line is perfectly matched, Γ = +1: complete positive reflection, when the line is open-circuited. The SWR directly corresponds to the magnitude of Γ. At some points along the line the forward and reflected waves interfere constructively, exactly in phase, with the resulting amplitude given by the sum of those waves' amplitudes: |V|max = |V_f| + |V_r| = |V_f| (1 + |Γ|). At other points, the waves interfere 180° out of phase with the amplitudes partially cancelling: |V|min = |V_f| − |V_r| = |V_f| (1 − |Γ|). The voltage standing wave ratio is then SWR = |V|max / |V|min = (1 + |Γ|) / (1 − |Γ|). Since the magnitude of Γ always falls in the range [0,1], the SWR is always greater than or equal to unity. Note that the phase of V_f and V_r vary along the transmission line in opposite directions to each other. Therefore, the complex-valued reflection coefficient Γ varies as well, but only in phase. With the SWR dependent only on the complex magnitude of Γ, it can be seen that the SWR measured at any point along the transmission line (neglecting transmission line losses) obtains an identical reading.
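The reflection-coefficient cases above translate directly into a small sketch. The load impedances and the 50 Ω characteristic impedance below are illustrative values:

```python
# SWR for a load impedance z_load on a line of characteristic impedance z0,
# via the reflection coefficient. The impedance values are illustrative.
def swr(z_load, z0=50.0):
    gamma = (z_load - z0) / (z_load + z0)
    mag = abs(gamma)                      # abs() also handles complex loads
    return float('inf') if mag >= 1 else (1 + mag) / (1 - mag)

print(swr(50.0))     # matched load -> 1.0
print(swr(100.0))    # 2:1 mismatch -> 2.0
print(swr(0.0))      # short circuit, total reflection -> inf
```

Because only the magnitude of the reflection coefficient enters the formula, a complex load with the same |Γ| yields the same SWR.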
Since the power of the forward and reflected waves are proportional to the square of the voltage components due to each wave, SWR can be expressed in terms of forward and reflected power: SWR = (1 + √(P_r/P_f)) / (1 − √(P_r/P_f)). By sampling the complex voltage and current at the point of insertion, an SWR meter is able to compute the effective forward and reflected voltages on the transmission line for the characteristic impedance for which the SWR meter has been designed. Since the forward and reflected power is related to the square of the forward and reflected voltages, some SWR meters also display the forward and reflected power. In the special case of a load R_L, which is purely resistive but unequal to the characteristic impedance of the transmission line Z_0, the SWR is given simply by their ratio: SWR = R_L/Z_0 or Z_0/R_L, with the ratio or its reciprocal chosen to obtain a value greater than unity. The standing wave pattern Using complex notation for the voltage amplitudes, for a signal at frequency ν, the actual (real) voltages V as a function of time are understood to relate to the complex voltages according to: V(t) = Re{V e^(j2πνt)}. Thus taking the real part of the complex quantity inside the parenthesis, the actual voltage consists of a sine wave at frequency ν with a peak amplitude equal to the complex magnitude of V, and with a phase given by the phase of the complex V. Then with the position along a transmission line given by x, with the line ending in a load located at x = 0, the complex amplitudes of the forward and reverse waves would be written as: V_f(x) = A e^(−jkx) and V_r(x) = Γ A e^(+jkx), for some complex amplitude A (corresponding to the forward wave at x = 0). Note that some treatments use phasors where the time dependence is according to e^(−j2πνt) and spatial dependence (for a wave in the +x direction) of e^(+jkx). Either convention obtains the same result for |V(x)|.
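The forward/reflected-power form of the SWR, stated above, can be sketched directly; the power readings below are illustrative, such as might come from a directional coupler:

```python
import math

# SWR from forward and reflected power readings:
# SWR = (1 + sqrt(Pr/Pf)) / (1 - sqrt(Pr/Pf)).  The readings are illustrative.
def swr_from_power(p_fwd, p_ref):
    r = math.sqrt(p_ref / p_fwd)   # magnitude of the reflection coefficient
    return (1 + r) / (1 - r)

print(swr_from_power(100.0, 0.0))     # no reflection -> 1.0
print(swr_from_power(100.0, 11.1))    # ~11% reflected power -> roughly 2:1
```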
According to the superposition principle the net voltage present at any point x on the transmission line is equal to the sum of the voltages due to the forward and reflected waves: V_net(x) = V_f(x) + V_r(x) = A (e^(−jkx) + Γ e^(+jkx)). Since we are interested in the variations of the magnitude of V_net along the line (as a function of x), we shall solve instead for the squared magnitude of that quantity, which simplifies the mathematics. To obtain the squared magnitude we multiply the above quantity by its complex conjugate: |V_net(x)|² = |A|² (1 + |Γ|² + 2|Γ| cos(2kx + φ)), where φ is the phase of Γ. Depending on the phase of the third term, the maximum and minimum values of |V_net(x)| (the square root of the quantity in the equations) are |A|(1 + |Γ|) and |A|(1 − |Γ|) respectively, for a standing wave ratio of: SWR = (1 + |Γ|) / (1 − |Γ|), as earlier asserted. Along the line, the above expression for |V_net(x)|² is seen to oscillate sinusoidally between its minimum and maximum with a period of 2π/(2k). This is half of the guided wavelength λ = 2π/k for the frequency ν. That can be seen as due to interference between two waves of that frequency which are travelling in opposite directions. For example, at a frequency ν = 20 MHz (free space wavelength of 15 m) in a transmission line whose velocity factor is 0.67, the guided wavelength (distance between voltage peaks of the forward wave alone) would be λ = 0.67 × 15 m = 10 m. At instances when the forward wave at x = 0 is at zero phase (peak voltage) then at x = 10 m it would also be at zero phase, but at x = 5 m it would be at 180° phase (peak negative voltage). On the other hand, the magnitude of the voltage due to a standing wave produced by its addition to a reflected wave, would have a wavelength between peaks of only λ/2 = 5 m. Depending on the location of the load and phase of reflection, there might be a peak in the magnitude of |V_net(x)| at x = 1.3 m. Then there would be another peak found where |V_net(x)| = |V|max at x = 6.3 m, whereas it would find minima of the standing wave at x = 3.8 m, 8.8 m, etc. Practical implications of SWR The most common case for measuring and examining SWR is when installing and tuning transmitting antennas.
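The oscillation of the voltage magnitude along the line can be checked numerically. The wavelength and reflection coefficient below are illustrative; sampling the magnitude over one guided wavelength recovers the expected SWR:

```python
import cmath
import math

# |V(x)| proportional to |exp(-j k x) + Gamma * exp(+j k x)| on a lossless
# line. With |Gamma| = 0.5, the ratio of the sampled maximum to minimum
# should equal (1 + 0.5) / (1 - 0.5) = 3. Values are illustrative.
lam = 10.0                  # guided wavelength in metres
k = 2 * math.pi / lam
gamma = 0.5                 # real reflection coefficient for simplicity

xs = [i * lam / 1000 for i in range(1000)]
mags = [abs(cmath.exp(-1j * k * x) + gamma * cmath.exp(1j * k * x)) for x in xs]
measured_swr = max(mags) / min(mags)
print(measured_swr)         # close to 3.0
```

The maxima repeat every half wavelength (5 m here), matching the spacing of peaks and minima in the worked example above.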
When a transmitter is connected to an antenna by a feed line, the driving point impedance of the antenna must match the characteristic impedance of the feed line in order for the transmitter to see the impedance it was designed for (the impedance of the feed line, usually 50 or 75 ohms). The impedance of a particular antenna design can vary due to a number of factors that cannot always be clearly identified. This includes the transmitter frequency (as compared to the antenna's design or resonant frequency), the antenna's height above and quality of the ground, proximity to large metal structures, and variations in the exact size of the conductors used to construct the antenna. When an antenna and feed line do not have matching impedances, the transmitter sees an unexpected impedance, where it might not be able to produce its full power, and can even damage the transmitter in some cases. The reflected power in the transmission line increases the average current and therefore losses in the transmission line compared to power actually delivered to the load. It is the interaction of these reflected waves with forward waves which causes standing wave patterns, with the negative repercussions we have noted. Matching the impedance of the antenna to the impedance of the feed line can sometimes be accomplished through adjusting the antenna itself, but otherwise is possible using an antenna tuner, an impedance matching device. Installing the tuner between the feed line and the antenna allows for the feed line to see a load close to its characteristic impedance, while sending most of the transmitter's power (a small amount may be dissipated within the tuner) to be radiated by the antenna despite its otherwise unacceptable feed point impedance. Installing a tuner in between the transmitter and the feed line can also transform the impedance seen at the transmitter end of the feed line to one preferred by the transmitter. 
However, in the latter case, the feed line still has a high SWR present, with the resulting increased feed line losses unmitigated. The magnitude of those losses is dependent on the type of transmission line, and its length. They always increase with frequency. For example, a certain antenna used well away from its resonant frequency may have an SWR of 6:1. For a frequency of 3.5 MHz, with that antenna fed through 75 meters of RG-8A coax, the loss due to standing waves would be 2.2 dB. However the same 6:1 mismatch through 75 meters of RG-8A coax would incur 10.8 dB of loss at 146 MHz. Thus, a better match of the antenna to the feed line, that is, a lower SWR, becomes increasingly important with increasing frequency, even if the transmitter is able to accommodate the impedance seen (or an antenna tuner is used between the transmitter and feed line). Certain types of transmissions can suffer other negative effects from reflected waves on a transmission line. Analog TV can experience "ghosts" from delayed signals bouncing back and forth on a long line. FM stereo can also be affected and digital signals can experience delayed pulses leading to bit errors. Whenever the delay times for a signal going back down and then again up the line are comparable to the modulation time constants, effects occur. For this reason, these types of transmissions require a low SWR on the feedline, even if SWR induced loss might be acceptable and matching is done at the transmitter. Methods of measuring standing wave ratio Many different methods can be used to measure standing wave ratio. The most intuitive method uses a slotted line which is a section of transmission line with an open slot which allows a probe to detect the actual voltage at various points along the line. Thus the maximum and minimum values can be compared directly. This method is used at VHF and higher frequencies. At lower frequencies, such lines are impractically long.
Directional couplers can be used at HF through microwave frequencies. Some are a quarter wave or more long, which restricts their use to the higher frequencies. Other types of directional couplers sample the current and voltage at a single point in the transmission path and mathematically combine them in such a way as to represent the power flowing in one direction. The common type of SWR / power meter used in amateur operation may contain a dual directional coupler. Other types use a single coupler which can be rotated 180 degrees to sample power flowing in either direction. Unidirectional couplers of this type are available for many frequency ranges and power levels and with appropriate coupling values for the analog meter used. The forward and reflected power measured by directional couplers can be used to calculate SWR. The computations can be done mathematically in analog or digital form or by using graphical methods built into the meter as an additional scale or by reading from the crossing point between two needles on the same meter. The above measuring instruments can be used "in line" that is, the full power of the transmitter can pass through the measuring device so as to allow continuous monitoring of SWR. Other instruments, such as network analyzers, low power directional couplers and antenna bridges use low power for the measurement and must be connected in place of the transmitter. Bridge circuits can be used to directly measure the real and imaginary parts of a load impedance and to use those values to derive SWR. These methods can provide more information than just SWR or forward and reflected power. Stand alone antenna analyzers use various measuring methods and can display SWR and other parameters plotted against frequency. By using directional couplers and a bridge in combination, it is possible to make an in line instrument that reads directly in complex impedance or in SWR. 
Stand alone antenna analyzers also are available that measure multiple parameters. Power standing wave ratio The term power standing wave ratio (PSWR) is sometimes referred to, and defined as, the square of the voltage standing wave ratio. The term is widely cited as "misleading". However it does correspond to one type of measurement of SWR using what was formerly a standard measuring instrument at microwave frequencies, the slotted line. The slotted line is a waveguide (or air-filled coaxial line) in which a small sensing antenna which is part of a crystal or diode detector is placed in the electric field in the line. The voltage induced in the antenna is rectified by either a point contact diode (crystal rectifier) or a Schottky barrier diode that is incorporated in the detector. These detectors have a square law output for low levels of input. Readings therefore corresponded to the square of the electric field along the slot, E2(x), with maximum and minimum readings of E2max and E2min found as the probe is moved along the slot. The ratio of these yields the square of the SWR, the so-called PSWR. This technique of rationalization of terms is fraught with problems. The square law behavior of the detector diode is exhibited only when the voltage across the diode is below the knee of the diode. Once the detected voltage exceeds the knee, the response of the diode becomes nearly linear. In this mode the diode and its associated filtering capacitor produce a voltage that is proportional to the peak of the sampled voltage. The operator of such a detector would not have a ready indication as to the mode in which the detector diode is operating and therefore differentiating the results between SWR or so called PSWR is not practical. Perhaps even worse is the common case where the minimum detected voltage is below the knee and the maximum voltage is above the knee. In this case, the computed results are largely meaningless.
Thus the terms PSWR and Power Standing Wave Ratio are deprecated and should be considered only from a legacy measurement perspective. Implications of SWR on medical applications SWR can also have a detrimental impact upon the performance of microwave-based medical applications. In microwave electrosurgery an antenna that is placed directly into tissue may not always have an optimal match with the feedline, resulting in an elevated SWR. The presence of SWR can affect monitoring components used to measure power levels, impacting the reliability of such measurements. See also References Further reading External links — A web application that draws the Standing Wave Diagram and calculates the SWR, input impedance, reflection coefficient and more — A flash demonstration of transmission line reflection and SWR — An online conversion tool between SWR, return loss and reflection coefficient — Series of pages dealing with all aspects of VSWR, reflection coefficient, return loss, practical aspects, measurement, etc. Antennas (radio) Electronics concepts Wave mechanics Radio electronics Engineering ratios
Standing wave ratio
[ "Physics", "Mathematics", "Engineering" ]
3,963
[ "Radio electronics", "Physical phenomena", "Metrics", "Engineering ratios", "Quantity", "Classical mechanics", "Waves", "Wave mechanics" ]
41,744
https://en.wikipedia.org/wiki/Start%20signal
In telecommunications, a start signal is a signal that prepares a device to receive data or to perform a function. In asynchronous serial communication, start signals are used at the beginning of a character that prepares the receiving device for the reception of the code elements. A start signal is limited to one signal element usually having the duration of a unit interval. References Telecommunications engineering
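A start signal of one unit interval preceding the data bits, together with a trailing stop element, can be sketched as follows (an illustrative model of asynchronous character framing, not any particular UART implementation):

```python
def frame(byte):
    """Frame one character: start bit (0), 8 data bits LSB-first, stop bit (1)."""
    return [0] + [(byte >> i) & 1 for i in range(8)] + [1]

def deframe(bits):
    """Recover the character, checking the start and stop signal elements."""
    assert bits[0] == 0 and bits[-1] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))

assert deframe(frame(0x41)) == 0x41  # round-trips the character 'A'
```

The leading 0 is the start signal that prepares the receiver to sample the following code elements; the trailing 1 returns the line to its idle state.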
Start signal
[ "Engineering" ]
77
[ "Electrical engineering", "Telecommunications engineering" ]
41,749
https://en.wikipedia.org/wiki/Stopband
A stopband is a band of frequencies, between specified limits, through which a circuit, such as a filter or telephone circuit, does not allow signals to pass, or the attenuation is above the required stopband attenuation level. Depending on application, the required attenuation within the stopband may typically be a value between 20 and 120 dB higher than the nominal passband attenuation, which often is 0 dB. The lower and upper limiting frequencies, also denoted lower and upper stopband corner frequencies, are the frequencies where the stopband and the transition bands meet in a filter specification. The stopband of a low-pass filter is the frequencies from the stopband corner frequency (which is slightly higher than the passband 3 dB cut-off frequency) up to the infinite frequency. The stopband of a high-pass filter consists of the frequencies from 0 hertz to a stopband corner frequency (slightly lower than the passband cut-off frequency). A band-stop filter has one stopband, specified by two non-zero and non-infinite corner frequencies. The difference between the limits in the band-stop filter is the stopband bandwidth, which usually is expressed in hertz. A bandpass filter typically has two stopbands. The shape factor of a bandpass filter is the relationship between the 3 dB bandwidth, and the difference between the stopband limits. See also Passband Band-stop filter Band gap in solid state physics Band rejection References Filter theory
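The shape-factor relation mentioned above can be sketched as a small calculation, assuming the conventional definition as the ratio of the stopband-limit separation to the 3 dB bandwidth (the corner frequencies below are example values):

```python
def shape_factor(f3db_lo, f3db_hi, fstop_lo, fstop_hi):
    """Ratio of stopband-limit separation to 3 dB bandwidth for a bandpass filter."""
    return (fstop_hi - fstop_lo) / (f3db_hi - f3db_lo)

# Example bandpass filter: 3 dB edges at 9 and 11 kHz, stopband edges at 7 and 13 kHz
print(shape_factor(9e3, 11e3, 7e3, 13e3))  # 3.0
```

A shape factor close to 1 indicates a steep transition band; larger values indicate a more gradual roll-off between passband and stopband.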
Stopband
[ "Engineering" ]
311
[ "Telecommunications engineering", "Filter theory" ]
41,750
https://en.wikipedia.org/wiki/Stop%20signal
In telecommunications, a stop signal is a signal that marks the end of part of a transmission, for example: In asynchronous serial communication, a signal at the end of a character that prepares the receiving device for the reception of a subsequent character. A stop signal is usually limited to one signal element having any duration equal to or greater than a specified minimum value. A signal to a receiving mechanism to wait for the next signal. References Telecommunications engineering
Stop signal
[ "Engineering" ]
93
[ "Electrical engineering", "Telecommunications engineering" ]
41,760
https://en.wikipedia.org/wiki/Summation%20check
In telecommunications, the term summation check (sum check) has the following meanings: A checksum based on the formation of the sum of the digits of a numeral. Note: The sum of the individual digits is usually compared with a previously computed value. A comparison of checksums on the same data on different occasions or on different representations of the data in order to verify data integrity. References Error detection and correction
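The first meaning, a checksum formed from the sum of the digits of a numeral and compared with a previously computed value, can be sketched as follows (a minimal illustration, not any particular standard):

```python
def digit_sum(n):
    """Checksum formed from the sum of the digits of a numeral."""
    return sum(int(d) for d in str(abs(n)))

def summation_check(n, expected):
    """Compare a freshly computed digit sum with a previously stored value."""
    return digit_sum(n) == expected

stored = digit_sum(98127)                  # 9+8+1+2+7 = 27, computed at write time
assert summation_check(98127, stored)      # data intact
assert not summation_check(98128, stored)  # a changed digit is detected
```

As with any simple checksum, distinct values can share a digit sum, so a passing check indicates probable rather than guaranteed integrity.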
Summation check
[ "Engineering" ]
85
[ "Error detection and correction", "Reliability engineering" ]
41,761
https://en.wikipedia.org/wiki/Supervisory%20program
A supervisory program or supervisor is a computer program that is usually referred to as an operating system. It controls the execution of other routines and regulates work schedules, input/output operations, and error actions. Historically, this term was essentially associated with IBM's line of mainframe operating systems starting with OS/360. In other operating systems, the supervisor is generally called the kernel. References Operating system technology
Supervisory program
[ "Technology" ]
82
[ "Computing stubs" ]
41,763
https://en.wikipedia.org/wiki/Surface%20wave
In physics, a surface wave is a mechanical wave that propagates along the interface between differing media. A common example is gravity waves along the surface of liquids, such as ocean waves. Gravity waves can also occur within liquids, at the interface between two fluids with different densities. Elastic surface waves can travel along the surface of solids, such as Rayleigh or Love waves. Electromagnetic waves can also propagate as "surface waves" in that they can be guided along a refractive index gradient or along an interface between two media having different dielectric constants. In radio transmission, a ground wave is a guided wave that propagates close to the surface of the Earth. Mechanical waves In seismology, several types of surface waves are encountered. Surface waves, in this mechanical sense, are commonly known as either Love waves (L waves) or Rayleigh waves. A seismic wave is a wave that travels through the Earth, often as the result of an earthquake or explosion. Love waves have transverse motion (movement is perpendicular to the direction of travel, like light waves), whereas Rayleigh waves have both longitudinal (movement parallel to the direction of travel, like sound waves) and transverse motion. Seismic waves are studied by seismologists and measured by a seismograph or seismometer. Surface waves span a wide frequency range, and the period of waves that are most damaging is usually 10 seconds or longer. Surface waves can travel around the globe many times from the largest earthquakes. Surface waves are caused when P waves and S waves come to the surface. Examples are the waves at the surface of water and air (ocean surface waves). Another example is internal waves, which can be transmitted along the interface of two water masses of different densities. In the theory of hearing physiology, the traveling wave (TW) of Von Bekesy resulted from an acoustic surface wave of the basilar membrane into the cochlear duct.
His theory purported to explain every feature of the auditory sensation owing to these passive mechanical phenomena. Jozef Zwislocki, and later David Kemp, showed that that is unrealistic and that active feedback is necessary. Electromagnetic waves Ground waves are radio waves propagating parallel to and adjacent to the surface of the Earth, following the curvature of the Earth. This radiative ground wave is known as Norton surface wave, or more properly Norton ground wave, because ground waves in radio propagation are not confined to the surface. Another type of surface wave is the non-radiative, bound-mode Zenneck surface wave or Zenneck–Sommerfeld surface wave. The earth has one refractive index and the atmosphere has another, thus constituting an interface that supports the guided Zenneck wave's transmission. Other types of surface wave are the trapped surface wave, the gliding wave and Dyakonov surface waves (DSW) propagating at the interface of transparent materials with different symmetry. Apart from these, various types of surface waves have been studied for optical wavelengths. Microwave field theory Within microwave field theory, the interface of a dielectric and conductor supports "surface wave transmission". Surface waves have been studied as part of transmission lines and some may be considered as single-wire transmission lines. Characteristics and utilizations of the electrical surface wave phenomenon include: The field components of the wave diminish with distance from the interface. Electromagnetic energy is not converted from the surface wave field to another form of energy (except in leaky or lossy surface waves) such that the wave does not transmit power normal to the interface, i.e. it is evanescent along that dimension. In coaxial cable in addition to the TEM mode there also exists a transverse-magnetic (TM) mode which propagates as a surface wave in the region around the central conductor. 
For coax of common impedance this mode is effectively suppressed but in high impedance coax and on a single central conductor without any outer shield, low attenuation and very broadband propagation is supported. Transmission line operation in this mode is called E-Line. Surface plasmon polariton The surface plasmon polariton (SPP) is an electromagnetic surface wave that can travel along an interface between two media with different dielectric constants. It exists under the condition that the permittivity of one of the materials forming the interface is negative, while the other one is positive, as is the case for the interface between air and a lossy conducting medium below the plasma frequency. The wave propagates parallel to the interface and decays exponentially vertical to it, a property called evanescence. Since the wave is on the boundary of a lossy conductor and a second medium, these oscillations can be sensitive to changes to the boundary, such as the adsorption of molecules by the conducting surface. Sommerfeld–Zenneck surface wave The Sommerfeld–Zenneck wave or Zenneck wave is a non-radiative guided electromagnetic wave that is supported by a planar or spherical interface between two homogeneous media having different dielectric constants. This surface wave propagates parallel to the interface and decays exponentially vertical to it, a property known as evanescence. It exists under the condition that the permittivity of one of the materials forming the interface is negative, while the other one is positive, as for example the interface between air and a lossy conducting medium such as the terrestrial transmission line, below the plasma frequency. 
Its electric field strength falls off at a rate of e^(−αd)/√d in the direction of propagation along the interface due to two-dimensional geometrical field spreading at a rate of 1/√d, in combination with a frequency-dependent exponential attenuation (α), which is the terrestrial transmission line dissipation, where α depends on the medium's conductivity. Arising from original analysis by Arnold Sommerfeld and Jonathan Zenneck of the problem of wave propagation over a lossy earth, it exists as an exact solution to Maxwell's equations. The Zenneck surface wave, which is a non-radiating guided-wave mode, can be derived by employing the Hankel transform of a radial ground current associated with a realistic terrestrial Zenneck surface wave source. Sommerfeld–Zenneck surface waves predict that the energy decays as R⁻¹ because the energy distributes over the circumference of a circle and not the surface of a sphere. Evidence does not show that in radio space wave propagation, Sommerfeld–Zenneck surface waves are a mode of propagation, as the path-loss exponent is generally between 20 dB/dec and 40 dB/dec. See also Seismic waves Seismic communication P-waves S-waves Surface acoustic wave Sky waves, the primary means of HF transmission Surface plasmon, a longitudinal charge density wave along the interface of conducting and dielectric mediums Surface-wave-sustained mode, a propagation of electromagnetic surface waves. Evanescent waves and evanescent wave coupling Ocean surface waves, internal waves and crests, dispersion, and freak waves Love wave and Rayleigh–Lamb wave Gravity waves, which occur at certain natural interfaces (e.g.
the atmosphere and ocean) Stoneley wave Scholte wave Dyakonov surface wave People Arnold Sommerfeld – published a mathematical treatise on the Zenneck wave Jonathan Zenneck – Pupil of Sommerfeld; Wireless pioneer; developed the Zenneck wave John Stone Stone – Wireless pioneer; produced theories on radio propagation Other Ground constants, the electrical parameters of earth Near and far field, the radiated field that is within one quarter of a wavelength of the diffracting edge or the antenna and beyond. Skin effect, the tendency of an alternating electric current to distribute itself within a conductor so that the current density near the surface of the conductor is greater than that at its core. Surface wave inversion Green's function, a function used to solve inhomogeneous differential equations subject to boundary conditions. References Further reading Standards and doctrines "Surface wave ". Telecom Glossary 2000, ATIS Committee T1A1, Performance and Signal Processing, T1.523–2001. "Surface wave", Federal Standard 1037C. "Surface wave", MIL-STD-188 "Multi-service tactics, techniques, and procedures for the High-Frequency Automatic Link Establishment (HF-ALE): FM 6-02.74; MCRP 3–40.3E; NTTP 6-02.6; AFTTP(I) 3-2.48; COMDTINST M2000.7" Sept., 2003. Books Barlow, H.M., and Brown, J., "Radio Surface Waves", Oxford University Press 1962. Budden, K. G., "Radio waves in the ionosphere; the mathematical theory of the reflection of radio waves from stratified ionised layers". Cambridge, Eng., University Press, 1961. LCCN 61016040 /L/r85 Budden, K. G., "The wave-guide mode theory of wave propagation". London, Logos Press; Englewood Cliffs, N.J., Prentice-Hall, c1961. LCCN 62002870 /L Budden, K. G., " The propagation of radio waves : the theory of radio waves of low power in the ionosphere and magnetosphere". Cambridge (Cambridgeshire); New York : Cambridge University Press, 1985. LCCN 84028498 Collin, R. E., "Field Theory of Guided Waves". 
New York: Wiley-IEEE Press, 1990. Foti, S., Lai, C.G., Rix, G.J., and Strobbia, C., "Surface Wave Methods for Near-Surface Site Characterization", CRC Press, Boca Raton, Florida (USA), 487 pp., 2014 <https://www.crcpress.com/product/isbn/9780415678766> Sommerfeld, A., "Partial Differential Equations in Physics" (English version), Academic Press Inc., New York 1949, chapter 6 – "Problems of Radio". Polo Jr., J. A., Mackay, T. G., and Lakhtakia, A., "Electromagnetic Surface Waves: A Modern Perspective". Waltham, MA, USA: Elsevier, 2013 <https://www.elsevier.com/books/electromagnetic-surface-waves/polo/978-0-12-397024-4>. Rawer, K., "Wave Propagation in the Ionosphere", Dordrecht, Kluwer Acad.Publ. 1993. Weiner, Melvin M., "Monopole antennas" New York, Marcel Dekker, 2003. Wait, J. R., "Electromagnetic Wave Theory", New York, Harper and Row, 1985. Wait, J. R., "The Waves in Stratified Media". New York: Pergamon, 1962. Waldron, Richard Arthur, "Theory of guided electromagnetic waves". London, New York, Van Nostrand Reinhold, 1970. LCCN 69019848 //r86 Journals and papers Zenneck, Sommerfeld, Norton, and Goubau J. Zenneck, (translators: P. Blanchin, G. Guérard, É. Picot), "Précis de télégraphie sans fil : complément de l'ouvrage : Les oscillations électromagnétiques et la télégraphie sans fil", Paris : Gauthier-Villars, 1911. viii, 385 p. : ill.; 26 cm. (Tr. "Precisions of wireless telegraphy: complement of the work: Electromagnetic oscillations and wireless telegraphy.") J. Zenneck, "Über die Fortpflanzung ebener elektromagnetischer Wellen längs einer ebenen Leiterfläche und ihre Beziehung zur drahtlosen Telegraphie", Annalen der Physik, vol. 23, pp. 846–866, Sept. 1907. (Tr.
"About the propagation of electromagnetic plane waves along a conductor plane and their relationship to wireless telegraphy.") J. Zenneck, "Elektromagnetische Schwingungen und drahtlose Telegraphie", gart, F. Enke, 1905. xxvii, 1019 p. : ill.; 24 cm. (Tr. "Electromagnetic oscillations and wireless telegraphy.") J. Zenneck, (translator: A.E. Seelig) "Wireless telegraphy,", New York [etc.] McGraw-Hill Book Company, inc., 1st ed. 1915. xx, 443 p. illus., diagrs. 24 cm. LCCN 15024534 (ed. "Bibliography and notes on theory" pp. 408–428.) A. Sommerfeld, "Über die Fortpflanzung elektrodynamischer Wellen längs eines Drahtes", Ann. der Physik und Chemie, vol. 67, pp. 233–290, Dec 1899. (Tr. "Propagation of electro-dynamic waves along a cylindric conductor.") A. Sommerfeld, "Über die Ausbreitung der Wellen in der drahtlosen Telegraphie", Annalen der Physik, Vol. 28, pp. 665–736, March 1909. (Tr. "About the Propagation of waves in wireless telegraphy.") A. Sommerfeld, "Propagation of waves in wireless telegraphy," Ann. Phys., vol. 81, pp. 1367–1153, 1926. K. A. Norton, "The propagation of radio waves over the surface of the earth and in the upper atmosphere," Proc. IRE, vol. 24, pp. 1367–1387, 1936. K. A. Norton, "The calculations of ground wave field intensity over a finitely conducting spherical earth," Proc. IRE, vol. 29, pp. 623–639, 1941. G. Goubau, "Surface waves and their application to transmission lines," J. Appl. Phys., vol. 21, pp. 1119–1128; November,1950. G. Goubau, “Über die Zennecksche Bodenwelle,” (Tr."On the Zenneck Surface Wave."), Zeitschrift für Angewandte Physik, Vol. 3, 1951, Nrs. 3/4, pp. 103–107. Wait Wait, J. R., "Lateral Waves and the Pioneering Research of the Late Kenneth A Norton". Wait, J. R., and D. A. Hill, "Excitation of the HF surface wave by vertical and horizontal apertures". Radio Science, 14, 1979, pp 767–780. Wait, J. R., and D. A. Hill, "Excitation of the Zenneck Surface Wave by a Vertical Aperture", Radio Science, Vol. 13, No. 
6, November–December, 1978, pp. 969–977. Wait, J. R., "A note on surface waves and ground waves", IEEE Transactions on Antennas and Propagation, Nov 1965. Vol. 13, Issue 6, pp. 996–997 Wait, J. R., "The ancient and modern history of EM ground-wave propagation". IEEE Antennas Propagat. Mag., vol. 40, pp. 7–24, Oct. 1998. Wait, J. R., "Appendix C: On the theory of ground wave propagation over a slightly roughned curved earth", Electromagnetic Probing in Geophysics. Boulder, CO., Golem, 1971, pp. 37–381. Wait, J. R., "Electromagnetic surface waves", Advances in Radio Research, 1, New York, Academic Press, 1964, pp. 157–219. Others R. E. Collin, "Hertzian Dipole Radiating Over a Lossy Earth or Sea: Some Early and Late 20th-Century Controversies", Antennas and Propagation Magazine, 46, 2004, pp. 64–79. F. J. Zucker, "Surface wave antennas and surface wave excited arrays", Antenna Engineering Handbook, 2nd ed., R. C. Johnson and H. Jasik, Eds. New York: McGraw-Hill, 1984. Yu. V. Kistovich, "Possibility of Observing Zenneck Surface Waves in Radiation from a Source with a Small Vertical Aperture", Soviet Physics Technical Physics, Vol. 34, No.4, April, 1989, pp. 391–394. V. I. Baĭbakov, V. N. Datsko, Yu. V. Kistovich, "Experimental discovery of Zenneck's surface electromagnetic waves", Sov Phys Uspekhi, 1989, 32 (4), 378–379. Corum, K. L. and J. F. Corum, "The Zenneck Surface Wave", Nikola Tesla, Lightning Observations, and Stationary Waves, Appendix II. 1994. M. J. King and J. C. Wiltse, "Surface-Wave Propagation on Coated or Uncoated Metal Wires at Millimeter Wavelengths". J. Appl. Phys., vol. 21, pp. 1119–1128; November, M. J. King and J. C. Wiltse, "Surface-Wave Propagation on a Dielectric Rod of Electric Cross-Section." Electronic Communications, Inc., Tirnonium: kld. Sci. Rept.'No. 1, AFCKL Contract No. AF 19(601)-5475; August, 1960. T. Kahan and G. Eckart, "On the Electromagnetic Surface Wave of Sommerfeld", Phys. Rev. 76, 406–410 (1949). Other media L.A. 
Ostrovsky (ed.), "Laboratory modeling and theoretical studies of surface wave modulation by a moving sphere", m, Oceanic and Atmospheric Research Laboratories, 2002. External links The Feynman Lectures on Physics: Surface waves Eric W. Weisstein, et al., "Surface Wave", Eric Weisstein's World of Physics, 2006. David Reiss, "Electromagnetic surface waves". The Net Advance of Physics: Special Reports, No. 1 Gary Peterson, "Rediscovering the Zenneck wave". Feed Line No. 4. (ed''. reproduction available online at 21st Century Books) 3D Waves by Jesse Nochella based on a program by Stephen Wolfram, Wolfram Demonstrations Project. Radio frequency propagation Broadcast engineering Seismology
Surface wave
[ "Physics", "Engineering" ]
3,997
[ "Broadcast engineering", "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Surface waves", "Electromagnetic spectrum", "Waves", "Electronic engineering" ]
41,764
https://en.wikipedia.org/wiki/Survivability
Survivability is the ability to remain alive or continue to exist. The term has more specific meaning in certain contexts. Ecological Following disruptive forces such as flood, fire, disease, war, or climate change, some species of flora, fauna, and local life forms are likely to survive more successfully than others because of consequent changes to their surrounding biophysical conditions. Engineering In engineering, survivability is the quantified ability of a system, subsystem, equipment, process, or procedure to continue to function during and after a natural or man-made disturbance; for example, a nuclear electromagnetic pulse from the detonation of a nuclear weapon. For a given application, survivability must be qualified by specifying the range of conditions over which the entity will survive, the minimum acceptable level of post-disturbance functionality, and the maximum acceptable downtime. Military In the military environment, survivability can be defined as the ability to remain mission capable after a single engagement. Engineers working in survivability are often responsible for improving four main system elements: Detectability - the inability to avoid being aurally and visually detected as well as detected by radar (by an observer). Susceptibility - the inability to avoid being hit (by a weapon). Vulnerability - the inability to withstand the hit. Recoverability - longer-term post-hit effects, damage control, and firefighting, capability restoration, or (in extremis) escape and evacuation. The European Survivability Workshop introduced the concept of "Mission Survivability" whilst retaining the three core areas above, either pertaining to the "survivability" of a platform through a complete mission, or the "survivability" of the mission itself (i.e. probability of mission success).
Recent studies have also introduced the concept of "Force Survivability" which relates to the ability of a force rather than an individual platform to remain "mission capable". There is no clear prioritisation of the three elements; this will depend on the characteristics and role of the platform. Some platform types, such as submarines and airplanes, minimise their susceptibility and may, to some extent, compromise in the other areas. Main Battle Tanks minimise vulnerability through the use of heavy armours. Present day surface warship designs tend to aim for a balanced combination of all three areas. A popular term is the "survivability onion", described as 5-8 layers: Don't be there. If you are there, don't be seen. If you are seen, don't be targeted/acquired. If you are targeted/acquired, don't be hit. If you are hit, don't be penetrated. If you are penetrated, don't be killed. Naval Survivability denotes the ability of a ship and its on-board systems to remain functional and continue its designated mission in a man-made hostile environment. Naval vessels are designed to operate in a man-made hostile environment, and therefore survivability is a vital feature required of them. A naval vessel's survivability is a complicated subject affecting the whole life cycle of the vessel, and should be considered from the initial design phase of every warship. The classical definition of naval survivability includes three main aspects, which are susceptibility, vulnerability, and recoverability, although recoverability is often subsumed within vulnerability. Susceptibility consists of all the factors that expose the ship to the weapons effects in a combat environment. These factors in general are the operating conditions, the threat, and the features of the ship itself.
The operating conditions, such as sea state, weather and atmospheric conditions, vary considerably, and their influence is difficult to address (hence they are often not accounted for in survivability assessment). The threat is dependent on the weapons directed against the ship and the weapons' performance, such as the range. The features of the ship in this sense include platform signatures (radar, infrared, acoustic, magnetic), the defensive systems on board, such as surface-to-air missiles, EW and decoys, and also the tactics employed by the platform in countering the attack (aspects such as speed, maneuverability, chosen aspect presented to the threat). Vulnerability refers to the ability of the vessel to withstand the short-term effects of the threat weapon. Vulnerability is an attribute typical to the vessel and therefore heavily affected by the vessel's basic characteristics such as size, subdivision, armouring, and other hardening features, and also the design of the ship's systems, in particular the location of equipment, degrees of redundancy and separation, and the presence within a system of single point failures. Recoverability refers to the vessel's ability to restore and maintain its functionality after sustaining damage. Thus, recoverability is dependent on the actions aimed at neutralizing the effects of the damage. These actions include firefighting, limiting the extent of flooding, and dewatering. Besides the equipment, the crew also has a vital role in recoverability.
Historically, measures taken to mitigate these hazards were concerned with protecting the vehicle itself, but due to this achieving only limited protection, the focus has now shifted to safeguarding the crew within from an ever-broadening range of threats, including Radio Controlled IEDs (RCIEDs), blast, fragmentation, heat stress, and dehydration. The expressed goal of "crew survivability" is to ensure vehicle occupants are best protected. It goes beyond simply ensuring crew have the appropriate protective equipment and has expanded to include measuring the overpressure and blunt impact forces experienced by a vehicle from real blast incidents in order to develop medical treatment and improve overall crew survivability. Sustainable crew survivability is dependent on the effective integration of knowledge, training, and equipment. Prevention and training Threat intelligence identifying trends, emerging technologies, and attack tactics used by enemy forces enables crews to implement procedures that will reduce their exposure to unnecessary risks. Such intelligence also allows for more effective pre-deployment training programs where personnel can be taught the most up-to-date developments in IED concealment, for example, or undertake tailored training that will enable them to identify the likely attack strategy of enemy forces. In addition, with expert, current threat intelligence, the most effective equipment can be procured or rapidly developed in support of operations. Network Definitions of network survivability "The capability of a system to fulfill its mission, in a timely manner, in the presence of threats such as attacks or large-scale natural disasters. Survivability is a subset of resilience." 
“The capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents.” See also Availability List of system quality attributes References External links The ResiliNets Initiative RESIST RESIST Vulnerability Assessment Code SURVIVE SURVIVE Vulnerability Assessment Code Aerospace Systems Survivability Handbook - Vol. 1 Handbook Overview SURMA Naval Survivability Assessment Software HMS Counter-Terrorist Threat Intelligence United States Air Force and NATO Report RTO-TR-015 AC/323/(HFM-015)/TP-1 (2001) Engineering concepts Military science Survival
Survivability
[ "Engineering" ]
1,534
[ "nan" ]
41,767
https://en.wikipedia.org/wiki/Synchronism
Synchronism may refer to: Synchronism (Davidovsky), compositions by Argentine-American composer Mario Davidovsky incorporating acoustic instruments and electroacoustic sounds Chronological synchronism, an event that links two chronologies such as historical and datable astronomical events Synchronization, the coordination of events to operate a system in unison Film Synchronized sound, film sound technologically coupled to image Post-synchronization, the process of re-recording dialogue after the filming process See also Synchromism an early 20th-century art movement, commonly misspelled as "synchronism" Synchronicity (disambiguation) Synchronizer (disambiguation) Synchrony (disambiguation) Synchronization
Synchronism
[ "Engineering" ]
164
[ "Telecommunications engineering", "Synchronization" ]
41,770
https://en.wikipedia.org/wiki/Synchronous%20orbit
A synchronous orbit is an orbit in which an orbiting body (usually a satellite) has a period equal to the average rotational period of the body being orbited (usually a planet), and in the same direction of rotation as that body. Simplified meaning A synchronous orbit is an orbit in which the orbiting object (for example, an artificial satellite or a moon) takes the same amount of time to complete an orbit as it takes the object it is orbiting to rotate once. Properties A satellite in a synchronous orbit that is both equatorial and circular will appear to be suspended motionless above a point on the orbited planet's equator. For synchronous satellites orbiting Earth, this is also known as a geostationary orbit. However, a synchronous orbit need not be equatorial nor circular. A body in a non-equatorial synchronous orbit will appear to oscillate north and south above a point on the planet's equator, whereas a body in an elliptical orbit will appear to oscillate eastward and westward. As seen from the orbited body, the combination of these two motions produces a figure-8 pattern called an analemma. Nomenclature There are many specialized terms for synchronous orbits depending on the body orbited. The following are some of the more common ones. A synchronous orbit around Earth that is circular and lies in the equatorial plane is called a geostationary orbit. The more general case, when the orbit is inclined to Earth's equator or is non-circular, is called a geosynchronous orbit. The corresponding terms for synchronous orbits around Mars are areostationary and areosynchronous orbits. Formula For a stationary synchronous orbit: r = ∛(G m₂ T² / (4π²)) where G = gravitational constant, m₂ = mass of the celestial body, T = rotational period of the body, and r = radius of the orbit. By this formula one can find the stationary orbit of an object in relation to a given body.
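Applying the stationary-orbit formula r = ∛(G m₂ T² / (4π²)) to Earth with standard constants (using Earth's sidereal rotation period as T) reproduces the familiar geostationary radius of roughly 42,164 km; a minimal numerical check:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m2 = 5.972e24          # mass of Earth, kg
T = 86164.1            # Earth's sidereal rotation period, s

# r = cube root of (G * m2 * T^2 / (4 * pi^2))
r = (G * m2 * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(r)               # roughly 4.216e7 m, i.e. about 42,164 km from Earth's center
```

Subtracting Earth's equatorial radius (about 6,378 km) gives the commonly quoted geostationary altitude of roughly 35,786 km above the surface.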
Orbital speed (how fast a satellite is moving through space) is calculated by multiplying the angular speed of the satellite by the orbital radius. Examples An astronomical example is Pluto's largest moon Charon. Much more commonly, synchronous orbits are employed by artificial satellites used for communication, such as geostationary satellites. For natural satellites, which can attain a synchronous orbit only by tidally locking their parent body, it always goes hand in hand with synchronous rotation of the satellite. This is because the smaller body becomes tidally locked faster, and by the time a synchronous orbit is achieved, it has had a locked synchronous rotation for a long time already. See also Subsynchronous orbit Supersynchronous orbit Graveyard orbit Tidal locking (synchronous rotation) Sun-synchronous orbit List of orbits References Astrodynamics Orbits
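The stationary-orbit formula above can be checked numerically. Below is a minimal sketch for Earth; the constants are standard reference figures and the variable names are illustrative:

```python
import math

# Stationary synchronous orbit radius: r = (G * m2 * T^2 / (4 * pi^2))^(1/3)
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
m2 = 5.972e24    # mass of Earth, kg
T = 86164.1      # Earth's sidereal rotation period, s

r = (G * m2 * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"geostationary radius ≈ {r / 1000:.0f} km")
```

The result is roughly 42,160 km from Earth's center, in line with the well-known geostationary orbit (about 35,786 km above the equator).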
Synchronous orbit
[ "Engineering" ]
593
[ "Astrodynamics", "Aerospace engineering" ]
41,771
https://en.wikipedia.org/wiki/System%20integrity
In telecommunications, the term system integrity has the following meanings: That condition of a system wherein its mandated operational and technical parameters are within the prescribed limits. The quality of an AIS when it performs its intended function in an unimpaired manner, free from deliberate or inadvertent unauthorized manipulation of the system. The state that exists when there is complete assurance that under all conditions an IT system is based on the logical correctness and reliability of the operating system, the logical completeness of the hardware and software that implement the protection mechanisms, and data integrity. References National Information Systems Security Glossary Telecommunications systems Technology systems Systems engineering Computer security Reliability engineering
System integrity
[ "Technology", "Engineering" ]
132
[ "Systems engineering", "Technology systems", "Reliability engineering", "Telecommunications systems", "nan" ]
41,773
https://en.wikipedia.org/wiki/Systems%20control
Systems control, in a communications system, is the control and implementation of a set of functions that: prevent or eliminate degradation of any part of the system, initiate immediate response to demands that are placed on the system, respond to changes in the system to meet long range requirements, and may include various subfunctions, such as immediate circuit utilization actions, continuous control of circuit quality, continuous control of equipment performance, development of procedures for immediate repair, restoration, or replacement of facilities and equipment, continuous liaison with system users and with representatives of other systems, and the provision of advice and assistance in system use. References Telecommunications systems Technology systems
Systems control
[ "Technology", "Engineering" ]
129
[ "Systems engineering", "Technology systems", "Telecommunications systems", "nan" ]
41,774
https://en.wikipedia.org/wiki/Systems%20design
The basic study of system design is the understanding of component parts and their subsequent interaction with one another. Systems design has appeared in a variety of fields, including sustainability, computer/software architecture, and sociology. Product Development If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development," then design is the act of taking the marketing information and creating the design of the product to be manufactured. Thus in product development, systems design involves the process of defining and developing systems, such as interfaces and data, for an electronic control system to satisfy specified requirements. Systems design could be seen as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering. Physical design The physical design relates to the actual input and output processes of the system. This is explained in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed. In physical design, the following requirements about the system are decided: Input requirements, Output requirements, Storage requirements, Processing requirements, and System control and backup or recovery. Put another way, the physical portion of system design can generally be broken down into three sub-tasks: User Interface Design, Data Design, and Process Design. Architecture design Designing the overall structure of a system focuses on creating a scalable, reliable, and efficient system. For example, services like Google, Twitter, Facebook, Amazon, and Netflix exemplify large-scale distributed systems.
Here are key considerations: Functional and non-functional requirements; Capacity estimation; Usage of relational and/or NoSQL databases; Vertical scaling, horizontal scaling, sharding; Load balancing; Primary-secondary replication; Cache and CDN; Stateless and Stateful servers; Datacenter georouting; Message Queue, Publish-Subscribe Architecture; Performance Metrics; Monitoring and Logging; Build, test, configure, deploy automation; Finding single point of failure; API Rate Limiting; Service Level Agreement. Machine Learning Systems Design Machine learning systems design focuses on building scalable, reliable, and efficient systems that integrate machine learning (ML) models to solve real-world problems. ML systems require careful consideration of data pipelines, model training, and deployment infrastructure. ML systems are often used in applications such as recommendation engines, fraud detection, and natural language processing. Key components to consider when designing ML systems include: Problem Definition: Clearly define the problem, data requirements, and evaluation metrics. Success criteria often involve accuracy, latency, and scalability. Data Pipeline: Build automated pipelines to collect, clean, transform, and validate data. Model Selection and Training: Choose appropriate algorithms (e.g., linear regression, decision trees, neural networks) and train models using frameworks like TensorFlow or PyTorch. Deployment and Serving: Deploy trained models to production environments using scalable architectures such as containerized services (e.g., Docker and Kubernetes). Monitoring and Maintenance: Continuously monitor model performance, retrain as necessary, and ensure data drift is addressed. Designing an ML system involves balancing trade-offs between accuracy, latency, cost, and maintainability, while ensuring system scalability and reliability.
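One of the considerations listed above, API rate limiting, is commonly implemented with a token-bucket algorithm. The sketch below is illustrative only; the class and parameter names are invented for this example and are not from any particular framework:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch)."""

    def __init__(self, rate, capacity):
        self.rate = rate               # tokens replenished per second
        self.capacity = capacity       # maximum burst size
        self.tokens = float(capacity)  # the bucket starts full
        self.last = time.monotonic()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = TokenBucket(rate=5.0, capacity=10)
print(limiter.allow())   # True, since the bucket starts full
```

Real services typically place such a limiter behind a gateway and key it per client, but the refill-and-consume logic is the core idea.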
The discipline overlaps with MLOps, a set of practices that unifies machine learning development and operations to ensure smooth deployment and lifecycle management of ML systems. See also Arcadia (engineering) Architectural pattern (computer science) Configuration design Electronic design automation (EDA) Electronic system-level (ESL) Embedded system Graphical system design Hypersystems Modular design Morphological analysis (problem-solving) Systems analysis and design SCSD (School Construction Systems Development) project System information modelling System development life cycle (SDLC) System engineering System thinking TRIZ References Further reading External links Interactive System Design. Course by Chris Johnson, 1993 Course by Prof. Birgit Weller, 2020 Computer systems Electronic design automation Software design
Systems design
[ "Technology", "Engineering" ]
795
[ "Computer engineering", "Computer systems", "Computer science", "Software design", "Design", "Computers" ]
41,775
https://en.wikipedia.org/wiki/Tactical%20communications
Tactical communications are military communications in which information of any kind, especially orders and military intelligence, is conveyed from one command, person, or place to another upon a battlefield, particularly during the conduct of combat. It includes any kind of delivery of information, whether verbal, written, visual or auditory, and can be sent in a variety of ways. In modern times, this is usually done by electronic means. Tactical communications do not include communications provided to tactical forces by the Defense Communications System to non-tactical military commands, to tactical forces by civil organizations, nor does it include strategic communication. Early means The earliest way of communicating with others in a battle was by the commander's voice or by human messenger. A runner would carry reports or orders from one officer to another. Once the horse was domesticated, messages could travel much faster. A very fast way to send information was to use either drums, trumpets or flags. Each sound or banner would have a pre-determined significance for the soldier who would respond accordingly. Auditory signals were only as effective, though, as the receiver's ability to hear them. The din of battle or long distances could make using noise less effective. They were also limited in the amount of information they could convey; the information must be simple, such as attack or retreat. Visual cues, such as flags or smoke signals, required the receiver to have a clear line of sight to the signal, and know when and where to look for them. Intricate warning systems have always been used, though, such as scouting towers with fires to signal incoming threats; this could occur at the tactical as well as the strategic level. The armies of the 19th century used two flags in combinations that replicated the alphabet. This allowed commanders the ability to send any order they wanted as they needed to, but still relied on line-of-sight.
During the Siege of Paris (1870–71) the defending French effectively used carrier pigeons to relay information between tactical units. The wireless revolution Although visual communication flew at the speed of light, it relied on a direct line of sight between the sender and the receiver. Telegraphs helped theater commanders to move large armies about, but one certainly could not count on using immobile telegraph lines on a changing battlefield. At the end of the 19th century the disparate units across any field were instantaneously joined to their commanders by the invention and mass production of the radio. At first the radio could only broadcast tones, so messages were sent via Morse code. The first field radios used by the United States Army saw action in the Spanish–American War (1898) and the Philippine Insurrection (1899–1902). At the same time as radios were deployed the field telephone was developed and made commercially viable. This caused a new signal occupation specialty to be developed: lineman. During the Interwar period the German army invented Blitzkrieg in which air, armor, and infantry forces acted swiftly and precisely, with constant radio communication. They triumphed until their enemies equipped themselves to communicate and coordinate similarly. The digital battlefield Security was a problem. If you broadcast your plans over radio waves, anyone with a similar radio listening to the same frequency could hear your plans. Trench codes became the tactical part of World War I cryptography. Advances in electronics, particularly after World War II, allowed for electronic scrambling of voice radio. Operational and strategic messages during the war were sent as text encrypted with ciphers too complex for humans to crack without the assistance of a similar, high-tech machine, such as the German Enigma machine.
Once computer science advanced, tactical voice radio could be encrypted, and large amounts of data could be sent over the airwaves in quick bursts of signals with more complex encryption. Communication between armies was of course much more difficult before the electronic age and could only be achieved with messengers on horseback or by foot and with time delays according to the distance the messenger needed to travel. Advances in long-range communications aided the commander on the battlefield, for then they could receive news of any outside force or factor that could impact the conduct of a battle. See also Air Defense Control Center Combat Information Center History of communication Network Simulator for simulation of Tactical Communication Systems Joint Tactical Information Distribution System Mission Control Center Naval Tactical Data System Electronics technician References Sources Rienzi, Thomas Matthew. Vietnam Studies: Communications-Electronics 1962–1970. Washington: Department of the Army, 1985. Military communications Command and control
Tactical communications
[ "Engineering" ]
894
[ "Military communications", "Telecommunications engineering" ]
41,781
https://en.wikipedia.org/wiki/Technical%20control%20facility
In telecommunications, a technical control facility (TCF) is defined by US Federal Standard 1037C as a telecommunications facility, or a designated and specially configured part thereof, that: contains the equipment necessary for ensuring fast, reliable, and secure exchange of information; typically includes distribution frames and associated panels, jacks, and switches and monitoring, test, conditioning, and orderwire equipment; and allows telecommunications systems control personnel to exercise operational control of communications paths and facilities, make quality analyses of communications and communications channels, monitor operations and maintenance functions, recognize and correct deteriorating conditions, restore disrupted communications, provide requested on-call circuits, and take or direct such actions as may be required and practical to provide effective telecommunications services. References Telecommunications systems Telecommunications infrastructure
Technical control facility
[ "Technology" ]
150
[ "Telecommunications systems" ]
41,782
https://en.wikipedia.org/wiki/Telecommunications%20service
In telecommunications, a telecommunications service is a service provided by a telecommunications provider, or a specified set of user-information transfer capabilities provided to a group of users by a telecommunications system. The telecommunications service user is responsible for the information content of the message. The telecommunications service provider has the responsibility for the acceptance, transmission, and delivery of the message. For purposes of regulation by the Federal Communications Commission under the U.S. Communications Act of 1934 and Telecommunications Act of 1996, the definition of telecommunications service is "the offering of telecommunications for a fee directly to the public, or to such classes of users as to be effectively available directly to the public, regardless of the facilities used." Telecommunications, in turn, is defined as "the transmission, between or among points specified by the user, of information of the user’s choosing, without change in the form or content of the information as sent and received." See also Communications service provider Intelligent network service (IN service) Internet service provider (ISP) Service layer Value-added service or content provider References See also White pages Yellow Pages Tourism Service
Telecommunications service
[ "Technology" ]
221
[ "Information and communications technology", "Telecommunications" ]
41,783
https://en.wikipedia.org/wiki/Teleconference
A teleconference or telecon is a live exchange of information among several people remote from one another but linked by a communications system. Terms such as audio conferencing, telephone conferencing, and phone conferencing are also sometimes used to refer to teleconferencing. The communications system may support the teleconference by providing one or more of the following: audio, video, and/or data services by one or more means, such as telephone, computer, telegraph, teletypewriter, radio, and television. Internet teleconferencing Internet teleconferencing includes internet telephone conferencing, videotelephony, web conferencing, virtual workplace, and augmented reality conferencing. Internet telephony involves conducting a teleconference over the Internet or a wide area network. One key technology in this area is Voice over Internet Protocol (VOIP). A working example of augmented reality conferencing was demonstrated at the Salone di Mobile in Milano by AR+RFID Lab. See also References Telecommunications
Teleconference
[ "Technology" ]
230
[ "Information and communications technology", "Telecommunications" ]
41,789
https://en.wikipedia.org/wiki/Thermodynamic%20temperature
Thermodynamic temperature is a quantity defined in thermodynamics as distinct from kinetic theory or statistical mechanics. Historically, thermodynamic temperature was defined by Lord Kelvin in terms of a macroscopic relation between thermodynamic work and heat transfer as defined in thermodynamics, but the kelvin was redefined by international agreement in 2019 in terms of phenomena that are now understood as manifestations of the kinetic energy of free motion of microscopic particles such as atoms, molecules, and electrons. From the thermodynamic viewpoint, for historical reasons, because of how it is defined and measured, this microscopic kinetic definition is regarded as an "empirical" temperature. It was adopted because in practice it can generally be measured more precisely than can Kelvin's thermodynamic temperature. A thermodynamic temperature of zero is of particular importance for the third law of thermodynamics. By convention, it is reported on the Kelvin scale of temperature in which the unit of measurement is the kelvin (unit symbol: K). For comparison, a temperature of 295 K corresponds to 21.85 °C and 71.33 °F. Overview Thermodynamic temperature, as distinct from SI temperature, is defined in terms of a macroscopic Carnot cycle. Thermodynamic temperature is of importance in thermodynamics because it is defined in purely thermodynamic terms. SI temperature is conceptually far different from thermodynamic temperature. Thermodynamic temperature was rigorously defined historically long before there was a fair knowledge of microscopic particles such as atoms, molecules, and electrons. The International System of Units (SI) specifies the international absolute scale for measuring temperature, and the unit of measure kelvin (unit symbol: K) for specific values along the scale. 
The kelvin is also used for denoting temperature intervals (a span or difference between two temperatures) as per the following example usage: "A 60/40 tin/lead solder is non-eutectic and is plastic through a range of 5 kelvins as it solidifies." A temperature interval of one degree Celsius is the same magnitude as one kelvin. The magnitude of the kelvin was redefined in 2019 in relation to the physical property underlying thermodynamic temperature: the kinetic energy of atomic free particle motion. The revision fixed the Boltzmann constant at exactly 1.380649 × 10⁻²³ J/K. The microscopic property that imbues material substances with a temperature can be readily understood by examining the ideal gas law, which relates, per the Boltzmann constant, how heat energy causes precisely defined changes in the pressure and temperature of certain gases. This is because monatomic gases like helium and argon behave kinetically like freely moving perfectly elastic and spherical billiard balls that move only in a specific subset of the possible motions that can occur in matter: that comprising the three translational degrees of freedom. The translational degrees of freedom are the familiar billiard ball-like movements along the X, Y, and Z axes of 3D space (see Fig. 1, below). This is why the noble gases all have the same specific heat capacity per atom and why that value is lowest of all the gases. Molecules (two or more chemically bound atoms), however, have internal structure and therefore have additional internal degrees of freedom (see Fig. 3, below), which makes molecules absorb more heat energy for any given amount of temperature rise than do the monatomic gases. Heat energy is borne in all available degrees of freedom; this is in accordance with the equipartition theorem, so all available internal degrees of freedom have the same temperature as their three external degrees of freedom.
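The per-particle form of the ideal gas law mentioned above, p·V = N·kB·T, can be verified with a short sketch (the molar volume used here is a standard textbook figure, an assumption of this example):

```python
# Per-particle ideal gas law: p * V = N * k_B * T
k_B = 1.380649e-23    # Boltzmann constant, J/K (fixed exactly in 2019)
N_A = 6.02214076e23   # Avogadro constant, 1/mol (fixed exactly in 2019)

N = N_A               # one mole of a monatomic gas such as helium
T = 273.15            # 0 °C, expressed in kelvins
V = 0.022414          # m^3, molar volume of an ideal gas at 0 °C and 1 atm

p = N * k_B * T / V
print(f"pressure ≈ {p:.0f} Pa")   # ≈ 101,325 Pa, i.e. one standard atmosphere
```

Recovering one standard atmosphere from N, T, and V illustrates how the Boltzmann constant links particle counts and kinetic temperature to macroscopic pressure.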
However, the property that gives all gases their pressure, which is the net force per unit area on a container arising from gas particles recoiling off it, is a function of the kinetic energy borne in the freely moving atoms' and molecules' three translational degrees of freedom. Fixing the Boltzmann constant at a specific value, along with other rule making, had the effect of precisely establishing the magnitude of the unit interval of SI temperature, the kelvin, in terms of the average kinetic behavior of the noble gases. Moreover, the starting point of the thermodynamic temperature scale, absolute zero, was reaffirmed as the point at which zero average kinetic energy remains in a sample; the only remaining particle motion being that comprising random vibrations due to zero-point energy. Absolute zero of temperature Temperature scales are numerical. The numerical zero of a temperature scale is not bound to the absolute zero of temperature. Nevertheless, some temperature scales have their numerical zero coincident with the absolute zero of temperature. Examples are the International SI temperature scale, the Rankine temperature scale, and the thermodynamic temperature scale. Other temperature scales have their numerical zero far from the absolute zero of temperature. Examples are the Fahrenheit scale and the Celsius scale. At the zero point of thermodynamic temperature, absolute zero, the particle constituents of matter have minimal motion and can become no colder. Absolute zero, which is a temperature of zero kelvins (0 K), precisely corresponds to −273.15 °C and −459.67 °F. Matter at absolute zero has no remaining transferable average kinetic energy and the only remaining particle motion is due to an ever-pervasive quantum mechanical phenomenon called ZPE (zero-point energy). 
Though the atoms in, for instance, a container of liquid helium that was precisely at absolute zero would still jostle slightly due to zero-point energy, a theoretically perfect heat engine with such helium as one of its working fluids could never transfer any net kinetic energy (heat energy) to the other working fluid and no thermodynamic work could occur. Temperature is generally expressed in absolute terms when scientifically examining temperature's interrelationships with certain other physical properties of matter such as its volume or pressure (see Gay-Lussac's law), or the wavelength of its emitted black-body radiation. Absolute temperature is also useful when calculating chemical reaction rates (see Arrhenius equation). Furthermore, absolute temperature is typically used in cryogenics and related phenomena like superconductivity, as per the following example usage: "Conveniently, tantalum's transition temperature (Tc) of 4.4924 kelvin is slightly above the 4.2221 K boiling point of helium." Boltzmann constant The Boltzmann constant and its related formulas describe the realm of particle kinetics and velocity vectors whereas ZPE (zero-point energy) is an energy field that jostles particles in ways described by the mathematics of quantum mechanics. In atomic and molecular collisions in gases, ZPE introduces a degree of chaos, i.e., unpredictability, to rebound kinetics; it is as likely that there will be less ZPE-induced particle motion after a given collision as more. This random nature of ZPE is why it has no net effect upon either the pressure or volume of any bulk quantity (a statistically significant quantity of particles) of gases. However, in condensed matter, e.g., solids and liquids, ZPE causes inter-atomic jostling where atoms would otherwise be perfectly stationary.
Inasmuch as the real-world effects that ZPE has on substances can vary as one alters a thermodynamic system (for example, due to ZPE, helium won't freeze unless under a pressure of at least 2.5 MPa (25 bar)), ZPE is very much a form of thermal energy and may properly be included when tallying a substance's internal energy. Rankine scale Though there have been many other temperature scales throughout history, there have been only two scales for measuring thermodynamic temperature which have absolute zero as their null point (0): The Kelvin scale and the Rankine scale. Throughout the scientific world where modern measurements are nearly always made using the International System of Units, thermodynamic temperature is measured using the Kelvin scale. The Rankine scale is part of English engineering units and finds use in certain engineering fields, particularly in legacy reference works. The Rankine scale uses the degree Rankine (symbol: °R) as its unit, which is the same magnitude as the degree Fahrenheit (symbol: °F). A unit increment of one kelvin is exactly 1.8 times one degree Rankine; thus, to convert a specific temperature on the Kelvin scale to the Rankine scale, T°R = 1.8 × TK, and to convert from a temperature on the Rankine scale to the Kelvin scale, TK = T°R / 1.8. Consequently, absolute zero is "0" for both scales, but the melting point of water ice (0 °C and 273.15 K) is 491.67 °R. To convert temperature intervals (a span or difference between two temperatures), the formulas from the preceding paragraph are applicable; for instance, an interval of 5 kelvin is precisely equal to an interval of 9 degrees Rankine. Modern redefinition of the kelvin For 65 years, between 1954 and the 2019 revision of the SI, a temperature interval of one kelvin was defined as 1/273.16 of the difference between the triple point of water and absolute zero.
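The Kelvin–Rankine conversions given in the Rankine scale passage above reduce to a single factor of 1.8, sketched here:

```python
def kelvin_to_rankine(t_k):
    """Express a Kelvin-scale temperature on the Rankine scale."""
    return t_k * 1.8

def rankine_to_kelvin(t_r):
    """Express a Rankine-scale temperature on the Kelvin scale."""
    return t_r / 1.8

# The melting point of water ice, 273.15 K, corresponds to 491.67 °R,
# and an interval of 5 kelvins spans 9 degrees Rankine:
print(kelvin_to_rankine(273.15), kelvin_to_rankine(5.0))
```

Because both scales share the same zero (absolute zero), specific temperatures and temperature intervals convert with the same factor; no offset term is needed.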
The 1954 resolution by the International Bureau of Weights and Measures (known by the French-language acronym BIPM), plus later resolutions and publications, defined the triple point of water as precisely 273.16 K and acknowledged that it was "common practice" to accept that due to previous conventions (namely, that 0 °C had long been defined as the melting point of water and that the triple point of water had long been experimentally determined to be indistinguishably close to 0.01 °C), the difference between the Celsius scale and Kelvin scale is accepted as 273.15 kelvins; which is to say, 0 °C corresponds to 273.15 kelvins. The net effect of this as well as later resolutions was twofold: 1) they defined absolute zero as precisely 0 K, and 2) they defined that the triple point of special isotopically controlled water called Vienna Standard Mean Ocean Water occurred at precisely 273.16 K and 0.01 °C. One effect of the aforementioned resolutions was that the melting point of water, while very close to 273.15 K and 0 °C, was not a defining value and was subject to refinement with more precise measurements. The 1954 BIPM standard did a good job of establishing—within the uncertainties due to isotopic variations between water samples—temperatures around the freezing and triple points of water, but required that intermediate values between the triple point and absolute zero, as well as extrapolated values from room temperature and beyond, be experimentally determined via apparatus and procedures in individual labs. This shortcoming was addressed by the International Temperature Scale of 1990, or ITS90, which defined 13 additional points, from 13.8033 K to 1,357.77 K. While definitional, ITS90 had—and still has—some challenges, partly because eight of its extrapolated values depend upon the melting or freezing points of metal samples, which must remain exceedingly pure lest their melting or freezing points be affected—usually depressed.
The 2019 revision of the SI was primarily for the purpose of decoupling much of the SI system's definitional underpinnings from the kilogram, which was the last physical artifact defining an SI base unit (a platinum/iridium cylinder stored under three nested bell jars in a safe located in France) and which had highly questionable stability. The solution required that four physical constants, including the Boltzmann constant, be definitionally fixed. Assigning the Boltzmann constant a precisely defined value had no practical effect on modern thermometry except for the most exquisitely precise measurements. Before the revision, the triple point of water was exactly 273.16 K and 0.01 °C and the Boltzmann constant was experimentally determined to be 1.380 649 03(51) × 10⁻²³ J/K, where the "(51)" denotes the uncertainty in the two least significant digits (the 03) and equals a relative standard uncertainty of 0.37 ppm. Afterwards, by defining the Boltzmann constant as exactly 1.380649 × 10⁻²³ J/K, the 0.37 ppm uncertainty was transferred to the triple point of water, which became an experimentally determined value of 273.1600(1) K. That the triple point of water ended up being exceedingly close to 273.16 K after the SI revision was no accident; the final value of the Boltzmann constant was determined, in part, through clever experiments with argon and helium that used the triple point of water for their key reference temperature. Notwithstanding the 2019 revision, water triple-point cells continue to serve in modern thermometry as exceedingly precise calibration references at 273.16 K and 0.01 °C. Moreover, the triple point of water remains one of the 14 calibration points comprising ITS90, which spans from the triple point of hydrogen (13.8033 K) to the freezing point of copper (1,357.77 K), which is a nearly hundredfold range of thermodynamic temperature.
Relationship of temperature, motions, conduction, and thermal energy Nature of kinetic energy, translational motion, and temperature The thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the average kinetic energy of a specific kind of particle motion known as translational motion. These simple movements in the three X, Y, and Z–axis dimensions of space mean the particles move in the three spatial degrees of freedom. This particular form of kinetic energy is sometimes referred to as kinetic temperature. Translational motion is but one form of heat energy and is what gives gases not only their temperature, but also their pressure and the vast majority of their volume. This relationship between the temperature, pressure, and volume of gases is established by the ideal gas law's formula and is embodied in the gas laws. Though the kinetic energy borne exclusively in the three translational degrees of freedom comprises the thermodynamic temperature of a substance, molecules, as can be seen in Fig. 3, can have other degrees of freedom, all of which fall under three categories: bond length, bond angle, and rotational. All three additional categories are not necessarily available to all molecules, and even for molecules that can experience all three, some can be "frozen out" below a certain temperature. Nonetheless, all those degrees of freedom that are available to the molecules under a particular set of conditions contribute to the specific heat capacity of a substance; which is to say, they increase the amount of heat (kinetic energy) required to raise a given amount of the substance by one kelvin or one degree Celsius. The relationship of kinetic energy, mass, and velocity is given by the formula Ek = (1/2)mv².
Accordingly, particles with one unit of mass moving at one unit of velocity have precisely the same kinetic energy, and precisely the same temperature, as those with four times the mass but half the velocity. The extent to which the kinetic energy of translational motion in a statistically significant collection of atoms or molecules in a gas contributes to the pressure and volume of that gas is a proportional function of thermodynamic temperature as established by the Boltzmann constant (symbol: kB). The Boltzmann constant also relates the thermodynamic temperature of a gas to the mean kinetic energy of an individual particle's translational motion as follows: Ē = (3/2) kB T, where: Ē is the mean kinetic energy for an individual particle, and T is the thermodynamic temperature of the bulk quantity of the substance. While the Boltzmann constant is useful for finding the mean kinetic energy in a sample of particles, it is important to note that even when a substance is isolated and in thermodynamic equilibrium (all parts are at a uniform temperature and no heat is going into or out of it), the translational motions of individual atoms and molecules occur across a wide range of speeds (see animation in Fig. 1 above). At any one instant, the proportion of particles moving at a given speed within this range is determined by probability as described by the Maxwell–Boltzmann distribution. The graph shown here in Fig. 2 shows the speed distribution of 5500 K helium atoms. They have a most probable speed of 4.780 km/s (0.2092 s/km). However, a certain proportion of atoms at any given instant are moving faster while others are moving relatively slowly; some are momentarily at a virtual standstill (off the x–axis to the right). This graph uses inverse speed for its x-axis so the shape of the curve can easily be compared to the curves in Fig. 5 below. In both graphs, zero on the x-axis represents infinite temperature. Additionally, the x- and y-axes on both graphs are scaled proportionally.
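The mean-kinetic-energy relation and the quoted 4.780 km/s most probable speed of 5500 K helium can be cross-checked numerically (helium's atomic mass below is a standard reference value):

```python
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
m_He = 4.0026 * 1.66054e-27   # mass of a helium atom, kg

T = 5500.0                    # thermodynamic temperature, K

# Mean translational kinetic energy per particle: E = (3/2) * k_B * T
E_mean = 1.5 * k_B * T

# Most probable speed of the Maxwell–Boltzmann distribution: v_p = sqrt(2 k_B T / m)
v_p = math.sqrt(2 * k_B * T / m_He)
print(f"v_p ≈ {v_p / 1000:.3f} km/s")   # ≈ 4.780 km/s, matching the figure quoted in the text
```

Note that the most probable speed (the peak of the distribution) differs from the mean and rms speeds; each is a different fixed multiple of sqrt(kB T / m).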
High speeds of translational motion Although very specialized laboratory equipment is required to directly detect translational motions, the resultant collisions by atoms or molecules with small particles suspended in a fluid produce Brownian motion that can be seen with an ordinary microscope. The translational motions of elementary particles are very fast and temperatures close to absolute zero are required to directly observe them. For instance, when scientists at the NIST achieved a record-setting cold temperature of 700 nK (billionths of a kelvin) in 1994, they used optical lattice laser equipment to adiabatically cool cesium atoms. They then turned off the entrapment lasers and directly measured atom velocities of 7 mm per second in order to calculate their temperature. Formulas for calculating the velocity and speed of translational motion are given in the following footnote. It is neither difficult to imagine atomic motions due to kinetic temperature, nor to distinguish between such motions and those due to zero-point energy. Consider the following hypothetical thought experiment, as illustrated in Fig. 2.5 at left, with an atom that is exceedingly close to absolute zero. Imagine peering through a common optical microscope set to 400 power, which is about the maximum practical magnification for optical microscopes. Such microscopes generally provide fields of view a bit over 0.4 mm in diameter. At the center of the field of view is a single levitated argon atom (argon comprises about 0.93% of air) that is illuminated and glowing against a dark backdrop. If this argon atom was at a beyond-record-setting one-trillionth of a kelvin above absolute zero, and was moving perpendicular to the field of view towards the right, it would require 13.9 seconds to move from the center of the image to the 200-micron tick mark; this travel distance is about the same as the width of the period at the end of this sentence on modern computer monitors.
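Both numbers in the argon thought experiment follow from the one-dimensional rms speed, sqrt(k_B·T/m). (Which speed statistic the example uses is an assumption here, but this choice reproduces the figures in the text.)

```python
import math

K_B = 1.380649e-23                        # Boltzmann constant, J/K
m_argon = 39.948 * 1.66053906660e-27      # mass of an argon atom, kg

temp = 1.0e-12                            # one-trillionth of a kelvin
v = math.sqrt(K_B * temp / m_argon)       # 1-D rms speed: ~14.43 microns per second
t = 200e-6 / v                            # time to cross 200 microns: ~13.9 seconds
```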
As the argon atom slowly moved, the positional jitter due to zero-point energy would be much less than the 200-nanometer (0.0002 mm) resolution of an optical microscope. Importantly, the atom's translational velocity of 14.43 microns per second constitutes all its retained kinetic energy due to not being precisely at absolute zero. Were the atom precisely at absolute zero, imperceptible jostling due to zero-point energy would cause it to very slightly wander, but the atom would perpetually be located, on average, at the same spot within the field of view. This is analogous to a boat that has had its motor turned off and is now bobbing slightly in relatively calm and windless ocean waters; even though the boat randomly drifts to and fro, it stays in the same spot in the long term and makes no headway through the water. Accordingly, an atom that was precisely at absolute zero would not be "motionless", and yet, a statistically significant collection of such atoms would have zero net kinetic energy available to transfer to any other collection of atoms. This is because regardless of the kinetic temperature of the second collection of atoms, they too experience the effects of zero-point energy. Such are the consequences of statistical mechanics and the nature of thermodynamics. Internal motions of molecules and internal energy As mentioned above, there are other ways molecules can jiggle besides the three translational degrees of freedom that imbue substances with their kinetic temperature. As can be seen in the animation at right, molecules are complex objects; they are a population of atoms and thermal agitation can strain their internal chemical bonds in three different ways: via rotation, bond length, and bond angle movements; these are all types of internal degrees of freedom. 
This makes molecules distinct from monatomic substances (consisting of individual atoms) like the noble gases helium and argon, which have only the three translational degrees of freedom (the X, Y, and Z axis). Kinetic energy is stored in molecules' internal degrees of freedom, which gives them an internal temperature. Even though these motions are called "internal", the external portions of molecules still move—rather like the jiggling of a stationary water balloon. This permits the two-way exchange of kinetic energy between internal motions and translational motions with each molecular collision. Accordingly, as internal energy is removed from molecules, both their kinetic temperature (the kinetic energy of translational motion) and their internal temperature simultaneously diminish in equal proportions. This phenomenon is described by the equipartition theorem, which states that for any bulk quantity of a substance in equilibrium, the kinetic energy of particle motion is evenly distributed among all the active degrees of freedom available to the particles. Since the internal temperature of molecules is usually equal to their kinetic temperature, the distinction is usually of interest only in the detailed study of non-local-thermodynamic-equilibrium (non-LTE) phenomena such as combustion, the sublimation of solids, and the diffusion of hot gases in a partial vacuum. The kinetic energy stored internally in molecules causes substances to contain more heat energy at any given temperature and to absorb additional internal energy for a given temperature increase. This is because any kinetic energy that is, at a given instant, bound in internal motions, is not contributing to the molecules' translational motions at that same instant. This extra kinetic energy simply increases the amount of internal energy that substance absorbs for a given temperature rise. This property is known as a substance's specific heat capacity.
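Per the equipartition theorem, each active degree of freedom contributes R/2 to the molar heat capacity at constant volume, so counting degrees of freedom predicts relative heat capacities directly; a small sketch:

```python
R = 8.314462618  # molar gas constant, J/(mol*K)

def molar_heat_capacity_cv(degrees_of_freedom):
    """Equipartition: each active degree of freedom contributes R/2 per mole."""
    return degrees_of_freedom * R / 2.0

cv_monatomic = molar_heat_capacity_cv(3)  # helium, argon: translation only
cv_nitrogen = molar_heat_capacity_cv(5)   # room-temperature N2: translation + 2 rotations
ratio = cv_nitrogen / cv_monatomic        # five-thirds, matching the nitrogen comparison
```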
Different molecules absorb different amounts of internal energy for each incremental increase in temperature; that is, they have different specific heat capacities. High specific heat capacity arises, in part, because certain substances' molecules possess more internal degrees of freedom than others do. For instance, room-temperature nitrogen, which is a diatomic molecule, has five active degrees of freedom: the three comprising translational motion plus two rotational degrees of freedom internally. Not surprisingly, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) as do the monatomic gases. Another example is gasoline (see table showing its specific heat capacity). Gasoline can absorb a large amount of heat energy per mole with only a modest temperature change because each molecule comprises an average of 21 atoms and therefore has many internal degrees of freedom. Even larger, more complex molecules can have dozens of internal degrees of freedom. Diffusion of thermal energy: entropy, phonons, and mobile conduction electrons Heat conduction is the diffusion of thermal energy from hot parts of a system to cold parts. A system can be either a single bulk entity or a plurality of discrete bulk entities. The term bulk in this context means a statistically significant quantity of particles (which can be a microscopic amount). Whenever thermal energy diffuses within an isolated system, temperature differences within the system decrease (and entropy increases). One particular heat conduction mechanism occurs when translational motion, the particle motion underlying temperature, transfers momentum from particle to particle in collisions. In gases, these translational motions are of the nature shown above in Fig. 1. 
As can be seen in that animation, not only does momentum (heat) diffuse throughout the volume of the gas through serial collisions, but entire molecules or atoms can move forward into new territory, bringing their kinetic energy with them. Consequently, temperature differences equalize throughout gases very quickly—especially for light atoms or molecules; convection speeds this process even more. Translational motion in solids, however, takes the form of phonons (see Fig. 4 at right). Phonons are constrained, quantized wave packets that travel at the speed of sound of a given substance. The manner in which phonons interact within a solid determines a variety of its properties, including its thermal conductivity. In electrically insulating solids, phonon-based heat conduction is usually inefficient and such solids are considered thermal insulators (such as glass, plastic, rubber, ceramic, and rock). This is because in solids, atoms and molecules are locked into place relative to their neighbors and are not free to roam. Metals, however, are not restricted to only phonon-based heat conduction. Thermal energy conducts through metals extraordinarily quickly because instead of direct molecule-to-molecule collisions, the vast majority of thermal energy is mediated via very light, mobile conduction electrons. This is why there is a near-perfect correlation between metals' thermal conductivity and their electrical conductivity. Conduction electrons imbue metals with their extraordinary conductivity because they are delocalized (i.e., not tied to a specific atom) and behave rather like a sort of quantum gas due to the effects of zero-point energy (for more on ZPE, see Note 1 below). Furthermore, electrons are relatively light with a rest mass only 1/1836 that of a proton. This is about the same ratio as a .22 Short bullet (29 grains or 1.88 g) compared to the rifle that shoots it.
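The mass-ratio argument can be made quantitative: for equal and opposite momenta, kinetic energy E_k = p²/(2m) divides in inverse proportion to mass, so nearly all of it ends up in the lighter body. The rifle mass below is an assumed illustrative value, chosen as roughly 1836 times the bullet mass to mirror the proton-to-electron ratio.

```python
def kinetic_energy_from_momentum(p, mass):
    """E_k = p**2 / (2*m): for a fixed momentum, lighter bodies carry more energy."""
    return p ** 2 / (2.0 * mass)

m_bullet = 1.88e-3   # .22 Short bullet, kg (29 grains, from the text)
m_rifle = 3.45       # assumed rifle mass, kg: ~1836x the bullet, like proton vs electron

p = 1.0  # equal-and-opposite momenta (arbitrary units); recoil gives the rifle -p
bullet_share = kinetic_energy_from_momentum(p, m_bullet)
rifle_share = kinetic_energy_from_momentum(p, m_rifle)
fraction_in_bullet = bullet_share / (bullet_share + rifle_share)  # ~0.9995
```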
As Isaac Newton's third law of motion holds, forces occur in pairs that are equal in magnitude and opposite in direction. However, a bullet accelerates faster than a rifle given an equal force. Since kinetic energy increases as the square of velocity, nearly all the kinetic energy goes into the bullet, not the rifle, even though both experience the same force from the expanding propellant gases. In the same manner, because they are much less massive, thermal energy is readily borne by mobile conduction electrons. Additionally, because they are delocalized and very fast, kinetic thermal energy conducts extremely quickly through metals with abundant conduction electrons. Diffusion of thermal energy: black-body radiation Thermal radiation is a byproduct of the collisions arising from various vibrational motions of atoms. These collisions cause the electrons of the atoms to emit thermal photons (known as black-body radiation). Photons are emitted anytime an electric charge is accelerated (as happens when electron clouds of two atoms collide). Even individual molecules with internal temperatures greater than absolute zero also emit black-body radiation from their atoms. In any bulk quantity of a substance at equilibrium, black-body photons are emitted across a range of wavelengths in a spectrum that has a bell curve-like shape called a Planck curve (see graph in Fig. 5 at right). The top of a Planck curve (the peak emittance wavelength) is located in a particular part of the electromagnetic spectrum depending on the temperature of the black-body. Substances at extreme cryogenic temperatures emit at long radio wavelengths whereas extremely hot temperatures produce short gamma rays (see the table of thermodynamic temperatures below). Black-body radiation diffuses thermal energy throughout a substance as the photons are absorbed by neighboring atoms, transferring momentum in the process. Black-body photons also easily escape from a substance and can be absorbed by the ambient environment; kinetic energy is lost in the process.
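The temperature dependence of black-body emission can be checked numerically: the Stefan–Boltzmann law gives the T⁴ scaling of total radiated power, and Wien's displacement law locates the peak emittance wavelength. A sketch:

```python
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def radiant_power_ratio(t_hot, t_cold):
    """Stefan-Boltzmann law: emitted power per unit area scales as T**4."""
    return (t_hot / t_cold) ** 4

def peak_wavelength(temp_k):
    """Wien's displacement law: lambda_peak = b / T."""
    return WIEN_B / temp_k

ratio = radiant_power_ratio(824.0, 296.0)  # ~60: dull-red-hot vs room temperature
lam_room = peak_wavelength(296.0)          # ~9.8 micrometres, in the far infrared
```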
As established by the Stefan–Boltzmann law, the intensity of black-body radiation increases as the fourth power of absolute temperature. Thus, a black-body at 824 K (just short of glowing dull red) emits 60 times the radiant power as it does at 296 K (room temperature). This is why one can so easily feel the radiant heat from hot objects at a distance. At higher temperatures, such as those found in an incandescent lamp, black-body radiation can be the principal mechanism by which thermal energy escapes a system. Table of thermodynamic temperatures The table below shows various points on the thermodynamic scale, in order of increasing temperature. Heat of phase changes The kinetic energy of particle motion is just one contributor to the total thermal energy in a substance; another is the potential energy of the molecular bonds that can form in a substance as it cools (such as during condensing and freezing). The thermal energy required for a phase transition is called latent heat. This phenomenon may more easily be grasped by considering it in the reverse direction: latent heat is the energy required to break chemical bonds (such as during evaporation and melting). Almost everyone is familiar with the effects of phase transitions; for instance, steam at 100 °C can cause severe burns much faster than the 100 °C air from a hair dryer. This occurs because a large amount of latent heat is liberated as steam condenses into liquid water on the skin. Even though thermal energy is liberated or absorbed during phase transitions, pure chemical elements, compounds, and eutectic alloys exhibit no temperature change whatsoever while they undergo them (see Fig. 7, below right). Consider one particular type of phase transition: melting. When a solid is melting, crystal lattice chemical bonds are being broken apart; the substance is transitioning from what is known as a more ordered state to a less ordered state. In Fig.
7, the melting of ice is shown within the lower left box heading from blue to green. At one specific thermodynamic point, the melting point (which is 0 °C across a wide pressure range in the case of water), all the atoms or molecules are, on average, at the maximum energy threshold their chemical bonds can withstand without breaking away from the lattice. Chemical bonds are all-or-nothing forces: they either hold fast, or break; there is no in-between state. Consequently, when a substance is at its melting point, every joule of added thermal energy only breaks the bonds of a specific quantity of its atoms or molecules, converting them into a liquid of precisely the same temperature; no kinetic energy is added to translational motion (which is what gives substances their temperature). The effect is rather like popcorn: at a certain temperature, additional thermal energy cannot make the kernels any hotter until the transition (popping) is complete. If the process is reversed (as in the freezing of a liquid), thermal energy must be removed from a substance. As stated above, the thermal energy required for a phase transition is called latent heat. In the specific cases of melting and freezing, it is called enthalpy of fusion or heat of fusion. If the molecular bonds in a crystal lattice are strong, the heat of fusion can be relatively great, typically in the range of 6 to 30 kJ per mole for water and most of the metallic elements. If the substance is one of the monatomic gases (which have little tendency to form molecular bonds) the heat of fusion is more modest, ranging from 0.021 to 2.3 kJ per mole. Relatively speaking, phase transitions can be truly energetic events. To completely melt ice at 0 °C into water at 0 °C, one must add roughly 80 times the thermal energy as is required to increase the temperature of the same mass of liquid water by one degree Celsius. The metals' ratios are even greater, typically in the range of 400 to 1200 times. 
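The roughly 80:1 figure for water can be checked against tabulated values, using an enthalpy of fusion of about 333.55 kJ/kg and a specific heat of about 4.184 kJ/(kg·K) for liquid water:

```python
L_FUSION_WATER = 333.55  # enthalpy (heat) of fusion of ice, kJ/kg
C_WATER = 4.184          # specific heat capacity of liquid water, kJ/(kg*K)

# Energy to melt ice at 0 degC, relative to the energy needed to warm
# the same mass of liquid water by one degree Celsius:
ratio = L_FUSION_WATER / C_WATER  # ~80, as stated in the text
```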
The phase transition of boiling is much more energetic than freezing. For instance, the energy required to completely boil or vaporize water (what is known as enthalpy of vaporization) is roughly 540 times that required for a one-degree increase. Water's sizable enthalpy of vaporization is why one's skin can be burned so quickly as steam condenses on it (heading from red to green in Fig. 7 above); water vapor (gas phase) liquefies on the skin, releasing a large amount of energy (enthalpy) to the environment, including the skin, resulting in skin damage. In the opposite direction, this is why one's skin feels cool as liquid water on it evaporates (a process that occurs at a sub-ambient wet-bulb temperature that is dependent on relative humidity); the water evaporating on the skin takes a large amount of energy from the environment, including the skin, reducing the skin temperature. Water's highly energetic enthalpy of vaporization is also an important factor underlying why solar pool covers (floating, insulated blankets that cover swimming pools when the pools are not in use) are so effective at reducing heating costs: they prevent evaporation. (In other words, the cover limits the large amount of energy that evaporation would otherwise remove from the water.) For instance, the evaporation of just 20 mm of water from a 1.29-meter-deep pool chills its water 8.4 °C. Internal energy The total energy of all translational and internal particle motions, including that of conduction electrons, plus the potential energy of phase changes, plus the zero-point energy of a substance, comprises its internal energy.
Internal energy at absolute zero As a substance cools, different forms of internal energy and their related effects simultaneously decrease in magnitude: the latent heat of available phase transitions is liberated as a substance changes from a less ordered state to a more ordered state; the translational motions of atoms and molecules diminish (their kinetic energy or temperature decreases); the internal motions of molecules diminish (their internal energy or temperature decreases); conduction electrons (if the substance is an electrical conductor) travel somewhat slower; and black-body radiation's peak emittance wavelength increases (the photons' energy decreases). When particles of a substance are as close as possible to complete rest and retain only ZPE (zero-point energy)-induced quantum mechanical motion, the substance is at the temperature of absolute zero (T = 0). Whereas absolute zero is the point of zero thermodynamic temperature and is also the point at which the particle constituents of matter have minimal motion, absolute zero is not necessarily the point at which a substance contains zero internal energy; one must be very precise with what one means by internal energy. Often, all the phase changes that can occur in a substance will have occurred by the time it reaches absolute zero. However, this is not always the case. Notably, T = 0 helium remains liquid at room pressure (Fig. 9 at right) and must be under a pressure of at least 25 bar (2.5 MPa) to crystallize. This is because helium's heat of fusion (the energy required to melt helium ice) is so low (only 21 joules per mole) that the motion-inducing effect of zero-point energy is sufficient to prevent it from freezing at lower pressures. A further complication is that many solids change their crystal structure to more compact arrangements at extremely high pressures (up to millions of bars, or hundreds of gigapascals).
These are known as solid–solid phase transitions wherein latent heat is liberated as a crystal lattice changes to a more thermodynamically favorable, compact one. The above complexities make for rather cumbersome blanket statements regarding the internal energy in T = 0 substances. Regardless of pressure though, what can be said is that at absolute zero, all solids with a lowest-energy crystal lattice, such as those with a closest-packed arrangement (see Fig. 8, above left), contain minimal internal energy, retaining only that due to the ever-present background of zero-point energy. One can also say that for a given substance at constant pressure, absolute zero is the point of lowest enthalpy (a measure of work potential that takes internal energy, pressure, and volume into consideration). Lastly, all T = 0 substances contain zero kinetic thermal energy. Practical applications for thermodynamic temperature Thermodynamic temperature is useful not only for scientists but also for laypeople in many disciplines involving gases. By expressing variables in absolute terms and applying Gay-Lussac's law of temperature/pressure proportionality, solutions to everyday problems are straightforward; for instance, calculating how a temperature change affects the pressure inside an automobile tire. If the tire has a cold gauge pressure of 200 kPa, then its absolute pressure is 300 kPa. Room temperature ("cold" in tire terms) is 296 K. If the tire temperature is 20 °C hotter (20 kelvins), the solution is calculated as 316 K ÷ 296 K = 6.8% greater thermodynamic temperature and absolute pressure; that is, an absolute pressure of 320 kPa, which is a gauge pressure of 220 kPa. Relationship to ideal gas law The thermodynamic temperature is closely linked to the ideal gas law and its consequences. It can be linked also to the second law of thermodynamics.
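The tire example above can be written as a small helper. An ambient pressure of roughly 100 kPa is assumed here, since the text's 200 kPa gauge reading corresponds to a 300 kPa absolute pressure:

```python
P_ATM = 100.0  # assumed ambient atmospheric pressure, kPa

def hot_gauge_pressure(gauge_cold_kpa, t_cold_k, t_hot_k):
    """Gay-Lussac's law applied to absolute pressure and thermodynamic temperature."""
    p_abs_cold = gauge_cold_kpa + P_ATM        # gauge -> absolute
    p_abs_hot = p_abs_cold * t_hot_k / t_cold_k  # pressure scales with absolute T
    return p_abs_hot - P_ATM                   # absolute -> gauge

p_hot = hot_gauge_pressure(200.0, 296.0, 296.0 + 20.0)  # ~220 kPa gauge
```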
The thermodynamic temperature can be shown to have special properties, and in particular can be seen to be uniquely defined (up to some constant multiplicative factor) by considering the efficiency of idealized heat engines. Thus the ratio T_1/T_2 of two temperatures T_1 and T_2 is the same in all absolute scales. Strictly speaking, the temperature of a system is well-defined only if it is at thermal equilibrium. From a microscopic viewpoint, a material is at thermal equilibrium if the quantities of heat exchanged between its individual particles cancel out. There are many possible scales of temperature, derived from a variety of observations of physical phenomena. Loosely stated, temperature differences dictate the direction of heat flow between two systems such that their combined energy is maximally distributed among their lowest possible states. We call this distribution "entropy". To better understand the relationship between temperature and entropy, consider the relationship between heat, work and temperature illustrated in the Carnot heat engine. The engine converts heat into work by directing a temperature gradient between a higher-temperature heat source, T_H, and a lower-temperature heat sink, T_C, through a gas-filled piston. The work done per cycle is equal in magnitude to the net heat taken up, which is the sum of the heat q_H > 0 taken up by the engine from the high-temperature source, plus the waste heat given off by the engine, q_C < 0. The efficiency of the engine is the work divided by the heat put into the system, or efficiency = w_cy/q_H = (q_H + q_C)/q_H = 1 + q_C/q_H (Equation 1), where w_cy is the work done per cycle. Thus the efficiency depends only on q_C/q_H. Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient.
Thus, any reversible heat engine operating between temperatures T_H and T_C must have the same efficiency, that is to say, the efficiency is a function of the temperatures only: |q_C|/q_H = f(T_H, T_C) (Equation 2). In addition, a reversible heat engine operating between a pair of thermal reservoirs at temperatures T_1 and T_3 must have the same efficiency as one consisting of two cycles, one between T_1 and another (intermediate) temperature T_2, and the second between T_2 and T_3, where T_1 > T_2 > T_3. If this were not the case, then energy (in the form of q) would be wasted or gained, resulting in different overall efficiencies every time a cycle is split into component cycles; clearly a cycle can be composed of any number of smaller cycles as an engine design choice, and any reversible engine between the same reservoirs at T_1 and T_3 must be equally efficient regardless of the engine design. If we choose engines such that the work done by the one-cycle engine and the two-cycle engine is the same, then the efficiency of each heat engine is written as below (with the heats taken as magnitudes): η_1 = 1 − q_3/q_1, η_2 = 1 − q_2/q_1, and η_3 = 1 − q_3/q_2. Here, engine 1 is the one-cycle engine, and engines 2 and 3 make up the two-cycle engine with the intermediate reservoir at T_2. We have also used the fact that the heat q_2 passes through the intermediate thermal reservoir at T_2 without losing its energy (i.e., q_2 is not lost during its passage through the reservoir at T_2). This fact can be proved by the following: in order to have consistency in the last equation, the heat q_2 flowing from engine 2 into the intermediate reservoir must be equal to the heat q_2 flowing out from the reservoir into engine 3. With this understanding of q_1, q_2, and q_3, mathematically, f(T_1, T_3) = q_3/q_1 = (q_2 q_3)/(q_1 q_2) = f(T_1, T_2) f(T_2, T_3). But since the first function is not a function of T_2, the product of the final two functions must result in the removal of T_2 as a variable. The only way is therefore to define the function f as follows: f(T_1, T_2) = g(T_2)/g(T_1) and f(T_2, T_3) = g(T_3)/g(T_2), so that f(T_1, T_3) = g(T_3)/g(T_1). I.e. the ratio of heat exchanged is a function of the respective temperatures at which they occur. We can choose any monotonic function for our g(T); it is a matter of convenience and convention that we choose g(T) = T.
Choosing then one fixed reference temperature (i.e. the triple point of water), we establish the thermodynamic temperature scale. Such a definition coincides with that of the ideal gas derivation; also it is this definition of the thermodynamic temperature that enables us to represent the Carnot efficiency in terms of T_H and T_C, and hence derive that the (complete) Carnot cycle is isentropic: |q_C|/q_H = f(T_H, T_C) = T_C/T_H (Equation 3). Substituting this back into our first formula for efficiency yields a relationship in terms of temperature: efficiency = 1 − T_C/T_H (Equation 4). Note that for T_C = 0 the efficiency is 100% and that efficiency becomes greater than 100% for T_C < 0, which is unrealistic. Subtracting 1 from the right hand side of Equation (4) and the middle portion gives q_C/q_H = −T_C/T_H and thus q_C/T_C + q_H/T_H = 0. The generalization of this equation is the Clausius theorem, which proposes the existence of a state function S (i.e., a function which depends only on the state of the system, not on how it reached that state) defined (up to an additive constant) by S = ∫ dq_rev/T (Equation 5), where the subscript rev indicates heat transfer in a reversible process. The function S is the entropy of the system, mentioned previously, and the change of S around any cycle is zero (as is necessary for any state function). Equation 5 can be rearranged to get an alternative definition for temperature in terms of entropy and heat (to avoid a logic loop, we should first define entropy through statistical mechanics): T = dq_rev/dS. For a constant-volume system (so no mechanical work) in which the entropy S(U) is a function of its internal energy U, dU = dq_rev and the thermodynamic temperature is therefore given by 1/T = dS/dU, so that the reciprocal of the thermodynamic temperature is the rate of change of entropy with respect to the internal energy at constant volume. History Guillaume Amontons (1663–1705) published two papers in 1702 and 1703 that may be used to credit him as being the first researcher to deduce the existence of a fundamental (thermodynamic) temperature scale featuring an absolute zero.
He made the discovery while endeavoring to improve upon the air thermometers in use at the time. His J-tube thermometers comprised a mercury column that was supported by a fixed mass of air entrapped within the sensing portion of the thermometer. In thermodynamic terms, his thermometers relied upon the volume / temperature relationship of gas under constant pressure. His measurements of the boiling point of water and the melting point of ice showed that regardless of the mass of air trapped inside his thermometers or the weight of mercury the air was supporting, the reduction in air volume at the ice point was always the same ratio. This observation led him to posit that a sufficient reduction in temperature would reduce the air volume to zero. In fact, his calculations projected that absolute zero was equivalent to −240 °C—only 33.15 degrees short of the true value of −273.15 °C. Amonton's discovery of a one-to-one relationship between absolute temperature and absolute pressure was rediscovered a century later and popularized within the scientific community by Joseph Louis Gay-Lussac. Today, this principle of thermodynamics is commonly known as Gay-Lussac's law but is also known as Amonton's law. In 1742, Anders Celsius (1701–1744) created a "backwards" version of the modern Celsius temperature scale. In Celsius's original scale, zero represented the boiling point of water and 100 represented the melting point of ice. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that ice's melting point was effectively unaffected by pressure. He also determined with remarkable precision how water's boiling point varied as a function of atmospheric pressure. He proposed that zero on his temperature scale (water's boiling point) would be calibrated at the mean barometric pressure at mean sea level. 
Coincident with the death of Anders Celsius in 1744, the botanist Carl Linnaeus (1707–1778) effectively reversed Celsius's scale upon receipt of his first thermometer featuring a scale where zero represented the melting point of ice and 100 represented water's boiling point. The custom-made Linnaeus-thermometer, for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time. For the next 204 years, the scientific and thermometry communities worldwide referred to this scale as the centigrade scale. Temperatures on the centigrade scale were often reported simply as degrees or, when greater specificity was desired, degrees centigrade. The symbol for temperature values on this scale was °C (in several formats over the years). Because the term centigrade was also the French-language name for a unit of angular measurement (one-hundredth of a right angle) and had a similar connotation in other languages, the term "centesimal degree" was used when very precise, unambiguous language was required by international standards bodies such as the International Bureau of Weights and Measures (BIPM). The 9th CGPM (General Conference on Weights and Measures) and the CIPM (International Committee for Weights and Measures) formally adopted degree Celsius (symbol: °C) in 1948. In his book Pyrometrie (1777), completed four months before his death, Johann Heinrich Lambert (1728–1777), sometimes incorrectly referred to as Joseph Lambert, proposed an absolute temperature scale based on the pressure/temperature relationship of a fixed volume of gas. This is distinct from the volume/temperature relationship of gas under constant pressure that Guillaume Amontons discovered 75 years earlier. Lambert stated that absolute zero was the point where a simple straight-line extrapolation reached zero gas pressure and was equal to −270 °C.
Notwithstanding the work of Guillaume Amontons 85 years earlier, Jacques Alexandre César Charles (1746–1823) is often credited with discovering (circa 1787), but not publishing, that the volume of a gas under constant pressure is proportional to its absolute temperature. The formula he created was V_1/T_1 = V_2/T_2. Joseph Louis Gay-Lussac (1778–1850) published work in 1802 (acknowledging the unpublished lab notes of Jacques Charles fifteen years earlier) describing how the volume of gas under constant pressure changes linearly with its absolute (thermodynamic) temperature. This behavior is called Charles's law and is one of the gas laws. His are the first known formulas to use the number 273 for the expansion coefficient of gas relative to the melting point of ice (indicating that absolute zero was equivalent to −273 °C). William Thomson (1824–1907), also known as Lord Kelvin, wrote in his 1848 paper "On an Absolute Thermometric Scale" of the need for a scale whereby infinite cold (absolute zero) was the scale's zero point, and which used the degree Celsius for its unit increment. Like Gay-Lussac, Thomson calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the kelvin thermodynamic temperature scale. Thomson's value of −273 was derived from 0.00366, which was the accepted expansion coefficient of gas per degree Celsius relative to the ice point. The inverse of −0.00366 expressed to five significant digits is −273.22 °C, which is remarkably close to the true value of −273.15 °C. In the paper he proposed to define temperature using idealized heat engines. In detail, he proposed that, given three heat reservoirs at temperatures T_1 > T_2 > T_3, if two reversible heat engines (Carnot engines), one working between T_1 and T_2 and another between T_2 and T_3, can produce the same amount of mechanical work by letting the same amount of heat pass through, then define T_1 − T_2 = T_2 − T_3.
Note that like Carnot, Kelvin worked under the assumption that heat is conserved ("the conversion of heat (or caloric) into mechanical effect is probably impossible"), and if heat goes into the heat engine, then heat must come out. Kelvin, realizing after Joule's experiments that heat is not a conserved quantity but is convertible with mechanical work, modified his scale in the 1851 work An Account of Carnot's Theory of the Motive Power of Heat. In this work, he defined absolute temperatures so that the ratio of the heats exchanged by a reversible engine working between two reservoirs equals the ratio of the reservoir temperatures: q_1/q_2 = T_1/T_2. The above definition fixes the ratios between absolute temperatures, but it does not fix a scale for absolute temperature. For the scale, Thomson proposed to use the Celsius degree, that is, the interval between the freezing and the boiling point of water. In 1859 Macquorn Rankine (1820–1872) proposed a thermodynamic temperature scale similar to William Thomson's but which used the degree Fahrenheit for its unit increment. This absolute scale is known today as the Rankine thermodynamic temperature scale. Ludwig Boltzmann (1844–1906) made major contributions to thermodynamics between 1877 and 1884 through an understanding of the role that particle kinetics and black body radiation played. His name is now attached to several of the formulas used today in thermodynamics. Gas thermometry experiments carefully calibrated to the melting point of ice and boiling point of water showed in the 1930s that absolute zero was equivalent to −273.15 °C. Resolution 3 of the 9th General Conference on Weights and Measures (CGPM) in 1948 fixed the triple point of water at precisely 0.01 °C. At this time, the triple point still had no formal definition for its equivalent kelvin value, which the resolution declared "will be fixed at a later date". The implication is that if the value of absolute zero measured in the 1930s was truly −273.15 °C, then the triple point of water (0.01 °C) was equivalent to 273.16 K.
Additionally, both the International Committee for Weights and Measures (CIPM) and the CGPM formally adopted the name Celsius for the degree Celsius and the Celsius temperature scale. Resolution 3 of the 10th CGPM in 1954 gave the kelvin scale its modern definition by choosing the triple point of water as its second defining point (with no change to absolute zero being the null point) and assigning it a temperature of precisely 273.16 kelvins (actually written 273.16 degrees Kelvin at the time). This, in combination with Resolution 3 of the 9th CGPM, had the effect of defining absolute zero as being precisely zero kelvins and −273.15 °C. Resolution 3 of the 13th CGPM in 1967/1968 renamed the unit increment of thermodynamic temperature kelvin, symbol K, replacing degree absolute, symbol °K. Further, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also decided in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water". The CIPM affirmed in 2005 that for the purposes of delineating the temperature of the triple point of water, the definition of the kelvin thermodynamic temperature scale would refer to water having an isotopic composition defined as being precisely equal to the nominal specification of Vienna Standard Mean Ocean Water. In November 2018, the 26th General Conference on Weights and Measures (CGPM) changed the definition of the kelvin by fixing the Boltzmann constant to exactly 1.380649×10⁻²³ when expressed in the unit J/K. This change (and other changes in the definition of SI units) was made effective on the 144th anniversary of the Metre Convention, 20 May 2019.
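Under the 2019 definition, the kelvin is tied to an energy scale through the exact Boltzmann constant; a minimal illustration (the triple-point energy shown is just k_B·T at 273.16 K, not a defined quantity):

```python
k_B = 1.380649e-23  # J/K, exact by definition since 20 May 2019

# Characteristic thermal energy k_B * T at the triple point of water.
T_triple = 273.16  # K (no longer exact by definition, but still ~273.16 K)
thermal_energy = k_B * T_triple
print(thermal_energy)  # ≈ 3.77e-21 J
```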
See also
:Category:Thermodynamics
Absolute zero
Hagedorn temperature
Adiabatic process
Boltzmann constant
Carnot heat engine
Conversion of scales of temperature
Energy conversion efficiency
Enthalpy
Enthalpy of fusion
Enthalpy of vaporization
Entropy
Equipartition theorem
Fahrenheit
First law of thermodynamics
Freezing
Gas laws
International System of Quantities
International Temperature Scale of 1990 (ITS-90)
Ideal gas law
Kelvin
Laws of thermodynamics
Maxwell–Boltzmann distribution
Orders of magnitude (temperature)
Phase transition
Planck's law of black body radiation
Rankine scale
Specific heat capacity
Temperature
Thermal radiation
Thermodynamic beta
Thermodynamic equations
Thermodynamic equilibrium
Thermodynamics
Timeline of heat engine technology
Timeline of temperature and pressure measurement technology
Triple point

Notes
In the following notes, wherever numeric equalities are shown in concise form, the two digits between the parentheses denote the uncertainty at 1-σ (1 standard deviation, 68% confidence level) in the two least significant digits of the significand.

External links
Zero Point Energy and Zero Point Field. A Web site with in-depth explanations of a variety of quantum effects. By Bernard Haisch, of Calphysics Institute.

Temperature SI base quantities State functions
Thermodynamic temperature
[ "Physics", "Chemistry", "Mathematics" ]
10,990
[ "State functions", "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Physical quantities", "SI base quantities", "Intensive quantities", "Quantity", "Thermodynamics", "Wikipedia categories named after physical quantities" ]
41,790
https://en.wikipedia.org/wiki/Third-order%20intercept%20point
In telecommunications, a third-order intercept point (IP3 or TOI) is a specific figure of merit associated with the more general third-order intermodulation distortion (IMD3), which is a measure for weakly nonlinear systems and devices, for example receivers, linear amplifiers and mixers. It is based on the idea that the device nonlinearity can be modeled using a low-order polynomial, derived by means of Taylor series expansion. The third-order intercept point relates nonlinear products caused by the third-order nonlinear term to the linearly amplified signal, in contrast to the second-order intercept point that uses second-order terms. The intercept point is a purely mathematical concept and does not correspond to a practically occurring physical power level. In many cases, it lies far beyond the damage threshold of the device. Definitions Two different definitions for intercept points are in use: Based on harmonics: The device is tested using a single input tone. The nonlinear products caused by nth-order nonlinearity appear at n times the frequency of the input tone. Based on intermodulation products: The device is fed with two sine tones, one at frequency f₁ and one at f₂. Cubing the sum of these sine waves produces sine waves at various frequencies, including 2f₁ − f₂ and 2f₂ − f₁. If f₁ and f₂ are large but very close together, then 2f₁ − f₂ and 2f₂ − f₁ will be very close to f₁ and f₂. This two-tone approach has the advantage that it is not restricted to broadband devices and is commonly used for radio receivers. The intercept point is obtained graphically by plotting the output power versus the input power, both on logarithmic scales (e.g., decibels). Two curves are drawn: one for the linearly amplified signal at an input tone frequency, one for a nonlinear product. On a logarithmic scale, the function xⁿ translates into a straight line with slope n. Therefore, the linearly amplified signal will exhibit a slope of 1.
A third-order nonlinear product will increase by 3 dB in power when the input power is raised by 1 dB. Both curves are extended with straight lines of slope 1 and n (3 for a third-order intercept point). The point where the curves intersect is the intercept point. It can be read off from the input or output power axis, leading to input (IIP3) or output (OIP3) intercept point respectively. Input and output intercept point differ by the small-signal gain of the device. Practical considerations The concept of intercept point is based on the assumption of a weakly nonlinear system, meaning that higher-order nonlinear terms are small enough to be negligible. In practice, the weakly nonlinear assumption may not hold for the upper end of the input power range, be it during measurement or during use of the amplifier. As a consequence, measured or simulated data will deviate from the ideal slope of n. The intercept point according to its basic definition should be determined by drawing the straight lines with slope 1 and n through the measured data at the smallest possible power level (possibly limited towards lower power levels by instrument or device noise). It is a frequent mistake to derive intercept points by either changing the slope of the straight lines, or fitting them to points measured at too high power levels. In certain situations such a measure can be useful, but it is not an intercept point according to definition. Its value depends on the measurement conditions that need to be documented, whereas the IP according to definition is mostly unambiguous; although there is some dependency on frequency and tone spacing, depending on the physics of the device under test. One of the useful applications of third-order intercept point is as a rule-of-thumb measure to estimate nonlinear products. When comparing systems or devices for linearity, a higher intercept point is better. 
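The slope-1/slope-3 geometry gives a common estimation shortcut: if the third-order products sit Δ dB below the test tones at the output, the input intercept point is the per-tone input power plus Δ/2. A minimal sketch (the function name is illustrative):

```python
def iip3_from_two_tone(p_in_dbm, delta_dbc):
    """Estimate the input-referred third-order intercept point (IIP3).

    p_in_dbm:  per-tone input power of the two-tone test, in dBm.
    delta_dbc: how far the third-order products sit below the tones
               at the output, in dB (a positive number).
    """
    # The fundamental rises 1 dB per dB of input and the IM3 product
    # 3 dB per dB, so their separation closes at 2 dB per dB of input.
    return p_in_dbm + delta_dbc / 2.0

# Tones at -5 dBm with IM3 products 30 dB down -> IIP3 = 10 dBm
print(iip3_from_two_tone(-5.0, 30.0))  # 10.0
```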
It can be seen that the spacing between two straight lines with slopes of 3 and 1 closes with slope 2. For example, assume a device with an input-referred third-order intercept point of 10 dBm is driven with a test signal of −5 dBm. This power is 15 dB below the intercept point, therefore nonlinear products will appear at approximately 2×15 dB below the test signal power at the device output (in other words, 3×15 dB below the output-referred third-order intercept point). A rule of thumb that holds for many linear radio-frequency amplifiers is that the 1 dB compression point point falls approximately 10 dB below the third-order intercept point. Theory The third-order intercept point (TOI) is a property of the device transfer function O (see diagram). This transfer function relates the output signal voltage level to the input signal voltage level. We assume a "linear" device having a transfer function whose small-signal form may be expressed in terms of a power series containing only odd terms, making the transfer function an odd function of input signal voltage, i.e., O(−s) = −O(s). Where the signals passing through the actual device are modulated sinusoidal voltage waveforms (e.g., RF amplifier), device nonlinearities can be expressed in terms of how they affect individual sinusoidal signal components. For example, say the input voltage signal is the sine wave and the device transfer function produces an output of the form where G is the amplifier gain, and D3 is cubic distortion. We may substitute the first equation into the second and, using the trigonometric identity we obtain the device output voltage waveform as The output waveform contains the original waveform, cos(ωt), plus a new harmonic term, cos(3ωt), the third-order term. The coefficient of the cos(ωt) harmonic has two terms, one that varies linearly with V and one that varies with the cube of V. 
In fact, the coefficient of cos(ωt) has nearly the same form as the transfer function, except for the 3/4 factor on the cubic term. In other words, as signal level V is increased, the level of the cos(ωt) term in the output eventually levels off, similar to how the transfer function levels off. Of course, the coefficients of the higher-order harmonics will increase (with increasing V) as the coefficient of the cos(ωt) term levels off (the power has to go somewhere). If we now restrict our attention to the portion of the cos(ωt) coefficient that varies linearly with V, and then ask ourselves, at what input voltage level V will the coefficients of the first- and third-order terms have equal magnitudes (i.e., where the magnitudes intersect), we find that this happens when V² = (4/3)·(G/D₃), which is the third-order intercept point (TOI). So, we see that the TOI input power level is simply 4/3 times the ratio of the gain and the cubic distortion term in the device transfer function. The smaller the cubic term is in relation to the gain, the more linear the device is, and the higher the TOI is. The TOI, being related to the magnitude squared of the input voltage waveform, is a power quantity, typically measured in milliwatts (mW). The TOI is always beyond operational power levels because the output power saturates before reaching this level. The TOI is closely related to the amplifier's "1 dB compression point", which is defined as that point at which the total coefficient of the cos(ωt) term is 1 dB below the linear portion of that coefficient. We can relate the 1 dB compression point to the TOI as follows. Since 1 dB = 20 log₁₀ 1.122, we may say, in a voltage sense, that the 1 dB compression point occurs when G·V − (3/4) D₃·V³ = G·V/1.122, or (3/4) D₃·V³ = (1 − 1/1.122)·G·V = 0.10875·G·V, or V² = 0.10875 × (4/3)·(G/D₃). In a power sense (V² is a power quantity), a factor of 0.10875 corresponds to −9.636 dB, so by this approximate analysis, the 1 dB compression point occurs roughly 9.6 dB below the TOI. Recall: decibel figure = 10 dB × log₁₀(power ratio) = 20 dB × log₁₀(voltage ratio).
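The 0.10875 factor and the resulting −9.6 dB offset follow directly from the 1 dB voltage ratio; this sketch reproduces the arithmetic:

```python
import math

# 1 dB compression: the output coefficient is 1/1.122 of its linear
# value, since 1 dB = 20*log10(1.122...).
ratio = 10 ** (-1 / 20)              # ≈ 0.8913 (voltage ratio for -1 dB)
factor = 1 - ratio                   # ≈ 0.10875
offset_db = 10 * math.log10(factor)  # power offset of P1dB below TOI

print(round(factor, 5), round(offset_db, 3))  # 0.10875 -9.636
```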
See also Intermodulation intercept point Second-order intercept point Notes The third-order intercept point is an extrapolated convergence – not directly measurable – of intermodulation distortion products in the desired output. It indicates how well a device (for example an amplifier) or a system (for example, a receiver) performs in the presence of strong signals. It is sometimes used (interchangeably with the 1 dB compression point) to define the upper limit of the dynamic range of an amplifier. Determination of a third-order intercept point of a superheterodyne receiver is accomplished by using two test frequencies that fall within the first intermediate frequency mixer passband. Usually, the test frequencies are about 20–30 kHz apart. The concept of intercept point has no meaning for strongly nonlinear systems, such as when an output signal is clipped due to limited supply voltage. References Further reading Frequency mixers Electronic amplifiers
Third-order intercept point
[ "Technology", "Engineering" ]
1,857
[ "Radio electronics", "Frequency mixers", "Electronic amplifiers", "Amplifiers" ]
41,795
https://en.wikipedia.org/wiki/Minimum%20spanning%20tree
A minimum spanning tree (MST) or minimum weight spanning tree is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight. That is, it is a spanning tree whose sum of edge weights is as small as possible. More generally, any edge-weighted undirected graph (not necessarily connected) has a minimum spanning forest, which is a union of the minimum spanning trees for its connected components. There are many use cases for minimum spanning trees. One example is a telecommunications company trying to lay cable in a new neighborhood. If it is constrained to bury the cable only along certain paths (e.g. roads), then there would be a graph containing the points (e.g. houses) connected by those paths. Some of the paths might be more expensive, because they are longer, or require the cable to be buried deeper; these paths would be represented by edges with larger weights. Currency is an acceptable unit for edge weight – there is no requirement for edge lengths to obey normal rules of geometry such as the triangle inequality. A spanning tree for that graph would be a subset of those paths that has no cycles but still connects every house; there might be several spanning trees possible. A minimum spanning tree would be one with the lowest total cost, representing the least expensive path for laying the cable. Properties Possible multiplicity If there are vertices in the graph, then each spanning tree has edges. There may be several minimum spanning trees of the same weight; in particular, if all the edge weights of a given graph are the same, then every spanning tree of that graph is minimum. Uniqueness If each edge has a distinct weight then there will be only one, unique minimum spanning tree. This is true in many realistic situations, such as the telecommunications company example above, where it's unlikely any two paths have exactly the same cost. 
This generalizes to spanning forests as well. Proof: Assume the contrary, that there are two different MSTs A and B. Since A and B differ despite containing the same nodes, there is at least one edge that belongs to one but not the other. Among such edges, let e₁ be the one with least weight; this choice is unique because the edge weights are all distinct. Without loss of generality, assume e₁ is in A. As B is an MST, {e₁} ∪ B must contain a cycle C with e₁. As a tree, A contains no cycles, therefore C must have an edge e₂ that is not in A. Since e₁ was chosen as the unique lowest-weight edge among those belonging to exactly one of A and B, the weight of e₂ must be greater than the weight of e₁. As e₁ and e₂ are part of the cycle C, replacing e₂ with e₁ in B therefore yields a spanning tree with a smaller weight. This contradicts the assumption that B is an MST. More generally, if the edge weights are not all distinct then only the (multi-)set of weights in minimum spanning trees is certain to be unique; it is the same for all minimum spanning trees. Minimum-cost subgraph If the weights are positive, then a minimum spanning tree is, in fact, a minimum-cost subgraph connecting all vertices, since if a subgraph contains a cycle, removing any edge along that cycle will decrease its cost and preserve connectivity. Cycle property For any cycle C in the graph, if the weight of an edge e of C is larger than any of the individual weights of all other edges of C, then this edge cannot belong to an MST. Proof: Assume the contrary, i.e. that e belongs to an MST T₁. Then deleting e will break T₁ into two subtrees with the two ends of e in different subtrees. The remainder of C reconnects the subtrees, hence there is an edge f of C with ends in different subtrees, i.e., it reconnects the subtrees into a tree T₂ with weight less than that of T₁, because the weight of f is less than the weight of e.
Cut property For any cut C of the graph, if the weight of an edge e in the cut-set of C is strictly smaller than the weights of all other edges of the cut-set of C, then this edge belongs to all MSTs of the graph. Proof: Assume that there is an MST T that does not contain e. Adding e to T will produce a cycle, which crosses the cut once at e and crosses back at another edge e′. Deleting e′ gives a spanning tree T ∖ {e′} ∪ {e} of strictly smaller weight than T. This contradicts the assumption that T was an MST. By a similar argument, if more than one edge is of minimum weight across a cut, then each such edge is contained in some minimum spanning tree. Minimum-cost edge If the minimum-cost edge e of a graph is unique, then this edge is included in any MST. Proof: if e were not included in the MST, removing any of the (larger-cost) edges in the cycle formed after adding e to the MST would yield a spanning tree of smaller weight. Contraction If T is a tree of MST edges, then we can contract T into a single vertex while maintaining the invariant that the MST of the contracted graph plus T gives the MST for the graph before contraction. Algorithms In all of the algorithms below, m is the number of edges in the graph and n is the number of vertices. Classic algorithms The first algorithm for finding a minimum spanning tree was developed by the Czech scientist Otakar Borůvka in 1926 (see Borůvka's algorithm). Its purpose was an efficient electrical coverage of Moravia. The algorithm proceeds in a sequence of stages. In each stage, called a Borůvka step, it identifies a forest F consisting of the minimum-weight edge incident to each vertex in the graph G, then forms the graph G₁ = G ∖ F as the input to the next step. Here G ∖ F denotes the graph derived from G by contracting the edges in F (by the Cut property, these edges belong to the MST). Each Borůvka step takes linear time. Since the number of vertices is reduced by at least half in each step, Borůvka's algorithm takes O(m log n) time.
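The Borůvka step is short enough to sketch directly; this is an illustrative Python version (names are mine), assuming a connected graph with distinct edge weights so the selected edges never form a cycle:

```python
def boruvka_mst(n, edges):
    """Borůvka's algorithm. edges: list of (weight, u, v), vertices 0..n-1.
    Assumes a connected graph with distinct weights; returns MST weight."""
    parent = list(range(n))  # union-find over contracted components

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, components = 0.0, n
    while components > 1:
        # One Borůvka step: cheapest outgoing edge of each component.
        cheapest = {}
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][0]:
                        cheapest[r] = (w, ru, rv)
        # Contract the selected forest into the union-find structure.
        for w, ru, rv in cheapest.values():
            if find(ru) != find(rv):
                parent[find(ru)] = find(rv)
                total += w
                components -= 1
    return total

edges = [(1, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3)]
print(boruvka_mst(4, edges))  # 6.0
```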
A second algorithm is Prim's algorithm, which was invented by Vojtěch Jarník in 1930 and rediscovered by Prim in 1957 and Dijkstra in 1959. Basically, it grows the MST (T) one edge at a time. Initially, T contains an arbitrary vertex. In each step, T is augmented with a least-weight edge (x, y) such that x is in T and y is not yet in T. By the Cut property, all edges added to T are in the MST. Its run-time is either O(m log n) or O(m + n log n), depending on the data structures used (a binary heap or a Fibonacci heap, respectively). A third algorithm commonly in use is Kruskal's algorithm, which also takes O(m log n) time. A fourth algorithm, not as commonly used, is the reverse-delete algorithm, which is the reverse of Kruskal's algorithm. Its runtime is O(m log n (log log n)³). All four of these are greedy algorithms. Since they run in polynomial time, the problem of finding such trees is in FP, and related decision problems such as determining whether a particular edge is in the MST or determining if the minimum total weight exceeds a certain value are in P. Faster algorithms Several researchers have tried to find more computationally-efficient algorithms. In a comparison model, in which the only allowed operations on edge weights are pairwise comparisons, Karger, Klein and Tarjan found a linear-time randomized algorithm based on a combination of Borůvka's algorithm and the reverse-delete algorithm. The fastest non-randomized comparison-based algorithm with known complexity, by Bernard Chazelle, is based on the soft heap, an approximate priority queue. Its running time is O(m α(m, n)), where α is the classical functional inverse of the Ackermann function. The function α grows extremely slowly, so that for all practical purposes it may be considered a constant no greater than 4; thus Chazelle's algorithm takes very close to linear time. Linear-time algorithms in special cases Dense graphs If the graph is dense (i.e. m/n ≥ log log log n), then a deterministic algorithm by Fredman and Tarjan finds the MST in time O(m). The algorithm executes a number of phases. Each phase executes Prim's algorithm many times, each for a limited number of steps.
The run-time of each phase is O(m + n). If the number of vertices before a phase is n′, the number of vertices remaining after a phase is at most 2m/2^(2m/n′). Hence, only a small number of phases are needed, which gives a linear run-time for dense graphs. There are other algorithms that work in linear time on dense graphs. Integer weights If the edge weights are integers represented in binary, then deterministic algorithms are known that solve the problem in O(m + n) integer operations. Whether the problem can be solved deterministically for a general graph in linear time by a comparison-based algorithm remains an open question. Decision trees Given a graph G where the nodes and edges are fixed but the weights are unknown, it is possible to construct a binary decision tree (DT) for calculating the MST for any permutation of weights. Each internal node of the DT contains a comparison between two edges, e.g. "Is the weight of the edge between x and y larger than the weight of the edge between w and z?". The two children of the node correspond to the two possible answers "yes" or "no". In each leaf of the DT, there is a list of edges from G that correspond to an MST. The runtime complexity of a DT is the largest number of queries required to find the MST, which is just the depth of the DT. A DT for a graph G is called optimal if it has the smallest depth of all correct DTs for G. For every integer r, it is possible to find optimal decision trees for all graphs on r vertices by brute-force search. This search proceeds in two steps. A. Generating all potential DTs There are 2^(r(r−1)/2) different graphs on r vertices. For each graph, an MST can always be found using at most r² comparisons, e.g. by Prim's algorithm. Hence, the depth of an optimal DT is less than r². Hence, the number of internal nodes in an optimal DT is less than 2^(r²). Every internal node compares two edges. The number of edges is at most r², so the number of different comparisons is at most r⁴. Hence, the number of potential DTs is less than (r⁴)^(2^(r²)). B.
Identifying the correct DTs To check if a DT is correct, it should be checked on all possible permutations of the edge weights. The number of such permutations is at most (r²)!. For each permutation, solve the MST problem on the given graph using any existing algorithm, and compare the result to the answer given by the DT. The running time of any MST algorithm is at most O(r²), so the total time required to check all permutations is at most (r²)! · O(r²). Hence, the total time required for finding an optimal DT for all graphs with r vertices is 2^(r(r−1)/2) · (r⁴)^(2^(r²)) · (r²)! · O(r²), which is less than 2^(2^(r² + o(r²))). Optimal algorithm Seth Pettie and Vijaya Ramachandran have found a provably optimal deterministic comparison-based minimum spanning tree algorithm. The following is a simplified description of the algorithm. Let r = log log log n, where n is the number of vertices. Find all optimal decision trees on r vertices. This can be done in time O(n) (see Decision trees above). Partition the graph into components with at most r vertices in each component. This partition uses a soft heap, which "corrupts" a small number of the edges of the graph. Use the optimal decision trees to find an MST for the uncorrupted subgraph within each component. Contract each connected component spanned by the MSTs to a single vertex, and apply any algorithm which works on dense graphs in time O(m) to the contraction of the uncorrupted subgraph. Add back the corrupted edges to the resulting forest to form a subgraph guaranteed to contain the minimum spanning tree, and smaller by a constant factor than the starting graph. Apply the optimal algorithm recursively to this graph. The runtime of all steps in the algorithm is O(m), except for the step of using the decision trees. The runtime of this step is unknown, but it has been proved that it is optimal: no algorithm can do better than the optimal decision tree. Thus, this algorithm has the peculiar property that it is provably optimal although its runtime complexity is unknown.
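The greedy template shared by the classic comparison-based algorithms above is easiest to see in code; here is an illustrative Kruskal implementation (helper names are mine), whose O(m log n) cost is dominated by the sort:

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: scan edges in increasing weight order and
    keep each edge that joins two different components.
    edges: list of (weight, u, v). Returns (total weight, chosen edges)."""
    parent = list(range(n))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):      # O(m log m) = O(m log n)
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge creates no cycle
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen

edges = [(1, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3)]
print(kruskal_mst(4, edges))  # (6, [(0, 1), (2, 3), (0, 2)])
```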
Parallel and distributed algorithms Research has also considered parallel algorithms for the minimum spanning tree problem. With a linear number of processors it is possible to solve the problem in O(log n) time. The problem can also be approached in a distributed manner. If each node is considered a computer and no node knows anything except its own connected links, one can still calculate the distributed minimum spanning tree. MST on complete graphs with random weights Alan M. Frieze showed that given a complete graph on n vertices, with edge weights that are independent identically distributed random variables with distribution function F satisfying F′(0) > 0, then as n approaches +∞ the expected weight of the MST approaches ζ(3)/F′(0), where ζ is the Riemann zeta function (more specifically, ζ(3) is Apéry's constant). Frieze and Steele also proved convergence in probability. Svante Janson proved a central limit theorem for the weight of the MST. For uniform random weights in [0, 1], the exact expected size of the minimum spanning tree has been computed for small complete graphs. Fractional variant There is a fractional variant of the MST, in which each edge is allowed to appear "fractionally". Formally, a fractional spanning set of a graph (V, E) is a nonnegative function f on E such that, for every non-trivial subset W of V (i.e., W is neither empty nor equal to V), the sum of f(e) over all edges connecting a node of W with a node of V∖W is at least 1. Intuitively, f(e) represents the fraction of e that is contained in the spanning set. A minimum fractional spanning set is a fractional spanning set for which the sum is as small as possible. If the fractions f(e) are forced to be in {0, 1}, then the set T of edges with f(e) = 1 is a spanning set, as every node or subset of nodes is connected to the rest of the graph by at least one edge of T.
Moreover, if f is minimized, then the resulting spanning set is necessarily a tree, since if it contained a cycle, then an edge could be removed without affecting the spanning condition. So the minimum fractional spanning set problem is a relaxation of the MST problem, and can also be called the fractional MST problem. The fractional MST problem can be solved in polynomial time using the ellipsoid method. However, if we add a requirement that f(e) must be half-integer (that is, f(e) must be in {0, 1/2, 1}), then the problem becomes NP-hard, since it includes as a special case the Hamiltonian cycle problem: in an n-vertex unweighted graph, a half-integer MST of weight n/2 can only be obtained by assigning weight 1/2 to each edge of a Hamiltonian cycle. Other variants The Steiner tree of a subset of the vertices is the minimum tree that spans the given subset. Finding the Steiner tree is NP-complete. The k-minimum spanning tree (k-MST) is the tree that spans some subset of k vertices in the graph with minimum weight. A set of k-smallest spanning trees is a subset of k spanning trees (out of all possible spanning trees) such that no spanning tree outside the subset has smaller weight. (Note that this problem is unrelated to the k-minimum spanning tree.) The Euclidean minimum spanning tree is a spanning tree of a graph with edge weights corresponding to the Euclidean distance between vertices which are points in the plane (or space). The rectilinear minimum spanning tree is a spanning tree of a graph with edge weights corresponding to the rectilinear distance between vertices which are points in the plane (or space). The distributed minimum spanning tree is an extension of MST to the distributed model, where each node is considered a computer and no node knows anything except its own connected links. The mathematical definition of the problem is the same but there are different approaches for a solution.
The capacitated minimum spanning tree is a tree that has a marked node (origin, or root) and each of the subtrees attached to the node contains no more than c nodes. c is called a tree capacity. Solving CMST optimally is NP-hard, but good heuristics such as Esau–Williams and Sharma produce solutions close to optimal in polynomial time. The degree-constrained minimum spanning tree is an MST in which each vertex is connected to no more than d other vertices, for some given number d. The case d = 2 is a special case of the traveling salesman problem, so the degree-constrained minimum spanning tree is NP-hard in general. An arborescence is a variant of MST for directed graphs. It can be solved in O(E + V log V) time using the Chu–Liu/Edmonds algorithm. A maximum spanning tree is a spanning tree with weight greater than or equal to the weight of every other spanning tree. Such a tree can be found with algorithms such as Prim's or Kruskal's after multiplying the edge weights by −1 and solving the MST problem on the new graph. A path in the maximum spanning tree is the widest path in the graph between its two endpoints: among all possible paths, it maximizes the weight of the minimum-weight edge. Maximum spanning trees find applications in parsing algorithms for natural languages and in training algorithms for conditional random fields. The dynamic MST problem concerns the update of a previously computed MST after an edge weight change in the original graph or the insertion/deletion of a vertex. The minimum labeling spanning tree problem is to find a spanning tree with the fewest types of labels if each edge in a graph is associated with a label from a finite label set instead of a weight. A bottleneck edge is the highest-weighted edge in a spanning tree. A spanning tree is a minimum bottleneck spanning tree (or MBST) if the graph does not contain a spanning tree with a smaller bottleneck edge weight. An MST is necessarily an MBST (by the cut property), but an MBST is not necessarily an MST.
A minimum-cost spanning tree game is a cooperative game in which the players have to share among them the costs of constructing the optimal spanning tree. The optimal network design problem is the problem of computing a set, subject to a budget constraint, which contains a spanning tree, such that the sum of shortest paths between every pair of nodes is as small as possible. Applications Minimum spanning trees have direct applications in the design of networks, including computer networks, telecommunications networks, transportation networks, water supply networks, and electrical grids (which they were first invented for, as mentioned above). They are invoked as subroutines in algorithms for other problems, including the Christofides algorithm for approximating the traveling salesman problem, approximating the multi-terminal minimum cut problem (which is equivalent in the single-terminal case to the maximum flow problem), and approximating the minimum-cost weighted perfect matching. Other practical applications based on minimal spanning trees include: Taxonomy. Cluster analysis: clustering points in the plane, single-linkage clustering (a method of hierarchical clustering), graph-theoretic clustering, and clustering gene expression data. Constructing trees for broadcasting in computer networks. Image registration and segmentation – see minimum spanning tree-based segmentation. Curvilinear feature extraction in computer vision. Handwriting recognition of mathematical expressions. Circuit design: implementing efficient multiple constant multiplications, as used in finite impulse response filters. Regionalisation of socio-geographic areas, the grouping of areas into homogeneous, contiguous regions. Comparing ecotoxicology data. Topological observability in power systems. Measuring homogeneity of two-dimensional materials. Minimax process control. Minimum spanning trees can also be used to describe financial markets. 
A correlation matrix can be created by calculating a coefficient of correlation between any two stocks. This matrix can be represented topologically as a complex network and a minimum spanning tree can be constructed to visualize relationships. References Further reading Otakar Boruvka on Minimum Spanning Tree Problem (translation of both 1926 papers, comments, history) (2000) Jaroslav Nešetřil, Eva Milková, Helena Nesetrilová. (Section 7 gives his algorithm, which looks like a cross between Prim's and Kruskal's.) Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. . Chapter 23: Minimum Spanning Trees, pp. 561–579. Eisner, Jason (1997). State-of-the-art algorithms for minimum spanning trees: A tutorial discussion. Manuscript, University of Pennsylvania, April. 78 pp. Kromkowski, John David. "Still Unmelted after All These Years", in Annual Editions, Race and Ethnic Relations, 17/e (2009 McGraw Hill) (Using minimum spanning tree as method of demographic analysis of ethnic diversity across the United States). External links Implemented in BGL, the Boost Graph Library The Stony Brook Algorithm Repository - Minimum Spanning Tree codes Implemented in QuickGraph for .Net Spanning tree Polynomial-time problems
Minimum spanning tree
[ "Mathematics" ]
4,385
[ "Mathematical problems", "Computational problems", "Polynomial-time problems" ]