There is a continuum with three semi-distinct categories dealing with anthropogenic hybridization: hybridization without introgression, hybridization with widespread introgression (backcrossing with one of the parent species), and hybrid swarms (highly variable populations with much interbreeding as well as backcrossing with the parent species). Where a population falls along this continuum determines how it is managed. Hybridization is currently an area of great discussion within wildlife management and habitat management. Global climate change is driving other changes, such as shifts in population distributions, which indirectly increase anthropogenic hybridization.
Conservationists disagree on when to give up on a population that is becoming a hybrid swarm and when to try to save the remaining pure individuals. Once a population becomes a complete mixture, the goal becomes to conserve those hybrids to avoid their loss. Conservationists treat each case on its merits, depending on whether hybrids can be detected within the population. A uniform hybridization policy is nearly impossible to formulate, because hybridization can occur beneficially when it occurs naturally, and because hybrid swarms that are the only remaining evidence of prior species also need to be conserved.
Genetic mixing and extinction
Regionally developed ecotypes can be threatened with extinction when new alleles or genes are introduced that alter that ecotype. This is sometimes called genetic mixing. Hybridization and introgression of new genetic material, which can happen in both natural and hybrid populations, can lead to the replacement of local genotypes if the hybrids are more fit and have breeding advantages over the indigenous ecotype or species. These hybridization events can result from the introduction of non-native genotypes by humans or through habitat modification bringing previously isolated species into contact. Genetic mixing can be especially detrimental for rare species in isolated habitats, ultimately affecting the population to such a degree that none of the originally genetically distinct population remains.
[Source: Hybrid (biology), Wikipedia, https://en.wikipedia.org/wiki/Hybrid%20%28biology%29]

Effect on biodiversity and food security
In agriculture and animal husbandry, the Green Revolution's use of conventional hybridization increased yields by breeding high-yielding varieties. The replacement of locally indigenous breeds, compounded with unintentional cross-pollination and crossbreeding (genetic mixing), has reduced the gene pools of various wild and indigenous breeds resulting in the loss of genetic diversity. Since the indigenous breeds are often well-adapted to local extremes in climate and have immunity to local pathogens, this can be a significant genetic erosion of the gene pool for future breeding. Therefore, commercial plant geneticists strive to breed "widely adapted" cultivars to counteract this tendency.
Different taxa
In animals
Mammals
Familiar examples of equid hybrids are the mule, a cross between a female horse and a male donkey, and the hinny, a cross between a female donkey and a male horse. Pairs of complementary types like the mule and hinny are called reciprocal hybrids. Polar bears and brown bears are another hybridizing species pair, and introgression among non-sister species of bears appears to have shaped the Ursidae family tree. Among many other mammal crosses are hybrid camels, crosses between a Bactrian camel and a dromedary. There are many examples of felid hybrids, including the liger. The oldest-known animal hybrid bred by humans is the kunga, an equid hybrid produced as a draft animal and status symbol 4,500 years ago in Umm el-Marra, present-day Syria.
The first known instance of hybrid speciation in marine mammals was discovered in 2014. The clymene dolphin (Stenella clymene) is a hybrid of two Atlantic species, the spinner and striped dolphins. In 2019, scientists confirmed that a skull found 30 years earlier was a hybrid between the beluga whale and narwhal, dubbed the narluga.
Birds
Hybridization between species is common in birds. Hybrid birds are purposefully bred by humans, but hybridization is also common in the wild. Waterfowl have a particularly high incidence of hybridization, with at least 60% of species known to produce hybrids with another species. Among ducks, mallards widely hybridize with many other species, and the genetic relationships between ducks are further complicated by the widespread gene flow between wild and domestic mallards.
One of the most common interspecific hybrids in geese occurs between Greylag and Canada geese (Anser anser × Branta canadensis). One potential mechanism for the occurrence of hybrids in these geese is interspecific nest parasitism, where an egg is laid in the nest of another species to be raised by non-biological parents. The chick imprints upon and eventually seeks a mate among the species that raised it, instead of the species of its biological parents.
Cagebird breeders sometimes breed bird hybrids known as mules between species of finch, such as goldfinch × canary.
Amphibians
Among amphibians, Japanese giant salamanders and Chinese giant salamanders have created hybrids that threaten the survival of Japanese giant salamanders because of competition for similar resources in Japan.
Fish
Among fish, a group of about 50 natural hybrids between Australian blacktip shark and the larger common blacktip shark was found by Australia's eastern coast in 2012.
Russian sturgeon and American paddlefish were hybridized in captivity when sperm from the paddlefish and eggs from the sturgeon were combined, unexpectedly resulting in viable offspring. This hybrid is called a sturddlefish.
Cephalochordates
The two genera Asymmetron and Branchiostoma can produce viable hybrid offspring, though none have so far survived to adulthood, even though the parents' last common ancestor lived tens of millions of years ago.
Insects
Among insects, so-called killer bees were accidentally created during an attempt to breed a strain of bees that would both produce more honey and be better adapted to tropical conditions, by crossing a European honey bee with an African bee.
The Colias eurytheme and C. philodice butterflies have retained enough genetic compatibility to produce viable hybrid offspring. Hybrid speciation may have produced the diverse Heliconius butterflies, but that is disputed.
The two closely related harvester ant species Pogonomyrmex barbatus and Pogonomyrmex rugosus have evolved to depend on hybridization. When a queen fertilizes her eggs with sperm from males of her own species, the offspring are always new queens; when she fertilizes them with sperm from males of the other species, the offspring are always sterile worker ants (and, because ants are haplodiploid, unfertilized eggs become males). Without mating with males of the other species, a queen cannot produce workers and will fail to establish a colony of her own.
In plants
Plant species hybridize more readily than animal species, and the resulting hybrids are fertile more often. Many plant species are the result of hybridization, combined with polyploidy, which duplicates the chromosomes. Chromosome duplication allows orderly meiosis and so viable seed can be produced.
Plant hybrids are generally given names that include an "×" (not in italics), such as Platanus × hispanica for the London plane, a natural hybrid of P. orientalis (oriental plane) and P. occidentalis (American sycamore). The parents' names may be kept in their entirety, as seen in Prunus persica × Prunus americana, with the female parent's name given first or, if not known, the parents' names given alphabetically.
Plant species that are genetically compatible may not hybridize in nature for various reasons, including geographical isolation, differences in flowering period, or differences in pollinators. Species that are brought together by humans in gardens may hybridize naturally, or hybridization can be facilitated by human efforts, such as altered flowering period or artificial pollination. Hybrids are sometimes created by humans to produce improved plants that have some of the characteristics of each of the parent species. Much work is now being done with hybrids between crops and their wild relatives to improve disease resistance or climate resilience for both agricultural and horticultural crops.
Some crop plants are hybrids from different genera (intergeneric hybrids), such as Triticale, × Triticosecale, a wheat–rye hybrid. Most modern and ancient wheat breeds are themselves hybrids; bread wheat, Triticum aestivum, is a hexaploid hybrid of three wild grasses. Several commercial fruits including loganberry (Rubus × loganobaccus) and grapefruit (Citrus × paradisi) are hybrids, as are garden herbs such as peppermint (Mentha × piperita), and trees such as the London plane (Platanus × hispanica). Among many natural plant hybrids is Iris albicans, a sterile hybrid that spreads by rhizome division, and Oenothera lamarckiana, a flower that was the subject of important experiments by Hugo de Vries that produced an understanding of polyploidy.
Sterility in a non-polyploid hybrid is often a result of chromosome number; if parents are of differing chromosome pair number, the offspring will have an odd number of chromosomes, which leaves them unable to produce chromosomally balanced gametes. While that is undesirable in a crop such as wheat, for which growing a crop that produces no seeds would be pointless, it is an attractive attribute in some fruits. Triploid bananas and watermelons are intentionally bred because they produce no seeds and are also parthenocarpic.
In fungi
Hybridization between fungal species is common and well established, particularly in yeast. Yeast hybrids are widely found and used in human-related activities, such as brewing and winemaking. The production of lager beer, for instance, is known to be carried out by the yeast Saccharomyces pastorianus, a cryotolerant hybrid between Saccharomyces cerevisiae and Saccharomyces eubayanus that allows fermentation at low temperatures.
In humans
There is evidence of hybridization between modern humans and other species of the genus Homo. In 2010, the Neanderthal genome project showed that 1–4% of the DNA of all people living today, apart from most Sub-Saharan Africans, is of Neanderthal heritage. Analysis of the genomes of 600 Europeans and East Asians found that, combined, they covered 20% of the Neanderthal genome present in the modern human population. Ancient human populations lived and interbred with Neanderthals, Denisovans, and at least one other extinct Homo species; Neanderthal and Denisovan DNA has thus been incorporated into human DNA by introgression.
In 1998, a complete prehistoric skeleton found in Portugal, the Lapedo child, had features of both anatomically modern humans and Neanderthals. Some ancient human skulls with especially large nasal cavities and unusually shaped braincases have been interpreted as human-Neanderthal hybrids. A 37,000- to 42,000-year-old human jawbone found in Romania's Oase cave contains traces of Neanderthal ancestry from only four to six generations earlier. All genes from Neanderthals in the current human population are descended from Neanderthal fathers and human mothers.
Mythology
Folk tales and myths sometimes contain mythological hybrids; the Minotaur was the offspring of a human, Pasiphaë, and a white bull. More often, they are composites of the physical attributes of two or more kinds of animals, mythical beasts, and humans, with no suggestion that they are the result of interbreeding, as in the centaur (man/horse), chimera (goat/lion/snake), hippocamp (fish/horse), and sphinx (woman/lion). The Old Testament mentions a first generation of half-human hybrid giants, the Nephilim, while the apocryphal Book of Enoch describes the Nephilim as the wicked sons of fallen angels and attractive women.
In electrodynamics, linear polarization or plane polarization of electromagnetic radiation is a confinement of the electric field vector or magnetic field vector to a given plane along the direction of propagation. The term linear polarization (French: polarisation rectiligne) was coined by Augustin-Jean Fresnel in 1822. See polarization and plane of polarization for more information.
The orientation of a linearly polarized electromagnetic wave is defined by the direction of the electric field vector. For example, if the electric field vector is vertical (alternately up and down as the wave travels) the radiation is said to be vertically polarized.
Mathematical description
The classical sinusoidal plane wave solution of the electromagnetic wave equation for the electric and magnetic fields is (cgs units)

    E(r, t) = |E| Re{ |ψ⟩ exp[i(kz − ωt)] }

    B(r, t) = ẑ × E(r, t)

for the magnetic field, where k is the wavenumber, ω = ck is the angular frequency of the wave, and c is the speed of light.

Here |E| is the amplitude of the field and

    |ψ⟩ = (ψx, ψy) = (cos θ exp(iαx), sin θ exp(iαy))

is the Jones vector in the x-y plane.

The wave is linearly polarized when the phase angles αx, αy are equal,

    αx = αy = α.

This represents a wave polarized at an angle θ with respect to the x axis. In that case, the Jones vector can be written

    |ψ⟩ = (cos θ, sin θ) exp(iα).

The state vectors for linear polarization in x or y are special cases of this state vector.

If unit vectors are defined such that

    |x⟩ = (1, 0) and |y⟩ = (0, 1)

then the polarization state can be written in the "x-y basis" as

    |ψ⟩ = cos θ exp(iα) |x⟩ + sin θ exp(iα) |y⟩ = ψx |x⟩ + ψy |y⟩.

[Source: Linear polarization, Wikipedia, https://en.wikipedia.org/wiki/Linear%20polarization]
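The linear-polarization condition above (equal phase angles) can be checked numerically with Python's complex numbers. This is a minimal sketch; the function names, tolerance, and test angles are illustrative, not part of any standard library.

```python
import cmath
import math

def jones_vector(theta, alpha_x, alpha_y):
    """Jones vector (psi_x, psi_y) for amplitude angle theta and phase angles."""
    return (math.cos(theta) * cmath.exp(1j * alpha_x),
            math.sin(theta) * cmath.exp(1j * alpha_y))

def is_linear(psi, tol=1e-12):
    """Linearly polarized when the phase angles are equal, i.e. psi_y / psi_x is real."""
    px, py = psi
    if abs(px) < tol or abs(py) < tol:
        return True  # polarization along x or y alone is trivially linear
    return abs((py / px).imag) < tol

# Equal phases: linear polarization at angle theta to the x axis.
assert is_linear(jones_vector(math.radians(30), 0.5, 0.5))
# A quarter-wave phase offset between components: elliptical, not linear.
assert not is_linear(jones_vector(math.radians(30), 0.0, math.pi / 2))
```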
In telecommunications and computer networking, multiplexing (sometimes contracted to muxing) is a method by which multiple analog or digital signals are combined into one signal over a shared medium. The aim is to share a scarce resource, such as a physical transmission medium. For example, in telecommunications, several telephone calls may be carried using one wire. Multiplexing originated in telegraphy in the 1870s and is now widely applied in communications. In telephony, George Owen Squier is credited with the development of telephone carrier multiplexing in 1910.
The multiplexed signal is transmitted over a communication channel such as a cable. The multiplexing divides the capacity of the communication channel into several logical channels, one for each message signal or data stream to be transferred. A reverse process, known as demultiplexing, extracts the original channels on the receiver end.
A device that performs the multiplexing is called a multiplexer (MUX), and a device that performs the reverse process is called a demultiplexer (DEMUX or DMX).
Inverse multiplexing (IMUX) has the opposite aim as multiplexing, namely to break one data stream into several streams, transfer them simultaneously over several communication channels, and recreate the original data stream.
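As a sketch of the idea, inverse multiplexing amounts to dealing a byte stream round-robin across n lower-rate channels and re-interleaving the pieces on the far end. The function names here are illustrative.

```python
from itertools import zip_longest

def imux_split(data, n):
    # Deal bytes round-robin onto n lower-rate channel streams.
    return [data[i::n] for i in range(n)]

def imux_join(streams):
    # Interleave the channel streams back into the original byte order.
    out = bytearray()
    for group in zip_longest(*streams):
        for b in group:
            if b is not None:
                out.append(b)
    return bytes(out)

parts = imux_split(b"INVERSE MULTIPLEXING", 3)
assert imux_join(parts) == b"INVERSE MULTIPLEXING"
```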
In computing, I/O multiplexing can also be used to refer to the concept of processing multiple input/output events from a single event loop, with system calls like poll and select (Unix).
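A minimal sketch of such an event loop, using Python's select module on two local socket pairs; the message contents are placeholders.

```python
import select
import socket

# One event loop watching two connections at once via select().
a1, a2 = socket.socketpair()
b1, b2 = socket.socketpair()
a2.sendall(b"from a")
b2.sendall(b"from b")

# select() returns only the sockets that are ready, so a single thread
# can service both without blocking on either one.
readable, _, _ = select.select([a1, b1], [], [], 1.0)
messages = {s.recv(64) for s in readable}
assert messages == {b"from a", b"from b"}

for s in (a1, a2, b1, b2):
    s.close()
```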
Types
Multiple variable bit rate digital bit streams may be transferred efficiently over a single fixed-bandwidth channel by means of statistical multiplexing. This is an asynchronous mode of time-domain multiplexing, a form of time-division multiplexing.
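Statistical multiplexing can be sketched as forwarding packets only from channels that currently have data, tagging each packet with its channel id so the receiver can sort them back out. The framing here is an assumed simplification, not any particular protocol.

```python
def stat_mux(queues):
    # Forward whatever is ready; idle channels consume no capacity.
    frames = []
    while any(queues.values()):
        for ch, q in queues.items():
            if q:
                frames.append((ch, q.pop(0)))  # tag payload with channel id
    return frames

def stat_demux(frames):
    # Sort tagged frames back into per-channel streams.
    out = {}
    for ch, payload in frames:
        out.setdefault(ch, []).append(payload)
    return out

queues = {"video": ["v0", "v1", "v2"], "audio": ["a0"]}
frames = stat_mux({ch: list(q) for ch, q in queues.items()})
assert stat_demux(frames) == queues
```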
Digital bit streams can be transferred over an analog channel by means of code-division multiplexing techniques such as frequency-hopping spread spectrum (FHSS) and direct-sequence spread spectrum (DSSS).
In wireless communications, multiplexing can also be accomplished through alternating polarization (horizontal/vertical or clockwise/counterclockwise) on each adjacent channel and satellite, or through phased multi-antenna array combined with a multiple-input multiple-output communications (MIMO) scheme.
[Source: Multiplexing, Wikipedia, https://en.wikipedia.org/wiki/Multiplexing]

Space-division multiplexing
In wired communication, space-division multiplexing, also known as space-division multiple access (SDMA) is the use of separate point-to-point electrical conductors for each transmitted channel. Examples include an analog stereo audio cable, with one pair of wires for the left channel and another for the right channel, and a multi-pair telephone cable, a switched star network such as a telephone access network, a switched Ethernet network, and a mesh network.
In wireless communication, space-division multiplexing is achieved with multiple antenna elements forming a phased array antenna. Examples are multiple-input and multiple-output (MIMO), single-input and multiple-output (SIMO) and multiple-input and single-output (MISO) multiplexing. An IEEE 802.11g wireless router with k antennas makes it in principle possible to communicate with k multiplexed channels, each with a peak bit rate of 54 Mbit/s, thus increasing the total peak bit rate by the factor k. Different antennas would give different multi-path propagation (echo) signatures, making it possible for digital signal processing techniques to separate different signals from each other. These techniques may also be utilized for space diversity (improved robustness to fading) or beamforming (improved selectivity) rather than multiplexing.
Frequency-division multiplexing
Frequency-division multiplexing (FDM) is inherently an analog technology. FDM achieves the combining of several signals into one medium by sending signals in several distinct frequency ranges over a single medium. In FDM the signals are electrical signals.
One of the most common applications for FDM is traditional radio and television broadcasting from terrestrial, mobile or satellite stations, or cable television. Only one cable reaches a customer's residential area, but the service provider can send multiple television channels or signals simultaneously over that cable to all subscribers without interference. Receivers must tune to the appropriate frequency (channel) to access the desired signal.
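The FDM principle can be illustrated in pure Python: two channels keyed onto different carrier frequencies share one summed signal, and a receiver "tunes" to one channel by correlating against that channel's carrier (a one-bin DFT). The sample rate, window length, and carrier frequencies are arbitrary illustrative choices.

```python
import math

RATE = 8000  # samples per second (assumed)
N = 800      # analysis window length in samples

def tone(freq, on):
    # One channel's contribution: a carrier that is on or off.
    return [(math.sin(2 * math.pi * freq * t / RATE) if on else 0.0)
            for t in range(N)]

def power_at(signal, freq):
    # Correlate against the carrier (a one-bin DFT) to "tune" to one channel.
    re = sum(s * math.cos(2 * math.pi * freq * t / RATE) for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * t / RATE) for t, s in enumerate(signal))
    return (re * re + im * im) / N

F1, F2 = 500, 1200  # channel carrier frequencies in Hz (illustrative)
# Channel 1 sends a "1", channel 2 sends a "0"; both share the same medium.
medium = [x + y for x, y in zip(tone(F1, True), tone(F2, False))]

assert power_at(medium, F1) > 10 * power_at(medium, F2)
```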
A variant technology, called wavelength-division multiplexing (WDM) is used in optical communications.
Time-division multiplexing
Time-division multiplexing (TDM) is a digital (or in rare cases, analog) technology that uses time, instead of space or frequency, to separate the different data streams. TDM involves sequencing groups of a few bits or bytes from each individual input stream, one after the other, and in such a way that they can be associated with the appropriate receiver. If done sufficiently quickly, the receiving devices will not detect that some of the circuit time was used to serve another logical communication path.
Consider an application requiring four terminals at an airport to reach a central computer. Each terminal communicates at 2400 baud, so rather than acquiring four individual circuits to carry such low-speed transmissions, the airline has installed a pair of multiplexers, a pair of 9600 baud modems, and one dedicated analog communications circuit from the airport ticket desk back to the airline data center. Some web proxy servers (e.g. Polipo) use TDM in HTTP pipelining of multiple HTTP transactions onto the same TCP/IP connection.
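The airport scenario reduces to interleaving fixed time slots from each input onto the shared circuit. A minimal sketch; the slot size, padding byte, and function names are assumptions, and the padding-strip step would need explicit framing in a real system.

```python
def tdm_mux(streams, slot=1):
    """Interleave fixed-size time slots from each input stream onto one link."""
    frames = bytearray()
    n = max(len(s) for s in streams)
    for i in range(0, n, slot):
        for s in streams:
            # Pad idle slots so every stream keeps its fixed position.
            frames += s[i:i + slot].ljust(slot, b"\x00")
    return bytes(frames)

def tdm_demux(frames, n_streams, slot=1):
    """Reassign each time slot back to its stream purely by position."""
    out = [bytearray() for _ in range(n_streams)]
    for i in range(0, len(frames), slot):
        out[(i // slot) % n_streams] += frames[i:i + slot]
    # Dropping pad bytes stands in for real framing here.
    return [bytes(b).rstrip(b"\x00") for b in out]

terminals = [b"GATE4", b"BAG12", b"SEAT7", b"CREW9"]  # four equal-rate inputs
link = tdm_mux(terminals)
assert tdm_demux(link, 4) == terminals
```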
Carrier-sense multiple access and multidrop communication methods are similar to time-division multiplexing in that multiple data streams are separated by time on the same medium, but because the signals have separate origins instead of being combined into a single signal, they are best viewed as channel access methods rather than a form of multiplexing.
TDM is a legacy multiplexing technology that still provides the backbone of most national fixed-line telephony networks in Europe, providing the 2 Mbit/s voice and signaling ports on narrow-band telephone exchanges such as the DMS100. Each E1 or 2 Mbit/s TDM port provides either 30 or 31 speech timeslots in the case of CCITT7 signaling systems, and 30 voice channels for customer-connected Q931, DASS2, DPNSS, V5 and CASS signaling systems.
Polarization-division multiplexing
Polarization-division multiplexing uses the polarization of electromagnetic radiation to separate orthogonal channels. It is in practical use in both radio and optical communications, particularly in 100 Gbit/s per channel fiber-optic transmission systems.
Differential Cross-Polarized Wireless Communications is a novel method for polarized antenna transmission utilizing a differential technique.
Orbital angular momentum multiplexing
Orbital angular momentum multiplexing is a relatively new and experimental technique for multiplexing multiple channels of signals carried using electromagnetic radiation over a single path. It can potentially be used in addition to other physical multiplexing methods to greatly expand the transmission capacity of such systems. It is still in its early research phase, with small-scale laboratory demonstrations of bandwidths of up to 2.5 Tbit/s over a single light path. This is a controversial subject in the academic community, with many claiming it is not a new method of multiplexing but rather a special case of space-division multiplexing.
Code-division multiplexing
Code-division multiplexing (CDM), code-division multiple access (CDMA) or spread spectrum is a class of techniques where several channels simultaneously share the same frequency spectrum, and this spectral bandwidth is much higher than the bit rate or symbol rate. One form is frequency hopping; another is direct-sequence spread spectrum. In the latter case, each channel transmits its bits as a coded channel-specific sequence of pulses called chips. The number of chips per bit (or chips per symbol) is the spreading factor. This coded transmission typically is accomplished by transmitting a unique time-dependent series of short pulses, which are placed within chip times within the larger bit time. All channels, each with a different code, can be transmitted on the same fiber or radio channel or other medium, and asynchronously demultiplexed. Advantages over conventional techniques are that variable bandwidth is possible (just as in statistical multiplexing), that the wide bandwidth allows operation at a poor signal-to-noise ratio according to the Shannon–Hartley theorem, and that multi-path propagation in wireless communication can be combated by rake receivers.
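A minimal synchronous CDM sketch using length-4 Walsh codes, which are mutually orthogonal; the channel names are illustrative, and real systems use far longer codes plus synchronization and power control.

```python
# Each channel spreads its data bit over a channel-specific chip sequence.
CODES = {
    "ch0": [+1, +1, +1, +1],
    "ch1": [+1, -1, +1, -1],
}

def spread(bit, code):
    # Map bit {0,1} -> level {-1,+1} and multiply onto the chip sequence.
    level = 1 if bit else -1
    return [level * c for c in code]

def transmit(bits_by_channel):
    # All channels share the medium: their chip streams simply add.
    chips = [spread(b, CODES[ch]) for ch, b in bits_by_channel.items()]
    return [sum(col) for col in zip(*chips)]

def despread(signal, code):
    # Correlate with one channel's code; orthogonality rejects the others.
    corr = sum(s * c for s, c in zip(signal, code))
    return 1 if corr > 0 else 0

medium = transmit({"ch0": 1, "ch1": 0})
assert despread(medium, CODES["ch0"]) == 1
assert despread(medium, CODES["ch1"]) == 0
```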
A significant application of CDMA is the Global Positioning System (GPS).
Multiple access method
A multiplexing technique may be further extended into a multiple access method or channel access method, for example, TDM into time-division multiple access (TDMA) and statistical multiplexing into carrier-sense multiple access (CSMA). A multiple-access method makes it possible for several transmitters connected to the same physical medium to share their capacity.
Multiplexing is provided by the physical layer of the OSI model, while multiple access also involves a media access control protocol, which is part of the data link layer.
The transport layer in the OSI model, as well as in the TCP/IP model, provides statistical multiplexing of several application-layer data flows to/from the same computer.
Code-division multiplexing (CDM) is a technique in which each channel transmits its bits as a coded channel-specific sequence of pulses. This coded transmission is typically accomplished by transmitting a unique time-dependent series of short pulses, which are placed within chip times within the larger bit time. All channels, each with a different code, can be transmitted on the same fiber and asynchronously demultiplexed. Other widely used multiple access techniques are time-division multiple access (TDMA) and frequency-division multiple access (FDMA).
Code-division multiplex techniques are used as an access technology, namely code-division multiple access (CDMA), in Universal Mobile Telecommunications System (UMTS) standard for the third-generation (3G) mobile communication identified by the ITU.
Application areas
Telegraphy
The earliest communication technology using electrical wires, and therefore sharing an interest in the economies afforded by multiplexing, was the electric telegraph. Early experiments allowed two separate messages to travel in opposite directions simultaneously, first using an electric battery at both ends, then at only one end.
Émile Baudot developed a time-multiplexing system of multiple Hughes machines in the 1870s. In 1874, the quadruplex telegraph developed by Thomas Edison transmitted two messages in each direction simultaneously, for a total of four messages transiting the same wire at the same time. Several researchers were investigating acoustic telegraphy, a frequency-division multiplexing technique, which led to the invention of the telephone.
Telephony
In telephony, a customer's telephone line now typically ends at the remote concentrator box, where it is multiplexed along with other telephone lines for that neighborhood or other similar area. The multiplexed signal is then carried to the central switching office on significantly fewer wires and over much greater distances than a customer's line can practically span. This is likewise true for digital subscriber lines (DSL).
Fiber in the loop (FITL) is a common method of multiplexing, which uses optical fiber as the backbone. It not only connects POTS phone lines with the rest of the PSTN, but also replaces DSL by connecting directly to Ethernet wired into the home. Asynchronous Transfer Mode is often the communications protocol used.
Cable TV has long carried multiplexed television channels, and late in the 20th century began offering the same services as telephone companies. IPTV also depends on multiplexing.
Video processing
In video editing and processing systems, multiplexing refers to the process of interleaving audio and video into one coherent data stream.
In digital video, such a transport stream is normally a feature of a container format which may include metadata and other information, such as subtitles. The audio and video streams may have variable bit rate. Software that produces such a transport stream and/or container is commonly called a multiplexer or muxer. A demuxer is software that extracts or otherwise makes available for separate processing the components of such a stream or container.
Digital broadcasting
In digital television systems, several variable bit-rate data streams are multiplexed together to a fixed bit-rate transport stream by means of statistical multiplexing. This makes it possible to transfer several video and audio channels simultaneously over the same frequency channel, together with various services. This may involve several standard-definition television (SDTV) programs (particularly on DVB-T, DVB-S2, ISDB and ATSC-C), or one HDTV, possibly with a single SDTV companion channel, over one 6 to 8 MHz-wide TV channel. The device that accomplishes this is called a statistical multiplexer. In several of these systems, the multiplexing results in an MPEG transport stream. The newer DVB standards DVB-S2 and DVB-T2 have the capacity to carry several HDTV channels in one multiplex.
In digital radio, a multiplex (also known as an ensemble) is a number of radio stations that are grouped together. A multiplex is a stream of digital information that includes audio and other data.
On communications satellites which carry broadcast television networks and radio networks, this is known as multiple channel per carrier or MCPC. Where multiplexing is not practical (such as where there are different sources using a single transponder), single channel per carrier mode is used.
Analog broadcasting
In FM broadcasting and other analog radio media, multiplexing is a term commonly given to the process of adding subcarriers to the audio signal before it enters the transmitter, where modulation occurs. (In fact, the stereo multiplex signal can be generated using time-division multiplexing, by switching between the two (left channel and right channel) input signals at an ultrasonic rate (the subcarrier), and then filtering out the higher harmonics.) Multiplexing in this sense is sometimes known as MPX, which in turn is also an old term for stereophonic FM, seen on stereo systems since the 1960s.
Other meanings
In spectroscopy the term is used to indicate that the experiment is performed with a mixture of frequencies at once and their respective response unraveled afterward using the Fourier transform principle.
In computer programming, it may refer to using a single in-memory resource (such as a file handle) to handle multiple external resources (such as on-disk files).
Some electrical multiplexing techniques do not require a physical "multiplexer" device; instead they rely on a "keyboard matrix" or "Charlieplexing" design style:
Multiplexing may refer to the design of a multiplexed display (non-multiplexed displays are immune to break up).
Multiplexing may refer to the design of a "switch matrix" (non-multiplexed buttons are immune to "phantom keys" and also immune to "phantom key blocking").
In high-throughput DNA sequencing, the term is used to indicate that some artificial sequences (often called barcodes or indexes) have been added to link given sequence reads to a given sample, and thus allow for the sequencing of multiple samples in the same reaction.
In sociolinguistics, multiplexity is used to describe the number of distinct connections between individuals who are part of a social network. A multiplex network is one in which members share a number of ties stemming from more than one social context, such as workmates, neighbors, or relatives.
The neper (symbol: Np) is a logarithmic unit for ratios of measurements of physical field and power quantities, such as gain and loss of electronic signals. The unit's name is derived from the name of John Napier, the inventor of logarithms. As is the case for the decibel and bel, the neper is a unit defined in the international standard ISO 80000. It is not part of the International System of Units (SI), but is accepted for use alongside the SI.
Definition
Like the decibel, the neper is a unit in a logarithmic scale. While the bel uses the decadic (base-10) logarithm to compute ratios, the neper uses the natural logarithm, based on Euler's number (e ≈ 2.71828). The level of a ratio of two signal amplitudes or root-power quantities, with the unit neper, is given by

L = ln(x1/x2) Np,

where x1 and x2 are the signal amplitudes, and ln is the natural logarithm. The level of a ratio of two power quantities, with the unit neper, is given by

L = (1/2) ln(P1/P2) Np,

where P1 and P2 are the signal powers.

In the International System of Quantities, the neper is defined as 1 Np = 1.
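The two level definitions above translate directly into code; a small sketch (function names are invented):

```python
import math

def amplitude_level_np(x1, x2):
    """Level of a root-power (amplitude) ratio, in nepers: ln(x1/x2)."""
    return math.log(x1 / x2)

def power_level_np(p1, p2):
    """Level of a power ratio, in nepers: (1/2) ln(p1/p2)."""
    return 0.5 * math.log(p1 / p2)

# Because power is proportional to amplitude squared, a 3:1 amplitude
# ratio and a 9:1 power ratio give the same level in nepers.
```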
Units
The neper is defined in terms of ratios of field quantities — also called root-power quantities — (for example, voltage or current amplitudes in electrical circuits, or pressure in acoustics), whereas the decibel was originally defined in terms of power ratios. A power ratio 10 log r dB is equivalent to a field-quantity ratio 20 log r dB, since power in a linear system is proportional to the square (Joule's laws) of the amplitude. Hence the decibel and the neper have a fixed ratio to each other:

1 Np = 20 log10(e) dB ≈ 8.6859 dB

and

1 dB = (ln 10)/20 Np ≈ 0.1151 Np.

The (voltage) level ratio is thus L = ln(x1/x2) Np = 20 log10(x1/x2) dB.
Like the decibel, the neper is a dimensionless unit. The International Telecommunication Union (ITU) recognizes both units. Only the neper is coherent with the SI.
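The fixed ratio between the two units gives a one-line conversion in either direction; a sketch (the constant names are invented):

```python
import math

NP_PER_DB = math.log(10) / 20   # 1 dB in nepers, ~0.11513 Np
DB_PER_NP = 20 / math.log(10)   # 1 Np in decibels, = 20*log10(e) ~ 8.68589 dB

def np_to_db(level_np):
    return level_np * DB_PER_NP

def db_to_np(level_db):
    return level_db * NP_PER_DB
```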
Applications
The neper is a natural linear unit of relative difference, meaning in nepers (logarithmic units) relative differences add rather than multiply. This property is shared with logarithmic units in other bases, such as the bel. | Neper | Wikipedia | 462 | 41402 | https://en.wikipedia.org/wiki/Neper | Physical sciences | Ratio | Basics and measurement |
The derived units decineper (1 dNp = 0.1 neper) and centineper (1 cNp = 0.01 neper) are also used. The centineper for root-power quantities corresponds to a log point or log percentage. | Neper | Wikipedia | 56 | 41402 | https://en.wikipedia.org/wiki/Neper | Physical sciences | Ratio | Basics and measurement |
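A sketch of why centinepers behave like "log points": equal and opposite changes cancel exactly, unlike plain percentage changes (the function name is invented):

```python
import math

def relative_change_cnp(old, new):
    """Relative change expressed in centinepers (log points)."""
    return 100.0 * math.log(new / old)

# Log-point changes add, where percentage changes do not:
up = relative_change_cnp(100, 110)    # roughly +9.53 cNp
down = relative_change_cnp(110, 100)  # roughly -9.53 cNp
# up + down cancels to zero, whereas +10% followed by -10% of the new
# value would leave a net 1% loss.
```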
Network topology is the arrangement of the elements (links, nodes, etc.) of a communication network. Network topology can be used to define or describe the arrangement of various types of telecommunication networks, including command and control radio networks, industrial fieldbusses and computer networks.
Network topology is the topological structure of a network and may be depicted physically or logically. It is an application of graph theory wherein communicating devices are modeled as nodes and the connections between the devices are modeled as links or lines between the nodes. Physical topology is the placement of the various components of a network (e.g., device location and cable installation), while logical topology illustrates how data flows within a network. Distances between nodes, physical interconnections, transmission rates, or signal types may differ between two different networks, yet their logical topologies may be identical. A network's physical topology is a particular concern of the physical layer of the OSI model.
Examples of network topologies are found in local area networks (LAN), a common computer network installation. Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. A wide variety of physical topologies have been used in LANs, including ring, bus, mesh and star. Conversely, mapping the data flow between the components determines the logical topology of the network. In comparison, Controller Area Networks, common in vehicles, are primarily distributed control system networks of one or more controllers interconnected with sensors and actuators over, invariably, a physical bus topology.
Topologies
Two basic categories of network topologies exist, physical topologies and logical topologies.
The transmission medium layout used to link devices is the physical topology of the network. For conductive or fiber optical mediums, this refers to the layout of cabling, the locations of nodes, and the links between the nodes and the cabling. The physical topology of a network is determined by the capabilities of the network access devices and media, the level of control or fault tolerance desired, and the cost associated with cabling or telecommunication circuits. | Network topology | Wikipedia | 440 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
In contrast, logical topology is the way that the signals act on the network media, or the way that the data passes through the network from one device to the next without regard to the physical interconnection of the devices. A network's logical topology is not necessarily the same as its physical topology. For example, the original twisted pair Ethernet using repeater hubs was a logical bus topology carried on a physical star topology. Token Ring is a logical ring topology, but is wired as a physical star from the media access unit. Physically, Avionics Full-Duplex Switched Ethernet (AFDX) can be a cascaded star topology of multiple dual redundant Ethernet switches; however, the AFDX virtual links are modeled as time-switched single-transmitter bus connections, thus following the safety model of a single-transmitter bus topology previously used in aircraft. Logical topologies are often closely associated with media access control methods and protocols. Some networks are able to dynamically change their logical topology through configuration changes to their routers and switches.
Links
The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cables (Ethernet, HomePNA, power line communication, G.hn), optical fiber (fiber-optic communication), and radio waves (wireless networking). In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer.
A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards (e.g. those defined by IEEE 802.11) use radio waves, or others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.
Wired technologies
The orders of the following wired technologies are, roughly, from slowest to fastest transmission speed. | Network topology | Wikipedia | 418 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation between the conductors helps maintain the characteristic impedance of the cable which can help improve its performance. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
Signal traces on printed circuit boards are common for board-level serial communication, particularly between certain types of integrated circuits, a common example being SPI.
Ribbon cable (untwisted and possibly unshielded) has been a cost-effective media for serial protocols, especially within metallic enclosures or rolled within copper braid or foil, over short distances, or at lower data rates. Several serial network protocols can be deployed without shielded or twisted pair cabling, that is, with flat or ribbon cable, or a hybrid flat and twisted ribbon cable, should EMC, length, and bandwidth constraints permit: RS-232, RS-422, RS-485, CAN, GPIB, SCSI, etc.
Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form comes in several category ratings, designed for use in various scenarios. | Network topology | Wikipedia | 443 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity from electrical interference. Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent, and helps enable data rates of up to trillions of bits per second. Optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea communications cables to interconnect continents.
Price is a main factor distinguishing wired- and wireless technology options in a business. Wireless options command a price premium that can make purchasing wired computers, printers and other devices a financial benefit. Before making the decision to purchase hard-wired technology products, a review of the restrictions and limitations of the selections is necessary. Business and employee needs may override any cost considerations.
Wireless technologies
Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations must therefore be spaced within line of sight of each other, typically some tens of kilometres apart.
Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geostationary orbit above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
Exotic technologies
There have been various attempts at transporting data over exotic media:
IP over Avian Carriers was a humorous April fool's Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
Extending the Internet to interplanetary dimensions via radio waves, the Interplanetary Internet. | Network topology | Wikipedia | 499 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
Both cases have a large round-trip delay time, which gives slow two-way communication, but does not prevent sending large amounts of information.
Nodes
Network nodes are the points of connection of the transmission medium to transmitters and receivers of the electrical, optical, or radio signals carried in the medium. Nodes may be associated with a computer, but certain types may have only a microcontroller at a node or possibly no programmable device at all. In the simplest of serial arrangements, one RS-232 transmitter can be connected by a pair of wires to one receiver, forming two nodes on one link, or a Point-to-Point topology. Some protocols permit a single node to only either transmit or receive (e.g., ARINC 429). Other protocols have nodes that can both transmit and receive into a single channel (e.g., CAN can have many transceivers connected to a single bus). While the conventional system building blocks of a computer network include network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, gateways, and firewalls, most address network concerns beyond the physical network topology and may be represented as single nodes on a particular physical network topology.
Network interfaces
A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry.
The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole.
In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce. | Network topology | Wikipedia | 449 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
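A sketch of splitting a MAC address into its manufacturer (OUI) and NIC-specific halves, as described above (the function name is invented):

```python
def split_mac(mac):
    """Split a MAC address like '00:1A:2B:3C:4D:5E' into its
    manufacturer prefix (OUI, the three most significant octets) and
    the device-specific half assigned by that manufacturer."""
    octets = mac.replace("-", ":").split(":")
    if len(octets) != 6 or any(len(o) != 2 for o in octets):
        raise ValueError(f"not a MAC address: {mac!r}")
    oui = ":".join(octets[:3]).upper()
    nic_specific = ":".join(octets[3:]).upper()
    return oui, nic_specific
```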
Repeaters and hubs
A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal may be reformed or retransmitted at a higher power level, to the other side of an obstruction possibly using a different transmission medium, so that the signal can cover longer distances without degradation. Commercial repeaters have extended RS-232 segments from 15 meters to over a kilometer. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.
Repeaters work within the physical layer of the OSI model, that is, there is no end-to-end change in the physical protocol across the repeater, or repeater pair, even if a different physical layer may be used between the ends of the repeater, or repeater pair. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule.
A repeater with multiple ports is known as a hub: an Ethernet hub in Ethernet networks, a USB hub in USB networks.
USB networks use hubs to form tiered-star topologies.
Ethernet hubs and repeaters in LANs have been mostly obsoleted by modern switches.
Bridges
A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks.
Bridges come in three basic types:
Local bridges: Directly connect LANs
Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.
Wireless bridges: Can be used to join LANs or connect remote devices to LANs. | Network topology | Wikipedia | 452 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
Switches
A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the destination MAC address in each frame.
A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge. It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches.
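The learn-then-forward behaviour described above can be modeled in a few lines; a toy sketch, not a real switch implementation (class and method names are invented):

```python
class LearningSwitch:
    """Minimal model of a layer-2 switch's MAC-learning behaviour."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        """Learn the source address, then return the port(s) to forward to."""
        self.mac_table[src_mac] = in_port          # learn / refresh
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # known: single port
        # Unknown destination: flood to every port except the source.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
flood = sw.handle_frame("AA", "BB", in_port=0)   # BB unknown: flood
sw.handle_frame("BB", "AA", in_port=2)           # learns BB is on port 2
direct = sw.handle_frame("AA", "BB", in_port=0)  # now forwarded directly
```

Real switches also age out table entries and handle broadcast addresses; both are omitted here.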
Multi-layer switches are capable of routing based on layer 3 addressing or additional logical levels. The term switch is often used loosely to include devices such as routers and bridges, as well as devices that may distribute traffic based on load or based on application content (e.g., a Web URL identifier).
Routers
A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. A destination in a routing table can be a black hole: data can be routed into it, but no further processing is done for that data, i.e. the packets are dropped.
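A sketch of routing-table lookup with longest-prefix matching, modeling a black hole as a `None` next hop (the table entries are made up for illustration):

```python
import ipaddress

def lookup(routing_table, dst):
    """Return the next hop for dst by longest-prefix match; None models
    a black-hole route (the packet is silently dropped)."""
    dst = ipaddress.ip_address(dst)
    candidates = [(net, hop) for net, hop in routing_table if dst in net]
    if not candidates:
        return None
    # The most specific (longest) matching prefix wins.
    return max(candidates, key=lambda entry: entry[0].prefixlen)[1]

table = [
    (ipaddress.ip_network("0.0.0.0/0"), "gateway"),   # default route
    (ipaddress.ip_network("10.0.0.0/8"), "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), None),      # black hole
]
```

Hardware routers implement this lookup with specialized structures (tries, TCAMs) rather than a linear scan, but the matching rule is the same.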
Modems
Modems (MOdulator-DEModulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, using a digital subscriber line technology.
Firewalls
A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.
Classification
The study of network topology recognizes eight basic topologies: point-to-point, bus, star, ring or circular, mesh, tree, hybrid, or daisy chain.
Point-to-point | Network topology | Wikipedia | 488 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
The simplest topology with a dedicated link between two endpoints. Easiest to understand, of the variations of point-to-point topology, is a point-to-point communication channel that appears, to the user, to be permanently associated with the two endpoints. A child's tin can telephone is one example of a physical dedicated channel.
Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up dynamically and dropped when no longer needed. Switched point-to-point topologies are the basic model of conventional telephony.
The value of a permanent point-to-point network is unimpeded communications between the two endpoints. The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers and has been expressed as Metcalfe's Law.
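Metcalfe's observation counts the potential endpoint pairs, which grow roughly as the square of the subscriber count; a one-line sketch (the function name is invented):

```python
def potential_links(n):
    """Number of distinct endpoint pairs among n subscribers; Metcalfe's
    Law takes a network's value as proportional to this, ~n**2 / 2."""
    return n * (n - 1) // 2
```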
Daisy chain
Daisy chaining is accomplished by connecting each computer in series to the next. If a message is intended for a computer partway down the line, each system bounces it along in sequence until it reaches the destination. A daisy-chained network can take two basic forms: linear and ring.
A linear topology puts a two-way link between one computer and the next. However, this was expensive in the early days of computing, since each computer (except for the ones at each end) required two receivers and two transmitters.
By connecting the computers at each end of the chain, a ring topology can be formed. When a node sends a message, the message is processed by each computer in the ring. An advantage of the ring is that the number of transmitters and receivers can be cut in half. Since a message will eventually loop all of the way around, transmission does not need to go both directions. Alternatively, the ring can be used to improve fault tolerance. If the ring breaks at a particular link then the transmission can be sent via the reverse path thereby ensuring that all nodes are always connected in the case of a single failure.
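The reverse-path fault tolerance described above can be sketched as follows (a toy model; real ring protocols use dedicated recovery mechanisms):

```python
def ring_path(n, src, dst, broken_link=None):
    """Hops from src to dst around an n-node ring in the normal
    direction, falling back to the reverse path if the forward path
    would cross the single broken link, given as a pair (a, b)."""
    def walk(start, step):
        path, node = [start], start
        while node != dst:
            nxt = (node + step) % n
            if broken_link and {node, nxt} == set(broken_link):
                return None  # this direction crosses the break
            path.append(nxt)
            node = nxt
        return path

    return walk(src, +1) or walk(src, -1)

# On a healthy 5-node ring, 0 -> 3 goes forward through 1 and 2; with
# the 1-2 link broken, traffic is sent the other way around via 4.
```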
Bus
In local area networks using bus topology, each node is connected by interface connectors to a single central cable. This is the 'bus', also referred to as the backbone, or trunk – all data transmission between nodes in the network is transmitted over this common transmission medium and is able to be received by all nodes in the network simultaneously. | Network topology | Wikipedia | 473 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
A signal containing the address of the intended receiving machine travels from a source machine in both directions to all machines connected to the bus until it finds the intended recipient, which then accepts the data. If the machine address does not match the intended address for the data, the data portion of the signal is ignored. Since the bus topology consists of only one wire it is less expensive to implement than other topologies, but the savings are offset by the higher cost of managing the network. Additionally, since the network is dependent on the single cable, it can be the single point of failure of the network. In this topology data being transferred may be accessed by any node.
Linear bus
In a linear bus network, all of the nodes of the network are connected to a common transmission medium which has just two endpoints. When the electrical signal reaches the end of the bus, the signal is reflected back down the line, causing unwanted interference. To prevent this, the two endpoints of the bus are normally terminated with a device called a terminator.
Distributed bus
In a distributed bus network, all of the nodes of the network are connected to a common transmission medium with more than two endpoints, created by adding branches to the main section of the transmission medium – the physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology because all nodes share a common transmission medium.
Star
In star topology (also called hub-and-spoke), every peripheral node (computer workstation or any other peripheral) is connected to a central node called a hub or switch. The hub is the server and the peripherals are the clients. The network does not necessarily have to resemble a star to be classified as a star network, but all of the peripheral nodes on the network must be connected to one central hub. All traffic that traverses the network passes through the central hub, which acts as a signal repeater.
The star topology is considered the easiest topology to design and implement. One advantage of the star topology is the simplicity of adding additional nodes. The primary disadvantage of the star topology is that the hub represents a single point of failure. Also, since all peripheral communication must flow through the central hub, the aggregate central bandwidth forms a network bottleneck for large clusters. | Network topology | Wikipedia | 453 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
Extended star
The extended star network topology extends a physical star topology by one or more repeaters between the central node and the peripheral (or 'spoke') nodes. The repeaters are used to extend the maximum transmission distance of the physical layer, the point-to-point distance between the central node and the peripheral nodes. Repeaters allow greater transmission distance, further than would be possible using just the transmitting power of the central node. The use of repeaters can also overcome limitations from the standard upon which the physical layer is based.
A physical extended star topology in which repeaters are replaced with hubs or switches is a type of hybrid network topology and is referred to as a physical hierarchical star topology, although some texts make no distinction between the two topologies.
A physical hierarchical star topology can also be referred as a tier-star topology. This topology differs from a tree topology in the way star networks are connected together. A tier-star topology uses a central node, while a tree topology uses a central bus and can also be referred as a star-bus network.
Distributed star
A distributed star is a network topology that is composed of individual networks that are based upon the physical star topology connected in a linear fashion – i.e., 'daisy-chained' – with no central or top level connection point (e.g., two or more 'stacked' hubs, along with their associated star connected nodes or 'spokes').
Ring
A ring topology is a daisy chain in a closed loop. Data travels around the ring in one direction. When one node sends data to another, the data passes through each intermediate node on the ring until it reaches its destination. The intermediate nodes repeat (retransmit) the data to keep the signal strong. Every node is a peer; there is no hierarchical relationship of clients and servers. If one node is unable to retransmit data, it severs communication between the nodes before and after it in the ring.
Advantages:
When the load on the network increases, its performance is better than bus topology.
There is no need of network server to control the connectivity between workstations.
Disadvantages:
Aggregate network bandwidth is bottlenecked by the weakest link between two nodes.
Mesh
The value of a fully meshed network is taken to grow exponentially with the number of subscribers, on the assumption that communicating groups of any size, from any two endpoints up to and including all of the endpoints, can form; this scaling is expressed by Reed's Law.
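Reed's Law counts the nontrivial groups that the endpoints can form; a sketch (the function name is invented):

```python
def reeds_law_groups(n):
    """Number of possible groups of size >= 2 among n endpoints, the
    quantity Reed's Law takes a group-forming network's value to scale
    with: 2**n - n - 1 (all subsets minus the singletons and the
    empty set)."""
    return 2 ** n - n - 1

# For 3 endpoints there are 4 such groups: three pairs plus the trio.
```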
Fully connected network | Network topology | Wikipedia | 503 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
In a fully connected network, all nodes are interconnected. (In graph theory this is called a complete graph.) The simplest fully connected network is a two-node network. A fully connected network doesn't need to use packet switching or broadcasting. However, since the number of connections grows quadratically with the number of nodes,

c = n(n - 1)/2,

this makes it impractical for large networks. In this kind of topology, the failure of one link does not affect the other nodes in the network.
Partially connected network
In a partially connected network, certain nodes are connected to exactly one other node; but some nodes are connected to two or more other nodes with a point-to-point link. This makes it possible to make use of some of the redundancy of mesh topology that is physically fully connected, without the expense and complexity required for a connection between every node in the network.
Hybrid
Hybrid topology is also known as hybrid network. Hybrid networks combine two or more topologies in such a way that the resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, a tree network (or star-bus network) is a hybrid topology in which star networks are interconnected via bus networks. However, a tree network connected to another tree network is still topologically a tree network, not a distinct network type. A hybrid topology is always produced when two different basic network topologies are connected.
A star-ring network consists of two or more ring networks connected using a multistation access unit (MAU) as a centralized hub.
Snowflake topology is meshed at the core, but tree shaped at the edges.
Two other hybrid network types are hybrid mesh and hierarchical star.
Centralization
The star topology reduces the probability of a network failure by connecting all of the peripheral nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical bus network such as Ethernet, this central node (traditionally a hub) rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be unaffected. However, the disadvantage is that the failure of the central node will cause the failure of all of the peripheral nodes. | Network topology | Wikipedia | 509 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
If the central node is passive, the originating node must be able to tolerate the reception of an echo of its own transmission, delayed by the two-way round trip transmission time (i.e. to and from the central node) plus any delay generated in the central node. An active star network has an active central node that usually has the means to prevent echo-related problems.
A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in a hierarchy. This tree structure has individual peripheral nodes (e.g. leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the functionality of the central node may be distributed.
As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. If a link connecting a leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from the rest.
To alleviate the amount of network traffic that comes from broadcasting all signals to all nodes, more advanced central nodes were developed that are able to keep track of the identities of the nodes that are connected to the network. These network switches will learn the layout of the network by listening on each port during normal data transmission, examining the data packets and recording the address/identifier of each connected node and which port it is connected to in a lookup table held in memory. This lookup table then allows future transmissions to be forwarded to the intended destination only.
Daisy chain topology is a way of connecting network nodes in a linear or ring structure. It is used to transmit messages from one node to the next until they reach the destination node.
A daisy chain network can have two types: linear and ring. A linear daisy chain network is like an electrical series, where the first and last nodes are not connected. A ring daisy chain network is where the first and last nodes are connected, forming a loop. | Network topology | Wikipedia | 428 | 41413 | https://en.wikipedia.org/wiki/Network%20topology | Technology | Networks | null |
Decentralization
In a partially connected mesh topology, there are at least two nodes with two or more paths between them to provide redundant paths in case the link providing one of the paths fails. Decentralization is often used to compensate for the single-point-failure disadvantage that is present when using a single device as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful.
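In a binary hypercube with nodes numbered so that neighbours differ in exactly one bit, the minimum hop count between two nodes is simply the Hamming distance between their numbers; a sketch:

```python
def hypercube_hops(a, b):
    """Minimum hops between two hypercube nodes, numbered so that
    neighbours differ in exactly one bit: the Hamming distance,
    i.e. the number of set bits in a XOR b."""
    return bin(a ^ b).count("1")

# In a 3-cube (8 nodes), opposite corners 0b000 and 0b111 are 3 hops
# apart, while 0b000 and 0b100 are direct neighbours.
```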
This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multidimensional ring has a toroidal topology, for instance.
A fully connected network, complete topology, or full mesh topology is a network topology in which there is a direct link between every pair of nodes. In a fully connected network with n nodes, there are n(n − 1)/2 direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple redundant paths for data provided by the large number of links between nodes. This topology is mostly seen in military applications.
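The quadratic growth in link count is exactly what makes full meshes expensive; the count is a plain combinatorial identity:

```python
def full_mesh_links(n):
    # each of the n nodes links to the other n - 1 nodes;
    # divide by 2 so each link is counted once, not twice
    return n * (n - 1) // 2
```

Four nodes need only 6 links, but 100 nodes already require 4,950.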
Noise is unwanted, unintentional, or harmful sound — sound considered unpleasant, loud, or disruptive to hearing or mental faculties. From a physics standpoint, there is no distinction between noise and desired sound, as both are vibrations through a medium, such as air or water. The difference arises when the brain receives and perceives a sound. Acoustic noise is any sound in the acoustic domain, either deliberate (e.g., music or speech) or unintended. In contrast, noise in electronics may not be audible to the human ear and may require instruments for detection.
In audio engineering, noise can refer to the unwanted residual electronic noise signal that gives rise to acoustic noise heard as a hiss. This signal noise is commonly measured using A-weighting or ITU-R 468 weighting. In experimental sciences, noise can refer to any random fluctuations of data that hinders perception of a signal.
Measurement
Sound is measured based on the amplitude and frequency of a sound wave. Amplitude measures how forceful the wave is; the energy in a sound wave, which determines its loudness or intensity, is measured in decibels (dB), a logarithmic scale. Pitch, on the other hand, describes the frequency of a sound and is measured in hertz (Hz).
The main instrument for measuring sounds in the air is the sound level meter. Many different instruments are used to measure noise: noise dosimeters are often used in occupational environments, noise monitors are used to measure environmental noise and noise pollution, and recently smartphone-based sound level meter applications (apps) have been used to crowdsource and map recreational and community noise.
A-weighting is applied to a sound spectrum to represent the sound that humans are capable of hearing at each frequency. Sound pressure is thus expressed in terms of dBA. 0 dBA is the softest level that a person can hear. Normal speaking voices are around 65 dBA. A rock concert can be about 120 dBA.
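The A-weighting curve has a standard closed form (defined in IEC 61672) that is zero at 1 kHz by construction. The sketch below evaluates that textbook formula; it is not code from any particular measurement device.

```python
import math

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f (Hz), per the standard formula."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00  # +2.00 dB normalizes to 0 dB at 1 kHz
```

Low frequencies are strongly attenuated (about −19 dB at 100 Hz), matching the ear's reduced sensitivity there.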
Recording and reproduction
In audio, recording, and broadcast systems, audio noise refers to the residual low-level sound (four major types: hiss, rumble, crackle, and hum) that is heard in quiet periods of a program. This variation from the expected pure sound or silence can be caused by the audio recording equipment, the instrument, or ambient noise in the recording room.
In audio engineering it can refer either to the acoustic noise from loudspeakers or to the unwanted residual electronic noise signal that gives rise to acoustic noise heard as hiss. This signal noise is commonly measured using A-weighting or ITU-R 468 weighting.
Noise is often generated deliberately and used as a test signal for audio recording and reproduction equipment.
Environmental noise
Environmental noise is the accumulation of all noise present in a specified environment. The principal sources of environmental noise are surface motor vehicles, aircraft, trains and industrial sources. These noise sources expose millions of people to noise pollution that creates not only annoyance, but also significant health consequences such as elevated incidence of hearing loss, cardiovascular disease, and many others. Urban noise is generally not of an intensity that causes hearing loss but it interrupts sleep, disturbs communication and interferes with other human activities. There are a variety of mitigation strategies and controls available to reduce sound levels including source intensity reduction, land-use planning strategies, noise barriers and sound baffles, time of day use regimens, vehicle operational controls and architectural acoustics design measures.
Regulation
Certain geographic areas or specific occupations may be at a higher risk of being exposed to constantly high levels of noise; regulation may prevent negative health outcomes. Noise regulation includes statutes or guidelines relating to sound transmission established by national, state or provincial and municipal levels of government. Environmental noise is governed by laws and standards which set maximum recommended levels of noise for specific land uses, such as residential areas, areas of outstanding natural beauty, or schools. These standards usually specify measurement using a weighting filter, most often A-weighting.
United States
In 1972, the Noise Control Act was passed to promote a healthy living environment for all Americans, where noise does not pose a threat to human health. This policy's main objectives were: (1) establish coordination of research in the area of noise control, (2) establish federal standards on noise emission for commercial products, and (3) promote public awareness about noise emission and reduction.
The Quiet Communities Act of 1978 promoted noise control programs at the state and local level and developed a research program on noise control. Both laws authorized the Environmental Protection Agency to study the effects of noise and evaluate regulations regarding noise control.
The National Institute for Occupational Safety and Health (NIOSH) provides recommendations on noise exposure in the workplace. In 1972 (revised in 1998), NIOSH published a document outlining recommended standards relating to occupational exposure to noise, with the purpose of reducing the risk of developing permanent hearing loss related to exposure at work. This publication set the recommended exposure limit (REL) of noise in an occupational setting to 85 dBA for 8 hours using a 3-dB exchange rate (for every 3-dB increase in level, the duration of exposure should be cut in half, i.e., 88 dBA for 4 hours, 91 dBA for 2 hours, 94 dBA for 1 hour, etc.). However, in 1973 the Occupational Safety and Health Administration (OSHA) maintained the requirement of an 8-hour average of 90 dBA. The following year, OSHA required employers to provide a hearing conservation program to workers exposed to 85 dBA average 8-hour workdays.
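The 3-dB exchange rate maps directly to a halving formula. The function below is a sketch of that arithmetic, not official NIOSH software:

```python
def niosh_max_hours(level_dba, rel=85.0, base_hours=8.0, exchange_db=3.0):
    """Recommended maximum daily exposure duration for a given noise level.

    Every exchange_db increase above the REL halves the allowed duration.
    """
    return base_hours / (2.0 ** ((level_dba - rel) / exchange_db))
```

This reproduces the examples above: 88 dBA allows 4 hours, 91 dBA allows 2, and 94 dBA allows 1. (OSHA's 90-dBA criterion uses a 5-dB exchange rate instead, so the same formula with different parameters gives OSHA's table.)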
Europe
The European Environment Agency regulates noise control and surveillance within the European Union. The Environmental Noise Directive was set to determine levels of noise exposure, increase public access to information regarding environmental noise, and reduce environmental noise. Additionally, in the European Union, underwater noise is a pollutant according to the Marine Strategy Framework Directive (MSFD). The MSFD requires EU Member States to achieve or maintain Good Environmental Status, meaning that the "introduction of energy, including underwater noise, is at levels that do not adversely affect the marine environment".
Health effects
Exposure to noise is associated with several negative health outcomes. Depending on duration and level of exposure, noise may cause or increase the likelihood of hearing loss, high blood pressure, ischemic heart disease, sleep disturbances, injuries, and even decreased school performance. When noise is prolonged, the body's stress responses can be triggered, which can include increased heart rate and rapid breathing. There are also causal relationships between noise and psychological effects such as annoyance, psychiatric disorders, and effects on psychosocial well-being.
Noise exposure has increasingly been identified as a public health issue, especially in occupational settings, as demonstrated by the creation of NIOSH's Noise and Hearing Loss Prevention program. Noise has also proven to be an occupational hazard, as it is the most common work-related pollutant. Noise-induced hearing loss, when associated with noise exposure at the workplace, is also called occupational hearing loss. For example, some occupational studies have shown that workers regularly exposed to noise above 85 decibels have higher blood pressure than those who are not exposed.
Hearing loss prevention
While noise-induced hearing loss is permanent, it is also preventable. Particularly in the workplace, regulations may exist limiting permissible exposure limit to noise. This can be especially important for professionals working in settings with consistent exposure to loud sounds, such as musicians, music teachers and audio engineers. Examples of measures taken to prevent noise-induced hearing loss in the workplace include engineering noise control, the Buy-Quiet initiative, creation of the Safe-In-Sound award, and noise surveillance.
OSHA requires the use of hearing protection, but hearing protection devices (HPDs) issued without individual selection, training, and fit testing do not significantly reduce the risk of hearing loss. For example, one study covered more than 19,000 workers, some of whom usually used hearing protective devices and some of whom did not use them at all; there was no statistically significant difference in the risk of noise-induced hearing loss between the two groups.
Literary views
Roland Barthes distinguishes between physiological noise, which is merely heard, and psychological noise, which is actively listened to. Physiological noise is felt subconsciously as the vibrations of the noise (sound) waves physically interact with the body while psychological noise is perceived as our conscious awareness shifts its attention to that noise.
Luigi Russolo, one of the first composers of noise music, wrote the essay The Art of Noises. He argued that any kind of noise could be used as music, as audiences become more familiar with noises caused by technological advancements; noise has become so prominent that pure sound no longer exists.
Avant-garde composer Henry Cowell claimed that technological advancements have reduced unwanted noises from machines, but have not managed so far to eliminate them.
Felix Urban sees noise as a result of cultural circumstances. In his comparative study on sound and noise in cities, he points out that noise regulations are only one indicator of what is considered harmful. It is the way in which people live and behave (acoustically) that determines how sounds are perceived.
An optical disc is a flat, usually disc-shaped object that stores information in the form of physical variations on its surface that can be read with the aid of a beam of light. Optical discs can be reflective, where the light source and detector are on the same side of the disc, or transmissive, where light shines through the disc to be detected on the other side.
Optical discs can store analog information (e.g. Laserdisc), digital information (e.g. DVD), or store both on the same disc (e.g. CD Video).
Their main uses are the distribution of media and data, and long-term archival.
Design and technology
The encoding material sits atop a thicker substrate (usually polycarbonate) that makes up the bulk of the disc and forms a dust defocusing layer. The encoding pattern follows a continuous, spiral path covering the entire disc surface and extending from the innermost track to the outermost track.
The data are stored on the disc with a laser or stamping machine, and can be accessed when the data path is illuminated with a laser diode in an optical disc drive that spins the disc at speeds of about 200 to 4,000 RPM or more, depending on the drive type, disc format, and the distance of the read head from the center of the disc (outer tracks are read at a higher data speed due to higher linear velocities at the same angular velocities).
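For constant-linear-velocity (CLV) reading, the spindle speed follows directly from the track radius. A rough sketch, assuming the nominal 1× CD linear velocity of about 1.3 m/s (the specified figure varies between roughly 1.2 and 1.4 m/s):

```python
import math

LINEAR_SPEED = 1.3  # m/s, approximate 1x CD linear read velocity

def rpm_at_radius(r_m):
    # revolutions per minute needed to keep the track under the read head
    # moving at LINEAR_SPEED (one revolution covers 2*pi*r of track)
    return LINEAR_SPEED / (2 * math.pi * r_m) * 60
```

This gives roughly 500 RPM at the innermost track (r ≈ 25 mm), falling to about 210 RPM near the edge (r ≈ 58 mm); higher drive speed multiples scale these numbers up toward the figures quoted above.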
Most optical discs exhibit a characteristic iridescence as a result of the diffraction grating formed by their grooves. This side of the disc contains the actual data and is typically coated with a transparent material, usually lacquer.
The reverse side of an optical disc usually has a printed label, sometimes made of paper but often printed or stamped onto the disc itself. Unlike the 3-inch floppy disk, most optical discs do not have an integrated protective casing and are therefore susceptible to data transfer problems due to scratches, fingerprints, and other environmental problems. Blu-rays have a coating called durabis that mitigates these problems.
Optical discs are usually between 7.6 and 30 cm (3 and 12 in) in diameter, with 12 cm (4.7 in) being the most common size. The so-called program area that contains the data commonly starts 25 millimetres away from the center point. A typical disc is about 1.2 mm thick, while the track pitch (distance from the center of one track to the center of the next) ranges from 1.6 μm (for CDs) to 320 nm (for Blu-ray discs).
Recording types
An optical disc is designed to support one of three recording types: read-only (such as CD and CD-ROM), recordable (write-once, like CD-R), or re-recordable (rewritable, like CD-RW). Write-once optical discs commonly have an organic dye (may also be a (phthalocyanine) azo dye, mainly used by Verbatim, or an oxonol dye, used by Fujifilm) recording layer between the substrate and the reflective layer. Rewritable discs typically contain an alloy recording layer composed of a phase change material, most often AgInSbTe, an alloy of silver, indium, antimony, and tellurium. Azo dyes were introduced in 1996 and phthalocyanine only began to see wide use in 2002. The type of dye and the material used on the reflective layer on an optical disc may be determined by shining a light through the disc, as different dye and material combinations have different colors.
Blu-ray Disc recordable discs do not usually use an organic dye recording layer, instead using an inorganic recording layer. Those that do are known as low-to-high (LTH) discs and can be made in existing CD and DVD production lines, but are of lower quality than traditional Blu-ray recordable discs.
File systems
File systems specifically created for optical discs are ISO9660 and the Universal Disk Format (UDF).
ISO9660 can be extended using the "Joliet" extension to store longer file names than standalone ISO9660. The "Rock Ridge" extension can store even longer file names and Unix/Linux-style file permissions, but is not recognized by Windows or by DVD players and similar devices that can read data discs.
For cross-platform compatibility, multiple file systems can co-exist on one disc and reference the same files.
Usage
Optical discs are most commonly used for digital preservation, storing music (particularly for use in a CD player), video (such as for use in a Blu-ray player), or data and programs for personal computers (PC), as well as offline hard copy data distribution due to lower per-unit prices than other types of media. The Optical Storage Technology Association (OSTA) promoted standardized optical storage formats.
Libraries and archives enact optical media preservation procedures to ensure continued usability in the computer's optical disc drive or corresponding disc player.
File operations of traditional mass storage devices such as flash drives, memory cards and hard drives can be simulated using a UDF live file system.
For computer data backup and physical data transfer, optical discs such as CDs and DVDs are gradually being replaced with faster, smaller solid-state devices, especially the USB flash drive. This trend is expected to continue as USB flash drives continue to increase in capacity and drop in price.
Additionally, music, movies, games, software and TV shows purchased, shared or streamed over the Internet have significantly reduced the number of audio CDs, video DVDs and Blu-ray discs sold annually. However, audio CDs and Blu-rays are still preferred and bought by some, as a way of supporting their favorite works while getting something tangible in return, and also because audio CDs (alongside vinyl records and cassette tapes) contain uncompressed audio without the artifacts introduced by lossy compression algorithms like MP3, and Blu-rays offer better image and sound quality than streaming media, without visible compression artifacts, due to higher bitrates and more available storage space. Blu-ray content may sometimes be torrented over the internet, but torrenting may not be an option for some, due to restrictions put in place by ISPs on legal or copyright grounds, low download speeds, or not having enough available storage space, since the content may be several dozen gigabytes in size. Blu-rays may be the only option for those looking to play large games without having to download them over an unreliable or slow internet connection, which is why they are still (as of 2020) widely used by gaming consoles, like the PlayStation 4 and Xbox One X. As of 2020, it is unusual for PC games to be available in a physical format like Blu-ray.
Optical discs are typically stored in special cases, sometimes called jewel cases. Discs should not have any stickers and should not be stored together with paper; papers must be removed from the jewel case before storage. Discs should be handled by the edges to prevent scratching, with the thumb on the inner edge of the disc. ISO 18938:2014 describes best practices for optical disc handling. Optical discs should never be cleaned in a circular pattern, to prevent concentric circles from forming on the disc; improper cleaning can scratch the disc. Recordable discs should not be exposed to light for extended periods of time. Optical discs should be stored in dry and cool conditions to increase longevity, with temperatures between -10 and 23 °C, never exceeding 32 °C, and with humidity never falling below 10%, with recommended storage at 20 to 50% humidity without fluctuations of more than ±10%.
Durability
Although optical discs are more durable than earlier audio-visual and data storage formats, they are susceptible to environmental and daily-use damage, if handled improperly.
Optical discs are not prone to uncontrollable catastrophic failures such as head crashes, power surges, or exposure to water in the way hard disk drives and flash storage are, because an optical disc's storage controller resides in the drive rather than on the disc itself. A disc is usually recoverable from a defective optical drive by pushing a blunt needle into the emergency ejection pinhole, and the disc has no integrated circuitry and no point of immediate water ingress.
Security
As the medium is only accessed through a laser beam and has no internal control circuitry, it cannot contain malicious hardware in the way so-called rubber duckies or USB killers do. Like any data storage medium, however, optical discs can contain malicious data and are able to contain and spread malware, as happened in the Sony BMG copy protection rootkit scandal in 2005, in which Sony misused discs by pre-loading them with malware.
Many types of optical discs are factory-pressed or finalized write once read many storage devices and would therefore not be effective at spreading computer worms that are designed to spread by copying themselves onto optical media, because data on those discs can not be modified once pressed or written. However, re-writable disc technologies (such as CD-RW) are able to spread this type of malware.
History
The first recorded historical use of an optical disc was in 1884 when Alexander Graham Bell, Chichester Bell and Charles Sumner Tainter recorded sound on a glass disc using a beam of light.
Optophonie is a very early (1931) example of a recording device using light for both recording and playing back sound signals on a transparent photograph.
An early analogue optical disc system existed in 1935, used on Welte's sampling organ.
An early analog optical disc used for video recording was invented by David Paul Gregg in 1958 and patented in the US in 1961 and 1969. This form of optical disc was a very early form of the DVD. A later Gregg patent, filed in 1989 and issued in 1990, generated royalty income for Pioneer Corporation's DVA until 2007, by then encompassing the CD, DVD, and Blu-ray systems. In the early 1960s, the Music Corporation of America bought Gregg's patents and his company, Gauss Electrophysics.
American inventor James T. Russell has been credited with inventing the first system to record a digital signal on an optical transparent foil that is lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966 and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's patents (then held by a Canadian company, Optical Recording Corp.) in the 1980s.
Both Gregg's and Russell's discs were flexible media read in transparent mode, which imposes serious drawbacks. Later reflective formats followed across several generations of optical media, including LaserDisc (1969), WORM (1979), the Compact Disc (1984), DVD (1995), Blu-ray (2005), and HD DVD (2006), with more formats currently under development.
First-generation
From the start optical discs were used to store broadcast-quality analog video, and later digital media such as music or computer software. The LaserDisc format stored analog video signals for the distribution of home video, but commercially lost to the VHS videocassette format, due mainly to its high cost and non-re-recordability; other first-generation disc formats were designed only to store digital data and were not initially capable of use as a digital video medium.
Most first-generation disc devices had an infrared laser reading head. The minimum size of the laser spot is proportional to the wavelength of the laser, so wavelength is a limiting factor upon the amount of information that can be stored in a given physical area on the disc. The infrared range is beyond the long-wavelength end of the visible light spectrum, so it supports less density than shorter-wavelength visible light. One example of high-density data storage capacity, achieved with an infrared laser, is 700 MB of net user data for a 12 cm compact disc.
Other factors that affect data storage density include: the existence of multiple layers of data on the disc, the method of rotation (constant linear velocity (CLV), constant angular velocity (CAV), or zoned-CAV), the composition of lands and pits, and how much margin is left unused at the center and the edge of the disc.
Types of Optical Discs:
Compact disc (CD) and derivatives
Audio CD
Video CD (VCD)
Super Video CD
CD Video
CD-Interactive
LaserDisc
GD-ROM
Phase-change Dual
Double Density Compact Disc (DDCD)
Magneto-optical disc
MiniDisc (MD)
MD Data
Write Once Read Many (WORM)
Laserdisc
In the Netherlands in 1969, Philips Research physicist Pieter Kramer invented an optical videodisc in reflective mode with a protective layer read by a focused laser beam (patent filed 1972, issued 1991). Kramer's physical format is used in all optical discs.
In 1975, Philips and MCA began to work together, and in 1978, commercially much too late, they presented their long-awaited Laserdisc in Atlanta. MCA delivered the discs and Philips the players. However, the presentation was a commercial failure, and the cooperation ended.
In Japan and the U.S., Pioneer succeeded with the Laserdisc until the advent of the DVD. In 1979, Philips and Sony, in consortium, successfully developed the audio compact disc.
WORM drive
In 1979, Exxon STAR Systems in Pasadena, CA built a computer-controlled WORM drive that utilized thin-film coatings of tellurium and selenium on a 12" diameter glass disk. The recording system utilized blue light at 457 nm to record and red light at 632.8 nm to read. STAR Systems was bought by Storage Technology Corporation (STC) in 1981 and moved to Boulder, CO. Development of the WORM technology was continued using 14" diameter aluminum substrates. Beta testing of the disk drives, originally labeled the Laser Storage Drive 2000 (LSD-2000), was only moderately successful. Many of the disks were shipped to RCA Laboratories (now David Sarnoff Research Center) to be used in the Library of Congress archiving efforts. The STC disks utilized a sealed cartridge with an optical window for protection.
CD-ROM
The CD-ROM format was developed by Sony and Philips, introduced in 1984, as an extension of Compact Disc Digital Audio and adapted to hold any form of digital data. The same year, Sony demonstrated a LaserDisc data storage format, with a larger data capacity of 3.28 GB.
In the late 1980s and early 1990s, Optex, Inc. of Rockville, MD, built an erasable optical digital video disc system using Electron Trapping Optical Media (ETOM). Although this technology was written up in Video Pro Magazine's December 1994 issue promising "the death of the tape", it was never marketed.
Magnetic disks found only limited application in storing large amounts of data, so other storage techniques were needed. It was found that optical means could produce large-capacity storage devices, which gave rise to optical discs. The first application of this kind was the compact disc (CD), used in audio systems.
Sony and Philips developed the first generation of CDs in the mid-1980s with the complete specifications for these devices. This technology made it possible to represent analog signals digitally. For this purpose, 16-bit samples of the analog signal were taken at a rate of 44,100 samples per second. This sample rate was based on the Nyquist rate of 40,000 samples per second required to capture the audible frequency range up to 20 kHz without aliasing, with additional tolerance to allow the use of less-than-perfect analog audio pre-filters to remove any higher frequencies. The first version of the standard allowed up to 74 minutes of music or 650 MB of data storage.
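The sampling parameters fix the raw audio data rate, and from it the capacity needed for 74 minutes of stereo audio. The arithmetic below sketches that calculation for raw PCM only, ignoring the extra error-correction overhead that reduces the 650 MB data figure:

```python
SAMPLE_RATE = 44_100   # samples per second per channel
BITS_PER_SAMPLE = 16
CHANNELS = 2           # stereo

bit_rate = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS  # 1,411,200 bit/s
bytes_74_min = bit_rate // 8 * 74 * 60               # raw PCM bytes in 74 minutes
megabytes = bytes_74_min / 1_000_000                 # ~783 MB of raw audio
```

That 783 MB of raw audio exceeds the 650 MB of user data because audio frames carry less error-correction overhead than CD-ROM data sectors do.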
Second-generation
Second-generation optical discs were for storing great amounts of data, including broadcast-quality digital video. Such discs usually are read with a visible-light laser (usually red); the shorter wavelength and greater numerical aperture allow a narrower light beam, permitting smaller pits and lands in the disc. In the DVD format, this allows 4.7 GB storage on a standard 12 cm, single-sided, single-layer disc; alternatively, smaller media, such as the DataPlay format, can have capacity comparable to that of the larger, standard compact 12 cm disc.
DVD and derivatives
DVD-Audio
DualDisc
Digital Video Express (DIVX)
DVD-RAM
DVD±R
Nintendo GameCube Game Disc (miniDVD derivative)
Wii Optical Disc (DVD derivative)
Super Audio CD (SACD)
Enhanced Versatile Disc
DataPlay
Hi-MD
Universal Media Disc (UMD)
Ultra Density Optical
DVD-ROM
In 1995, a consortium of manufacturers (Sony, Philips, Toshiba, Panasonic) developed the second generation of the optical disc, the DVD. The DVD disc appeared after the CD-ROM had become widespread in society.
Third-generation
Third-generation optical discs are used for distributing high-definition video and video games and support greater data storage capacities, accomplished with short-wavelength visible-light lasers and greater numerical apertures. Blu-ray Disc and HD DVD use blue-violet lasers and focusing optics of greater aperture, for use with discs with smaller pits and lands, and thereby greater data storage capacity per layer.
In practice, the effective multimedia presentation capacity is improved with enhanced video data compression codecs such as H.264/MPEG-4 AVC and VC-1.
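The capacity jump between generations tracks the diffraction-limited spot size, approximately λ/(2·NA). The wavelengths and numerical apertures below are the commonly quoted values for each format; the λ/(2·NA) estimate itself is a simplification of the full diffraction limit:

```python
def spot_size_nm(wavelength_nm, numerical_aperture):
    # diffraction-limited spot diameter, approximately lambda / (2 * NA)
    return wavelength_nm / (2 * numerical_aperture)

cd_spot = spot_size_nm(780, 0.45)   # infrared laser: ~870 nm spot
dvd_spot = spot_size_nm(650, 0.60)  # red laser: ~540 nm spot
bd_spot = spot_size_nm(405, 0.85)   # blue-violet laser: ~240 nm spot
```

As the spot shrinks, pits and track pitch can shrink with it, raising capacity per layer from roughly 700 MB (CD) to 4.7 GB (DVD) to 25 GB (Blu-ray).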
Blu-ray and derivatives (up to 400 GB - experimental)
BD-R and BD-RE
High Fidelity Pure Audio
AVCHD and AVCREC
BDXL and Blu-ray 3D
4K Blu-ray and 8K Blu-ray
Wii U Optical Disc (25 GB per layer)
HD DVD (discontinued disc format, up to 51 GB triple layer)
CBHD (a derivative of the HD DVD format)
HD VMD
Professional Disc
Announced but not released:
Digital Multilayer Disk
Fluorescent Multilayer Disc
Forward Versatile Disc
Blu-ray and HD-DVD
The third generation optical disc was developed in 2000–2006 and was introduced as Blu-ray Disc. First movies on Blu-ray Discs were released in June 2006. Blu-ray eventually prevailed in a high definition optical disc format war over a competing format, the HD DVD. A standard Blu-ray disc can hold about 25 GB of data, a DVD about 4.7 GB, and a CD about 700 MB.
Fourth-generation
The following formats go beyond the current third-generation discs and have the potential to hold more than one terabyte (1 TB) of data and at least some are meant for cold data storage in data centers:
Archival Disc
Holographic Versatile Disc
Announced but not released:
LS-R
Protein-coated disc
Stacked Volumetric Optical Disc
5D DVD
3D optical data storage (not a single technology, examples are Hyper CD-ROM and Fluorescent Multilayer Disc)
In 2004, development of the Holographic Versatile Disc (HVD) commenced, which promised the storage of several terabytes of data per disc. However, development stagnated towards the late 2000s due to lack of funding.
In 2006, it was reported that Japanese researchers developed ultraviolet ray lasers with a wavelength of 210 nanometers, which would enable a higher bit density than Blu-ray discs. As of 2022, no updates on that project have been reported.
Folio Photonics is planning to release high-capacity discs in 2024 with the cost of $5 per TB, with a roadmap to $1 per TB, using 80% less power than HDD.
Overview of optical types
An optical isolator, or optical diode, is an optical component which allows the transmission of light in only one direction. It is typically used to prevent unwanted feedback into an optical oscillator, such as a laser cavity.
The operation of conventional optical isolators relies on the Faraday effect (which in turn is produced by magneto-optic effect), which is used in the main component, the Faraday rotator. However, integrated isolators which do not rely on magnetism have been made in recent years too.
Theory
The main component of the optical isolator is the Faraday rotator. The magnetic field, B, applied to the Faraday rotator causes a rotation in the polarization of the light due to the Faraday effect. The angle of rotation, β, is given by
β = νBd,
where ν is the Verdet constant of the material (amorphous or crystalline solid, liquid, liquid crystal, vapor, or gas) of which the rotator is made, and d is the length of the rotator. This is shown in Figure 2. Specifically for an optical isolator, the values are chosen to give a rotation of 45°.
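As a numeric sketch of β = νBd, one can solve for the rotator length that yields the required 45° of rotation. The Verdet constant below is an assumed order-of-magnitude value for terbium gallium garnet near 632 nm, not a figure from the text:

```python
import math

V = 134.0                        # Verdet constant, rad/(T*m) (assumed value)
B = 1.0                          # applied magnetic flux density, T
beta_target = math.radians(45)   # an isolator needs 45 degrees of rotation

d = beta_target / (V * B)            # rotator length required, m (~5.9 mm)
beta = math.degrees(V * B * d)       # rotation the device actually produces
print(f"d = {d * 1e3:.2f} mm -> rotation = {beta:.1f} deg")
```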
It has been shown that a crucial requirement for any kind of optical isolator (not only the Faraday isolator) is some kind of non-reciprocal optics.
Polarization dependent isolator
The polarization dependent isolator, or Faraday isolator, is made of three parts, an input polarizer (polarized vertically), a Faraday rotator, and an output polarizer, called an analyzer (polarized at 45°).
Light traveling in the forward direction becomes polarized vertically by the input polarizer. The Faraday rotator will rotate the polarization by 45°. The analyzer then enables the light to be transmitted through the isolator.
Light traveling in the backward direction becomes polarized at 45° by the analyzer. The Faraday rotator will again rotate the polarization by 45°. This means the light is polarized horizontally (the direction of rotation is not sensitive to the direction of propagation). Since the polarizer is vertically aligned, the light will be extinguished.
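The forward and backward behaviour described above can be checked with Malus's law, treating the Faraday rotation as a fixed lab-frame angle (a simplified intensity model, not a full Jones-calculus treatment):

```python
import math

def transmitted_fraction(light_angle_deg, polarizer_angle_deg):
    """Malus's law: intensity fraction passed by an ideal linear polarizer."""
    delta = math.radians(light_angle_deg - polarizer_angle_deg)
    return math.cos(delta) ** 2

FARADAY = -45.0   # lab-frame rotation; same sense for both directions

# Forward: vertical light (90 deg) rotated to 45 deg meets the 45-deg analyzer
forward = transmitted_fraction(90.0 + FARADAY, 45.0)

# Backward: 45-deg light (set by the analyzer) rotated again by -45 deg ends
# up horizontal (0 deg) and is extinguished by the vertical input polarizer
backward = transmitted_fraction(45.0 + FARADAY, 90.0)
print(forward, backward)
```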
Figure 2 shows a Faraday rotator with an input polarizer, and an output analyzer. For a polarization dependent isolator, the angle between the polarizer and the analyzer, , is set to 45°. The Faraday rotator is chosen to give a 45° rotation.
Polarization dependent isolators are typically used in free-space optical systems, because there the polarization of the source is typically maintained by the system. In optical fibre systems, by contrast, the polarization direction is typically dispersed in non-polarization-maintaining fibres, so the varying angle of polarization leads to loss.
Polarization independent isolator
The polarization independent isolator is made of three parts, an input birefringent wedge (with its ordinary polarization direction vertical and its extraordinary polarization direction horizontal), a Faraday rotator, and an output birefringent wedge (with its ordinary polarization direction at 45°, and its extraordinary polarization direction at −45°).
Light traveling in the forward direction is split by the input birefringent wedge into its vertical (0°) and horizontal (90°) components, called the ordinary ray (o-ray) and the extraordinary ray (e-ray) respectively. The Faraday rotator rotates both the o-ray and e-ray by 45°. This means the o-ray is now at 45°, and the e-ray is at −45°. The output birefringent wedge then recombines the two components.
Light traveling in the backward direction is separated into the o-ray at 45°, and the e-ray at −45°, by the birefringent wedge. The Faraday rotator again rotates both the rays by 45°. Now the o-ray is at 90°, and the e-ray is at 0°. Instead of being focused by the second birefringent wedge, the rays diverge.
Typically collimators are used on either side of the isolator. In the transmitted direction the beam is split and then combined and focused into the output collimator. In the isolated direction the beam is split, and then diverged, so it does not focus at the collimator.
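The angle bookkeeping described above can be sketched in a few lines, tracking polarization axes modulo 180° in the lab frame (a simplification that ignores the wedges' refraction):

```python
FARADAY = 45   # non-reciprocal rotation, same sense for both directions

def rotate(angles):
    """Apply the lab-frame Faraday rotation; axes are unsigned (mod 180)."""
    return [(a + FARADAY) % 180 for a in angles]

# Forward: o-ray at 0 and e-ray at 90 leave the rotator at 45 and 135 (= -45),
# aligned with the output wedge's axes, so the wedge recombines them.
forward = rotate([0, 90])

# Backward: rays enter along 45 and 135 and leave at 90 and 0 -- swapped
# relative to the input wedge's axes, so the rays diverge instead.
backward = rotate([45, 135])
print(forward, backward)   # [45, 135] [90, 0]
```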
Figure 3 shows the propagation of light through a polarization independent isolator. The forward travelling light is shown in blue, and the backward propagating light is shown in red. The rays were traced using an ordinary refractive index of 2, and an extraordinary refractive index of 3. The wedge angle is 7°.
The Faraday rotator
The most important optical element in an isolator is the Faraday rotator. The characteristics that one looks for in a Faraday rotator optic include a high Verdet constant, a low absorption coefficient, a low non-linear refractive index, and a high damage threshold. Also, to prevent self-focusing and other thermal effects, the optic should be as short as possible. The two most commonly used materials for the 700–1100 nm range are terbium-doped borosilicate glass and terbium gallium garnet (TGG) crystal. For long-distance fibre communication, typically at 1310 nm or 1550 nm, yttrium iron garnet (YIG) crystals are used. Commercial YIG-based Faraday isolators reach isolations higher than 30 dB.
Optical isolators differ from quarter-wave-plate-based isolators in that the Faraday rotator provides non-reciprocal rotation while maintaining linear polarization: the polarization rotation due to the Faraday rotator is always in the same relative direction. So in the forward direction the rotation is +45°, and in the reverse direction it is −45°, owing to the change in the relative magnetic field direction, positive one way and negative the other. The rotations therefore add to a total of 90° when the light travels in the forward direction and then the backward direction, which allows the higher isolation to be achieved.
Optical isolators and thermodynamics
It might seem at first glance that a device that allows light to flow in only one direction would violate Kirchhoff's law and the second law of thermodynamics by allowing light energy to flow from a cold object to a hot object while blocking it in the other direction. However, the violation is avoided because the isolator must absorb (not reflect) the light from the hot object and will eventually reradiate it to the cold one. Attempts to re-route the photons back to their source unavoidably create a route by which other photons can travel from the hot body to the cold one, avoiding the paradox.
In optics, optical path length (OPL, denoted Λ in equations), also known as optical length or optical distance, is the length that light needs to travel through a vacuum to create the same phase difference as it would have when traveling through a given medium. It is calculated by taking the product of the geometric length of the optical path followed by light and the refractive index of the homogeneous medium through which the light ray propagates; for inhomogeneous optical media, the product above is generalized as a path integral as part of the ray tracing procedure. A difference in OPL between two paths is often called the optical path difference (OPD). OPL and OPD are important because they determine the phase of the light and govern interference and diffraction of light as it propagates.
In a medium of constant refractive index, n, the OPL for a path of geometrical length s is just
Λ = n s.
If the refractive index varies along the path, the OPL is given by the line integral
Λ = ∫C n ds,
where n is the local refractive index as a function of distance along the path C.
An electromagnetic wave propagating along a path C undergoes the same phase shift as if it were propagating through a vacuum path whose length equals the optical path length of C. Thus, if a wave travels through several different media, the optical path lengths of the individual media can be added to find the total optical path length. The optical path difference between the paths taken by two identical waves can then be used to find the phase change, and finally the interference between the two waves can be calculated.
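As a sketch of adding optical path lengths across media, the following sums n·s over a few segments with illustrative (assumed) indices and lengths, then converts the total OPL to an accumulated phase:

```python
import math

wavelength = 633e-9   # vacuum wavelength, m (illustrative HeNe-laser value)

# (refractive index, geometric length in m) of each medium traversed;
# the indices and lengths are illustrative, not from the text
segments = [(1.000, 0.010),   # 10 mm of air
            (1.500, 0.005),   # 5 mm of glass
            (1.333, 0.002)]   # 2 mm of water

opl = sum(n * s for n, s in segments)      # each constant-n medium adds n*s
phase = 2 * math.pi * opl / wavelength     # total accumulated phase, rad
print(f"OPL = {opl * 1e3:.3f} mm")
```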
Fermat's principle states that the path light takes between two points is a path for which the optical path length is stationary, which in the most familiar cases is a minimum.
Optical path difference
The OPD corresponds to the phase shift undergone by the light emitted from two previously coherent sources when passed through media of different refractive indices. For example, a wave passing through air appears to travel a shorter distance than an identical wave traveling the same distance in glass. This is because a larger number of wavelengths fit in the same distance due to the higher refractive index of the glass.
The OPD can be calculated from the following equation:
OPD = d1 n1 − d2 n2,
where d1 and d2 are the distances of the ray passing through medium 1 or 2, n1 is the greater refractive index (e.g., glass) and n2 is the smaller refractive index (e.g., air).
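A minimal worked example of the formula, with illustrative values (1 mm of glass versus 1 mm of air at a 500 nm vacuum wavelength):

```python
import math

wavelength = 500e-9    # vacuum wavelength, m (illustrative)
n1, d1 = 1.5, 1e-3     # path through glass: index and geometric length (m)
n2, d2 = 1.0, 1e-3     # path of equal geometric length through air

opd = d1 * n1 - d2 * n2                   # optical path difference, m
dphi = 2 * math.pi * opd / wavelength     # resulting phase difference, rad
print(f"OPD = {opd * 1e6:.0f} um, phase difference = {dphi:.1f} rad")
```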
The visible spectrum is the band of the electromagnetic spectrum that is visible to the human eye. Electromagnetic radiation in this range of wavelengths is called visible light (or simply light).
The optical spectrum is sometimes considered to be the same as the visible spectrum, but some authors define the term more broadly, to include the ultraviolet and infrared parts of the electromagnetic spectrum as well, known collectively as optical radiation.
A typical human eye will respond to wavelengths from about 380 to about 750 nanometers. In terms of frequency, this corresponds to a band in the vicinity of 400–790 terahertz. These boundaries are not sharply defined and may vary per individual. Under optimal conditions, these limits of human perception can extend to 310 nm (ultraviolet) and 1100 nm (near infrared).
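The wavelength and frequency bands quoted above are related by c = λν; a quick conversion confirms that 380–750 nm corresponds to roughly 400–790 THz:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def freq_thz(wavelength_nm):
    """Frequency in THz corresponding to a vacuum wavelength in nm."""
    return C / (wavelength_nm * 1e-9) / 1e12

# Long-wavelength limit maps to the low-frequency edge and vice versa
print(f"{freq_thz(750):.0f}-{freq_thz(380):.0f} THz")
```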
The spectrum does not contain all the colors that the human visual system can distinguish. Unsaturated colors such as pink, or purple variations like magenta, for example, are absent because they can only be made from a mix of multiple wavelengths. Colors containing only one wavelength are also called pure colors or spectral colors.
Visible wavelengths pass largely unattenuated through the Earth's atmosphere via the "optical window" region of the electromagnetic spectrum. An example of this phenomenon is that clean air scatters blue light more than red light, and so the midday sky appears blue (apart from the area around the Sun, which appears white because the light is not scattered as much). The optical window is also referred to as the "visible window" because it overlaps the human visible response spectrum. The near-infrared (NIR), medium-wavelength infrared (MWIR), and long-wavelength or far-infrared (LWIR or FIR) windows lie just outside human vision, although other animals may perceive them.
Spectral colors
Colors that can be produced by visible light of a narrow band of wavelengths (monochromatic light) are called pure spectral colors. The various color ranges indicated in the illustration are an approximation: The spectrum is continuous, with no clear boundaries between one color and the next.
History
In the 13th century, Roger Bacon theorized that rainbows were produced by a similar process to the passage of light through glass or crystal.
In the 17th century, Isaac Newton discovered that prisms could disassemble and reassemble white light, and described the phenomenon in his book Opticks. He was the first to use the word spectrum (Latin for "appearance" or "apparition") in this sense in print in 1671 in describing his experiments in optics. Newton observed that, when a narrow beam of sunlight strikes the face of a glass prism at an angle, some is reflected and some of the beam passes into and through the glass, emerging as different-colored bands. Newton hypothesized light to be made up of "corpuscles" (particles) of different colors, with the different colors of light moving at different speeds in transparent matter, red light moving more quickly than violet in glass. The result is that red light is bent (refracted) less sharply than violet as it passes through the prism, creating a spectrum of colors.
Newton originally divided the spectrum into six named colors: red, orange, yellow, green, blue, and violet. He later added indigo as the seventh color because he believed seven to be a perfect number, an idea derived from the ancient Greek sophists, who posited a connection between the colors, the musical notes, the known objects in the Solar System, and the days of the week. The human eye is relatively insensitive to indigo's frequencies, and some people who have otherwise-good vision cannot distinguish indigo from blue and violet. For this reason, some later commentators, including Isaac Asimov, have suggested that indigo should not be regarded as a color in its own right but merely as a shade of blue or violet. Evidence indicates that what Newton meant by "indigo" and "blue" does not correspond to the modern meanings of those color words. Comparing Newton's observation of prismatic colors with a color image of the visible light spectrum shows that "indigo" corresponds to what is today called blue, whereas his "blue" corresponds to cyan.
In the 18th century, Johann Wolfgang von Goethe wrote about optical spectra in his Theory of Colours. Goethe used the word spectrum (Spektrum) to designate a ghostly optical afterimage, as did Schopenhauer in On Vision and Colors. Goethe argued that the continuous spectrum was a compound phenomenon. Where Newton narrowed the beam of light to isolate the phenomenon, Goethe observed that a wider aperture produces not a spectrum but rather reddish-yellow and blue-cyan edges with white between them. The spectrum appears only when these edges are close enough to overlap.
In the early 19th century, the concept of the visible spectrum became more definite, as light outside the visible range was discovered and characterized by William Herschel (infrared) and Johann Wilhelm Ritter (ultraviolet), Thomas Young, Thomas Johann Seebeck, and others.
Young was the first to measure the wavelengths of different colors of light, in 1802.
The connection between the visible spectrum and color vision was explored by Thomas Young and Hermann von Helmholtz in the early 19th century. Their theory of color vision correctly proposed that the eye uses three distinct receptors to perceive color.
Limits to visible range
The visible spectrum is limited to wavelengths that can both reach the retina and trigger visual phototransduction (excite a visual opsin). Insensitivity to UV light is generally limited by transmission through the lens. Insensitivity to IR light is limited by the spectral sensitivity functions of the visual opsins. The range is defined psychometrically by the luminous efficiency function, which accounts for all of these factors. In humans, there is a separate function for each of two visual systems: one for photopic vision, used in daylight and mediated by cone cells, and one for scotopic vision, used in dim light and mediated by rod cells. Each of these functions has a different visible range. However, discussion of the visible range generally assumes photopic vision.
Atmospheric transmission
The visible range of most animals evolved to match the optical window, which is the range of light that can pass through the atmosphere. The ozone layer absorbs almost all UV light (below 315 nm). However, this only affects cosmic light (e.g. sunlight), not terrestrial light (e.g. bioluminescence).
Ocular transmission
Before reaching the retina, light must first transmit through the cornea and lens. UVB light (< 315 nm) is filtered mostly by the cornea, and UVA light (315–400 nm) is filtered mostly by the lens. The lens also yellows with age, attenuating transmission most strongly at the blue part of the spectrum. This can cause xanthopsia as well as a slight truncation of the short-wave (blue) limit of the visible spectrum. Subjects with aphakia are missing a lens, so UVA light can reach the retina and excite the visual opsins; this expands the visible range and may also lead to cyanopsia.
Opsin absorption
Each opsin has a spectral sensitivity function, which defines how likely it is to absorb a photon of each wavelength. The luminous efficiency function is approximately the superposition of the contributing visual opsins. Variance in the position of the individual opsin spectral sensitivity functions therefore affects the luminous efficiency function and the visible range. For example, the long-wave (red) limit changes proportionally to the position of the L-opsin. The positions are defined by the peak wavelength (wavelength of highest sensitivity), so as the L-opsin peak wavelength blue shifts by 10 nm, the long-wave limit of the visible spectrum also shifts 10 nm. Large deviations of the L-opsin peak wavelength lead to a form of color blindness called protanomaly and a missing L-opsin (protanopia) shortens the visible spectrum by about 30 nm at the long-wave limit. Forms of color blindness affecting the M-opsin and S-opsin do not significantly affect the luminous efficiency function nor the limits of the visible spectrum.
Different definitions
Regardless of actual physical and biological variance, the definition of the limits is not standard and will change depending on the industry. For example, some industries may be concerned with practical limits, so would conservatively report 420–680 nm, while others, concerned with psychometrics and capturing the broadest spectrum, would liberally report 380–750 nm, or even 380–800 nm. The luminous efficiency function in the NIR does not have a hard cutoff, but rather an exponential decay, such that the function's value (or vision sensitivity) at 1,050 nm is about 10⁹ times weaker than at 700 nm; much higher intensity is therefore required to perceive 1,050 nm light than 700 nm light.
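As back-of-the-envelope arithmetic on that decay: nine orders of magnitude over the 350 nm between 700 nm and 1,050 nm implies roughly one factor of ten per 39 nm, assuming a purely exponential falloff (an assumption for illustration, not a measured figure):

```python
# Sensitivity drops by ~1e9 between 700 nm and 1,050 nm (from the text)
decades = 9               # log10(1e9)
span_nm = 1050 - 700      # wavelength span over which the drop occurs

nm_per_decade = span_nm / decades
print(f"~{nm_per_decade:.0f} nm per factor-of-10 loss in sensitivity")
```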
Vision outside the visible spectrum
Under ideal laboratory conditions, subjects may perceive infrared light up to at least 1,064 nm. While 1,050 nm NIR light can evoke red, suggesting direct absorption by the L-opsin, there are also reports that pulsed NIR lasers can evoke green, which suggests two-photon absorption may be enabling extended NIR sensitivity.
Similarly, young subjects may perceive ultraviolet wavelengths down to about 310–313 nm, but detection of light below 380 nm may be due to fluorescence of the ocular media, rather than direct absorption of UV light by the opsins. As UVA light is absorbed by the ocular media (lens and cornea), it may fluoresce and be released at a lower energy (longer wavelength) that can then be absorbed by the opsins. For example, when the lens absorbs 350 nm light, the fluorescence emission spectrum is centered on 440 nm.
Non-visual light detection
In addition to the photopic and scotopic systems, humans have other systems for detecting light that do not contribute to the primary visual system. For example, melanopsin has an absorption range of 420–540 nm and regulates circadian rhythm and other reflexive processes. Since the melanopsin system does not form images, it is not strictly considered vision and does not contribute to the visible range.
In non-humans
The visible spectrum is defined as that visible to humans, but the variance between species is large. Not only can cone opsins be spectrally shifted to alter the visible range, but vertebrates with 4 cones (tetrachromatic) or 2 cones (dichromatic) relative to humans' 3 (trichromatic) will also tend to have a wider or narrower visible spectrum than humans, respectively.
Vertebrates tend to have 1–4 different opsin classes:
longwave sensitive (LWS) with peak sensitivity between 500–570 nm,
middlewave sensitive (MWS) with peak sensitivity between 480–520 nm,
shortwave sensitive (SWS) with peak sensitivity between 415–470 nm, and
violet/ultraviolet sensitive (VS/UVS) with peak sensitivity between 355–435 nm.
Testing the visual systems of animals behaviorally is difficult, so the visible range of animals is usually estimated by comparing the peak wavelengths of opsins with those of typical humans (S-opsin at 420 nm and L-opsin at 560 nm).
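A hedged sketch of that comparison: shift the nominal human limits by the offsets of an animal's opsin peaks from the human S- and L-opsin peaks. The mouse-like peak values below are illustrative, and the one-to-one shift is a simplification of the proportionality described earlier:

```python
HUMAN_S, HUMAN_L = 420, 560        # human opsin peak wavelengths, nm
HUMAN_LOW, HUMAN_HIGH = 380, 750   # nominal human visible limits, nm

def estimated_visible_range(s_peak_nm, l_peak_nm):
    """Shift each human limit by the corresponding opsin-peak offset."""
    return (HUMAN_LOW + (s_peak_nm - HUMAN_S),
            HUMAN_HIGH + (l_peak_nm - HUMAN_L))

# Mouse-like opsins (illustrative peaks: UVS at 360 nm, M at 510 nm)
print(estimated_visible_range(360, 510))   # -> (320, 700)
```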
Mammals
Most mammals have retained only two opsin classes (LWS and VS), likely due to the nocturnal bottleneck. However, Old World primates (including humans) have since evolved two versions in the LWS class to regain trichromacy. Unlike most mammals, rodents' UVS opsins have remained at shorter wavelengths. Along with their lack of UV filters in the lens, mice have a UVS opsin that can detect down to 340 nm. While allowing UV light to reach the retina can lead to retinal damage, the short lifespan of mice compared with other mammals may minimize this disadvantage relative to the advantage of UV vision. Dogs have two cone opsins at 429 nm and 555 nm, so see almost the entire visible spectrum of humans, despite being dichromatic. Horses have two cone opsins at 428 nm and 539 nm, yielding a slightly more truncated red vision.
Birds
Most other vertebrates (birds, lizards, fish, etc.) have retained their tetrachromacy, including UVS opsins that extend further into the ultraviolet than humans' VS opsin. The sensitivity of avian UVS opsins varies greatly, from 355–425 nm, and of LWS opsins from 560–570 nm. This translates to some birds with a visible spectrum on par with humans, and other birds with greatly expanded sensitivity to UV light. The LWS opsin of birds is sometimes reported to have a peak wavelength above 600 nm, but this is an effective peak wavelength that incorporates the filter of avian oil droplets. The peak wavelength of the LWS opsin alone is the better predictor of the long-wave limit. A possible benefit of avian UV vision involves sex-dependent markings on their plumage that are visible only in the ultraviolet range.
Fish
Teleosts (bony fish) are generally tetrachromatic. The sensitivity of fish UVS opsins varies from 347–383 nm, and of LWS opsins from 500–570 nm. However, some fish that use alternative chromophores can extend their LWS opsin sensitivity to 625 nm. The popular belief that the common goldfish is the only animal that can see both infrared and ultraviolet light is incorrect, because goldfish cannot see infrared light.
Invertebrates
The visual systems of invertebrates deviate greatly from vertebrates, so direct comparisons are difficult. However, UV sensitivity has been reported in most insect species.
Bees and many other insects can detect ultraviolet light, which helps them find nectar in flowers. Plant species that depend on insect pollination may owe reproductive success to their appearance in ultraviolet light rather than how colorful they appear to humans. Bees' long-wave limit is at about 590 nm. Mantis shrimp exhibit up to 14 opsins, enabling a visible range from below 300 nm to above 700 nm.
Thermal vision
Some snakes can "see" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes, and other snakes with the organ may detect warm bodies from a meter away. It may also be used in thermoregulation and predator detection.
Spectroscopy
Spectroscopy is the study of objects based on the spectrum of color they emit, absorb or reflect. Visible-light spectroscopy is an important tool in astronomy (as is spectroscopy at other wavelengths), where scientists use it to analyze the properties of distant objects. Chemical elements and small molecules can be detected in astronomical objects by observing emission lines and absorption lines. For example, helium was first detected by analysis of the spectrum of the Sun. The shift in frequency of spectral lines is used to measure the Doppler shift (redshift or blueshift) of distant objects to determine their velocities towards or away from the observer. Astronomical spectroscopy uses high-dispersion diffraction gratings to observe spectra at very high spectral resolutions.
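As a small example of the Doppler measurement described above, the non-relativistic formula v = c·Δλ/λ applied to an illustrative (assumed) shift of the hydrogen H-alpha line:

```python
C_KM_S = 299_792.458   # speed of light, km/s

def radial_velocity(lambda_observed_nm, lambda_rest_nm):
    """Non-relativistic Doppler velocity in km/s; positive means receding."""
    return C_KM_S * (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

# H-alpha: rest wavelength 656.28 nm, observed at 657.00 nm (illustrative)
v = radial_velocity(657.00, 656.28)
print(f"v = {v:.0f} km/s (redshifted, receding)")
```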
Cod (pl.: cod) is the common name for the demersal fish genus Gadus, belonging to the family Gadidae. Cod is also used as part of the common name for a number of other fish species, while one species that belongs to the genus Gadus is not commonly called cod (the Alaska pollock, Gadus chalcogrammus).
The two most common species of cod are the Atlantic cod (Gadus morhua), which lives in the colder waters and deeper sea regions throughout the North Atlantic, and the Pacific cod (Gadus macrocephalus), which is found in both eastern and western regions of the northern Pacific. Gadus morhua was named by Linnaeus in 1758. (However, G. morhua callarias, a low-salinity, nonmigratory race restricted to parts of the Baltic, was originally described as Gadus callarias by Linnaeus.)
Cod as food is popular in several parts of the world. It has a mild flavour and a dense, flaky, white flesh. Cod livers are processed to make cod liver oil, a common source of vitamin A, vitamin D, vitamin E, and omega-3 fatty acids (EPA and DHA). Young Atlantic cod or haddock prepared in strips for cooking is called scrod. In the United Kingdom, Atlantic cod is one of the most common ingredients in fish and chips, along with haddock and plaice.
Species
At various times in the past, taxonomists included many species in the genus Gadus. Most of these are now either classified in other genera, or have been recognized as forms of one of three species. All these species have a number of common names, most of them ending with the word "cod", whereas other, closely related species have other common names (such as pollock and haddock). However, many other, unrelated species also have common names ending with cod. The usage often changes with different localities and at different times.
Cod in the genus Gadus
Three species in the genus Gadus are currently called cod:
The fourth species of the genus Gadus, Gadus chalcogrammus, is commonly called Alaska pollock or walleye pollock. However, less widespread alternative trade names, such as snow cod or bigeye cod, highlight the fish's membership in the cod genus.
Related species
Cod forms part of the common name of many other fish no longer classified in the genus Gadus. Many are members of the family Gadidae; others are members of three related families within the order Gadiformes whose names include the word "cod": the morid cods, Moridae (100 or so species); the eel cods, Muraenolepididae (four species); and the Eucla cod, Euclichthyidae (one species). The tadpole cod family (Ranicipitidae) has now been placed in Gadidae.
Some fish have common names derived from "cod", such as codling, codlet, or tomcod. ("Codling" is also used as a name for a young cod.)
Other species
Some fish commonly known as cod are unrelated to Gadus. Part of this name confusion is market-driven. Severely shrunken Atlantic cod stocks have led to the marketing of cod replacements using culinary names of the form "x cod", according to culinary rather than phyletic similarity. The common names for the following species have become well established; note that all inhabit the Southern Hemisphere.
Perciformes
Fish of the order Perciformes that are commonly called "cod" include:
Blue cod Parapercis colias
Eastern freshwater cod Maccullochella ikei
Mary River cod Maccullochella mariensis
Murray cod Maccullochella peelii
Potato cod Epinephelus tukula
Sleepy cod Oxyeleotris lineolatus
Trout cod Maccullochella macquariensis
The notothen family, Nototheniidae, including:
Antarctic cod Dissostichus mawsoni
Black cod Notothenia microlepidota
Maori cod Paranotothenia magellanica
Rock cod, reef cod, and coral cod
Almost all coral cod, reef cod or rock cod are also in order Perciformes. Most are better known as groupers, and belong to the family Serranidae. Others belong to the Nototheniidae. Two exceptions are the Australasian red rock cod, which belongs to a different order (see below), and the fish known simply as the rock cod and as soft cod in New Zealand, Lotella rhacina, which as noted above actually is related to the true cod (it is a morid cod).
Scorpaeniformes
From the order Scorpaeniformes:
Ling cod Ophiodon elongatus
Red rock cod Scorpaena papillosa
Rock cod Sebastes
Ophidiiformes
The tadpole cod family, Ranicipitidae, and the Eucla cod family, Euclichthyidae, were formerly classified in the order Ophidiiformes, but are now grouped with the Gadiformes.
Marketed as cod
Some fish that do not have "cod" in their names are sometimes sold as cod. Haddock and whiting belong to the same family, the Gadidae, as cod.
Haddock Melanogrammus aeglefinus
Whiting Merlangius merlangus
Patagonian toothfish or Chilean seabass
Characteristics
Cods of the genus Gadus have three rounded dorsal and two anal fins. The pelvic fins are small, with the first ray extended, and are set under the gill cover (i.e. the throat region), in front of the pectoral fins. The upper jaw extends over the lower jaw, which has a well-developed chin barbel. The eyes are medium-sized, approximately the same as the length of the chin barbel. Cod have a distinct white lateral line running from the gill slit above the pectoral fin, to the base of the caudal or tail fin. The back tends to be a greenish to sandy brown, and shows extensive mottling, especially towards the lighter sides and white belly. Dark brown colouration of the back and sides is not uncommon, especially for individuals that have resided in rocky inshore regions.
The Atlantic cod can change colour at certain water depths. It has two distinct colour phases: gray-green and reddish brown. Its average weight is , but specimens weighing up to have been recorded. Pacific cod are smaller than Atlantic cod and are darker in colour.
Distribution
Atlantic cod (Gadus morhua) live in the colder waters and deeper sea regions throughout the North Atlantic. Pacific cod (Gadus macrocephalus) is found in both eastern and western regions of the Pacific.
Atlantic cod could be further divided into several stocks, including the Arcto-Norwegian, North Sea, Baltic Sea, Faroe, Iceland, East Greenland, West Greenland, Newfoundland, and Labrador stocks. There seems to be little interchange between the stocks, although migrations to their individual breeding grounds may involve distances of or more. For instance, eastern Baltic cod shows specific reproductive adaptations to low salinity compared to Western Baltic and Atlantic cod.
Atlantic cod occupy varied habitats, favouring rough ground, especially inshore, and are demersal in depths between , on average, although not uncommonly to depths of . Off the Norwegian and New England coasts and on the Grand Banks of Newfoundland, cod congregate at certain seasons in water of depth. Cod are gregarious and form schools, although shoaling tends to be a feature of the spawning season.
Life cycle
Spawning of northeastern Atlantic cod occurs between January and April (March and April are the peak months), at a depth of in specific spawning grounds at water temperatures between . Around the UK, the major spawning grounds are in the middle to southern North Sea, the start of the Bristol Channel (north of Newquay), the Irish Channel (both east and west of the Isle of Man), around Stornoway, and east of Helmsdale.
Prespawning courtship involves fin displays and male grunting, which leads to pairing. The male inverts himself beneath the female, and the pair swim in circles while spawning. The eggs are planktonic and hatch between eight and 23 days, with larvae reaching in length. This planktonic phase lasts some ten weeks, enabling the young cod to increase its body weight by 40-fold, and growing to about . The young cod then move to the seabed and change their diet to small benthic crustaceans, such as isopods and small crabs. They increase in size to in the first six months, by the end of their first year, and to by the end of the second. Growth tends to be slower at higher latitudes. Cod reach maturity at about 3 to 4 years of age. Changes in growth rate over decades have been reported for particular stocks; current eastern Baltic cod show the lowest growth observed since 1955.
Ecology
Adult cod are active hunters, feeding on sand eels, whiting, haddock, small cod, squid, crabs, lobsters, mussels, worms, mackerel, and molluscs.
In the Baltic Sea the most important prey species are herring and sprat. Many studies that analyze the stomach contents of these fish indicate that cod is the top predator, preying on the herring and sprat. Sprat form particularly high concentrations in the Bornholm Basin in the southern Baltic Sea. Although cod feed primarily on adult sprat, sprat tend to prey on the cod eggs and larvae.
Cod and related species are plagued by parasites. For example, the cod worm, Lernaeocera branchialis, starts life as a copepod-like larva, a small free-swimming crustacean. The first host used by the larva is a flatfish or lumpsucker, which it captures with grasping hooks at the front of its body. It penetrates the fish with a thin filament, which it uses to suck the fish's blood. The nourished larvae then mate on the fish. The female larva, with her now fertilized eggs, then finds a cod, or a cod-like fish such as a haddock or whiting. There the larva clings to the gills while it metamorphoses into a plump sinusoidal wormlike body with a coiled mass of egg strings at the rear. The front part of the worm's body penetrates the body of the cod until it enters the rear bulb of the host's heart. There, firmly rooted in the cod's circulatory system, the front part of the parasite develops like the branches of a tree, reaching into the main artery. In this way, the worm extracts nutrients from the cod's blood, remaining safely tucked beneath the cod's gill cover until it releases a new generation of offspring into the water.
Fisheries
The 2006 northwest Atlantic cod quota is 23,000 tons, representing half the available stocks, while the northeast Atlantic quota is 473,000 tons. Pacific cod is currently enjoying strong global demand. The 2006 total allowable catch (TAC) for the Gulf of Alaska and Aleutian Islands was 260,000 tons.
Aquaculture
Farming of Atlantic cod has received a significant amount of interest due to the overall trend of increasing cod prices alongside reduced wild catches. However, progress in creating large-scale farming of cod has been slow, mainly due to bottlenecks in the larval production stage, where survival and growth are often unpredictable. It has been suggested that this bottleneck may be overcome by ensuring cod larvae are fed diets with similar nutritional content to the copepods they feed on in the wild. Recent examples have shown that increasing dietary levels of minerals such as selenium, iodine and zinc may improve survival and/or biomarkers for health in aquaculture-reared cod larvae.
As food
Cod is popular as a food with a mild flavour and a dense, flaky white flesh. Cod livers are processed to make cod liver oil, an important source of vitamin A, vitamin D, vitamin E and omega-3 fatty acids (EPA and DHA).
Young Atlantic cod or haddock prepared in strips for cooking is called scrod. In the United Kingdom, Atlantic cod is one of the most common ingredients in fish and chips, along with haddock and plaice. Cod's soft liver can be tinned (canned) and eaten.
History
Cod has been an important economic commodity in international markets since the Viking period (around 800 AD). Norwegians travelled with dried cod and soon a dried cod market developed in southern Europe. This market has lasted for more than 1,000 years, enduring the Black Death, wars and other crises, and is still an important Norwegian fish trade. The Portuguese began fishing cod in the 15th century. Clipfish is widely enjoyed in Portugal. The Basques played an important role in the cod trade, and allegedly found the Canadian fishing banks before Columbus' discovery of America. The North American east coast developed in part due to the vast cod stocks. Many cities in the New England area are located near cod fishing grounds. The fish was so important to the history and development of Massachusetts, the state's House of Representatives hung a wood carving of a codfish, known as the Sacred Cod of Massachusetts, in its chambers.
Apart from the long history, cod differ from most fish because the fishing grounds are far from population centres. The large cod fisheries along the coast of North Norway (and in particular close to the Lofoten islands) have been developed almost uniquely for export, depending on sea transport of stockfish over large distances. Since the introduction of salt, dried and salted cod (clipfish or 'klippfisk' in Norwegian) has also been exported. By the end of the 14th century, the Hanseatic League dominated trade operations and sea transport, with Bergen as the most important port.
William Pitt the Elder, criticizing the Treaty of Paris in Parliament, claimed cod was "British gold"; and that it was folly to restore Newfoundland fishing rights to the French.
In the 17th and 18th centuries in the New World, especially in Massachusetts and Newfoundland, cod became a major commodity, creating trade networks and cross-cultural exchanges. In 1733, Britain tried to gain control over trade between New England and the British Caribbean by imposing the Molasses Act, which they believed would eliminate the trade by making it unprofitable. The cod trade grew instead, because the "French were eager to work with the New Englanders in a lucrative contraband arrangement". In addition to increasing trade, the New England settlers organized into a "codfish aristocracy". The colonists rose up against Britain's "tariff on an import".
In the 20th century, Iceland re-emerged as a fishing power and entered the Cod Wars. In the late 20th and early 21st centuries, fishing off the European and American coasts severely depleted stocks and became a major political issue. The necessity of restricting catches to allow stocks to recover upset the fishing industry and politicians reluctant to hurt employment.
Collapse of the Atlantic northwest cod fishery
On July 2, 1992, the Honourable John Crosbie, Canadian Federal Minister of Fisheries and Oceans, declared a two-year moratorium on the Northern Cod fishery, a designated fishing region off the coast of Newfoundland, after data showed that the total cod biomass had collapsed to less than 1% of its earlier value. The minister championed the measure as a temporary solution, allowing the cod population time to recover. The fisheries had shaped the lives and communities of Canada's Atlantic coast for the preceding five centuries. Societies dependent on fishing have a strong mutual relationship with the fisheries: the act of fishing changes the ecosystem's balance, which forces the fishery and, in turn, the fishing societies to adapt to new ecological conditions.
The near-complete destruction of the Atlantic northwest cod biomass devastated coastal communities, which had been overexploiting the same cod population for decades. The fishermen along the Atlantic northwest had employed modern fishing technologies, including the ecologically devastating practice of trawling, especially in the years leading up to the 1990s, in the misguided belief that fishing stocks were perpetually plentiful and could not be depleted. After this assumption was abruptly and empirically shown to be incorrect, to the dismay of government officials and rural workers, some 19,000 fishermen and cod processing plant workers in Newfoundland lost their employment, and the economic engine of rural Newfoundland stalled. Nearly 40,000 workers and harvesters in the provinces of Newfoundland and Labrador applied for the federal relief program TAGS (the Atlantic Groundfish Strategy). Abandoned and rusting fishing boats still litter the coasts of Newfoundland and the Canadian northwest to this day.
The fishery minister, John Crosbie, after delivering a speech on July 1, 1992, the day before the declaration of the moratorium, was publicly heckled and verbally harassed by disgruntled locals at a fishing village. The moratorium, initially intended to last only two years, was indefinitely extended after it became evident that cod populations had not recovered but had instead continued to decline in both size and numbers, owing to the damage caused by decades of destructive fishing practices and to the fact that the moratorium has, to this day, permitted exceptions for "personal consumption" food fisheries. Some 12,000 tons of Northwest cod are still caught every year along the Newfoundland coast by local fishermen.
The collapse of the four-million-ton biomass, which had persisted through several marine extinction events over tens of millions of years, within a timespan of no more than 20 years, is often cited by researchers as one of the most visible examples of the phenomenon of the "tragedy of the commons". Factors implicated as contributing to the collapse include overfishing, government mismanagement, disregard of scientific uncertainty, warming habitat waters, declining reproduction, and plain human ignorance. The Northern Cod biomass has been recovering slowly since the imposition of the moratorium. However, as of 2021, the growth of the cod population has been stagnant since 2017, and some scientists argue that the population will not rebound unless the Fisheries Department of Canada lowers its yearly quota to 5,000 tons.
The photic zone (or euphotic zone, epipelagic zone, or sunlight zone) is the uppermost layer of a body of water that receives sunlight, allowing phytoplankton to perform photosynthesis. It undergoes a series of physical, chemical, and biological processes that supply nutrients into the upper water column. The photic zone is home to the majority of aquatic life due to the activity (primary production) of the phytoplankton. The thicknesses of the photic and euphotic zones vary with the intensity of sunlight as a function of season and latitude and with the degree of water turbidity. The bottommost, or aphotic, zone is the region of perpetual darkness that lies beneath the photic zone and includes most of the ocean waters.
Photosynthesis in photic zone
In the photic zone, the photosynthesis rate exceeds the respiration rate, because abundant solar energy is available as an energy source for primary producers such as phytoplankton, which grow rapidly under strong sunlight. In fact, ninety-five percent of photosynthesis in the ocean occurs in the photic zone. Below the compensation depth, beyond the photic zone, there is little to no phytoplankton because of insufficient sunlight. The zone which extends from the base of the euphotic zone to the aphotic zone is sometimes called the dysphotic zone.
Life in the photic zone | Photic zone | Wikipedia | 340 | 41519 | https://en.wikipedia.org/wiki/Photic%20zone | Physical sciences | Oceanography | Earth science |
Ninety percent of marine life lives in the photic zone, which is approximately two hundred meters deep. This includes phytoplankton, such as dinoflagellates, diatoms, cyanobacteria, coccolithophores, and cryptomonads. It also includes zooplankton, the consumers in the photic zone, among them both carnivores and herbivores. Copepods are small crustaceans distributed throughout the photic zone. Finally, there are nekton (animals that can propel themselves, such as fish, squids, and crabs), which are the largest and most conspicuous animals in the photic zone, but whose numbers are the smallest among all the groups. Phytoplankton are microscopic organisms living suspended in the water column that have little or no means of motility. They are primary producers that use solar energy as a food source.
Detritivores and scavengers are rare in the photic zone. Microbial decomposition of dead organisms begins here and continues once the bodies sink to the aphotic zone where they form the most important source of nutrients for deep sea organisms. The depth of the photic zone depends on the transparency of the water. If the water is very clear, the photic zone can become very deep. If it is very murky, it can be only fifty feet (fifteen meters) deep.
Animals within the photic zone use the cycle of light and dark as an important environmental signal, and migration is directly linked to this cycle. Fishes such as herrings and sardines, which consistently live within the photic zone, use dusk and dawn as cues for when to migrate; the light cycle of the photic zone thus provides them with a sense of time.
Nutrient uptake in the photic zone
Due to biological uptake, the photic zone has relatively low nutrient concentrations. As a result, phytoplankton do not receive enough nutrients when the water column is highly stable. The spatial distribution of organisms is controlled by a number of factors. Physical factors include temperature, hydrostatic pressure, and turbulent mixing, such as the upward turbulent flux of inorganic nitrogen across the nutricline. Chemical factors include oxygen and trace elements. Biological factors include grazing and migrations. Upwelling carries nutrients from the deep waters into the photic zone, strengthening phytoplankton growth; remixing and upwelling eventually bring nutrient-rich wastes back into the photic zone, and Ekman transport brings additional nutrients. Nutrient pulse frequency affects competition among phytoplankton. Because phytoplankton form the first link in the food chain, what happens to them has a rippling effect on other species. Besides phytoplankton, many other organisms live in this zone and utilize these nutrients. The majority of ocean life occurs in the photic zone, the smallest ocean zone by water volume; although small, the photic zone has a large impact on those that reside in it.
Photic zone depth
The euphotic depth is, by definition, the depth at which light is attenuated to 1% of its surface intensity. Accordingly, its thickness depends on the extent of light attenuation in the water column. As incoming light at the surface can vary widely, this says little about the net growth of phytoplankton. Typical euphotic depths vary from only a few centimetres in highly turbid eutrophic lakes to around 200 metres in the open ocean. The depth also varies with seasonal changes in turbidity, which can be strongly driven by phytoplankton concentrations, such that the depth of the photic zone often decreases as primary production increases. Below the euphotic depth, the respiration rate is greater than the photosynthesis rate. Phytoplankton production is important because it is interwoven with other marine food webs.
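Assuming simple exponential (Beer–Lambert) attenuation with a diffuse attenuation coefficient Kd (per metre), the 1% euphotic depth follows directly as ln(100)/Kd. The sketch below is illustrative; the Kd values are assumptions chosen to reproduce the range of depths quoted above.

```python
import math

def euphotic_depth(k_d: float) -> float:
    """Depth (m) at which light falls to 1% of its surface value,
    assuming Beer-Lambert attenuation: I(z)/I0 = exp(-k_d * z)."""
    return math.log(100) / k_d

# Assumed attenuation coefficients (1/m), not measured values:
print(euphotic_depth(0.023))  # clear open ocean: ~200 m
print(euphotic_depth(9.2))    # highly turbid eutrophic lake: ~0.5 m
```

Because the euphotic depth is inversely proportional to Kd, any process that raises turbidity, such as a phytoplankton bloom, shallows the photic zone, consistent with the seasonal behaviour described above.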
Light attenuation
Most of the solar energy reaching the Earth is in the range of visible light, with wavelengths between about 400–700 nm. Each colour of visible light has a unique wavelength, and together they make up white light. The shortest wavelengths are on the violet and ultraviolet end of the spectrum, while the longest wavelengths are at the red and infrared end. In between, the colours of the visible spectrum comprise the familiar "ROYGBIV": red, orange, yellow, green, blue, indigo, and violet.
Water is very effective at absorbing incoming light, so the amount of light penetrating the ocean declines rapidly (is attenuated) with depth. At one metre depth only 45% of the solar energy that falls on the ocean surface remains. At 10 metres depth only 16% of the light is still present, and only 1% of the original light is left at 100 metres. No light penetrates beyond 1000 metres.
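A single-coefficient exponential model reproduces the deep-water trend (about 1% at 100 metres) but overestimates the light remaining in the first few metres, because red and infrared wavelengths are absorbed much faster near the surface than one averaged coefficient captures. A minimal sketch, with Kd = 0.046 per metre chosen (an assumption) so that roughly 1% survives at 100 m:

```python
import math

def fraction_remaining(depth_m: float, k_d: float = 0.046) -> float:
    """Fraction of surface light left at depth_m metres under a
    single-coefficient exponential attenuation model (assumed k_d)."""
    return math.exp(-k_d * depth_m)

for z in (1, 10, 100):
    print(f"{z} m: {100 * fraction_remaining(z):.1f}% of surface light")
```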
In addition to overall attenuation, the oceans absorb the different wavelengths of light at different rates. The wavelengths at the extreme ends of the visible spectrum are attenuated faster than those wavelengths in the middle. Longer wavelengths are absorbed first; red is absorbed in the upper 10 metres, orange by about 40 metres, and yellow disappears before 100 metres. Shorter wavelengths penetrate further, with blue and green light reaching the deepest depths.
This is why things appear blue underwater. How colours are perceived by the eye depends on the wavelengths of light that are received by the eye. An object appears red to the eye because it reflects red light and absorbs other colours, so the only colour reaching the eye is red. Blue is the only colour of light available at depth underwater, so it is the only colour that can be reflected back to the eye, and everything has a blue tinge under water. A red object at depth will not appear red to us because there is no red light available to reflect off the object. Objects in water will only appear as their real colours near the surface where all wavelengths of light are still available, or if the other wavelengths of light are provided artificially, such as by illuminating the object with a dive light.
Water in the open ocean appears clear and blue because it contains much less particulate matter, such as phytoplankton or other suspended particles, and the clearer the water, the deeper the light penetration. Blue light penetrates deeply and is scattered by the water molecules, while all other colours are absorbed; thus the water appears blue. On the other hand, coastal water often appears greenish. Coastal water contains much more suspended silt and algae and microscopic organisms than the open ocean. Many of these organisms, such as phytoplankton, absorb light in the blue and red range through their photosynthetic pigments, leaving green as the dominant wavelength of reflected light. Therefore the higher the phytoplankton concentration in water, the greener it appears. Small silt particles may also absorb blue light, further shifting the colour of water away from blue when there are high concentrations of suspended particles.
The ocean can be divided into depth layers depending on the amount of light penetration, as discussed in pelagic zone. The upper 200 metres is referred to as the photic or euphotic zone. This represents the region where enough light can penetrate to support photosynthesis, and it corresponds to the epipelagic zone. From 200 to 1000 metres lies the dysphotic zone, or the twilight zone (corresponding with the mesopelagic zone). There is still some light at these depths, but not enough to support photosynthesis. Below 1000 metres is the aphotic (or midnight) zone, where no light penetrates. This region includes the majority of the ocean volume, which exists in complete darkness.
Paleoclimatology
Phytoplankton are unicellular microorganisms which form the base of the ocean food chains. They are dominated by diatoms, which grow silicate shells called frustules. When diatoms die their shells can settle on the seafloor and become microfossils. Over time, these microfossils become buried as opal deposits in the marine sediment. Paleoclimatology is the study of past climates. Proxy data is used in order to relate elements collected in modern-day sedimentary samples to climatic and oceanic conditions in the past. Paleoclimate proxies refer to preserved or fossilized physical markers which serve as substitutes for direct meteorological or ocean measurements. An example of proxies is the use of diatom isotope records of δ13C, δ18O, δ30Si (δ13Cdiatom, δ18Odiatom, and δ30Sidiatom). In 2015, Swann and Snelling used these isotope records to document historic changes in the photic zone conditions of the north-west Pacific Ocean, including nutrient supply and the efficiency of the soft-tissue biological pump, from the modern day back to marine isotope stage 5e, which coincides with the last interglacial period. Peaks in opal productivity in the marine isotope stage are associated with the breakdown of the regional halocline stratification and increased nutrient supply to the photic zone.
The initial development of the halocline and stratified water column has been attributed to the onset of major Northern Hemisphere glaciation at 2.73 Ma, which increased the flux of freshwater to the region, via increased monsoonal rainfall and/or glacial meltwater, and sea surface temperatures. The decrease of abyssal water upwelling associated with this may have contributed to the establishment of globally cooler conditions and the expansion of glaciers across the Northern Hemisphere from 2.73 Ma. While the halocline appears to have prevailed through the late Pliocene and early Quaternary glacial–interglacial cycles, other studies have shown that the stratification boundary may have broken down in the late Quaternary at glacial terminations and during the early part of interglacials.
Phytoplankton
Phytoplankton are restricted to the photic zone, as their growth is completely dependent upon photosynthesis; they are therefore generally confined to the upper 50–100 m of the ocean. Their growth also depends on inputs from land, for example minerals dissolved from rocks and mineral nutrients from generations of plants and animals that have made their way into the photic zone.
An increase in the amount of phytoplankton also creates an increase in zooplankton, which feed on the phytoplankton at the base of the food chain.
Dimethylsulfide
Dimethylsulfide loss within the photic zone is controlled by microbial uptake and photochemical degradation. The compound helps regulate the sulfur cycle and ecology within the ocean: marine bacteria, algae, corals, and most other organisms in the ocean release it, drawing on a range of gene families.
The compound can be toxic to humans if swallowed, absorbed through the skin, or inhaled. Sulfur-containing proteins in plants and animals depend on related compounds, making dimethylsulfide a significant part of the ecology of the photic zone.
The Avogadro constant, commonly denoted or , is an SI defining constant with an exact value of (reciprocal moles). It is the defined number of constituent particles (usually molecules, atoms, ions, or ion pairs—in general, entities) per mole (SI unit), and is used as a normalization factor relating the amount of substance, n(X), in a sample of a substance X to the corresponding number of entities, N(X): n(X) = N(X)/NA. By setting N(X) = 1, a reciprocal Avogadro constant is seen to be equal to one entity, which means that n(X) can be interpreted as an aggregate of N(X) entities. In the SI dimensional analysis of measurement units, the dimension of the Avogadro constant is the reciprocal of amount of substance, denoted N−1. The Avogadro number, sometimes denoted , is the numeric value of the Avogadro constant (i.e., without a unit), a dimensionless number; the value was chosen based on the number of atoms in 12 grams of carbon-12, in alignment with the historical definition of the mole. The constant is named after the Italian physicist and chemist Amedeo Avogadro (1776–1856).
The Avogadro constant is also the factor that converts the average mass () of one particle, in grams, to the molar mass () of the substance, in grams per mole (g/mol). That is, .
The constant also relates the molar volume (the volume per mole) of a substance to the average volume nominally occupied by one of its particles, when both are expressed in the same units of volume. For example, since the molar volume of water in ordinary conditions is about , the volume occupied by one molecule of water is about , or about (cubic nanometres). For a crystalline substance, relates the volume of a crystal with one mole worth of repeating unit cells, to the volume of a single cell (both in the same units).
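Both conversions can be checked numerically. The sketch below uses the exact SI value of the Avogadro constant; the water figures are the approximate values quoted above.

```python
N_A = 6.02214076e23  # Avogadro constant (1/mol), exact since the 2019 SI redefinition

# Molar mass (g/mol) -> average mass of one molecule (g):
molar_mass_water = 18.0153              # g/mol
mass_per_molecule = molar_mass_water / N_A
print(mass_per_molecule)                # ~2.99e-23 g

# Molar volume (cm^3/mol) -> volume per molecule (cm^3):
molar_volume_water = 18.0               # cm^3/mol, approximate, ordinary conditions
vol_per_molecule = molar_volume_water / N_A
print(vol_per_molecule)                 # ~2.99e-23 cm^3, i.e. about 0.030 nm^3
```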
Definition | Avogadro constant | Wikipedia | 444 | 41545 | https://en.wikipedia.org/wiki/Avogadro%20constant | Physical sciences | Substance | Chemistry |
The Avogadro constant was historically derived from the old definition of the mole as the amount of substance in 12 grams of carbon-12 (12C); or, equivalently, the number of daltons in a gram, where the dalton is defined as of the mass of a 12C atom. By this old definition, the numerical value of the Avogadro constant in mol−1 (the Avogadro number) was a physical constant that had to be determined experimentally.
The redefinition of the mole in 2019, as being the amount of substance containing exactly particles, meant that the mass of 1 mole of a substance is now exactly the product of the Avogadro number and the average mass of its particles. The dalton, however, is still defined as of the mass of a 12C atom, which must be determined experimentally and is known only with finite accuracy. The prior experiments that aimed to determine the Avogadro constant are now re-interpreted as measurements of the value in grams of the dalton.
By the old definition of mole, the numerical value of one mole of a substance, expressed in grams, was precisely equal to the average mass of one particle in daltons. With the new definition, this numerical equivalence is no longer exact, as it is affected by the uncertainty of the value of the dalton in SI units. However, it is still applicable for all practical purposes. For example, the average mass of one molecule of water is about 18.0153 daltons, and of one mole of water is about 18.0153 grams. Also, the Avogadro number is the approximate number of nucleons (protons and neutrons) in one gram of ordinary matter.
In older literature, the Avogadro number was also denoted , although that conflicts with the symbol for number of particles in statistical mechanics.
History
Origin of the concept
The Avogadro constant is named after the Italian scientist Amedeo Avogadro (1776–1856), who, in 1811, first proposed that the volume of a gas (at a given pressure and temperature) is proportional to the number of atoms or molecules regardless of the nature of the gas.
Avogadro's hypothesis was popularized four years after his death by Stanislao Cannizzaro, who advocated Avogadro's work at the Karlsruhe Congress in 1860.
The name Avogadro's number was coined in 1909 by the physicist Jean Perrin, who defined it as the number of molecules in exactly 32 grams of oxygen gas. The goal of this definition was to make the mass of a mole of a substance, in grams, be numerically equal to the mass of one molecule relative to the mass of the hydrogen atom; which, because of the law of definite proportions, was the natural unit of atomic mass, and was assumed to be of the atomic mass of oxygen.
First measurements
The value of Avogadro's number (not yet known by that name) was first obtained indirectly by Josef Loschmidt in 1865, by estimating the number of particles in a given volume of gas. This value, the number density n0 of particles in an ideal gas, is now called the Loschmidt constant in his honor, and is related to the Avogadro constant, NA, by

n0 = p NA / (R T),

where p is the pressure, R is the gas constant, and T is the absolute temperature. Because of this work, the symbol L is sometimes used for the Avogadro constant, and, in German literature, that name may be used for both constants, distinguished only by the units of measurement. (However, L should not be confused with the entirely different Loschmidt constant in English-language literature.)
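Using the ideal-gas relation n0 = p NA / (R T) with the exact SI values of NA and R, the Loschmidt constant at the conventional reference conditions (0 °C, 1 atm) comes out near 2.687 × 10^25 per cubic metre; the function name below is illustrative.

```python
N_A = 6.02214076e23   # Avogadro constant (1/mol), exact
R = 8.31446261815324  # molar gas constant (J/(mol K)), exact in the 2019 SI

def loschmidt(p_pa: float, t_k: float) -> float:
    """Ideal-gas number density n0 = p * N_A / (R * T), in 1/m^3."""
    return p_pa * N_A / (R * t_k)

print(loschmidt(101325.0, 273.15))  # ~2.687e25 molecules per cubic metre
```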
Perrin himself determined the Avogadro number by several different experimental methods. He was awarded the 1926 Nobel Prize in Physics, largely for this work.
The electric charge per mole of electrons is a constant called the Faraday constant and has been known since 1834, when Michael Faraday published his works on electrolysis. In 1910, Robert Millikan with the help of Harvey Fletcher obtained the first measurement of the charge on an electron. Dividing the charge on a mole of electrons by the charge on a single electron provided a more accurate estimate of the Avogadro number.
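The same division can be reproduced with modern values. In the 2019 SI the elementary charge is exact and the Faraday constant is the product of the Avogadro constant and the elementary charge; the variable names below are illustrative.

```python
e = 1.602176634e-19  # elementary charge (C), exact in the 2019 SI
F = 96485.33212      # Faraday constant (C/mol)

# Charge per mole of electrons divided by charge per electron
# recovers the Avogadro number:
N_A_estimate = F / e
print(N_A_estimate)  # ~6.02214076e23 per mole
```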
SI definition of 1971
In 1971, in its 14th conference, the International Bureau of Weights and Measures (BIPM) decided to regard the amount of substance as an independent dimension of measurement, with the mole as its base unit in the International System of Units (SI). Specifically, the mole was defined as an amount of a substance that contains as many elementary entities as there are atoms in () of carbon-12 (12C). Thus, in particular, one mole of carbon-12 was exactly of the element.
By this definition, one mole of any substance contained exactly as many elementary entities as one mole of any other substance. However, this number was a physical constant that had to be experimentally determined since it depended on the mass (in grams) of one atom of 12C, and therefore, it was known only to a limited number of decimal digits. The common rule of thumb that "one gram of matter contains nucleons" was exact for carbon-12, but slightly inexact for other elements and isotopes.
In the same conference, the BIPM also named (the factor that converted moles into number of particles) the "Avogadro constant". However, the term "Avogadro number" continued to be used, especially in introductory works. As a consequence of this definition, was not a pure number, but had the metric dimension of reciprocal of amount of substance (mol−1).
SI redefinition of 2019
At its 26th conference, the CGPM adopted a different approach: effective 20 May 2019, it defined the Avogadro constant as the exact value N_A = 6.02214076×10^23 mol−1, thus redefining the mole as exactly 6.02214076×10^23 constituent particles of the substance under consideration. One consequence of this change is that the mass of a mole of 12C atoms is no longer exactly 0.012 kg. On the other hand, the dalton (or unified atomic mass unit) remains unchanged as 1/12 of the mass of a 12C atom. Thus, the molar mass constant remains very close to but no longer exactly equal to 1 g/mol, although the difference (a few parts in 10^10 in relative terms, as of March 2019) is insignificant for all practical purposes.
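The consequence for carbon-12 can be checked numerically; the sketch below assumes the CODATA 2018 value of the dalton, which is a measured (not exact) quantity.

```python
# Under the 2019 SI, N_A is exact while the dalton is experimental.
N_A = 6.02214076e23         # mol^-1, exact by definition since 20 May 2019
dalton = 1.66053906660e-27  # kg, CODATA 2018 value (measured, with uncertainty)

# One mole of carbon-12 atoms, each of mass exactly 12 Da:
M_C12 = N_A * 12 * dalton   # kg/mol
print(M_C12)                # ≈ 0.0119999999958 kg, no longer exactly 0.012 kg

# Relative deviation of the molar mass constant from 1 g/mol:
rel = N_A * dalton / 1e-3 - 1
print(f"{rel:.1e}")         # ≈ -3.5e-10, negligible in practice
```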
Connection to other constants
The Avogadro constant is related to other physical constants and properties.
It relates the molar gas constant R and the Boltzmann constant k, which in the SI is defined to be exactly 1.380649×10^−23 J/K: R = N_A k.
It relates the Faraday constant F and the elementary charge e, which in the SI is defined as exactly 1.602176634×10^−19 C: F = N_A e.
It relates the molar mass constant M_u and the atomic mass constant m_u: M_u = N_A m_u.
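These relations can be verified with the exactly fixed 2019 SI values:

```python
# With N_A, k, and e all fixed exactly by the 2019 SI, the molar gas
# constant R and the Faraday constant F follow as exact products.
N_A = 6.02214076e23   # mol^-1, Avogadro constant
k = 1.380649e-23      # J/K, Boltzmann constant
e = 1.602176634e-19   # C, elementary charge

R = N_A * k           # molar gas constant, J/(mol K)
F = N_A * e           # Faraday constant, C/mol
print(R)              # ≈ 8.314462618 J/(mol K)
print(F)              # ≈ 96485.33212 C/mol
```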
In physics, a plane wave is a special case of a wave or field: a physical quantity whose value, at any given moment, is constant through any plane that is perpendicular to a fixed direction in space.
For any position x in space and any time t, the value of such a field can be written as F(x, t) = G(x · n, t),
where n is a unit-length vector, and G(d, t) is a function that gives the field's value as dependent on only two real parameters: the time t, and the scalar-valued displacement d = x · n of the point along the direction n. The displacement is constant over each plane perpendicular to n.
The values of the field may be scalars, vectors, or any other physical or mathematical quantity. They can be complex numbers, as in a complex exponential plane wave.
When the values of F are vectors, the wave is said to be a longitudinal wave if the vectors are always collinear with the vector n, and a transverse wave if they are always orthogonal (perpendicular) to it.
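A minimal numeric sketch of this definition — the direction n and the profile G below are arbitrary illustrative choices:

```python
import math

# A plane wave assigns to each point x the value G(x·n, t): two points with
# the same displacement along the unit vector n get the same field value.
n = (1 / math.sqrt(2), 1 / math.sqrt(2), 0.0)   # unit propagation direction

def G(d, t):
    """Arbitrary profile: depends only on displacement d and time t."""
    return math.exp(-(d - t) ** 2)

def field(x, t):
    d = sum(xi * ni for xi, ni in zip(x, n))    # scalar displacement x·n
    return G(d, t)

# Two points lying in the same plane perpendicular to n (same x·n):
p1 = (1.0, 2.0, 5.0)
p2 = (2.0, 1.0, -7.0)   # x+y is equal, so x·n is equal; z is irrelevant here
assert math.isclose(field(p1, 0.3), field(p2, 0.3))
```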
Special types
Traveling plane wave
Often the term "plane wave" refers specifically to a traveling plane wave, whose evolution in time can be described as simple translation of the field at a constant wave speed c along the direction perpendicular to the wavefronts. Such a field can be written as F(x, t) = G(x · n − c t),
where G(u) is now a function of a single real parameter u = d − c t, that describes the "profile" of the wave, namely the value of the field at time t = 0, for each displacement d = x · n. In that case, n is called the direction of propagation. For each displacement d, the moving plane perpendicular to n at distance d + c t from the origin is called a "wavefront". This plane travels along the direction of propagation n with velocity c; and the value of the field is then the same, and constant in time, at every one of its points.
Sinusoidal plane wave
The term is also used, even more specifically, to mean a "monochromatic" or sinusoidal plane wave: a travelling plane wave whose profile is a sinusoidal function. That is, F(x, t) = A sin(k(x · n − c t) + φ).
The parameter A, which may be a scalar or a vector, is called the amplitude of the wave; the scalar coefficient k is its "spatial frequency"; and the scalar φ is its "phase shift".
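The traveling sinusoidal profile can be sketched directly from this definition; the numerical parameters below are arbitrary:

```python
import math

# Sinusoidal traveling plane wave F(x,t) = A*sin(k*(x·n - c*t) + phi),
# sampled along the propagation direction (so x·n is just the coordinate d).
A, k, c, phi = 2.0, 3.0, 1.5, 0.4   # illustrative values

def F(d, t):
    return A * math.sin(k * (d - c * t) + phi)

wavelength = 2 * math.pi / k
# Spatial periodicity: the profile repeats after one wavelength.
assert math.isclose(F(0.7, 0.0), F(0.7 + wavelength, 0.0), abs_tol=1e-12)
# Traveling: advancing time by dt shifts the pattern by c*dt along n.
dt = 0.25
assert math.isclose(F(0.7 + c * dt, dt), F(0.7, 0.0), abs_tol=1e-12)
```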
A true plane wave cannot physically exist, because it would have to fill all space. Nevertheless, the plane wave model is important and widely used in physics. The waves emitted by any source with finite extent into a large homogeneous region of space can be well approximated by plane waves when viewed over any part of that region that is sufficiently small compared to its distance from the source. That is the case, for example, of the light waves from a distant star that arrive at a telescope.
Plane standing wave
A standing wave is a field whose value can be expressed as the product of two functions, one depending only on position, the other only on time. A plane standing wave, in particular, can be expressed as F(x, t) = G(x · n) S(t),
where G is a function of one scalar parameter (the displacement d = x · n) with scalar or vector values, and S is a scalar function of time.
This representation is not unique, since the same field values are obtained if G and S are scaled by reciprocal factors. If |S(t)| is bounded in the time interval of interest (which is usually the case in physical contexts), G and S can be scaled so that the maximum value of |S(t)| is 1. Then |G(x · n)| will be the maximum field magnitude seen at the point x.
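The non-uniqueness of the factorization is easy to see numerically; G and S below are arbitrary illustrative choices:

```python
import math

# A plane standing wave factors as F(x,t) = G(x·n) * S(t); scaling G by a
# and S by 1/a leaves the field unchanged, so the factorization is not unique.
def G(d): return math.cos(2.0 * d)
def S(t): return math.sin(5.0 * t)

a = 3.7
for d, t in [(0.2, 0.9), (1.3, 0.1)]:
    assert math.isclose(G(d) * S(t), (a * G(d)) * (S(t) / a))
```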
Properties
A plane wave can be studied by ignoring the directions perpendicular to the direction vector n; that is, by considering the function G(d, t) as a wave in a one-dimensional medium.
Any local operator, linear or not, applied to a plane wave yields a plane wave. Any linear combination of plane waves with the same normal vector n is also a plane wave.
For a scalar plane wave in two or three dimensions, the gradient of the field is always collinear with the direction n; specifically, ∇F(x, t) = n ∂₁G(x · n, t), where ∂₁G is the partial derivative of G with respect to the first argument.
The divergence of a vector-valued plane wave depends only on the projection of the vector G(d, t) in the direction n. Specifically, ∇ · F(x, t) = n · ∂₁G(x · n, t).
In particular, a transverse planar wave satisfies n · G(d, t) = 0 for all d and t.
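The zero-divergence property of a transverse plane wave can be checked with finite differences; the direction n and the profile below are arbitrary illustrative choices:

```python
import math

# Finite-difference check that a transverse plane wave (vector values always
# perpendicular to n) has zero divergence everywhere.
n = (0.6, 0.8, 0.0)        # unit propagation direction
perp = (-0.8, 0.6, 0.0)    # a fixed vector orthogonal to n

def field(x):
    d = sum(xi * ni for xi, ni in zip(x, n))   # displacement x·n
    g = math.sin(3 * d)                        # scalar profile
    return tuple(g * p for p in perp)          # vector value, always ⊥ n

def divergence(x, h=1e-6):
    """Central-difference estimate of ∇·F at the point x."""
    div = 0.0
    for i in range(3):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        div += (field(xp)[i] - field(xm)[i]) / (2 * h)
    return div

assert abs(divergence((0.3, -1.2, 2.0))) < 1e-6
```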
Polarization (also polarisation) is a property of transverse waves which specifies the geometrical orientation of the oscillations. In a transverse wave, the direction of the oscillation is perpendicular to the direction of motion of the wave. One example of a polarized transverse wave is vibrations traveling along a taut string (see image), for example, in a musical instrument like a guitar string. Depending on how the string is plucked, the vibrations can be in a vertical direction, horizontal direction, or at any angle perpendicular to the string. In contrast, in longitudinal waves, such as sound waves in a liquid or gas, the displacement of the particles in the oscillation is always in the direction of propagation, so these waves do not exhibit polarization. Transverse waves that exhibit polarization include electromagnetic waves such as light and radio waves, gravitational waves, and transverse sound waves (shear waves) in solids.
An electromagnetic wave such as light consists of a coupled oscillating electric field and magnetic field which are always perpendicular to each other. Different states of polarization correspond to different relationships between polarization and the direction of propagation. In linear polarization, the fields oscillate in a single direction. In circular or elliptical polarization, the fields rotate at a constant rate in a plane as the wave travels, either in the right-hand or in the left-hand direction.
Light or other electromagnetic radiation from many sources, such as the sun, flames, and incandescent lamps, consists of short wave trains with an equal mixture of polarizations; this is called unpolarized light. Polarized light can be produced by passing unpolarized light through a polarizer, which allows waves of only one polarization to pass through. The most common optical materials do not affect the polarization of light, but some materials—those that exhibit birefringence, dichroism, or optical activity—affect light differently depending on its polarization. Some of these are used to make polarizing filters. Light also becomes partially polarized when it reflects at an angle from a surface.
According to quantum mechanics, electromagnetic waves can also be viewed as streams of particles called photons. When viewed in this way, the polarization of an electromagnetic wave is determined by a quantum mechanical property of photons called their spin. A photon has one of two possible spins: it can either spin in a right hand sense or a left hand sense about its direction of travel. Circularly polarized electromagnetic waves are composed of photons with only one type of spin, either right- or left-hand. Linearly polarized waves consist of photons that are in a superposition of right and left circularly polarized states, with equal amplitude and phases synchronized to give oscillation in a plane.
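In the Jones-vector picture (introduced below), this superposition can be written out explicitly; the sign convention for the circular states varies between texts:

```python
import math

# Jones vectors (x, y components): right and left circular states; their
# equal-amplitude, phase-synchronized sum is a linear polarization state.
inv = 1 / math.sqrt(2)
R = (inv, -1j * inv)   # right circular (one common sign convention)
L = (inv, +1j * inv)   # left circular

lin = tuple((r + l) / math.sqrt(2) for r, l in zip(R, L))
# The result is horizontal linear polarization (1, 0), up to rounding:
assert abs(lin[0] - 1) < 1e-12 and abs(lin[1]) < 1e-12
```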
Polarization is an important parameter in areas of science dealing with transverse waves, such as optics, seismology, radio, and microwaves. Especially impacted are technologies such as lasers, wireless and optical fiber telecommunications, and radar.
Introduction
Wave propagation and polarization
Most sources of light are classified as incoherent and unpolarized (or only "partially polarized") because they consist of a random mixture of waves having different spatial characteristics, frequencies (wavelengths), phases, and polarization states. However, for understanding electromagnetic waves and polarization in particular, it is easier to just consider coherent plane waves; these are sinusoidal waves of one particular direction (or wavevector), frequency, phase, and polarization state. Characterizing an optical system in relation to a plane wave with those given parameters can then be used to predict its response to a more general case, since a wave with any specified spatial structure can be decomposed into a combination of plane waves (its so-called angular spectrum). Incoherent states can be modeled stochastically as a weighted combination of such uncorrelated waves with some distribution of frequencies (its spectrum), phases, and polarizations.
Transverse electromagnetic waves
Electromagnetic waves (such as light), traveling in free space or another homogeneous isotropic non-attenuating medium, are properly described as transverse waves, meaning that a plane wave's electric field vector E and magnetic field H are each in some direction perpendicular to (or "transverse" to) the direction of wave propagation; E and H are also perpendicular to each other. By convention, the "polarization" direction of an electromagnetic wave is given by its electric field vector. Considering a monochromatic plane wave of optical frequency f (light of vacuum wavelength λ has a frequency of f = c/λ where c is the speed of light), let us take the direction of propagation as the z axis. Being a transverse wave the E and H fields must then contain components only in the x and y directions whereas E_z = H_z = 0. Using complex (or phasor) notation, the instantaneous physical electric and magnetic fields are given by the real parts of the complex quantities occurring in the following equations. As a function of time t and spatial position z (since for a plane wave in the +z direction the fields have no dependence on x or y) these complex fields can be written as:
E(z, t) = [e_x, e_y, 0] exp(2πi(z/λ − t/T)) = [e_x, e_y, 0] exp(i(kz − ωt)) and H(z, t) = [h_x, h_y, 0] exp(2πi(z/λ − t/T)) = [h_x, h_y, 0] exp(i(kz − ωt)),
where λ is the wavelength in the medium (whose refractive index is n) and T is the period of the wave. Here e_x, e_y, h_x, and h_y are complex numbers. In the second more compact form, as these equations are customarily expressed, these factors are described using the wavenumber k = 2π/λ and angular frequency (or "radian frequency") ω = 2π/T. In a more general formulation with propagation not restricted to the +z direction, the spatial dependence kz is replaced by k · r where k is called the wave vector, the magnitude of which is the wavenumber.
Thus the leading vectors e and h each contain up to two nonzero (complex) components describing the amplitude and phase of the wave's x and y polarization components (again, there can be no z polarization component for a transverse wave in the +z direction). For a given medium with a characteristic impedance η, h is related to e by: h_y = e_x/η and h_x = −e_y/η.
In a dielectric, η is real and has the value η₀/n, where n is the refractive index and η₀ is the impedance of free space. The impedance will be complex in a conducting medium. Note that given that relationship, the dot product of E and H must be zero: E · H = e_x h_x + e_y h_y = −e_x e_y/η + e_y e_x/η = 0,
indicating that these vectors are orthogonal (at right angles to each other), as expected.
Knowing the propagation direction (+z in this case) and η, one can just as well specify the wave in terms of just e_x and e_y describing the electric field. The vector containing e_x and e_y (but without the z component, which is necessarily zero for a transverse wave) is known as a Jones vector. In addition to specifying the polarization state of the wave, a general Jones vector also specifies the overall magnitude and phase of that wave. Specifically, the intensity of the light wave is proportional to the sum of the squared magnitudes of the two electric field components: I ∝ |e_x|² + |e_y|².
However, the wave's state of polarization is only dependent on the (complex) ratio of e_y to e_x. So let us just consider waves whose |e_x|² + |e_y|² = 1; this happens to correspond to an intensity of about 0.00133 W/m² in free space (where η = η₀). And because the absolute phase of a wave is unimportant in discussing its polarization state, let us stipulate that the phase of e_x is zero; in other words e_x is a real number while e_y may be complex. Under these restrictions, e_x and e_y can be represented as follows: e_x = √((1 + Q)/2) and e_y = √((1 − Q)/2) e^(iφ),
where the polarization state is now fully parameterized by the value of Q (such that −1 ≤ Q ≤ 1) and the relative phase φ.
Non-transverse waves
In addition to transverse waves, there are many wave motions where the oscillation is not limited to directions perpendicular to the direction of propagation. These cases are far beyond the scope of the current article which concentrates on transverse waves (such as most electromagnetic waves in bulk media), but one should be aware of cases where the polarization of a coherent wave cannot be described simply using a Jones vector, as we have just done.
Just considering electromagnetic waves, we note that the preceding discussion strictly applies to plane waves in a homogeneous isotropic non-attenuating medium, whereas in an anisotropic medium (such as birefringent crystals as discussed below) the electric or magnetic field may have longitudinal as well as transverse components. In those cases the electric displacement D and magnetic flux density B still obey the above geometry but due to anisotropy in the electric susceptibility (or in the magnetic permeability), now given by a tensor, the direction of E (or H) may differ from that of D (or B). Even in isotropic media, so-called inhomogeneous waves can be launched into a medium whose refractive index has a significant imaginary part (or "extinction coefficient") such as metals; these fields are also not strictly transverse. Surface waves or waves propagating in a waveguide (such as an optical fiber) are generally transverse waves, but might be described as an electric or magnetic transverse mode, or a hybrid mode.
Even in free space, longitudinal field components can be generated in focal regions, where the plane wave approximation breaks down. An extreme example is radially or tangentially polarized light, at the focus of which the electric or magnetic field respectively is longitudinal (along the direction of propagation).
For longitudinal waves such as sound waves in fluids, the direction of oscillation is by definition along the direction of travel, so the issue of polarization is normally not even mentioned. On the other hand, sound waves in a bulk solid can be transverse as well as longitudinal, for a total of three polarization components. In this case, the transverse polarization is associated with the direction of the shear stress and displacement in directions perpendicular to the propagation direction, while the longitudinal polarization describes compression of the solid and vibration along the direction of propagation. The differential propagation of transverse and longitudinal polarizations is important in seismology.
Polarization state
Polarization can be defined in terms of pure polarization states with only a coherent sinusoidal wave at one optical frequency. The vector in the adjacent diagram might describe the oscillation of the electric field emitted by a single-mode laser (whose actual oscillation frequency would be vastly faster than depicted). The field oscillates in the x–y plane, along the page, with the wave propagating in the z direction, perpendicular to the page.
The first two diagrams below trace the electric field vector over a complete cycle for linear polarization at two different orientations; these are each considered a distinct state of polarization (SOP). The linear polarization at 45° can also be viewed as the addition of a horizontally linearly polarized wave (as in the leftmost figure) and a vertically polarized wave of the same amplitude .
Now if one were to introduce a phase shift in between those horizontal and vertical polarization components, one would generally obtain elliptical polarization as is shown in the third figure. When the phase shift is exactly ±90°, and the amplitudes are the same, then circular polarization is produced (fourth and fifth figures). Circular polarization can be created by sending linearly polarized light through a quarter-wave plate oriented at 45° to the linear polarization to create two components of the same amplitude with the required phase shift. The superposition of the original and phase-shifted components causes a rotating electric field vector, which is depicted in the animation on the right. Note that circular or elliptical polarization can involve either a clockwise or counterclockwise rotation of the field, depending on the relative phases of the components. These correspond to distinct polarization states, such as the two circular polarizations shown above.
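These cases can be traced numerically from the two field components, here taken as Ex = cos θ and Ey = cos(θ − δ) with equal amplitudes assumed:

```python
import math

# Trace the tip of the electric field vector over one cycle for equal-
# amplitude x and y components with a relative phase shift delta.
def trace(delta, steps=360):
    return [(math.cos(th), math.cos(th - delta))
            for th in (2 * math.pi * i / steps for i in range(steps))]

# delta = 90° gives circular polarization: |E| is constant around the cycle.
circle = trace(math.pi / 2)
radii = [math.hypot(ex, ey) for ex, ey in circle]
assert max(radii) - min(radii) < 1e-12   # constant radius -> circular

# delta = 0 gives linear polarization: Ey always equals Ex (a 45° line).
line = trace(0.0)
assert all(abs(ex - ey) < 1e-12 for ex, ey in line)
```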
The orientation of the x and y axes used in this description is arbitrary. The choice of such a coordinate system and viewing the polarization ellipse in terms of the x and y polarization components, corresponds to the definition of the Jones vector (below) in terms of those basis polarizations. Axes are selected to suit a particular problem, such as being in the plane of incidence. Since there are separate reflection coefficients for the linear polarizations in and orthogonal to the plane of incidence (p and s polarizations, see below), that choice greatly simplifies the calculation of a wave's reflection from a surface.
Any pair of orthogonal polarization states may be used as basis functions, not just linear polarizations. For instance, choosing right and left circular polarizations as basis functions simplifies the solution of problems involving circular birefringence (optical activity) or circular dichroism.
Polarization ellipse
For a purely polarized monochromatic wave the electric field vector over one cycle of oscillation traces out an ellipse.
A polarization state can then be described in relation to the geometrical parameters of the ellipse, and its "handedness", that is, whether the rotation around the ellipse is clockwise or counter clockwise. One parameterization of the elliptical figure specifies the orientation angle ψ, defined as the angle between the major axis of the ellipse and the x-axis, along with the ellipticity ε, the ratio of the ellipse's major to minor axis (also known as the axial ratio). The ellipticity parameter is an alternative parameterization of an ellipse's eccentricity or the ellipticity angle χ, as is shown in the figure. The angle χ is also significant in that the latitude (angle from the equator) of the polarization state as represented on the Poincaré sphere (see below) is equal to ±2χ. The special cases of linear and circular polarization correspond to an ellipticity ε of infinity and unity (or χ of zero and 45°) respectively.
Jones vector
Full information on a completely polarized state is also provided by the amplitude and phase of oscillations in two components of the electric field vector in the plane of polarization. This representation was used above to show how different states of polarization are possible. The amplitude and phase information can be conveniently represented as a two-dimensional complex vector (the Jones vector): e = [a₁ e^(iθ₁), a₂ e^(iθ₂)].
Here a₁ and a₂ denote the amplitude of the wave in the two components of the electric field vector, while θ₁ and θ₂ represent the phases. The product of a Jones vector with a complex number of unit modulus gives a different Jones vector representing the same ellipse, and thus the same state of polarization. The physical electric field, as the real part of the Jones vector, would be altered but the polarization state itself is independent of absolute phase. The basis vectors used to represent the Jones vector need not represent linear polarization states (i.e. be real). In general any two orthogonal states can be used, where an orthogonal vector pair is formally defined as one having a zero inner product. A common choice is left and right circular polarizations, for example to model the different propagation of waves in two such components in circularly birefringent media (see below) or signal paths of coherent detectors sensitive to circular polarization.
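The invariance under an overall phase factor can be checked via the Stokes parameters, which characterize the polarization ellipse (one common sign convention is assumed below):

```python
import cmath

# Multiplying a Jones vector by a unit-modulus phase factor leaves the
# polarization state unchanged; the Stokes parameters make this explicit.
def stokes(jx, jy):
    s0 = abs(jx) ** 2 + abs(jy) ** 2
    s1 = abs(jx) ** 2 - abs(jy) ** 2
    s2 = 2 * (jx.conjugate() * jy).real
    s3 = 2 * (jx.conjugate() * jy).imag   # sign convention varies
    return (s0, s1, s2, s3)

j = (0.8 + 0.1j, 0.3 - 0.5j)       # some fully polarized state
phase = cmath.exp(1j * 1.234)       # arbitrary overall phase, |phase| = 1
j2 = (j[0] * phase, j[1] * phase)

assert all(abs(a - b) < 1e-12 for a, b in zip(stokes(*j), stokes(*j2)))
```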
Coordinate frame
Regardless of whether polarization state is represented using geometric parameters or Jones vectors, implicit in the parameterization is the orientation of the coordinate frame. This permits a degree of freedom, namely rotation about the propagation direction. When considering light that is propagating parallel to the surface of the Earth, the terms "horizontal" and "vertical" polarization are often used, with the former being associated with the first component of the Jones vector, or zero azimuth angle. On the other hand, in astronomy the equatorial coordinate system is generally used instead, with the zero azimuth (or position angle, as it is more commonly called in astronomy to avoid confusion with the horizontal coordinate system) corresponding to due north.
s and p designations
Another coordinate system frequently used relates to the plane of incidence. This is the plane made by the incoming propagation direction and the vector perpendicular to the plane of an interface, in other words, the plane in which the ray travels before and after reflection or refraction. The component of the electric field parallel to this plane is termed p-like (parallel) and the component perpendicular to this plane is termed s-like (from senkrecht, German for 'perpendicular'). Polarized light with its electric field along the plane of incidence is thus denoted p-polarized, while light whose electric field is normal to the plane of incidence is called s-polarized. P-polarization is commonly referred to as transverse-magnetic (TM), and has also been termed pi-polarized or π-polarized, or tangential plane polarized. S-polarization is also called transverse-electric (TE), as well as sigma-polarized or σ-polarized, or sagittal plane polarized.
Degree of polarization
Degree of polarization (DOP) is a quantity used to describe the portion of an electromagnetic wave which is polarized. DOP can be calculated from the Stokes parameters. A perfectly polarized wave has a DOP of 100%, whereas an unpolarized wave has a DOP of 0%. A wave which is partially polarized, and therefore can be represented by a superposition of a polarized and unpolarized component, will have a DOP somewhere in between 0 and 100%. DOP is calculated as the fraction of the total power that is carried by the polarized component of the wave.
DOP can be used to map the strain field in materials when considering the DOP of the photoluminescence. The polarization of the photoluminescence is related to the strain in a material by way of the given material's photoelasticity tensor.
DOP is also visualized using the Poincaré sphere representation of a polarized beam. In this representation, DOP is equal to the length of the vector measured from the center of the sphere.
Unpolarized and partially polarized light
Implications for reflection and propagation
Polarization in wave propagation
In a vacuum, the components of the electric field propagate at the speed of light, so that the phase of the wave varies in space and time while the polarization state does not. That is, the electric field vector e of a plane wave in the +z direction follows: e(z + Δz, t + Δt) = e(z, t) e^(i(kΔz − ωΔt)),
where k is the wavenumber. As noted above, the instantaneous electric field is the real part of the product of the Jones vector times the phase factor e^(i(kz − ωt)). When an electromagnetic wave interacts with matter, its propagation is altered according to the material's (complex) index of refraction. When the real or imaginary part of that refractive index is dependent on the polarization state of a wave, properties known as birefringence and polarization dichroism (or diattenuation) respectively, then the polarization state of a wave will generally be altered.
In such media, an electromagnetic wave with any given state of polarization may be decomposed into two orthogonally polarized components that encounter different propagation constants. The effect of propagation over a given path on those two components is most easily characterized in the form of a complex 2×2 transformation matrix J known as a Jones matrix: e′ = J e.
The Jones matrix due to passage through a transparent material is dependent on the propagation distance as well as the birefringence. The birefringence (as well as the average refractive index) will generally be dispersive, that is, it will vary as a function of optical frequency (wavelength). In the case of non-birefringent materials, however, the Jones matrix is the identity matrix (multiplied by a scalar phase factor and attenuation factor), implying no change in polarization during propagation.
For propagation effects in two orthogonal modes, the Jones matrix can be written as J = T diag(g₁, g₂) T⁻¹,
where g₁ and g₂ are complex numbers describing the phase delay and possibly the amplitude attenuation due to propagation in each of the two polarization eigenmodes. T is a unitary matrix representing a change of basis from these propagation modes to the linear system used for the Jones vectors; in the case of linear birefringence or diattenuation the modes are themselves linear polarization states so T and T⁻¹ can be omitted if the coordinate axes have been chosen appropriately.
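As an illustrative sketch, a Jones matrix built this way (with T a rotation by the eigenmode axis angle, and pure phase factors g₁, g₂) reproduces the familiar action of a quarter-wave plate:

```python
import cmath, math

# Jones matrix of a linear retarder: phase factors g1, g2 on its two linear
# eigenmodes, conjugated by the rotation T from those modes to the x-y basis.
def retarder(delta, theta):
    c, s = math.cos(theta), math.sin(theta)
    g1, g2 = 1.0, cmath.exp(1j * delta)          # relative phase delay delta
    # T * diag(g1, g2) * T^-1 with T a rotation by theta:
    return [[c*c*g1 + s*s*g2, c*s*(g1 - g2)],
            [c*s*(g1 - g2), s*s*g1 + c*c*g2]]

# A quarter-wave plate (delta = 90°) with its fast axis at 45° turns
# horizontal linear light into circular light: equal |x| and |y| amplitudes,
# 90° out of phase.
M = retarder(math.pi / 2, math.pi / 4)
ex, ey = M[0][0], M[1][0]        # image of the Jones vector (1, 0)
assert abs(abs(ex) - abs(ey)) < 1e-12
assert abs(abs(cmath.phase(ey) - cmath.phase(ex)) - math.pi / 2) < 1e-12
```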
Birefringence
In a birefringent substance, electromagnetic waves of different polarizations travel at different speeds (phase velocities). As a result, when unpolarized waves travel through a plate of birefringent material, one polarization component has a shorter wavelength than the other, resulting in a phase difference between the components which increases the further the waves travel through the material. In this case the Jones matrix is a unitary matrix: |g₁| = |g₂| = 1. Media termed diattenuating (or dichroic in the sense of polarization), in which only the amplitudes of the two polarizations are affected differentially, may be described using a Hermitian matrix (generally multiplied by a common phase factor). In fact, since any matrix may be written as the product of unitary and positive Hermitian matrices, light propagation through any sequence of polarization-dependent optical components can be written as the product of these two basic types of transformations.
In birefringent media there is no attenuation, but the two modes accrue a differential phase delay. Well known manifestations of linear birefringence (that is, in which the basis polarizations are orthogonal linear polarizations) appear in optical wave plates/retarders and many crystals. If linearly polarized light passes through a birefringent material, its state of polarization will generally change, unless its polarization direction is identical to one of those basis polarizations. Since the phase shift, and thus the change in polarization state, is usually wavelength-dependent, such objects viewed under white light in between two polarizers may give rise to colorful effects, as seen in the accompanying photograph.
Circular birefringence is also termed optical activity, especially in chiral fluids, or Faraday rotation, when due to the presence of a magnetic field along the direction of propagation. When linearly polarized light is passed through such an object, it will exit still linearly polarized, but with the axis of polarization rotated. A combination of linear and circular birefringence will have as basis polarizations two orthogonal elliptical polarizations; however, the term "elliptical birefringence" is rarely used. | Polarization (waves) | Wikipedia | 445 | 41564 | https://en.wikipedia.org/wiki/Polarization%20%28waves%29 | Physical sciences | Optics | null |
One can visualize the case of linear birefringence (with two orthogonal linear propagation modes) with an incoming wave linearly polarized at a 45° angle to those modes. As a differential phase starts to accrue, the polarization becomes elliptical, eventually changing to purely circular polarization (90° phase difference), then to elliptical and eventually linear polarization (180° phase) perpendicular to the original polarization, then through circular again (270° phase), then elliptical with the original azimuth angle, and finally back to the original linearly polarized state (360° phase) where the cycle begins anew. In general the situation is more complicated and can be characterized as a rotation in the Poincaré sphere about the axis defined by the propagation modes. Examples for linear (blue), circular (red), and elliptical (yellow) birefringence are shown in the figure on the left. The total intensity and degree of polarization are unaffected. If the path length in the birefringent medium is sufficient, the two polarization components of a collimated beam (or ray) can exit the material with a positional offset, even though their final propagation directions will be the same (assuming the entrance face and exit face are parallel). This is commonly viewed using calcite crystals, which present the viewer with two slightly offset images, in opposite polarizations, of an object behind the crystal. It was this effect that provided the first discovery of polarization, by Erasmus Bartholinus in 1669.
Dichroism
Media in which transmission of one polarization mode is preferentially reduced are called dichroic or diattenuating. Like birefringence, diattenuation can be with respect to linear polarization modes (in a crystal) or circular polarization modes (usually in a liquid).
Devices that block nearly all of the radiation in one mode are known as polarizing filters or simply "polarizers". This corresponds to g₂ = 0 in the above representation of the Jones matrix. The output of an ideal polarizer is a specific polarization state (usually linear polarization) with an amplitude equal to the input wave's original amplitude in that polarization mode. Power in the other polarization mode is eliminated. Thus if unpolarized light is passed through an ideal polarizer (where g₁ = 1 and g₂ = 0) exactly half of its initial power is retained. Practical polarizers, especially inexpensive sheet polarizers, have additional loss so that g₁ < 1. However, in many instances the more relevant figure of merit is the polarizer's degree of polarization or extinction ratio, which involve a comparison of |g₁|² to |g₂|². Since Jones vectors refer to waves' amplitudes (rather than intensity), when illuminated by unpolarized light the remaining power in the unwanted polarization will be (g₂/g₁)² of the power in the intended polarization.
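A Monte Carlo sketch of the half-power rule for an ideal polarizer, modeling unpolarized light as many uncorrelated wave trains with uniformly random linear polarization:

```python
import random, math

# An ideal linear polarizer passes the x component and blocks y. Averaged
# over many uncorrelated wave trains with random linear polarization (a
# simple model of unpolarized light), half the power survives.
random.seed(1)

def random_jones():
    # Uniformly random linear polarization angle, unit power.
    a = random.uniform(0, math.pi)
    return (math.cos(a), math.sin(a))

N = 200_000
passed = sum(jx ** 2 for jx, _ in (random_jones() for _ in range(N))) / N
assert abs(passed - 0.5) < 0.01   # ≈ half the incident power is retained
```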
Specular reflection
In addition to birefringence and dichroism in extended media, polarization effects describable using Jones matrices can also occur at the (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected; for a given material those proportions (and also the phase of reflection) are dependent on the angle of incidence and are different for the s- and p-polarizations. Therefore, the polarization state of reflected light (even if initially unpolarized) is generally changed.
Any light striking a surface at a special angle of incidence known as Brewster's angle, where the reflection coefficient for p-polarization is zero, will be reflected with only the s-polarization remaining. This principle is employed in the so-called "pile of plates polarizer" (see figure) in which part of the s-polarization is removed by reflection at each Brewster angle surface, leaving only the p-polarization after transmission through many such surfaces. The generally smaller reflection coefficient of the p-polarization is also the basis of polarized sunglasses; by blocking the s- (horizontal) polarization, most of the glare due to reflection from a wet street, for instance, is removed.
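The Fresnel behavior at Brewster's angle can be checked directly. The following is a sketch assuming a simple air-to-glass interface; the index n = 1.5 is a typical but assumed value.

```python
import numpy as np

def fresnel_r(n1, n2, theta_i):
    """Amplitude reflection coefficients (r_s, r_p) from the Fresnel
    equations for a planar interface between indices n1 and n2."""
    theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)  # Snell's law
    ci, ct = np.cos(theta_i), np.cos(theta_t)
    r_s = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)
    r_p = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)
    return r_s, r_p

n1, n2 = 1.0, 1.5                  # air to glass (assumed indices)
brewster = np.arctan(n2 / n1)      # about 56.3 degrees
r_s, r_p = fresnel_r(n1, n2, brewster)
# r_p vanishes at Brewster's angle, so the reflected beam is purely
# s-polarized, while r_s remains substantial.
```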
In the important special case of reflection at normal incidence (not involving anisotropic materials) there is no particular s- or p-polarization. Both polarization components are reflected identically, and therefore the polarization of the reflected wave is identical to that of the incident wave. However, in the case of circular (or elliptical) polarization, the handedness of the polarization state is thereby reversed, since by convention this is specified relative to the direction of propagation. The circular rotation of the electric field that is called "right-handed" for a wave propagating in the +z direction is "left-handed" for a wave propagating in the −z direction. But in the general case of reflection at a nonzero angle of incidence, no such generalization can be made. For instance, right-circularly polarized light reflected from a dielectric surface at a grazing angle will still be right-handed (but elliptically) polarized. Linearly polarized light reflected from a metal at non-normal incidence will generally become elliptically polarized. These cases are handled using Jones vectors acted upon by the different Fresnel coefficients for the s- and p-polarization components.
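The handedness reversal at normal incidence can be illustrated with Jones vectors. Circular-polarization sign conventions differ between texts, so the convention below is an assumption of this sketch; the ideal mirror also ignores the common phase shift applied to both components.

```python
import numpy as np

def helicity(e, k_sign):
    """Sign of the circular (S3) component referred to the propagation
    direction k_sign (+1 for +z, -1 for -z). Conventions vary; this
    sketch adopts one fixed choice throughout."""
    s3 = -2.0 * (e[0] * np.conj(e[1])).imag
    return float(np.sign(k_sign * s3))

# A circular state for a wave travelling in +z (convention assumed here):
e = np.array([1.0, 1.0j]) / np.sqrt(2.0)

before = helicity(e, +1)
# Ideal mirror at normal incidence: the lab-frame rotation sense of the
# field is unchanged, but the propagation direction reverses.
after = helicity(e, -1)
# The handedness relative to the propagation direction has flipped.
```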
Measurement techniques involving polarization
Some optical measurement techniques are based on polarization. In many other optical techniques polarization is crucial or at least must be taken into account and controlled; such examples are too numerous to mention.
Measurement of stress
In engineering, the phenomenon of stress induced birefringence allows for stresses in transparent materials to be readily observed. As noted above and seen in the accompanying photograph, the chromaticity of birefringence typically creates colored patterns when viewed in between two polarizers. As external forces are applied, internal stress induced in the material is thereby observed. Additionally, birefringence is frequently observed due to stresses "frozen in" at the time of manufacture. This is famously observed in cellophane tape whose birefringence is due to the stretching of the material during the manufacturing process.
Ellipsometry
Ellipsometry is a powerful technique for the measurement of the optical properties of a uniform surface. It involves measuring the polarization state of light following specular reflection from such a surface. This is typically done as a function of incidence angle or wavelength (or both). Since ellipsometry relies on reflection, it is not required for the sample to be transparent to light or for its back side to be accessible.
Ellipsometry can be used to model the (complex) refractive index of a surface of a bulk material. It is also very useful in determining parameters of one or more thin film layers deposited on a substrate. Not only the predicted magnitudes of the p and s polarization components, but also their relative phase shifts upon reflection, are compared with measurements made using an ellipsometer. A normal ellipsometer does not measure the actual reflection coefficient (which requires careful photometric calibration of the illuminating beam) but rather the ratio of the p and s reflections, as well as the change of polarization ellipticity (hence the name) induced upon reflection by the surface being studied. In addition to its use in science and research, ellipsometers are used in situ, for instance, to control production processes.
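The quantities an ellipsometer reports are commonly parameterized as ρ = r_p/r_s = tan(Ψ)·exp(iΔ). Below is a sketch with hypothetical complex reflection coefficients (not measured data) showing how Ψ and Δ follow from the ratio alone.

```python
import cmath
import numpy as np

def ellipsometric_params(r_p, r_s):
    """Psi and Delta from rho = r_p / r_s = tan(Psi) * exp(i * Delta)."""
    rho = r_p / r_s
    psi = np.arctan(abs(rho))
    delta = cmath.phase(rho)
    return psi, delta

# Hypothetical complex reflection coefficients, for illustration only:
r_p = 0.2 * cmath.exp(0.8j)
r_s = -0.5 + 0.0j
psi, delta = ellipsometric_params(r_p, r_s)
# tan(psi) recovers |r_p / r_s| without any absolute photometric calibration.
```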
Geology
The property of (linear) birefringence is widespread in crystalline minerals, and indeed was pivotal in the initial discovery of polarization. In mineralogy, this property is frequently exploited using polarization microscopes, for the purpose of identifying minerals. See optical mineralogy for more details.
Sound waves in solid materials exhibit polarization. Differential propagation of the three polarizations through the earth is crucial in the field of seismology. Horizontally and vertically polarized seismic waves (shear waves) are termed SH and SV, while waves with longitudinal polarization (compressional waves) are termed P-waves.
Autopsy
Similarly, polarization microscopes can be used to aid in the detection of foreign matter in biological tissue slices if it is birefringent; autopsy reports often mention the presence or absence of "polarizable foreign debris."
Chemistry
We have seen (above) that the birefringence of a type of crystal is useful in identifying it, and thus detection of linear birefringence is especially useful in geology and mineralogy. Linearly polarized light generally has its polarization state altered upon transmission through such a crystal, making it stand out when viewed in between two crossed polarizers, as seen in the photograph, above. Likewise, in chemistry, rotation of polarization axes in a liquid solution can be a useful measurement. In a liquid, linear birefringence is impossible, but there may be circular birefringence when a chiral molecule is in solution. When the right and left handed enantiomers of such a molecule are present in equal numbers (a so-called racemic mixture) then their effects cancel out. However, when there is only one (or a preponderance of one), as is more often the case for organic molecules, a net circular birefringence (or optical activity) is observed, revealing the magnitude of that imbalance (or the concentration of the molecule itself, when it can be assumed that only one enantiomer is present). This is measured using a polarimeter in which polarized light is passed through a tube of the liquid, at the end of which is another polarizer which is rotated in order to null the transmission of light through it.
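Polarimetry of this kind is quantified by Biot's law, α = [α]·l·c. The sketch below uses the commonly quoted specific rotation of sucrose (about +66.5°·mL/(g·dm)); the observed rotation is an invented example, not a measurement.

```python
# Biot's law: alpha = [alpha] * l * c, with specific rotation [alpha] in
# deg*mL/(g*dm), path length l in dm, and concentration c in g/mL.
def concentration_from_rotation(alpha_obs_deg, specific_rotation, path_dm):
    return alpha_obs_deg / (specific_rotation * path_dm)

# Sucrose has a specific rotation of about +66.5 deg*mL/(g*dm); the observed
# rotation below is an invented example, not a measurement.
c = concentration_from_rotation(13.3, 66.5, 2.0)  # about 0.1 g/mL
```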
Astronomy
In many areas of astronomy, the study of polarized electromagnetic radiation from outer space is of great importance. Although not usually a factor in the thermal radiation of stars, polarization is also present in radiation from coherent astronomical sources (e.g. hydroxyl or methanol masers), and incoherent sources such as the large radio lobes in active galaxies, and pulsar radio radiation (which may, it is speculated, sometimes be coherent), and is also imposed upon starlight by scattering from interstellar dust. Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field via Faraday rotation. The polarization of the cosmic microwave background is being used to study the physics of the very early universe. Synchrotron radiation is inherently polarized. It has been suggested that astronomical sources caused the chirality of biological molecules on Earth, but chirality selection on inorganic crystals has been proposed as an alternative theory.
Applications and examples
Polarized sunglasses
Unpolarized light, after being reflected by a specular (shiny) surface, generally obtains a degree of polarization. This phenomenon was observed in the early 1800s by the mathematician Étienne-Louis Malus, after whom Malus's law is named. Polarizing sunglasses exploit this effect to reduce glare from reflections by horizontal surfaces, notably the road ahead viewed at a grazing angle.
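Malus's law, I = I₀ cos²θ, quantifies the transmission of already-polarized light through such a polarizer; a minimal sketch:

```python
import numpy as np

def malus(i0, theta_rad):
    """Malus's law: intensity through an ideal linear polarizer whose axis
    makes angle theta with the incident linear polarization."""
    return i0 * np.cos(theta_rad) ** 2

aligned = malus(1.0, 0.0)          # all light passes
diagonal = malus(1.0, np.pi / 4)   # half the intensity
crossed = malus(1.0, np.pi / 2)    # essentially nothing
```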
Wearers of polarized sunglasses will occasionally observe inadvertent polarization effects such as color-dependent birefringent effects, for example in toughened glass (e.g., car windows) or items made from transparent plastics, in conjunction with natural polarization by reflection or scattering. The polarized light from LCD monitors (see below) is extremely conspicuous when these are worn.
Sky polarization and photography
Polarization is observed in the light of the sky, as this is due to sunlight scattered by aerosols as it passes through Earth's atmosphere. The scattered light produces the brightness and color in clear skies. This partial polarization of scattered light can be used to darken the sky in photographs, increasing the contrast. This effect is most strongly observed at points on the sky making a 90° angle to the Sun. Polarizing filters use these effects to optimize the results of photographing scenes in which reflection or scattering by the sky is involved.
Sky polarization has been used for orientation in navigation. The Pfund sky compass was used in the 1950s when navigating near the poles of the Earth's magnetic field when neither the sun nor stars were visible (e.g., under daytime cloud or twilight). It has been suggested, controversially, that the Vikings exploited a similar device (the "sunstone") in their extensive expeditions across the North Atlantic in the 9th–11th centuries, before the arrival of the magnetic compass from Asia to Europe in the 12th century. Related to the sky compass is the "polar clock", invented by Charles Wheatstone in the late 19th century.
Display technologies
The principle of liquid-crystal display (LCD) technology relies on the rotation of the axis of linear polarization by the liquid crystal array. Light from the backlight (or the back reflective layer, in devices not including or requiring a backlight) first passes through a linear polarizing sheet. That polarized light passes through the actual liquid crystal layer which may be organized in pixels (for a TV or computer monitor) or in another format such as a seven-segment display or one with custom symbols for a particular product. The liquid crystal layer is produced with a consistent right (or left) handed chirality, essentially consisting of tiny helices. This causes circular birefringence, and is engineered so that there is a 90 degree rotation of the linear polarization state. However, when a voltage is applied across a cell, the molecules straighten out, lessening or totally losing the circular birefringence. On the viewing side of the display is another linear polarizing sheet, usually oriented at 90 degrees from the one behind the active layer. Therefore, when the circular birefringence is removed by the application of a sufficient voltage, the polarization of the transmitted light remains at right angles to the front polarizer, and the pixel appears dark. With no voltage, however, the 90 degree rotation of the polarization causes it to exactly match the axis of the front polarizer, allowing the light through. Intermediate voltages create intermediate rotation of the polarization axis and the pixel has an intermediate intensity. Displays based on this principle are widespread, and now are used in the vast majority of televisions, computer monitors and video projectors, rendering the previous CRT technology essentially obsolete. The use of polarization in the operation of LCD displays is immediately apparent to someone wearing polarized sunglasses, often making the display unreadable. 
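The voltage-to-brightness behavior described above can be sketched with a highly simplified normally-white twisted-nematic model. Real cells involve adiabatic waveguiding and a nonlinear voltage-to-tilt response, so the direct mapping from voltage to rotation angle here is an assumption for illustration.

```python
import numpy as np

def lcd_transmission(rotation_deg):
    """Idealized normally-white twisted-nematic pixel: rear polarizer at 0
    degrees, front polarizer at 90 degrees, and a liquid-crystal layer that
    rotates the polarization by rotation_deg (90 with no voltage, shrinking
    toward 0 at full voltage). Transmission through the front polarizer
    then follows Malus's law."""
    angle_to_front_deg = 90.0 - rotation_deg
    return np.cos(np.radians(angle_to_front_deg)) ** 2

bright = lcd_transmission(90.0)  # no voltage: matches the front polarizer
dark = lcd_transmission(0.0)     # full voltage: crossed polarizers, pixel dark
grey = lcd_transmission(45.0)    # intermediate voltage: intermediate intensity
```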
In a totally different sense, polarization encoding has become the leading (but not sole) method for delivering separate images to the left and right eye in stereoscopic displays used for 3D movies. This involves separate images intended for each eye either projected from two different projectors with orthogonally oriented polarizing filters or, more typically, from a single projector with time multiplexed polarization (a fast alternating polarization device for successive frames). Polarized 3D glasses with suitable polarizing filters ensure that each eye receives only the intended image. Historically such systems used linear polarization encoding because it was inexpensive and offered good separation. However, circular polarization makes separation of the two images insensitive to tilting of the head, and is widely used in 3-D movie exhibition today, such as the system from RealD. Projecting such images requires screens that maintain the polarization of the projected light when viewed in reflection (such as silver screens); a normal diffuse white projection screen causes depolarization of the projected images, making it unsuitable for this application.
Although now obsolete, CRT computer displays suffered from reflection by the glass envelope, causing glare from room lights and consequently poor contrast. Several anti-reflection solutions were employed to ameliorate this problem. One solution utilized the principle of reflection of circularly polarized light. A circular polarizing filter in front of the screen allows for the transmission of (say) only right circularly polarized room light. Now, right circularly polarized light (depending on the convention used) has its electric (and magnetic) field direction rotating clockwise while propagating in the +z direction. Upon reflection, the field still has the same direction of rotation, but now propagation is in the −z direction, making the reflected wave left circularly polarized. With the right circular polarization filter placed in front of the reflecting glass, the unwanted light reflected from the glass will thus be in the very polarization state that is blocked by that filter, eliminating the reflection problem. The reversal of circular polarization on reflection and the elimination of reflections in this manner can be easily observed by looking in a mirror while wearing 3-D movie glasses which employ left- and right-handed circular polarization in the two lenses. Closing one eye, the other eye will see a reflection in which it cannot see itself; that lens appears black. However, the other lens (of the closed eye) will have the correct circular polarization allowing the closed eye to be easily seen by the open one.
Radio transmission and reception | Polarization (waves) | Wikipedia | 509 | 41564 | https://en.wikipedia.org/wiki/Polarization%20%28waves%29 | Physical sciences | Optics | null |
All radio (and microwave) antennas used for transmitting or receiving are intrinsically polarized. They transmit in (or receive signals from) a particular polarization, being totally insensitive to the opposite polarization; in certain cases that polarization is a function of direction. Most antennas are nominally linearly polarized, but elliptical and circular polarization are possibilities. In the case of linear polarization, the same kind of filtering as described above is possible. In the case of elliptical polarization (circular polarization is in reality just a kind of elliptical polarization in which both axes of the ellipse are the same length), filtering out a single angle (e.g. 90°) has virtually no effect, since at any instant the wave can have any orientation over the full 360 degrees.
The vast majority of antennas are linearly polarized. In fact it can be shown from considerations of symmetry that an antenna that lies entirely in a plane which also includes the observer, can only have its polarization in the direction of that plane. This applies to many cases, allowing one to easily infer such an antenna's polarization at an intended direction of propagation. So a typical rooftop Yagi or log-periodic antenna with horizontal conductors, as viewed from a second station toward the horizon, is necessarily horizontally polarized. But a vertical "whip antenna" or AM broadcast tower used as an antenna element (again, for observers horizontally displaced from it) will transmit in the vertical polarization. A turnstile antenna with its four arms in the horizontal plane, likewise transmits horizontally polarized radiation toward the horizon. However, when that same turnstile antenna is used in the "axial mode" (upwards, for the same horizontally-oriented structure) its radiation is circularly polarized. At intermediate elevations it is elliptically polarized.
Polarization is important in radio communications because, for instance, if one attempts to use a horizontally polarized antenna to receive a vertically polarized transmission, the signal strength will be substantially reduced (or under very controlled conditions, reduced to nothing). This principle is used in satellite television in order to double the channel capacity over a fixed frequency band. The same frequency channel can be used for two signals broadcast in opposite polarizations. By adjusting the receiving antenna for one or the other polarization, either signal can be selected without interference from the other.
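The polarization-mismatch loss behind this channel separation follows the same cos² law as an optical polarizer; a sketch assuming ideal linearly polarized antennas:

```python
import numpy as np

def received_power_fraction(antenna_deg, wave_deg):
    """Polarization-mismatch loss between ideal linearly polarized antennas:
    fraction of the available power captured when the antenna's polarization
    makes the given angle with the incoming wave's polarization."""
    mismatch = np.radians(antenna_deg - wave_deg)
    return np.cos(mismatch) ** 2

co_polar = received_power_fraction(0.0, 0.0)      # matched: full power
cross_polar = received_power_fraction(90.0, 0.0)  # orthogonal: ideally zero
# Two same-frequency satellite signals, one horizontal and one vertical,
# can therefore be separated by receive-antenna orientation alone.
```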
Especially due to the presence of the ground, there are some differences in propagation (and also in reflections responsible for TV ghosting) between horizontal and vertical polarizations. AM and FM broadcast radio usually use vertical polarization, while television uses horizontal polarization. At low frequencies especially, horizontal polarization is avoided. That is because the phase of a horizontally polarized wave is reversed upon reflection by the ground. A distant station in the horizontal direction will receive both the direct and reflected wave, which thus tend to cancel each other. This problem is avoided with vertical polarization. Polarization is also important in the transmission of radar pulses and reception of radar reflections by the same or a different antenna. For instance, back scattering of radar pulses by rain drops can be avoided by using circular polarization. Just as specular reflection of circularly polarized light reverses the handedness of the polarization, as discussed above, the same principle applies to scattering by objects much smaller than a wavelength such as rain drops. On the other hand, reflection of that wave by an irregular metal object (such as an airplane) will typically introduce a change in polarization and (partial) reception of the return wave by the same antenna.
The effect of free electrons in the ionosphere, in conjunction with the earth's magnetic field, causes Faraday rotation, a sort of circular birefringence. This is the same mechanism which can rotate the axis of linear polarization by electrons in interstellar space, as mentioned above. The magnitude of the Faraday rotation caused by such a plasma is much greater at lower frequencies, so at the higher microwave frequencies used by satellites the effect is minimal. However, medium or short wave transmissions received following refraction by the ionosphere are strongly affected. Since a wave's path through the ionosphere and the earth's magnetic field vector along such a path are rather unpredictable, a wave transmitted with vertical (or horizontal) polarization will generally have a resulting polarization in an arbitrary orientation at the receiver.
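The frequency dependence of Faraday rotation is commonly expressed through the rotation measure, Δχ = RM·λ², which scales as 1/f². A sketch with a made-up RM value:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def faraday_rotation_deg(rotation_measure, freq_hz):
    """Rotation of the linear-polarization axis in degrees:
    delta_chi = RM * lambda**2, with RM in rad/m^2."""
    wavelength = C / freq_hz
    return np.degrees(rotation_measure * wavelength ** 2)

rm = 10.0  # rad/m^2 -- a made-up value for illustration
shortwave = faraday_rotation_deg(rm, 10e6)   # 10 MHz: enormous rotation
microwave = faraday_rotation_deg(rm, 10e9)   # 10 GHz: under a degree
# The 1/f^2 scaling makes the shortwave rotation a million times larger.
```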
Polarization and vision
Many animals are capable of perceiving some of the components of the polarization of light, e.g., linear horizontally polarized light. This is generally used for navigational purposes, since the linear polarization of sky light is always perpendicular to the direction of the sun. This ability is very common among the insects, including bees, which use this information to orient their communicative dances. Polarization sensitivity has also been observed in species of octopus, squid, cuttlefish, and mantis shrimp. In the latter case, one species measures all six orthogonal components of polarization, and is believed to have optimal polarization vision. The rapidly changing, vividly colored skin patterns of cuttlefish, used for communication, also incorporate polarization patterns, and mantis shrimp are known to have polarization selective reflective tissue. Sky polarization was thought to be perceived by pigeons, which was assumed to be one of their aids in homing, but research indicates this is a popular myth.
The naked human eye is weakly sensitive to polarization, without the need for intervening filters. Polarized light creates a very faint pattern near the center of the visual field, called Haidinger's brush. This pattern is very difficult to see, but with practice one can learn to detect polarized light with the naked eye.
Angular momentum using circular polarization
It is well known that electromagnetic radiation carries a certain linear momentum in the direction of propagation. In addition, however, light carries a certain angular momentum if it is circularly polarized (or partially so). Compared with lower frequencies such as microwaves, the angular momentum in light, even of pure circular polarization, is very small relative to the same wave's linear momentum (or radiation pressure), and is difficult even to measure. However, it was utilized in an experiment to achieve speeds of up to 600 million revolutions per minute.
In medicine, the pulse is the rhythmic throbbing of each artery in response to the cardiac cycle (heartbeat). The pulse may be palpated in any place that allows an artery to be compressed near the surface of the body, such as at the neck (carotid artery), wrist (radial artery or ulnar artery), at the groin (femoral artery), behind the knee (popliteal artery), near the ankle joint (posterior tibial artery), and on the foot (dorsalis pedis artery). The pulse is most commonly measured at the wrist or neck. A sphygmograph is an instrument for measuring the pulse.
Physiology
Claudius Galen was perhaps the first physiologist to describe the pulse. To a trained observer, the pulse is an expedient tactile method of determining systolic blood pressure. Diastolic blood pressure is non-palpable and unobservable by tactile methods, occurring between heartbeats.
Pressure waves generated by the heart in systole move the arterial walls. Forward movement of blood occurs when the boundaries are pliable and compliant. Together, these properties are sufficient to create a palpable pressure wave.
Pulse velocity, pulse deficits and much more physiologic data are readily visualized by the use of one or more arterial catheters connected to a transducer and oscilloscope. This invasive technique has been commonly used in intensive care since the 1970s.
The pulse may also be observed indirectly through light absorbance at varying wavelengths, using simple and inexpensively reproduced mathematical ratios. Capturing the variance of the light signal from the blood component hemoglobin under oxygenated versus deoxygenated conditions is the basis of the technology of pulse oximetry.
Characteristics
Rate
The rate of the pulse can be observed and measured on the outside of an artery by tactile or visual means. It is recorded as arterial beats per minute or BPM. Although the pulse and heart beat are related, they are not the same. For example, there is a delay between the onset of the heart beat and the onset of the pulse, known as the pulse transit time, which varies by site. Similarly measurements of heart rate variability and pulse rate variability differ.
In healthy people, the pulse rate is close to the heart rate, as measured by ECG. Measuring the pulse rate is therefore a convenient way to estimate the heart rate. Pulse deficit is a condition in which a person has a difference between their pulse rate and heart rate. It can be observed by simultaneous palpation at the radial artery and auscultation using a stethoscope at the PMI, near the heart apex, for example. Typically, in people with pulse deficit, heart beats do not result in pulsations at the periphery, meaning the pulse rate is lower than the heart rate. Pulse deficit has been found to be significant in the context of premature ventricular contraction and atrial fibrillation.
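The pulse deficit itself is just the arithmetic difference between the two rates; a sketch with invented example numbers:

```python
def pulse_deficit(apical_rate_bpm, peripheral_rate_bpm):
    """Pulse deficit: heart rate auscultated at the apex minus the pulse
    rate palpated peripherally, both in beats per minute."""
    return apical_rate_bpm - peripheral_rate_bpm

# Invented example: in atrial fibrillation, some weak beats never reach the
# radial artery, so 110 apical beats/min may yield only 92 palpable beats/min.
deficit = pulse_deficit(110, 92)  # 18 bpm
```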
Rhythm
A normal pulse is regular in rhythm and force. An irregular pulse may be due to sinus arrhythmia, ectopic beats, atrial fibrillation, paroxysmal atrial tachycardia, atrial flutter, partial heart block etc. Intermittent dropping out of beats at the pulse is called an "intermittent pulse". Examples of a regularly intermittent (regularly irregular) pulse include pulsus bigeminus and second-degree atrioventricular block. An example of an irregularly intermittent (irregularly irregular) pulse is atrial fibrillation.
Volume
The degree of expansion displayed by the artery between its diastolic and systolic states is called the volume of the pulse. It is also known as the amplitude, expansion or size of the pulse.
Hypokinetic pulse
A weak pulse signifies narrow pulse pressure. It may be due to low cardiac output (as seen in shock, congestive cardiac failure), hypovolemia, valvular heart disease (such as aortic outflow tract obstruction, mitral stenosis, aortic arch syndrome) etc.
Hyperkinetic pulse
A bounding pulse signifies high pulse pressure. It may be due to low peripheral resistance (as seen in fever, anemia, thyrotoxicosis, A-V fistula, Paget's disease, beriberi, liver cirrhosis), increased cardiac output, increased stroke volume (as seen in anxiety, exercise, complete heart block, aortic regurgitation), or decreased distensibility of the arterial system (as seen in atherosclerosis, hypertension and coarctation of the aorta).
The strength of the pulse can also be reported:
0 = Absent
1 = Barely palpable
2 = Easily palpable
3 = Full
4 = Aneurysmal or bounding pulse
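The 0-4 grading scale above can be captured as a simple lookup (a sketch; the labels paraphrase the list):

```python
# The 0-4 pulse strength grading scale as a lookup table (labels paraphrased):
PULSE_STRENGTH = {
    0: "absent",
    1: "barely palpable",
    2: "easily palpable",
    3: "full",
    4: "aneurysmal or bounding",
}

def describe_pulse_strength(grade):
    """Return the descriptive label for a 0-4 pulse strength grade."""
    return PULSE_STRENGTH.get(grade, "invalid grade")

label = describe_pulse_strength(2)  # "easily palpable"
```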
Force
Also known as the compressibility of the pulse, force is a rough indication of systolic blood pressure.
Tension
Tension is determined mainly by mean arterial blood pressure and corresponds to diastolic blood pressure. In a low-tension pulse (pulsus mollis), the vessel is soft or impalpable between beats. In a high-tension pulse (pulsus durus), the vessel feels rigid even between beats.
Form
The form or contour of a pulse is a palpatory estimation of the arteriogram. A quickly rising and quickly falling pulse (pulsus celer) is seen in aortic regurgitation. A slowly rising and slowly falling pulse (pulsus tardus) is seen in aortic stenosis.
Equality
Comparing pulses at different places gives valuable clinical information.
A discrepant or unequal pulse between the left and right radial arteries is observed in anomalous or aberrant course of the artery, coarctation of the aorta, aortitis, dissecting aneurysm, peripheral embolism etc. An unequal pulse between the upper and lower extremities is seen in coarctation of the aorta, aortitis, block at the bifurcation of the aorta, dissection of the aorta, iatrogenic trauma and arteriosclerotic obstruction.
Condition of arterial wall
A normal artery is not palpable after flattening by digital pressure. A thick radial artery which is palpable 7.5–10 cm up the forearm is suggestive of arteriosclerosis.
Radio-femoral delay
In coarctation of the aorta, the femoral pulse may be significantly delayed as compared to the radial pulse (unless there is coexisting aortic regurgitation). The delay can also be observed in supravalvar aortic stenosis.
Patterns
Several pulse patterns can be of clinical significance. These include:
Anacrotic pulse: a notch on the upstroke of the carotid pulse, with two distinct waves (a slow initial upstroke and a delayed peak, which is close to S2). Present in aortic stenosis (AS).
Dicrotic pulse: is characterized by two beats per cardiac cycle, one systolic and the other diastolic. Physiologically, the dicrotic wave is the result of reflected waves from the lower extremities and aorta. Conditions associated with low cardiac output and high systemic vascular resistance can produce a dicrotic pulse.
Pulse deficit: the difference between the heart rate determined by direct cardiac auscultation and the palpated peripheral arterial pulse rate, as seen in atrial fibrillation (AF).
Pulsus alternans: an ominous medical sign that indicates progressive systolic heart failure. To trained fingertips, the examiner notes a pattern of a strong pulse followed by a weak pulse over and over again. This pulse signals a flagging effort of the heart to sustain itself in systole. It also can be detected in HCM with obstruction.
Pulsus bigeminus: indicates paired beats, each normal beat being followed closely by a premature beat. Concurrent auscultation of the heart may reveal a gallop rhythm of the native heartbeat.
Pulsus bisferiens: is characterized by two beats per cardiac cycle, both systolic, unlike the dicrotic pulse. It is an unusual physical finding typically seen in patients with aortic valve diseases if the aortic valve does not normally open and close. Trained fingertips will observe two pulses to each heartbeat instead of one.
Pulsus tardus et parvus, also pulsus parvus et tardus, slow-rising pulse and anacrotic pulse, is weak (parvus), and late (tardus) relative to its expected characteristics. It is caused by a stiffened aortic valve that makes it progressively harder to open, thus requiring increased generation of blood pressure in the left ventricle. It is seen in aortic valve stenosis.
Pulsus paradoxus: a condition in which some heartbeats cannot be detected at the radial artery during the inspiration phase of respiration. It is caused by an exaggerated decrease in blood pressure during this phase, and is diagnostic of a variety of cardiac and respiratory conditions of varying urgency, such as cardiac tamponade.