Composition
The Earth's planetary atmosphere contains, besides other gases, water vapour and carbon dioxide, which produce carbonic acid in rain water; fresh rain therefore has an approximate natural pH of 5.0 to 5.6 (slightly acidic). (Water other than fresh-fallen rain, such as fresh/sweet/potable/river water, is usually affected by the physical environment and may not be in this pH range.) By volume, the air consists of 78.08% nitrogen as N2, 20.95% oxygen as O2, 0.93% argon, trace gases, and variable amounts of condensing water (from saturated water vapor). Any carbon dioxide released into the atmosphere from a pressurised source combines with atmospheric water vapour and momentarily reduces its pH by a negligible amount. Respiration from animals releases out-of-equilibrium carbonic acid and low levels of other ions. Combustion of hydrocarbons, a chemical reaction, releases water to the atmosphere as saturate, condensate, vapour, or gas (invisible steam), along with carbon dioxide that forms further carbonic acid. Combustion can also release particulates (carbon soot and ash) as well as molecules forming nitrites and sulphites, which reduce the pH of atmospheric water slightly, or harmfully in highly industrialised areas, where this is classed as air pollution and can create the phenomenon of acid rain, rain with a pH lower than the natural value of about 5.6. The negative effects of the by-products of combustion released into atmospheric vapour can be removed by the use of scrubber towers and other physical means, and the captured pollutants can be processed into valuable by-products. The sources of atmospheric water vapor are the bodies of water (oceans, seas, lakes, rivers, swamps) and the vegetation on the planetary surface, which humidify the troposphere through the processes of evaporation and transpiration respectively, and which influence the occurrence of weather phenomena; the greatest proportion of water vapor is in the atmosphere nearest the surface of the Earth. The temperature of the troposphere decreases with altitude up to the tropopause, the atmospheric boundary that demarcates the troposphere from the stratosphere and in which inversion layers occur. At higher altitudes, the low air temperature consequently decreases the saturation vapor pressure, and with it the amount of atmospheric water vapor in the upper troposphere.
Pressure
The maximum air pressure (the weight of the atmosphere) is at sea level and decreases with altitude because the atmosphere is in hydrostatic equilibrium, wherein the air pressure is equal to the weight of the air above a given point on the planetary surface. The relation between decreasing air pressure and increasing altitude follows from treating air as a fluid, by way of the following hydrostatic equation, combined with the ideal-gas law:

\frac{dP}{dz} = -\rho\, g_n = -\frac{m\, g_n}{R\, T}\, P
where:
gn is the standard gravity
ρ is the density
z is the altitude
P is the pressure
R is the gas constant
T is the thermodynamic (absolute) temperature
m is the molar mass
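Combining these quantities, and additionally assuming an isothermal layer (constant T, an assumption made here only for illustration, not by the article), the hydrostatic equation integrates to an exponential pressure profile:

```latex
% Hydrostatic balance with the ideal-gas law \rho = mP/(RT):
\frac{dP}{dz} = -\rho\, g_n = -\frac{m\, g_n}{R\, T}\, P
% For constant T, separating variables and integrating from sea-level pressure P_0:
P(z) = P_0 \exp\!\left(-\frac{m\, g_n}{R\, T}\, z\right)
% With the representative values m \approx 0.029\,\mathrm{kg/mol}, T \approx 288\,\mathrm{K},
% the scale height is RT/(m g_n) \approx 8.4\,\mathrm{km}.
```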
Temperature
The planetary surface of the Earth heats the troposphere by means of latent heat, thermal radiation, and sensible heat.
The troposphere is deeper in the tropics and shallower at the geographic poles: the average height of the tropical troposphere is 13 km, approximately 7.0 km greater than the 6.0 km average height of the polar troposphere; the surplus heating and consequent vertical expansion of the troposphere occur in the tropical latitudes. At the middle latitudes, tropospheric temperatures decrease from an average of about 15 °C at sea level to approximately −55 °C at the tropopause. At the equator, the tropospheric temperatures decrease from an average of about 20 °C at sea level to approximately −70 to −75 °C at the tropopause. At the geographical poles, the Arctic and the Antarctic regions, the tropospheric temperature decreases from an average of about 0 °C at sea level to approximately −45 °C at the tropopause.
Altitude
The temperature of the troposphere decreases with increasing altitude, and the rate of decrease in air temperature is measured with the environmental lapse rate (ELR), which is the numeric difference between the temperature of the planetary surface and the temperature of the tropopause, divided by the altitude. Functionally, the ELR equation assumes that the planetary atmosphere is static, with no mixing of the layers of air, either by vertical atmospheric convection or by winds that could create turbulence.
The difference in temperature derives from the planetary surface absorbing most of the energy from the sun, which then radiates outwards and heats the troposphere (the first layer of the atmosphere of Earth) while the radiation of surface heat to the upper atmosphere results in the cooling of that layer of the atmosphere. The ELR equation also assumes that the atmosphere is static, but heated air becomes buoyant, expands, and rises. The dry adiabatic lapse rate (DALR) accounts for the effect of the expansion of dry air as it rises in the atmosphere, and the wet adiabatic lapse rate (WALR) includes the effect of the condensation-rate of water vapor upon the environmental lapse rate.
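As a worked illustration of the ELR definition (the surface and tropopause temperatures are the standard mid-latitude values quoted above, and the 11 km tropopause height is an assumed round figure, not taken from this article):

```latex
\mathrm{ELR} = \frac{T_{\text{surface}} - T_{\text{tropopause}}}{z_{\text{tropopause}}}
             = \frac{15\,^{\circ}\mathrm{C} - (-55\,^{\circ}\mathrm{C})}{11\ \mathrm{km}}
             \approx 6.4\ ^{\circ}\mathrm{C/km}
```

This is close to the 6.5 °C/km average cited in the Humidity subsection below.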
Compression and expansion
A parcel of air rises and expands because of the lower atmospheric pressure at high altitudes. The expansion of the air parcel pushes outwards against the surrounding air and transfers energy (as work) from the parcel of air to the atmosphere. Because transferring energy to a parcel of air by way of heat is a slow and inefficient exchange with the environment, the rise is effectively an adiabatic process (one with no energy transfer by way of heat). As the rising parcel of air loses energy by acting upon the surrounding atmosphere, no heat energy is transferred from the atmosphere to the air parcel to compensate for the loss. The parcel of air loses energy as it reaches greater altitude, which is manifested as a decrease in the temperature of the air mass. Analogously, the reverse process occurs within a cold parcel of air that is being compressed and is sinking toward the planetary surface.
The compression and the expansion of an air parcel are reversible phenomena in which energy is not transferred into or out of the air parcel; atmospheric compression and expansion are measured as an isentropic process (dS = 0), wherein there occurs no change in entropy as the air parcel rises or falls within the atmosphere. Because the heat exchanged (δQ) is related to the change in entropy (dS) by δQ = T dS, the equation governing the air temperature as a function of altitude for a mixed atmosphere is dS/dz = 0, where S is the entropy. The isentropic equation states that atmospheric entropy does not change with altitude; the adiabatic lapse rate measures the rate at which temperature decreases with altitude under such conditions.
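To make the condition dS/dz = 0 concrete, one can insert the textbook expression for the specific entropy of an ideal gas (a sketch added here for clarity; S denotes entropy per unit mass and c_p the specific heat at constant pressure, neither defined in the article itself):

```latex
S = c_p \ln T - \frac{R}{m} \ln P + \text{const}
\;\Rightarrow\;
\frac{dS}{dz} = \frac{c_p}{T}\frac{dT}{dz} - \frac{R}{m\,P}\frac{dP}{dz} = 0
% Substituting the hydrostatic relation dP/dz = -(m g_n / R T)\,P:
\frac{c_p}{T}\frac{dT}{dz} + \frac{g_n}{T} = 0
\;\Rightarrow\;
\frac{dT}{dz} = -\frac{g_n}{c_p} \approx -9.8\ \mathrm{K/km}
```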
Humidity
If the air contains water vapor, then cooling of the air can cause the water to condense, and the air no longer functions as an ideal gas. If the air is at the saturation vapor pressure, then the rate at which temperature decreases with altitude is called the saturated adiabatic lapse rate. The actual rate at which the temperature decreases with altitude is the environmental lapse rate. In the troposphere, the average environmental lapse rate is a decrease of about 6.5 °C for every 1.0 km (1,000 m) of increased altitude.
For dry air, an approximately ideal gas, the adiabatic equation is:

P(z)\, T(z)^{-\gamma/(\gamma - 1)} = \text{constant}

wherein γ is the heat capacity ratio (γ = 7/5 for air). The combination with the equation for the air pressure yields the dry adiabatic lapse rate:

\frac{dT}{dz} = -\frac{m\, g_n}{R}\, \frac{\gamma - 1}{\gamma}
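Evaluating with representative values for dry air (γ = 7/5, m ≈ 0.029 kg/mol, g_n = 9.81 m/s², R = 8.314 J/(mol·K); these specific constants are illustrative, not quoted in this article):

```latex
\left|\frac{dT}{dz}\right| = \frac{m\,g_n}{R}\cdot\frac{\gamma - 1}{\gamma}
 = \frac{0.029 \times 9.81}{8.314} \times \frac{2}{7}
 \approx 0.0098\ \mathrm{K/m} \approx 9.8\ \mathrm{K/km}
```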
Environment
The environmental lapse rate (ELR), the rate at which temperature actually decreases with altitude, usually is unequal to the adiabatic lapse rate (ALR). If the upper air is warmer than predicted by the adiabatic lapse rate, then a rising and expanding parcel of air will arrive at the new altitude at a lower temperature than the surrounding air. In that case, the air parcel is denser than the surrounding air, and so falls back to its original altitude as an air mass that is stable against being lifted. If the upper air is cooler than predicted by the adiabatic lapse rate, then, when the air parcel rises to a new altitude, the air mass will have a higher temperature and a lower density than the surrounding air, and will continue to accelerate and rise.
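The comparison just described reduces to a single inequality; a minimal Python sketch (the function name and sample numbers are illustrative assumptions, not from the article):

```python
def parcel_is_stable(elr_K_per_km: float, alr_K_per_km: float) -> bool:
    """Return True if a lifted parcel ends up colder (denser) than its
    surroundings and sinks back, i.e. the layer is stable against lifting.

    elr_K_per_km: environmental lapse rate (how fast the surrounding
                  air actually cools with height)
    alr_K_per_km: adiabatic lapse rate of the parcel (dry ~9.8, moist less)
    """
    # If the environment cools more slowly with height than the parcel
    # does, the rising parcel becomes colder and denser than its
    # surroundings, so it sinks back: the layer is stable.
    return elr_K_per_km < alr_K_per_km

print(parcel_is_stable(6.5, 9.8))   # True: typical troposphere, dry parcel
print(parcel_is_stable(11.0, 9.8))  # False: superadiabatic layer, unstable
```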
Tropopause
The tropopause is the atmospheric boundary between the troposphere and the stratosphere, and is located by measuring the changes in temperature with increasing altitude in the troposphere and in the stratosphere. In the troposphere, the temperature of the air decreases with altitude; in the stratosphere, however, the air temperature initially is constant and then increases with altitude. The increase of air temperature at stratospheric altitudes results from the ozone layer's absorption and retention of the ultraviolet (UV) radiation that Earth receives from the Sun. The coldest layer of the atmosphere, where the temperature lapse rate changes from a positive rate (in the troposphere) to a negative rate (in the stratosphere), locates and identifies the tropopause as an inversion layer in which limited mixing of air occurs between the troposphere and the stratosphere.
Atmospheric flow
The general flow of the atmosphere is from west to east, but it can be interrupted by north-to-south or south-to-north flows; meteorology describes the west-to-east pattern as zonal flow and the north–south pattern as meridional flow. The terms are used to describe localized areas of the atmosphere at a synoptic scale; the three-cell model more fully explains the zonal and meridional flows of the planetary atmosphere of the Earth.
Three-cell model
The three-cell model of the atmosphere of the Earth describes the actual flow of the atmosphere with the tropical-latitude Hadley cell, the mid-latitude Ferrel cell, and the polar cell, which together describe the flow of energy and the circulation of the planetary atmosphere. Balance is the fundamental principle of the model: the solar energy absorbed by the Earth in a year is equal to the energy radiated (lost) into outer space. The Earth's energy balance does not apply equally to each latitude because of the varying strength of the sunlight that strikes each of the three atmospheric cells, a consequence of the inclination of the axis of planet Earth with respect to its orbit around the Sun. The resultant atmospheric circulation transports warm tropical air to the geographic poles and cold polar air to the tropics. The effect of the three cells is a tendency toward equilibrium of heat and moisture in the planetary atmosphere of Earth.
Zonal flow
A zonal flow regime is the meteorological term meaning that the general flow pattern is west to east along the Earth's latitude lines, with weak shortwaves embedded in the flow. The use of the word "zone" refers to the flow being along the Earth's latitudinal "zones". This pattern can buckle and thus become a meridional flow.
Meridional flow
When the zonal flow buckles, the atmosphere can flow in a more longitudinal (or meridional) direction, and thus the term "meridional flow" arises. Meridional flow patterns feature strong, amplified troughs of low pressure and ridges of high pressure, with more north–south flow in the general pattern than west-to-east flow.
Telephony is the field of technology involving the development, application, and deployment of telecommunications services for the purpose of electronic transmission of voice, fax, or data between distant parties. The history of telephony is intimately linked to the invention and development of the telephone.
Telephony commonly refers to the construction or operation of telephones and telephonic systems, and to a system of telecommunications in which telephonic equipment is employed in the transmission of speech or other sound between points, with or without the use of wires. The term is also used frequently to refer to computer hardware, software, and computer network systems that perform functions traditionally performed by telephone equipment. In this context the technology is specifically referred to as Internet telephony, or voice over Internet Protocol (VoIP).
Overview
The first telephones were connected directly in pairs: each user had a separate telephone wired to each location to be reached. This quickly became inconvenient and unmanageable when users wanted to communicate with more than a few people. The invention of the telephone exchange provided the solution for establishing telephone connections with any other telephone in service in the local area. Each telephone was connected to the exchange at first with one wire, later with one wire pair, the local loop. Nearby exchanges in other service areas were connected with trunk lines, and long-distance service could be established by relaying the calls through multiple exchanges.
Initially, exchange switchboards were manually operated by an attendant, commonly referred to as the "switchboard operator". When a customer cranked a handle on the telephone, it activated an indicator on the board in front of the operator, who would in response plug the operator headset into that jack and offer service. The caller had to ask for the called party by name, later by number, and the operator connected one end of a circuit into the called party's jack to alert them. If the called station answered, the operator disconnected their headset and completed the station-to-station circuit. Trunk calls were made with the assistance of other operators at other exchanges in the network.
Until the 1970s, most telephones were permanently wired to the telephone line installed at customer premises. Later, conversion to installation of jacks that terminated the inside wiring permitted simple exchange of telephone sets with telephone plugs and allowed portability of the set to multiple locations in the premises where jacks were installed. The inside wiring to all jacks was connected in one place to the wire drop which connects the building to a cable. Cables usually bring a large number of drop wires from all over a district access network to one wire center or telephone exchange. When a telephone user wants to make a telephone call, equipment at the exchange examines the dialed telephone number and connects that telephone line to another in the same wire center, or to a trunk to a distant exchange. Most of the exchanges in the world are interconnected through a system of larger switching systems, forming the public switched telephone network (PSTN).
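A toy sketch in Python of the routing decision described above (purely illustrative; the function name, prefixes, and return strings are invented for this example and do not model any real switching system):

```python
def route_call(dialed: str, local_prefixes: set) -> str:
    """Toy model of an exchange: connect within the local wire center
    when the dialed number matches a local prefix, otherwise hand the
    call off to a trunk toward a distant exchange."""
    if any(dialed.startswith(p) for p in local_prefixes):
        return "connect line-to-line within this wire center"
    return "route over a trunk toward a distant exchange"

print(route_call("555-0148", {"555"}))      # stays in the local exchange
print(route_call("212-555-0199", {"555"}))  # relayed through the PSTN
```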
In the second half of the 20th century, fax and data became important secondary applications of the network created to carry voices, and late in the century, parts of the network were upgraded with ISDN and DSL to improve handling of such traffic.
Today, telephony uses digital technology (digital telephony) in the provisioning of telephone services and systems. Telephone calls can be provided digitally, but may be restricted to cases in which the last mile is digital, or where the conversion between digital and analog signals takes place inside the telephone. This advancement has reduced costs in communication and improved the quality of voice services. The first implementation of this, ISDN, permitted speedy end-to-end data transport over telephone lines. The service was later made much less important by the ability to provide digital services based on the Internet protocol suite.
Since the advent of personal computer technology in the 1980s, computer telephony integration (CTI) has progressively provided more sophisticated telephony services, initiated and controlled by the computer, such as making and receiving voice, fax, and data calls with telephone directory services and caller identification. The integration of telephony software and computer systems is a major development in the evolution of office automation. The term is used in describing the computerized services of call centers, such as those that direct a caller to the right department of a business. It is also sometimes used for the ability to use a personal computer to initiate and manage phone calls, in which case the computer acts as the user's personal call center.
Digital telephony
Digital telephony is the use of digital electronics in the operation and provisioning of telephony systems and services. Since the late 20th century, a digital core network has replaced the traditional analog transmission and signaling systems, and much of the access network has also been digitized.
Starting with the development of transistor technology, originating from Bell Telephone Laboratories in 1947, to amplification and switching circuits in the 1950s, the public switched telephone network (PSTN) has gradually moved towards solid-state electronics and automation. Following the development of computer-based electronic switching systems incorporating metal–oxide–semiconductor (MOS) and pulse-code modulation (PCM) technologies, the PSTN gradually evolved towards the digitization of signaling and audio transmissions. Digital telephony has since dramatically improved the capacity, quality, and cost of the network. Digitization allows wideband voice on the same channel, offering the improved quality of a wider analog voice channel.
History
The earliest end-to-end analog telephone networks to be modified and upgraded to transmission networks with Digital Signal 1 (DS1/T1) carrier systems date back to the early 1960s. They were designed to support the basic 3 kHz voice channel by sampling the bandwidth-limited analog voice signal and encoding using pulse-code modulation (PCM). Early PCM codec-filters were implemented as passive resistor–capacitor–inductor filter circuits, with analog-to-digital conversion (for digitizing voices) and digital-to-analog conversion (for reconstructing voices) handled by discrete devices. Early digital telephony was impractical due to the low performance and high costs of early PCM codec-filters.
Practical digital telecommunication was enabled by the invention of the metal–oxide–semiconductor field-effect transistor (MOSFET), which led to the rapid development and wide adoption of PCM digital telephony. In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, a team demonstrated a working MOSFET at Bell Labs in 1960. MOS technology was initially overlooked by Bell because they did not find it practical for analog telephone applications, before it was commercialized by Fairchild and RCA for digital electronics such as computers.
MOS technology eventually became practical for telephone applications with the MOS mixed-signal integrated circuit, which combines analog and digital signal processing on a single chip, developed by former Bell engineer David A. Hodges with Paul R. Gray at UC Berkeley in the early 1970s. In 1974, Hodges and Gray worked with R.E. Suarez to develop MOS switched capacitor (SC) circuit technology, which they used to develop a digital-to-analog converter (DAC) chip, using MOS capacitors and MOSFET switches for data conversion. MOS analog-to-digital converter (ADC) and DAC chips were commercialized by 1974.
MOS SC circuits led to the development of PCM codec-filter chips in the late 1970s. The silicon-gate CMOS (complementary MOS) PCM codec-filter chip, developed by Hodges and W.C. Black in 1980, has since been the industry standard for digital telephony. By the 1990s, telecommunication networks such as the public switched telephone network (PSTN) had been largely digitized with very-large-scale integration (VLSI) CMOS PCM codec-filters, widely used in electronic switching systems for telephone exchanges, private branch exchanges (PBX) and key telephone systems (KTS); user-end modems; data transmission applications such as digital loop carriers, pair gain multiplexers, telephone loop extenders, integrated services digital network (ISDN) terminals, digital cordless telephones and digital cell phones; and applications such as speech recognition equipment, voice data storage, voice mail and digital tapeless answering machines. The bandwidth of digital telecommunication networks has been rapidly increasing at an exponential rate, as observed by Edholm's law, largely driven by the rapid scaling and miniaturization of MOS technology.
Uncompressed PCM digital audio with 8-bit depth and 8 kHz sample rate requires a bit rate of 64 kbit/s, which was impractical for early digital telecommunication networks with limited network bandwidth. A solution to this issue was linear predictive coding (LPC), a speech coding data compression algorithm that was first proposed by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. LPC was capable of audio data compression down to 2.4 kbit/s, leading to the first successful real-time conversations over digital networks in the 1970s. LPC has since been the most widely used speech coding method. Another audio data compression method, a discrete cosine transform (DCT) algorithm called the modified discrete cosine transform (MDCT), has been widely adopted for speech coding in voice-over-IP (VoIP) applications since the late 1990s.
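For reference, the 64 kbit/s figure follows directly from the stated PCM parameters, and the LPC figure implies roughly a 27:1 reduction:

```latex
8\ \tfrac{\text{bits}}{\text{sample}} \times 8000\ \tfrac{\text{samples}}{\text{s}}
  = 64\,000\ \text{bit/s} = 64\ \text{kbit/s},
\qquad
\frac{64\ \text{kbit/s}}{2.4\ \text{kbit/s}} \approx 26.7
```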
The development of transmission methods such as SONET and fiber optic transmission further advanced digital transmission. Although analog carrier systems existed that multiplexed multiple analog voice channels onto a single transmission medium, digital transmission allowed lower cost and more channels multiplexed on the transmission medium. Today the end instrument often remains analog but the analog signals are typically converted to digital signals at the serving area interface (SAI), central office (CO), or other aggregation point. Digital loop carriers (DLC) and fiber to the x place the digital network ever closer to the customer premises, relegating the analog local loop to legacy status.
IP telephony
The field of technology available for telephony has broadened with the advent of new communication technologies. Telephony now includes the technologies of Internet services and mobile communication, including video conferencing.
The new technologies based on Internet Protocol (IP) concepts are often referred to separately as voice over IP (VoIP) telephony, also commonly called IP telephony or Internet telephony. Unlike traditional phone service, IP telephony service is relatively unregulated by government. In the United States, the Federal Communications Commission (FCC) regulates phone-to-phone connections, but has stated that it does not plan to regulate connections between a phone user and an IP telephony service provider.
A specialization of digital telephony, Internet Protocol (IP) telephony involves the application of the digital networking technology that was the foundation of the Internet to create, transmit, and receive telecommunications sessions over computer networks. Internet telephony is commonly known as voice over Internet Protocol (VoIP), reflecting the principle, but it has been referred to by many other terms. VoIP has proven to be a disruptive technology that is rapidly replacing traditional telephone infrastructure technologies. As of January 2005, up to 10% of telephone subscribers in Japan and South Korea had switched to this digital telephone service. A January 2005 Newsweek article suggested that Internet telephony may be "the next big thing". As of 2006, many VoIP companies offer service to consumers and businesses.
A significant advancement in mobile telephony has been the integration of IP technologies into mobile networks, notably through Voice over LTE (VoLTE) and Voice over 5G (Vo5G). These technologies enable voice calls to be transmitted over the same IP-based infrastructure used for data services, offering improved call quality and faster connections compared to traditional circuit-switched networks. VoLTE and Vo5G are becoming the standard for mobile voice communication in many regions, as mobile operators transition to all-IP networks.
IP telephony uses an Internet connection and hardware IP phones, analog telephone adapters, or softphone computer applications to transmit conversations encoded as data packets. While one of the most common and cost-effective uses of IP telephony is through connections over WiFi hotspots, it is also employed on private networks and over other types of Internet connections, which may or may not have a direct link to the global telephone network.
Social impact research
Direct person-to-person communication includes non-verbal cues expressed in facial and other bodily articulation that cannot be transmitted in traditional voice telephony. Video telephony restores such interactions to varying degrees. Social Context Cues Theory is a model to measure the success of different types of communication in maintaining the non-verbal cues present in face-to-face interactions. The research examines many different cues, such as the physical context, different facial expressions, body movements, tone of voice, touch and smell.
Various communication cues are lost with the use of the telephone. The communicating parties are not able to see body movements, and lack touch and smell. Although this diminished ability to identify social cues is well known, Wiesenfeld, Raghuram, and Garud point out that there is value and efficiency in this type of communication for different tasks. They examine workplaces in which different types of communication, such as the telephone, are more useful than face-to-face interaction.
The expansion of communication to mobile telephone service has created a different filter of the social cues than the land-line telephone. The use of instant messaging, such as texting, on mobile telephones has created a sense of community. In The Social Construction of Mobile Telephony it is suggested that each phone call and text message is more than an attempt to converse. Instead, it is a gesture which maintains the social network between family and friends. Although there is a loss of certain social cues through telephones, mobile phones bring new forms of expression of different cues that are understood by different audiences. New language additives attempt to compensate for the inherent lack of non-physical interaction.
Another social theory supported through telephony is the Media Dependency Theory. This theory concludes that people use media or a resource to attain certain goals, and that there is a link between the media, the audience, and the larger social system. Telephones, depending on the person, help attain certain goals such as accessing information, keeping in contact with others, sending quick communications, and entertainment.
A waveguide is a structure that guides waves by restricting the transmission of energy to one direction. Common types of waveguides include acoustic waveguides, which direct sound; optical waveguides, which direct light; and radio-frequency waveguides, which direct electromagnetic waves other than light, such as radio waves.
Without the physical constraint of a waveguide, waves would expand into three-dimensional space and their intensities would decrease according to the inverse square law.
There are different types of waveguides for different types of waves. The original and most common meaning is a hollow conductive metal pipe used to carry high frequency radio waves, particularly microwaves. Dielectric waveguides are used at higher radio frequencies, and transparent dielectric waveguides and optical fibers serve as waveguides for light. In acoustics, air ducts and horns are used as waveguides for sound in musical instruments and loudspeakers, and specially-shaped metal rods conduct ultrasonic waves in ultrasonic machining.
The geometry of a waveguide reflects its function; in addition to more common types that channel the wave in one dimension, there are two-dimensional slab waveguides which confine waves to two dimensions. The frequency of the transmitted wave also dictates the size of a waveguide: each waveguide has a cutoff wavelength determined by its size and will not conduct waves of greater wavelength; an optical fiber that guides light will not transmit microwaves which have a much larger wavelength. Some naturally occurring structures can also act as waveguides. The SOFAR channel layer in the ocean can guide the sound of whale song across enormous distances.
A waveguide of any cross-sectional shape can support EM waves, though irregular shapes are difficult to analyse; commonly used waveguides are rectangular or circular in shape.
Uses
The uses of waveguides for transmitting signals were known even before the term was coined. The phenomenon of sound waves guided through a taut wire has been known for a long time, as has sound guided through a hollow pipe such as a cave or a medical stethoscope. Other uses of waveguides are in transmitting power between the components of a system such as a radio, radar, or optical device. Waveguides are the fundamental principle of guided wave testing (GWT), one of the many methods of non-destructive evaluation.
Specific examples:
Optical fibers transmit light and signals for long distances with low attenuation and a wide usable range of wavelengths.
In a microwave oven a waveguide transfers power from the magnetron, where waves are formed, to the cooking chamber.
In a radar, a waveguide transfers radio frequency energy to and from the antenna, where the impedance needs to be matched for efficient power transmission (see below).
Rectangular and circular waveguides are commonly used to connect feeds of parabolic dishes to their electronics, either low-noise receivers or power amplifier/transmitters.
Waveguides are used in scientific instruments to measure optical, acoustic and elastic properties of materials and objects. The waveguide can be put in contact with the specimen (as in medical ultrasonography), in which case the waveguide ensures that the power of the testing wave is conserved, or the specimen may be put inside the waveguide (as in a dielectric constant measurement), so that smaller objects can be tested and the accuracy is better.
A transmission line is a commonly used specific type of waveguide.
History
The first structure for guiding waves was proposed by J. J. Thomson in 1893, and was first experimentally tested by Oliver Lodge in 1894. The first mathematical analysis of electromagnetic waves in a metal cylinder was performed by Lord Rayleigh in 1897. For sound waves, Lord Rayleigh published a full mathematical analysis of propagation modes in his seminal work, "The Theory of Sound". Jagadish Chandra Bose researched millimeter wavelengths using waveguides, and in 1897 described to the Royal Institution in London his research carried out in Kolkata.
The study of dielectric waveguides (such as optical fibers, see below) began as early as the 1920s, by several people, the most famous of whom are Rayleigh, Sommerfeld and Debye. Optical fiber began to receive special attention in the 1960s due to its importance to the communications industry.
The development of radio communication initially occurred at the lower frequencies because these could be more easily propagated over large distances. The long wavelengths made these frequencies unsuitable for use in hollow metal waveguides because of the impractically large diameter tubes required. Consequently, research into hollow metal waveguides stalled and the work of Lord Rayleigh was forgotten for a time and had to be rediscovered by others. Practical investigations resumed in the 1930s by George C. Southworth at Bell Labs and Wilmer L. Barrow at MIT. Southworth at first took the theory from papers on waves in dielectric rods because the work of Lord Rayleigh was unknown to him. This misled him somewhat; some of his experiments failed because he was not aware of the phenomenon of waveguide cutoff frequency already found in Lord Rayleigh's work. Serious theoretical work was taken up by John R. Carson and Sallie P. Mead. This work led to the discovery that for the TE01 mode in circular waveguide losses go down with frequency and at one time this was a serious contender for the format for long-distance telecommunications.
The importance of radar in World War II gave a great impetus to waveguide research, at least on the Allied side. The magnetron, developed in 1940 by John Randall and Harry Boot at the University of Birmingham in the United Kingdom, provided a good power source and made microwave radar feasible. The most important centre of US research was at the Radiation Laboratory (Rad Lab) at MIT but many others took part in the US, and in the UK such as the Telecommunications Research Establishment. The head of the Fundamental Development Group at Rad Lab was Edward Mills Purcell. His researchers included Julian Schwinger, Nathan Marcuvitz, Carol Gray Montgomery, and Robert H. Dicke. Much of the Rad Lab work concentrated on finding lumped element models of waveguide structures so that components in waveguide could be analysed with standard circuit theory. Hans Bethe was also briefly at Rad Lab, but while there he produced his small aperture theory which proved important for waveguide cavity filters, first developed at Rad Lab. The German side, on the other hand, largely ignored the potential of waveguides in radar until very late in the war. So much so that when radar parts from a downed British plane were sent to Siemens & Halske for analysis, even though they were recognised as microwave components, their purpose could not be identified.
German academics were even allowed to continue publicly publishing their research in this field because it was not felt to be important.
Immediately after World War II waveguide was the technology of choice in the microwave field. However, it has some problems; it is bulky, expensive to produce, and the cutoff frequency effect makes it difficult to produce wideband devices. Ridged waveguide can increase bandwidth beyond an octave, but a better solution is to use a technology working in TEM mode (that is, non-waveguide) such as coaxial conductors since TEM does not have a cutoff frequency. A shielded rectangular conductor can also be used and this has certain manufacturing advantages over coax and can be seen as the forerunner of the planar technologies (stripline and microstrip). However, planar technologies really started to take off when printed circuits were introduced. These methods are significantly cheaper than waveguide and have largely taken its place in most bands. However, waveguide is still favoured in the higher microwave bands from around Ku band upwards.
Properties
Propagation modes and cutoff frequencies
A propagation mode in a waveguide is one solution of the wave equations, or, in other words, the form of the wave. Due to the constraints of the boundary conditions, there are only limited frequencies and forms for the wave function which can propagate in the waveguide. The lowest frequency at which a certain mode can propagate is the cutoff frequency of that mode. The mode with the lowest cutoff frequency is the fundamental mode of the waveguide, and its cutoff frequency is the waveguide cutoff frequency.
Propagation modes are computed by solving the Helmholtz equation alongside a set of boundary conditions depending on the geometrical shape and materials bounding the region. The usual assumption for infinitely long uniform waveguides allows us to assume a propagating form for the wave, i.e. stating that every field component has a known dependency on the propagation direction (i.e. z). More specifically, the common approach is to first replace all unknown time-varying fields u(x, y, z, t) (assuming for simplicity Cartesian field components) with their complex phasor representation U(x, y, z), sufficient to fully describe any infinitely long single-tone signal at frequency f (angular frequency ω = 2πf), and rewrite the Helmholtz equation and boundary conditions accordingly. Then, every unknown field is forced to have a form like U(x, y, z) = Û(x, y) e^{-γz}, where the term γ represents the propagation constant (still unknown) along the direction along which the waveguide extends to infinity. The Helmholtz equation can be rewritten to accommodate such a form, and the resulting equality needs to be solved for γ and Û, yielding in the end an eigenvalue equation for γ and a corresponding eigenfunction Û(x, y) for each solution of the former.
The propagation constant γ of the guided wave is complex, in general. For a lossless case, the propagation constant might be found to take on either real or imaginary values, depending on the chosen solution of the eigenvalue equation and on the angular frequency ω. When γ is purely real, the mode is said to be "below cutoff", since the amplitude of the field phasors tends to decrease exponentially with propagation; an imaginary γ, instead, represents modes said to be "in propagation" or "above cutoff", as the complex amplitude of the phasors does not change with z.
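As a concrete illustration (standard results for hollow rectangular guides; the WR-90 dimensions are an assumed example, not taken from this article): for a lossless guide the propagation constant follows from the mode's cutoff wavenumber k_c, and the dominant TE10 mode of a rectangular guide of width a has cutoff frequency c/(2a):

```latex
\gamma = \sqrt{k_c^2 - k^2}, \qquad k = \omega\sqrt{\mu\varepsilon}
% k < k_c: \gamma real, mode below cutoff (evanescent)
% k > k_c: \gamma imaginary, mode propagates
f_c = \frac{c}{2a} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \times 0.02286\ \mathrm{m}}
    \approx 6.56\ \mathrm{GHz} \quad \text{(WR-90, } a = 22.86\ \mathrm{mm}\text{)}
```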
Impedance matching
In circuit theory, the impedance is a generalization of electrical resistance in the case of alternating current, and is measured in ohms (Ω). A waveguide in circuit theory is described by a transmission line having a length and a characteristic impedance. In other words, the impedance indicates the ratio of voltage to current of the circuit component (in this case a waveguide) during propagation of the wave. This description of the waveguide was originally intended for alternating current, but is also suitable for electromagnetic and sound waves, once the wave and material properties (such as pressure, density, dielectric constant) are properly converted into electrical terms (current and impedance, for example).
Impedance matching is important when components of an electric circuit are connected (waveguide to antenna for example): The impedance ratio determines how much of the wave is transmitted forward and how much is reflected. In connecting a waveguide to an antenna a complete transmission is usually required, so an effort is made to match their impedances.
The reflection coefficient can be calculated using: Γ = (Z₂ − Z₁)/(Z₂ + Z₁), where Γ (Gamma) is the reflection coefficient (0 denotes full transmission, 1 full reflection, and 0.5 a reflection of half the incoming voltage), and Z₁ and Z₂ are the impedances of the first component (from which the wave enters) and the second component, respectively.
An impedance mismatch creates a reflected wave, which, added to the incoming waves, creates a standing wave. An impedance mismatch can also be quantified with the standing wave ratio (SWR, or VSWR for voltage), which is connected to the impedance ratio and reflection coefficient by: VSWR = |V|max / |V|min = (1 + |Γ|)/(1 − |Γ|), where |V|min and |V|max are the minimum and maximum values of the voltage absolute value, and the VSWR is the voltage standing wave ratio, whose value of 1 denotes full transmission, without reflection and thus no standing wave, while very large values mean high reflection and a standing wave pattern.
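A minimal Python sketch of these two formulas (the 50 Ω/75 Ω junction is an illustrative assumption, not an example from this article):

```python
def reflection_coefficient(z1: float, z2: float) -> float:
    """Voltage reflection coefficient at the junction from impedance z1 to z2."""
    return (z2 - z1) / (z2 + z1)

def vswr(gamma: float) -> float:
    """Voltage standing wave ratio from the reflection coefficient magnitude."""
    g = abs(gamma)
    return (1 + g) / (1 - g)

# Example: a 50-ohm line feeding a 75-ohm load.
g = reflection_coefficient(50.0, 75.0)  # 0.2 -> |Gamma|^2 = 4% of power reflected
print(g, vswr(g))                       # 0.2, 1.5
```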
Electromagnetic waveguides
Radio-frequency waveguides
Waveguides can be constructed to carry waves over a wide portion of the electromagnetic spectrum, but are especially useful in the microwave and optical frequency ranges. Depending on the frequency, they can be constructed from either conductive or dielectric materials. Waveguides are used for transferring both power and communication signals.
Optical waveguides
Waveguides used at optical frequencies are typically dielectric waveguides, structures in which a dielectric material with high permittivity, and thus high index of refraction, is surrounded by a material with lower permittivity. The structure guides optical waves by total internal reflection. An example of an optical waveguide is optical fiber.
Other types of optical waveguide are also used, including photonic-crystal fiber, which guides waves by any of several distinct mechanisms. Guides in the form of a hollow tube with a highly reflective inner surface have also been used as light pipes for illumination applications. The inner surfaces may be polished metal, or may be covered with a multilayer film that guides light by Bragg reflection (this is a special case of a photonic-crystal fiber). One can also use small prisms around the pipe which reflect light via total internal reflection —such confinement is necessarily imperfect, however, since total internal reflection can never truly guide light within a lower-index core (in the prism case, some light leaks out at the prism corners).
Acoustic waveguides
An acoustic waveguide is a physical structure for guiding sound waves. Sound in an acoustic waveguide behaves like electromagnetic waves on a transmission line. Waves on a string, like the ones in a tin can telephone, are a simple example of an acoustic waveguide. Another example are pressure waves in the pipes of an organ. The term acoustic waveguide is also used to describe elastic waves guided in micro-scale devices, like those employed in piezoelectric delay lines and in stimulated Brillouin scattering.
Mathematical waveguides
Waveguides are interesting objects of study from a strictly mathematical perspective. A waveguide (or tube) is defined as a type of boundary condition on the wave equation such that the wave function must be equal to zero on the boundary and the allowed region is finite in all dimensions but one (an infinitely long cylinder is an example). A large number of interesting results can be proven from these general conditions. It turns out that any tube with a bulge (where the width of the tube increases) admits at least one bound state that exists inside the mode gaps. The frequencies of all the bound states can be identified by using a pulse short in time. This can be shown using the variational principle. An interesting result by Jeffrey Goldstone and Robert Jaffe is that any tube of constant width with a twist admits a bound state.
Sound synthesis
Sound synthesis uses digital delay lines as computational elements to simulate wave propagation in tubes of wind instruments and the vibrating strings of string instruments.
In abstract algebra, group theory studies the algebraic structures known as groups.
The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right.
Various physical systems, such as crystals and the hydrogen atom, and three of the four known fundamental forces in the universe, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.
The early history of group theory dates from the 19th century. One of the most important mathematical achievements of the 20th century was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups.
History
Group theory has three main historical sources: number theory, the theory of algebraic equations, and geometry. The number-theoretic strand was begun by Leonhard Euler, and developed by Gauss's work on modular arithmetic and additive and multiplicative groups related to quadratic fields. Early results about permutation groups were obtained by Lagrange, Ruffini, and Abel in their quest for general solutions of polynomial equations of high degree. Évariste Galois coined the term "group" and established a connection, now known as Galois theory, between the nascent theory of groups and field theory. In geometry, groups first became important in projective geometry and, later, non-Euclidean geometry. Felix Klein's Erlangen program proclaimed group theory to be the organizing principle of geometry.
Galois, in the 1830s, was the first to employ groups to determine the solvability of polynomial equations. Arthur Cayley and Augustin Louis Cauchy pushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems from geometrical situations. In an attempt to come to grips with possible geometries (such as Euclidean, hyperbolic or projective geometry) using group theory, Felix Klein initiated the Erlangen programme. Sophus Lie, in 1884, started using groups (now called Lie groups) attached to analytic problems. Thirdly, groups were, at first implicitly and later explicitly, used in algebraic number theory.
The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth of abstract algebra in the early 20th century, representation theory, and many more influential spin-off domains. The classification of finite simple groups is a vast body of work from the mid 20th century, classifying all the finite simple groups.
Main classes of groups
The range of groups being considered has gradually expanded from finite permutation groups and special examples of matrix groups to abstract groups that may be specified through a presentation by generators and relations.
Permutation groups
The first class of groups to undergo a systematic study was permutation groups. Given any set X and a collection G of bijections of X into itself (known as permutations) that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, any permutation group G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself (X = G) by means of the left regular representation.
In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5, the alternating group An is simple, i.e. does not admit any proper normal subgroups. This fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals.
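A small Python sketch of these ideas (illustrative only, not from the article): closing a set of generating permutations under composition recovers the full symmetric group S3.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p after q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def generate(gens):
    """Close a set of permutations under composition; in a finite group,
    inverses and the identity appear automatically as repeated products."""
    group, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(a, b) for a in frontier for b in group} | \
              {compose(b, a) for a in frontier for b in group}
        frontier = new - group
        group |= new
    return group

# S3 as generated by a transposition and a 3-cycle (0-indexed images).
s3 = generate({(1, 0, 2), (1, 2, 0)})
print(len(s3))                            # 6 = 3!
print(s3 == set(permutations(range(3))))  # True: the generators give all of S3
```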
Matrix groups
The next important class of groups is given by matrix groups, or linear groups. Here G is a set consisting of invertible matrices of given order n over a field K, closed under products and inverses. Such a group acts on the n-dimensional vector space K^n by linear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the group G.
Transformation groups
Permutation groups and matrix groups are special cases of transformation groups: groups that act on a certain space X preserving its inherent structure. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a transformation group is closely related with the concept of a symmetry group: transformation groups frequently consist of all transformations that preserve a certain structure.
The theory of transformation groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds by homeomorphisms or diffeomorphisms. The groups themselves may be discrete or continuous.
Abstract groups
Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group began to take hold, where "abstract" means that the nature of the elements are ignored in such a way that two isomorphic groups are considered as the same group. A typical way of specifying an abstract group is through a presentation by generators and relations, G = ⟨S | R⟩.
A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory. If a group G is a permutation group on a set X, the factor group G/H is no longer acting on X; but the idea of an abstract group permits one not to worry about this discrepancy.
The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant under isomorphism, as well as the classes of groups with a given such property: finite groups, periodic groups, simple groups, solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation of abstract algebra in the works of Hilbert, Emil Artin, Emmy Noether, and mathematicians of their school.
Groups with additional structure
An important elaboration of the concept of a group occurs if G is endowed with additional structure, notably, of a topological space, differentiable manifold, or algebraic variety. If the group operations m (multiplication) and i (inversion),

m : G × G → G, (g, h) ↦ gh,   i : G → G, g ↦ g⁻¹,
are compatible with this structure, that is, they are continuous, smooth or regular (in the sense of algebraic geometry) maps, then G is a topological group, a Lie group, or an algebraic group.
The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain for abstract harmonic analysis, whereas Lie groups (frequently realized as transformation groups) are the mainstays of differential geometry and unitary representation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus, compact connected Lie groups have been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a group Γ can be realized as a lattice in a topological group G, the geometry and analysis pertaining to G yield important results about Γ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a single p-adic analytic group G has a family of quotients which are finite p-groups of various orders, and properties of G translate into the properties of its finite quotients.
Branches of group theory
Finite group theory
During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially the local theory of finite groups and the theory of solvable and nilpotent groups. As a consequence, the complete classification of finite simple groups was achieved, meaning that all those simple groups from which all finite groups can be built are now known.
During the second half of the twentieth century, mathematicians such as Chevalley and Steinberg also increased our understanding of finite analogs of classical groups, and other related groups. One such family of groups is the family of general linear groups over finite fields.
Finite groups often occur when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory of Lie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associated Weyl groups. These are finite groups generated by reflections which act on a finite-dimensional Euclidean space. The properties of finite groups can thus play a role in subjects such as theoretical physics and chemistry.
Representation of groups
Saying that a group G acts on a set X means that every element of G defines a bijective map on the set X in a way compatible with the group structure. When X has more structure, it is useful to restrict this notion further: a representation of G on a vector space V is a group homomorphism:

ρ : G → GL(V),
where GL(V) consists of the invertible linear transformations of V. In other words, to every group element g is assigned an automorphism ρ(g) such that ρ(gh) = ρ(g) ∘ ρ(h) for any h in G.
This definition can be understood in two directions, both of which give rise to whole new domains of mathematics. On the one hand, it may yield new information about the group G: often, the group operation in G is abstractly given, but via ρ, it corresponds to the multiplication of matrices, which is very explicit. On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, if G is finite, it is known that V above decomposes into irreducible parts (see Maschke's theorem). These parts, in turn, are much more easily manageable than the whole V (via Schur's lemma).
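A standard concrete example (chosen here for illustration; it is not taken from this article): the cyclic group Z/nZ has a two-dimensional real representation by rotation matrices.

```latex
\rho : \mathbb{Z}/n\mathbb{Z} \to GL_2(\mathbb{R}), \qquad
\rho(k) = \begin{pmatrix}
  \cos\frac{2\pi k}{n} & -\sin\frac{2\pi k}{n} \\[2pt]
  \sin\frac{2\pi k}{n} & \phantom{-}\cos\frac{2\pi k}{n}
\end{pmatrix}
% Homomorphism check: rotation angles add, so \rho(k + l) = \rho(k)\,\rho(l).
```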
Given a group G, representation theory then asks what representations of G exist. There are several settings, and the employed methods and obtained results are rather different in every case: representation theory of finite groups and representations of Lie groups are two main subdomains of the theory. The totality of representations is governed by the group's characters. For example, Fourier polynomials can be interpreted as the characters of U(1), the group of complex numbers of absolute value 1, acting on the L2-space of periodic functions.
Lie theory
A Lie group is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse, page 3.
Lie groups represent the best-developed theory of continuous symmetry of mathematical objects and structures, which makes them indispensable tools for many parts of contemporary mathematics, as well as for modern theoretical physics. They provide a natural framework for analysing the continuous symmetries of differential equations (differential Galois theory), in much the same way as permutation groups are used in Galois theory for analysing the discrete symmetries of algebraic equations. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations.
Combinatorial and geometric group theory
Groups can be described in different ways. Finite groups can be described by writing down the group table consisting of all possible multiplications g • h. A more compact way of defining a group is by generators and relations, also called the presentation of a group. Given any set F of generators, the free group generated by F surjects onto the group G. The kernel of this map is called the subgroup of relations, generated by some subset D. The presentation is usually denoted by ⟨F | D⟩. For example, the group presentation ⟨a, b | aba⁻¹b⁻¹⟩ describes a group which is isomorphic to Z × Z. A string consisting of generator symbols and their inverses is called a word.
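As a small illustration of words over a set of generators (a hypothetical sketch; the function name is our own), the following reduces a word in a free group on two generators by cancelling adjacent inverse pairs, writing uppercase letters for inverses:

```python
# Free reduction of a word over generators a, b, with 'A' standing for the
# inverse of 'a' and 'B' for the inverse of 'b'.
def free_reduce(word: str) -> str:
    """Cancel adjacent inverse pairs (aA, Aa, bB, Bb) until none remain."""
    stack = []
    for symbol in word:
        if stack and stack[-1] == symbol.swapcase():
            stack.pop()          # an adjacent pair such as aA cancels
        else:
            stack.append(symbol)
    return "".join(stack)

print(free_reduce("abBA"))   # -> "" (the empty word, i.e. the identity)
print(free_reduce("abAB"))   # -> "abAB" (already reduced)
```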
Combinatorial group theory studies groups from the perspective of generators and relations. It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection of graphs via their fundamental groups. A fundamental theorem of this area is that every subgroup of a free group is free. | Group theory | Wikipedia | 405 | 41890 | https://en.wikipedia.org/wiki/Group%20theory | Mathematics | Algebra | null |
There are several natural questions arising from giving a group by its presentation. The word problem asks whether two words are effectively the same group element. By relating the problem to Turing machines, one can show that there is in general no algorithm solving this task. Another, generally harder, algorithmically insoluble problem is the group isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example, the group with presentation ⟨x, y | xyxyx = e⟩ is isomorphic to the additive group Z of integers, although this may not be immediately apparent. (Writing z = xy, one has G ≅ ⟨z, y | z³ = y⟩ ≅ ⟨z⟩.)
Geometric group theory attacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on. The first idea is made precise by means of the Cayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs the word metric given by the length of a minimal path between the elements. A theorem of Milnor and Švarc then says that given a group G acting in a reasonable manner on a metric space X, for example a compact manifold, G is quasi-isometric (i.e. it looks similar from a distance) to the space X.
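The word metric can be computed directly for small groups. The sketch below (our own construction, with an arbitrary choice of generators) runs a breadth-first search over the Cayley graph of the symmetric group S3, realized as permutation tuples:

```python
from collections import deque

def compose(p, q):
    """Composition of permutations as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

identity = (0, 1, 2)
gens = [(1, 0, 2), (1, 2, 0)]              # a transposition and a 3-cycle
steps = gens + [inverse(g) for g in gens]  # edges use generators and inverses

# Breadth-first search: dist[g] is the word-metric distance from the identity,
# i.e. the length of a shortest word in the generators expressing g.
dist = {identity: 0}
queue = deque([identity])
while queue:
    g = queue.popleft()
    for s in steps:
        h = compose(s, g)
        if h not in dist:
            dist[h] = dist[g] + 1
            queue.append(h)

print(dist)   # all 6 elements of S3 together with their distances
```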
Connection of groups and symmetry
Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example
If X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups.
If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (an isometry). The corresponding group is called isometry group of X.
If instead angles are preserved, one speaks of conformal maps. Conformal maps give rise to Kleinian groups, for example.
Symmetries are not restricted to geometrical objects, but include algebraic objects as well. For instance, the equation x² − 3 = 0 has the two solutions √3 and −√3. In this case, the group that exchanges the two roots is the Galois group belonging to the equation. Every polynomial equation in one variable has a Galois group, that is a certain permutation group on its roots. | Group theory | Wikipedia | 483 | 41890 | https://en.wikipedia.org/wiki/Group%20theory | Mathematics | Algebra | null |
The axioms of a group formalize the essential aspects of symmetry. Symmetries form a group: they are closed because if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry and the associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative.
Frucht's theorem says that every group is the symmetry group of some graph. So every abstract group is actually the symmetries of some explicit object.
The notion of "preserving the structure" of an object can be made precise by working in a category. Maps preserving the structure are then the morphisms, and the symmetry group is the automorphism group of the object in question.
Applications of group theory
Applications of group theory abound. Almost all structures in abstract algebra are special cases of groups. Rings, for example, can be viewed as abelian groups (corresponding to addition) together with a second operation (corresponding to multiplication). Therefore, group theoretic arguments underlie large parts of the theory of those entities.
Galois theory
Galois theory uses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). The fundamental theorem of Galois theory provides a link between algebraic field extensions and group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the corresponding Galois group. For example, S5, the symmetric group on 5 elements, is not solvable, which implies that the general quintic equation cannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such as class field theory.
Algebraic topology | Group theory | Wikipedia | 408 | 41890 | https://en.wikipedia.org/wiki/Group%20theory | Mathematics | Algebra | null |
Algebraic topology is another domain which prominently associates groups to the objects the theory is interested in. There, groups are used to describe certain invariants of topological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to some deformation. For example, the fundamental group "counts" how many paths in the space are essentially different. The Poincaré conjecture, proved in 2002/2003 by Grigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use of Eilenberg–MacLane spaces which are spaces with prescribed homotopy groups. Similarly algebraic K-theory relies in a way on classifying spaces of groups. Finally, the name of the torsion subgroup of an infinite group shows the legacy of topology in group theory.
Algebraic geometry
Algebraic geometry likewise uses group theory in many ways. Abelian varieties have been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures (for example, the Hodge conjecture, in certain cases). The one-dimensional case, namely elliptic curves, is studied in particular detail. They are both theoretically and practically intriguing. In another direction, toric varieties are algebraic varieties acted on by a torus. Toroidal embeddings have recently led to advances in algebraic geometry, in particular resolution of singularities.
Algebraic number theory
Algebraic number theory makes use of groups for some important applications. For example, Euler's product formula,
∑_n 1/n^s = ∏_p 1/(1 − p^(−s))  (sum over all positive integers n, product over all primes p),
captures the fact that any integer decomposes in a unique way into primes. The failure of this statement for more general rings gives rise to class groups and regular primes, which feature in Kummer's treatment of Fermat's Last Theorem.
Harmonic analysis
Analysis on Lie groups and certain other groups is called harmonic analysis. Haar measures, that is, integrals invariant under the translation in a Lie group, are used for pattern recognition and other image processing techniques.
Combinatorics
In combinatorics, the notion of permutation group and the concept of group action are often used to simplify the counting of a set of objects; see in particular Burnside's lemma.
Music
The presence of the 12-periodicity in the circle of fifths yields applications of elementary group theory in musical set theory. Transformational theory models musical transformations as elements of a mathematical group. | Group theory | Wikipedia | 510 | 41890 | https://en.wikipedia.org/wiki/Group%20theory | Mathematics | Algebra | null |
Physics
In physics, groups are important because they describe the symmetries which the laws of physics seem to obey. According to Noether's theorem, every continuous symmetry of a physical system corresponds to a conservation law of the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include the Standard Model, gauge theory, the Lorentz group, and the Poincaré group.
Group theory can be used to resolve the incompleteness of the statistical interpretations of mechanics developed by Willard Gibbs, which relate to the summing of an infinite number of probabilities to yield a meaningful solution.
Chemistry and materials science
In chemistry and materials science, point groups are used to classify regular polyhedra and the symmetries of molecules, and space groups are used to classify crystal structures. The assigned groups can then be used to determine physical properties (such as chemical polarity and chirality), spectroscopic properties (particularly useful for Raman spectroscopy, infrared spectroscopy, circular dichroism spectroscopy, magnetic circular dichroism spectroscopy, UV/Vis spectroscopy, and fluorescence spectroscopy), and to construct molecular orbitals.
Molecular symmetry is responsible for many physical and spectroscopic properties of compounds and provides relevant information about how chemical reactions occur. In order to assign a point group for any given molecule, it is necessary to find the set of symmetry operations present in it. The symmetry operation is an action, such as a rotation around an axis or a reflection through a mirror plane. In other words, it is an operation that moves the molecule such that it is indistinguishable from the original configuration. In group theory, the rotation axes and mirror planes are called "symmetry elements". These elements can be a point, line or plane with respect to which the symmetry operation is carried out. The symmetry operations of a molecule determine the specific point group for this molecule. | Group theory | Wikipedia | 405 | 41890 | https://en.wikipedia.org/wiki/Group%20theory | Mathematics | Algebra | null |
In chemistry, there are five important symmetry operations. They are identity operation (E), rotation operation or proper rotation (Cn), reflection operation (σ), inversion (i) and rotation reflection operation or improper rotation (Sn). The identity operation (E) consists of leaving the molecule as it is. This is equivalent to any number of full rotations around any axis. This is a symmetry of all molecules, whereas the symmetry group of a chiral molecule consists of only the identity operation. An identity operation is a characteristic of every molecule even if it has no symmetry. Rotation around an axis (Cn) consists of rotating the molecule around a specific axis by a specific angle. It is rotation through the angle 360°/n, where n is an integer, about a rotation axis. For example, if a water molecule rotates 180° around the axis that passes through the oxygen atom and between the hydrogen atoms, it is in the same configuration as it started. In this case, C₂² = E, since applying it twice produces the identity operation. In molecules with more than one rotation axis, the Cn axis having the largest value of n is the highest order rotation axis or principal axis. For example in boron trifluoride (BF3), the highest order of rotation axis is C3, so the principal axis of rotation is C3.
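As a quick numerical check of the C2 rotation of water (a sketch with schematic, not measured, coordinates), one can apply the 180° rotation matrix and confirm that the two hydrogen atoms exchange places:

```python
import numpy as np

# Schematic water geometry: oxygen at the origin, hydrogens mirror-symmetric
# about the y axis, which serves as the C2 axis here.
O  = np.array([0.0,   0.0,  0.0])
H1 = np.array([0.76,  0.59, 0.0])
H2 = np.array([-0.76, 0.59, 0.0])

C2 = np.diag([-1.0, 1.0, -1.0])   # rotation by 180 degrees about the y axis

assert np.allclose(C2 @ O, O)     # the oxygen lies on the axis and is fixed
assert np.allclose(C2 @ H1, H2)   # the hydrogens swap places ...
assert np.allclose(C2 @ H2, H1)   # ... so applying C2 twice is the identity E
```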
In the reflection operation (σ) many molecules have mirror planes, although they may not be obvious. The reflection operation exchanges left and right, as if each point had moved perpendicularly through the plane to a position exactly as far from the plane as when it started. When the plane is perpendicular to the principal axis of rotation, it is called σh (horizontal). Other planes, which contain the principal axis of rotation, are labeled vertical (σv) or dihedral (σd). | Group theory | Wikipedia | 373 | 41890 | https://en.wikipedia.org/wiki/Group%20theory | Mathematics | Algebra | null |
Inversion (i) is a more complex operation. Each point moves through the center of the molecule to a position opposite the original position and as far from the central point as where it started. Many molecules that seem at first glance to have an inversion center do not; for example, methane and other tetrahedral molecules lack inversion symmetry. To see this, hold a methane model with two hydrogen atoms in the vertical plane on the right and two hydrogen atoms in the horizontal plane on the left. Inversion results in two hydrogen atoms in the horizontal plane on the right and two hydrogen atoms in the vertical plane on the left. Inversion is therefore not a symmetry operation of methane, because the orientation of the molecule following the inversion operation differs from the original orientation. The last operation, improper rotation or rotation reflection (Sn), requires a rotation of 360°/n, followed by reflection through a plane perpendicular to the axis of rotation.
Cryptography
Very large groups of prime order constructed in elliptic curve cryptography serve for public-key cryptography. Cryptographical methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which make the discrete logarithm very hard to calculate. One of the earliest encryption protocols, Caesar's cipher, may also be interpreted as a (very easy) group operation. Most cryptographic schemes use groups in some way. In particular Diffie–Hellman key exchange uses finite cyclic groups. So the term group-based cryptography refers mostly to cryptographic protocols that use infinite non-abelian groups such as a braid group. | Group theory | Wikipedia | 327 | 41890 | https://en.wikipedia.org/wiki/Group%20theory | Mathematics | Algebra | null |
Accuracy and precision are two measures of observational error.
Accuracy is how close a given set of measurements (observations or readings) are to their true value.
Precision is how close the measurements are to each other.
The International Organization for Standardization (ISO) defines a related measure:
trueness, "the closeness of agreement between the arithmetic mean of a large number of test results and the true or accepted reference value."
While precision is a description of random errors (a measure of statistical variability),
accuracy has two different definitions:
More commonly, a description of systematic errors (a measure of statistical bias of a given measure of central tendency, such as the mean). In this definition of "accuracy", the concept is independent of "precision", so a particular set of data can be said to be accurate, precise, both, or neither. This concept corresponds to ISO's trueness.
A combination of both precision and trueness, accounting for the two types of observational error (random and systematic), so that high accuracy requires both high precision and high trueness. This usage corresponds to ISO's definition of accuracy (trueness and precision).
Common technical definition
In simpler terms, given a statistical sample or set of data points from repeated measurements of the same quantity, the sample or set can be said to be accurate if their average is close to the true value of the quantity being measured, while the set can be said to be precise if their standard deviation is relatively small.
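As a minimal numerical sketch of these definitions (the readings are made up), the bias of the sample mean measures accuracy in the ISO sense of trueness, while the standard deviation measures precision:

```python
import statistics

true_value = 10.0
readings = [10.1, 9.9, 10.2, 9.8, 10.0]   # hypothetical repeated measurements

bias = statistics.mean(readings) - true_value   # systematic error (trueness)
spread = statistics.stdev(readings)             # random error (precision)

print(f"bias = {bias:+.3f}, standard deviation = {spread:.3f}")
```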
In the fields of science and engineering, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's true value. The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results. Although the two words precision and accuracy can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method.
The field of statistics, where the interpretation of measurements plays a central role, prefers to use the terms bias and variability instead of accuracy and precision: bias is the amount of inaccuracy and variability is the amount of imprecision. | Accuracy and precision | Wikipedia | 441 | 41932 | https://en.wikipedia.org/wiki/Accuracy%20and%20precision | Physical sciences | Measurement: General | null |
A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. The result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision.
A measurement system is considered valid if it is both accurate and precise. Related terms include bias (non-random or directed effects caused by a factor or factors unrelated to the independent variable) and error (random variability).
The terminology is also applied to indirect measurements—that is, values obtained by a computational procedure from observed data.
In addition to accuracy and precision, measurements may also have a measurement resolution, which is the smallest change in the underlying physical quantity that produces a response in the measurement.
In numerical analysis, accuracy is also the nearness of a calculation to the true value; while precision is the resolution of the representation, typically defined by the number of decimal or binary digits.
In military terms, accuracy refers primarily to the accuracy of fire (justesse de tir), the precision of fire expressed by the closeness of a grouping of shots at and around the centre of the target.
A shift in the meaning of these terms appeared with the publication of the ISO 5725 series of standards in 1994, which is also reflected in the 2008 issue of the BIPM International Vocabulary of Metrology (VIM), items 2.13 and 2.14.
According to ISO 5725-1, the general term "accuracy" is used to describe the closeness of a measurement to the true value. When the term is applied to sets of measurements of the same measurand, it involves a component of random error and a component of systematic error. In this case trueness is the closeness of the mean of a set of measurement results to the actual (true) value, that is the systematic error, and precision is the closeness of agreement among a set of results, that is the random error.
ISO 5725-1 and VIM also avoid the use of the term "bias", previously specified in BS 5497-1, because it has different connotations outside the fields of science and engineering, as in medicine and law.
Quantification and applications
In industrial instrumentation, accuracy is the measurement tolerance, or transmission, of the instrument; it defines the limits of the errors made when the instrument is used in normal operating conditions. | Accuracy and precision | Wikipedia | 504 | 41932 | https://en.wikipedia.org/wiki/Accuracy%20and%20precision | Physical sciences | Measurement: General | null |
Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the true value. The accuracy and precision of a measurement process is usually established by repeatedly measuring some traceable reference standard. Such standards are defined in the International System of Units (abbreviated SI from French: Système international d'unités) and maintained by national standards organizations such as the National Institute of Standards and Technology in the United States.
This also applies when measurements are repeated and averaged. In that case, the term standard error is properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged. Further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of individual measurements.
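A short sketch of this rule (the value of sigma is assumed, not taken from any real process):

```python
import math

sigma = 0.5   # known standard deviation of a single measurement (assumed)

# The standard error of the average of n measurements is sigma / sqrt(n):
# averaging 100 readings makes the result ten times more precise than one.
for n in (1, 4, 16, 100):
    print(n, sigma / math.sqrt(n))
```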
With regard to accuracy we can distinguish:
the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration.
the combined effect of that and precision.
A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Where not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 843 m would imply a margin of error of 0.5 m (the last significant digits are the units). | Accuracy and precision | Wikipedia | 329 | 41932 | https://en.wikipedia.org/wiki/Accuracy%20and%20precision | Physical sciences | Measurement: General | null |
A reading of 8,000 m, with trailing zeros and no decimal point, is ambiguous; the trailing zeros may or may not be intended as significant figures. To avoid this ambiguity, the number could be represented in scientific notation: 8.0 × 103 m indicates that the first zero is significant (hence a margin of 50 m) while 8.000 × 103 m indicates that all three zeros are significant, giving a margin of 0.5 m. Similarly, one can use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 103 m. It indicates a margin of 0.05 km (50 m). However, reliance on this convention can lead to false precision errors when accepting data from sources that do not obey it. For example, a source reporting a number like 153,753 with precision +/- 5,000 looks like it has precision +/- 0.5. Under the convention it would have been rounded to 150,000.
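The half-of-the-last-place convention can be made mechanical. The helper below is our own (not a standard library function) and deliberately treats a value without a decimal point as having its last written digit significant, mirroring the ambiguity discussed above:

```python
def implied_margin(recorded: str) -> float:
    """Implied margin of error: half a unit in the last significant place."""
    if "." in recorded:
        decimals = len(recorded.split(".")[1])
        return 0.5 * 10 ** (-decimals)
    # Without a decimal point, trailing zeros are ambiguous; assume the last
    # written digit is significant.
    return 0.5

for value in ("843.6", "843.0", "843"):
    print(value, "+/-", implied_margin(value))
# 843.6 +/- 0.05, 843.0 +/- 0.05, 843 +/- 0.5
```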
Alternatively, in a scientific context, if it is desired to indicate the margin of error with more precision, one can use a notation such as 7.54398(23) × 10−10 m, meaning a range of between 7.54375 and 7.54421 × 10−10 m.
Precision includes:
repeatability — the variation arising when all efforts are made to keep conditions constant by using the same instrument and operator, and repeating during a short time period; and
reproducibility — the variation arising using the same measurement process among different instruments and operators, and over longer time periods.
In engineering, precision is often taken as three times the standard deviation of the measurements taken, representing the range within which 99.73% of measurements can be expected to occur. For example, an ergonomist measuring the human body can be confident that 99.73% of their extracted measurements fall within ±0.7 cm (if using the GRYPHON processing system) or ±13 cm (if using unprocessed data).
In classification
In binary classification | Accuracy and precision | Wikipedia | 417 | 41932 | https://en.wikipedia.org/wiki/Accuracy%20and%20precision | Physical sciences | Measurement: General | null |
Accuracy is also used as a statistical measure of how well a binary classification test correctly identifies or excludes a condition. That is, the accuracy is the proportion of correct predictions (both true positives and true negatives) among the total number of cases examined. As such, it compares estimates of pre- and post-test probability. To make the context clear by the semantics, it is often referred to as the "Rand accuracy" or "Rand index". It is a parameter of the test.
The formula for quantifying binary accuracy is:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
where TP = true positive; FP = false positive; TN = true negative; FN = false negative.
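A one-line numerical sketch of the formula, using hypothetical confusion-matrix counts:

```python
tp, tn, fp, fn = 40, 45, 5, 10   # hypothetical counts

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)   # 85 correct out of 100 examined cases -> 0.85
```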
In this context, the concepts of trueness and precision as defined by ISO 5725-1 are not applicable. One reason is that there is not a single “true value” of a quantity, but rather two possible true values for every case, while accuracy is an average across all cases and therefore takes into account both values. However, the term precision is used in this context to mean a different metric originating from the field of information retrieval (see below).
In multiclass classification
When computing accuracy in multiclass classification, accuracy is simply the fraction of correct classifications:
Accuracy = (number of correct classifications) / (total number of classifications)
This is usually expressed as a percentage. For example, if a classifier makes ten predictions and nine of them are correct, the accuracy is 90%.
Accuracy is sometimes also viewed as a micro metric, to underline that it tends to be greatly affected by the particular class prevalence in a dataset and the classifier's biases.
Furthermore, it is also called top-1 accuracy to distinguish it from top-5 accuracy, common in convolutional neural network evaluation. To evaluate top-5 accuracy, the classifier must provide relative likelihoods for each class. When these are sorted, a classification is considered correct if the correct classification falls anywhere within the top 5 predictions made by the network. Top-5 accuracy was popularized by the ImageNet challenge. It is usually higher than top-1 accuracy, as any correct predictions in the 2nd through 5th positions will not improve the top-1 score, but do improve the top-5 score. | Accuracy and precision | Wikipedia | 423 | 41932 | https://en.wikipedia.org/wiki/Accuracy%20and%20precision | Physical sciences | Measurement: General | null |
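A small sketch of top-k evaluation (scores and labels are made up; the function name is our own): a prediction counts as correct when the true class appears among the k highest-scoring classes.

```python
def top_k_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the top-k predictions."""
    hits = 0
    for row, label in zip(scores, labels):
        ranked = sorted(range(len(row)), key=lambda c: row[c], reverse=True)
        hits += label in ranked[:k]
    return hits / len(labels)

scores = [[0.1, 0.6, 0.3], [0.5, 0.2, 0.3]]   # per-class likelihoods
labels = [2, 0]
print(top_k_accuracy(scores, labels, k=1))    # top-1 accuracy: 0.5
print(top_k_accuracy(scores, labels, k=2))    # top-2 accuracy: 1.0
```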
In psychometrics and psychophysics
In psychometrics and psychophysics, the term accuracy is interchangeably used with validity and constant error. Precision is a synonym for reliability and variable error. The validity of a measurement instrument or psychological test is established through experiment or correlation with behavior. Reliability is established with a variety of statistical techniques, classically through an internal consistency test like Cronbach's alpha to ensure sets of related questions have related responses, and then comparison of those related questions between reference and target populations.
In logic simulation
In logic simulation, a common mistake in evaluation of accurate models is to compare a logic simulation model to a transistor circuit simulation model. This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality.
In information systems
Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents incorrectly retrieved), and false negatives (documents incorrectly not retrieved). Commonly used metrics include the notions of precision and recall. In this context, precision is defined as the fraction of documents correctly retrieved compared to the documents retrieved (true positives divided by true positives plus false positives), using a set of ground truth relevant results selected by humans. Recall is defined as the fraction of documents correctly retrieved compared to the relevant documents (true positives divided by true positives plus false negatives). Less commonly, the metric of accuracy is used, which is defined as the fraction of documents correctly classified out of all documents (true positives plus true negatives divided by true positives plus true negatives plus false positives plus false negatives).
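These set-based definitions translate directly into code; the document identifiers below are hypothetical:

```python
retrieved = {"d1", "d2", "d3", "d4"}   # documents the system returned
relevant = {"d2", "d4", "d5"}          # ground truth selected by humans

tp = len(retrieved & relevant)         # correctly retrieved documents
precision = tp / len(retrieved)        # tp / (tp + fp)
recall = tp / len(relevant)            # tp / (tp + fn)
print(precision, recall)               # 0.5  0.666...
```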
None of these metrics take into account the ranking of results. Ranking is very important for web search engines because readers seldom go past the first page of results, and there are too many documents on the web to manually classify all of them as to whether they should be included or excluded from a given search. Adding a cutoff at a particular number of results takes ranking into account to some degree. The measure precision at k, for example, is a measure of precision looking only at the top ten (k=10) search results. More sophisticated metrics, such as discounted cumulative gain, take into account each individual ranking, and are more commonly used where this is important. | Accuracy and precision | Wikipedia | 512 | 41932 | https://en.wikipedia.org/wiki/Accuracy%20and%20precision | Physical sciences | Measurement: General | null |
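A sketch of precision at k over a ranked result list (the ranking and relevance judgements are made up):

```python
def precision_at_k(ranked, relevant, k=10):
    """Precision computed over only the k highest-ranked results."""
    top = ranked[:k]
    return sum(doc in relevant for doc in top) / len(top)

ranked = ["d7", "d2", "d9", "d4", "d1"]        # results in ranked order
relevant = {"d2", "d4", "d5"}
print(precision_at_k(ranked, relevant, k=3))   # one hit in the top 3 -> 0.333...
```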
In cognitive systems
In cognitive systems, accuracy and precision are used to characterize and measure results of a cognitive process performed by biological or artificial entities, where a cognitive process is a transformation of data, information, knowledge, or wisdom to a higher-valued form (DIKW pyramid). Sometimes a cognitive process produces exactly the intended or desired output, but sometimes it produces output far from the intended or desired. Furthermore, repetitions of a cognitive process do not always produce the same output. Cognitive accuracy (CA) is the propensity of a cognitive process to produce the intended or desired output. Cognitive precision (CP) is the propensity of a cognitive process to produce the same output. To measure augmented cognition in human/cog ensembles, where one or more humans work collaboratively with one or more cognitive systems (cogs), increases in cognitive accuracy and cognitive precision assist in measuring the degree of cognitive augmentation. | Accuracy and precision | Wikipedia | 186 | 41932 | https://en.wikipedia.org/wiki/Accuracy%20and%20precision | Physical sciences | Measurement: General | null |
In electrical engineering, impedance is the opposition to alternating current presented by the combined effect of resistance and reactance in a circuit.
Quantitatively, the impedance of a two-terminal circuit element is the ratio of the complex representation of the sinusoidal voltage between its terminals, to the complex representation of the current flowing through it. In general, it depends upon the frequency of the sinusoidal voltage.
Impedance extends the concept of resistance to alternating current (AC) circuits, and possesses both magnitude and phase, unlike resistance, which has only magnitude.
Impedance can be represented as a complex number, with the same units as resistance, for which the SI unit is the ohm (Ω).
Its symbol is usually Z, and it may be represented by writing its magnitude and phase in the polar form |Z|∠θ. However, Cartesian complex number representation is often more powerful for circuit analysis purposes.
The notion of impedance is useful for performing AC analysis of electrical networks, because it allows relating sinusoidal voltages and currents by a simple linear law.
In multiple port networks, the two-terminal definition of impedance is inadequate, but the complex voltages at the ports and the currents flowing through them are still linearly related by the impedance matrix.
The reciprocal of impedance is admittance, whose SI unit is the siemens, formerly called mho.
Instruments used to measure the electrical impedance are called impedance analyzers.
History
Perhaps the earliest use of complex numbers in circuit analysis was by Johann Victor Wietlisbach in 1879 in analysing the Maxwell bridge. Wietlisbach avoided using differential equations by expressing AC currents and voltages as exponential functions with imaginary exponents (see the validity of complex representation, below). Wietlisbach found the required voltage was given by multiplying the current by a complex number (impedance), although he did not identify this as a general parameter in its own right.
The term impedance was coined by Oliver Heaviside in July 1886. Heaviside recognised that the "resistance operator" (impedance) in his operational calculus was a complex number. In 1887 he showed that there was an AC equivalent to Ohm's law. | Electrical impedance | Wikipedia | 438 | 41957 | https://en.wikipedia.org/wiki/Electrical%20impedance | Physical sciences | Electrical circuits | null |
Arthur Kennelly published an influential paper on impedance in 1893. Kennelly arrived at a complex number representation in a rather more direct way than using imaginary exponential functions. Kennelly followed the graphical representation of impedance (showing resistance, reactance, and impedance as the lengths of the sides of a right angle triangle) developed by John Ambrose Fleming in 1889. Impedances could thus be added vectorially. Kennelly realised that this graphical representation of impedance was directly analogous to graphical representation of complex numbers (Argand diagram). Problems in impedance calculation could thus be approached algebraically with a complex number representation. Later that same year, Kennelly's work was generalised to all AC circuits by Charles Proteus Steinmetz. Steinmetz not only represented impedances by complex numbers but also voltages and currents. Unlike Kennelly, Steinmetz was thus able to express AC equivalents of DC laws such as Ohm's and Kirchhoff's laws. Steinmetz's work was highly influential in spreading the technique amongst engineers.
Introduction
In addition to resistance as seen in DC circuits, impedance in AC circuits includes the effects of the induction of voltages in conductors by the magnetic fields (inductance), and the electrostatic storage of charge induced by voltages between conductors (capacitance). The impedance caused by these two effects is collectively referred to as reactance and forms the imaginary part of complex impedance whereas resistance forms the real part.
Complex impedance
The impedance of a two-terminal circuit element is represented as a complex quantity Z. The polar form conveniently captures both magnitude and phase characteristics as
Z = |Z| e^(jθ)
where the magnitude |Z| represents the ratio of the voltage difference amplitude to the current amplitude, while the argument arg(Z) (commonly given the symbol θ) gives the phase difference between voltage and current. j is the imaginary unit, and is used instead of i in this context to avoid confusion with the symbol for electric current.
In Cartesian form, impedance is defined as
Z = R + jX
where the real part of impedance is the resistance R and the imaginary part is the reactance X. | Electrical impedance | Wikipedia | 431 | 41957 | https://en.wikipedia.org/wiki/Electrical%20impedance | Physical sciences | Electrical circuits | null |
Where it is needed to add or subtract impedances, the Cartesian form is more convenient; but when quantities are multiplied or divided, the calculation becomes simpler if the polar form is used. A circuit calculation, such as finding the total impedance of two impedances in parallel, may require conversion between forms several times during the calculation. Conversion between the forms follows the normal conversion rules of complex numbers.
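Python's built-in complex numbers make these conversions concrete; the impedance value below is arbitrary:

```python
import cmath

Z = 3 + 4j                      # Cartesian form: R = 3 ohm, X = 4 ohm

mag, theta = cmath.polar(Z)     # polar form: |Z| = 5.0, theta = atan2(4, 3)
print(mag, theta)

print(cmath.rect(mag, theta))   # back to Cartesian form: (3+4j), up to rounding
```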
Complex voltage and current
To simplify calculations, sinusoidal voltage and current waves are commonly represented as complex-valued functions of time denoted as V and I:
V = |V| e^(j(ωt + φ_V))
I = |I| e^(j(ωt + φ_I))
The impedance of a bipolar circuit is defined as the ratio of these quantities:
Z = V / I = (|V| / |I|) e^(j(φ_V − φ_I))
Hence, denoting Z = |Z| e^(jθ), we have
|Z| = |V| / |I|
θ = φ_V − φ_I
The magnitude equation is the familiar Ohm's law applied to the voltage and current amplitudes, while the second equation defines the phase relationship.
Validity of complex representation
This representation using complex exponentials may be justified by noting that (by Euler's formula):
cos(ωt + φ) = (1/2) [ e^(j(ωt + φ)) + e^(−j(ωt + φ)) ]
The real-valued sinusoidal function representing either voltage or current may be broken into two complex-valued functions. By the principle of superposition, we may analyse the behaviour of the sinusoid on the left-hand side by analysing the behaviour of the two complex terms on the right-hand side. Given the symmetry, we only need to perform the analysis for one right-hand term. The results are identical for the other. At the end of any calculation, we may return to real-valued sinusoids by further noting that
cos(ωt + φ) = Re{ e^(j(ωt + φ)) }
Ohm's law
The meaning of electrical impedance can be understood by substituting it into Ohm's law. Assuming a two-terminal circuit element with impedance Z is driven by a sinusoidal voltage or current as above, there holds
V = I Z = I |Z| e^(jθ)
The magnitude of the impedance |Z| acts just like resistance, giving the drop in voltage amplitude across an impedance Z for a given current I. The phase factor tells us that the current lags the voltage by a phase of θ (i.e., in the time domain, the current signal is shifted θ/ω later with respect to the voltage signal).
Just as impedance extends Ohm's law to cover AC circuits, other results from DC circuit analysis, such as voltage division, current division, Thévenin's theorem and Norton's theorem, can also be extended to AC circuits by replacing resistance with impedance.
Phasors | Electrical impedance | Wikipedia | 478 | 41957 | https://en.wikipedia.org/wiki/Electrical%20impedance | Physical sciences | Electrical circuits | null |
A phasor is represented by a constant complex number, usually expressed in exponential form, representing the complex amplitude (magnitude and phase) of a sinusoidal function of time. Phasors are used by electrical engineers to simplify computations involving sinusoids (such as in AC circuits), where they can often reduce a differential equation problem to an algebraic one.
The impedance of a circuit element can be defined as the ratio of the phasor voltage across the element to the phasor current through the element, as determined by the relative amplitudes and phases of the voltage and current. This is identical to the definition from Ohm's law given above, recognising that the factors of e^(jωt) cancel.
Device examples
Resistor
The impedance of an ideal resistor is purely real and is called resistive impedance:
Z_R = R
In this case, the voltage and current waveforms are proportional and in phase.
Inductor and capacitor
Ideal inductors and capacitors have a purely imaginary reactive impedance:
the impedance of inductors increases as frequency increases: Z_L = jωL;
the impedance of capacitors decreases as frequency increases: Z_C = 1/(jωC).
In both cases, for an applied sinusoidal voltage, the resulting current is also sinusoidal, but in quadrature, 90 degrees out of phase with the voltage. However, the phases have opposite signs: in an inductor, the current is lagging; in a capacitor the current is leading.
Note the following identities for the imaginary unit and its reciprocal:
j = cos(π/2) + j sin(π/2) = e^(jπ/2)
1/j = −j = cos(−π/2) + j sin(−π/2) = e^(−jπ/2)
Thus the inductor and capacitor impedance equations can be rewritten in polar form:
Z_L = ωL e^(jπ/2)
Z_C = (1/(ωC)) e^(−jπ/2)
The magnitude gives the change in voltage amplitude for a given current amplitude through the impedance, while the exponential factors give the phase relationship.
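A quick numerical sketch of the three element impedances at one frequency (component values are arbitrary):

```python
import math

f = 1_000.0                    # frequency in hertz
omega = 2 * math.pi * f        # angular frequency in rad/s
R, L, C = 100.0, 10e-3, 1e-6   # ohm, henry, farad

Z_R = R + 0j                   # resistor: purely real
Z_L = 1j * omega * L           # inductor: j omega L, phase +90 degrees
Z_C = 1 / (1j * omega * C)     # capacitor: 1/(j omega C), phase -90 degrees
print(Z_R, Z_L, Z_C)
```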
Deriving the device-specific impedances
What follows below is a derivation of impedance for each of the three basic circuit elements: the resistor, the capacitor, and the inductor. Although the idea can be extended to define the relationship between the voltage and current of any arbitrary signal, these derivations assume sinusoidal signals. In fact, this applies to any arbitrary periodic signals, because these can be approximated as a sum of sinusoids through Fourier analysis.
Resistor
For a resistor, there is the relation
v_R(t) = R i_R(t),
which is Ohm's law.
Considering the voltage signal to be
v_R(t) = V_p sin(ωt),
it follows that | Electrical impedance | Wikipedia | 485 | 41957 | https://en.wikipedia.org/wiki/Electrical%20impedance | Physical sciences | Electrical circuits | null |
v_R(t) / i_R(t) = (V_p sin(ωt)) / (I_p sin(ωt)) = R.
This says that the ratio of AC voltage amplitude to alternating current (AC) amplitude across a resistor is R, and that the AC voltage leads the current across a resistor by 0 degrees.
This result is commonly expressed as
Z_resistor = R.
Capacitor
For a capacitor, there is the relation:
i_C(t) = C dv_C(t)/dt.
Considering the voltage signal to be
v_C(t) = V_p e^(jωt),
it follows that
i_C(t) = jωC V_p e^(jωt),
and thus, as previously,
Z_capacitor = v_C(t) / i_C(t) = 1/(jωC).
Conversely, if the current through the circuit is assumed to be sinusoidal, its complex representation being
i_C(t) = I_p e^(jωt),
then integrating the differential equation
dv_C(t)/dt = i_C(t) / C
leads to
v_C(t) = (1/(jωC)) I_p e^(jωt) + Const.
The Const term represents a fixed potential bias superimposed on the AC sinusoidal potential, that plays no role in AC analysis. For this purpose, this term can be assumed to be 0, hence again the impedance
Z_capacitor = 1/(jωC).
Inductor
For the inductor, we have the relation (from Faraday's law):
v_L(t) = L di_L(t)/dt.
This time, considering the current signal to be:
i_L(t) = I_p e^(jωt),
it follows that:
v_L(t) = jωL I_p e^(jωt).
This result is commonly expressed in polar form as
Z_inductor = ωL e^(jπ/2),
or, using Euler's formula, as
Z_inductor = jωL.
As in the case of capacitors, it is also possible to derive this formula directly from the complex representations of the voltages and currents, or by assuming a sinusoidal voltage between the two poles of the inductor. In the latter case, integrating the differential equation above leads to a constant term for the current, that represents a fixed DC bias flowing through the inductor. This is set to zero because AC analysis using frequency domain impedance considers one frequency at a time and DC represents a separate frequency of zero hertz in this context.
Generalised s-plane impedance
Impedance defined in terms of jω can strictly be applied only to circuits that are driven with a steady-state AC signal. The concept of impedance can be extended to a circuit energised with any arbitrary signal by using complex frequency instead of jω. Complex frequency is given the symbol s and is, in general, a complex number. Signals are expressed in terms of complex frequency by taking the Laplace transform of the time domain expression of the signal. The impedance of the basic circuit elements in this more general notation is as follows:
Resistor: Z(s) = R
Inductor: Z(s) = sL
Capacitor: Z(s) = 1/(sC)
For a DC circuit, this simplifies to s = 0. For a steady-state sinusoidal AC signal, s = jω.
Formal derivation
The impedance of an electrical component is defined as the ratio between the Laplace transforms of the voltage over it and the current through it, i.e. | Electrical impedance | Wikipedia | 486 | 41957 | https://en.wikipedia.org/wiki/Electrical%20impedance | Physical sciences | Electrical circuits | null |
Z(s) = V(s) / I(s),
where s is the complex Laplace parameter. As an example, according to the I-V-law of a capacitor, I(s) = sC V(s), from which it follows that Z(s) = 1/(sC).
In the phasor regime (steady-state AC, meaning all signals are represented mathematically as simple complex exponentials V(t) = V_0 e^(jωt) and I(t) = I_0 e^(jωt) oscillating at a common frequency ω), impedance can simply be calculated as the voltage-to-current ratio, in which the common time-dependent factor cancels out:
Z = V(t) / I(t) = (V_0 e^(jωt)) / (I_0 e^(jωt)) = V_0 / I_0.
Again, for a capacitor, one gets that I_0 = jωC V_0, and hence Z = 1/(jωC). The phasor domain is sometimes dubbed the frequency domain, although it lacks one of the dimensions of the Laplace parameter. For steady-state AC, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular:
The magnitude of the complex impedance is the ratio of the voltage amplitude to the current amplitude;
The phase of the complex impedance is the phase shift by which the current lags the voltage.
These two relationships hold even after taking the real part of the complex exponentials (see phasors), which is the part of the signal one actually measures in real-life circuits.
Resistance vs reactance
Resistance and reactance together determine the magnitude and phase of the impedance through the following relations:
|Z| = √(R² + X²)
θ = arctan(X / R)
In many applications, the relative phase of the voltage and current is not critical so only the magnitude of the impedance is significant.
Resistance
Resistance is the real part of impedance; a device with a purely resistive impedance exhibits no phase shift between the voltage and current.
Reactance
Reactance is the imaginary part of the impedance; a component with a finite reactance induces a phase shift between the voltage across it and the current through it.
A purely reactive component is distinguished by the sinusoidal voltage across the component being in quadrature with the sinusoidal current through the component. This implies that the component alternately absorbs energy from the circuit and then returns energy to the circuit. A pure reactance does not dissipate any power.
Capacitive reactance
A capacitor has a purely reactive impedance that is inversely proportional to the signal frequency. A capacitor consists of two conductors separated by an insulator, also known as a dielectric.
Z_C = −j / (ωC) = 1/(jωC)
The minus sign indicates that the imaginary part of the impedance is negative.
At low frequencies, a capacitor approaches an open circuit so no current flows through it. | Electrical impedance | Wikipedia | 497 | 41957 | https://en.wikipedia.org/wiki/Electrical%20impedance | Physical sciences | Electrical circuits | null |
A DC voltage applied across a capacitor causes charge to accumulate on one side; the electric field due to the accumulated charge is the source of the opposition to the current. When the potential associated with the charge exactly balances the applied voltage, the current goes to zero.
Driven by an AC supply, a capacitor accumulates only a limited charge before the potential difference changes sign and the charge dissipates. The higher the frequency, the less charge accumulates and the smaller the opposition to the current.
Inductive reactance
Inductive reactance X_L is proportional to the signal frequency f and the inductance L:
X_L = ωL = 2πf L.
An inductor consists of a coiled conductor. Faraday's law of electromagnetic induction gives the back emf ε (voltage opposing current) due to a rate-of-change of magnetic flux density B through a current loop.
For an inductor consisting of a coil with N loops this gives:
ε = −N dΦ_B/dt.
The back-emf is the source of the opposition to current flow. A constant direct current has a zero rate-of-change, and sees an inductor as a short-circuit (it is typically made from a material with a low resistivity). An alternating current has a time-averaged rate-of-change that is proportional to frequency; this causes the increase in inductive reactance with frequency.
Total reactance
The total reactance is given by
X = X_L + X_C
(X_C is negative)
so that the total impedance is
Z = R + jX
Combining impedances
The total impedance of many simple networks of components can be calculated using the rules for combining impedances in series and parallel. The rules are identical to those for combining resistances, except that the numbers in general are complex numbers. The general case, however, requires equivalent impedance transforms in addition to series and parallel.
Series combination
For components connected in series, the current through each circuit element is the same; the total impedance is the sum of the component impedances.
Z_eq = Z_1 + Z_2 + ⋯ + Z_n
Or explicitly in real and imaginary terms:
Z_eq = R_eq + jX_eq = (R_1 + R_2 + ⋯ + R_n) + j(X_1 + X_2 + ⋯ + X_n)
Parallel combination
For components connected in parallel, the voltage across each circuit element is the same; the ratio of currents through any two elements is the inverse ratio of their impedances.
Hence the inverse total impedance is the sum of the inverses of the component impedances:
1/Z_eq = 1/Z_1 + 1/Z_2 + ⋯ + 1/Z_n
or, when n = 2:
Z_eq = Z_1 Z_2 / (Z_1 + Z_2)
The equivalent impedance Z_eq can be calculated in terms of the equivalent series resistance R_eq and reactance X_eq: Z_eq = R_eq + jX_eq. | Electrical impedance | Wikipedia | 482 | 41957 | https://en.wikipedia.org/wiki/Electrical%20impedance | Physical sciences | Electrical circuits | null |
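With complex numbers the series and parallel rules are one-liners; the two impedances below are arbitrary:

```python
Z1 = 100 + 0j   # a resistor
Z2 = 62.8j      # an inductive reactance

Z_series = Z1 + Z2                        # impedances add in series
Z_parallel = 1 / (1 / Z1 + 1 / Z2)        # inverse impedances add in parallel
print(Z_series, Z_parallel)
print(Z_parallel.real, Z_parallel.imag)   # equivalent series R and X
```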
Measurement
The measurement of the impedance of devices and transmission lines is a practical problem in radio technology and other fields. Measurements of impedance may be carried out at one frequency, or the variation of device impedance over a range of frequencies may be of interest. The impedance may be measured or displayed directly in ohms, or other values related to impedance may be displayed; for example, in a radio antenna, the standing wave ratio or reflection coefficient may be more useful than the impedance alone. The measurement of impedance requires the measurement of the magnitude of voltage and current, and the phase difference between them. Impedance is often measured by "bridge" methods, similar to the direct-current Wheatstone bridge; a calibrated reference impedance is adjusted to balance off the effect of the impedance of the device under test. Impedance measurement in power electronic devices may require simultaneous measurement and provision of power to the operating device.
The impedance of a device can be calculated by complex division of the voltage and current. The impedance of the device can be calculated by applying a sinusoidal voltage to the device in series with a resistor, and measuring the voltage across the resistor and across the device. Performing this measurement by sweeping the frequencies of the applied signal provides the impedance phase and magnitude.
The use of an impulse response may be used in combination with the fast Fourier transform (FFT) to rapidly measure the electrical impedance of various electrical devices.
The LCR meter (Inductance (L), Capacitance (C), and Resistance (R)) is a device commonly used to measure the inductance, resistance and capacitance of a component; from these values, the impedance at any frequency can be calculated.
Example
Consider an LC tank circuit.
The complex impedance of the circuit is
Z(ω) = jωL / (1 − ω²LC).
It is immediately seen that the value of 1/|Z| is minimal (actually equal to 0 in this case) whenever
ω²LC = 1.
Therefore, the fundamental resonance angular frequency is
ω = 1/√(LC).
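A numerical sketch of this resonance (component values are arbitrary): near ω₀ the admittance 1/|Z| of the tank collapses towards zero.

```python
import math

L, C = 10e-3, 1e-6               # henry, farad (arbitrary values)
omega_0 = 1 / math.sqrt(L * C)   # fundamental resonance: 1e4 rad/s here

def Z(omega):
    """Impedance of the parallel LC tank: j w L / (1 - w^2 L C)."""
    return (1j * omega * L) / (1 - omega**2 * L * C)

for omega in (0.5 * omega_0, 0.9 * omega_0, 0.999 * omega_0):
    print(omega, abs(1 / Z(omega)))   # 1/|Z| -> 0 as omega -> omega_0
```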
Variable impedance | Electrical impedance | Wikipedia | 405 | 41957 | https://en.wikipedia.org/wiki/Electrical%20impedance | Physical sciences | Electrical circuits | null |
In general, neither impedance nor admittance can vary with time, since they are defined for complex exponentials in which −∞ < t < +∞. If the complex exponential voltage-to-current ratio changes over time or amplitude, the circuit element cannot be described using the frequency domain. However, many components and systems (e.g., varicaps that are used in radio tuners) may exhibit non-linear or time-varying voltage-to-current ratios that seem to be linear time-invariant (LTI) for small signals and over small observation windows, so they can be roughly described as if they had a time-varying impedance. This description is an approximation: over large signal swings or wide observation windows, the voltage-to-current relationship will not be LTI and cannot be described by impedance. | Electrical impedance | Wikipedia | 157 | 41957 | https://en.wikipedia.org/wiki/Electrical%20impedance | Physical sciences | Electrical circuits | null |
Lidar (, also LIDAR, LiDAR or LADAR, an acronym of "light detection and ranging" or "laser imaging, detection, and ranging") is a method for determining ranges by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver. Lidar may operate in a fixed direction (e.g., vertical) or it may scan multiple directions, in which case it is known as lidar scanning or 3D laser scanning, a special combination of 3-D scanning and laser scanning. Lidar has terrestrial, airborne, and mobile applications.
Lidar is commonly used to make high-resolution maps, with applications in surveying, geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, atmospheric physics, laser guidance, airborne laser swathe mapping (ALSM), and laser altimetry. It is used to make digital 3-D representations of areas on the Earth's surface and ocean bottom of the intertidal and near coastal zone by varying the wavelength of light. It has also been increasingly used in control and navigation for autonomous cars and for the helicopter Ingenuity on its record-setting flights over the terrain of Mars.
The evolution of quantum technology has given rise to the emergence of Quantum Lidar, demonstrating higher efficiency and sensitivity when compared to conventional lidar systems.
History and etymology
Under the direction of Malcolm Stitch, the Hughes Aircraft Company introduced the first lidar-like system in 1961, shortly after the invention of the laser. Intended for satellite tracking, this system combined laser-focused imaging with the ability to calculate distances by measuring the time for a signal to return using appropriate sensors and data acquisition electronics. It was originally called "Colidar" an acronym for "coherent light detecting and ranging", derived from the term "radar", itself an acronym for "radio detection and ranging". All laser rangefinders, laser altimeters and lidar units are derived from the early colidar systems. | Lidar | Wikipedia | 416 | 41958 | https://en.wikipedia.org/wiki/Lidar | Technology | Surveying tools | null |
The first practical terrestrial application of a colidar system was the "Colidar Mark II", a large rifle-like laser rangefinder produced in 1963, which had a range of 11 km and an accuracy of 4.5 m, to be used for military targeting. The first mention of lidar as a stand-alone word in 1963 suggests that it originated as a portmanteau of "light" and "radar": "Eventually the laser may provide an extremely sensitive detector of particular wavelengths from distant objects. Meanwhile, it is being used to study the Moon by 'lidar' (light radar) ..."
The name "photonic radar" is sometimes used to mean visible-spectrum range finding like lidar.
Lidar's first applications were in meteorology, for which the National Center for Atmospheric Research used it to measure clouds and pollution. The general public became aware of the accuracy and usefulness of lidar systems in 1971 during the Apollo 15 mission, when astronauts used a laser altimeter to map the surface of the Moon.
Although the English language no longer treats "radar" as an acronym, (i.e., uncapitalized), the word "lidar" was capitalized as "LIDAR" or "LiDAR" in some publications beginning in the 1980s. No consensus exists on capitalization. Various publications refer to lidar as "LIDAR", "LiDAR", "LIDaR", or "Lidar". The USGS uses both "LIDAR" and "lidar", sometimes in the same document; the New York Times predominantly uses "lidar" for staff-written articles, although contributing news feeds such as Reuters may use Lidar.
General description
Lidar uses ultraviolet, visible, or near infrared light to image objects. It can target a wide range of materials, including non-metallic objects, rocks, rain, chemical compounds, aerosols, clouds and even single molecules. A narrow laser beam can map physical features with very high resolutions; for example, an aircraft can map terrain at sub-metre resolution. | Lidar | Wikipedia | 425 | 41958 | https://en.wikipedia.org/wiki/Lidar | Technology | Surveying tools | null |
The essential concept of lidar was originated by E. H. Synge in 1930, who envisaged the use of powerful searchlights to probe the atmosphere. Indeed, lidar has since been used extensively for atmospheric research and meteorology. Lidar instruments fitted to aircraft and satellites carry out surveying and mapping; a recent example is the U.S. Geological Survey Experimental Advanced Airborne Research Lidar. NASA has identified lidar as a key technology for enabling autonomous precision safe landing of future robotic and crewed lunar-landing vehicles.
Wavelengths vary to suit the target: from about 10 micrometers (infrared) to approximately 250 nanometers (ultraviolet). Typically, light is reflected via backscattering, as opposed to pure reflection one might find with a mirror. Different types of scattering are used for different lidar applications: most commonly Rayleigh scattering, Mie scattering, Raman scattering, and fluorescence. Suitable combinations of wavelengths can allow remote mapping of atmospheric contents by identifying wavelength-dependent changes in the intensity of the returned signal.
The name "photonic radar" is sometimes used to mean visible-spectrum range finding like lidar, although photonic radar more strictly refers to radio-frequency range finding using photonics components.
Technology
Mathematical formula
A lidar determines the distance of an object or a surface with the formula:

d = c · t / 2
where c is the speed of light, d is the distance between the detector and the object or surface being detected, and t is the time taken for the laser light to travel to the object or surface being detected and back to the detector.
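This relation can be sketched in a few lines of Python (a minimal illustration; the function name and the sample value are arbitrary):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(t_seconds: float) -> float:
    """Distance from a round-trip time of flight: d = c * t / 2."""
    return C * t_seconds / 2.0

# A return arriving 1 microsecond after emission puts the target ~150 m away.
print(tof_distance(1e-6))  # 149.896...
```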
Design
The two kinds of lidar detection schemes are "incoherent" or direct energy detection (which principally measures amplitude changes of the reflected light) and coherent detection (best for measuring Doppler shifts, or changes in the phase of the reflected light). Coherent systems generally use optical heterodyne detection. This is more sensitive than direct detection and allows them to operate at much lower power, but requires more complex transceivers.
Both types employ pulse models: either micropulse or high energy. Micropulse systems utilize intermittent bursts of energy. They developed as a result of ever-increasing computer power, combined with advances in laser technology. They use considerably less energy in the laser, typically on the order of one microjoule, and are often "eye-safe", meaning they can be used without safety precautions. High-power systems are common in atmospheric research, where they are widely used for measuring atmospheric parameters: the height, layering and densities of clouds, cloud particle properties (extinction coefficient, backscatter coefficient, depolarization), temperature, pressure, wind, humidity, and trace gas concentration (ozone, methane, nitrous oxide, etc.).
Components
Lidar systems consist of several major components.
Laser
600–1,000 nm lasers are most common for non-scientific applications. The maximum power of the laser is limited, or an automatic shut-off system that turns the laser off at specific altitudes is used, in order to make the system eye-safe for people on the ground.
One common alternative, the 1,550 nm laser, is eye-safe at relatively high power levels, since this wavelength is not strongly absorbed by the eye. A trade-off, though, is that current detector technology is less advanced, so these wavelengths are generally used at longer ranges with lower accuracies. They are also used for military applications because 1,550 nm is not visible in night vision goggles, unlike the shorter 1,000 nm infrared laser.
Airborne topographic mapping lidars generally use 1,064 nm diode-pumped YAG lasers, while bathymetric (underwater depth research) systems generally use 532 nm frequency-doubled diode-pumped YAG lasers, because 532 nm penetrates water with much less attenuation than 1,064 nm. Laser settings include the laser repetition rate (which controls the data collection speed). Pulse length is generally an attribute of the laser cavity length, the number of passes required through the gain material (YAG, YLF, etc.), and Q-switch (pulsing) speed. Better target resolution is achieved with shorter pulses, provided the lidar receiver detectors and electronics have sufficient bandwidth.
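The pulse-length trade-off can be illustrated with the usual two-way range-resolution relation ΔR ≈ c·τ/2, which assumes the pulse duration, not the receiver bandwidth, is the limiting factor (a sketch with illustrative values):

```python
C = 299_792_458.0  # m/s

def range_resolution(pulse_width_s: float) -> float:
    """Approximate minimum separation of two resolvable targets."""
    return C * pulse_width_s / 2.0

# A 5 ns pulse separates targets ~0.75 m apart; a 1 ns pulse, ~0.15 m.
for tau in (5e-9, 1e-9):
    print(f"{tau * 1e9:.0f} ns -> {range_resolution(tau):.2f} m")
```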
Phased arrays
A phased array can illuminate any direction by using a microscopic array of individual antennas. Controlling the timing (phase) of each antenna steers a cohesive signal in a specific direction.
Phased arrays have been used in radar since the 1940s. On the order of a million optical antennas are needed to produce a radiation pattern of a certain size in a certain direction. To achieve this, the phase of each individual antenna (emitter) is precisely controlled. It is very difficult, if at all possible, to use the same technique in a lidar. The main problems are that all individual emitters must be coherent (technically coming from the same "master" oscillator or laser source) and must have dimensions on the order of the wavelength of the emitted light (about 1 micron) so that each acts as a point source with its phase controlled to high accuracy.
Several companies are working on developing commercial solid-state lidar units, but these units utilize a different principle, described in the Flash lidar section below.
Microelectromechanical machines
Microelectromechanical mirrors (MEMS) are not entirely solid-state. However, their tiny form factor provides many of the same cost benefits. A single laser is directed to a single mirror that can be reoriented to view any part of the target field. The mirror spins at a rapid rate. However, MEMS systems generally operate in a single plane (left to right). To add a second dimension generally requires a second mirror that moves up and down. Alternatively, another laser can hit the same mirror from another angle. MEMS systems can be disrupted by shock/vibration and may require repeated calibration.
Scanner and optics
Image development speed is affected by the speed at which the scene is scanned. Options to scan the azimuth and elevation include dual oscillating plane mirrors, a combination with a polygon mirror, and a dual-axis scanner. Optic choices affect the angular resolution and range that can be detected. A hole mirror or a beam splitter are options to collect a return signal.
Photodetector and receiver electronics
Two main photodetector technologies are used in lidar: solid-state photodetectors, such as silicon avalanche photodiodes, or photomultipliers. The sensitivity of the receiver is another parameter that has to be balanced in a lidar design.
Position and navigation systems
Lidar sensors mounted on mobile platforms such as airplanes or satellites require instrumentation to determine the absolute position and orientation of the sensor. Such devices generally include a Global Positioning System receiver and an inertial measurement unit (IMU).
Sensor
Lidar uses active sensors that supply their own illumination source. The energy source hits objects and the reflected energy is detected and measured by sensors. Distance to the object is determined by recording the time between transmitted and backscattered pulses and by using the speed of light to calculate the distance traveled. Flash lidar allows for 3-D imaging because of the camera's ability to emit a larger flash and sense the spatial relationships and dimensions of the area of interest with the returned energy. This allows for more accurate imaging because the captured frames do not need to be stitched together, and the system is not sensitive to platform motion. This results in less distortion.
3-D imaging can be achieved using both scanning and non-scanning systems. "3-D gated viewing laser radar" is a non-scanning laser ranging system that applies a pulsed laser and a fast gated camera. Research has begun for virtual beam steering using Digital Light Processing (DLP) technology.
Imaging lidar can also be performed using arrays of high speed detectors and modulation sensitive detector arrays typically built on single chips using complementary metal–oxide–semiconductor (CMOS) and hybrid CMOS/Charge-coupled device (CCD) fabrication techniques. In these devices each pixel performs some local processing such as demodulation or gating at high speed, downconverting the signals to video rate so that the array can be read like a camera. Using this technique many thousands of pixels / channels may be acquired simultaneously. High resolution 3-D lidar cameras use homodyne detection with an electronic CCD or CMOS shutter.
A coherent imaging lidar uses synthetic array heterodyne detection to enable a staring single element receiver to act as though it were an imaging array.
In 2014, Lincoln Laboratory announced a new imaging chip with more than 16,384 pixels, each able to image a single photon, enabling it to capture a wide area in a single image. An earlier generation of the technology, with one fourth as many pixels, was dispatched by the U.S. military after the January 2010 Haiti earthquake. A single pass by a business jet at 3,000 metres over Port-au-Prince was able to capture instantaneous snapshots of 600-metre squares of the city at a resolution of 30 centimetres, displaying the precise height of rubble strewn in city streets. The new system is ten times better, and could produce much larger maps more quickly. The chip uses indium gallium arsenide (InGaAs), which operates in the infrared spectrum at a relatively long wavelength that allows for higher power and longer ranges. In many applications, such as self-driving cars, the new system will lower costs by not requiring a mechanical component to aim the chip. InGaAs uses less hazardous wavelengths than conventional silicon detectors, which operate at visual wavelengths. New technologies for infrared single-photon counting lidar are advancing rapidly, including arrays and cameras in a variety of semiconductor and superconducting platforms.
Flash lidar
In flash lidar, the entire field of view is illuminated with a wide diverging laser beam in a single pulse. This is in contrast to conventional scanning lidar, which uses a collimated laser beam that illuminates a single point at a time, and the beam is raster scanned to illuminate the field of view point-by-point. This illumination method requires a different detection scheme as well. In both scanning and flash lidar, a time-of-flight camera is used to collect information about both the 3-D location and intensity of the light incident on it in every frame. However, in scanning lidar, this camera contains only a point sensor, while in flash lidar, the camera contains either a 1-D or a 2-D sensor array, each pixel of which collects 3-D location and intensity information. In both cases, the depth information is collected using the time of flight of the laser pulse (i.e., the time it takes each laser pulse to hit the target and return to the sensor), which requires the pulsing of the laser and acquisition by the camera to be synchronized. The result is a camera that takes pictures of distance, instead of colors. Flash lidar is especially advantageous, when compared to scanning lidar, when the camera, scene, or both are moving, since the entire scene is illuminated at the same time. With scanning lidar, motion can cause "jitter" from the lapse in time as the laser rasters over the scene.
As with all forms of lidar, the onboard source of illumination makes flash lidar an active sensor. The signal that is returned is processed by embedded algorithms to produce a nearly instantaneous 3-D rendering of objects and terrain features within the field of view of the sensor. The laser pulse repetition frequency is sufficient for generating 3-D videos with high resolution and accuracy. The high frame rate of the sensor makes it a useful tool for a variety of applications that benefit from real-time visualization, such as highly precise remote landing operations. By immediately returning a 3-D elevation mesh of target landscapes, a flash sensor can be used to identify optimal landing zones in autonomous spacecraft landing scenarios.
Seeing at a distance requires a powerful burst of light. The power is limited to levels that do not damage human retinas. Wavelengths must not affect human eyes. However, low-cost silicon imagers do not read light in the eye-safe spectrum. Instead, gallium-arsenide imagers are required, which can boost costs to $200,000. Gallium-arsenide is the same compound used to produce high-cost, high-efficiency solar panels usually used in space applications.
Classification
Based on orientation
Lidar can be oriented to nadir, zenith, or laterally. For example, lidar altimeters look down, an atmospheric lidar looks up, and lidar-based collision avoidance systems are side-looking.
Based on scanning mechanism
Laser projections of lidars can be manipulated using various methods and mechanisms to produce a scanning effect: the standard spindle-type, which spins to give a 360-degree view; solid-state lidar, which has a fixed field of view, but no moving parts, and can use either MEMS or optical phased arrays to steer the beams; and flash lidar, which spreads a flash of light over a large field of view before the signal bounces back to a detector.
Based on platform
Lidar applications can be divided into airborne and terrestrial types. The two types require scanners with varying specifications based on the data's purpose, the size of the area to be captured, the range of measurement desired, the cost of equipment, and more. Spaceborne platforms are also possible; see satellite laser altimetry.
Airborne
Airborne lidar (also airborne laser scanning) is when a laser scanner, while attached to an aircraft during flight, creates a 3-D point cloud model of the landscape. This is currently the most detailed and accurate method of creating digital elevation models, replacing photogrammetry. One major advantage in comparison with photogrammetry is the ability to filter out reflections from vegetation from the point cloud model to create a digital terrain model which represents ground surfaces such as rivers, paths, cultural heritage sites, etc., which are concealed by trees. Within the category of airborne lidar, there is sometimes a distinction made between high-altitude and low-altitude applications, but the main difference is a reduction in both accuracy and point density of data acquired at higher altitudes. Airborne lidar can also be used to create bathymetric models in shallow water.
The main products of airborne lidar include digital elevation models (DEM) and digital surface models (DSM). The point clouds and classified ground points are vectors of discrete points, while DEMs and DSMs are raster grids interpolated from those discrete points. The process also involves capturing digital aerial photographs. Airborne lidar is used, for example, to interpret deep-seated landslides under vegetation cover by revealing scarps, tension cracks, or tipped trees. Airborne lidar digital elevation models can see through the forest canopy and permit detailed measurements of scarps, erosion, and the tilting of electric poles.
Airborne lidar data is processed using a toolbox called Toolbox for Lidar Data Filtering and Forest Studies (TIFFS) for lidar data filtering and terrain study. The data is interpolated to digital terrain models using the software. The laser is directed at the region to be mapped, and each point's height above the ground is calculated by subtracting the corresponding digital terrain model elevation from the point's original z-coordinate. Based on this height above the ground, the non-vegetation data is obtained, which may include objects such as buildings, electric power lines, flying birds, insects, etc. The rest of the points are treated as vegetation and used for modeling and mapping. Within each of these plots, lidar metrics are calculated by computing statistics such as mean, standard deviation, skewness, percentiles, quadratic mean, etc.
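The height-above-ground step reduces to simple array arithmetic; a minimal sketch with hypothetical values (TIFFS itself is a dedicated software package, so this shows only the underlying calculation):

```python
import numpy as np

# Hypothetical inputs: raw point elevations, and the digital terrain model
# elevation sampled at each point's (x, y) location.
point_z = np.array([102.4, 98.7, 115.2, 99.1])
dtm_z = np.array([98.0, 98.5, 98.9, 99.0])

height_above_ground = point_z - dtm_z

# Returns above an illustrative height threshold are flagged as above-ground
# features (buildings, power lines, birds, ...) for later classification.
THRESHOLD_M = 0.5
is_above_ground = height_above_ground > THRESHOLD_M
print(height_above_ground, is_above_ground)
```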
Multiple commercial lidar systems for unmanned aerial vehicles are currently on the market. These platforms can systematically scan large areas, or provide a cheaper alternative to manned aircraft for smaller scanning operations.
Airborne lidar bathymetry
The airborne lidar bathymetric technological system involves the measurement of time of flight of a signal from a source to its return to the sensor. The data acquisition technique involves a sea floor mapping component and a ground truth component that includes video transects and sampling. It works using a green spectrum (532 nm) laser beam. Two beams are projected onto a fast rotating mirror, which creates an array of points. One of the beams penetrates the water and also detects the bottom surface of the water under favorable conditions.
Water depth measurable by lidar depends on the clarity of the water and the absorption of the wavelength used. Water is most transparent to green and blue light, so these will penetrate deepest in clean water. Blue-green light of 532 nm, produced by frequency-doubled solid-state IR laser output, is the standard for airborne bathymetry. This light can penetrate water, but pulse strength attenuates exponentially with the distance traveled through the water. Lidar can measure depths from roughly a metre down to a few tens of metres, with vertical accuracy on the order of centimetres. Surface reflection makes very shallow water (less than about a metre) difficult to resolve, and absorption limits the maximum depth. Turbidity causes scattering and plays a significant role in determining the maximum depth that can be resolved in most situations, and dissolved pigments can increase absorption depending on wavelength. Other reports indicate that water penetration tends to be between two and three times Secchi depth. Bathymetric lidar is most useful in the shallow depth ranges encountered in coastal mapping.
On average, lidar penetrates considerably deeper in fairly clear coastal seawater than in turbid water. An average value found by Saputra et al. (2021) is that green laser light penetrates water to about one and a half to two times the Secchi depth in Indonesian waters. Water temperature and salinity affect the refractive index, which has a small effect on the depth calculation.
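The exponential attenuation noted above follows the Beer–Lambert law; the sketch below shows how the two-way return strength decays with depth (the attenuation coefficients are purely illustrative):

```python
import math

def relative_return(depth_m: float, attenuation_per_m: float) -> float:
    """Two-way Beer-Lambert attenuation for a pulse reaching depth_m and back."""
    return math.exp(-2.0 * attenuation_per_m * depth_m)

# Illustrative coefficients: clearer water (0.1/m) vs. turbid water (0.5/m).
for k in (0.1, 0.5):
    print(k, [round(relative_return(d, k), 4) for d in (1, 5, 10)])
```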
The data obtained shows the full extent of the land surface exposed above the sea floor. This technique is extremely useful because it plays an important role in major sea floor mapping programs. The mapping yields onshore topography as well as underwater elevations. Sea floor reflectance imaging is another product of this system, which can benefit the mapping of underwater habitats. This technique has been used for three-dimensional image mapping of California's waters using a hydrographic lidar.
Full-waveform lidar
Airborne lidar systems were traditionally able to acquire only a few peak returns, while more recent systems acquire and digitize the entire reflected signal. Scientists analyse the waveform signal to extract peak returns using Gaussian decomposition. Zhuang et al. (2017) used this approach for estimating aboveground biomass. Handling the huge amounts of full-waveform data is difficult. Therefore, Gaussian decomposition of the waveforms is effective, since it reduces the data and is supported by existing workflows that support interpretation of 3-D point clouds. Recent studies have investigated voxelisation. The intensities of the waveform samples are inserted into a voxelised space (a 3-D grayscale image), building up a 3-D representation of the scanned area. Related metrics and information can then be extracted from that voxelised space. Structural information can be extracted using 3-D metrics from local areas, and there is a case study that used the voxelisation approach to detect dead standing Eucalypt trees in Australia.
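Gaussian decomposition amounts to fitting a sum of Gaussian pulses to the digitized waveform; a minimal SciPy sketch on a synthetic two-return waveform (all values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-(t - m1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(t - m2) ** 2 / (2 * s2 ** 2)))

t = np.linspace(0, 100, 500)
rng = np.random.default_rng(0)
# Synthetic waveform: canopy return near t=30, ground return near t=70.
waveform = two_gaussians(t, 1.0, 30, 3, 0.6, 70, 2) + rng.normal(0, 0.02, t.size)

p0 = [1, 25, 5, 0.5, 75, 5]  # rough initial guesses for the two peaks
params, _ = curve_fit(two_gaussians, t, waveform, p0=p0)
print(params)  # fitted amplitude, position, width of each extracted return
```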
Terrestrial
Terrestrial applications of lidar (also terrestrial laser scanning) happen on the Earth's surface and can be either stationary or mobile. Stationary terrestrial scanning is most common as a survey method, for example in conventional topography, monitoring, cultural heritage documentation and forensics. The 3-D point clouds acquired from these types of scanners can be matched with digital images taken of the scanned area from the scanner's location to create realistic looking 3-D models in a relatively short time when compared to other technologies. Each point in the point cloud is given the colour of the pixel from the image taken at the same location and direction as the laser beam that created the point.
Mobile lidar (also mobile laser scanning) is when two or more scanners are attached to a moving vehicle to collect data along a path. These scanners are almost always paired with other kinds of equipment, including GNSS receivers and IMUs. One example application is surveying streets, where power lines, exact bridge heights, bordering trees, etc. all need to be taken into account. Instead of collecting each of these measurements individually in the field with a tachymeter, a 3-D model from a point cloud can be created where all of the measurements needed can be made, depending on the quality of the data collected. This eliminates the problem of forgetting to take a measurement, so long as the model is available, reliable and has an appropriate level of accuracy.
Terrestrial lidar mapping involves occupancy grid map generation. An array of cells divided into a grid stores height values as lidar returns fall into the respective grid cells. A binary map is then created by applying a particular threshold to the cell values for further processing. The next step is to process the radial distance and z-coordinates from each scan to identify which 3-D points correspond to each specified grid cell, thereby forming the data.
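A minimal sketch of that grid-and-threshold step (cell size, grid shape, and threshold are illustrative):

```python
import numpy as np

CELL = 0.5  # grid cell size in metres (illustrative)

def height_grid(points, shape):
    """Store the maximum z observed in each (x, y) grid cell."""
    grid = np.full(shape, -np.inf)
    for x, y, z in points:
        i, j = int(x // CELL), int(y // CELL)
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            grid[i, j] = max(grid[i, j], z)
    return grid

points = [(0.2, 0.3, 1.5), (0.9, 0.1, 0.2), (1.6, 1.7, 2.8)]
grid = height_grid(points, (4, 4))
binary_map = grid > 1.0  # threshold the cell heights to get the binary map
print(binary_map.astype(int))
```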
Applications
There are a wide variety of lidar applications, in addition to those listed below, and lidar is often mentioned in National lidar dataset programs. These applications are largely determined by the range of effective object detection; resolution, which is how accurately the lidar identifies and classifies objects; and reflectance confusion, meaning how well the lidar can see something in the presence of bright objects, like reflective signs or bright sun.
Companies are working to cut the cost of lidar sensors, currently anywhere from about US$1,200 to more than $12,000. Lower prices will make lidar more attractive for new markets.
Agriculture
Agricultural robots have been used for a variety of purposes, ranging from seed and fertilizer dispersion and sensing techniques to crop scouting for weed control.
Lidar can help determine where to apply costly fertilizer. It can create a topographical map of the fields and reveal slopes and sun exposure of the farmland. Researchers at the Agricultural Research Service used this topographical data with the farmland yield results from previous years, to categorize land into zones of high, medium, or low yield. This indicates where to apply fertilizer to maximize yield.
Lidar is now used to monitor insects in the field. The use of lidar can detect the movement and behavior of individual flying insects, with identification down to sex and species. In 2017 a patent application was published on this technology in the United States, Europe, and China.
Another application is crop mapping in orchards and vineyards, to detect foliage growth and the need for pruning or other maintenance, detect variations in fruit production, or count plants.
Lidar is useful in GNSS-denied situations, such as nut and fruit orchards, where foliage causes interference for agriculture equipment that would otherwise utilize a precise GNSS fix. Lidar sensors can detect and track the relative position of rows, plants, and other markers so that farming equipment can continue operating until a GNSS fix is reestablished.
Plant species classification
Controlling weeds requires identifying plant species. This can be done by using 3-D lidar and machine learning. Lidar produces plant contours as a "point cloud" with range and reflectance values. This data is transformed, and features are extracted from it. If the species is known, the features are added as new data. The species is labelled and its features are initially stored as an example to identify the species in the real environment. This method is efficient because it uses a low-resolution lidar and supervised learning. It includes an easy-to-compute feature set with common statistical features which are independent of the plant size.
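A sketch of such a pipeline with size-independent statistical features and a scikit-learn classifier (the features, data, and model choice are illustrative, not those of any particular study):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(cloud):
    """Size-independent statistics over (range, reflectance) points."""
    rng_vals, refl = cloud[:, 0], cloud[:, 1]
    return [rng_vals.std() / (rng_vals.mean() + 1e-9), refl.mean(), refl.std()]

rand = np.random.default_rng(0)
# Hypothetical training clouds for two species differing in reflectance.
clouds = [rand.normal((1.0, 0.3 + 0.2 * (i % 2)), 0.1, size=(200, 2))
          for i in range(20)]
X = np.array([features(c) for c in clouds])
y = np.array([i % 2 for i in range(20)])  # species labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([features(clouds[0])]))  # -> label of species 0
```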
Archaeology
Lidar has many uses in archaeology, including planning of field campaigns, mapping features under forest canopy, and overview of broad, continuous features indistinguishable from the ground. Lidar can produce high-resolution datasets quickly and cheaply. Lidar-derived products can be easily integrated into a Geographic Information System (GIS) for analysis and interpretation.
Lidar can also help to create high-resolution digital elevation models (DEMs) of archaeological sites that can reveal micro-topography that is otherwise hidden by vegetation. The intensity of the returned lidar signal can be used to detect features buried under flat vegetated surfaces such as fields, especially when mapping using the infrared spectrum. The presence of these features affects plant growth and thus the amount of infrared light reflected back. For example, at Fort Beauséjour – Fort Cumberland National Historic Site, Canada, lidar discovered archaeological features related to the siege of the Fort in 1755. Features that could not be distinguished on the ground or through aerial photography were identified by overlaying hill shades of the DEM created with artificial illumination from various angles. Another example is work at Caracol by Arlen Chase and his wife Diane Zaino Chase. In 2012, lidar was used to search for the legendary city of La Ciudad Blanca or "City of the Monkey God" in the La Mosquitia region of the Honduran jungle. During a seven-day mapping period, evidence was found of man-made structures. In June 2013, the rediscovery of the city of Mahendraparvata was announced. In southern New England, lidar was used to reveal stone walls, building foundations, abandoned roads, and other landscape features obscured in aerial photography by the region's dense forest canopy. In Cambodia, lidar data were used by Damian Evans and Roland Fletcher to reveal anthropogenic changes to Angkor landscape.
In 2012, lidar revealed that the Purépecha settlement of Angamuco in Michoacán, Mexico had about as many buildings as today's Manhattan; while in 2016, its use in mapping ancient Maya causeways in northern Guatemala, revealed 17 elevated roads linking the ancient city of El Mirador to other sites. In 2018, archaeologists using lidar discovered more than 60,000 man-made structures in the Maya Biosphere Reserve, a "major breakthrough" that showed the Maya civilization was much larger than previously thought. In 2024, archaeologists using lidar discovered the Upano Valley sites.
Autonomous vehicles
Autonomous vehicles may use lidar for obstacle detection and avoidance to navigate safely through environments. The introduction of lidar was the key enabler behind Stanley, the first autonomous vehicle to successfully complete the DARPA Grand Challenge. Point cloud output from the lidar sensor provides the necessary data for robot software to determine where potential obstacles exist in the environment and where the robot is in relation to those potential obstacles. Singapore's Singapore-MIT Alliance for Research and Technology (SMART) is actively developing technologies for autonomous lidar vehicles.
The very first generations of automotive adaptive cruise control systems used only lidar sensors.
Object detection for transportation systems
In transportation systems, to ensure vehicle and passenger safety and to develop electronic systems that deliver driver assistance, understanding the vehicle and its surrounding environment is essential. Lidar systems play an important role in the safety of transportation systems. Many electronic systems that contribute to driver assistance and vehicle safety, such as Adaptive Cruise Control (ACC), Emergency Brake Assist, and Anti-lock Braking Systems (ABS), depend on the detection of a vehicle's environment to act autonomously or semi-autonomously. Lidar mapping and estimation achieve this.
Basics overview: Current lidar systems use rotating hexagonal mirrors which split the laser beam. The upper three beams are used for vehicles and obstacles ahead, and the lower beams are used to detect lane markings and road features. The major advantage of using lidar is that the spatial structure is obtained, and this data can be fused with other sensors such as radar to get a better picture of the vehicle environment in terms of static and dynamic properties of the objects present in the environment. Conversely, a significant issue with lidar is the difficulty in reconstructing point cloud data in poor weather conditions. In heavy rain, for example, the light pulses emitted from the lidar system are partially reflected off of rain droplets, which adds noise to the data, called "echoes".
Various approaches to processing lidar data, and to using it along with data from other sensors through sensor fusion to detect vehicle environment conditions, are described below.
Obstacle detection and road environment recognition using lidar
This method, proposed by Kun Zhou et al., not only focuses on object detection and tracking but also recognizes lane markings and road features. As mentioned earlier, the lidar systems use rotating hexagonal mirrors that split the laser beam into six beams. The upper three layers are used to detect forward objects such as vehicles and roadside objects. The sensor is made of weather-resistant material. The data detected by lidar are clustered into several segments and tracked by a Kalman filter. Data clustering here is done based on the characteristics of each segment according to an object model, which distinguishes different objects such as vehicles, signboards, etc. These characteristics include the dimensions of the object, among others. The reflectors on the rear edges of vehicles are used to differentiate vehicles from other objects. Object tracking is done using a two-stage Kalman filter, considering the stability of tracking and the accelerated motion of objects. Lidar reflective intensity data is also used for curb detection by making use of robust regression to deal with occlusions. The road marking is detected using a modified Otsu method by distinguishing rough and shiny surfaces.
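For illustration, the predict/update core of a one-dimensional constant-velocity Kalman filter of the kind used in such trackers (all matrices and noise values are illustrative, not those of Zhou et al.):

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])  # constant-velocity state transition
H = np.array([[1.0, 0.0]])       # only position is measured
Q = 0.01 * np.eye(2)             # process noise (illustrative)
R = np.array([[0.25]])           # measurement noise (illustrative)

x = np.array([[0.0], [0.0]])     # state: [position, velocity]
P = np.eye(2)

for z in [0.11, 0.24, 0.37, 0.52]:                # segment positions per scan
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)         # update with the measurement
    P = (np.eye(2) - K @ H) @ P
print(x.ravel())  # estimated position and velocity of the tracked segment
```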
Advantages
Roadside reflectors that indicate the lane border are sometimes hidden for various reasons. Therefore, other information is needed to recognize the road border. The lidar used in this method can measure the reflectivity of objects, so with this data the road border can also be recognized. Also, the use of a sensor with a weather-robust head helps to detect objects even in bad weather conditions. A canopy height model taken before and after a flood is a good example: lidar can capture highly detailed canopy height data as well as the road border.
Lidar measurements help identify the spatial structure of the obstacle. This helps distinguish objects based on size and estimate the impact of driving over it.
Lidar systems provide better range and a large field of view, which helps in detecting obstacles on curves. This is one of its major advantages over radar systems, which have a narrower field of view. The fusion of lidar measurements with different sensors makes the system robust and useful in real-time applications, since lidar-dependent systems alone cannot estimate the dynamic information about the detected object.
It has been shown that lidar can be manipulated, such that self-driving cars are tricked into taking evasive action.
Ecology and conservation
Lidar has also found many applications in mapping natural and managed landscapes such as forests, wetlands, and grasslands. Canopy heights, biomass measurements, and leaf area can all be studied using airborne lidar systems. Similarly, lidar is used by many industries, including energy and railroad companies and departments of transportation, as a faster way of surveying. Topographic maps can also be generated readily from lidar, including for recreational use such as in the production of orienteering maps. Lidar has also been applied to estimate and assess the biodiversity of plants, fungi, and animals. Using southern bull kelp in New Zealand, coastal lidar mapping data has been compared with population genomic evidence to form hypotheses regarding the occurrence and timing of prehistoric earthquake uplift events.
Forestry
Lidar systems have also been applied to improve forestry management. Measurements are used to take inventory in forest plots as well as to calculate individual tree heights, crown width, and crown diameter. Other statistical analyses use lidar data to estimate total plot information such as canopy volume; mean, minimum, and maximum heights; vegetation cover; biomass; and carbon density. Aerial lidar was used to map the bush fires in Australia in early 2020. The data was manipulated to view bare earth and to identify healthy and burned vegetation.
Geology and soil science
High-resolution digital elevation maps generated by airborne and stationary lidar have led to significant advances in geomorphology (the branch of geoscience concerned with the origin and evolution of the Earth's surface topography). Lidar's abilities to detect subtle topographic features such as river terraces, river channel banks, and glacial landforms; to measure the land-surface elevation beneath the vegetation canopy; to better resolve spatial derivatives of elevation; to detect rockfall; and to detect elevation changes between repeat surveys have enabled many novel studies of the physical and chemical processes that shape landscapes.
In 2005 the Tour Ronde in the Mont Blanc massif became the first high alpine mountain on which lidar was employed to monitor the increasing occurrence of severe rock-fall over large rock faces allegedly caused by climate change and degradation of permafrost at high altitude.
In structural geology and geophysics, airborne lidar is combined with GNSS for the detection and study of faults and for measuring uplift. The output of the two technologies can produce extremely accurate elevation models for terrain – models that can even measure ground elevation through trees. This combination was used most famously to find the location of the Seattle Fault in Washington, United States. It was also used to measure uplift at Mount St. Helens by using data from before and after the 2004 uplift. Airborne lidar systems monitor glaciers and have the ability to detect subtle amounts of growth or decline. A satellite-based system, NASA's ICESat, includes a lidar sub-system for this purpose. The NASA Airborne Topographic Mapper is also used extensively to monitor glaciers and perform coastal change analysis.
The combination is also used by soil scientists while creating a soil survey. The detailed terrain modeling allows soil scientists to see slope changes and landform breaks which indicate patterns in soil spatial relationships.
Atmosphere
Initially, based on ruby lasers, lidar for meteorological applications was constructed shortly after the invention of the laser and represents one of the first applications of laser technology. Lidar technology has since expanded vastly in capability and lidar systems are used to perform a range of measurements that include profiling clouds, measuring winds, studying aerosols, and quantifying various atmospheric components. Atmospheric components can in turn provide useful information including surface pressure (by measuring the absorption of oxygen or nitrogen), greenhouse gas emissions (carbon dioxide and methane), photosynthesis (carbon dioxide), fires (carbon monoxide), and humidity (water vapor). Atmospheric lidars can be either ground-based, airborne or satellite-based depending on the type of measurement.
Atmospheric lidar remote sensing works in two ways –
by measuring backscatter from the atmosphere, and
by measuring the scattered reflection off the ground (when the lidar is airborne) or other hard surface.
Backscatter from the atmosphere directly gives a measure of clouds and aerosols. Other measurements derived from backscatter, such as winds or cirrus ice crystals, require careful selection of the wavelength and/or polarization detected. Doppler lidar and Rayleigh Doppler lidar are used to measure temperature and wind speed along the beam by measuring the frequency of the backscattered light. The Doppler broadening of gases in motion allows the determination of properties via the resulting frequency shift. Scanning lidars, such as NASA's conical-scanning HARLIE, have been used to measure atmospheric wind velocity. The ESA wind mission ADM-Aeolus was designed to carry a Doppler lidar system in order to provide global measurements of vertical wind profiles. A Doppler lidar system was used in the 2008 Summer Olympics to measure wind fields during the yacht competition.
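The wind retrieval rests on the two-way Doppler relation v = λ·Δf/2 for the line-of-sight velocity; a minimal sketch with illustrative numbers:

```python
def los_wind_speed(wavelength_m: float, doppler_shift_hz: float) -> float:
    """Line-of-sight velocity from the backscatter Doppler shift (two-way)."""
    return wavelength_m * doppler_shift_hz / 2.0

# A 1.55 um lidar observing a ~12.9 MHz shift sees ~10 m/s along the beam.
print(los_wind_speed(1.55e-6, 12.9e6))
```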
Doppler lidar systems are also now beginning to be successfully applied in the renewable energy sector to acquire wind speed, turbulence, wind veer, and wind shear data. Both pulsed and continuous wave systems are being used. Pulsed systems use signal timing to obtain vertical distance resolution, whereas continuous wave systems rely on detector focusing.
The term, eolics, has been proposed to describe the collaborative and interdisciplinary study of wind using computational fluid mechanics simulations and Doppler lidar measurements.
The ground reflection of an airborne lidar gives a measure of surface reflectivity (assuming the atmospheric transmittance is well known) at the lidar wavelength, however, the ground reflection is typically used for making absorption measurements of the atmosphere. "Differential absorption lidar" (DIAL) measurements utilize two or more closely spaced (less than 1 nm) wavelengths to factor out surface reflectivity as well as other transmission losses, since these factors are relatively insensitive to wavelength. When tuned to the appropriate absorption lines of a particular gas, DIAL measurements can be used to determine the concentration (mixing ratio) of that particular gas in the atmosphere. This is referred to as an Integrated Path Differential Absorption (IPDA) approach, since it is a measure of the integrated absorption along the entire lidar path. IPDA lidars can be either pulsed or CW and typically use two or more wavelengths. IPDA lidars have been used for remote sensing of carbon dioxide and methane.
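In the simplest integrated-path form, the retrieved column-averaged number density follows from the ratio of the on-line and off-line returns; a sketch (all values are illustrative):

```python
import math

def ipda_number_density(p_on, p_off, delta_sigma_m2, path_m):
    """Column-averaged number density n = ln(P_off/P_on) / (2 * dsigma * L),
    assuming a two-way path of length L and differential cross-section dsigma."""
    return math.log(p_off / p_on) / (2.0 * delta_sigma_m2 * path_m)

# 5% extra on-line absorption over a 1 km path with dsigma = 1e-27 m^2.
print(ipda_number_density(p_on=0.95, p_off=1.00,
                          delta_sigma_m2=1e-27, path_m=1000.0))
```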
Synthetic array lidar allows imaging lidar without the need for an array detector. It can be used for imaging Doppler velocimetry, ultra-fast frame rate imaging (millions of frames per second), as well as for speckle reduction in coherent lidar. An extensive lidar bibliography for atmospheric and hydrospheric applications is given by Grant.
Law enforcement
Lidar speed guns are used by the police to measure the speed of vehicles for speed limit enforcement purposes. Additionally, it is used in forensics to aid in crime scene investigations. Scans of a scene are taken to record exact details of object placement, blood, and other important information for later review. These scans can also be used to determine bullet trajectory in cases of shootings.
Military
Few military applications are known to be in place and are classified (such as the lidar-based speed measurement of the AGM-129 ACM stealth nuclear cruise missile), but a considerable amount of research is underway in their use for imaging. Higher resolution systems collect enough detail to identify targets, such as tanks. Examples of military applications of lidar include the Airborne Laser Mine Detection System (ALMDS) for counter-mine warfare by Areté Associates.
A NATO report (RTO-TR-SET-098) evaluated the potential technologies to do stand-off detection for the discrimination of biological warfare agents. The potential technologies evaluated were Long-Wave Infrared (LWIR), Differential Scattering (DISC), and Ultraviolet Laser Induced Fluorescence (UV-LIF). The report concluded that: "Based upon the results of the lidar systems tested and discussed above, the Task Group recommends that the best option for the near-term (2008–2010) application of stand-off detection systems is UV-LIF"; however, in the long term, other techniques such as stand-off Raman spectroscopy may prove to be useful for identification of biological warfare agents.
Short-range compact spectrometric lidar based on Laser-Induced Fluorescence (LIF) would address the presence of bio-threats in aerosol form over critical indoor, semi-enclosed and outdoor venues such as stadiums, subways, and airports. This near real-time capability would enable rapid detection of a bioaerosol release and allow for timely implementation of measures to protect occupants and minimize the extent of contamination.
The Long-Range Biological Standoff Detection System (LR-BSDS) was developed for the U.S. Army to provide the earliest possible standoff warning of a biological attack. It is an airborne system carried by helicopter to detect synthetic aerosol clouds containing biological and chemical agents at long range. The LR-BSDS, with a detection range of 30 km or more, was fielded in June 1997.
Five lidar units produced by the German company Sick AG were used for short range detection on Stanley, the autonomous car that won the 2005 DARPA Grand Challenge.
A robotic Boeing AH-6 performed a fully autonomous flight in June 2010, including avoiding obstacles using lidar.
Mining
The calculation of ore volumes is accomplished by periodic (monthly) scanning in areas of ore removal, then comparing surface data to the previous scan.
Lidar sensors may also be used for obstacle detection and avoidance for robotic mining vehicles such as in the Komatsu Autonomous Haulage System (AHS) used in Rio Tinto's Mine of the Future.
Physics and astronomy
A worldwide network of observatories uses lidars to measure the distance to reflectors placed on the Moon, allowing the position of the Moon to be measured with millimeter precision and tests of general relativity to be done. MOLA, the Mars Orbiting Laser Altimeter, used a lidar instrument on a Mars-orbiting satellite (the NASA Mars Global Surveyor) to produce a spectacularly precise global topographic survey of the red planet. Laser altimeters have produced global elevation models of Mars, the Moon (Lunar Orbiter Laser Altimeter, LOLA), Mercury (Mercury Laser Altimeter, MLA), and the asteroid Eros (NEAR Shoemaker Laser Rangefinder, NLR). Future missions will also include laser altimeter experiments such as the Ganymede Laser Altimeter (GALA) as part of the Jupiter Icy Moons Explorer (JUICE) mission.
In September 2008, the NASA Phoenix lander used lidar to detect snow in the atmosphere of Mars.
In atmospheric physics, lidar is used as a remote detection instrument to measure densities of certain constituents of the middle and upper atmosphere, such as potassium, sodium, or molecular nitrogen and oxygen. These measurements can be used to calculate temperatures. Lidar can also be used to measure wind speed and to provide information about vertical distribution of the aerosol particles.
At the JET nuclear fusion research facility, in the UK near Abingdon, Oxfordshire, lidar Thomson scattering is used to determine electron density and temperature profiles of the plasma.
Rock mechanics
Lidar has been widely used in rock mechanics for rock mass characterization and slope change detection. Some important geomechanical properties from the rock mass can be extracted from the 3-D point clouds obtained by means of the lidar. Some of these properties are:
Discontinuity orientation
Discontinuity spacing and RQD
Discontinuity aperture
Discontinuity persistence
Discontinuity roughness
Water infiltration
Some of these properties have been used to assess the geomechanical quality of the rock mass through the RMR index. Moreover, as the orientations of discontinuities can be extracted using the existing methodologies, it is possible to assess the geomechanical quality of a rock slope through the SMR index. In addition to this, the comparison of different 3-D point clouds from a slope acquired at different times allows researchers to study the changes produced on the scene during this time interval as a result of rockfalls or any other landsliding processes.
THOR
THOR is a laser designed to measure Earth's atmospheric conditions. The laser enters cloud cover and measures the thickness of the return halo. The sensor has a fiber optic aperture that is used to measure the return light.
Robotics
Lidar technology is being used in robotics for the perception of the environment as well as object classification. The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high-precision distance to the ground, and approach velocity can enable safe landing of robotic and crewed vehicles with a high degree of precision. Lidar is also widely used in robotics for simultaneous localization and mapping, and is well integrated into robot simulators. Refer to the Military section above for further examples.
Spaceflight
Lidar is increasingly being utilized for rangefinding and orbital element calculation of relative velocity in proximity operations and stationkeeping of spacecraft. Lidar has also been used for atmospheric studies from space. Short pulses of laser light beamed from a spacecraft can reflect off tiny particles in the atmosphere and back to a telescope aligned with the spacecraft laser. By precisely timing the lidar echo, and by measuring how much laser light is received by the telescope, scientists can accurately determine the location, distribution and nature of the particles. The result is a revolutionary new tool for studying constituents in the atmosphere, from cloud droplets to industrial pollutants, which are difficult to detect by other means.
Laser altimetry is used to make digital elevation maps of planets, including the Mars Orbital Laser Altimeter (MOLA) mapping of Mars, the Lunar Orbital Laser Altimeter (LOLA) and Lunar Altimeter (LALT) mapping of the Moon, and the Mercury Laser Altimeter (MLA) mapping of Mercury. It is also used to help navigate the helicopter Ingenuity in its record-setting flights over the terrain of Mars.
Surveying
Airborne lidar sensors are used by companies in the remote sensing field. They can be used to create a DTM (Digital Terrain Model) or DEM (Digital Elevation Model); this is quite a common practice for larger areas, as a plane can acquire wide swaths in a single flyover. Greater vertical accuracy can be achieved with a lower flyover, even in forests, where the sensor can give the height of the canopy as well as the ground elevation. Typically, a GNSS receiver configured over a georeferenced control point is needed to link the data in with the WGS (World Geodetic System).
Lidar is also in use in hydrographic surveying. Depending upon the clarity of the water, lidar can measure seabed depths with high vertical and horizontal accuracy.
Transport
Lidar has been used in the railroad industry to generate asset health reports for asset management, and by departments of transportation to assess their road conditions. CivilMaps.com is a leading company in the field. Lidar has been used in adaptive cruise control (ACC) systems for automobiles. Systems such as those by Siemens, Hella, Ouster and Cepton use a lidar device mounted on the front of the vehicle, such as the bumper, to monitor the distance between the vehicle and any vehicle in front of it. In the event the vehicle in front slows down or is too close, the ACC applies the brakes to slow the vehicle. When the road ahead is clear, the ACC allows the vehicle to accelerate to a speed preset by the driver. Refer to the Military section above for further examples. A lidar-based device, the ceilometer, is used at airports worldwide to measure the height of clouds on runway approach paths.
Wind farm optimization
Lidar can be used to increase the energy output from wind farms by accurately measuring wind speeds and wind turbulence. Experimental lidar systems can be mounted on the nacelle of a wind turbine or integrated into the rotating spinner to measure oncoming horizontal winds, winds in the wake of the wind turbine, and proactively adjust blades to protect components and increase power. Lidar is also used to characterise the incident wind resource for comparison with wind turbine power production to verify the performance of the wind turbine by measuring the wind turbine's power curve. Wind farm optimization can be considered a topic in applied eolics. Another aspect of lidar in wind related industry is to use computational fluid dynamics over lidar-scanned surfaces in order to assess the wind potential, which can be used for optimal wind farms placement.
Solar photovoltaic deployment optimization
Lidar can also be used to assist planners and developers in optimizing solar photovoltaic systems at the city level by determining appropriate rooftops and shading losses. Recent airborne laser scanning efforts have focused on ways to estimate the amount of solar light hitting vertical building facades, or on incorporating more detailed shading losses by considering the influence of vegetation and larger surrounding terrain.
Video games
Recent simulation racing games such as rFactor Pro, iRacing, Assetto Corsa and Project CARS increasingly feature race tracks reproduced from 3-D point clouds acquired through lidar surveys, resulting in surfaces replicated with centimeter or millimeter precision in the in-game 3-D environment.
The 2017 exploration game Scanner Sombre, by Introversion Software, uses lidar as a fundamental game mechanic.
In Build the Earth, lidar is used to create accurate renders of terrain in Minecraft to account for any errors (mainly regarding elevation) in the default generation. The process of rendering terrain into Build the Earth is limited by the amount of data available in a region, as well as by the speed at which the files can be converted into block data.
Other uses
The video for the 2007 song "House of Cards" by Radiohead was believed to be the first use of real-time 3-D laser scanning to record a music video. The range data in the video is not completely from a lidar, as structured light scanning is also used.
In 2020, Apple introduced the fourth generation of iPad Pro with a lidar sensor integrated into the rear camera module, especially developed for augmented reality (AR) experiences. The feature was later included in the iPhone 12 Pro lineup and subsequent Pro models. On Apple devices, lidar empowers portrait mode pictures with night mode, quickens auto focus and improves accuracy in the Measure app.
In 2022, Wheel of Fortune started using lidar technology to track when Vanna White moves her hand over the puzzle board to reveal letters. The first episode to use this technology was the season 40 premiere.
Alternative technologies
Computer stereo vision has shown promise as an alternative to lidar for close range applications.
In electronics, gain is a measure of the ability of a two-port circuit (often an amplifier) to increase the power or amplitude of a signal from the input to the output port by adding energy converted from some power supply to the signal. It is usually defined as the mean ratio of the signal amplitude or power at the output port to the amplitude or power at the input port. It is often expressed using the logarithmic decibel (dB) units ("dB gain"). A gain greater than one (greater than zero dB), that is, amplification, is the defining property of an active device or circuit, while a passive circuit will have a gain of less than one.
The term gain alone is ambiguous, and can refer to the ratio of output to input voltage (voltage gain), current (current gain) or electric power (power gain). In the field of audio and general purpose amplifiers, especially operational amplifiers, the term usually refers to voltage gain, but in radio frequency amplifiers it usually refers to power gain. Furthermore, the term gain is also applied in systems such as sensors where the input and output have different units; in such cases the gain units must be specified, as in "5 microvolts per photon" for the responsivity of a photosensor. The "gain" of a bipolar transistor normally refers to forward current transfer ratio, either hFE ("beta", the static ratio of Ic divided by Ib at some operating point), or sometimes hfe (the small-signal current gain, the slope of the graph of Ic against Ib at a point).
The gain of an electronic device or circuit generally varies with the frequency of the applied signal. Unless otherwise stated, the term refers to the gain for frequencies in the passband, the intended operating frequency range of the equipment.
The term gain has a different meaning in antenna design; antenna gain is the ratio of radiation intensity from a directional antenna to the mean radiation intensity from a lossless antenna.
Logarithmic units and decibels
Power gain
Power gain, in decibels (dB), is defined as follows:

gain-db = 10 log10(Pout / Pin) dB

where Pin is the power applied to the input and Pout is the power from the output.
A similar calculation can be done using a natural logarithm instead of a decimal logarithm, resulting in nepers instead of decibels:

gain-Np = (1/2) ln(Pout / Pin) Np
Voltage gain
The power gain can be calculated using voltage instead of power using Joule's first law P = V²/R; the formula is:

gain-db = 10 log10((Vout² / Rout) / (Vin² / Rin)) dB
In many cases, the input impedance Rin and output impedance Rout are equal, so the above equation can be simplified to:

gain-db = 20 log10(Vout / Vin) dB
This simplified formula, the 20 log rule, is used to calculate a voltage gain in decibels and is equivalent to a power gain if and only if the impedances at input and output are equal.
Current gain
In the same way, when power gain is calculated using current instead of power, making the substitution P = I²R, the formula is:

gain-db = 10 log10((Iout² Rout) / (Iin² Rin)) dB
In many cases, the input and output impedances are equal, so the above equation can be simplified to:

gain-db = 20 log10(Iout / Iin) dB
This simplified formula is used to calculate a current gain in decibels and is equivalent to the power gain if and only if the impedances at input and output are equal.
The "current gain" of a bipolar transistor, or , is normally given as a dimensionless number, the ratio of to (or slope of the -versus- graph, for ).
In the cases above, gain will be a dimensionless quantity, as it is the ratio of like units (decibels are not used as units, but rather as a method of indicating a logarithmic relationship). In the bipolar transistor example, it is the ratio of the output current to the input current, both measured in amperes. In the case of other devices, the gain will have a value in SI units. Such is the case with the operational transconductance amplifier, which has an open-loop gain (transconductance) in siemens (mhos), because the gain is a ratio of the output current to the input voltage.
Example
Q. An amplifier has an input impedance of 50 ohms and drives a load of 50 ohms. When its input ($V_{\mathrm{in}}$) is 1 volt, its output ($V_{\mathrm{out}}$) is 10 volts. What is its voltage and power gain?
A. Voltage gain is simply:
$$\frac{V_{\mathrm{out}}}{V_{\mathrm{in}}} = \frac{10}{1} = 10\ \mathrm{V/V}.$$
The units V/V are optional but make it clear that this figure is a voltage gain and not a power gain.
Using the expression for power, $P = V^2/R$, the power gain is:
$$\frac{V_{\mathrm{out}}^2/50}{V_{\mathrm{in}}^2/50} = \frac{100}{1} = 100\ \mathrm{W/W}.$$
Again, the units W/W are optional. Power gain is more usually expressed in decibels, thus:
$$G_{\mathrm{dB}} = 10 \log_{10}(100) = 20\ \mathrm{dB}.$$
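To make the example's arithmetic concrete, here is a small Python sketch (illustrative only, not from the original article; the function and variable names are invented for the example):

```python
import math

def voltage_gain(v_out, v_in):
    """Dimensionless voltage gain (V/V)."""
    return v_out / v_in

def power_gain_db(v_out, v_in, r_out, r_in):
    """Power gain in dB from voltages and impedances, using P = V^2 / R."""
    p_out = v_out ** 2 / r_out
    p_in = v_in ** 2 / r_in
    return 10 * math.log10(p_out / p_in)

# Amplifier from the worked example: 50-ohm input, 50-ohm load, 1 V in, 10 V out.
print(voltage_gain(10, 1))            # 10.0 (V/V)
print(power_gain_db(10, 1, 50, 50))   # 20.0 (dB)
```

Because the input and output impedances are equal here, the 20 log rule gives the same answer: 20·log10(10) = 20 dB.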
Unity gain
A gain of factor 1 (equivalent to 0 dB) where both input and output are at the same voltage level and impedance is also known as unity gain. | Gain (electronics) | Wikipedia | 512 | 41968 | https://en.wikipedia.org/wiki/Gain%20%28electronics%29 | Physical sciences | Electrical circuits | null |
In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized.
The problem of finding the shortest path between two intersections on a road map may be modeled as a special case of the shortest path problem in graphs, where the vertices correspond to intersections and the edges correspond to road segments, each weighted by the length or distance of each segment.
Definition
The shortest path problem can be defined for graphs whether undirected, directed, or mixed. The definition for undirected graphs states that every edge can be traversed in either direction. Directed graphs require that consecutive vertices be connected by an appropriate directed edge.
Two vertices are adjacent when they are both incident to a common edge. A path in an undirected graph is a sequence of vertices $P = (v_1, v_2, \ldots, v_{n+1})$ such that $v_i$ is adjacent to $v_{i+1}$ for $1 \le i \le n$. Such a path $P$ is called a path of length $n$ from $v_1$ to $v_{n+1}$. (The $v_i$ are variables; their numbering relates to their position in the sequence and need not relate to a canonical labeling.)
Let $e_{i,j}$ be the edge incident to both $v_i$ and $v_j$. Given a real-valued weight function $f: E \to \mathbb{R}$, and an undirected (simple) graph $G$, the shortest path from $v$ to $v'$ is the path $P = (v_1, v_2, \ldots, v_n)$ (where $v_1 = v$ and $v_n = v'$) that over all possible $n$ minimizes the sum $\sum_{i=1}^{n-1} f(e_{i,i+1})$. When each edge in the graph has unit weight, or $f: E \to \{1\}$, this is equivalent to finding the path with fewest edges.
The problem is also sometimes called the single-pair shortest path problem, to distinguish it from the following variations:
The single-source shortest path problem, in which we have to find shortest paths from a source vertex v to all other vertices in the graph.
The single-destination shortest path problem, in which we have to find shortest paths from all vertices in the directed graph to a single destination vertex v. This can be reduced to the single-source shortest path problem by reversing the arcs in the directed graph.
The all-pairs shortest path problem, in which we have to find shortest paths between every pair of vertices v, v' in the graph.
These generalizations have significantly more efficient algorithms than the simplistic approach of running a single-pair shortest path algorithm on all relevant pairs of vertices. | Shortest path problem | Wikipedia | 450 | 41985 | https://en.wikipedia.org/wiki/Shortest%20path%20problem | Mathematics | Graph theory | null |
Algorithms
Several well-known algorithms exist for solving this problem and its variants.
Dijkstra's algorithm solves the single-source shortest path problem with only non-negative edge weights.
Bellman–Ford algorithm solves the single-source problem if edge weights may be negative.
A* search algorithm solves for single-pair shortest path using heuristics to try to speed up the search.
Floyd–Warshall algorithm solves all pairs shortest paths.
Johnson's algorithm solves all pairs shortest paths, and may be faster than Floyd–Warshall on sparse graphs.
Viterbi algorithm solves the shortest stochastic path problem with an additional probabilistic weight on each node.
Additional algorithms and associated evaluations may be found in .
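As an illustration of the first algorithm listed above, the following is a minimal Dijkstra sketch in Python (a standard textbook formulation shown for illustration; the adjacency-dict graph format is an assumption of this example):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest path distances with non-negative edge weights.

    graph: dict mapping each vertex to a dict of {neighbor: weight}.
    Returns a dict of shortest distances from source to each reachable vertex.
    """
    dist = {source: 0}
    heap = [(0, source)]                      # (distance, vertex) priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry; skip
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w               # relax edge (u, v)
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {"a": {"b": 1, "c": 4}, "b": {"c": 2}, "c": {}}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```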
Single-source shortest paths
Undirected graphs
Unweighted graphs
Directed acyclic graphs
An algorithm using topological sorting can solve the single-source shortest path problem in $O(V + E)$ time in arbitrarily-weighted directed acyclic graphs.
Directed graphs with nonnegative weights
The following table is taken from , with some corrections and additions.
A green background indicates an asymptotically best bound in the table; L is the maximum length (or weight) among all edges, assuming integer edge weights.
Directed graphs with arbitrary weights without negative cycles
Directed graphs with arbitrary weights with negative cycles
Finds a negative cycle or calculates distances to all vertices.
Planar graphs with nonnegative weights
Applications
Network flows are a fundamental concept in graph theory and operations research, often used to model problems involving the transportation of goods, liquids, or information through a network. A network flow problem typically involves a directed graph where each edge represents a pipe, wire, or road, and each edge has a capacity, which is the maximum amount that can flow through it. The goal is to find a feasible flow that maximizes the flow from a source node to a sink node.
Shortest Path Problems can be used to solve certain network flow problems, particularly when dealing with single-source, single-sink networks. In these scenarios, we can transform the network flow problem into a series of shortest path problems.
Transformation Steps | Shortest path problem | Wikipedia | 434 | 41985 | https://en.wikipedia.org/wiki/Shortest%20path%20problem | Mathematics | Graph theory | null |
Create a Residual Graph:
For each edge (u, v) in the original graph, create two edges in the residual graph:
(u, v) with capacity c(u, v)
(v, u) with capacity 0
The residual graph represents the remaining capacity available in the network.
Find the Shortest Path:
Use a shortest path algorithm (e.g., Dijkstra's algorithm, Bellman-Ford algorithm) to find the shortest path from the source node to the sink node in the residual graph.
Augment the Flow:
Find the minimum capacity along the shortest path.
Increase the flow on the edges of the shortest path by this minimum capacity.
Decrease the capacity of the edges in the forward direction and increase the capacity of the edges in the backward direction.
Update the Residual Graph:
Update the residual graph based on the augmented flow.
Repeat:
Repeat steps 2-4 until no more paths can be found from the source to the sink.
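A minimal Python sketch of the loop just described (illustrative only; choosing breadth-first search as the "shortest path" step, i.e. fewest edges, gives the Edmonds–Karp variant, and the dict-of-dicts residual-capacity format is an assumption of this example):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Max flow via shortest (fewest-edge) augmenting paths on a residual graph.

    capacity: dict of dicts, capacity[u][v] = residual capacity of edge (u, v).
    Mutates capacity in place; returns the total flow pushed.
    """
    flow = 0
    while True:
        # Step 2: shortest path in the residual graph (BFS = fewest edges).
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in capacity[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                       # Step 5: no augmenting path left
        # Step 3: bottleneck = minimum residual capacity along the path.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        # Step 4: update residual capacities forward and backward.
        for u, v in path:
            capacity[u][v] -= bottleneck
            capacity[v].setdefault(u, 0)
            capacity[v][u] += bottleneck
        flow += bottleneck

cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(cap, "s", "t"))  # 4
```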
All-pairs shortest paths
The all-pairs shortest path problem finds the shortest paths between every pair of vertices $v$, $v'$ in the graph. The all-pairs shortest paths problem for unweighted directed graphs was introduced by Shimbel (1953), who observed that it could be solved by a linear number of matrix multiplications that takes a total time of $O(V^4)$.
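For comparison, here is a minimal sketch of Floyd–Warshall, the $O(V^3)$ all-pairs method listed among the algorithms earlier (a standard formulation shown for illustration; the nested-list distance-matrix input is an assumption of this example):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths in O(V^3).

    dist: V x V matrix; dist[i][j] is the weight of edge (i, j),
    float('inf') if absent, and 0 on the diagonal. Modified in place.
    """
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Allow k as an intermediate vertex on the i -> j path.
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float("inf")
m = [[0, 3, INF], [INF, 0, 1], [2, INF, 0]]
print(floyd_warshall(m))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```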
Undirected graph
Directed graph
Applications
Shortest path algorithms are applied to automatically find directions between physical locations, such as driving directions on web mapping websites like MapQuest or Google Maps. For this application fast specialized algorithms are available.
If one represents a nondeterministic abstract machine as a graph where vertices describe states and edges describe possible transitions, shortest path algorithms can be used to find an optimal sequence of choices to reach a certain goal state, or to establish lower bounds on the time needed to reach a given state. For example, if vertices represent the states of a puzzle like a Rubik's Cube and each directed edge corresponds to a single move or turn, shortest path algorithms can be used to find a solution that uses the minimum possible number of moves.
In a networking or telecommunications context, this shortest path problem is sometimes called the min-delay path problem and is usually tied to a widest path problem. For example, the algorithm may seek the shortest (min-delay) widest path, or widest shortest (min-delay) path.
A more lighthearted application is the games of "six degrees of separation" that try to find the shortest path in graphs like movie stars appearing in the same film. | Shortest path problem | Wikipedia | 503 | 41985 | https://en.wikipedia.org/wiki/Shortest%20path%20problem | Mathematics | Graph theory | null |
Other applications, often studied in operations research, include plant and facility layout, robotics, transportation, and VLSI design.
Road networks
A road network can be considered as a graph with positive weights. The nodes represent road junctions and each edge of the graph is associated with a road segment between two junctions. The weight of an edge may correspond to the length of the associated road segment, the time needed to traverse the segment, or the cost of traversing the segment. Using directed edges it is also possible to model one-way streets. Such graphs are special in the sense that some edges are more important than others for long-distance travel (e.g. highways). This property has been formalized using the notion of highway dimension. There are a great number of algorithms that exploit this property and are therefore able to compute the shortest path a lot quicker than would be possible on general graphs.
All of these algorithms work in two phases. In the first phase, the graph is preprocessed without knowing the source or target node. The second phase is the query phase. In this phase, source and target node are known. The idea is that the road network is static, so the preprocessing phase can be done once and used for a large number of queries on the same road network.
The algorithm with the fastest known query time is called hub labeling and is able to compute shortest path on the road networks of Europe or the US in a fraction of a microsecond. Other techniques that have been used are:
ALT (A* search, landmarks, and triangle inequality)
Arc flags
Contraction hierarchies
Transit node routing
Reach-based pruning
Labeling
Hub labels
Related problems
For shortest path problems in computational geometry, see Euclidean shortest path.
The shortest multiple disconnected path is a representation of the primitive path network within the framework of Reptation theory. The widest path problem seeks a path so that the minimum label of any edge is as large as possible.
Other related problems may be classified into the following categories. | Shortest path problem | Wikipedia | 410 | 41985 | https://en.wikipedia.org/wiki/Shortest%20path%20problem | Mathematics | Graph theory | null |
Paths with constraints
Unlike the shortest path problem, which can be solved in polynomial time in graphs without negative cycles, shortest path problems which include additional constraints on the desired solution path are called Constrained Shortest Path First, and are harder to solve. One example is the constrained shortest path problem, which attempts to minimize the total cost of the path while at the same time maintaining another metric below a given threshold. This makes the problem NP-complete (such problems are not believed to be efficiently solvable for large sets of data, see P = NP problem). Another NP-complete example requires a specific set of vertices to be included in the path, which makes the problem similar to the Traveling Salesman Problem (TSP). The TSP is the problem of finding the shortest path that goes through every vertex exactly once, and returns to the start. The problem of finding the longest path in a graph is also NP-complete.
Partial observability
The Canadian traveller problem and the stochastic shortest path problem are generalizations where either the graph is not completely known to the mover, changes over time, or where actions (traversals) are probabilistic.
Strategic shortest paths
Sometimes, the edges in a graph have personalities: each edge has its own selfish interest. An example is a communication network, in which each edge is a computer that possibly belongs to a different person. Different computers have different transmission speeds, so every edge in the network has a numeric weight equal to the number of milliseconds it takes to transmit a message. Our goal is to send a message between two points in the network in the shortest time possible. If we know the transmission-time of each computer (the weight of each edge), then we can use a standard shortest-paths algorithm. If we do not know the transmission times, then we have to ask each computer to tell us its transmission-time. But, the computers may be selfish: a computer might tell us that its transmission time is very long, so that we will not bother it with our messages. A possible solution to this problem is to use a variant of the VCG mechanism, which gives the computers an incentive to reveal their true weights.
Negative cycle detection
In some cases, the main goal is not to find the shortest path, but only to detect if the graph contains a negative cycle. Some shortest-paths algorithms can be used for this purpose: | Shortest path problem | Wikipedia | 486 | 41985 | https://en.wikipedia.org/wiki/Shortest%20path%20problem | Mathematics | Graph theory | null |
The Bellman–Ford algorithm can be used to detect a negative cycle in time $O(|V||E|)$.
Cherkassky and Goldberg survey several other algorithms for negative cycle detection.
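A sketch of the Bellman–Ford check in Python (a standard formulation shown for illustration; the edge-list input and the zero-initialization trick, which behaves like adding a virtual source connected to every vertex, are choices made for this example): run |V| − 1 rounds of relaxation, then flag a negative cycle if any edge can still be relaxed.

```python
def has_negative_cycle(num_vertices, edges):
    """edges: list of (u, v, w) tuples, vertices numbered 0..num_vertices-1."""
    dist = [0] * num_vertices        # 0-init finds cycles reachable from anywhere
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement implies a negative cycle.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

print(has_negative_cycle(3, [(0, 1, 1), (1, 2, -2), (2, 0, -1)]))  # True
print(has_negative_cycle(3, [(0, 1, 1), (1, 2, -2), (2, 0, 5)]))   # False
```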
General algebraic framework on semirings: the algebraic path problem
Many problems can be framed as a form of the shortest path for some suitably substituted notions of addition along a path and taking the minimum. The general approach to these is to consider the two operations to be those of a semiring. Semiring multiplication is done along the path, and the addition is between paths. This general framework is known as the algebraic path problem.
Most of the classic shortest-path algorithms (and new ones) can be formulated as solving linear systems over such algebraic structures.
More recently, an even more general framework for solving these (and much less obviously related problems) has been developed under the banner of valuation algebras.
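A small Python sketch of the semiring view (illustrative only; the function and parameter names are invented): the same Floyd–Warshall-style loop computes ordinary shortest paths with the (min, +) semiring and maximum-bottleneck "widest" paths with (max, min), simply by swapping the two operations and their identities.

```python
def algebraic_paths(n, edges, add, mul, zero, one):
    """Generic all-pairs path solver over a semiring, Floyd-Warshall style.

    add: combines parallel paths (e.g. min); mul: extends a path by an edge
    (e.g. +); zero: identity of add (e.g. inf); one: identity of mul (e.g. 0).
    """
    d = [[zero] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = one
    for u, v, w in edges:
        d[u][v] = add(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = add(d[i][j], mul(d[i][k], d[k][j]))
    return d

edges = [(0, 1, 3), (1, 2, 1), (0, 2, 5)]
shortest = algebraic_paths(3, edges, min, lambda a, b: a + b, float("inf"), 0)
widest = algebraic_paths(3, edges, max, min, 0, float("inf"))
print(shortest[0][2])  # 4  -- (min, +): shortest path is 0 -> 1 -> 2
print(widest[0][2])    # 5  -- (max, min): widest path is the direct edge 0 -> 2
```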
Shortest path in stochastic time-dependent networks
In real-life, a transportation network is usually stochastic and time-dependent. The travel duration on a road segment depends on many factors such as the amount of traffic (origin-destination matrix), road work, weather, accidents and vehicle breakdowns. A more realistic model of such a road network is a stochastic time-dependent (STD) network.
There is no accepted definition of optimal path under uncertainty (that is, in stochastic road networks). It is a controversial subject, despite considerable progress during the past decade. One common definition is a path with the minimum expected travel time. The main advantage of this approach is that it can make use of efficient shortest path algorithms for deterministic networks. However, the resulting optimal path may not be reliable, because this approach fails to address travel time variability.
To tackle this issue, some researchers use travel duration distribution instead of its expected value. So, they find the probability distribution of total travel duration using different optimization methods such as dynamic programming and Dijkstra's algorithm. These methods use stochastic optimization, specifically stochastic dynamic programming, to find the shortest path in networks with probabilistic arc length. The terms travel time reliability and travel time variability are used as opposites in the transportation research literature: the higher the variability, the lower the reliability of predictions. | Shortest path problem | Wikipedia | 456 | 41985 | https://en.wikipedia.org/wiki/Shortest%20path%20problem | Mathematics | Graph theory | null |
To account for variability, researchers have suggested two alternative definitions for an optimal path under uncertainty. The most reliable path is one that maximizes the probability of arriving on time given a travel time budget. An α-reliable path is one that minimizes the travel time budget required to arrive on time with a given probability. | Shortest path problem | Wikipedia | 63 | 41985 | https://en.wikipedia.org/wiki/Shortest%20path%20problem | Mathematics | Graph theory | null |
In physics and many other areas of science and engineering the intensity or flux of radiant energy is the power transferred per unit area, where the area is measured on the plane perpendicular to the direction of propagation of the energy. In the SI system, it has units watts per square metre (W/m2), or kg⋅s−3 in base units. Intensity is used most frequently with waves such as acoustic waves (sound), matter waves such as electrons in electron microscopes, and electromagnetic waves such as light or radio waves, in which case the average power transfer over one period of the wave is used. Intensity can be applied to other circumstances where energy is transferred. For example, one could calculate the intensity of the kinetic energy carried by drops of water from a garden sprinkler.
The word "intensity" as used here is not synonymous with "strength", "amplitude", "magnitude", or "level", as it sometimes is in colloquial speech.
Intensity can be found by taking the energy density (energy per unit volume) at a point in space and multiplying it by the velocity at which the energy is moving. The resulting vector has the units of power divided by area (i.e., surface power density). The intensity of a wave is proportional to the square of its amplitude. For example, the intensity of an electromagnetic wave is proportional to the square of the wave's electric field amplitude.
Mathematical description
If a point source is radiating energy in all directions (producing a spherical wave), and no energy is absorbed or scattered by the medium, then the intensity decreases in proportion to the distance from the object squared. This is an example of the inverse-square law.
Applying the law of conservation of energy, if the net power emanating is constant,
$$P = \oint \mathbf{I} \cdot d\mathbf{A},$$
where
$P$ is the net power radiated;
$\mathbf{I}$ is the intensity vector as a function of position;
the magnitude $|\mathbf{I}|$ is the intensity as a function of position;
$d\mathbf{A}$ is a differential element of a closed surface that contains the source.
If one integrates a uniform intensity, $|\mathbf{I}| = \mathrm{const.}$, over a surface that is perpendicular to the intensity vector, for instance over a sphere centered around the point source, the equation becomes
$$P = |\mathbf{I}| \cdot A_{\mathrm{surf}} = |\mathbf{I}| \cdot 4\pi r^2,$$
where
$|\mathbf{I}|$ is the intensity at the surface of the sphere;
$r$ is the radius of the sphere;
$4\pi r^2$ is the expression for the surface area of a sphere.
Solving for $|\mathbf{I}|$ gives
$$|\mathbf{I}| = \frac{P}{4\pi r^2}.$$
If the medium is damped, then the intensity drops off more quickly than the above equation suggests. | Intensity (physics) | Wikipedia | 493 | 41993 | https://en.wikipedia.org/wiki/Intensity%20%28physics%29 | Physical sciences | Optics | Physics |
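A quick numeric illustration of the inverse-square result in Python (the power and distances are arbitrary example values):

```python
import math

def point_source_intensity(power_watts, radius_m):
    """Intensity (W/m^2) at distance r from an isotropic point source,
    assuming a lossless medium: I = P / (4 * pi * r^2)."""
    return power_watts / (4 * math.pi * radius_m ** 2)

# A 100 W isotropic source: doubling the distance quarters the intensity.
print(point_source_intensity(100, 1.0))  # ~7.96 W/m^2
print(point_source_intensity(100, 2.0))  # ~1.99 W/m^2
```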
Anything that can transmit energy can have an intensity associated with it. For a monochromatic propagating electromagnetic wave, such as a plane wave or a Gaussian beam, if $E$ is the complex amplitude of the electric field, then the time-averaged energy density of the wave, travelling in a non-magnetic material, is given by:
$$\langle U \rangle = \frac{n^2 \varepsilon_0}{2} |E|^2,$$
and the local intensity is obtained by multiplying this expression by the wave velocity, $c/n$:
$$I = \frac{c n \varepsilon_0}{2} |E|^2,$$
where
$n$ is the refractive index;
$c$ is the speed of light in vacuum;
$\varepsilon_0$ is the vacuum permittivity.
For non-monochromatic waves, the intensity contributions of different spectral components can simply be added. The treatment above does not hold for arbitrary electromagnetic fields. For example, an evanescent wave may have a finite electrical amplitude while not transferring any power. The intensity should then be defined as the magnitude of the Poynting vector.
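A small Python sketch of the plane-wave intensity formula above (the field amplitude in the example is arbitrary):

```python
EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m
C = 299_792_458.0             # speed of light in vacuum, m/s

def plane_wave_intensity(e_field_amplitude, refractive_index=1.0):
    """Time-averaged intensity I = (n * c * epsilon_0 / 2) * |E|^2, in W/m^2."""
    return 0.5 * refractive_index * C * EPSILON_0 * abs(e_field_amplitude) ** 2

# A 1 kV/m field amplitude in vacuum:
print(plane_wave_intensity(1e3))  # ~1328 W/m^2
```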
Electron beams
For electron beams, intensity is the probability of electrons reaching some particular position on a detector (e.g. a charge-coupled device), which is used to produce images that are interpreted in terms of both the microstructure of inorganic or biological materials and atomic-scale structure. The map of the intensity of scattered electrons or x-rays as a function of direction is also extensively used in crystallography.
Alternative definitions
In photometry and radiometry intensity has a different meaning: it is the luminous or radiant power per unit solid angle. This can cause confusion in optics, where intensity can mean any of radiant intensity, luminous intensity or irradiance, depending on the background of the person using the term. Radiance is also sometimes called intensity, especially by astronomers and astrophysicists, and in heat transfer. | Intensity (physics) | Wikipedia | 344 | 41993 | https://en.wikipedia.org/wiki/Intensity%20%28physics%29 | Physical sciences | Optics | Physics |
A twin prime is a prime number that is either 2 less or 2 more than another prime number—for example, either member of the twin prime pair (17, 19) or (41, 43). In other words, a twin prime is a prime that has a prime gap of two. Sometimes the term twin prime is used for a pair of twin primes; an alternative name for this is prime twin or prime pair.
Twin primes become increasingly rare as one examines larger ranges, in keeping with the general tendency of gaps between adjacent primes to become larger as the numbers themselves get larger. However, it is unknown whether there are infinitely many twin primes (the so-called twin prime conjecture) or if there is a largest pair. The breakthrough
work of Yitang Zhang in 2013, as well as work by James Maynard, Terence Tao and others, has made substantial progress towards proving that there are infinitely many twin primes, but at present this remains unsolved.
Properties
Usually the pair (2, 3) is not considered to be a pair of twin primes.
Since 2 is the only even prime, this pair is the only pair of prime numbers that differ by one; thus twin primes are as closely spaced as possible for any other two primes.
The first several twin prime pairs are
(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73), (101, 103), (107, 109), (137, 139), …
Five is the only prime that belongs to two pairs, as every twin prime pair greater than (3, 5) is of the form $(6n - 1, 6n + 1)$ for some natural number $n$; that is, the number between the two primes is a multiple of 6.
As a result, the sum of any pair of twin primes (other than 3 and 5) is divisible by 12.
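A short Python sketch (illustrative, not from the article) that lists the first twin prime pairs and checks the divisibility-by-12 property just stated:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

pairs = [(p, p + 2) for p in range(2, 200) if is_prime(p) and is_prime(p + 2)]
print(pairs)
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61),
#  (71, 73), (101, 103), (107, 109), (137, 139), (149, 151), (179, 181),
#  (191, 193), (197, 199)]

# Every pair except (3, 5) sums to a multiple of 12, since (6n-1) + (6n+1) = 12n.
assert all((a + b) % 12 == 0 for a, b in pairs if a != 3)
```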
Brun's theorem
In 1915, Viggo Brun showed that the sum of reciprocals of the twin primes was convergent.
This famous result, called Brun's theorem, was the first use of the Brun sieve and helped initiate the development of modern sieve theory. The modern version of Brun's argument can be used to show that the number of twin primes less than $N$ does not exceed
$$\frac{C N}{(\log N)^2}$$
for some absolute constant $C > 0$.
In fact, it is bounded above by
$$\frac{8 C_2 N}{(\log N)^2}\left(1 + O\!\left(\frac{\log \log N}{\log N}\right)\right),$$
where $C_2$ is the twin prime constant (slightly less than 2/3), given below. | Twin prime | Wikipedia | 437 | 41997 | https://en.wikipedia.org/wiki/Twin%20prime | Mathematics | Prime numbers | null |
Twin prime conjecture
The question of whether there exist infinitely many twin primes has been one of the great open questions in number theory for many years. This is the content of the twin prime conjecture, which states that there are infinitely many primes $p$ such that $p + 2$ is also prime. In 1849, de Polignac made the more general conjecture that for every natural number $k$, there are infinitely many primes $p$ such that $p + 2k$ is also prime.
The case $k = 1$ of de Polignac's conjecture is the twin prime conjecture.
A stronger form of the twin prime conjecture, the Hardy–Littlewood conjecture (see below), postulates a distribution law for twin primes akin to the prime number theorem.
On 17 April 2013, Yitang Zhang announced a proof that there exists an integer $N$ that is less than 70 million, such that there are infinitely many pairs of primes that differ by $N$. Zhang's paper was accepted in early May 2013. Terence Tao subsequently proposed a Polymath Project collaborative effort to optimize Zhang's bound.
One year after Zhang's announcement, the bound had been reduced to 246, where it remains.
These improved bounds were discovered using a different approach that was simpler than Zhang's and was discovered independently by James Maynard and Terence Tao. This second approach also gave bounds for the smallest $f(m)$ needed to guarantee that infinitely many intervals of width $f(m)$ contain at least $m$ primes. Moreover (see also the next section), assuming the Elliott–Halberstam conjecture and its generalized form, the Polymath Project wiki states that the bound is 12 and 6, respectively.
A strengthening of Goldbach’s conjecture, if proved, would also prove there is an infinite number of twin primes, as would the existence of Siegel zeroes. | Twin prime | Wikipedia | 346 | 41997 | https://en.wikipedia.org/wiki/Twin%20prime | Mathematics | Prime numbers | null |
Other theorems weaker than the twin prime conjecture
In 1940, Paul Erdős showed that there is a constant $c < 1$ and infinitely many primes $p$ such that $p' - p < c \ln p$, where $p'$ denotes the next prime after $p$. What this means is that we can find infinitely many intervals that contain two primes as long as we let these intervals grow slowly in size as we move to bigger and bigger primes. Here, "grow slowly" means that the length of these intervals can grow logarithmically. This result was successively improved; in 1986 Helmut Maier showed that a constant $c < 0.25$ can be used. In 2004 Daniel Goldston and Cem Yıldırım showed that the constant could be improved further to $c = 0.085786\ldots$ In 2005, Goldston, Pintz, and Yıldırım established that $c$ can be chosen to be arbitrarily small,
i.e.
$$\liminf_{n\to\infty} \frac{p_{n+1} - p_n}{\ln p_n} = 0.$$
On the other hand, this result does not rule out that there may not be infinitely many intervals that contain two primes if we only allow the intervals to grow in size as, for example, $c \ln \ln p$.
By assuming the Elliott–Halberstam conjecture or a slightly weaker version, they were able to show that there are infinitely many $n$ such that at least two of $n$, $n + 2$, $n + 6$, $n + 8$, $n + 12$, $n + 18$, or $n + 20$ are prime. Under a stronger hypothesis they showed that for infinitely many $n$, at least two of $n$, $n + 2$, $n + 4$, and $n + 6$ are prime.
The result of Yitang Zhang,
$$\liminf_{n\to\infty} (p_{n+1} - p_n) < 7 \times 10^7,$$
is a major improvement on the Goldston–Graham–Pintz–Yıldırım result. The Polymath Project optimization of Zhang's bound and the work of Maynard have reduced the bound: the limit inferior is at most 246.
Conjectures
First Hardy–Littlewood conjecture
The first Hardy–Littlewood conjecture (named after G. H. Hardy and John Littlewood) is a generalization of the twin prime conjecture. It is concerned with the distribution of prime constellations, including twin primes, in analogy to the prime number theorem. Let $\pi_2(x)$ denote the number of primes $p \le x$ such that $p + 2$ is also prime. Define the twin prime constant $C_2$ as
$$C_2 = \prod_{\substack{p\ \mathrm{prime} \\ p \ge 3}} \left(1 - \frac{1}{(p-1)^2}\right) \approx 0.660161815846869\ldots$$
(Here the product extends over all prime numbers $p \ge 3$.) Then a special case of the first Hardy–Littlewood conjecture is that
$$\pi_2(x) \sim 2 C_2 \frac{x}{(\ln x)^2} \sim 2 C_2 \int_2^x \frac{dt}{(\ln t)^2},$$
in the sense that the quotient of the two expressions tends to 1 as $x$ approaches infinity. (The second ~ is not part of the conjecture and is proven by integration by parts.) | Twin prime | Wikipedia | 475 | 41997 | https://en.wikipedia.org/wiki/Twin%20prime | Mathematics | Prime numbers | null |
The conjecture can be justified (but not proven) by assuming that $1/\ln t$ describes the density function of the prime distribution. This assumption, which is suggested by the prime number theorem, implies the twin prime conjecture, as shown in the formula for $\pi_2(x)$ above.
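The conjectured asymptotic can be checked numerically over small ranges; a rough Python sketch (illustrative only, using the simpler non-integral form $2C_2 x/(\ln x)^2$, which approaches the integral form only slowly):

```python
import math

C2 = 0.6601618158468696  # twin prime constant

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

x = 100_000
pi2 = sum(1 for p in range(2, x) if is_prime(p) and is_prime(p + 2))
prediction = 2 * C2 * x / math.log(x) ** 2
print(pi2, round(prediction))  # 1224 vs ~996; the integral form fits better
```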
The fully general first Hardy–Littlewood conjecture on prime -tuples (not given here) implies that the second Hardy–Littlewood conjecture is false.
This conjecture has been extended by Dickson's conjecture.
Polignac's conjecture
Polignac's conjecture from 1849 states that for every positive even integer $k$, there are infinitely many consecutive prime pairs $p$ and $p'$ such that $p' - p = k$ (i.e. there are infinitely many prime gaps of size $k$). The case $k = 2$ is the twin prime conjecture. The conjecture has not yet been proven or disproven for any specific value of $k$, but Zhang's result proves that it is true for at least one (currently unknown) value of $k$. Indeed, if such a $k$ did not exist, then for any positive even natural number $N$ there are at most finitely many $n$ such that $p_{n+1} - p_n = m$ for all $m < N$, and so for $n$ large enough we have $p_{n+1} - p_n > N$, which would contradict Zhang's result.
Large twin primes
Beginning in 2007, two distributed computing projects, Twin Prime Search and PrimeGrid, have produced several record-largest twin primes. The current largest twin prime pair known is $2996863034895 \times 2^{1290000} \pm 1$, with 388,342 decimal digits. It was discovered in September 2016.
There are 808,675,888,577,436 twin prime pairs below $10^{18}$.
An empirical analysis of all prime pairs up to $4.35 \times 10^{15}$ shows that if the number of such pairs less than $x$ is $f(x) \cdot x/(\log x)^2$ then $f(x)$ is about 1.7 for small $x$ and decreases towards about 1.3 as $x$ tends to infinity. The limiting value of $f(x)$ is conjectured to equal twice the twin prime constant ($2C_2 \approx 1.32$; not to be confused with Brun's constant), according to the Hardy–Littlewood conjecture.
Other elementary properties
Every third odd number is divisible by 3, and therefore no three successive odd numbers can be prime unless one of them is 3. Therefore, 5 is the only prime that is part of two twin prime pairs. The lower member of a pair is by definition a Chen prime.
If m − 4 or m + 6 is also prime then the three primes are called a prime triplet.
It has been proven that the pair ($m$, $m + 2$) is a twin prime if and only if $4((m-1)! + 1) \equiv -m \pmod{m(m+2)}$. | Twin prime | Wikipedia | 491 | 41997 | https://en.wikipedia.org/wiki/Twin%20prime | Mathematics | Prime numbers | null |
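This Wilson-theorem-style congruence can be verified directly for small $m$; a brute-force Python sketch (illustrative only):

```python
import math

def is_twin_by_congruence(m):
    """(m, m + 2) is a twin prime pair iff 4*((m-1)! + 1) + m ≡ 0 (mod m*(m+2))."""
    return (4 * (math.factorial(m - 1) + 1) + m) % (m * (m + 2)) == 0

# Lower members of the twin prime pairs below 100:
print([m for m in range(2, 100) if is_twin_by_congruence(m)])
# [3, 5, 11, 17, 29, 41, 59, 71]
```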
For a twin prime pair of the form (6n − 1, 6n + 1) for some natural number n > 1, n must end in the digit 0, 2, 3, 5, 7, or 8. If n were to end in 1 or 6, 6n would end in 6, and 6n − 1 would be a multiple of 5. This is not prime unless n = 1. Likewise, if n were to end in 4 or 9, 6n would end in 4, and 6n + 1 would be a multiple of 5. The same rule applies modulo any prime p ≥ 5: if n ≡ ±6⁻¹ (mod p), then one of the pair will be divisible by p and will not be a twin prime pair unless 6n = p ± 1. p = 5 just happens to produce particularly simple patterns in base 10.
Isolated prime
An isolated prime (also known as single prime or non-twin prime) is a prime number p such that neither p − 2 nor p + 2 is prime. In other words, p is not part of a twin prime pair. For example, 23 is an isolated prime, since 21 and 25 are both composite.
The first few isolated primes are
2, 23, 37, 47, 53, 67, 79, 83, 89, 97, ...
It follows from Brun's theorem that almost all primes are isolated in the sense that the ratio of the number of isolated primes less than a given threshold n and the number of all primes less than n tends to 1 as n tends to infinity. | Twin prime | Wikipedia | 332 | 41997 | https://en.wikipedia.org/wiki/Twin%20prime | Mathematics | Prime numbers | null |
Plastic surgery is a surgical specialty involving the restoration, reconstruction, or alteration of the human body. It can be divided into two main categories: reconstructive surgery and cosmetic surgery. Reconstructive surgery covers a wide range of specialties, including craniofacial surgery, hand surgery, microsurgery, and the treatment of burns. This category of surgery focuses on restoring a body part or improving its function. In contrast, cosmetic (or aesthetic) surgery focuses solely on improving the physical appearance of the body. A comprehensive definition of plastic surgery has never been established, because it has no distinct anatomical object and thus overlaps with practically all other surgical specialties. An essential feature of plastic surgery is that it involves the treatment of conditions that require or may require tissue relocation skills.
Etymology
The word plastic in plastic surgery is in reference to the concept of "reshaping" and comes from the Greek πλαστική (τέχνη), plastikē (tekhnē), "the art of modelling" of malleable flesh. This meaning in English is seen as early as 1598. In the surgical context, the word "plastic" first appeared in 1816 and was established in 1838 by Eduard Zeis, preceding the modern technical usage of the word as "engineering material made from petroleum" by 70 years.
History
Treatments for the plastic repair of a broken nose are first mentioned in the Egyptian medical text known as the Edwin Smith papyrus, an early trauma surgery text named after the American Egyptologist Edwin Smith. Reconstructive surgery techniques were being carried out in India by 800 BC. Sushruta was a physician who made contributions to the field of plastic and cataract surgery in the 6th century BC.
The Romans also performed plastic cosmetic surgery, using simple techniques, such as repairing damaged ears, from around the 1st century BC. For religious reasons, they did not dissect either human beings or animals, thus, their knowledge was based in its entirety on the texts of their Greek predecessors. Notwithstanding, Aulus Cornelius Celsus left some accurate anatomical descriptions, some of which—for instance, his studies on the genitalia and the skeleton—are of special interest to plastic surgery. | Plastic surgery | Wikipedia | 457 | 42048 | https://en.wikipedia.org/wiki/Plastic%20surgery | Biology and health sciences | Medical procedures | null |
Arabs practiced plastic surgery during the Abbasid Caliphate, from around 750 AD. The Arabic translations made their way into Europe via intermediaries. In Italy, the Branca family of Sicily and Gaspare Tagliacozzi (Bologna) became familiar with the techniques of Sushruta.
Writing on all fields of surgery, the Arab physician, surgeon, and chemist Al-Zahrawi discusses the use of silk thread sutures to achieve good cosmesis. He describes what is thought to be the first attempt at reduction mammaplasty for the management of gynaecomastia, and gives detailed descriptions of other basic surgical techniques such as cautery and wound management.
British physicians travelled to India to see rhinoplasties being performed by Indian methods. Reports on Indian rhinoplasty performed by a Kumhar (potter) vaidya were published in the Gentleman's Magazine by 1794. Joseph Constantine Carpue spent 20 years in India studying local plastic surgery methods and was able to perform the first major rhinoplasty in the Western world in 1815. Instruments described in the Sushruta Samhita were further modified in the Western world.
In 1465, Sabuncuoğlu's book provided a more informative and up-to-date description and classification of hypospadias; the localization of the urethral meatus was described in detail. Sabuncuoğlu also detailed the description and classification of ambiguous genitalia. In mid-15th-century Europe, Heinrich von Pfolspeundt described a process "to make a new nose for one who lacks it entirely, and the dogs have devoured it" by removing skin from the back of the arm and suturing it in place. However, because of the dangers associated with surgery in any form, especially that involving the head or face, it was not until the 19th and 20th centuries that such surgery became common.
In 1814, Joseph Carpue successfully performed an operative procedure on a British military officer who had lost his nose to the toxic effects of mercury treatments. In 1818, German surgeon Carl Ferdinand von Graefe published his major work entitled Rhinoplastik. Von Graefe modified the Italian method using a free skin graft from the arm instead of the original delayed pedicle flap.
The first American plastic surgeon was John Peter Mettauer, who, in 1827, performed the first cleft palate operation with instruments that he designed himself. | Plastic surgery | Wikipedia | 501 | 42048 | https://en.wikipedia.org/wiki/Plastic%20surgery | Biology and health sciences | Medical procedures | null |
Johann Friedrich Dieffenbach specialized in skin transplantation and early plastic surgery. His work in rhinoplastic and maxillofacial surgery established many modern techniques of reconstructive surgery. In 1845, Dieffenbach wrote a comprehensive text on rhinoplasty, titled Operative Chirurgie, and introduced the concept of reoperation to improve the cosmetic appearance of the reconstructed nose. Dieffenbach has been called the "father of plastic surgery".
Another case of plastic surgery for nose reconstruction from 1884 at Bellevue Hospital was described in Scientific American.
In 1891, American otorhinolaryngologist John Roe presented an example of his work: a young woman on whom he reduced a dorsal nasal hump for cosmetic indications. In 1892, Robert Weir experimented unsuccessfully with xenografts (duck sternum) in the reconstruction of sunken noses. In 1896, James Israel, a urological surgeon from Germany, and in 1889 George Monks of the United States each described the successful use of heterogeneous free-bone grafting to reconstruct saddle nose defects. In 1898, Jacques Joseph, the German orthopaedic-trained surgeon, published his first account of reduction rhinoplasty. In 1910, Alexander Ostroumov, the Russian pharmacist, and perfume and cosmetics manufacturer, founded a unique plastic surgery department in his Moscow Institute of Medical Cosmetics. In 1928, Jacques Joseph published Nasenplastik und sonstige Gesichtsplastik.
Nascency of maxillofacial surgery | Plastic surgery | Wikipedia | 298 | 42048 | https://en.wikipedia.org/wiki/Plastic%20surgery | Biology and health sciences | Medical procedures | null |
The development of weapons such as machine guns and explosive shells during World War I created trench warfare, which led to a rapid increase in the number of mutilations to the faces and heads of soldiers, because the trenches mainly offered protection to the body. The surgeons, who were not prepared for these injuries, were even less prepared for such a large number of them and had to react quickly and intelligently to treat as many as possible. Facial injuries were hard to treat on the front line and, because of the sanitary conditions, many infections could occur. Sometimes stitches were made on a jagged wound without considering the amount of flesh that had been lost, so the resulting scars were hideous and disfigured soldiers. Some of the wounded had injuries so severe that stitches were not sufficient; some became blind, or were left with gaping holes instead of their nose. Harold Gillies, appalled by the number of new facial injuries and the lack of good surgical techniques, decided to dedicate an entire hospital to the reconstruction of facial injuries as fully as possible, taking the psychological dimension into account as well. Gillies introduced skin grafts to the treatment of soldiers, so they would be less horrified by looking at themselves in the mirror. | Plastic surgery | Wikipedia | 243 | 42048 | https://en.wikipedia.org/wiki/Plastic%20surgery | Biology and health sciences | Medical procedures | null |
It is the multidisciplinary approach to the treatment of facial lesions, bringing together plastic surgeons, dental surgeons, technicians, and specialized nurses, which made it possible to develop techniques for the reconstruction of injured faces. The dentist Auguste Charles Valadier, and then Gillies, identified the need to advance a specialty of maxillofacial surgery directly dedicated to the management of war wounds. Gillies developed a new technique using rotational and transposition flaps, as well as bone grafts from the ribs and tibia, to reconstruct facial defects caused by the weapons of the war. From his experiments with this technique he knew that he had to start by moving healthy tissue back to its normal position, and that he would then be able to fill the defect with tissue from another place on the soldier's body. One of the most successful techniques in skin grafting aimed at not completely severing the graft's connection to the body: a flap of skin was released and lifted near the wound. The flap of skin, still connected to the donor site, would then be swung over the site of the wound, maintaining the physical connection and ensuring that blood was supplied to the skin, increasing the chances of the skin graft being accepted by the body. Improvements in the treatment of infection at this time also meant that severe injuries had become survivable, largely thanks to Gillies's new techniques. Some soldiers arrived at Gillies's hospital without noses, chins, cheekbones, or even eyes; but for them, the most important trauma was psychological.
Development of modern techniques
The father of modern plastic surgery is generally considered to have been Sir Harold Gillies. A New Zealand otolaryngologist working in London, he developed many of the techniques of modern facial surgery in caring for soldiers with disfiguring facial injuries during the First World War. | Plastic surgery | Wikipedia | 382 | 42048 | https://en.wikipedia.org/wiki/Plastic%20surgery | Biology and health sciences | Medical procedures | null |
During World War I, he worked as a medical officer with the Royal Army Medical Corps. After working with the French oral and maxillofacial surgeon Hippolyte Morestin on skin grafts, he persuaded the army's chief surgeon, Arbuthnot-Lane, to establish a facial injury ward at the Cambridge Military Hospital, Aldershot, later upgraded to a new hospital for facial repairs at Sidcup in 1917. There Gillies and his colleagues developed many techniques of plastic surgery; more than 11,000 operations were performed on more than 5,000 men (mostly soldiers with facial injuries, usually from gunshot wounds). After the war, Gillies developed a private practice with Rainsford Mowlem, treating many famous patients, and travelled extensively to promote his advanced techniques worldwide.
In 1930, Gillies' cousin, Archibald McIndoe, joined the practice and became committed to plastic surgery. When World War II broke out, plastic surgery provision was largely divided between the different services of the armed forces, and Gillies and his team were split up. Gillies himself was sent to Rooksdown House near Basingstoke, which became the principal army plastic surgery unit; Tommy Kilner (who had worked with Gillies during the First World War, and who now has a surgical instrument named after him, the Kilner cheek retractor) went to Queen Mary's Hospital, Roehampton; and Mowlem went to St Albans. McIndoe, consultant to the RAF, moved to the recently rebuilt Queen Victoria Hospital in East Grinstead, Sussex, and founded a Centre for Plastic and Jaw Surgery. There, he treated very deep burns, and serious facial disfigurement, such as loss of eyelids, typical of those caused to aircrew by burning fuel.
McIndoe is often recognized for not only developing new techniques for treating badly burned faces and hands but also for recognising the importance of the rehabilitation of the casualties and particularly of social reintegration back into normal life. He disposed of the "convalescent uniforms" and let the patients use their service uniforms instead. With the help of two friends, Neville and Elaine Blond, he also convinced the locals to support the patients and invite them to their homes. McIndoe kept referring to them as "his boys" and the staff called him "The Boss" or "The Maestro". | Plastic surgery | Wikipedia | 489 | 42048 | https://en.wikipedia.org/wiki/Plastic%20surgery | Biology and health sciences | Medical procedures | null |
His other important work included the development of the walking-stalk skin graft, and the discovery that immersion in saline promoted healing as well as improving survival rates for patients with extensive burns—this was a serendipitous discovery drawn from observation of differential healing rates in pilots who had come down on land and in the sea. His radical, experimental treatments led to the formation of the Guinea Pig Club at Queen Victoria Hospital, Sussex. Among the better-known members of his "club" were Richard Hillary, Bill Foxley and Jimmy Edwards.
Sub-specialties
Plastic surgery is a broad field, and may be subdivided further. In the United States, plastic surgeons are board certified by the American Board of Plastic Surgery. Subdisciplines of plastic surgery may include:
Aesthetic surgery
Aesthetic surgery is a central component of plastic surgery and includes facial and body aesthetic surgery. Plastic surgeons use cosmetic surgical principles in all reconstructive surgical procedures as well as isolated operations to improve overall appearance.
Burn surgery
Burn surgery generally takes place in two phases. Acute burn surgery is the treatment immediately after a burn. Reconstructive burn surgery takes place after the burn wounds have healed.
Craniofacial surgery
Craniofacial surgery is divided into pediatric and adult craniofacial surgery. Pediatric craniofacial surgery mostly revolves around the treatment of congenital anomalies of the craniofacial skeleton and soft tissues, such as cleft lip and palate, microtia, craniosynostosis, and pediatric fractures. Adult craniofacial surgery deals mostly with reconstructive surgeries after trauma or cancer and revision surgeries along with orthognathic surgery and facial feminization surgery. Craniofacial surgery is an important part of all plastic surgery training programs. Further training and subspecialisation is obtained via a craniofacial fellowship. Craniofacial surgery is also practiced by maxillofacial surgeons.
Ethnic plastic surgery
Ethnic plastic surgery is plastic surgery performed to change ethnic attributes, often considered a way of "passing".
Hand surgery | Plastic surgery | Wikipedia | 429 | 42048 | https://en.wikipedia.org/wiki/Plastic%20surgery | Biology and health sciences | Medical procedures | null |
Hand surgery is concerned with acute injuries and chronic diseases of the hand and wrist, correction of congenital malformations of the upper extremities, and peripheral nerve problems (such as brachial plexus injuries or carpal tunnel syndrome). Hand surgery is an important part of training in plastic surgery, as well as microsurgery, which is necessary to replant an amputated extremity. The hand surgery field is also practiced by orthopedic surgeons and general surgeons. Scar tissue formation after surgery can be problematic on the delicate hand, causing loss of dexterity and digit function if severe enough. There have been cases of surgery on women's hands in order to correct perceived flaws to create the perfect engagement ring photo.
Microsurgery
Microsurgery is generally concerned with the reconstruction of missing tissues by transferring a piece of tissue to the reconstruction site and reconnecting blood vessels. Popular subspecialty areas are breast reconstruction, head and neck reconstruction, hand surgery/replantation, and brachial plexus surgery.
Pediatric plastic surgery
Children often face medical issues very different from the experiences of an adult patient. Many birth defects or syndromes present at birth are best treated in childhood, and pediatric plastic surgeons specialize in treating these conditions in children. Conditions commonly treated by pediatric plastic surgeons include craniofacial anomalies, Syndactyly (webbing of the fingers and toes), Polydactyly (excess fingers and toes at birth), cleft lip and palate, and congenital hand deformities.
Prison plastic surgery
Prison plastic surgery is plastic surgery performed on an incarcerated population in order to affect their recidivism rate, a practice instituted in the early 20th century that lasted until the mid-1990s. It is separate from surgery performed for medical need. | Plastic surgery | Wikipedia | 370 | 42048 | https://en.wikipedia.org/wiki/Plastic%20surgery | Biology and health sciences | Medical procedures | null |
Techniques and procedures
In plastic surgery, the transfer of skin tissue (skin grafting) is a very common procedure. Skin grafts can be derived from the recipient or donors:
Autografts are taken from the recipient. If natural tissue is absent or deficient, alternatives can be cultured sheets of epithelial cells grown in vitro, or synthetic compounds such as Integra, which consists of silicone and bovine tendon collagen with glycosaminoglycans.
Allografts are taken from a donor of the same species. Kidney transplants are an example of allograft transfer. Joseph Murray is credited for completing the first successful kidney transplantation in 1954.
Xenografts are taken from a donor of a different species.
Usually, good results are expected from plastic surgery that emphasizes careful planning of incisions so that they fall within the line of natural skin folds or lines, appropriate choice of wound closure, use of the best available suture materials, and early removal of exposed sutures so that the wound is held closed by buried sutures.
Cosmetic surgery procedures | Plastic surgery | Wikipedia | 224 | 42048 | https://en.wikipedia.org/wiki/Plastic%20surgery | Biology and health sciences | Medical procedures | null |
Cosmetic surgery is a voluntary or elective surgery that is performed on normal parts of the body with the only purpose of improving a person's appearance or removing signs of aging. Some cosmetic surgeries such as breast reduction are also functional and can help to relieve symptoms of discomfort such as back ache or neck ache. Cosmetic surgeries are also undertaken following breast cancer and mastectomy to recreate the natural breast shape which has been lost during the process of removing the cancer. In 2014, nearly 16 million cosmetic procedures were performed in the United States alone. The number of cosmetic procedures performed in the United States has almost doubled since the start of the century. 92% of cosmetic procedures were performed on women in 2014, up from 88% in 2001. 15.6 million cosmetic procedures were performed in 2020, with the five most common surgeries being rhinoplasties, blepharoplasties, rhytidectomies, liposuctions, and breast augmentation. Breast augmentation continues to be one of the top 5 cosmetic surgical procedures and has been since 2006. Silicone implants were used in 84% and saline implants in 16% of all breast augmentations in 2020. The American Society for Aesthetic Plastic Surgery looked at the statistics for 34 different cosmetic procedures. Nineteen of the procedures were surgical, such as rhinoplasties or rhytidectomies. The nonsurgical procedures included botox and laser hair removal. In 2010, their survey revealed that there were 9,336,814 total procedures in the United States. Of those, 1,622,290 procedures were surgical (p. 5). They also found that a large majority, 81%, of the procedures were done on Caucasian people (p. 12). | Plastic surgery | Wikipedia | 362 | 42048 | https://en.wikipedia.org/wiki/Plastic%20surgery | Biology and health sciences | Medical procedures | null |