Silver (color)

Silver or metallic gray is a color tone resembling gray that is a representation of the color of polished silver. The visual sensation usually associated with the metal silver is its metallic shine. This cannot be reproduced by a simple solid color, because the shiny effect is due to the material's brightness varying with the surface angle to the light source. In addition, there is no mechanism for showing metallic or fluorescent colors on a computer without resorting to rendering software that simulates the action of light on a shiny surface. Consequently, in art and in heraldry one would typically use a metallic paint that glitters like real silver; a matte gray color can also be used to represent silver.

History

The first recorded use of silver as a color name in English was in 1481. In heraldry, the word argent is used instead, derived from Latin argentum via Old French argent.

Silver

The web color silver has been a named color since version 3.2 of HTML, as one of the 16 basic VGA colors.

HTML example: <body bgcolor="silver">
CSS example: body { background-color: silver; }

Variations of silver

Silver (Crayola)

Crayola crayons include a color called silver, a pale tone of the silver color that has been a Crayola color since 1903. Crayola's silver is not a neutral grayscale color but a warm gray with a very slight tinge of orange-red.

Silver pink

The color name silver pink first came into use in 1948. The source of this color is the Plochere Color System, a color system formulated in 1948 that is widely used by interior designers.

Silver sand

The color name silver sand for this silver tone has been used since 2001, when it was promulgated as one of the colors on the Xona.com Color List.

Silver chalice

The color name silver chalice for this tone of silver has been in use since 2001, when it was promulgated as one of the colors on the Xona.com Color List.

Roman silver

Roman silver, a blue-gray tone of silver, is one of the colors on the Resene Color List.

Old silver
Old silver is a color formulated to resemble tarnished silver. The first recorded use of old silver as a color name in English was in 1905. The normalized color coordinates for old silver are identical to those of battleship gray.

Sonic silver

Sonic silver is a tone of silver included in Metallic FX crayons, specialty crayons formulated by Crayola in 2001.

Silver in nature

Plants

A silver birch is a tree in the birch family; its leaves are whitish silver on the underside. A silver fir is a valuable timber tree that originated in Europe. A silver maple is characterized by lacy, delicate leaves that are lighter grayish-green on the underside. These trees get their name from the shimmering effect the two-toned leaves give when fluttering in a breeze.

Animals

A silverfish is an insect which may eat paper or cloth. Many fish have a silver color. A silver fox is a "genetically determined phase of the common red fox in which the pelt is black tipped with white". A silverback gorilla is an adult male gorilla.

Silver in culture

Aphorisms

The expression "every cloud has a silver lining" is used to point out that something good can often come out of even a bad situation. The expression "silver-tongued" refers to a person who possesses the power of fluent, persuasive, eloquent, and witty speech. The expression "born with a silver spoon in his/her mouth" means someone is born into a wealthy or well-to-do family.

Astronomy

The Chinese name Silver River (銀河) is used throughout East Asia, including Korea and Japan, to denote the Milky Way Galaxy; an alternative name for the Milky Way in ancient China, especially in poems, is "Heavenly Han River" (天汉). In Japanese, "Silver River" (銀河 ginga) means galaxies in general, and the Milky Way is called the "Silver River System" (銀河系 gingakei) or the "River of Heaven" (天の川 Amanokawa or Amanogawa).

Film

The silver screen is a poetic name for a motion picture screen. The metaphor derives from the early 20th century, when all movies were filmed in black and white and some screens of the era used metallic silver as a reflecting agent. Science fiction films often show spaceship or starship crews wearing silver body suits. Silver City is a 2004 political satire and drama film written and directed by John Sayles.
Geography

Nevada is referred to as the Silver State because of the historically rich silver mines located there, such as the Comstock Lode.

Gerontology

The aging of the baby boomers has been called the "silver tsunami", although this phrase is controversial due to its ageist connotations. When someone 55 or older gets divorced, it is called a "silver divorce".

Heraldry

In heraldry there is no distinction between silver and white, both represented as "argent". In English heraldry, argent (silver) or white signified brightness, purity, virtue, or innocence.

Literature

The Silver Cord is a 1926 play by Sidney Howard about the emotional tie between a mother and a son, and the term "silver cord" is sometimes used to represent this tie. The Silver Sequence is a fantasy trilogy by Cliff McNish consisting of Silver Child, Silver City and Silver World. The Silver Chair is a book in C. S. Lewis's allegorical fantasy series The Chronicles of Narnia.

Marriage

The 25th wedding anniversary is called the silver anniversary; guests at a 25th wedding anniversary party are expected to bring gifts made of silver. By extension, the 25th anniversary of any significant event is called its Silver Jubilee.

Military

The Silver Star is the third-highest decoration that can be awarded by the U.S. military.

Music

Silver Apples was a psychedelic electronic music duo from New York City that formed in 1967. Silverhead was a British band, led by singer/actor Michael Des Barres, that was part of the glam rock scene of the early 1970s. Silver Convention was a popular disco group. Silverchair is a contemporary Australian rock band. "Silver Fox" is a song by RJD2 from his 2002 album Deadringer.

Panelology

The Silver Surfer is a popular comic book character. Silver Fox is a character in the Marvel Comics universe.

Parapsychology

Those who claim to have had out-of-body experiences sometimes report that they observed a silver cord connecting their astral body to their physical body.

Politics

The Silver Shirts was an American fascist organization during the 1930s.
Real estate

The Silverdome, a stadium in Pontiac, Michigan constructed in 1975 for $55,000,000 (about $220,000,000 in 2009 dollars), sold in 2009 for $583,000, symbolizing the collapse of real estate prices in the Detroit metropolitan area due to deindustrialization in the Rust Belt.

Religion

In Paganism, silver represents wisdom, intelligence, and memory. It has a feminine energy and is used to grow psychic ability.

Role playing games

In Dungeons & Dragons, the silver dragon is one of the metallic dragons.

School colors

Silver is one of the two school colors of Christopher Newport University.

Scouting

The Silver Wolf Award is the highest award made by The Scout Association "for services of the most exceptional character". The Silver Award is the highest award for Cadettes in Girl Scouts of the United States of America (GSUSA).

Sexuality

In the bandana code of the gay leather subculture, wearing a silver lamé bandana on the left means that one is a rock star, movie star, celebrity, or big-time groupie; wearing one on the right means that one is a groupie looking to have sex with one of the aforementioned types of people.

Sports

The Las Vegas Raiders of the National Football League and the San Antonio Spurs of the National Basketball Association use silver as one of their primary colors, along with black. The Detroit Lions football team uses silver along with Honolulu blue for its team logo and uniforms.
Messier 106
Messier 106 (also known as NGC 4258) is an intermediate spiral galaxy in the constellation Canes Venatici. It was discovered by Pierre Méchain in 1781. M106 is at a distance of about 22 to 25 million light-years from Earth. M106 contains an active nucleus classified as a Type 2 Seyfert, and the presence of a central supermassive black hole has been demonstrated from radio-wavelength observations of the rotation of a disk of molecular gas orbiting within the inner light-year around the black hole. NGC 4217 is a possible companion galaxy of Messier 106. Besides the two visible arms, it has two "anomalous arms" detectable with an X-ray telescope.

Characteristics

M106 has a water vapor megamaser (the equivalent of a laser operating at microwave rather than visible wavelengths, and on a galactic scale), seen in the 22-GHz line of ortho-H2O, which evidences dense and warm molecular gas. Water masers are useful for observing nuclear accretion disks in active galaxies. The water masers in M106 enabled the first direct measurement of the distance to a galaxy, thereby providing an independent anchor for the cosmic distance ladder. M106 has a slightly warped, thin, almost edge-on Keplerian disc on a subparsec scale, which surrounds the central mass concentration. It is one of the largest and brightest nearby galaxies, similar in size and luminosity to the Andromeda Galaxy. The mass of the supermassive black hole at the core has been measured from the kinematics of this maser disk. M106 has also played an important role in calibrating the cosmic distance ladder. Previously, Cepheid variables from other galaxies could not be used to measure distances, since they cover ranges of metallicities different from the Milky Way's. M106 contains Cepheid variables with metallicities similar to both the Milky Way's Cepheids and those of other galaxies. By measuring the distance of the Cepheids with metallicities similar to our galaxy's, astronomers are able to recalibrate the other Cepheids with different metallicities, a fundamental step in improving the quantification of distances to other galaxies in the universe.
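The black hole demonstration described above follows from Keplerian dynamics: for gas on a circular orbit, the enclosed mass is M = v^2 r / G. A minimal Python sketch, using illustrative placeholder values (not the published measurements for M106) for a maser spot's orbital speed and radius:

# Enclosed central mass from an assumed Keplerian maser orbit.
# The speed and radius below are illustrative, not measured values.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
PC = 3.086e16        # parsec [m]

v = 1.0e6            # assumed orbital speed of a maser spot: 1000 km/s
r = 0.1 * PC         # assumed orbital radius: 0.1 pc (subparsec scale)

# Circular Keplerian orbit: v^2 = G M / r  =>  M = v^2 r / G
M = v**2 * r / G
print(f"enclosed mass ~ {M / M_SUN:.1e} solar masses")  # ~2e7 M_sun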
Supernovae

Two supernovae have been observed in M106:
- SN 1981K (type II, mag. 17) was reported by E. Hummel and verified by Paul Wild by examining archival photos dated 3 November 1981.
- SN 2014bc (type II, mag. 14.8) was discovered by the PS1 Science Consortium 3Pi survey on 19 May 2014.
Merit order
The merit order is a way of ranking available sources of energy, especially electrical generation, based on ascending order of price (which may reflect the order of their short-run marginal costs of production) and sometimes pollution, together with the amount of energy that will be generated. In a centralized management scheme, the ranking is such that those with the lowest marginal costs are the first sources to be brought online to meet demand, and the plants with the highest marginal costs are the last to be brought online. Dispatching power generation in this way, known as economic dispatch, minimizes the cost of electricity production. Sometimes generating units must be started out of merit order, due to transmission congestion, system reliability or other reasons. In environmental dispatch, additional considerations concerning reduction of pollution further complicate the power dispatch problem. The basic constraints of the economic dispatch problem remain in place, but the model is optimized to minimize pollutant emission in addition to minimizing fuel costs and total power loss.

The effect of renewable energy on merit order

The high demand for electricity during peak demand pushes up the bidding price for electricity, and the often relatively inexpensive baseload power supply mix is supplemented by "peaking power plants", which produce electrical power at higher cost and therefore price their output higher. Increasing the supply of renewable energy tends to lower the average price per unit of electricity, because wind energy and solar energy have very low marginal costs: they do not have to pay for fuel, and the sole contributor to their marginal cost is operations and maintenance. With costs often reduced by feed-in tariff revenue, their electricity is as a result less costly on the spot market than that from coal or natural gas, and transmission companies typically buy from them first. Solar and wind electricity therefore substantially reduce the amount of highly priced peak electricity that transmission companies need to buy during the times when solar or wind power is available, reducing the overall cost. A study by the Fraunhofer Institute ISI found that this "merit order effect" had allowed solar power to reduce the price of electricity on the German energy exchange by 10% on average, and by as much as 40% in the early afternoon, in 2007; as more solar electricity is fed into the grid, peak prices may come down even further. By 2006, the "merit order effect" meant that the savings in electricity costs to German consumers, on average, more than offset the support payments paid by customers for renewable electricity generation.
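The ranking logic described above can be made concrete with a short sketch. The generator list and its costs below are hypothetical; the point is only the mechanism: stack capacity in ascending order of marginal cost until demand is met, with the last dispatched (marginal) unit setting the clearing price.

# Merit-order dispatch sketch with hypothetical generator data.
generators = [
    # (name, capacity_MW, marginal_cost_$_per_MWh) -- illustrative values
    ("wind",     300,  0.0),
    ("nuclear",  800, 10.0),
    ("coal",     600, 35.0),
    ("gas_ccgt", 500, 50.0),
    ("gas_peak", 200, 90.0),
]

def dispatch(demand_mw):
    remaining = demand_mw
    schedule, price = [], None
    for name, cap, cost in sorted(generators, key=lambda g: g[2]):
        if remaining <= 0:
            break
        out = min(cap, remaining)
        schedule.append((name, out))
        price = cost            # the marginal unit sets the clearing price
        remaining -= out
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return schedule, price

schedule, price = dispatch(1500)
print(schedule)                 # [('wind', 300), ('nuclear', 800), ('coal', 400)]
print(f"clearing price: ${price}/MWh")   # coal is marginal -> $35/MWh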
A 2013 study estimated the merit order effect of both wind and photovoltaic electricity generation in Germany between the years 2008 and 2012. For each additional GWh of renewables fed into the grid, the price of electricity in the day-ahead market was reduced by 0.11–0.13¢/kWh. The total merit order effect of wind and photovoltaics ranged from 0.5¢/kWh in 2010 to more than 1.1¢/kWh in 2012. The near-zero marginal cost of wind and solar energy does not, however, translate into zero marginal cost of peak load electricity in a competitive open electricity market system, as wind and solar supply alone often cannot be dispatched to meet peak demand without incurring marginal transmission costs and potentially the costs of batteries. The purpose of the merit order dispatching paradigm was to enable the lowest net cost electricity to be dispatched first, thus minimising overall electricity system costs to consumers. Intermittent wind and solar are sometimes able to supply this economic function. If peak wind (or solar) supply and peak demand coincide in both time and quantity, the price reduction is larger. On the other hand, solar energy tends to be most abundant at noon, whereas peak demand is in the late afternoon in warm climates, leading to the so-called duck curve.

A 2008 study by the Fraunhofer Institute ISI in Karlsruhe, Germany found that wind power saves German consumers €5 billion a year. It is estimated to have lowered prices in European countries with high wind generation by between €3 and €23/MWh. On the other hand, renewable energy in Germany has increased the price of electricity: consumers there now pay €52.8/MWh more solely for renewable energy (see German Renewable Energy Sources Act), and the average price of electricity in Germany has risen to 26¢/kWh. Increasing electrical grid costs for new transmission, market trading and storage associated with wind and solar are not included in the marginal cost of power sources; instead, grid costs are combined with source costs at the consumer end.

Economic dispatch
Economic dispatch is the short-term determination of the optimal output of a number of electricity generation facilities to meet the system load at the lowest possible cost, subject to transmission and operational constraints. The economic dispatch problem can be solved by specialized computer software which should satisfy the operational and system constraints of the available resources and corresponding transmission capabilities. In the US Energy Policy Act of 2005, the term is defined as "the operation of generation facilities to produce energy at the lowest cost to reliably serve consumers, recognising any operational limits of generation and transmission facilities". The main idea is that, in order to satisfy the load at a minimum total cost, the set of generators with the lowest marginal costs must be used first, with the marginal cost of the final generator needed to meet load setting the system marginal cost. This is the cost of delivering one additional MWh of energy onto the system. Due to transmission constraints, this cost can vary at different locations within the power grid; these different cost levels are identified as "locational marginal prices" (LMPs). The historic methodology for economic dispatch was developed to manage fossil fuel burning power plants, relying on calculations involving the input/output characteristics of power stations.

Basic mathematical formulation

The following is based on an analytical methodology following Biggar and Hesamzadeh (2014) and Kirschen (2010). The economic dispatch problem can be thought of as maximising the economic welfare of a power network whilst meeting system constraints. For a network with n buses (nodes), suppose that G_k is the rate of generation and L_k is the rate of consumption at bus k. Suppose, further, that C_k(G_k) is the cost function of producing power (i.e., the rate at which the generator incurs costs when producing at rate G_k), and that V_k(L_k) is the rate at which the load receives value or benefits (expressed in currency units) when consuming at rate L_k. The total welfare is then

W = \sum_{k=1}^{n} V_k(L_k) - \sum_{k=1}^{n} C_k(G_k)

The economic dispatch task is to find the combination of rates of production and consumption (G_k, L_k) which maximise this expression subject to a number of constraints. The first constraint, which is necessary to interpret the constraints that follow, is that the net injection at each bus is equal to the total production at that bus less the total consumption:

I_k = G_k - L_k

The power balance constraint requires that the sum of the net injections at all buses must be equal to the power losses in the branches of the network:

\sum_{k=1}^{n} I_k = L_{loss}(I_1, I_2, \ldots, I_{n-1})
The power losses L_{loss} depend on the flows in the branches and thus on the net injections, as shown in the above equation. However, the loss function cannot depend on the injections at all of the buses, as this would give an over-determined system. Thus one bus is chosen as the slack bus and is omitted from the variables of the function L_{loss}. The choice of slack bus is entirely arbitrary; here bus n is chosen.

The second constraint involves capacity constraints on the flow on network lines. For a system with m lines this constraint is modeled as:

F_l \leq F_l^{max} \qquad l = 1, \ldots, m

where F_l is the flow on branch l, and F_l^{max} is the maximum value that this flow is allowed to take. Note that the net injection at the slack bus is not included in this equation, for the same reasons as above. These equations can now be combined to build the Lagrangian of the optimization problem:

\mathcal{L} = \sum_{k=1}^{n} \left[ V_k(L_k) - C_k(G_k) \right] + \pi \left[ L_{loss}(I_1, \ldots, I_{n-1}) - \sum_{k=1}^{n} I_k \right] + \sum_{l=1}^{m} \mu_l \left( F_l^{max} - F_l \right)

where π and μ are the Lagrangian multipliers of the constraints. The conditions for optimality are then:

\frac{\partial \mathcal{L}}{\partial G_k} = 0 \qquad \frac{\partial \mathcal{L}}{\partial L_k} = 0 \qquad \frac{\partial \mathcal{L}}{\partial I_k} = 0 \qquad \mu_l \left( F_l^{max} - F_l \right) = 0, \quad \mu_l \geq 0

where the last condition is needed to handle the inequality constraint on line capacity. Solving these equations is computationally difficult, as they are nonlinear and implicitly involve the solution of the power flow equations. The analysis can be simplified using a linearised model called a DC power flow.

There is a special case which is found in much of the literature: the case in which demand is assumed to be perfectly inelastic (i.e., unresponsive to price). This is equivalent to assuming that V_k(L_k) = M \min(L_k, \bar{L}_k) for some very large value of M and inelastic demand \bar{L}_k. Under this assumption, the total economic welfare is maximised by choosing L_k = \bar{L}_k. The economic dispatch task then reduces to:

\min \sum_{k=1}^{n} C_k(G_k)

subject to the power balance constraint with L_k = \bar{L}_k fixed, and the other constraints set out above.

Environmental dispatch

In environmental dispatch, additional considerations concerning reduction of pollution further complicate the power dispatch problem. The basic constraints of the economic dispatch problem remain in place, but the model is optimized to minimize pollutant emission in addition to minimizing fuel costs and total power loss. Due to the added complexity, a number of algorithms have been employed to optimize this environmental/economic dispatch problem. Notably, a modified bees algorithm implementing chaotic modeling principles was successfully applied not only in silico, but also on a physical model system of generators. Other methods used to address the economic emission dispatch problem include Particle Swarm Optimization (PSO) and neural networks.
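Returning to the economic dispatch formulation, a minimal numerical illustration of the inelastic-demand special case above: minimise total generation cost for assumed quadratic cost curves, neglecting losses and line limits. At the optimum, the marginal costs of the dispatched units equalise at the system marginal cost. All coefficients and the demand level are hypothetical.

# Economic dispatch sketch: min sum C_k(G_k) s.t. sum G_k = demand.
import numpy as np
from scipy.optimize import minimize

# C_k(G_k) = a_k * G_k^2 + b_k * G_k, illustrative coefficients
a = np.array([0.01, 0.02, 0.03])
b = np.array([10.0, 15.0, 20.0])
demand = 400.0   # MW, assumed fixed (perfectly inelastic)

total_cost = lambda G: np.sum(a * G**2 + b * G)
constraints = [{"type": "eq", "fun": lambda G: np.sum(G) - demand}]
bounds = [(0.0, 400.0)] * 3      # per-unit capacity limits (assumed)

res = minimize(total_cost, x0=np.full(3, demand / 3),
               bounds=bounds, constraints=constraints)
print("dispatch:", res.x.round(1))            # ~[350, 50, 0]
# Marginal costs dC_k/dG_k = 2 a_k G_k + b_k equalise across dispatched
# units; this common value is the system marginal cost (~$17/MWh here).
print("marginal costs:", (2 * a * res.x + b).round(2))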
Another notable algorithm combination is used in a real-time emissions tool called the Locational Emissions Estimation Methodology (LEEM), which links electric power consumption and the resulting pollutant emissions. LEEM estimates changes in emissions associated with incremental changes in power demand, derived from the locational marginal price (LMP) information from the independent system operators (ISOs) and emissions data from the US Environmental Protection Agency (EPA). LEEM was developed at Wayne State University as part of a project, begun in 2010, aimed at optimizing water transmission systems in Detroit, Michigan, and has since found wider application as a load profile management tool that can help reduce generation costs and emissions.
Radiant flux
In radiometry, radiant flux or radiant power is the radiant energy emitted, reflected, transmitted, or received per unit time, and spectral flux or spectral power is the radiant flux per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiant flux is the watt (W), one joule per second (J/s), while that of spectral flux in frequency is the watt per hertz (W/Hz) and that of spectral flux in wavelength is the watt per metre (W/m), commonly the watt per nanometre (W/nm).

Mathematical definitions

Radiant flux

Radiant flux, denoted \Phi_e ('e' for "energetic", to avoid confusion with photometric quantities), is defined as

\Phi_e = \frac{dQ_e}{dt} = \oint_\Sigma \mathbf{S} \cdot \hat{\mathbf{n}} \, dA

where t is the time; Q_e is the radiant energy passing out of a closed surface \Sigma; \mathbf{S} is the Poynting vector, representing the current density of radiant energy; \hat{\mathbf{n}} is the normal vector of a point on \Sigma; and A represents the area of \Sigma. The rate of energy flow through the surface fluctuates at the frequency of the radiation, but radiation detectors only respond to the average rate of flow. This is represented by replacing the Poynting vector with the time average of its norm, giving

\Phi_e \approx \oint_\Sigma \langle |\mathbf{S}| \rangle \cos\alpha \, dA

where \langle \cdot \rangle = \frac{1}{T} \int_0^T (\cdot) \, dt is the time average over a time period T, and \alpha is the angle between \hat{\mathbf{n}} and \langle \mathbf{S} \rangle.

Spectral flux

Spectral flux in frequency, denoted \Phi_{e,\nu}, is defined as

\Phi_{e,\nu} = \frac{\partial \Phi_e}{\partial \nu}

where \nu is the frequency. Spectral flux in wavelength, denoted \Phi_{e,\lambda}, is defined as

\Phi_{e,\lambda} = \frac{\partial \Phi_e}{\partial \lambda}

where \lambda is the wavelength.
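To connect the two definitions, the sketch below numerically integrates an assumed spectral flux in wavelength (a hypothetical Gaussian emission line, in W/nm) to recover the total radiant flux in watts:

# Radiant flux as the wavelength integral of spectral flux (trapezoid rule).
import numpy as np

wavelengths_nm = np.linspace(400.0, 700.0, 3001)        # wavelength grid [nm]
# Hypothetical spectral flux Phi_e,lambda peaking at 550 nm, in W/nm:
spectral_flux = 2.0e-3 * np.exp(-((wavelengths_nm - 550.0) / 30.0) ** 2)

# Phi_e = integral of Phi_e,lambda over lambda (trapezoid rule by hand)
dl = np.diff(wavelengths_nm)
radiant_flux_W = np.sum(0.5 * (spectral_flux[1:] + spectral_flux[:-1]) * dl)
print(f"Phi_e ~ {radiant_flux_W:.3e} W")   # analytic check: 2e-3*30*sqrt(pi)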
Van Deemter equation
The van Deemter equation in chromatography, named for Jan van Deemter, relates the variance per unit length of a separation column to the linear mobile phase velocity by considering physical, kinetic, and thermodynamic properties of a separation. These properties include pathways within the column, diffusion (axial and longitudinal), and mass transfer kinetics between stationary and mobile phases. In liquid chromatography, the mobile phase velocity is taken as the exit velocity, that is, the ratio of the flow rate in ml/second to the cross-sectional area of the column-exit flow path. For a packed column, the cross-sectional area of the column-exit flow path is usually taken as 0.6 times the cross-sectional area of the column. Alternatively, the linear velocity can be taken as the ratio of the column length to the dead time. If the mobile phase is a gas, then a pressure correction must be applied. The variance per unit length of the column is taken as the ratio of the column length to the column efficiency in theoretical plates. The van Deemter equation is a hyperbolic function that predicts that there is an optimum velocity at which there will be the minimum variance per unit column length and, thence, a maximum efficiency. The van Deemter equation was the result of the first application of rate theory to the chromatography elution process.

Van Deemter equation

The van Deemter equation relates the height equivalent to a theoretical plate (HETP) of a chromatographic column to the various flow and kinetic parameters which cause peak broadening, as follows:

HETP = A + \frac{B}{u} + C \cdot u

where
- HETP = a measure of the resolving power of the column [m]
- A = eddy-diffusion parameter, related to channeling through a non-ideal packing [m]
- B = diffusion coefficient of the eluting particles in the longitudinal direction, resulting in dispersion [m2 s−1]
- C = resistance to mass transfer coefficient of the analyte between mobile and stationary phase [s]
- u = speed (linear velocity) [m s−1]

In open tubular capillaries, the A term will be zero, as the lack of packing means channeling does not occur. In packed columns, however, multiple distinct routes ("channels") exist through the column packing, which results in band spreading. In the latter case, A will not be zero.
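A short sketch evaluating the equation over a range of linear velocities, with illustrative parameter values (not from any particular column):

# Evaluating HETP = A + B/u + C*u; parameters are assumed, for illustration.
import numpy as np

A = 1.0e-3   # eddy diffusion [m], assumed
B = 2.0e-5   # longitudinal diffusion [m^2/s], assumed
C = 1.0e-3   # mass-transfer resistance [s], assumed

u = np.linspace(0.02, 0.50, 7)     # linear velocity [m/s]
hetp = A + B / u + C * u           # plate height [m]
for ui, h in zip(u, hetp):
    print(f"u = {ui:4.2f} m/s  ->  HETP = {h * 1e3:.2f} mm")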
The form of the van Deemter equation is such that HETP achieves a minimum value at a particular flow velocity. At this flow rate, the resolving power of the column is maximized, although in practice the elution time is likely to be impractical. Differentiating the van Deemter equation with respect to velocity, setting the resulting expression equal to zero, and solving for the optimum velocity yields the following:

u_{opt} = \sqrt{\frac{B}{C}}

Plate count

The plate height is given as:

HETP = \frac{L}{N}

with L the column length and N the number of theoretical plates. The plate count can be estimated from a chromatogram by analysis of the retention time t_R for each component and its standard deviation \sigma_t as a measure of peak width, provided that the elution curve represents a Gaussian curve. In this case the plate count is given by:

N = \left( \frac{t_R}{\sigma_t} \right)^2

By using the more practical peak width at half height W_{1/2}, the equation is:

N = 5.54 \left( \frac{t_R}{W_{1/2}} \right)^2

or, with the width at the base of the peak W_{base}:

N = 16 \left( \frac{t_R}{W_{base}} \right)^2

Expanded van Deemter

The van Deemter equation can be further expanded to:

H = 2 \lambda d_p + \frac{2 \gamma D_m}{u} + \frac{\omega (d_p \text{ or } d_c)^2}{D_m} u + \frac{R d_f^2}{D_s} u

where:
- H is plate height
- λ is particle shape (with regard to the packing)
- dp is particle diameter
- γ, ω, and R are constants
- Dm is the diffusion coefficient of the mobile phase
- dc is the capillary diameter
- df is the film thickness
- Ds is the diffusion coefficient of the stationary phase
- u is the linear velocity

Rodrigues equation

The Rodrigues equation, named for Alírio Rodrigues, is an extension of the van Deemter equation used to describe the efficiency of a bed of permeable (large-pore) particles. The equation is:

HETP = A + \frac{B}{u} + C \, f(\lambda) \, u

where f(\lambda) is an enhancement factor accounting for intraparticle convection and \lambda is the intraparticular Péclet number.
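A sketch of the optimum-velocity and plate-count relations above, again with assumed values (the A, B, C parameters and the chromatogram peak are hypothetical):

# Optimum velocity and plate count; all input values are illustrative.
import math

A, B, C = 1.0e-3, 2.0e-5, 1.0e-3     # assumed van Deemter parameters
u_opt = math.sqrt(B / C)             # from d(HETP)/du = -B/u^2 + C = 0
hetp_min = A + B / u_opt + C * u_opt
print(f"u_opt = {u_opt:.3f} m/s, minimum HETP = {hetp_min * 1e3:.2f} mm")

# Plate count from a hypothetical Gaussian peak, N = 5.54 (t_R / W_1/2)^2
t_R = 120.0      # retention time [s], assumed
w_half = 2.5     # peak width at half height [s], assumed
N = 5.54 * (t_R / w_half) ** 2
L = 2.0          # column length [m], assumed
print(f"N = {N:.0f} plates, HETP = L/N = {L / N * 1e3:.3f} mm")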
Satellite galaxy
A satellite galaxy is a smaller companion galaxy that travels on bound orbits within the gravitational potential of a more massive and luminous host galaxy (also known as the primary galaxy). Satellite galaxies and their constituents are bound to their host galaxy, in the same way that planets within the Solar System are gravitationally bound to the Sun. While most satellite galaxies are dwarf galaxies, satellite galaxies of large galaxy clusters can be much more massive. The Milky Way is orbited by about fifty satellite galaxies, the largest of which is the Large Magellanic Cloud. Moreover, satellite galaxies are not the only astronomical objects that are gravitationally bound to larger host galaxies (see globular clusters). For this reason, astronomers have defined galaxies as gravitationally bound collections of stars that exhibit properties that cannot be explained by a combination of baryonic matter (i.e. ordinary matter) and Newton's laws of gravity. For example, measurements of the orbital speed of stars and gas within spiral galaxies result in a velocity curve that deviates significantly from the theoretical prediction. This observation has motivated various explanations such as the theory of dark matter and modifications to Newtonian dynamics. Therefore, despite also being satellites of host galaxies, globular clusters should not be mistaken for satellite galaxies. Satellite galaxies are not only more extended and diffuse compared to globular clusters, but are also enshrouded in massive dark matter halos that are thought to have been endowed to them during the formation process.
Satellite galaxies generally lead tumultuous lives due to their chaotic interactions with both the larger host galaxy and other satellites. For example, the host galaxy is capable of disrupting the orbiting satellites via tidal and ram pressure stripping. These environmental effects can remove large amounts of cold gas from satellites (i.e. the fuel for star formation), and this can result in satellites becoming quiescent in the sense that they have ceased to form stars. Moreover, satellites can collide with their host galaxy, resulting in a minor merger (i.e. a merger event between galaxies of significantly different masses), or merge with one another, resulting in a major merger (i.e. a merger event between galaxies of comparable masses). Galaxies are mostly composed of empty space, interstellar gas and dust, so galaxy mergers do not necessarily involve collisions between objects from one galaxy and objects from the other; however, these events generally result in much more massive galaxies. Consequently, astronomers seek to constrain the rate at which both minor and major mergers occur, to better understand the formation of gigantic structures of gravitationally bound conglomerations of galaxies, such as galactic groups and clusters.

History
Early 20th century

Prior to the 20th century, the notion that galaxies existed beyond the Milky Way was not well established. In fact, the idea was so controversial at the time that it led to what is now heralded as the "Shapley-Curtis Great Debate", aptly named after the astronomers Harlow Shapley and Heber Doust Curtis, who debated the nature of "nebulae" and the size of the Milky Way at the National Academy of Sciences on April 26, 1920. Shapley argued that the Milky Way was the entire universe (spanning over 100,000 light-years, or 30 kiloparsecs, across) and that all of the observed "nebulae" (currently known as galaxies) resided within this region. Curtis, on the other hand, argued that the Milky Way was much smaller and that the observed nebulae were in fact galaxies similar to the Milky Way. This debate was not settled until late 1923, when the astronomer Edwin Hubble measured the distance to M31 (currently known as the Andromeda Galaxy) using Cepheid variable stars. By measuring the period of these stars, Hubble was able to estimate their intrinsic luminosity, and upon combining this with their measured apparent magnitude he estimated a distance of 300 kpc, an order of magnitude larger than Shapley's estimated size of the universe. This measurement verified not only that the universe was much larger than previously expected, but also that the observed nebulae were actually distant galaxies with a wide range of morphologies (see Hubble sequence).

Modern times

Despite Hubble's discovery that the universe is teeming with galaxies, a majority of the satellite galaxies of the Milky Way and the Local Group remained undetected until the advent of modern astronomical surveys such as the Sloan Digital Sky Survey (SDSS) and the Dark Energy Survey (DES). In particular, the Milky Way is currently known to host 59 satellite galaxies (see satellite galaxies of the Milky Way), two of which, the Large Magellanic Cloud and the Small Magellanic Cloud, have been observable in the Southern Hemisphere with the unaided eye since ancient times. Nevertheless, modern cosmological theories of galaxy formation and evolution predict a much larger number of satellite galaxies than what is observed (see missing satellites problem). However, more recent high-resolution simulations have demonstrated that the current number of observed satellites poses no threat to the prevalent theory of galaxy formation.
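Hubble's chain of reasoning (period, to intrinsic luminosity, to distance) can be sketched directly. The period-luminosity coefficients below are one approximate published V-band calibration and should be treated as assumed values; the example Cepheid is hypothetical.

# Cepheid distance via the Leavitt law and the distance modulus.
import math

def cepheid_distance_pc(period_days: float, apparent_mag: float) -> float:
    # Leavitt law (assumed calibration): absolute magnitude from period
    M = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5 log10(d_pc) - 5
    return 10 ** ((apparent_mag - M + 5.0) / 5.0)

# Hypothetical Cepheid: 30-day period, apparent magnitude 20
d = cepheid_distance_pc(30.0, 20.0)
print(f"distance ~ {d / 1e6:.1f} Mpc")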
Motivations to study satellite galaxies

Spectroscopic, photometric and kinematic observations of satellite galaxies have yielded a wealth of information that has been used to study, among other things, the formation and evolution of galaxies, the environmental effects that enhance and diminish the rate of star formation within galaxies, and the distribution of dark matter within the dark matter halo. As a result, satellite galaxies serve as a testing ground for predictions made by cosmological models.

Classification of satellite galaxies

As mentioned above, satellite galaxies are generally categorized as dwarf galaxies and therefore follow a similar Hubble classification scheme as their hosts, with the minor addition of a lowercase "d" in front of the various standard types to designate the dwarf galaxy status. These types include dwarf irregular (dI), dwarf spheroidal (dSph), dwarf elliptical (dE) and dwarf spiral (dS). However, out of all of these types, it is believed that dwarf spirals are not satellites but rather dwarf galaxies that are only found in the field.

Dwarf irregular satellite galaxies

Dwarf irregular satellite galaxies are characterized by their chaotic and asymmetric appearance, low gas fractions, high star formation rate and low metallicity. Three of the closest dwarf irregular satellites of the Milky Way include the Small Magellanic Cloud, Canis Major Dwarf, and the newly discovered Antlia 2.

Dwarf elliptical satellite galaxies

Dwarf elliptical satellite galaxies are characterized by their oval appearance on the sky, disordered motion of constituent stars, moderate to low metallicity, low gas fractions and old stellar population. Dwarf elliptical satellite galaxies in the Local Group include NGC 147, NGC 185, and NGC 205, which are satellites of our neighboring Andromeda Galaxy.

Dwarf spheroidal satellite galaxies

Dwarf spheroidal satellite galaxies are characterized by their diffuse appearance, low surface brightness, high mass-to-light ratio (i.e. dark matter dominated), low metallicity, low gas fractions and old stellar population. Moreover, dwarf spheroidals make up the largest population of known satellite galaxies of the Milky Way. A few of these satellites include Hercules, Pisces II and Leo IV, which are named after the constellations in which they are found.
Transitional types

As a result of minor mergers and environmental effects, some dwarf galaxies are classified as intermediate or transitional type satellite galaxies. For example, Phoenix and LGS3 are classified as intermediate types that appear to be transitioning from dwarf irregulars to dwarf spheroidals. Furthermore, the Large Magellanic Cloud is considered to be in the process of transitioning from a dwarf spiral to a dwarf irregular.

Formation of satellite galaxies

According to the standard model of cosmology (known as the ΛCDM model), the formation of satellite galaxies is intricately connected to the observed large-scale structure of the Universe. Specifically, the ΛCDM model is based on the premise that the observed large-scale structure is the result of a bottom-up hierarchical process that began after the recombination epoch, in which electrically neutral hydrogen atoms were formed as a result of free electrons and protons binding together. As the ratio of neutral hydrogen to free protons and electrons grew, so did fluctuations in the baryonic matter density. These fluctuations rapidly grew to the point that they became comparable to dark matter density fluctuations. Moreover, the smaller mass fluctuations grew to nonlinearity, became virialized (i.e. reached gravitational equilibrium), and were then hierarchically clustered within successively larger bound systems. The gas within these bound systems condensed and rapidly cooled inside cold dark matter halos that steadily increased in size by coalescing together and accumulating additional gas via a process known as accretion. The largest bound objects formed from this process are known as superclusters, such as the Virgo Supercluster, which contain smaller clusters of galaxies that are themselves surrounded by even smaller dwarf galaxies. Furthermore, in this model dwarf galaxies are considered to be the fundamental building blocks that give rise to more massive galaxies, and the satellites that are observed around these galaxies are the dwarfs that have yet to be consumed by their host.
Accumulation of mass in dark matter halos

A crude yet useful method to determine how dark matter halos progressively gain mass through mergers of less massive halos is the excursion set formalism, also known as the extended Press-Schechter formalism (EPS). Among other things, the EPS formalism can be used to infer the fraction of mass that originated from collapsed objects of a specific mass at an earlier time, by applying the statistics of Markovian random walks to the trajectories of mass elements in (S, δ)-space, where S and δ represent the mass variance and the overdensity, respectively. In particular, the EPS formalism is founded on the ansatz that "the fraction of trajectories with a first upcrossing of the barrier δ = δ_c(t) at S > S_1 is equal to the mass fraction at time t that is incorporated in halos with masses M < M_1". Consequently, this ansatz ensures that each trajectory will upcross the barrier given some arbitrarily large S, and as a result it guarantees that each mass element will ultimately become part of a halo.

Furthermore, the fraction of mass that originated from collapsed objects of a specific mass at an earlier time can be used to determine the average number of progenitors at time t_1 within the mass interval (M_1, M_1 + dM_1) that have merged to produce a halo of mass M_2 at time t_2. This is accomplished by considering a spherical region of mass M with a corresponding mass variance S(M) and linear overdensity δ_0(t) = δ_c / D(t), where D(t) is the linear growth rate, normalized to unity at the present time t_0, and δ_c is the critical overdensity at which the initial spherical region has collapsed to form a virialized object. Mathematically, the progenitor mass function is expressed as:

\frac{dN}{dM_1}(M_1, t_1 \mid M_2, t_2) = \frac{M_2}{M_1} \, f_{PS}(S_1, \delta_1 \mid S_2, \delta_2) \left| \frac{dS_1}{dM_1} \right|

where \delta_i = \delta_c / D(t_i) and

f_{PS}(S_1, \delta_1 \mid S_2, \delta_2) = \frac{1}{\sqrt{2\pi}} \, \frac{\delta_1 - \delta_2}{(S_1 - S_2)^{3/2}} \, \exp\left[ -\frac{(\delta_1 - \delta_2)^2}{2 (S_1 - S_2)} \right]

is the Press-Schechter multiplicity function that describes the fraction of mass associated with halos in a range dS_1. Various comparisons of the progenitor mass function with numerical simulations have concluded that good agreement between theory and simulation is obtained only when the barrier difference δ_1 − δ_2 (i.e. a small step back in time) is small; otherwise the mass fraction in high-mass progenitors is significantly underestimated, which can be attributed to crude assumptions such as assuming a perfectly spherical collapse model and using a linear density field, as opposed to a non-linear density field, to characterize collapsed structures. Nevertheless, the utility of the EPS formalism is that it provides a computationally friendly approach for determining properties of dark matter halos.
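A minimal sketch evaluating the Press-Schechter multiplicity function quoted above for assumed values of the mass variances and barriers (all numbers illustrative):

# Conditional (progenitor) multiplicity function of the EPS formalism.
import math

def f_ps(S1: float, d1: float, S2: float, d2: float) -> float:
    """Fraction of mass of an (S2, d2) halo in progenitors, per unit S1.
    Requires S1 > S2 (smaller progenitor mass) and d1 > d2 (earlier time)."""
    dS, dd = S1 - S2, d1 - d2
    return (1.0 / math.sqrt(2.0 * math.pi)) * dd / dS**1.5 \
           * math.exp(-dd**2 / (2.0 * dS))

# Assumed values: descendant halo today (d2 = delta_c ~ 1.686), progenitor
# scale S1 at an earlier time with a higher effective barrier d1.
print(f_ps(S1=2.0, d1=2.2, S2=1.0, d2=1.686))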
Halo merger rate

Another utility of the EPS formalism is that it can be used to determine the rate at which a halo of initial mass M merges with a halo of mass between M and M + ΔM; this rate is obtained from the progenitor mass function in the limit of a small time step. In general the change in mass, ΔM, is the sum of a multitude of minor mergers. Nevertheless, given an infinitesimally small time interval, it is reasonable to consider the change in mass to be due to a single merger event in which M transitions to M + ΔM.

Galactic cannibalism (minor mergers)

Throughout their lifespans, satellite galaxies orbiting in the dark matter halo experience dynamical friction and consequently descend deeper into the gravitational potential of their host as a result of orbital decay. Throughout the course of this descent, stars in the outer region of the satellite are steadily stripped away due to tidal forces from the host galaxy. This process, an example of a minor merger, continues until the satellite is completely disrupted and consumed by the host galaxy. Evidence of this destructive process can be observed in stellar debris streams around distant galaxies.

Orbital decay rate

As satellites orbit their host and interact with each other, they progressively lose small amounts of kinetic energy and angular momentum due to dynamical friction. Consequently, the distance between the host and the satellite progressively decreases in order to conserve angular momentum. This process continues until the satellite ultimately merges with the host galaxy. Furthermore, if we assume that the host is a singular isothermal sphere (SIS) and the satellite is an SIS that is sharply truncated at the radius at which it begins to accelerate towards the host (known as the Jacobi radius), then the time that it takes for dynamical friction to result in a minor merger can be approximated as follows:

t_{df} \approx \frac{1}{\ln \Lambda} \left( \frac{\sigma_h}{\sigma_s} \right)^3 \frac{r_i}{\sigma_h}

where r_i is the initial radius at time t = 0, σ_h is the velocity dispersion of the host galaxy, σ_s is the velocity dispersion of the satellite, and ln Λ is the Coulomb logarithm, defined as

\Lambda = \frac{b_{max}}{\max\left( r_h, \, G M_{sat} / v_{typ}^2 \right)}

with b_max, r_h and v_typ respectively representing the maximum impact parameter, the half-mass radius and the typical relative velocity. Moreover, both the half-mass radius and the typical relative velocity can be rewritten in terms of the satellite's truncation radius and velocity dispersion. Using the Faber-Jackson relation, the velocity dispersions of satellites and their hosts can be estimated individually from their observed luminosities. Therefore, using the equation above, it is possible to estimate the time that it takes for a satellite galaxy to be consumed by the host galaxy.
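Combining the decay-time scaling with a Faber-Jackson style luminosity calibration gives an order-of-magnitude estimator. The normalisation of the L ∝ σ^4 relation, the Coulomb logarithm, and the example luminosities below are all assumed values:

# Order-of-magnitude dynamical friction decay time from luminosities.
import math

KPC_KM = 3.086e16       # km per kpc

def sigma_from_lum(L_solar: float) -> float:
    """Velocity dispersion [km/s] from an assumed L ~ sigma^4 relation,
    normalised so that L = 1e10 L_sun gives sigma = 150 km/s."""
    return 150.0 * (L_solar / 1e10) ** 0.25

def t_df_gyr(L_host: float, L_sat: float, r_i_kpc: float,
             ln_lambda: float = 3.0) -> float:
    sigma_h = sigma_from_lum(L_host)    # [km/s]
    sigma_s = sigma_from_lum(L_sat)     # [km/s]
    # t_df ~ (sigma_h / sigma_s)^3 * (r_i / sigma_h) / ln(Lambda)
    t_sec = (sigma_h / sigma_s) ** 3 * (r_i_kpc * KPC_KM / sigma_h) / ln_lambda
    return t_sec / 3.15e16              # seconds -> Gyr

# Hypothetical host (1e10 L_sun) and satellite (1e8 L_sun) at 50 kpc:
print(f"t_df ~ {t_df_gyr(1e10, 1e8, 50.0):.1f} Gyr")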
Minor merger driven star formation

In 1978, pioneering work involving the measurement of the colors of merger remnants by the astronomers Beatrice Tinsley and Richard Larson gave rise to the notion that mergers enhance star formation. Their observations showed that an anomalous blue color was associated with the merger remnants. Prior to this discovery, astronomers had already classified stars (see stellar classification), and it was known that young, massive stars were bluer due to their light radiating at shorter wavelengths. Furthermore, it was also known that these stars live short lives due to their rapid consumption of fuel to remain in hydrostatic equilibrium. Therefore, the observation that merger remnants were associated with large populations of young, massive stars suggested that mergers induced rapid star formation (see starburst galaxy). Since this discovery was made, various observations have verified that mergers do indeed induce vigorous star formation. Despite major mergers being far more effective at driving star formation than minor mergers, minor mergers are significantly more common, so their cumulative effect over cosmic time is postulated to also contribute heavily to bursts of star formation.

Minor mergers and the origins of thick disk components

Observations of edge-on galaxies suggest the universal presence of a thin disk, a thick disk and a halo component in galaxies. Despite the apparent ubiquity of these components, there is still ongoing research to determine whether the thick disk and thin disk are truly distinct components. Nevertheless, many theories have been proposed to explain the origin of the thick disk component, and among these is one that involves minor mergers. In particular, it is speculated that the preexisting thin disk component of a host galaxy is heated during a minor merger, and consequently the thin disk expands to form a thicker disk component.
Night heron
The night herons are medium-sized herons, 58–65 cm long, in the genera Nycticorax, Nyctanassa, and Gorsachius. The genus name Nycticorax derives from the Greek for "night raven" and refers to the largely nocturnal feeding habits of this group of birds and to the croaking, crow-like call (almost like a barking sound) of the best-known species, the black-crowned night heron. In Europe and the western United States, "night heron" is often used to refer to the black-crowned night heron, since it is the only member of the genus in those regions. The black-crowned night heron was named the official bird of the city of Oakland, California.

Adults are short-necked, short-legged, and stout herons with a primarily brown or grey plumage and, in most, a black crown. Young birds are brown, flecked with white. At least some of the extinct Mascarene taxa appear to have retained this juvenile plumage in adult birds. Night herons nest alone or in colonies, on platforms of sticks in a group of trees, or on the ground in protected locations such as islands or reedbeds, laying three to eight eggs.

Night herons stand still at the water's edge and wait to ambush prey, mainly at night. They primarily eat small fish, crustaceans, frogs, aquatic insects, and small mammals. During the day, they rest in trees or bushes. There are seven extant species. The genus Nycticorax has suffered more than any other Pelecaniformes genus from extinction, mainly because of its capability to colonize small, predator-free oceanic islands and a tendency to evolve towards flightlessness. Night herons in Europe breed mainly in southern and southeastern Europe and migrate across the Sahara to winter in central and west Africa.

Genera

Nyctanassa
Nycticorax
Gorsachius
Hadley cell
The Hadley cell, also known as the Hadley circulation, is a global-scale tropical atmospheric circulation that features air rising near the equator, flowing poleward near the tropopause at a height of roughly 12–15 km above the Earth's surface, cooling and descending in the subtropics at around 25 degrees latitude, and then returning equatorward near the surface. It is a thermally direct circulation within the troposphere that emerges due to differences in insolation and heating between the tropics and the subtropics. On a yearly average, the circulation is characterized by a circulation cell on each side of the equator. The Southern Hemisphere Hadley cell is slightly stronger on average than its northern counterpart, extending slightly beyond the equator into the Northern Hemisphere. During the summer and winter months, the Hadley circulation is dominated by a single, cross-equatorial cell with air rising in the summer hemisphere and sinking in the winter hemisphere. Analogous circulations may occur in extraterrestrial atmospheres, such as on Venus and Mars.

Global climate is greatly influenced by the structure and behavior of the Hadley circulation. The prevailing trade winds are a manifestation of the lower branches of the Hadley circulation, converging air and moisture in the tropics to form the Intertropical Convergence Zone (ITCZ), where the Earth's heaviest rains are located. Shifts in the ITCZ associated with the seasonal variability of the Hadley circulation cause monsoons. The sinking branches of the Hadley cells give rise to the oceanic subtropical ridges and suppress rainfall; many of the Earth's deserts and arid regions are located in the subtropics, coincident with the position of the sinking branches. The Hadley circulation is also a key mechanism for the meridional transport of heat, angular momentum, and moisture, contributing to the subtropical jet stream, the moist tropics, and the maintenance of a global thermal equilibrium.
The Hadley circulation is named after George Hadley, who in 1735 postulated the existence of hemisphere-spanning circulation cells driven by differences in heating to explain the trade winds. Other scientists later developed similar arguments or critiqued Hadley's qualitative theory, providing more rigorous explanations and formalism. The existence of a broad meridional circulation of the type suggested by Hadley was confirmed in the mid-20th century, once routine observations of the upper troposphere became available via radiosondes. Observations and climate modelling indicate that the Hadley circulation has expanded poleward since at least the 1980s as a result of climate change, with an accompanying but less certain intensification of the circulation; these changes have been associated with trends in regional weather patterns. Model projections suggest that the circulation will widen and weaken throughout the 21st century due to climate change.

Mechanism and characteristics

The Hadley circulation describes the broad, thermally direct, meridional overturning of air within the troposphere over the low latitudes. Within the global atmospheric circulation, the meridional flow of air averaged along lines of latitude is organized into circulations of rising and sinking motions coupled with the equatorward or poleward movement of air, called meridional cells. These include the prominent "Hadley cells" centered over the tropics and the weaker "Ferrel cells" centered over the mid-latitudes. The Hadley cells result from the contrast in insolation between the warm equatorial regions and the cooler subtropical regions. The uneven heating of Earth's surface results in regions of rising and descending air. Over the course of a year, the equatorial regions absorb more radiation from the Sun than they radiate away, while at higher latitudes the Earth emits more radiation than it receives from the Sun. Without a mechanism to exchange heat meridionally, the equatorial regions would warm and the higher latitudes would cool progressively, in disequilibrium. The broad ascent and descent of air results in a pressure gradient force that drives the Hadley circulation and other large-scale flows in both the atmosphere and the ocean, distributing heat and maintaining a global long-term and subseasonal thermal equilibrium.
The Hadley circulation covers almost half of the Earth's surface area, spanning from roughly the Tropic of Cancer to the Tropic of Capricorn. Vertically, the circulation occupies the entire depth of the troposphere. The Hadley cells comprising the circulation consist of air carried equatorward by the trade winds in the lower troposphere that ascends when heated near the equator, along with air moving poleward in the upper troposphere. Air that is moved into the subtropics cools and then sinks before returning equatorward to the tropics; the position of the sinking air associated with the Hadley cell is often used as a measure of the meridional width of the global tropics. The equatorward return of air and the strong influence of heating make the Hadley cell a thermally driven and enclosed circulation. Due to the buoyant rise of air near the equator and the sinking of air at higher latitudes, a pressure gradient develops near the surface, with lower pressures near the equator and higher pressures in the subtropics; this provides the motive force for the equatorward flow in the lower troposphere. However, the release of latent heat associated with condensation in the tropics also relaxes the decrease of pressure with height, resulting in higher pressures aloft in the tropics compared to the subtropics at a given height in the upper troposphere; this pressure gradient is stronger than its near-surface counterpart and provides the motive force for the poleward flow in the upper troposphere. Hadley cells are most commonly identified using the mass-weighted, zonally averaged stream function of meridional winds, but they can also be identified by other measurable or derivable physical parameters, such as the velocity potential or the vertical component of wind at a particular pressure level. Given the latitude \varphi and the pressure level p, the Stokes stream function \Psi characterizing the Hadley circulation is given by

\Psi(\varphi, p) = \frac{2 \pi a \cos\varphi}{g} \int_0^p [\bar{v}] \, dp'
where a is the radius of Earth, g is the acceleration due to the gravity of Earth, and [\bar{v}] is the zonally averaged meridional wind at the prescribed latitude and pressure level. The value of \Psi gives the integrated meridional mass flux between the specified pressure level and the top of the Earth's atmosphere, with positive values indicating northward mass transport. The strength of the Hadley cells can be quantified from \Psi, including the maximum and minimum values or averages of the stream function, both overall and at various pressure levels. Hadley cell intensity can also be assessed using other physical quantities, such as the velocity potential, the vertical component of wind, the transport of water vapor, or the total energy of the circulation; a minimal numerical sketch of the stream-function diagnostic follows the list below.

Structure and components

The structure of the Hadley circulation and its components can be inferred by graphing zonal and temporal averages of global winds throughout the troposphere. At shorter timescales, individual weather systems perturb the wind flow. Although the structure of the Hadley circulation varies seasonally, when winds are averaged annually (from an Eulerian perspective) the Hadley circulation is roughly symmetric and composed of two similar Hadley cells, one in each of the northern and southern hemispheres, sharing a common region of ascending air near the equator; the Southern Hemisphere Hadley cell, however, is stronger. When averaging the motions of air parcels as opposed to the winds at fixed locations (a Lagrangian perspective), the Hadley circulation manifests as a broader circulation that extends farther poleward. Each Hadley cell can be described by four primary branches of airflow within the tropics:

- An equatorward, lower branch within the planetary boundary layer
- An ascending branch near the equator
- A poleward, upper branch in the upper troposphere
- A descending branch in the subtropics
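The promised sketch of the stream-function diagnostic defined above: integrate a zonally averaged meridional wind in pressure. The wind field is a synthetic toy profile, not reanalysis data.

# Stokes stream function Psi(phi, p) = (2 pi a cos(phi)/g) * int_0^p [v] dp'
import numpy as np

a = 6.371e6          # Earth radius [m]
g = 9.81             # gravitational acceleration [m/s^2]

lat = np.deg2rad(np.linspace(-90, 90, 73))      # latitude grid [rad]
p = np.linspace(0.0, 1000e2, 51)                # pressure grid [Pa], 0 at top

# Synthetic [v]: poleward aloft, equatorward near the surface (toy cell)
P, PHI = np.meshgrid(p, lat)
v_bar = 2.0 * np.sin(2 * PHI) * np.cos(np.pi * P / p[-1])

# Cumulative pressure integral from the top of the atmosphere downward
dp = np.gradient(p)
psi = (2 * np.pi * a * np.cos(PHI) / g) * np.cumsum(v_bar * dp, axis=1)
print(psi.shape, f"max |Psi| ~ {np.abs(psi).max():.2e} kg/s")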
The trade winds in the low latitudes of both Earth's northern and southern hemispheres converge air towards the equator, producing a belt of low atmospheric pressure exhibiting abundant storms and heavy rainfall known as the Intertropical Convergence Zone (ITCZ). This equatorward movement of air near the Earth's surface constitutes the lower branch of the Hadley cell. The position of the ITCZ is influenced by the warmth of sea surface temperatures (SSTs) near the equator and the strength of cross-equatorial pressure gradients. In general, the ITCZ is located near the equator or is offset towards the summer hemisphere, where the warmest SSTs are located. On an annual average, the rising branch of the Hadley circulation is slightly offset towards the Northern Hemisphere, away from the equator. Due to the Coriolis force, the trade winds deflect opposite the direction of Earth's rotation, blowing partially westward rather than directly equatorward in both hemispheres. The lower branch accrues moisture resulting from evaporation across Earth's tropical oceans. A warmer environment and converging winds force the moistened air to ascend near the equator, resulting in the rising branch of the Hadley cell. The upward motion is further enhanced by the release of latent heat, as the uplift of moist air results in an equatorial band of condensation and precipitation. The Hadley circulation's upward branch largely occurs in thunderstorms occupying only around one percent of the surface area of the tropics. The transport of heat in the Hadley circulation's ascending branch is accomplished most efficiently by hot towers: cumulonimbus clouds bearing strong updrafts that do not mix in the drier air commonly found in the middle troposphere, and thus allow the movement of air from the highly moist tropical lower troposphere into the upper troposphere. Approximately 1,500–5,000 hot towers daily near the ITCZ region are required to sustain the vertical heat transport exhibited by the Hadley circulation.
Hadley cell
Wikipedia
401
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
Air ascends into the upper troposphere, reaching heights of roughly 12–15 km, after which it diverges outward from the ITCZ and towards the poles. The top of the Hadley cell is set by the height of the tropopause, as the stable stratosphere above prevents the continued ascent of air. Air rising from the low latitudes has higher absolute angular momentum about Earth's axis of rotation. The distance between the atmosphere and Earth's axis decreases poleward; to conserve angular momentum, poleward-moving air parcels must accelerate eastward. The Coriolis effect limits the poleward extent of the Hadley circulation, accelerating air in the direction of the Earth's rotation and forming a zonally directed jet stream at each Hadley cell's poleward boundary rather than continuing the poleward flow of air. Considering only the conservation of angular momentum, a parcel of air at rest along the equator would accelerate to a zonal speed of about 134 m s−1 by the time it reached 30° latitude (a reconstruction of this calculation is given below). However, small-scale turbulence along the parcel's poleward trek and large-scale eddies in the mid-latitudes dissipate angular momentum. The jet associated with the Southern Hemisphere Hadley cell is stronger than its northern counterpart due to the greater intensity of the Southern Hemisphere cell. Radiative cooling of air parcels at the cooler, higher latitudes causes the poleward-moving air to eventually descend. When the movement of air is averaged annually, the descending branch of the Hadley cell is located roughly over the 25th parallel north and the 25th parallel south. The moisture in the subtropics is then partly advected poleward by eddies and partly advected equatorward by the lower branch of the Hadley cell, which later brings it towards the ITCZ. Although the zonally averaged Hadley cell is organized into four main branches, these branches are aggregations of more concentrated air flows and regions of mass transport. Several theories and physical models have attempted to explain the latitudinal width of the Hadley cell. The Held–Hou model provides one theoretical constraint on the meridional extent of the Hadley cells. By assuming a simplified atmosphere composed of a lower layer subject to friction from the Earth's surface and an upper layer free from friction, the model predicts that the Hadley circulation would be restricted to a narrow band about the equator if parcels experienced no net heating within the circulation. According to the Held–Hou model, the latitude of the Hadley cell's poleward edge scales according to the relation given below.
Hadley cell
Wikipedia
507
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
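The worked calculation and the scaling relation referenced above appear to have been lost in text extraction; the following reconstruction uses standard notation and should be read as an assumption rather than the article's original typesetting. Conservation of absolute angular momentum for a parcel starting from rest at the equator, M = (Ω a cos φ + u) a cos φ = Ω a², gives

u(φ) = Ω a sin²φ / cos φ ≈ 134 m s−1 at φ = 30°,

using Ω ≈ 7.29 × 10⁻⁵ s−1 and a ≈ 6.37 × 10⁶ m. The Held–Hou scaling for the latitude φ_H of the poleward edge, following Held and Hou (1980), takes the form

φ_H ~ [ 5 g H Δθ / (3 Ω² a² θ₀) ]^(1/2),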
where Δθ is the difference in potential temperature between the equator and the pole in radiative equilibrium, H is the height of the tropopause, Ω is the Earth's rotation rate, and θ₀ is a reference potential temperature (a and g again denote Earth's radius and gravitational acceleration). Other compatible models posit that the width of the Hadley cell may scale with other physical parameters, such as the vertically averaged Brunt–Väisälä frequency in the troposphere or the growth rate of baroclinic waves shed by the cell.

Seasonality and variability
The Hadley circulation varies considerably with seasonal changes. Around the equinoxes of spring and autumn, the Hadley circulation takes the form of two relatively weak Hadley cells, one in each hemisphere, sharing a common region of ascent over the ITCZ and moving air aloft towards each cell's respective hemisphere. Closer to the solstices, however, the Hadley circulation transitions into a stronger, more nearly single cross-equatorial Hadley cell, with air rising in the summer hemisphere and broadly descending in the winter hemisphere. The transition between the two-cell and single-cell configurations is abrupt, and during most of the year the Hadley circulation is characterized by a single dominant Hadley cell that transports air across the equator. In this configuration, the ascending branch is located in the tropical latitudes of the warmer summer hemisphere and the descending branch is positioned in the subtropics of the cooler winter hemisphere. Two cells are still present, one in each hemisphere, though the winter hemisphere's cell becomes much more prominent while the summer hemisphere's cell becomes displaced poleward. The intensification of the winter hemisphere's cell is associated with a steepening of gradients in geopotential height, leading to an acceleration of trade winds and stronger meridional flows. The presence of continents relaxes temperature gradients in the summer hemisphere, accentuating the contrast between the hemispheric Hadley cells. Reanalysis data covering 1979–2001 indicated that the dominant Hadley cell in boreal summer extended from 13°S to 31°N on average. In both boreal and austral winters, the Indian Ocean and the western Pacific Ocean contribute most to the rising and sinking motions in the zonally averaged Hadley circulation. However, vertical flows over Africa and the Americas are more marked in boreal winter.
Hadley cell
Wikipedia
463
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
At longer, interannual timescales, variations in the Hadley circulation are associated with the El Niño–Southern Oscillation (ENSO), which affects the positioning of the ascending branch; the response of the circulation to ENSO is non-linear, with a more marked response to El Niño events than to La Niña events. During El Niño, the Hadley circulation strengthens due to the increased warmth of the upper troposphere over the tropical Pacific and the resultant intensification of poleward flow. These changes are not uniform, however: during the same events, the regional Hadley circulations over the western Pacific and the Atlantic are weakened. During the Atlantic Niño, the circulation over the Atlantic is intensified. The Atlantic circulation is also enhanced during periods when the North Atlantic oscillation is strongly positive. The variation in the seasonally averaged and annually averaged Hadley circulation from year to year is largely accounted for by two juxtaposed modes of oscillation: an equatorially asymmetric mode characterized by a single cell straddling the equator and an equatorially symmetric mode characterized by two cells, one on either side of the equator.

Energetics and transport
Hadley cell
Wikipedia
230
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
The Hadley cell is an important mechanism by which moisture and energy are transported both between the tropics and subtropics and between the northern and southern hemispheres. However, it is not an efficient transporter of energy due to the opposing flows of its lower and upper branches, with the lower branch transporting sensible and latent heat equatorward and the upper branch transporting potential energy poleward. The resulting net poleward energy transport represents around 10 percent of the overall energy transport involved in the Hadley cell. The descending branch of the Hadley cell generates clear skies and a surplus of evaporation relative to precipitation in the subtropics. The lower branch of the Hadley circulation accomplishes most of the transport of the excess water vapor accumulated in the subtropical atmosphere towards the equatorial region. The greater strength of the Southern Hemisphere Hadley cell relative to its northern counterpart leads to a small net energy transport from the northern to the southern hemisphere; as a result, the transport of energy at the equator is directed southward on average, with an annual net transport of around 0.1 PW. In contrast to the higher latitudes, where eddies are the dominant mechanism for transporting energy poleward, the meridional flows imposed by the Hadley circulation are the primary mechanism for poleward energy transport in the tropics. As a thermally direct circulation, the Hadley circulation converts available potential energy into the kinetic energy of horizontal winds. Based on data from January 1979 to December 2010, the Hadley circulation has an average power output of 198 TW, with maxima in January and August and minima in May and October. Although the stability of the tropopause largely limits the movement of air from the troposphere to the stratosphere, some tropospheric air penetrates into the stratosphere via the Hadley cells.
Hadley cell
Wikipedia
362
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
The Hadley circulation may be idealized as a heat engine converting heat energy into mechanical energy. As air moves towards the equator near the Earth's surface, it accumulates entropy from the surface, either by direct heating or by fluxes of sensible or latent heat. In the ascending branch of a Hadley cell, the ascent of air is approximately adiabatic with respect to the surrounding environment. However, as parcels of air move poleward in the cell's upper branch, they lose entropy by radiating heat to space at infrared wavelengths and descend in response. This radiative cooling occurs at a rate of at least 60 W m−2 and may exceed 100 W m−2 in winter. The heat accumulated along the equatorward branch of the circulation is greater than the heat lost in the upper poleward branch; the excess heat is converted into the mechanical energy that drives the movement of air. This difference in heating also results in the Hadley circulation transporting heat poleward, as the air supplying the Hadley cell's upper branch has greater moist static energy than the air supplying the cell's lower branch. Within the Earth's atmosphere, the timescale over which air parcels lose heat by radiative cooling and the timescale over which air moves along the Hadley circulation are of similar orders of magnitude, allowing the Hadley circulation to transport heat despite the cooling in the circulation's upper branch. Air with high potential temperature is ultimately moved poleward in the upper troposphere while air with lower potential temperature is brought equatorward near the surface. As a result, the Hadley circulation is one mechanism by which the disequilibrium produced by the uneven heating of the Earth is brought towards equilibrium. When considered as a heat engine, the thermodynamic efficiency of the Hadley circulation averaged around 2.6 percent between 1979 and 2010, with small seasonal variability.
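The quoted figures can be tied together with a back-of-the-envelope check (an illustrative calculation, not one stated in the original): for a heat engine with mechanical output W and heat input Q, the efficiency is η = W/Q, so taking W ≈ 198 TW and η ≈ 0.026 implies a heat throughput of

Q ≈ 198 TW / 0.026 ≈ 7.6 PW.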
Hadley cell
Wikipedia
375
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
The Hadley circulation also transports planetary angular momentum poleward due to Earth's rotation. Because the trade winds are directed opposite the Earth's rotation, eastward angular momentum is transferred to the atmosphere via frictional interaction between the winds and topography. The Hadley cell then transfers this angular momentum through its upward and poleward branches. The poleward branch accelerates and is deflected east in both the northern and southern hemispheres due to the Coriolis force and the conservation of angular momentum, resulting in a zonal jet stream above the descending branch of the Hadley cell. The formation of such a jet implies the existence of a thermal wind balance, supported by the amplification of temperature gradients in the jet's vicinity resulting from the Hadley circulation's poleward heat advection; a compact statement of this balance is given below. The subtropical jet in the upper troposphere coincides with where the Hadley cell meets the Ferrel cell. The strong wind shear accompanying the jet presents a significant source of baroclinic instability from which waves grow; the growth of these waves transfers heat and momentum poleward. Atmospheric eddies extract westerly angular momentum from the Hadley cell and transport it downward, resulting in the mid-latitude westerly winds.

Formulation and discovery
The broad structure and mechanism of the Hadley circulation (comprising convective cells that move air in response to temperature differences, in a manner influenced by the Earth's rotation) was first proposed by Edmund Halley in 1685 and George Hadley in 1735. Hadley had sought to explain the physical mechanism for the trade winds and the westerlies; the Hadley circulation and the Hadley cells are named in honor of his pioneering work. Although Hadley's ideas invoked physical concepts that would not be formalized until well after his death, his model was largely qualitative and without mathematical rigor. By the 1920s, most meteorologists had come to regard Hadley's formulation as a simplification of more complicated atmospheric processes. The Hadley circulation may have been the first attempt to explain the global distribution of winds in Earth's atmosphere using physical processes. However, Hadley's hypothesis could not be verified without observations of winds in the upper atmosphere. Data collected by routine radiosondes beginning in the mid-20th century confirmed the existence of the Hadley circulation.

Early explanations of the trade winds
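The thermal wind balance invoked in the jet discussion above can be stated compactly. As a standard relation (reconstructed here, not quoted from the original), the vertical shear of the zonal geostrophic wind u_g is tied to the meridional temperature gradient:

∂u_g/∂z ≈ −(g / (f T)) ∂T/∂y,

where f is the Coriolis parameter and T the temperature. The Hadley cell's poleward heat transport steepens ∂T/∂y near the cell's edge, and the balance then requires westerlies that strengthen with height, consistent with the subtropical jet described above.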
Hadley cell
Wikipedia
458
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
In the 15th and 16th centuries, observations of maritime weather conditions were of considerable importance to maritime transport. Compilations of these observations showed consistent weather conditions from year to year and significant seasonal variability. The prevalence of dry conditions and weak winds at around 30° latitude and of the equatorward trade winds closer to the equator, mirrored in the northern and southern hemispheres, was apparent by 1600. Early efforts by scientists to explain aspects of global wind patterns often focused on the trade winds, as the steadiness of the winds was assumed to portend a simple physical mechanism. Galileo Galilei proposed that the trade winds resulted from the atmosphere lagging behind the Earth's faster tangential rotation speed in the low latitudes, resulting in westward trades directed opposite of Earth's rotation. In 1685, the English polymath Edmund Halley proposed at a debate organized by the Royal Society that the trade winds resulted from east-to-west temperature differences produced over the course of a day within the tropics. In Halley's model, as the Earth rotated, the location of maximum heating from the Sun moved west across the Earth's surface; this would cause air to rise, and, by conservation of mass, surrounding air would move in to take its place, generating the trade winds. Halley's hypothesis was criticized by his friends, who noted that his model would lead to changing wind directions throughout the course of a day rather than the steady trade winds. Halley conceded in personal correspondence with John Wallis that "Your questioning my hypothesis for solving the Trade Winds makes me less confident of the truth thereof". Nonetheless, Halley's formulation was incorporated into Chambers's Encyclopaedia and La Grande Encyclopédie, becoming the most widely known explanation for the trade winds until the early 19th century. Though his explanation of the trade winds was incorrect, Halley correctly predicted that the surface trade winds should be accompanied by an opposing flow aloft, following mass conservation.

George Hadley's explanation
Hadley cell
Wikipedia
406
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
Unsatisfied with preceding explanations for the trade winds, George Hadley proposed an alternative mechanism in 1735. Hadley's hypothesis was published in the paper "On the Cause of the General Trade Winds" in the Philosophical Transactions of the Royal Society. Like Halley's, Hadley's explanation viewed the trade winds as a manifestation of air moving to take the place of rising warm air; however, he placed the region of rising air that prompted this flow along the lower latitudes. Understanding that the tangential rotation speed of the Earth was fastest at the equator and slowed farther poleward, Hadley conjectured that as air with lower momentum from higher latitudes moved equatorward to replace the rising air, it would conserve its momentum and thus curve west. By the same token, the rising air with higher momentum would spread poleward, curving east and then sinking as it cooled, producing westerlies in the mid-latitudes. Hadley's explanation implied the existence of hemisphere-spanning circulation cells in the northern and southern hemispheres extending from the equator to the poles, though he relied on an idealization of Earth's atmosphere that lacked seasonality or the asymmetries of the oceans and continents. His model also predicted easterly trade winds far more rapid than those observed, though he argued that the action of surface friction over the course of a few days slowed the air to the observed wind speeds. Colin Maclaurin extended Hadley's model to the ocean in 1740, asserting that meridional ocean currents were subject to similar westward or eastward deflections.
Hadley cell
Wikipedia
307
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
Hadley was not widely associated with his theory, owing to conflation with his older brother John Hadley and with Halley; his theory also failed to gain much traction in the scientific community for over a century due to its unintuitive explanation and the lack of validating observations. Several other natural philosophers independently advanced explanations for the global distribution of winds soon after Hadley's 1735 proposal. In 1746, Jean le Rond d'Alembert provided a mathematical formulation for global winds but disregarded solar heating, attributing the winds to the gravitational effects of the Sun and Moon. Immanuel Kant, also unsatisfied with Halley's explanation for the trade winds, published an explanation for the trade winds and westerlies in 1756 with reasoning similar to Hadley's. In the latter part of the 18th century, Pierre-Simon Laplace developed a set of equations establishing a direct influence of Earth's rotation on wind direction. The Swiss scientist Jean-André Deluc published an explanation of the trade winds in 1787 similar to Hadley's hypothesis, connecting differential heating and the Earth's rotation with the direction of the winds. The English chemist John Dalton was the first to clearly credit the explanation of the trade winds to George Hadley, mentioning Hadley's work in his 1793 book Meteorological Observations and Essays. In 1837, the Philosophical Magazine published a new theory of wind currents developed by Heinrich Wilhelm Dove without reference to Hadley but similarly explaining the direction of the trade winds as being influenced by the Earth's rotation. In response, Dalton later wrote a letter to the editor of the journal promoting Hadley's work. Dove subsequently credited Hadley so frequently that the overarching theory became known as the "Hadley–Dove principle", popularizing Hadley's explanation for the trade winds in Germany and Great Britain.

Critique of Hadley's explanation
Hadley cell
Wikipedia
369
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
The work of Gustave Coriolis, William Ferrel, Jean Bernard Foucault, and Henrik Mohn in the 19th century helped establish the Coriolis force as the mechanism for the deflection of winds due to Earth's rotation, emphasizing the conservation of angular momentum in directing flows rather than the conservation of linear momentum as Hadley suggested; Hadley's assumption led to an underestimation of the deflection by a factor of two. The acceptance of the Coriolis force in shaping global winds led to debate among German atmospheric scientists beginning in the 1870s over the completeness and validity of Hadley's explanation, which narrowly explained the behavior of initially meridional motions. Hadley's use of surface friction to explain why the trade winds were much slower than his theory would predict was seen as a key weakness in his ideas. The southwesterly motions observed in cirrus clouds at around 30°N further discounted Hadley's theory as their movement was far slower than the theory would predict when accounting for the conservation of angular momentum. In 1899, William Morris Davis, a professor of physical geography at Harvard University, gave a speech at the Royal Meteorological Society criticizing Hadley's theory for its failure to account for the transition of an initially unbalanced flow to geostrophic balance. Davis and other meteorologists in the 20th century recognized that the movement of air parcels along Hadley's envisaged circulation was sustained by a constant interplay between the pressure gradient and Coriolis forces rather than the conservation of angular momentum alone. Ultimately, while the atmospheric science community considered the general ideas of Hadley's principle valid, his explanation was viewed as a simplification of more complex physical processes.
Hadley cell
Wikipedia
347
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
Hadley's model of a global atmospheric circulation characterized by hemisphere-wide circulation cells was also challenged by weather observations showing a zone of high pressure in the subtropics and a belt of low pressure at around 60° latitude. This pressure distribution would imply a poleward flow near the surface in the mid-latitudes rather than the equatorward flow implied by Hadley's envisioned cells. Ferrel and James Thomson later reconciled the pressure pattern with Hadley's model by proposing a circulation cell limited to lower altitudes in the mid-latitudes and nestled within the broader, hemisphere-wide Hadley cells. Carl-Gustaf Rossby proposed in 1947 that the Hadley circulation was limited to the tropics, forming one part of a dynamically driven, multi-celled meridional flow; Rossby's model resembled a similar three-celled model developed by Ferrel in 1860.

Direct observation
The three-celled model of the global atmospheric circulation (with Hadley's conceived circulation forming its tropical component) had been widely accepted by the meteorological community by the early 20th century. However, the Hadley cell's existence had been validated only by weather observations near the surface, and its predictions of winds in the upper troposphere remained untested. The routine sampling of the upper troposphere by radiosondes that emerged in the mid-20th century confirmed the existence of meridional overturning cells in the atmosphere.

Influence on climate
Hadley cell
Wikipedia
296
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
The Hadley circulation is one of the most important influences on global climate and planetary habitability, as well as an important transporter of angular momentum, heat, and water vapor. Hadley cells flatten the temperature gradient between the equator and the poles, making the extratropics milder. The global precipitation pattern of high precipitation in the tropics and a lack of precipitation at higher latitudes is a consequence of the positioning of the rising and sinking branches of the Hadley cells, respectively. Near the equator, the ascent of humid air results in the heaviest precipitation on Earth. The periodic movement of the ITCZ, and thus the seasonal variation of the Hadley circulation's rising branches, produces the world's monsoons. The descending motion of air associated with the sinking branch produces surface divergence, consistent with the prominence of subtropical high-pressure areas. These semipermanent regions of high pressure lie primarily over the ocean between 20° and 40° latitude. Arid conditions are associated with the descending branches of the Hadley circulation, with many of the Earth's deserts and semiarid or arid regions underlying the sinking branches of the Hadley circulation. The cloudy marine boundary layer common in the subtropics may be seeded by cloud condensation nuclei exported out of the tropics by the Hadley circulation.

Effects of climate change
Natural variability
Paleoclimate reconstructions of trade winds and rainfall patterns suggest that the Hadley circulation has changed in response to natural climate variability. During Heinrich events within the last 100,000 years, the Northern Hemisphere Hadley cell strengthened while the Southern Hemisphere Hadley cell weakened. Variation in insolation during the mid- to late Holocene resulted in a southward migration of the Northern Hemisphere Hadley cell's ascending and descending branches, closer to their present-day positions. Tree rings from the mid-latitudes of the Northern Hemisphere suggest that the historical position of the Hadley cell's branches has also shifted in response to shorter oscillations, with the Northern Hemisphere descending branch moving southward during positive phases of the El Niño–Southern Oscillation and the Pacific decadal oscillation and northward during the corresponding negative phases. The Hadley cells were displaced southward between 1400 and 1850, concurrent with drought in parts of the Northern Hemisphere.

Hadley cell expansion and intensity changes
Observed trends
Hadley cell
Wikipedia
456
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
According to the IPCC Sixth Assessment Report (AR6), the Hadley circulation has likely expanded since at least the 1980s in response to climate change, with medium confidence in an accompanying intensification of the circulation. An expansion of the overall circulation poleward by about 0.1°–0.5° latitude per decade since the 1980s is largely accounted for by the poleward shift of the Northern Hemisphere Hadley cell, which in atmospheric reanalyses has shown a more marked expansion since 1992. However, the AR6 also reported medium confidence in the expansion of the Northern Hemisphere Hadley cell being within the range of internal variability. In contrast, the AR6 assessed that the Southern Hemisphere Hadley cell's poleward expansion was likely due to anthropogenic influence; this finding was based on CMIP5 and CMIP6 climate models. Studies have produced a large range of estimates for the rate of widening of the tropics due to the use of different metrics; estimates based on upper-tropospheric properties tend to yield a wider range of values. The degree to which the circulation has expanded varies by season, with trends in summer and autumn being larger and statistically significant in both hemispheres. The widening of the Hadley circulation has also resulted in a likely widening of the ITCZ since the 1970s. Reanalyses also suggest that the summer and autumn Hadley cells in both hemispheres have widened and that the global Hadley circulation has intensified since 1979, with a more pronounced intensification in the Northern Hemisphere. Between 1979 and 2010, the power generated by the global Hadley circulation increased by an average of 0.54 TW per year, consistent with an increased input of energy into the circulation by warming SSTs over the tropical oceans. (For comparison, the Hadley circulation's overall power ranges from 0.5 TW to 218 TW throughout the year in the Northern Hemisphere and from 32 TW to 204 TW in the Southern Hemisphere.) In contrast to reanalyses, CMIP5 climate models depict a weakening of the Hadley circulation since 1979. The magnitude of long-term changes in the circulation's strength is thus uncertain due to the influence of large interannual variability and the poor representation of the distribution of latent heat release in reanalyses.
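To put the quoted trend in perspective (a back-of-the-envelope calculation, not a figure from the source): a sustained increase of 0.54 TW per year over the roughly 32 years from 1979 to 2010 amounts to about 17 TW, a little under 10 percent of the 198 TW average power attributed to the circulation over the same period.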
Hadley cell
Wikipedia
460
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
The expansion of the Hadley circulation due to climate change is consistent with the Held–Hou model, which predicts that the latitudinal extent of the circulation is proportional to the square root of the height of the tropopause (a numerical illustration is given at the end of this passage). Warming of the troposphere raises the tropopause height, enabling the upper poleward branch of the Hadley cells to extend farther and leading to an expansion of the cells. Results from climate models suggest that the impacts of internal variability (such as from the Pacific decadal oscillation) and of anthropogenic influence on the expansion of the Hadley circulation since the 1980s have been comparable. Human influence is most evident in the expansion of the Southern Hemisphere Hadley cell; the AR6 assessed medium confidence in associating the expansion of the Hadley circulation in both hemispheres with the added radiative forcing of greenhouse gases.

Physical mechanisms and projected changes
The physical processes by which human influence expands the Hadley circulation are unclear but may be linked to the increased warming of the subtropics relative to other latitudes in both the Northern and Southern hemispheres. The enhanced subtropical warmth could enable the circulation to expand poleward by displacing the subtropical jet and baroclinic eddies poleward. The poleward expansion of the Southern Hemisphere Hadley cell in austral summer was attributed by the IPCC Fifth Assessment Report (AR5) to stratospheric ozone depletion based on CMIP5 model simulations, while CMIP6 simulations have not shown as clear a signal. Ozone depletion could plausibly affect the Hadley circulation through increased radiative cooling in the lower stratosphere; this would increase the phase speed of baroclinic eddies and displace them poleward, leading to expansion of the Hadley cells. Other eddy-driven mechanisms for expanding Hadley cells have been proposed, involving changes in baroclinicity, wave breaking, and other releases of instability. In the extratropics of the Northern Hemisphere, increasing concentrations of black carbon and tropospheric ozone may be a major forcing on that hemisphere's Hadley cell expansion in boreal summer.
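As the numerical illustration promised above (a heuristic based on the Held–Hou scaling reconstructed earlier, not a result stated in the source): since φ_H ∝ √H, a small fractional increase δH/H in tropopause height widens the cell's edge by

δφ_H / φ_H ≈ (1/2) δH/H,

so a 2 percent rise in H would shift a poleward edge near 30° by only about 0.3° of latitude.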
Hadley cell
Wikipedia
440
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
Projections from climate models indicate that a continued increase in the concentration of greenhouse gases would result in a continued widening of the Hadley circulation. However, simulations using historical data suggest that forcing from greenhouse gases may account for only about 0.1° per decade of the expansion of the tropics. Although the widening of the Hadley cells due to climate change has occurred concurrently with an increase in their intensity based on atmospheric reanalyses, climate model projections generally depict a weakening circulation in tandem with a widening circulation by the end of the 21st century. A longer-term increase in the concentration of carbon dioxide may lead to a weakening of the Hadley circulation as a result of reduced radiative cooling in the troposphere near the circulation's sinking branches. However, changes in the oceanic circulation within the tropics may attenuate changes in the intensity and width of the Hadley cells by reducing thermal contrasts.

Changes to weather patterns
The expansion of the Hadley circulation due to climate change is connected to changes in regional and global weather patterns. A widening of the tropics could displace the tropical rain belt, expand subtropical deserts, and exacerbate wildfires and drought. The documented shift and expansion of subtropical ridges are associated with changes in the Hadley circulation, including a westward extension of the subtropical high over the northwestern Pacific, changes in the intensity and position of the Azores High, and the poleward displacement and intensification of the subtropical high-pressure belt in the Southern Hemisphere. These changes have influenced regional precipitation amounts and variability, including drying trends over southern Australia, northeastern China, and northern South Asia. The AR6 assessed limited evidence that the expansion of the Northern Hemisphere Hadley cell may have contributed to drier conditions in the subtropics and a poleward expansion of aridity during boreal summer. Precipitation changes induced by changes in the Hadley circulation may lead to changes in regional soil moisture, with modelling showing the most significant declines in the Mediterranean region, South Africa, and the southwestern United States. However, the concurrent effects of changing surface temperature patterns over land lead to uncertainties over the influence of Hadley cell broadening on drying over subtropical land areas.
Hadley cell
Wikipedia
432
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
Climate modelling suggests that the shift in the position of the subtropical highs induced by Hadley cell broadening may reduce oceanic upwelling at low latitudes and enhance oceanic upwelling at high latitudes. The expansion of subtropical highs in tandem with the circulation's expansion may also entail a widening of oceanic regions of high salinity and low marine primary production. A decline in extratropical cyclones in the storm-track regions in model projections is partly influenced by Hadley cell expansion. Poleward shifts in the Hadley circulation are associated with shifts in the paths of tropical cyclones in the Northern and Southern hemispheres, including a poleward trend in the locations where storms attain their peak intensity.

Extraterrestrial Hadley circulations
Outside of Earth, any thermally direct circulation that circulates air meridionally across planetary-scale gradients of insolation may be described as a Hadley circulation. A terrestrial atmosphere subject to excess equatorial heating tends to maintain an axisymmetric Hadley circulation with rising motions near the equator and sinking at higher latitudes. Differential heating is hypothesized to produce Hadley circulations analogous to Earth's in other atmospheres in the Solar System, such as on Venus, Mars, and Titan. As in Earth's atmosphere, the Hadley circulation would be the dominant meridional circulation for these extraterrestrial atmospheres. Though less well understood, Hadley circulations may also be present on the gas giants of the Solar System and should in principle materialize in exoplanetary atmospheres. The spatial extent of a Hadley cell in any atmosphere may depend on the rotation rate of the planet or moon, with a faster rotation rate leading to more contracted Hadley cells (with a more restricted poleward extent) and a more cellular global meridional circulation. A slower rotation rate reduces the Coriolis effect, thereby reducing the meridional temperature gradient needed to sustain a jet at the Hadley cell's poleward boundary and allowing the Hadley cell to extend farther poleward.
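This rotation-rate dependence is consistent with the Held–Hou scaling reconstructed earlier (a heuristic, small-angle argument rather than a statement from the source): with φ_H ∝ 1/(Ω a), halving a planet's rotation rate Ω would, all else being equal, roughly double the latitudinal extent of its Hadley cells.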
Hadley cell
Wikipedia
404
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
Venus, which rotates slowly, may have Hadley cells that extend farther poleward than Earth's, spanning from the equator to high latitudes in each of the northern and southern hemispheres. Its broad Hadley circulation would efficiently maintain the nearly isothermal temperature distribution between the planet's pole and equator, with only slow vertical velocities. Observations of chemical tracers such as carbon monoxide provide indirect evidence for the existence of the Venusian Hadley circulation. Poleward winds observed at upper-cloud altitudes are typically understood to be associated with the upper branch of a Hadley cell, located well above the Venusian surface. The slow vertical velocities associated with the Hadley circulation have not been measured directly, though they may have contributed to the vertical velocities measured by the Vega and Venera missions. The Hadley cells may extend to around 60° latitude, equatorward of a mid-latitude jet stream demarcating the boundary between the hypothesized Hadley cell and the polar vortex. The planet's atmosphere may exhibit two Hadley circulations, one near the surface and the other at the level of the upper cloud deck. The Venusian Hadley circulation may contribute to the superrotation of the planet's atmosphere.
Hadley cell
Wikipedia
257
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
Simulations of the Martian atmosphere suggest that a Hadley circulation is also present on Mars, exhibiting stronger seasonality than Earth's Hadley circulation. This greater seasonality results from the diminished thermal inertia associated with the lack of an ocean and the planet's thinner atmosphere. Additionally, Mars' orbital eccentricity leads to a stronger and wider Hadley cell during its northern winter compared to its southern winter. During most of the Martian year, when a single Hadley cell prevails, its rising and sinking branches are located at 30° and 60° latitude, respectively, in global climate modelling. The tops of the Hadley cells on Mars may reach higher altitudes and be less well defined than on Earth, owing to the lack of a strong tropopause on Mars. While latent heating from the phase changes of water drives much of the ascending motion in Earth's Hadley circulation, ascent in Mars' Hadley circulation may be driven by the radiative heating of lofted dust and intensified by the condensation of carbon dioxide near the polar ice cap of Mars' wintertime hemisphere, which steepens pressure gradients. Over the course of the Martian year, the mass flux of the Hadley circulation ranges between about 10⁹ kg s−1 at the equinoxes and 10¹⁰ kg s−1 at the solstices.

A Hadley circulation may also be present in the atmosphere of Saturn's moon Titan. As on Venus, Titan's slow rotation rate may support a spatially broad Hadley circulation. General circulation modelling of Titan's atmosphere suggests the presence of a cross-equatorial Hadley cell. This configuration is consistent with the meridional winds observed by the Huygens spacecraft when it landed near Titan's equator. During Titan's solstices, its Hadley circulation may take the form of a single Hadley cell that extends from pole to pole, with warm gas rising in the summer hemisphere and sinking in the winter hemisphere. A two-celled configuration with ascent near the equator is present in modelling only during a limited transitional period near the equinoxes. The distribution of convective methane clouds on Titan and observations from the Huygens spacecraft suggest that the rising branch of its Hadley circulation occurs in the mid-latitudes of its summer hemisphere. Frequent cloud formation occurs at 40° latitude in Titan's summer hemisphere, from ascent analogous to that in Earth's ITCZ.
Hadley cell
Wikipedia
474
6953458
https://en.wikipedia.org/wiki/Hadley%20cell
Physical sciences
Atmospheric circulation
null
A differential equation can be homogeneous in either of two respects. A first order differential equation is said to be homogeneous if it may be written

f(x, y) dy = g(x, y) dx,

where f and g are homogeneous functions of the same degree of x and y. In this case, the change of variable y = ux leads to an equation of the form

dx/x = h(u) du,

which is easy to solve by integration of the two members. Otherwise, a differential equation is homogeneous if it is a homogeneous function of the unknown function and its derivatives. In the case of linear differential equations, this means that there are no constant terms. The solutions of any linear ordinary differential equation of any order may be deduced by integration from the solution of the homogeneous equation obtained by removing the constant term.

History
The term homogeneous was first applied to differential equations by Johann Bernoulli in section 9 of his 1726 article De integrationibus aequationum differentialium (On the integration of differential equations).

Homogeneous first-order differential equations
A first-order ordinary differential equation in the form

M(x, y) dx + N(x, y) dy = 0

is a homogeneous type if both functions M and N are homogeneous functions of the same degree n. That is, multiplying each variable by a parameter λ, we find

M(λx, λy) = λⁿ M(x, y) and N(λx, λy) = λⁿ N(x, y).

Thus,

M(λx, λy) / N(λx, λy) = M(x, y) / N(x, y).

Solution method
In the quotient M(tx, ty)/N(tx, ty) = M(x, y)/N(x, y), we can let t = 1/x to simplify this quotient to a function f of the single variable y/x:

M(x, y)/N(x, y) = M(tx, ty)/N(tx, ty) = M(1, y/x)/N(1, y/x) = f(y/x).

That is,

dy/dx = −f(y/x).

Introduce the change of variables y = ux; differentiate using the product rule:

dy/dx = x du/dx + u.

This transforms the original differential equation into the separable form

x du/dx = −f(u) − u, or equivalently du/(f(u) + u) = −dx/x,

which can now be integrated directly: ln x equals the antiderivative of the right-hand side (see ordinary differential equation).

Special case
A first order differential equation of the form

(ax + by + c) dx + (ex + fy + g) dy = 0

(a, b, c, e, f, g are all constants), where af ≠ be, can be transformed into a homogeneous type by a linear transformation of both variables (α and β are constants):

t = x + α, z = y + β.

Homogeneous linear differential equations
A linear differential equation is homogeneous if it is a homogeneous linear equation in the unknown function and its derivatives. It follows that, if φ(x) is a solution, so is cφ(x), for any (non-zero) constant c. In order for this condition to hold, each nonzero term of the linear differential equation must depend on the unknown function or any derivative of it. A linear differential equation that fails this condition is called inhomogeneous. A linear differential equation can be represented as a linear operator L acting on y(x), where x is usually the independent variable and y is the dependent variable. Therefore, the general form of a linear homogeneous differential equation is

L(y) = 0,

where L is a differential operator, a sum of derivatives (defining the "0th derivative" as the original, non-differentiated function), each multiplied by a function fᵢ of x:

L(y) = f₀(x) y + f₁(x) y′ + f₂(x) y″ + ⋯ + fₙ(x) y⁽ⁿ⁾,
Homogeneous differential equation
Wikipedia
511
6954092
https://en.wikipedia.org/wiki/Homogeneous%20differential%20equation
Mathematics
Differential equations
null
where the fᵢ may be constants, but not all of the fᵢ may be zero. For example, the following linear differential equation is homogeneous:

sin(x) y″ + 4 y′ + y = 0,

whereas the following two are inhomogeneous:

2x² y″ + 4x y′ + y = cos(x);

2x² y″ − 3x y′ + y = 2.

The existence of a constant term is a sufficient condition for an equation to be inhomogeneous, as in the above example.
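Two short illustrations may help; both are additions for this edition, not part of the original article, and the particular equations chosen are arbitrary. First, a worked instance of the first-order solution method described above: consider

(x² + y²) dx − 2xy dy = 0,

where M(x, y) = x² + y² and N(x, y) = −2xy are both homogeneous of degree 2. Writing dy/dx = (x² + y²)/(2xy) and substituting y = ux gives

u + x du/dx = (1 + u²)/(2u), so that 2u du/(1 − u²) = dx/x.

Integrating both members yields −ln|1 − u²| = ln|x| + C, hence x(1 − u²) = C′, and restoring u = y/x gives the solution family x² − y² = C′x. Second, the linearity of the operator L explains why homogeneity matters: if L(y₁) = L(y₂) = 0, then

L(c₁y₁ + c₂y₂) = c₁L(y₁) + c₂L(y₂) = 0

for any constants c₁ and c₂, so the solutions of a homogeneous linear differential equation form a vector space; this superposition property fails for the inhomogeneous examples above.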
Homogeneous differential equation
Wikipedia
60
6954092
https://en.wikipedia.org/wiki/Homogeneous%20differential%20equation
Mathematics
Differential equations
null
Ornithology is a branch of zoology that concerns the study of birds. Several aspects of ornithology differ from related disciplines, due partly to the high visibility and the aesthetic appeal of birds. It has also been an area with a large contribution made by amateurs in terms of time, resources, and financial support. Studies on birds have helped develop key concepts in biology including evolution, behaviour and ecology such as the definition of species, the process of speciation, instinct, learning, ecological niches, guilds, insular biogeography, phylogeography, and conservation. While early ornithology was principally concerned with descriptions and distributions of species, ornithologists today seek answers to very specific questions, often using birds as models to test hypotheses or predictions based on theories. Most modern biological theories apply across life forms, and the number of scientists who identify themselves as "ornithologists" has therefore declined. A wide range of tools and techniques are used in ornithology, both inside the laboratory and out in the field, and innovations are constantly made. Most biologists who recognise themselves as "ornithologists" study specific biology research areas, such as anatomy, physiology, taxonomy (phylogenetics), ecology, or behaviour.

Definition and etymology
The word "ornithology" comes from the late 16th-century Latin ornithologia meaning "bird science" from the Greek ὄρνις ornis ("bird") and λόγος logos ("theory, science, thought").

History
The history of ornithology largely reflects the trends in the history of biology, as well as many other scientific disciplines, including ecology, anatomy, physiology, paleontology, and more recently, molecular biology. Trends include the move from mere descriptions to the identification of patterns, thus towards elucidating the processes that produce these patterns.

Early knowledge and study
Humans have had an observational relationship with birds since prehistory, with some stone-age drawings being amongst the oldest indications of an interest in birds. Birds were perhaps important as food sources, and bones of as many as 80 species have been found in excavations of early Stone Age settlements. Water bird and seabird remains have also been found in shell mounds on the island of Oronsay off the coast of Scotland.
Ornithology
Wikipedia
477
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
Cultures around the world have rich vocabularies related to birds. Traditional bird names are often based on detailed knowledge of their behaviour, with many names being onomatopoeic and still in use. Traditional knowledge may also involve the use of birds in folk medicine, and knowledge of these practices is passed on through oral traditions (see ethnoornithology). Hunting of wild birds as well as their domestication would have required considerable knowledge of their habits. Poultry farming and falconry were practised from early times in many parts of the world. Artificial incubation of poultry was practised in China by around 246 BC and in Egypt by at least around 400 BC. The Egyptians also made use of birds in their hieroglyphic scripts, many of which, though stylized, are still identifiable to species. Early written records provide valuable information on the past distributions of species. For instance, Xenophon records the abundance of the ostrich in Assyria (Anabasis, i. 5); this subspecies from Asia Minor is extinct, and all extant ostrich races are today restricted to Africa. Other old writings such as the Vedas (1500–800 BC) demonstrate careful observation of avian life histories and include the earliest reference to the habit of brood parasitism by the Asian koel (Eudynamys scolopaceus). Like writing, the early art of China, Japan, Persia, and India also demonstrates such knowledge, with examples of scientifically accurate bird illustrations.
Ornithology
Wikipedia
301
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
Aristotle, in 350 BC in his History of Animals, noted the habits of bird migration, moulting, egg laying, and lifespans, as well as compiling a list of 170 different bird species. However, he also introduced and propagated several myths, such as the idea that swallows hibernated in winter, although he noted that cranes migrated from the steppes of Scythia to the marshes at the headwaters of the Nile. The idea of swallow hibernation became so well established that even as late as 1878, Elliott Coues could list as many as 182 contemporary publications dealing with the hibernation of swallows, while little published evidence contradicted the theory. Similar misconceptions existed regarding the breeding of barnacle geese: their nests had not been seen, and they were believed to grow by transformations of goose barnacles, an idea that became prevalent from around the 11th century and was noted by Bishop Giraldus Cambrensis (Gerald of Wales) in Topographia Hiberniae (1187). Around 77 AD, Pliny the Elder described birds, among other creatures, in his Historia Naturalis.
Ornithology
Wikipedia
230
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
The earliest record of falconry comes from the reign of Sargon II (722–705 BC) in Assyria. Falconry is thought to have made its entry into Europe only after AD 400, brought in from the east after invasions by the Huns and Alans. Starting from the eighth century, numerous Arabic works on the subject and on general ornithology were written, as well as translations of the works of ancient writers from Greek and Syriac. In the 12th and 13th centuries, crusades and conquest had brought Islamic territories in southern Italy, central Spain, and the Levant under European rule, and for the first time translations into Latin of the great works of Arabic and Greek scholars were made with the help of Jewish and Muslim scholars, especially in Toledo, which had fallen into Christian hands in 1085 and whose libraries had escaped destruction. Michael Scotus from Scotland made a Latin translation of Aristotle's work on animals from Arabic there around 1215, which was disseminated widely; it was the first time in a millennium that this foundational text on zoology became available to Europeans. Falconry was popular in the Norman court in Sicily, and a number of works on the subject were written in Palermo. Emperor Frederick II of Hohenstaufen (1194–1250) learned about falconry during his youth in Sicily and later built up a menagerie and sponsored translations of Arabic texts, among them the popular Arabic work by an unknown author known as the Liber Moaminus, which Theodore of Antioch, from Syria, translated into Latin in 1240–1241 as the De Scientia Venandi per Aves. Michael Scotus, who had moved to Palermo, also translated for the Emperor Ibn Sīnā's Kitāb al-Ḥayawān of 1027, a commentary on and scientific update of Aristotle's work that formed part of Ibn Sīnā's massive Kitāb al-Šifāʾ. Frederick II eventually wrote his own treatise on falconry, the De arte venandi cum avibus, in which he related his ornithological observations and the results of the hunts and experiments his court enjoyed performing.
Ornithology
Wikipedia
440
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
Several early German and French scholars compiled old works and conducted new research on birds. These included Guillaume Rondelet, who described his observations in the Mediterranean, and Pierre Belon, who described the fish and birds that he had seen in France and the Levant. Belon's Book of Birds (1555) is a folio volume with descriptions of some 200 species. His comparison of the skeletons of humans and birds is considered a landmark in comparative anatomy. Volcher Coiter (1534–1576), a Dutch anatomist, made detailed studies of the internal structures of birds and produced a classification of birds, De Differentiis Avium (around 1572), that was based on structure and habits. Konrad Gesner wrote the Vogelbuch and Icones avium omnium around 1557. Like Gesner, Ulisse Aldrovandi, an encyclopedic naturalist, began a 14-volume natural history with three volumes on birds, entitled Ornithologiae hoc est de avibus historiae libri XII, which was published from 1599 to 1603. Aldrovandi showed great interest in plants and animals, and his work included 3000 drawings of fruits, flowers, plants, and animals, published in 363 volumes. His Ornithology alone covers 2000 pages and included such aspects as the chicken and techniques of poultry keeping. He used a number of traits, including behaviour, particularly bathing and dusting, to classify bird groups. William Turner's Historia Avium (History of Birds), published at Cologne in 1544, was an early ornithological work from England. He noted the commonness of kites in English cities, where they snatched food out of the hands of children. He also included folk beliefs, such as those of anglers, who believed that the osprey emptied their fishponds and therefore killed it, mixing the flesh of the osprey into their fish bait. Turner's work reflected the violent times in which he lived and stands in contrast to later works, such as Gilbert White's 1789 The Natural History and Antiquities of Selborne, that were written in a tranquil era.
Ornithology
Wikipedia
439
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
In the 17th century, Francis Willughby (1635–1672) and John Ray (1627–1705) created the first major system of bird classification that was based on function and morphology rather than on form or behaviour. Willughby's Ornithologiae libri tres (1676), completed by John Ray, is sometimes considered to mark the beginning of scientific ornithology. Ray also worked on Ornithologia, which was published posthumously in 1713 as Synopsis methodica avium et piscium. The earliest list of British birds, Pinax Rerum Naturalium Britannicarum, was written by Christopher Merrett in 1667, but authors such as John Ray considered it of little value. Ray did, however, value the expertise of the naturalist Sir Thomas Browne (1605–82), who not only answered his queries on ornithological identification and nomenclature but also those of Willughby and Merrett in letter correspondence. Browne himself in his lifetime kept an eagle, an owl, a cormorant, a bittern, and an ostrich, penned a tract on falconry, and introduced the words "incubation" and "oviparous" into the English language.
Ornithology
Wikipedia
250
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
Towards the late 18th century, Mathurin Jacques Brisson (1723–1806) and the Comte de Buffon (1707–1788) began new works on birds. Brisson produced a six-volume work, Ornithologie, in 1760, and Buffon's encyclopedic Histoire naturelle générale et particulière (1749–1804) included nine volumes on birds (volumes 16–24), the Histoire naturelle des oiseaux (1770–1785). Jacob Temminck sponsored François Le Vaillant (1753–1824) to collect bird specimens in Southern Africa, and Le Vaillant's six-volume Histoire naturelle des oiseaux d'Afrique (1796–1808) included many non-African birds. His other bird books, produced in collaboration with the artist Barraband, are considered among the most valuable illustrated guides ever produced. Louis Pierre Vieillot (1748–1831) spent 10 years studying North American birds and wrote the Histoire naturelle des oiseaux de l'Amérique septentrionale (1807–1808?). Vieillot pioneered the use of life histories and habits in classification. Alexander Wilson composed a nine-volume work, American Ornithology, published in 1808–1814, which is the first such record of North American birds, significantly antedating Audubon. In the early 19th century, Lewis and Clark studied and identified many birds in the western United States. John James Audubon, born in 1785, observed and painted birds in France and later in the Ohio and Mississippi valleys. From 1827 to 1838, Audubon published The Birds of America, which was engraved by Robert Havell Sr. and his son Robert Havell Jr. Containing 435 engravings, it is often regarded as the greatest ornithological work in history.

Scientific studies
Ornithology
Wikipedia
373
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
The emergence of ornithology as a scientific discipline began in the 18th century, when Mark Catesby published his two-volume Natural History of Carolina, Florida, and the Bahama Islands, a landmark work which included 220 hand-painted engravings and was the basis for many of the species Carl Linnaeus described in the 1758 Systema Naturae. Linnaeus' work revolutionised bird taxonomy by assigning every species a binomial name and categorising them into different genera. However, ornithology did not emerge as a specialised science until the Victorian era, with the popularization of natural history and the collection of natural objects such as bird eggs and skins. This specialization led to the formation in Britain of the British Ornithologists' Union in 1858; in 1859, its members founded its journal, The Ibis. The sudden spurt in ornithology was also due in part to colonialism. A hundred years later, in 1959, R. E. Moreau noted that ornithology in this period had been preoccupied with the geographical distributions of various species of birds. The bird collectors of the Victorian era observed the variations in bird forms and habits across geographic regions, noting local specialization and variation in widespread species. The collections of museums and private collectors grew with contributions from various parts of the world. The naming of species with binomials and the organization of birds into groups based on their similarities became the main work of museum specialists. The variation in widespread birds across geographical regions led to the introduction of trinomial names.
Ornithology
Wikipedia
312
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
The search for patterns in the variations of birds was attempted by many. Friedrich Wilhelm Joseph Schelling (1775–1854), his student Johann Baptist von Spix (1781–1826), and several others believed that a hidden and innate mathematical order existed in the forms of birds. They believed that a "natural" classification was available and superior to "artificial" ones. A particularly popular idea was the Quinarian system popularised by Nicholas Aylward Vigors (1785–1840), William Sharp Macleay (1792–1865), William Swainson, and others. The idea was that nature followed a "rule of five", with five groups nested hierarchically. Some had attempted a rule of four, but Johann Jakob Kaup (1803–1873) insisted that the number five was special, noting that other natural entities, such as the senses, also came in fives. He followed this idea and demonstrated his view of the order within the crow family. Where he failed to find five genera, he left a blank, insisting that a new genus would be found to fill these gaps. These ideas were replaced by more complex "maps" of affinities in works by Hugh Edwin Strickland and Alfred Russel Wallace. A major advance was made by Max Fürbringer in 1888, who established a comprehensive phylogeny of birds based on anatomy, morphology, distribution, and biology. This was developed further by Hans Gadow and others. The Galapagos finches were especially influential in the development of Charles Darwin's theory of evolution. His contemporary Alfred Russel Wallace also noted these variations and the geographical separations between different forms, leading to the study of biogeography. Wallace was influenced by the work of Philip Lutley Sclater on the distribution patterns of birds. For Darwin, the problem was how species arose from a common ancestor, but he did not attempt to find rules for the delineation of species. The species problem was tackled by the ornithologist Ernst Mayr, who was able to demonstrate that geographical isolation and the accumulation of genetic differences led to the splitting of species. Early ornithologists were preoccupied with matters of species identification. Only systematics counted as true science, and field studies were considered inferior through much of the 19th century. In 1901, Robert Ridgway wrote in the introduction to The Birds of North and Middle America that:
Ornithology
Wikipedia
482
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
This early idea that the study of living birds was merely recreation held sway until ecological theories became the predominant focus of ornithological studies. The study of birds in their habitats was particularly advanced in Germany, with bird ringing stations established as early as 1903. By the 1920s, the Journal für Ornithologie included many papers on the behaviour, ecology, anatomy, and physiology of birds, many written by Erwin Stresemann. Stresemann changed the editorial policy of the journal, leading both to a unification of field and laboratory studies and a shift of research from museums to universities. Ornithology in the United States continued to be dominated by museum studies of morphological variations, species identities, and geographic distributions, until it was influenced by Stresemann's student Ernst Mayr. In Britain, some of the earliest ornithological works that used the word ecology appeared in 1915. The Ibis, however, resisted the introduction of these new methods of study, and no paper on ecology appeared in it until 1943. The work of David Lack on population ecology was pioneering. Newer quantitative approaches were introduced for the study of ecology and behaviour, and these were not readily accepted; Claud Ticehurst, for instance, wrote dismissively of them. David Lack's studies on population ecology sought to find the processes involved in the regulation of population based on the evolution of optimal clutch sizes. He concluded that population was regulated primarily by density-dependent controls, and also suggested that natural selection produces life-history traits that maximize the fitness of individuals. Others, such as Wynne-Edwards, interpreted population regulation as a mechanism that aided the "species" rather than individuals. This led to widespread and sometimes bitter debate on what constituted the "unit of selection". Lack also pioneered the use of many new tools for ornithological research, including the idea of using radar to study bird migration. Birds were also widely used in studies of the niche hypothesis and Georgii Gause's competitive exclusion principle. Work on resource partitioning and the structuring of bird communities through competition was done by Robert MacArthur. Patterns of biodiversity also became a topic of interest. Work on the relationship of the number of species to area and its application in the study of island biogeography was pioneered by E. O. Wilson and Robert MacArthur. These studies led to the development of the discipline of landscape ecology.
Ornithology
Wikipedia
472
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
John Hurrell Crook studied the behaviour of weaverbirds and demonstrated the links between ecological conditions, behaviour, and social systems. Principles from economics were introduced to the study of biology by Jerram L. Brown in his work on explaining territorial behaviour. This led to more studies of behaviour that made use of cost-benefit analyses. The rising interest in sociobiology also led to a spurt of bird studies in this area. The study of imprinting behaviour in ducks and geese by Konrad Lorenz and the studies of instinct in herring gulls by Nicolaas Tinbergen led to the establishment of the field of ethology. The study of learning became an area of interest, and the study of bird song has been a model for studies in neuroethology. The study of hormones and physiology in the control of behaviour has also been aided by bird models. These have helped in finding the proximate causes of circadian and seasonal cycles. Studies on migration have attempted to answer questions on the evolution of migration, orientation, and navigation. The growth of genetics and the rise of molecular biology led to the application of the gene-centered view of evolution to explain avian phenomena. Studies on kinship and altruism, such as helpers, became of particular interest. The idea of inclusive fitness was used to interpret observations on behaviour and life history, and birds were widely used as models for testing hypotheses based on theories postulated by W. D. Hamilton and others. The new tools of molecular biology changed the study of bird systematics, which shifted from being based on phenotype to the underlying genotype. The use of techniques such as DNA–DNA hybridization to study evolutionary relationships was pioneered by Charles Sibley and Jon Edward Ahlquist, resulting in what is called the Sibley–Ahlquist taxonomy. These early techniques have been replaced by newer ones based on mitochondrial DNA sequences and molecular phylogenetics approaches that make use of computational procedures for sequence alignment, construction of phylogenetic trees, and calibration of molecular clocks to infer evolutionary relationships. Molecular techniques are also widely used in studies of avian population biology and ecology. Rise to popularity The use of field glasses or telescopes for bird observation began in the 1820s and 1830s, with pioneers such as J. Dovaston (who also pioneered the use of bird feeders), but instruction manuals did not begin to insist on the use of optical aids such as "a first-class telescope" or "field glass" until the 1880s.
Ornithology
Wikipedia
502
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
The rise of field guides for the identification of birds was another major innovation. The early guides, such as Thomas Bewick's two-volume guide and William Yarrell's three-volume guide, were cumbersome and mainly focused on identifying specimens in the hand. The earliest of the new generation of field guides was prepared by Florence Merriam, sister of Clinton Hart Merriam, the mammalogist. This was published in 1887 as a series, Hints to Audubon Workers: Fifty Birds and How to Know Them, in Grinnell's Audubon Magazine. These were followed by new field guides, from the pioneering illustrated handbooks of Frank Chapman to the classic Field Guide to the Birds by Roger Tory Peterson in 1934, to Birds of the West Indies, published in 1936 by the ornithologist James Bond, whose name the birdwatching author Ian Fleming borrowed for his famous literary spy. Birdwatching grew in popularity in many parts of the world, and the possibility for amateurs to contribute to biological studies was soon realized. As early as 1916, Julian Huxley wrote a two-part article in The Auk, noting the tensions between amateurs and professionals, and suggested the possibility that the "vast army of bird lovers and bird watchers could begin providing the data scientists needed to address the fundamental problems of biology." The amateur ornithologist Harold F. Mayfield noted that the field was also funded by non-professionals. He noted that in 1975, 12% of the papers in American ornithology journals were written by persons who were not employed in biology-related work.
Ornithology
Wikipedia
321
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
Organizations were started in many countries, and these grew rapidly in membership, most notable among them being the Royal Society for the Protection of Birds (RSPB) in Britain and the Audubon Society in the US, which started in 1885. Both these organizations were started with the primary objective of conservation. The RSPB, founded in 1889, grew from a small Croydon-based group of women, including Eliza Phillips, Etta Lemon, Catherine Hall and Hannah Poland. Calling themselves the "Fur, Fin, and Feather Folk", the group met regularly and took a pledge "to refrain from wearing the feathers of any birds not killed for the purpose of food, the ostrich only exempted." The organization initially did not allow men as members, in retaliation for the British Ornithologists' Union's policy of excluding women. Unlike the RSPB, which was primarily conservation oriented, the British Trust for Ornithology was started in 1933 with the aim of advancing ornithological research. Members were often involved in collaborative ornithological projects. These projects have resulted in atlases which detail the distribution of bird species across Britain. In Canada, citizen scientist Elsie Cassels studied migratory birds and was involved in establishing the Gaetz Lakes bird sanctuary. In the United States, the Breeding Bird Surveys, conducted by the United States Geological Survey, have also produced atlases with information on breeding densities and changes in the density and distribution over time. Other volunteer collaborative ornithology projects were subsequently established in other parts of the world. Techniques The tools and techniques of ornithology are varied, and new inventions and approaches are quickly incorporated. The techniques may be broadly dealt with under the categories of those that are applicable to specimens and those that are used in the field, but the classification is rough, and many analysis techniques are usable in both the laboratory and the field, or may require a combination of field and laboratory techniques. Collections The earliest approaches to modern bird study involved the collection of eggs, a practice known as oology. While collecting became a pastime for many amateurs, the unreliable labels associated with these early egg collections made them of limited value for the serious study of bird breeding. To preserve eggs, a tiny hole was made and the contents extracted. This technique became standard with the invention of the blow drill around 1830. Egg collection is no longer popular; however, historic museum collections have been of value in determining the effects of pesticides such as DDT on physiology. Museum bird collections continue to act as a resource for taxonomic studies.
Ornithology
Wikipedia
510
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
The use of bird skins to document species has been a standard part of systematic ornithology. Bird skins are prepared by retaining the key bones of the wings, legs, and skull along with the skin and feathers. In the past, they were treated with arsenic to prevent fungal and insect (mostly dermestid) attack. Arsenic, being toxic, was replaced by less-toxic borax. Amateur and professional collectors became familiar with these skinning techniques and started sending in their skins to museums, some of them from distant locations. This led to the formation of huge collections of bird skins in museums in Europe and North America. Many private collections were also formed. These became references for comparison of species, and the ornithologists at these museums were able to compare species from different locations, often places that they themselves never visited. Morphometrics of these skins, particularly the lengths of the tarsus, bill, tail, and wing, became important in the descriptions of bird species. These skin collections have been used in more recent times for studies on molecular phylogenetics by the extraction of ancient DNA. The importance of type specimens in the description of species makes skin collections a vital resource for systematic ornithology. However, with the rise of molecular techniques, it has now become possible to establish the taxonomic status of new discoveries, such as the Bulo Burti boubou (Laniarius liberatus, no longer a valid species) and the Bugun liocichla (Liocichla bugunorum), using blood, DNA, and feather samples as the holotype material. Other methods of preservation include the storage of specimens in spirit. Such wet specimens have special value in physiological and anatomical study, apart from providing better-quality DNA for molecular studies. Freeze-drying of specimens is another technique that has the advantage of preserving stomach contents and anatomy, although specimens tend to shrink, making them less reliable for morphometrics. In the field The study of birds in the field was helped enormously by improvements in optics. Photography made it possible to document birds in the field with great accuracy. High-power spotting scopes today allow observers to detect minute morphological differences that were earlier possible only by examination of the specimen "in the hand".
Ornithology
Wikipedia
456
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
The capture and marking of birds enable detailed studies of life history. Techniques for capturing birds are varied and include the use of bird liming for perching birds, mist nets for woodland birds, cannon netting for open-area flocking birds, the bal-chatri trap for raptors, and decoys and funnel traps for water birds. The bird in the hand may be examined and measurements can be made, including standard lengths and weights. Feather moult and skull ossification provide indications of age and health. Sex can be determined by examination of anatomy in some sexually nondimorphic species. Blood samples may be drawn to determine hormonal conditions in studies of physiology and to identify DNA markers for studying genetics and kinship in studies of breeding biology and phylogeography. Blood may also be used to identify pathogens and arthropod-borne viruses. Ectoparasites may be collected for studies of coevolution and zoonoses. In many cryptic species, measurements (such as the relative lengths of wing feathers in warblers) are vital in establishing identity. Captured birds are often marked for future recognition. Rings or bands provide long-lasting identification but require capture for the information on them to be read. Field-identifiable marks such as coloured bands, wing tags, or dyes enable short-term studies where individual identification is required. Mark and recapture techniques make demographic studies possible. Ringing has traditionally been used in the study of migration. In recent times, satellite transmitters provide the ability to track migrating birds in near-real time. Techniques for estimating population density include point counts, transects, and territory mapping. Observations are made in the field using carefully designed protocols, and the data may be analysed to estimate bird diversity, relative abundance, or absolute population densities. These methods may be used repeatedly over large timespans to monitor changes in the environment. Camera traps have been found to be a useful tool for the detection and documentation of elusive species and nest predators, and in the quantitative analysis of frugivory, seed dispersal, and behaviour.
Ornithology
Wikipedia
418
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
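As a concrete illustration of the mark-and-recapture approach described above, the following minimal Python sketch implements the classic Lincoln-Petersen estimator (with Chapman's small-sample correction). It is not from the article; the function names and the field counts are hypothetical.

def lincoln_petersen(marked, caught, recaptured):
    """Classic closed-population estimate: N ~ (M * C) / R."""
    if recaptured == 0:
        raise ValueError("need at least one recaptured bird")
    return marked * caught / recaptured

def chapman(marked, caught, recaptured):
    """Chapman's bias-corrected variant, safer for small samples."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Hypothetical field numbers: 120 birds ringed on the first visit;
# of 150 caught on the second visit, 30 already carry rings.
print(lincoln_petersen(120, 150, 30))  # 600.0
print(chapman(120, 150, 30))           # ~588.4

Both estimators assume a closed population between visits and equal catchability of marked and unmarked birds, which is why field protocols matter as much as the arithmetic.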
In the laboratory Many aspects of bird biology are difficult to study in the field. These include the study of behavioural and physiological changes that require a long duration of access to the bird. Nondestructive samples of blood or feathers taken during field studies may be studied in the laboratory. For instance, the variation in the ratios of stable hydrogen isotopes across latitudes makes establishing the origins of migrant birds possible using mass spectrometric analysis of feather samples. These techniques can be used in combination with other techniques such as ringing. The first attenuated vaccine developed by Louis Pasteur, for fowl cholera, was tested on poultry in 1878. Anti-malarials were tested on birds, which harbour avian malarias. Poultry continues to be used as a model for many studies in non-mammalian immunology. Studies in bird behaviour include the use of tamed and trained birds in captivity. Studies on bird intelligence and song learning have been largely laboratory-based. Field researchers may make use of a wide range of techniques, such as the use of dummy owls to elicit mobbing behaviour, and dummy males or the use of call playback to elicit territorial behaviour and thereby establish the boundaries of bird territories. Bird migration, including aspects of navigation, orientation, and physiology, is often studied using captive birds in special cages that record their activities. The Emlen funnel, for instance, makes use of a cage with an inkpad at the centre and a conical floor where the ink marks can be counted to identify the direction in which the bird attempts to fly. The funnel can have a transparent top, and visible cues such as the direction of sunlight may be controlled using mirrors, or the positions of the stars simulated in a planetarium.
Ornithology
Wikipedia
348
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
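The Emlen-funnel readout described above amounts to simple circular statistics. The following Python sketch, with invented mark counts per sector and a hypothetical mean_heading helper, shows how a preferred direction and its concentration could be summarised; compass bearings are treated as ordinary angles here purely for simplicity.

import math

def mean_heading(counts):
    """Mean direction (degrees) and concentration r in [0, 1] of the
    ink marks; counts maps a bearing to the number of marks there."""
    n = sum(counts.values())
    x = sum(c * math.cos(math.radians(d)) for d, c in counts.items())
    y = sum(c * math.sin(math.radians(d)) for d, c in counts.items())
    return math.degrees(math.atan2(y, x)) % 360, math.hypot(x, y) / n

# Invented marks per 45-degree sector of the funnel's sloping wall:
marks = {0: 4, 45: 18, 90: 7, 135: 2, 180: 1, 225: 2, 270: 3, 315: 5}
print(mean_heading(marks))  # ~(38.7, 0.50): a clear directional bias

A vector length r near 1 would indicate a strongly concentrated heading; values near 0 indicate no preferred direction.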
The entire genome of the domestic fowl (Gallus gallus) was sequenced in 2004, and was followed in 2008 by the genome of the zebra finch (Taeniopygia guttata). Such whole-genome sequencing projects allow for studies on evolutionary processes involved in speciation. Associations between the expression of genes and behaviour may be studied using candidate genes. Variations in the exploratory behaviour of great tits (Parus major) have been found to be linked with a gene orthologous to the human gene DRD4 (Dopamine receptor D4), which is known to be associated with novelty-seeking behaviour. The role of gene expression in developmental differences and morphological variations has been studied in Darwin's finches. Differences in the expression of Bmp4 have been shown to be associated with changes in the growth and shape of the beak. The chicken has long been a model organism for studying vertebrate developmental biology. As the embryo is readily accessible, its development can be easily followed (unlike that of mice). This also allows the use of electroporation for studying the effect of adding or silencing a gene. Other tools for perturbing their genetic makeup are chicken embryonic stem cells and viral vectors. Collaborative studies With the widespread interest in birds, it has been possible to use large numbers of people to work on collaborative ornithological projects that cover large geographic scales. These citizen science projects include nationwide projects such as the Christmas Bird Count, Backyard Bird Count, the North American Breeding Bird Survey, and the Canadian EPOQ, or regional projects such as the Asian Waterfowl Census and Spring Alive in Europe. These projects help to identify distributions of birds, their population densities and changes over time, arrival and departure dates of migration, breeding seasonality, and even population genetics. The results of many of these projects are published as bird atlases. Studies of migration using bird ringing or colour marking often involve the cooperation of people and organizations in different countries. Applications Wild birds impact many human activities, while domesticated birds are important sources of eggs, meat, feathers, and other products. Applied and economic ornithology aim to reduce the ill effects of problem birds and enhance gains from beneficial species.
Ornithology
Wikipedia
453
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
The role of some species of birds as pests has been well known, particularly in agriculture. Granivorous birds such as the queleas in Africa are among the most numerous birds in the world, and foraging flocks can cause devastation. Many insectivorous birds are also noted as beneficial in agriculture. Many early studies on the benefits or damages caused by birds in fields were made by analysis of stomach contents and observation of feeding behaviour. Modern studies aimed at managing birds in agriculture make use of a wide range of principles from ecology. Intensive aquaculture has brought humans into conflict with fish-eating birds such as cormorants. Large flocks of pigeons and starlings in cities are often considered a nuisance, and techniques to reduce their populations or their impacts are constantly being developed. Birds are also of medical importance, and their role as carriers of human diseases such as Japanese encephalitis, West Nile virus, and influenza H5N1 has been widely recognized. Bird strikes and the damage they cause in aviation are of particular importance because of their fatal consequences and the level of economic losses they cause; the airline industry incurs worldwide damages of an estimated US$1.2 billion each year. Many species of birds have been driven to extinction by human activities. Being conspicuous elements of the ecosystem, birds have been considered indicators of ecological health. They have also helped in gathering support for habitat conservation. Bird conservation requires specialized knowledge in aspects of biology and ecology, and may require the use of very location-specific approaches. Ornithologists contribute to conservation biology by studying the ecology of birds in the wild and identifying the key threats and ways of enhancing the survival of species. Critically endangered species such as the California condor have had to be captured and bred in captivity. Such ex situ conservation measures may be followed by reintroduction of the species into the wild.
Ornithology
Wikipedia
379
42967
https://en.wikipedia.org/wiki/Ornithology
Biology and health sciences
Basics_2
Biology
Hubble's law, also known as the Hubble–Lemaître law, is the observation in physical cosmology that galaxies are moving away from Earth at speeds proportional to their distance. In other words, the farther a galaxy is from the Earth, the faster it moves away. A galaxy's recessional velocity is typically determined by measuring its redshift, a shift in the frequency of light emitted by the galaxy. The discovery of Hubble's law is attributed to work published by Edwin Hubble in 1929, but the notion of the universe expanding at a calculable rate was first derived from general relativity equations in 1922 by Alexander Friedmann. The Friedmann equations showed the universe might be expanding, and presented the expansion speed if that were the case. Before Hubble, astronomer Carl Wilhelm Wirtz had, in 1922 and 1924, deduced with his own data that galaxies that appeared smaller and dimmer had larger redshifts and thus that more distant galaxies recede faster from the observer. In 1927, Georges Lemaître concluded that the universe might be expanding by noting the proportionality of the recessional velocity of distant bodies to their respective distances. He estimated a value for this ratio, which—after Hubble confirmed cosmic expansion and determined a more precise value for it two years later—became known as the Hubble constant. Hubble inferred the recession velocity of the objects from their redshifts, many of which were earlier measured and related to velocity by Vesto Slipher in 1917. Combining Slipher's velocities with Henrietta Swan Leavitt's intergalactic distance calculations and methodology allowed Hubble to better calculate an expansion rate for the universe.
Hubble's law
Wikipedia
349
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
Hubble's law is considered the first observational basis for the expansion of the universe, and is one of the pieces of evidence most often cited in support of the Big Bang model. The motion of astronomical objects due solely to this expansion is known as the Hubble flow. It is described by the equation v = H0 D, with H0 the constant of proportionality (the Hubble constant) between the "proper distance" D to a galaxy (which can change over time, unlike the comoving distance) and its speed of separation v, i.e. the derivative of proper distance with respect to the cosmic time coordinate. Though the Hubble constant is constant at any given moment in time, the Hubble parameter H(t), of which the Hubble constant is the current value, varies with time, so the term constant is sometimes thought of as somewhat of a misnomer. The Hubble constant is most frequently quoted in (km/s)/Mpc, which gives the speed in km/s of a galaxy one megaparsec away as H0 × 1 Mpc. Simplifying the units of the generalized form reveals that H0 specifies a frequency (SI unit: s−1), leading the reciprocal of H0 to be known as the Hubble time (14.4 billion years). The Hubble constant can also be stated as a relative rate of expansion. In this form H0 = 7%/Gyr, meaning that, at the current rate of expansion, it takes one billion years for an unbound structure to grow by 7%. Discovery A decade before Hubble made his observations, a number of physicists and mathematicians had established a consistent theory of an expanding universe by using Einstein field equations of general relativity. Applying the most general principles to the nature of the universe yielded a dynamic solution that conflicted with the then-prevalent notion of a static universe. Slipher's observations In 1912, Vesto M. Slipher measured the first Doppler shift of a "spiral nebula" (the obsolete term for spiral galaxies) and soon discovered that almost all such objects were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside the Milky Way galaxy. FLRW equations
Hubble's law
Wikipedia
446
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
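As a quick numerical illustration of the relation v = H0 D and the 7%/Gyr figure above, here is a minimal Python sketch; the value H0 = 70 (km/s)/Mpc is assumed purely for illustration.

KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16

H0 = 70.0  # assumed value, (km/s)/Mpc

def recession_velocity(distance_mpc):
    """Hubble's law v = H0 * D, in km/s."""
    return H0 * distance_mpc

print(recession_velocity(100))            # 7000 km/s at 100 Mpc

# The same constant as a relative expansion rate (the ~7 %/Gyr above):
print(H0 / KM_PER_MPC * SECONDS_PER_GYR)  # ~0.072, i.e. about 7 % per Gyr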
In 1922, Alexander Friedmann derived his Friedmann equations from Einstein field equations, showing that the universe might expand at a rate calculable by the equations. The parameter a(t) used by Friedmann is known today as the scale factor and can be considered as a scale-invariant form of the proportionality constant of Hubble's law. Georges Lemaître independently found a similar solution in his 1927 paper discussed in the following section. The Friedmann equations are derived by inserting the metric for a homogeneous and isotropic universe into Einstein's field equations for a fluid with a given density and pressure. This idea of an expanding spacetime would eventually lead to the Big Bang and Steady State theories of cosmology. Lemaître's equation In 1927, two years before Hubble published his own article, the Belgian priest and astronomer Georges Lemaître was the first to publish research deriving what is now known as Hubble's law. According to the Canadian astronomer Sidney van den Bergh, "the 1927 discovery of the expansion of the universe by Lemaître was published in French in a low-impact journal. In the 1931 high-impact English translation of this article, a critical equation was changed by omitting reference to what is now known as the Hubble constant." It is now known that the alterations in the translated paper were carried out by Lemaître himself. Shape of the universe Before the advent of modern cosmology, there was considerable talk about the size and shape of the universe. In 1920, the Shapley–Curtis debate took place between Harlow Shapley and Heber D. Curtis over this issue. Shapley argued for a small universe the size of the Milky Way galaxy, and Curtis argued that the universe was much larger. The issue was resolved in the coming decade with Hubble's improved observations. Cepheid variable stars outside the Milky Way Edwin Hubble did most of his professional astronomical observing work at Mount Wilson Observatory, home to the world's most powerful telescope at the time. His observations of Cepheid variable stars in "spiral nebulae" enabled him to calculate the distances to these objects. Surprisingly, these objects were discovered to be at distances which placed them well outside the Milky Way. They continued to be called nebulae, and it was only gradually that the term galaxies replaced it. Combining redshifts with distance measurements
Hubble's law
Wikipedia
491
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
The velocities and distances that appear in Hubble's law are not directly measured. The velocities are inferred from the redshift of radiation, and distance is inferred from brightness. Hubble sought to correlate brightness with the redshift parameter z. Combining his measurements of galaxy distances with Vesto Slipher and Milton Humason's measurements of the redshifts associated with the galaxies, Hubble discovered a rough proportionality between redshift of an object and its distance. Though there was considerable scatter (now known to be caused by peculiar velocities—the 'Hubble flow' is used to refer to the region of space far enough out that the recession velocity is larger than local peculiar velocities), Hubble was able to plot a trend line from the 46 galaxies he studied and obtain a value for the Hubble constant of 500 (km/s)/Mpc (much higher than the currently accepted value due to errors in his distance calibrations; see cosmic distance ladder for details). Hubble diagram Hubble's law can be easily depicted in a "Hubble diagram" in which the velocity (assumed approximately proportional to the redshift) of an object is plotted with respect to its distance from the observer. A straight line of positive slope on this diagram is the visual depiction of Hubble's law. Cosmological constant abandoned After Hubble's discovery was published, Albert Einstein abandoned his work on the cosmological constant, a term he had inserted into his equations of general relativity to coerce them into producing the static solution he previously considered the correct state of the universe. The Einstein equations in their simplest form model either an expanding or contracting universe, so Einstein introduced the constant to counter expansion or contraction and lead to a static and flat universe. After Hubble's discovery that the universe was, in fact, expanding, Einstein called his faulty assumption that the universe is static his "greatest mistake". On its own, general relativity could predict the expansion of the universe, which (through observations such as the bending of light by large masses, or the precession of the orbit of Mercury) could be experimentally observed and compared to his theoretical calculations using particular solutions of the equations he had originally formulated. In 1931, Einstein went to Mount Wilson Observatory to thank Hubble for providing the observational basis for modern cosmology. The cosmological constant has regained attention in recent decades as a hypothetical explanation for dark energy. Interpretation
Hubble's law
Wikipedia
511
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
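Hubble's 1929 analysis amounted to fitting a trend line through the origin of a velocity-distance diagram. The Python sketch below shows the least-squares slope such a fit produces; the data points are invented for illustration and are not Hubble's actual measurements.

# Invented velocity-distance pairs standing in for Hubble's 46 galaxies
distances = [0.5, 0.9, 1.4, 2.0, 2.0]    # Mpc
velocities = [270, 450, 650, 1090, 850]  # km/s, with scatter

# Least-squares slope for a line through the origin, v = H0 * d:
H0 = (sum(d * v for d, v in zip(distances, velocities))
      / sum(d * d for d in distances))
print(round(H0))  # ~484 (km/s)/Mpc, the order of Hubble's 1929 value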
The discovery of the linear relationship between redshift and distance, coupled with a supposed linear relation between recessional velocity and redshift, yields a straightforward mathematical expression for Hubble's law as follows: v = H0 D, where v is the recessional velocity, typically expressed in km/s. H0 is Hubble's constant and corresponds to the value of H (often termed the Hubble parameter, which is a value that is time dependent and which can be expressed in terms of the scale factor) in the Friedmann equations taken at the time of observation denoted by the subscript 0. This value is the same throughout the universe for a given comoving time. D is the proper distance (which can change over time, unlike the comoving distance, which is constant) from the galaxy to the observer, measured in megaparsecs (Mpc), in the 3-space defined by given cosmological time. (Recession velocity is just v = dD/dt.) Hubble's law is considered a fundamental relation between recessional velocity and distance. However, the relation between recessional velocity and redshift depends on the cosmological model adopted and is not established except for small redshifts. For distances D larger than the radius of the Hubble sphere r_HS, objects recede at a rate faster than the speed of light (See Uses of the proper distance for a discussion of the significance of this): r_HS = c/H0. Since the Hubble "constant" is a constant only in space, not in time, the radius of the Hubble sphere may increase or decrease over various time intervals. The subscript '0' indicates the value of the Hubble constant today. Current evidence suggests that the expansion of the universe is accelerating (see Accelerating universe), meaning that for any given galaxy, the recession velocity is increasing over time as the galaxy moves to greater and greater distances; however, the Hubble parameter is actually thought to be decreasing with time, meaning that if we were to look at some distance and watch a series of different galaxies pass that distance, later galaxies would pass that distance at a smaller velocity than earlier ones.
Hubble's law
Wikipedia
417
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
Redshift velocity and recessional velocity Redshift can be measured by determining the wavelength of a known transition, such as hydrogen α-lines for distant quasars, and finding the fractional shift compared to a stationary reference. Thus, redshift is a quantity unambiguously acquired from observation. Care is required, however, in translating these to recessional velocities: for small redshift values, a linear relation of redshift to recessional velocity applies, but more generally the redshift-distance law is nonlinear, meaning the relation must be derived specifically for each given model and epoch. Redshift velocity The redshift is often described as a redshift velocity, which is the recessional velocity that would produce the same redshift if it were caused by a linear Doppler effect (which, however, is not the case, as the velocities involved are too large to use a non-relativistic formula for Doppler shift). This redshift velocity can easily exceed the speed of light. In other words, to determine the redshift velocity v_rs, the relation v_rs ≡ cz is used. That is, there is no fundamental difference between redshift velocity and redshift: they are rigidly proportional, and not related by any theoretical reasoning. The motivation behind the "redshift velocity" terminology is that the redshift velocity agrees with the velocity from a low-velocity simplification of the so-called Fizeau–Doppler formula z = λo/λe − 1 = sqrt((1 + v/c)/(1 − v/c)) − 1 ≈ v/c. Here, λo and λe are the observed and emitted wavelengths respectively. The "redshift velocity" is not so simply related to real velocity at larger velocities, however, and this terminology leads to confusion if interpreted as a real velocity. Next, the connection between redshift or redshift velocity and recessional velocity is discussed. Recessional velocity Suppose R(t) is called the scale factor of the universe, and increases as the universe expands in a manner that depends upon the cosmological model selected. Its meaning is that all measured proper distances D(t) between co-moving points increase proportionally to R. (The co-moving points are not moving relative to their local environments.) In other words: D(t) = D(t0) R(t)/R(t0), where t0 is some reference time. If light is emitted from a galaxy at time te and received by us at t0, it is redshifted due to the expansion of the universe, and this redshift z is simply: z = R(t0)/R(te) − 1.
Hubble's law
Wikipedia
491
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
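The contrast drawn above between the redshift velocity v_rs = cz and a true velocity can be made concrete by inverting the relativistic (Fizeau-)Doppler formula. The Python sketch below is illustrative only: it assumes pure special-relativistic motion, whereas cosmological redshift is not actually a Doppler shift.

C = 299_792.458  # speed of light, km/s

def redshift_velocity(z):
    return C * z                    # v_rs = c*z, can exceed c

def doppler_velocity(z):
    s = (1 + z) ** 2                # invert z = sqrt((1+v/c)/(1-v/c)) - 1
    return C * (s - 1) / (s + 1)    # always below c

for z in (0.01, 0.1, 1.0, 2.0):
    print(z, round(redshift_velocity(z)), round(doppler_velocity(z)))
# At z = 0.01 the two agree to about half a percent; at z = 2 the
# redshift velocity is 2c while the Doppler velocity is still 0.8c.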
Suppose a galaxy is at distance D, and this distance changes with time at a rate dD/dt. We call this rate of recession the "recession velocity" v_r: v_r = dD/dt = ((dR/dt)/R) D. We now define the Hubble constant as H ≡ (dR/dt)/R, and discover the Hubble law: v_r = H D. From this perspective, Hubble's law is a fundamental relation between (i) the recessional velocity associated with the expansion of the universe and (ii) the distance to an object; the connection between redshift and distance is a crutch used to connect Hubble's law with observations. This law can be related to redshift z approximately by making a Taylor series expansion: z = R(t0)/R(te) − 1 ≈ (t0 − te) H(t0). If the distance is not too large, all other complications of the model become small corrections, and the time interval is simply the distance divided by the speed of light: z ≈ (D/c) H(t0), or cz ≈ D H(t0) = v_r. According to this approach, the relation cz = v_r is an approximation valid at low redshifts, to be replaced by a relation at large redshifts that is model-dependent. See velocity-redshift figure. Observability of parameters Strictly speaking, neither v nor D in the formula are directly observable, because they are properties of a galaxy now, whereas our observations refer to the galaxy in the past, at the time that the light we currently see left it. For relatively nearby galaxies (redshift z much less than one), v and D will not have changed much, and v can be estimated using the formula v = zc, where c is the speed of light. This gives the empirical relation found by Hubble. For distant galaxies, v (or D) cannot be calculated from z without specifying a detailed model for how H changes with time. The redshift is not even directly related to the recession velocity at the time the light set out, but it does have a simple interpretation: 1 + z is the factor by which the universe has expanded while the photon was traveling towards the observer. Expansion velocity vs. peculiar velocity In using Hubble's law to determine distances, only the velocity due to the expansion of the universe can be used. Since gravitationally interacting galaxies move relative to each other independent of the expansion of the universe, these relative velocities, called peculiar velocities, need to be accounted for in the application of Hubble's law. Such peculiar velocities give rise to redshift-space distortions. Time-dependence of Hubble parameter
Hubble's law
Wikipedia
469
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
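The derivation above can be checked numerically with a toy scale factor. The Python sketch below assumes R(t) proportional to t**(2/3) purely to have a concrete function to differentiate; it shows the linear law holding for nearby emission times and failing for distant ones.

T0 = 1.0  # "today", arbitrary time units

def R(t):
    return t ** (2 / 3)     # toy scale factor

def H(t):
    return 2 / (3 * t)      # H = (dR/dt)/R for this R

for te in (0.99, 0.9, 0.5):
    z_exact = R(T0) / R(te) - 1      # z = R(t0)/R(te) - 1
    z_linear = (T0 - te) * H(T0)     # first-order Taylor estimate
    print(te, round(z_exact, 4), round(z_linear, 4))
# te = 0.99: 0.0067 vs 0.0067 -- the linear law holds nearby; by
# te = 0.5 the exact z (0.587) has pulled away from the estimate
# (0.333), illustrating the model-dependence discussed above.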
The parameter H is commonly called the "Hubble constant", but that is a misnomer since it is constant in space only at a fixed time; it varies with time in nearly all cosmological models, and all observations of far distant objects are also observations into the distant past, when the "constant" had a different value. "Hubble parameter" is a more correct term, with H0 denoting the present-day value. Another common source of confusion is that the accelerating universe does not imply that the Hubble parameter is actually increasing with time; since H(t) ≡ (da/dt)/a, in most accelerating models a increases relatively faster than da/dt, so H decreases with time. (The recession velocity of one chosen galaxy does increase, but different galaxies passing a sphere of fixed radius cross the sphere more slowly at later times.) On defining the dimensionless deceleration parameter q ≡ −(d^2a/dt^2) a/(da/dt)^2, it follows that dH/dt = −H^2 (1 + q). From this it is seen that the Hubble parameter is decreasing with time, unless q < −1; the latter can only occur if the universe contains phantom energy, regarded as theoretically somewhat improbable. However, in the standard Lambda cold dark matter model (Lambda-CDM or ΛCDM model), q will tend to −1 from above in the distant future as the cosmological constant becomes increasingly dominant over matter; this implies that H will approach from above a constant value of ≈ 57 (km/s)/Mpc, and the scale factor of the universe will then grow exponentially in time. Idealized Hubble's law The mathematical derivation of an idealized Hubble's law for a uniformly expanding universe is a fairly elementary theorem of geometry in 3-dimensional Cartesian/Newtonian coordinate space, which, considered as a metric space, is entirely homogeneous and isotropic (properties do not vary with location or direction). Simply stated, the theorem is this: any two points moving away from the origin, each along a straight line and with speed proportional to its distance from the origin, will move away from each other with a speed proportional to their distance apart. In fact, this applies to non-Cartesian spaces as long as they are locally homogeneous and isotropic, specifically to the negatively and positively curved spaces frequently considered as cosmological models (see shape of the universe). An observation stemming from this theorem is that seeing objects recede from us on Earth is not an indication that Earth is near to a center from which the expansion is occurring, but rather that every observer in an expanding universe will see objects receding from them. Ultimate fate and age of the universe The value of the Hubble parameter changes over time, either increasing or decreasing depending on the value of the so-called deceleration parameter q, which is defined by q = −(1 + (dH/dt)/H^2).
Hubble's law
Wikipedia
501
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
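A two-line numeric check of the point above, in Python and assuming the standard flat-ΛCDM density parameters Omega_m = 0.3 and Omega_Lambda = 0.7 (illustrative values): the deceleration parameter comes out negative (acceleration) while dH/dt remains negative (H still falling).

Om, OL = 0.3, 0.7   # assumed flat-LCDM density parameters
H0 = 70.0           # (km/s)/Mpc, assumed

q0 = Om / 2 - OL                  # deceleration parameter today
dH_dt = -(H0 ** 2) * (1 + q0)     # from dH/dt = -H^2 (1 + q)

print(q0)     # -0.55: negative, so the expansion is accelerating...
print(dH_dt)  # ...yet dH/dt < 0 (units (km/s/Mpc)^2; the sign is the
              # point), because q stays above -1 without phantom energy.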
In a universe with a deceleration parameter equal to zero, it follows that H = 1/t, where t is the time since the Big Bang. A non-zero, time-dependent value of q simply requires integration of the Friedmann equations backwards from the present time to the time when the comoving horizon size was zero. It was long thought that q was positive, indicating that the expansion is slowing down due to gravitational attraction. This would imply an age of the universe less than 1/H (which is about 14 billion years). For instance, a value for q of 1/2 (once favoured by most theorists) would give the age of the universe as 2/(3H). The discovery in 1998 that q is apparently negative means that the universe could actually be older than 1/H. However, estimates of the age of the universe are very close to 1/H. Olbers' paradox The expansion of space summarized by the Big Bang interpretation of Hubble's law is relevant to the old conundrum known as Olbers' paradox: If the universe were infinite in size, static, and filled with a uniform distribution of stars, then every line of sight in the sky would end on a star, and the sky would be as bright as the surface of a star. However, the night sky is largely dark. Since the 17th century, astronomers and other thinkers have proposed many possible ways to resolve this paradox, but the currently accepted resolution depends in part on the Big Bang theory, and in part on the Hubble expansion: in a universe that existed for a finite amount of time, only the light of a finite number of stars has had enough time to reach us, and the paradox is resolved. Additionally, in an expanding universe, distant objects recede from us, which causes the light emanated from them to be redshifted and diminished in brightness by the time we see it. Dimensionless Hubble constant Instead of working with Hubble's constant, a common practice is to introduce the dimensionless Hubble constant, usually denoted by h and commonly referred to as "little h", then to write Hubble's constant as 100 h (km/s)/Mpc, all the relative uncertainty of the true value of H0 being then relegated to h. The dimensionless Hubble constant is often used when giving distances that are calculated from redshift z using the formula D ≈ (c/H0) z. Since H0 is not precisely known, the distance is expressed as: D ≈ (2998 × z) h^−1 Mpc. In other words, one calculates 2998 × z and one gives the units as h^−1 Mpc.
Hubble's law
Wikipedia
487
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
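The little-h bookkeeping above can be sketched in a few lines of Python; the distance_coefficient helper is a hypothetical name and simply implements the D ≈ (2998 × z) h^−1 Mpc rule just stated.

C = 299_792.458  # km/s

def distance_coefficient(z):
    """X such that D = X * h^-1 Mpc, from D = c*z / (100*h km/s/Mpc)."""
    return C * z / 100.0

z = 0.1
print(distance_coefficient(z))        # ~299.8: quote "300 h^-1 Mpc"
print(distance_coefficient(z) / 0.7)  # ~428 Mpc if h turns out to be 0.7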
Occasionally a reference value other than 100 may be chosen, in which case a subscript is presented after h to avoid confusion; e.g. h70 denotes H0 = 70 h70 (km/s)/Mpc, which implies h70 = h/0.7. This should not be confused with the dimensionless value of Hubble's constant, usually expressed in terms of Planck units, obtained by multiplying H0 by 1.75 × 10^−63 (from the definitions of the parsec and the Planck time); for example, for H0 = 70, a Planck unit version of 1.2 × 10^−61 is obtained. Acceleration of the expansion A value for q measured from standard candle observations of Type Ia supernovae, which was determined in 1998 to be negative, surprised many astronomers with the implication that the expansion of the universe is currently "accelerating" (although the Hubble factor is still decreasing with time, as mentioned above in the Interpretation section; see the articles on dark energy and the ΛCDM model). Derivation of the Hubble parameter Start with the Friedmann equation: H^2 ≡ ((da/dt)/a)^2 = (8πG/3)ρ − kc^2/a^2 + Λc^2/3, where H is the Hubble parameter, a is the scale factor, G is the gravitational constant, k is the normalised spatial curvature of the universe and equal to −1, 0, or 1, ρ is the mass density, and Λ is the cosmological constant. Matter-dominated universe (with a cosmological constant) If the universe is matter-dominated, then the mass density of the universe can be taken to include just matter, so ρ = ρ_m(a) = ρ_m0/a^3, where ρ_m0 is the density of matter today. From the Friedmann equation and thermodynamic principles we know for non-relativistic particles that their mass density decreases proportional to the inverse volume of the universe, so the equation above must be true. We can also define (see density parameter for Ω_m) ρ_c ≡ 3H0^2/(8πG) and Ω_m ≡ ρ_m0/ρ_c = (8πG/(3H0^2)) ρ_m0; therefore: ρ = ρ_c Ω_m/a^3. Also, by definition, Ω_k ≡ −kc^2/(H0^2 a0^2) and Ω_Λ ≡ Λc^2/(3H0^2), where the subscript 0 refers to the values today, and a0 = 1. Substituting all of this into the Friedmann equation at the start of this section and replacing a with a = 1/(1+z) gives H^2(z) = H0^2 (Ω_m (1+z)^3 + Ω_k (1+z)^2 + Ω_Λ). Matter- and dark energy-dominated universe If the universe is both matter-dominated and dark energy-dominated, then the above equation for the Hubble parameter will also be a function of the equation of state of dark energy. So now: H^2(a) = H0^2 (Ω_m a^−3 + Ω_de(a)), where ρ_de is the mass density of the dark energy and Ω_de(a) its density parameter. By definition, an equation of state in cosmology is P = wρc^2, and if this is substituted into the fluid equation, which describes how the mass density of the universe evolves with time, then dρ/dt + 3((da/dt)/a)(ρ + P/c^2) = 0, so dρ/ρ = −3(1+w) da/a. If w is constant, then integrating gives ρ = ρ_0 a^(−3(1+w)). Therefore, for dark energy with a constant equation of state w, H^2(a) = H0^2 (Ω_m a^−3 + Ω_de a^(−3(1+w))). If this is substituted into the Friedmann equation in a similar way as before, but this time set k = 0, which assumes a spatially flat universe, then (see shape of the universe) H^2(a) = H0^2 (Ω_m a^−3 + (1 − Ω_m) a^(−3(1+w))).
Hubble's law
Wikipedia
510
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
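The final Hubble-parameter expression derived above is easy to evaluate. A minimal Python sketch, assuming illustrative parameter values H0 = 70, Omega_m = 0.3, Omega_Lambda = 0.7, and constant w = −1 dark energy:

import math

def hubble_parameter(z, H0=70.0, Om=0.3, Ok=0.0, OL=0.7):
    """H(z) in (km/s)/Mpc for matter plus constant w = -1 dark energy."""
    return H0 * math.sqrt(Om * (1 + z) ** 3 + Ok * (1 + z) ** 2 + OL)

for z in (0, 0.5, 1, 1100):
    print(z, round(hubble_parameter(z), 1))
# H grows steeply into the past (z = 1100 is roughly recombination),
# matching the statement above that H(t) falls as the universe expands.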
If the dark energy derives from a cosmological constant such as that introduced by Einstein, it can be shown that w = −1. The equation then reduces to the last equation in the matter-dominated universe section, with Ω_k set to zero. In that case the initial dark energy density is given by ρ_de0 = Λc^2/(8πG). If dark energy does not have a constant equation-of-state w, then ρ_de(a) = ρ_de0 exp(−3 ∫ from 1 to a (1 + w(a'))/a' da'), and to solve this, w(a) must be parametrized, for example with w(a) = w0 + wa(1 − a), giving H^2(a) = H0^2 (Ω_m a^−3 + Ω_de a^(−3(1+w0+wa)) e^(−3wa(1−a))). Other ingredients have been formulated. Units derived from the Hubble constant Hubble time The Hubble constant H0 has units of inverse time; the Hubble time tH is simply defined as the inverse of the Hubble constant, i.e. tH ≡ 1/H0. This is slightly different from the age of the universe, which is approximately 13.8 billion years. The Hubble time is the age it would have had if the expansion had been linear, and it is different from the real age of the universe because the expansion is not linear; it depends on the energy content of the universe (see the derivation of the Hubble parameter above). We currently appear to be approaching a period where the expansion of the universe is exponential due to the increasing dominance of vacuum energy. In this regime, the Hubble parameter is constant, and the universe grows by a factor e each Hubble time: a(t) ∝ exp(t/tH). Likewise, the generally accepted value of 2.27 Es−1 means that (at the current rate) the universe would grow by a factor of e^2.27 ≈ 9.7 in one exasecond. Over long periods of time, the dynamics are complicated by general relativity, dark energy, inflation, etc., as explained above. Hubble length The Hubble length or Hubble distance is a unit of distance in cosmology, defined as c/H0, the speed of light multiplied by the Hubble time. It is equivalent to 4,420 million parsecs or 14.4 billion light years. (The numerical value of the Hubble length in light years is, by definition, equal to that of the Hubble time in years.) Substituting D = c/H0 into the equation for Hubble's law, v = H0 D, reveals that the Hubble distance specifies the distance from our location to those galaxies which are receding from us at the speed of light. Hubble volume
Hubble's law
Wikipedia
435
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
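The unit arithmetic behind the Hubble time and Hubble length can be checked directly. The Python sketch below assumes H0 = 67.8 (km/s)/Mpc, close to the value behind the 14.4-billion-year figure quoted above.

KM_PER_MPC = 3.0857e19
SECONDS_PER_YEAR = 3.156e7
C = 299_792.458  # km/s

H0 = 67.8                  # (km/s)/Mpc, assumed
H0_si = H0 / KM_PER_MPC    # converted to s^-1

print(1 / H0_si / SECONDS_PER_YEAR / 1e9)  # ~14.4 (billion years)
print(C / H0)                              # ~4422 (Mpc), the Hubble length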
The Hubble volume is sometimes defined as a volume of the universe with a comoving size of c/H0. The exact definition varies: it is sometimes defined as the volume of a sphere with radius c/H0, or alternatively, a cube of side c/H0. Some cosmologists even use the term Hubble volume to refer to the volume of the observable universe, although this has a radius approximately three times larger. Determining the Hubble constant The value of the Hubble constant, H0, cannot be measured directly, but is derived from a combination of astronomical observations and model-dependent assumptions. Increasingly accurate observations and new models over many decades have led to two sets of highly precise values which do not agree. This difference is known as the "Hubble tension". Earlier measurements For the original 1929 estimate of the constant now bearing his name, Hubble used observations of Cepheid variable stars as "standard candles" to measure distance. The result he obtained was 500 (km/s)/Mpc, much larger than the value astronomers currently calculate. Later observations by astronomer Walter Baade led him to realize that there were distinct "populations" of stars (Population I and Population II) in a galaxy. The same observations led him to discover that there are two types of Cepheid variable stars with different luminosities. Using this discovery, he recalculated the Hubble constant and the size of the known universe, doubling the previous calculation made by Hubble in 1929. He announced this finding to considerable astonishment at the 1952 meeting of the International Astronomical Union in Rome. For most of the second half of the 20th century, the value of H0 was estimated to be between 50 and 90 (km/s)/Mpc. The value of the Hubble constant was the topic of a long and rather bitter controversy between Gérard de Vaucouleurs, who claimed the value was around 100, and Allan Sandage, who claimed the value was near 50. In one demonstration of vitriol shared between the parties, when Sandage and Gustav Andreas Tammann (Sandage's research colleague) formally acknowledged the shortcomings of confirming the systematic error of their method in 1975, Vaucouleurs responded "It is unfortunate that this sober warning was so soon forgotten and ignored by most astronomers and textbook writers". In 1996, a debate moderated by John Bahcall between Sidney van den Bergh and Gustav Tammann was held in similar fashion to the earlier Shapley–Curtis debate over these two competing values.
Hubble's law
Wikipedia
487
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
This previously wide variance in estimates was partially resolved with the introduction of the ΛCDM model of the universe in the late 1990s. Incorporating the ΛCDM model, observations of high-redshift clusters at X-ray and microwave wavelengths using the Sunyaev–Zel'dovich effect, measurements of anisotropies in the cosmic microwave background radiation, and optical surveys all gave a value of around 50–70 km/s/Mpc for the constant. Precision cosmology and the Hubble tension By the late 1990s, advances in ideas and technology allowed higher precision measurements. However, two major categories of methods, each with high precision, fail to agree. "Late universe" measurements using calibrated distance ladder techniques have converged on a value of approximately 73 (km/s)/Mpc. Since 2000, "early universe" techniques based on measurements of the cosmic microwave background have become available, and these agree on a value near 67 (km/s)/Mpc. (This accounts for the change in the expansion rate since the early universe, so is comparable to the first number.) Initially, this discrepancy was within the estimated measurement uncertainties and thus no cause for concern. However, as techniques have improved, the estimated measurement uncertainties have shrunk, but the discrepancies have not, to the point that the disagreement is now highly statistically significant. This discrepancy is called the Hubble tension. An example of an "early" measurement, the Planck mission published in 2018 gives a value for H0 of 67.4 ± 0.5 (km/s)/Mpc. In the "late" camp is the higher value of 73.0 ± 1.0 (km/s)/Mpc determined by the Hubble Space Telescope and confirmed by the James Webb Space Telescope in 2023. The "early" and "late" measurements disagree at the >5 σ level, beyond a plausible level of chance. The resolution to this disagreement is an ongoing area of active research. Reducing systematic errors Since 2013 much effort has gone into new measurements to check for possible systematic errors and to improve reproducibility.
Hubble's law
Wikipedia
396
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
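A back-of-envelope way to see why the disagreement above is called significant is to divide the gap between the two central values by their combined uncertainty. The Python sketch below uses the values quoted in this section; it is a rough gauge, not the full statistical analysis used in the literature.

early, sigma_early = 67.4, 0.5  # CMB-based value quoted above
late, sigma_late = 73.0, 1.0    # distance-ladder value quoted above

tension = abs(late - early) / (sigma_early ** 2 + sigma_late ** 2) ** 0.5
print(round(tension, 1))  # ~5.0 -- a roughly five-sigma disagreement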
The "late universe" or distance ladder measurements typically employ three stages or "rungs". In the first rung distances to Cepheids are determined while trying to reduce luminosity errors from dust and correlations of metallicity with luminosity. The second rung uses Type Ia supernova, explosions of almost constant amount of mass and thus very similar amounts of light; the primary source of systematic error is the limited number of objects that can be observed. The third rung of the distance ladder measures the red-shift of supernova to extract the Hubble flow and from that the constant. At this rung corrections due to motion other than expansion are applied. As an example of the kind of work needed to reduce systematic errors, photometry on observations from the James Webb Space Telescope of extra-galactic Cepheids confirm the findings from the HST. The higher resolution avoided confusion from crowding of stars in the field of view but came to the same value for H0. The "early universe" or inverse distance ladder measures the observable consequences of spherical sound waves on primordial plasma density. These pressure waves – called baryon acoustic oscillations (BAO) – cease once the universe cooled enough for electrons to stay bound to nuclei, ending the plasma and allowing the photons trapped by interaction with the plasma to escape. The pressure waves then become very small perturbations in density imprinted on the cosmic microwave background and on the large scale density of galaxies across the sky. Detailed structure in high precision measurements of the CMB can matched to physics models of the oscillations. These models depend upon the Hubble constant such that a match reveals a value for the constant. Similarly, the BAO affects the statistical distribution of matter, observed as distant galaxies across the sky. These two independent kinds of measurements produce similar values for the constant from the current models, giving strong evidence that systematic errors in the measurements themselves do not affect the result. Other kinds of measurements In addition to measurements based on calibrated distance ladder techniques or measurements of the CMB, other methods have been used to determine the Hubble constant. In October 2018, scientists used information from gravitational wave events (especially those involving the merger of neutron stars, like GW170817), of determining the Hubble constant.
Hubble's law
Wikipedia
472
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
In July 2019, astronomers reported a new method to determine the Hubble constant, and potentially resolve the discrepancy between earlier methods, based on the mergers of pairs of neutron stars, following the detection of the neutron star merger of GW170817, an event known as a dark siren. Their measurement of the Hubble constant is 70.3 (+5.3/−5.0) (km/s)/Mpc. Also in July 2019, astronomers reported another new method, using data from the Hubble Space Telescope and based on distances to red giant stars calculated using the tip of the red-giant branch (TRGB) distance indicator. Their measurement of the Hubble constant is 69.8 ± 1.9 (km/s)/Mpc. In February 2020, the Megamaser Cosmology Project published independent results based on astrophysical masers visible at cosmological distances, which do not require multi-step calibration. That work confirmed the distance ladder results and differed from the early-universe results at a statistical significance level of 95%. In July 2020, measurements of the cosmic background radiation by the Atacama Cosmology Telescope predict that the Universe should be expanding more slowly than is currently observed. In July 2023, an independent estimate of the Hubble constant was derived from a kilonova, the optical afterglow of a neutron star merger. Due to the blackbody nature of early kilonova spectra, such systems provide strongly constraining estimators of cosmic distance. Using the kilonova AT2017gfo (the aftermath of, once again, GW170817), these measurements indicate a local estimate of the Hubble constant of 67.0 ± 3.6 (km/s)/Mpc. Possible resolutions of the Hubble tension The cause of the Hubble tension is unknown, and there are many proposed solutions. The most conservative is that there is an unknown systematic error affecting either early-universe or late-universe observations. Although intuitively appealing, this explanation requires multiple unrelated effects regardless of whether early-universe or late-universe observations are incorrect, and there are no obvious candidates. Furthermore, any such systematic error would need to affect multiple different instruments, since both the early-universe and late-universe observations come from several different telescopes.
Hubble's law
Wikipedia
440
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
Alternatively, it could be that the observations are correct, but some unaccounted-for effect is causing the discrepancy. If the cosmological principle fails, then the existing interpretations of the Hubble constant and the Hubble tension have to be revised, which might resolve the Hubble tension. In particular, such an explanation would require us to be located within a very large void, extending out to about a redshift of 0.5, which is difficult to reconcile with supernova and baryon acoustic oscillation observations. Yet another possibility is that the uncertainties in the measurements could have been underestimated, but given the internal agreements this is unlikely, and it would not resolve the overall tension. Finally, another possibility is new physics beyond the currently accepted cosmological model of the universe, the ΛCDM model. There are many theories in this category: for example, replacing general relativity with a modified theory of gravity could potentially resolve the tension, as could a dark energy component in the early universe, dark energy with a time-varying equation of state, or dark matter that decays into dark radiation. A problem faced by all these theories is that both early-universe and late-universe measurements rely on multiple independent lines of physics, and it is difficult to modify any of those lines while preserving their successes elsewhere. The scale of the challenge can be seen from how some authors have argued that new early-universe physics alone is not sufficient, while other authors argue that new late-universe physics alone is also not sufficient. Nonetheless, astronomers are trying, with interest in the Hubble tension growing strongly since the mid-2010s. Measurements of the Hubble constant
Hubble's law
Wikipedia
342
42975
https://en.wikipedia.org/wiki/Hubble%27s%20law
Physical sciences
Physical cosmology
null
Alternating current (AC) is an electric current that periodically reverses direction and changes its magnitude continuously with time, in contrast to direct current (DC), which flows only in one direction. Alternating current is the form in which electric power is delivered to businesses and residences, and it is the form of electrical energy that consumers typically use when they plug kitchen appliances, televisions, fans and electric lamps into a wall socket. The abbreviations AC and DC are often used to mean simply alternating and direct, respectively, as when they modify current or voltage. The usual waveform of alternating current in most electric power circuits is a sine wave, whose positive half-period corresponds with the positive direction of the current and vice versa (the full period is called a cycle). "Alternating current" most commonly refers to power distribution, but a wide range of other applications are technically alternating current, although it is less common to describe them by that term. In many applications, like guitar amplifiers, different waveforms are used, such as triangular waves or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. These types of alternating current carry information such as sound (audio) or images (video), sometimes carried by modulation of an AC carrier signal. These currents typically alternate at higher frequencies than those used in power transmission.

Transmission, distribution, and domestic power supply
Electrical energy is distributed as alternating current because AC voltage may be increased or decreased with a transformer. This allows the power to be transmitted through power lines efficiently at high voltage, which reduces the energy lost as heat due to resistance of the wire, and transformed to a lower, safer voltage for use. Use of a higher voltage leads to significantly more efficient transmission of power. The power losses (P_w) in the wire are the product of the square of the current (I) and the resistance (R) of the wire, described by the formula P_w = I²R. This means that when transmitting a fixed power on a given wire, if the current is halved (i.e. the voltage is doubled), the power loss due to the wire's resistance will be reduced to one quarter. The power transmitted is equal to the product of the current and the voltage (assuming no phase difference); that is, P_t = IV.
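As a concrete illustration of the I²R relationship, the sketch below transmits the same power at two different voltages and compares the resistive loss: doubling the voltage halves the current and quarters the loss. The 10 MW load and 5-ohm line resistance are hypothetical round numbers, not values from this article.

Python example:
def line_loss_watts(power_w, voltage_v, resistance_ohm):
    current = power_w / voltage_v          # I = P_t / V (unity power factor assumed)
    return current ** 2 * resistance_ohm   # P_w = I²R

P = 10e6   # 10 MW transmitted (hypothetical)
R = 5.0    # total line resistance in ohms (hypothetical)
for v in (110e3, 220e3):
    print(f"{v/1e3:.0f} kV: loss = {line_loss_watts(P, v, R)/1e3:.1f} kW")
# Output: 110 kV -> ~41.3 kW, 220 kV -> ~10.3 kW (one quarter of the loss)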
Alternating current
Wikipedia
454
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
Consequently, power transmitted at a higher voltage requires less loss-producing current than for the same power at a lower voltage. Power is often transmitted at hundreds of kilovolts on pylons, and transformed down to tens of kilovolts to be transmitted on lower level lines, and finally transformed down to 100 V – 240 V for domestic use. High voltages have disadvantages, such as the increased insulation required, and generally increased difficulty in their safe handling. In a power plant, energy is generated at a convenient voltage for the design of a generator, and then stepped up to a high voltage for transmission. Near the loads, the transmission voltage is stepped down to the voltages used by equipment. Consumer voltages vary somewhat depending on the country and size of load, but generally motors and lighting are built to use up to a few hundred volts between phases. The voltage delivered to equipment such as lighting and motor loads is standardized, with an allowable range of voltage over which equipment is expected to operate. Standard power utilization voltages and percentage tolerance vary in the different mains power systems found in the world. High-voltage direct-current (HVDC) electric power transmission systems have become more viable as technology has provided efficient means of changing the voltage of DC power. Transmission with high voltage direct current was not feasible in the early days of electric power transmission, as there was then no economically viable way to step the voltage of DC down for end user applications such as lighting incandescent bulbs.
Alternating current
Wikipedia
304
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
Three-phase electrical generation is very common. The simplest way is to use three separate coils in the generator stator, physically offset by an angle of 120° (one-third of a complete 360° phase) to each other. Three current waveforms are produced that are equal in magnitude and 120° out of phase to each other. If coils are added opposite to these (60° spacing), they generate the same phases with reverse polarity and so can be simply wired together. In practice, higher pole orders are commonly used. For example, a 12-pole machine would have 36 coils (10° spacing). The advantage is that lower rotational speeds can be used to generate the same frequency. For example, a 2-pole machine running at 3600 rpm and a 12-pole machine running at 600 rpm produce the same frequency; the lower speed is preferable for larger machines. If the load on a three-phase system is balanced equally among the phases, no current flows through the neutral point. Even in the worst-case unbalanced (linear) load, the neutral current will not exceed the highest of the phase currents. Non-linear loads (e.g. switch-mode power supplies, which are in widespread use) may require an oversized neutral bus and neutral conductor in the upstream distribution panel to handle harmonics. Harmonics can cause neutral conductor current levels to exceed that of one or all phase conductors.
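Two of the facts above lend themselves to a quick numerical check: the generator frequency follows f = (pole pairs) × (revolutions per second), and three balanced sine currents 120° apart sum to zero at every instant, which is why a balanced load drives no neutral current. A minimal sketch:

Python example:
import math

def frequency_hz(poles, rpm):
    return (poles / 2) * (rpm / 60.0)

print(frequency_hz(2, 3600))   # 60.0 Hz
print(frequency_hz(12, 600))   # 60.0 Hz; same frequency at one sixth the speed

# Three balanced phases cancel at any sample time (values are ~0 up to rounding):
for t in (0.0, 0.003, 0.007):
    total = sum(math.sin(2 * math.pi * 60 * t - k * 2 * math.pi / 3) for k in range(3))
    print(round(total, 12))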
Alternating current
Wikipedia
291
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
For three-phase at utilization voltages, a four-wire system is often used. When stepping down three-phase, a transformer with a Delta (3-wire) primary and a Star (4-wire, center-earthed) secondary is often used, so there is no need for a neutral on the supply side. For smaller customers (just how small varies by country and age of the installation), only a single phase and neutral, or two phases and neutral, are taken to the property. For larger installations, all three phases and neutral are taken to the main distribution panel. From the three-phase main panel, both single- and three-phase circuits may lead off. The three-wire single-phase system, with a single center-tapped transformer giving two live conductors, is a common distribution scheme for residential and small commercial buildings in North America. This arrangement is sometimes incorrectly referred to as two-phase. A similar method is used for a different reason on construction sites in the UK. Small power tools and lighting are supposed to be supplied by a local center-tapped transformer with a voltage of 55 V between each power conductor and earth. This significantly reduces the risk of electric shock in the event that one of the live conductors becomes exposed through an equipment fault, whilst still allowing a reasonable voltage of 110 V between the two conductors for running the tools. An additional wire, called the bond (or earth) wire, is often connected between non-current-carrying metal enclosures and earth ground. This conductor provides protection from electric shock due to accidental contact of circuit conductors with the metal chassis of portable appliances and tools. Bonding all non-current-carrying metal parts into one complete system ensures there is always a low electrical impedance path to ground, sufficient to carry any fault current for as long as it takes for the system to clear the fault. This low impedance path allows the maximum amount of fault current, causing the overcurrent protection device (breakers, fuses) to trip or burn out as quickly as possible, bringing the electrical system to a safe state. All bond wires are bonded to ground at the main service panel, as is the neutral/identified conductor if present.

AC power supply frequencies
The frequency of the electrical system varies by country and sometimes within a country; most electric power is generated at either 50 or 60 hertz. Some countries have a mixture of 50 Hz and 60 Hz supplies, notably electricity power transmission in Japan.
Alternating current
Wikipedia
491
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
Low frequency
A low frequency eases the design of electric motors, particularly for hoisting, crushing and rolling applications, and commutator-type traction motors for applications such as railways. However, low frequency also causes noticeable flicker in arc lamps and incandescent light bulbs. The use of lower frequencies also provided the advantage of lower transmission losses, which are proportional to frequency. The original Niagara Falls generators were built to produce 25 Hz power, as a compromise between low frequency for traction and heavy induction motors, while still allowing incandescent lighting to operate (although with noticeable flicker). Most of the 25 Hz residential and commercial customers for Niagara Falls power were converted to 60 Hz by the late 1950s, although some 25 Hz industrial customers still existed as of the start of the 21st century. 16.7 Hz power (formerly 16 2/3 Hz) is still used in some European rail systems, such as in Austria, Germany, Norway, Sweden and Switzerland.

High frequency
Off-shore, military, textile-industry, marine, aircraft, and spacecraft applications sometimes use 400 Hz, for the benefits of reduced weight of apparatus or higher motor speeds. Computer mainframe systems were often powered by 400 Hz or 415 Hz for the benefits of ripple reduction while using smaller internal AC-to-DC conversion units.

Effects at high frequencies
A direct current flows uniformly throughout the cross-section of a homogeneous electrically conducting wire. An alternating current of any frequency is forced away from the wire's center, toward its outer surface. This is because an alternating current (which is the result of the acceleration of electric charge) creates electromagnetic waves (a phenomenon known as electromagnetic radiation). Electric conductors are not conducive to electromagnetic waves (a perfect electric conductor prohibits all electromagnetic waves within its boundary), so a wire made of a non-perfect conductor (a conductor with finite, rather than infinite, electrical conductivity) pushes the alternating current, along with its associated electromagnetic fields, away from the wire's center. The phenomenon of alternating current being pushed away from the center of the conductor is called the skin effect; a direct current does not exhibit this effect, since a direct current does not create electromagnetic waves.
Alternating current
Wikipedia
438
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
At very high frequencies, the current no longer flows in the wire, but effectively flows on the surface of the wire, within a thickness of a few skin depths. The skin depth is the thickness at which the current density is reduced by 63%. Even at relatively low frequencies used for power transmission (50 Hz – 60 Hz), non-uniform distribution of current still occurs in sufficiently thick conductors. For example, the skin depth of a copper conductor is approximately 8.57 mm at 60 Hz, so high-current conductors are usually hollow to reduce their mass and cost. This tendency of alternating current to flow predominantly in the periphery of conductors reduces the effective cross-section of the conductor. This increases the effective AC resistance of the conductor, since resistance is inversely proportional to the cross-sectional area. A conductor's AC resistance is higher than its DC resistance, causing a higher energy loss due to ohmic heating (also called I²R loss).

Techniques for reducing AC resistance
For low to medium frequencies, conductors can be divided into stranded wires, each insulated from the others, with the relative positions of individual strands specially arranged within the conductor bundle. Wire constructed using this technique is called Litz wire. This measure helps to partially mitigate the skin effect by forcing more equal current throughout the total cross section of the stranded conductors. Litz wire is used for making high-Q inductors, reducing losses in flexible conductors carrying very high currents at lower frequencies, and in the windings of devices carrying higher radio frequency current (up to hundreds of kilohertz), such as switch-mode power supplies and radio frequency transformers.

Techniques for reducing radiation loss
As written above, an alternating current is made of electric charge under periodic acceleration, which causes radiation of electromagnetic waves. Energy that is radiated is lost. Depending on the frequency, different techniques are used to minimize the loss due to radiation.

Twisted pairs
At frequencies up to about 1 GHz, pairs of wires are twisted together in a cable, forming a twisted pair. This reduces losses from electromagnetic radiation and inductive coupling. A twisted pair must be used with a balanced signaling system so that the two wires carry equal but opposite currents. Each wire in a twisted pair radiates a signal, but it is effectively canceled by radiation from the other wire, resulting in almost no radiation loss.
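The skin-depth figure quoted above (approximately 8.57 mm for copper at 60 Hz) follows from the standard formula δ = √(ρ/(πfμ)). The sketch below is a minimal check using textbook constants; the exact millimetre value depends on the resistivity assumed for copper.

Python example:
import math

RHO_COPPER = 1.72e-8        # resistivity of copper in ohm·metres (approximate)
MU_0 = 4 * math.pi * 1e-7   # permeability of free space; for copper, mu ~ mu_0

def skin_depth_m(freq_hz, rho=RHO_COPPER, mu=MU_0):
    return math.sqrt(rho / (math.pi * freq_hz * mu))

for f in (50, 60, 400):
    print(f"{f} Hz: skin depth = {skin_depth_m(f) * 1000:.2f} mm")
# 60 Hz gives ~8.5 mm, consistent with the figure quoted above.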
Alternating current
Wikipedia
471
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
Coaxial cables
Coaxial cables are commonly used at audio frequencies and above for convenience. A coaxial cable has a conductive wire inside a conductive tube, separated by a dielectric layer. The current flowing on the surface of the inner conductor is equal and opposite to the current flowing on the inner surface of the outer tube. The electromagnetic field is thus completely contained within the tube, and (ideally) no energy is lost to radiation or coupling outside the tube. Coaxial cables have acceptably small losses for frequencies up to about 5 GHz. For microwave frequencies greater than 5 GHz, the losses (due mainly to the dielectric separating the inner and outer tubes being a non-ideal insulator) become too large, making waveguides a more efficient medium for transmitting energy. Coaxial cables often use a perforated dielectric layer to separate the inner and outer conductors in order to minimize the power dissipated by the dielectric.

Waveguides
Waveguides are similar to coaxial cables, as both consist of tubes, with the biggest difference being that waveguides have no inner conductor. Waveguides can have any arbitrary cross section, but rectangular cross sections are the most common. Because waveguides do not have an inner conductor to carry a return current, waveguides cannot deliver energy by means of an electric current, but rather by means of a guided electromagnetic field. Although surface currents do flow on the inner walls of the waveguides, those surface currents do not carry power. Power is carried by the guided electromagnetic fields. The surface currents are set up by the guided electromagnetic fields and have the effect of keeping the fields inside the waveguide and preventing leakage of the fields to the space outside the waveguide. Waveguides have dimensions comparable to the wavelength of the alternating current to be transmitted, so they are feasible only at microwave frequencies. In addition to this mechanical feasibility, electrical resistance of the non-ideal metals forming the walls of the waveguide causes dissipation of power (surface currents flowing on lossy conductors dissipate power). At higher frequencies, the power lost to this dissipation becomes unacceptably large.
Alternating current
Wikipedia
447
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
Fiber optics
At frequencies greater than 200 GHz, waveguide dimensions become impractically small, and the ohmic losses in the waveguide walls become large. Instead, fiber optics, which are a form of dielectric waveguides, can be used. For such frequencies, the concepts of voltages and currents are no longer used.

Formulation
Alternating currents are accompanied (or caused) by alternating voltages. An AC voltage v can be described mathematically as a function of time by the equation v(t) = V_peak sin(ωt), where V_peak is the peak voltage (unit: volt), ω is the angular frequency (unit: radians per second), and t is the time (unit: second). The angular frequency is related to the physical frequency, f (unit: hertz), which represents the number of cycles per second, by the equation ω = 2πf. The peak-to-peak value of an AC voltage is defined as the difference between its positive peak and its negative peak. Since the maximum value of sin(ωt) is +1 and the minimum value is −1, an AC voltage swings between +V_peak and −V_peak. The peak-to-peak voltage, usually written as V_pp, is therefore 2V_peak.

Root mean square voltage
Below, an AC waveform with no DC component is assumed. The RMS voltage is the square root of the mean over one cycle of the square of the instantaneous voltage; for a sinusoidal voltage this evaluates to V_rms = V_peak/√2.

Power
The relationship between the voltage and the power delivered is p(t) = v(t)²/R, where R represents a load resistance. Rather than using instantaneous power p(t), it is more practical to use the time-averaged power (where the averaging is performed over any integer number of cycles), which in terms of the RMS voltage is P = V_rms²/R. This is why AC voltage is usually expressed as a root mean square (RMS) value, written as V_rms.

Power oscillation
Because the instantaneous power is proportional to the square of the voltage, the AC power waveform is a full-wave rectified sine, and its fundamental frequency is double that of the voltage.

Examples of alternating current
To illustrate these concepts, consider a 230 V AC mains supply used in many countries around the world. It is so called because its root mean square value is 230 V. This means that the time-averaged power delivered is equivalent to the power delivered by a DC voltage of 230 V. To determine the peak voltage (amplitude), we can rearrange the above equation to V_peak = √2 × V_rms.
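The RMS definition can be verified numerically. The sketch below samples one cycle of a sine wave, averages the squared samples, and compares the result with the closed form V_peak/√2; the 325 V amplitude roughly matches the peak of a 230 V RMS mains sine.

Python example:
import math

V_PEAK = 325.0    # volts
N = 100_000       # samples over one cycle

mean_square = sum((V_PEAK * math.sin(2 * math.pi * n / N)) ** 2 for n in range(N)) / N
v_rms = math.sqrt(mean_square)

print(v_rms)                    # ~229.8 V
print(V_PEAK / math.sqrt(2))    # ~229.8 V; matches the closed form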
Alternating current
Wikipedia
463
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
For 230 V AC, the peak voltage is therefore √2 × 230 V, which is about 325 V, and the peak instantaneous power is V_peak²/R, twice the time-averaged power. During the course of one voltage cycle (two cycles of the power waveform), the voltage rises from zero to 325 V and the instantaneous power from zero to its peak, and both fall back to zero. The voltage then reverses, descending to −325 V, while the power rises again to its peak, and both return to zero once more.

Information transmission
Alternating current is used to transmit information, as in the cases of telephone and cable television. Information signals are carried over a wide range of AC frequencies. POTS telephone signals have a frequency of about 3 kHz, close to the baseband audio frequency. Cable television and other cable-transmitted information currents may alternate at frequencies of tens to thousands of megahertz. These frequencies are similar to the electromagnetic wave frequencies often used to transmit the same types of information over the air.

History
The first alternator to produce alternating current was an electric generator based on Michael Faraday's principles, constructed by the French instrument maker Hippolyte Pixii in 1832. Pixii later added a commutator to his device to produce the (then) more commonly used direct current. The earliest recorded practical application of alternating current is by Guillaume Duchenne, inventor and developer of electrotherapy. In 1855, he announced that AC was superior to direct current for electrotherapeutic triggering of muscle contractions. Alternating current technology was developed further by the Hungarian Ganz Works company (1870s) and, in the 1880s, by Sebastian Ziani de Ferranti, Lucien Gaulard, and Galileo Ferraris. In 1876, Russian engineer Pavel Yablochkov invented a lighting system where sets of induction coils were installed along a high-voltage AC line. Instead of changing voltage, the primary windings transferred power to the secondary windings, which were connected to one or several electric candles (arc lamps) of his own design, used to keep the failure of one lamp from disabling the entire circuit. In 1878, the Ganz factory, Budapest, Hungary, began manufacturing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria-Hungary. Their AC systems used arc and incandescent lamps, generators, and other equipment.
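The double-frequency behaviour of the power waveform can be illustrated in the same way. In the sketch below, the 50 Hz frequency and the load resistance (chosen so that the average power is 1 kW) are hypothetical illustration values; the instantaneous power peaks at twice the average, at both the positive and the negative voltage peak.

Python example:
import math

V_RMS = 230.0
R = V_RMS ** 2 / 1000.0          # resistance chosen so the average power is 1 kW
V_PEAK = V_RMS * math.sqrt(2)    # ~325 V

def p(t, f=50.0):
    v = V_PEAK * math.sin(2 * math.pi * f * t)
    return v * v / R             # instantaneous power into a resistive load

print(V_PEAK ** 2 / R)           # 2000.0 W; twice the 1 kW average
print(p(0.005), p(0.015))        # ~2000 W at both t = T/4 and t = 3T/4 of a 50 Hz cycle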
Alternating current
Wikipedia
467
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
Transformers
The development of the alternating current transformer, which could change voltage from a low to a high level and back, allowed generation and consumption at low voltages but transmission, over great distances, at high voltage, with savings in the cost of conductors and energy losses. A bipolar open-core power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881, and attracted the interest of Westinghouse. They went on to install an AC system powering arc and incandescent lights along five railway stations of the Metropolitan Railway in London, and exhibited a single-phase multiple-user AC distribution system in Turin in 1884. These early induction coils with open magnetic circuits were inefficient at transferring power to loads. Until about 1880, the paradigm for AC power transmission from a high-voltage supply to a low-voltage load was a series circuit. Open-core transformers with a ratio near 1:1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. The inherent flaw in this method was that turning off a single lamp (or other electric device) affected the voltage supplied to all others on the same circuit. Many adjustable transformer designs were introduced to compensate for this problematic characteristic of the series circuit, including those employing methods of adjusting the core or bypassing the magnetic flux around part of a coil. Direct current systems did not have these drawbacks, giving them significant advantages over early AC systems.

In the UK, Sebastian de Ferranti, who had been developing AC generators and transformers in London since 1882, redesigned the AC system at the Grosvenor Gallery power station in 1886 for the London Electric Supply Corporation (LESCo), including alternators of his own design and open-core transformer designs with serial connections for utilization loads, similar to those of Gaulard and Gibbs. In 1890, he designed their power station at Deptford and converted the Grosvenor Gallery station across the Thames into an electrical substation, showing the way to integrate older plants into a universal AC supply system.
Alternating current
Wikipedia
411
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three engineers associated with the Ganz Works of Budapest, determined that open-core devices were impractical, as they were incapable of reliably regulating voltage. Bláthy had suggested the use of closed cores, Zipernowsky had suggested the use of parallel shunt connections, and Déri had performed the experiments. In their joint 1885 patent applications for novel transformers (later called ZBD transformers), they described two designs with closed magnetic circuits where copper windings were either wound around a ring core of iron wires or else surrounded by a core of iron wires. In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see toroidal cores). The new transformers were 3.4 times more efficient than the open-core bipolar devices of Gaulard and Gibbs. In 1884, the Ganz factory shipped the world's first five high-efficiency AC transformers. The first unit had been manufactured to the following specifications: 1,400 W, 40 Hz, 120:72 V, 11.6:19.4 A, ratio 1.67:1, one-phase, shell form. The ZBD patents included two other major interrelated innovations: one concerning the use of parallel-connected, instead of series-connected, utilization loads, the other concerning the ability to have high-turns-ratio transformers such that the supply network voltage could be much higher (initially 140 to 2000 V) than the voltage of utilization loads (100 V initially preferred). When employed in parallel-connected electric distribution systems, closed-core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces. The other essential milestone was the introduction of 'voltage source, voltage intensive' (VSVI) systems, by the invention of constant-voltage generators in 1885. In early 1885, the three engineers also eliminated the problem of eddy current losses with the invention of the lamination of electromagnetic cores. Ottó Bláthy also invented the first AC electricity meter.

Adoption
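The quoted specification of the first ZBD unit is internally consistent with ideal-transformer behaviour, in which the voltage ratio equals the turns ratio, the current ratio is its inverse, and power is approximately conserved across the windings. A quick sanity check:

Python example:
v_primary, v_secondary = 120.0, 72.0
i_primary, i_secondary = 11.6, 19.4

print(v_primary / v_secondary)    # ~1.67, matching the quoted ratio
print(i_secondary / i_primary)    # ~1.67, the inverse relationship on the current side
print(v_primary * i_primary)      # ~1392 W in
print(v_secondary * i_secondary)  # ~1397 W out, consistent with the 1,400 W rating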
Alternating current
Wikipedia
448
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
The AC power system was developed and adopted rapidly after 1886. In March of that year, Westinghouse engineer William Stanley, designing a system based on the Gaulard and Gibbs transformer, demonstrated a lighting system in Great Barrington: a Siemens generator's voltage of 500 volts was converted into 3,000 volts, and then the voltage was stepped down to 500 volts by six Westinghouse transformers. With this setup, the Westinghouse company successfully powered thirty 100-volt incandescent bulbs in twenty shops along the main street of Great Barrington. By the fall of that year, Ganz engineers had installed a ZBD transformer power system with AC generators in Rome. Based on Stanley's success, the new Westinghouse Electric went on to develop alternating current (AC) electric infrastructure throughout the United States. The spread of Westinghouse and other AC systems triggered a pushback in late 1887 by Thomas Edison (a proponent of direct current), who attempted to discredit alternating current as too dangerous in a public campaign called the "war of the currents". In 1888, alternating current systems gained further viability with the introduction of a functional AC motor, something these systems had lacked until then. The design, an induction motor, was independently invented by Galileo Ferraris and Nikola Tesla (with Tesla's design being licensed by Westinghouse in the US). This design was independently further developed into the modern practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown in Germany on one side, and Jonas Wenström in Sweden on the other, though Brown favored the two-phase system.
Alternating current
Wikipedia
335
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
The Ames Hydroelectric Generating Plant, constructed in 1890, was among the first hydroelectric alternating current power plants. A long-distance transmission of single-phase electricity from a hydroelectric generating plant in Oregon at Willamette Falls sent power fourteen miles downriver to downtown Portland for street lighting in 1890. In 1891, another transmission system was installed in Telluride, Colorado. The first three-phase system was established in 1891 in Frankfurt, Germany. The Tivoli–Rome transmission was completed in 1892. The San Antonio Canyon Generator was the third commercial single-phase hydroelectric AC power plant in the United States to provide long-distance electricity. It was completed on December 31, 1892, by Almarian William Decker to provide power to the city of Pomona, California, which was 14 miles away. Meanwhile, the possibility of transferring electrical power from a waterfall at a distance was explored at the Grängesberg mine in Sweden. A fall at Hällsjön, Smedjebackens kommun, where a small ironworks had been located, was selected. In 1893, a three-phase system was used to transfer 400 horsepower a distance of , becoming the first commercial application. In 1893, Westinghouse built an alternating current system for the Chicago World Exposition. That same year, Decker designed the first American commercial three-phase power plant using alternating current, the hydroelectric Mill Creek No. 1 Hydroelectric Plant near Redlands, California. Decker's design incorporated 10 kV three-phase transmission and established the standards for the complete system of generation, transmission and motors used in the USA today. The original Niagara Falls Adams Power Plant, with three two-phase generators, was put into operation in August 1895, but was connected to the remote transmission system only in 1896. The Jaruga Hydroelectric Power Plant in Croatia was set in operation two days later, on 28 August 1895. Its generator (42 Hz, 240 kW) was made and installed by the Hungarian company Ganz, while the transmission line from the power plant to the City of Šibenik was long, and the municipal distribution grid 3000 V/110 V included six transforming stations. Alternating current circuit theory developed rapidly in the latter part of the 19th and early 20th centuries. Notable contributors to the theoretical basis of alternating current calculations include Charles Steinmetz, Oliver Heaviside, and many others. Calculations in unbalanced three-phase systems were simplified by the symmetrical components method discussed by Charles LeGeyt Fortescue in 1918.
Alternating current
Wikipedia
490
42986
https://en.wikipedia.org/wiki/Alternating%20current
Physical sciences
Electrical circuits
null
The sugar glider (Petaurus breviceps) is a small, omnivorous, arboreal, and nocturnal gliding possum. The common name refers to its predilection for sugary foods such as sap and nectar and its ability to glide through the air, much like a flying squirrel. They have very similar habits and appearance to the flying squirrel, despite not being closely related, an example of convergent evolution. The scientific name, Petaurus breviceps, translates from Latin as "short-headed rope-dancer", a reference to the animal's canopy acrobatics. The sugar glider is characterised by its pair of gliding membranes, known as patagia, which extend from its forelegs to its hindlegs. Gliding serves as an efficient means of reaching food and evading predators. The animal is covered in soft, pale grey to light brown fur which is countershaded, being lighter in colour on its underside. The sugar glider, as strictly defined in a recent analysis, is native only to a small portion of southeastern Australia, corresponding to southern Queensland and most of New South Wales east of the Great Dividing Range; the extended species group, including populations which may or may not belong to P. breviceps, occupies a larger range covering much of coastal eastern and northern Australia, New Guinea, and nearby islands. Members of Petaurus are popular exotic pets; these pet animals are also frequently referred to as "sugar gliders", but recent research indicates, at least for American pets, that they are not P. breviceps but a closely related species, ultimately originating from a single source near Sorong in West Papua. This would possibly make them members of Krefft's glider (P. notatus), but the taxonomy of Papuan Petaurus populations is still poorly resolved.

Taxonomy and evolution
The genus Petaurus is believed to have originated in New Guinea during the mid-Miocene epoch, approximately 18 to 24 million years ago. The modern Australian Petaurus, along with New Guinean members of what was formerly considered P. breviceps, diverged from their closest living New Guinean relatives about 9-12 mya. They probably dispersed from New Guinea to Australia between 4.8 and about 8.4 mya, with the oldest Petaurus fossils in Australia being dated to 4.46 million years ago. This may have been made possible by sea level lowering from about 7 to 10 mya, resulting in land bridges between New Guinea and Australia.
Sugar glider
Wikipedia
509
42991
https://en.wikipedia.org/wiki/Sugar%20glider
Biology and health sciences
Diprotodontia
Animals
The taxonomy of the species is complex, and is still not fully resolved. It was formerly understood to have a wide range across Australia and New Guinea, being the only glider with this distribution, and to be divided into seven subspecies, with three occurring in Australia and four in New Guinea. This traditional subspecific division was based on small morphological differences, such as colour and body size. However, a 2010 genetic analysis using mitochondrial DNA indicated that these morphologically defined subspecies may not represent genetically unique populations. Further studies have found significant genetic variation within populations traditionally classified in P. breviceps, sufficient to warrant splitting the species into multiple species. The subspecies P. b. biacensis, from Biak Island off New Guinea, was reclassified as a separate species, the Biak glider (Petaurus biacensis). In 2020, a landmark study suggested that P. breviceps actually comprised three cryptic species: Krefft's glider (Petaurus notatus), found throughout most of eastern Australia and introduced to Tasmania; the savanna glider (Petaurus ariel), native to northern Australia; and a more narrowly defined P. breviceps, restricted to a small section of coastal forest in southern Queensland and most of New South Wales. In addition, other sugar glider populations throughout this range (such as those on New Guinea and the Cape York Peninsula) may represent undescribed species or be conspecific with previously described species. This indicates that, contrary to previous findings of a large range (which in fact applied to P. notatus and, to a lesser extent, to P. ariel), P. breviceps is a range-restricted species that is sensitive to ecological disasters, such as the 2019-20 Australian bushfires, which significantly affected large portions of its habitat. P. breviceps and P. notatus are estimated to have diverged about 1 million years ago, and the split may have originated from long-term geographic isolation. The early-mid Pleistocene saw an uplifting of the Great Dividing Range, contributing to and coinciding with aridification of the interior of Australia, including on the western side of the range. This, as well as other climatic and geographic factors, may have isolated the ancestors of P. breviceps in refugia on the eastern, coastal side of the Great Dividing Range. This would be an example of allopatric speciation.
Sugar glider
Wikipedia
498
42991
https://en.wikipedia.org/wiki/Sugar%20glider
Biology and health sciences
Diprotodontia
Animals
Distribution and habitat
Sugar gliders are distributed in the coastal forests of southeastern Queensland and most of New South Wales. Their distribution extends to altitudes of 2,000 m in the eastern ranges. In parts of its range, the species may overlap with Krefft's glider (P. notatus). The sugar glider occurs in sympatry with the squirrel glider and yellow-bellied glider; their coexistence is permitted through niche partitioning, where each species has different patterns of resource use. Like all arboreal, nocturnal marsupials, sugar gliders are active at night, and they shelter during the day in tree hollows lined with leafy twigs. The average home range of sugar gliders is , and is largely related to the abundance of food sources; density ranges from two to six individuals per hectare (0.8–2.4 per acre). Native owls (Ninox sp.) are their primary predators; others in their range include kookaburras, goannas, snakes, and quolls. Feral cats (Felis catus) also represent a significant threat.

Appearance and anatomy
The sugar glider has a squirrel-like body with a long, partially (weakly) prehensile tail. The length from the nose to the tip of the tail is about , and males and females weigh respectively. Heart rate range is 200–300 beats per minute, and respiratory rate is 16–40 breaths per minute. The sugar glider is a sexually dimorphic species, with males typically larger than females. Sexual dimorphism has likely evolved due to increased mate competition arising through social group structure, and is more pronounced in regions of higher latitude, where mate competition is greater due to increased food availability.
Sugar glider
Wikipedia
352
42991
https://en.wikipedia.org/wiki/Sugar%20glider
Biology and health sciences
Diprotodontia
Animals
The fur coat on the sugar glider is thick, soft, and usually blue-grey, although some individuals are known to be yellow, tan or (rarely) albino. A black stripe runs from its nose to midway on its back. Its belly, throat, and chest are cream in colour. Males have four scent glands: one on the forehead, one on the chest, and two paracloacal glands (associated with, but not part of, the cloaca, which is the common opening for the intestinal, urinary and genital tracts) that are used for marking of group members and territory. Scent glands on the head and chest of males appear as bald spots. Females also have a paracloacal scent gland and a scent gland in the pouch, but do not have scent glands on the chest or forehead. The sugar glider is nocturnal; its large eyes help it to see at night and its ears swivel to help locate prey in the dark. The eyes are set far apart, allowing more precise triangulation from launching to landing locations while gliding. Each foot on the sugar glider has five digits, with an opposable toe on each hind foot. These opposable toes are clawless, and bend such that they can touch all the other digits, like a human thumb, allowing the animal to firmly grasp branches. The second and third digits of the hind foot are partially syndactylous (fused together), forming a grooming comb. The fourth digit of the forefoot is sharp and elongated, aiding in the extraction of insects under the bark of trees. The gliding membrane extends from the outside of the fifth digit of each forefoot to the first digit of each hind foot. When the legs are stretched out, this membrane allows the sugar glider to glide a considerable distance. The membrane is supported by well-developed tibiocarpalis, humerodorsalis and tibioabdominalis muscles, and its movement is controlled by these supporting muscles in conjunction with trunk, limb and tail movement. Lifespan in the wild is up to 9 years; it is typically up to 12 years in captivity, and the maximum reported lifespan is 17.8 years.

Biology and behaviour
Sugar glider
Wikipedia
441
42991
https://en.wikipedia.org/wiki/Sugar%20glider
Biology and health sciences
Diprotodontia
Animals
Gliding
The sugar glider is one of a number of volplane (gliding) possums in Australia. It glides with the fore- and hind-limbs extended at right angles to the body, with feet flexed upwards. The animal launches itself from a tree, spreading its limbs to expose the gliding membranes. This creates an aerofoil enabling it to glide or more. For every travelled horizontally when gliding, it falls . Steering is controlled by moving the limbs and adjusting the tension of the gliding membrane; for example, to turn left, the left forearm is lowered below the right. This form of arboreal locomotion is typically used to travel from tree to tree; the species rarely descends to the ground. Gliding provides three-dimensional avoidance of arboreal predators and minimal contact with ground-dwelling predators, as well as possible benefits in decreasing the time and energy spent foraging for nutrient-poor foods that are irregularly distributed. Young carried in the pouch of females are protected from landing forces by the septum that separates them within the pouch.

Torpor
Sugar gliders can tolerate ambient air temperatures of up to through behavioural strategies such as licking their coat and exposing the wet area, as well as drinking small quantities of water. In cold weather, sugar gliders will huddle together to avoid heat loss, and will enter torpor to conserve energy. Huddling as an energy-conserving mechanism is not as efficient as torpor. Before entering torpor, a sugar glider will reduce activity and body temperature normally in order to lower energy expenditure and avoid torpor. Under energetic constraints, the sugar glider will enter daily torpor for 2–23 hours while in the rest phase. Torpor differs from hibernation in that torpor is usually a short-term daily cycle. Entering torpor saves energy for the animal by allowing its body temperature to fall to a minimum of to . When food is scarce, as in winter, heat production is lowered in order to reduce energy expenditure. With low energy and heat production, it is important for the sugar glider to reach peak body mass, through fat deposition, in the autumn (May/June) in order to survive the following cold season. In the wild, sugar gliders enter daily torpor more often than sugar gliders in captivity. The use of torpor is most frequent during winter, likely in response to low ambient temperature, rainfall, and seasonal fluctuation in food sources.

Diet and nutrition
Sugar glider
Wikipedia
487
42991
https://en.wikipedia.org/wiki/Sugar%20glider
Biology and health sciences
Diprotodontia
Animals