Concentrated solar power

Concentrated solar power (CSP, also known as concentrating solar power, concentrated solar thermal) systems generate solar power by using mirrors or lenses to concentrate a large area of sunlight into a receiver. Electricity is generated when the concentrated light is converted to heat (solar thermal energy), which drives a heat engine (usually a steam turbine) connected to an electrical power generator or powers a thermochemical reaction.
As of 2021, global installed capacity of concentrated solar power stood at 6.8 GW. As of 2023, the total was 8.1 GW, including three new CSP projects under construction in China and in Dubai in the UAE. The U.S.-based National Renewable Energy Laboratory (NREL), which maintains a global database of CSP plants, counts 6.6 GW of operational capacity and another 1.5 GW under construction.
Comparison between CSP and other electricity sources
As a thermal energy generating power station, CSP has more in common with thermal power stations such as coal, gas, or geothermal plants. A CSP plant can incorporate thermal energy storage, which stores energy either as sensible heat or as latent heat (for example, using molten salt), enabling these plants to continue supplying electricity whenever it is needed, day or night. This makes CSP a dispatchable form of solar power. Dispatchable renewable energy is particularly valuable in places that already have a high penetration of photovoltaics (PV), such as California, because demand for electric power peaks near sunset just as PV capacity ramps down (a phenomenon referred to as the duck curve).
CSP is often compared to photovoltaic solar (PV) since they both use solar energy. While solar PV experienced huge growth during the 2010s due to falling prices, solar CSP growth has been slow due to technical difficulties and high prices. In 2017, CSP represented less than 2% of worldwide installed capacity of solar electricity plants.
However, CSP can more easily store energy for use at night, making it more competitive with dispatchable generators and baseload plants.
The DEWA project in Dubai, under construction in 2019, held the world record for lowest CSP price in 2017 at US$73 per MWh for its 700 MW combined trough and tower project: 600 MW of trough, 100 MW of tower with 15 hours of thermal energy storage daily.
Base-load CSP tariff in the extremely dry Atacama region of Chile reached below $50/MWh in 2017 auctions.
History
Legend has it that Archimedes used a "burning glass" to concentrate sunlight on the invading Roman fleet and repel it from Syracuse. In 1973 a Greek scientist, Dr. Ioannis Sakkas, curious about whether Archimedes could really have destroyed the Roman fleet in 212 BC, lined up nearly 60 Greek sailors, each holding an oblong mirror tipped to catch the sun's rays and direct them at a tar-covered plywood silhouette of a ship some distance away. The silhouette caught fire after a few minutes; however, historians continue to doubt the Archimedes story.
In 1866, Auguste Mouchout used a parabolic trough to produce steam for the first solar steam engine. The first patent for a solar collector was obtained by the Italian Alessandro Battaglia in Genoa, Italy, in 1886. Over the following years, inventors such as John Ericsson and Frank Shuman developed concentrating solar-powered devices for irrigation, refrigeration, and locomotion. In 1913 Shuman finished a parabolic solar thermal energy station in Maadi, Egypt, for irrigation. The first solar-power system using a mirror dish was built by Dr. R.H. Goddard, who was already well known for his research on liquid-fueled rockets and wrote an article in 1929 in which he asserted that all the previous obstacles had been addressed.
Professor Giovanni Francia (1911–1980) designed and built the first concentrated-solar plant, which entered into operation in Sant'Ilario, near Genoa, Italy in 1968. This plant had the architecture of today's power tower plants, with a solar receiver in the center of a field of solar collectors. The plant was able to produce 1 MW with superheated steam at 100 bar and 500 °C. The 10 MW Solar One power tower was developed in Southern California in 1981. Solar One was converted into Solar Two in 1995, implementing a new design with a molten salt mixture (60% sodium nitrate, 40% potassium nitrate) as the receiver working fluid and as a storage medium. The molten salt approach proved effective, and Solar Two operated successfully until it was decommissioned in 1999. The parabolic-trough technology of the nearby Solar Energy Generating Systems (SEGS), begun in 1984, was more workable. The 354 MW SEGS was the largest solar power plant in the world until 2014.
No commercial concentrated solar plants were constructed from 1990, when SEGS was completed, until 2006, when the compact linear Fresnel reflector system at Liddell Power Station in Australia was built. Few other plants were built with this design, although the 5 MW Kimberlina Solar Thermal Energy Plant opened in 2009.
In 2007, the 75 MW Nevada Solar One was built, a trough design and the first large plant since SEGS. Between 2010 and 2013, Spain built over 40 parabolic trough systems, each constrained to no more than 50 MW by the support scheme. In countries without such limits, manufacturers have adopted sizes of up to 200 MW for a single unit, with a cost sweet spot around 125 MW per unit.
Due to the success of Solar Two, a commercial power plant called the Solar Tres Power Tower was built in Spain in 2011 and later renamed the Gemasolar Thermosolar Plant. Gemasolar's results paved the way for further plants of its type. The Ivanpah Solar Power Facility was constructed at the same time but without thermal storage, using natural gas to preheat water each morning.
Most concentrated solar power plants use the parabolic trough design instead of the power tower or Fresnel systems. There have also been variations of parabolic trough systems, such as the integrated solar combined cycle (ISCC), which combines troughs with conventional fossil-fuel heat systems.
CSP was originally treated as a competitor to photovoltaics, and Ivanpah was built without energy storage, although Solar Two had included several hours of thermal storage. By 2015, prices for photovoltaic plants had fallen, and PV commercial power was selling for a fraction of the price of contemporary CSP contracts. However, CSP was increasingly being bid with 3 to 12 hours of thermal energy storage, making it a dispatchable form of solar energy. As such, it is increasingly seen as competing with natural gas and with PV-plus-battery systems for flexible, dispatchable power.
Current technology
CSP is used to produce electricity (sometimes called solar thermoelectricity, usually generated through steam). Concentrated solar technology systems use mirrors or lenses with tracking systems to focus a large area of sunlight onto a small area. The concentrated light is then used as heat or as a heat source for a conventional power plant (solar thermoelectricity). The solar concentrators used in CSP systems can often also be used to provide industrial process heating or cooling, such as in solar air conditioning.
Concentrating technologies exist in four optical types, namely parabolic trough, dish, concentrating linear Fresnel reflector, and solar power tower. Parabolic trough and concentrating linear Fresnel reflectors are classified as linear-focus collector types, while dish and solar tower are point-focus types. Linear-focus collectors achieve medium concentration factors (50 suns and over), and point-focus collectors achieve high concentration factors (over 500 suns). Although simple, these solar concentrators are quite far from the theoretical maximum concentration. For example, parabolic-trough concentration reaches about a third of the theoretical maximum for its design acceptance angle, that is, for the same overall tolerances for the system. Approaching the theoretical maximum may be achieved by using more elaborate concentrators based on nonimaging optics.
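For reference, the theoretical maximum mentioned above is a standard result from nonimaging optics (not given in this article): conservation of étendue limits a concentrator designed to accept light within a half-angle $\theta$ to

$$C_{\max,\text{2D}} = \frac{1}{\sin\theta}, \qquad C_{\max,\text{3D}} = \frac{1}{\sin^2\theta}.$$

Taking the Sun's half-angle of about 0.27° gives a 3D limit on the order of 45,000 suns, which is why even 500-sun point-focus collectors sit far below the theoretical maximum; practical designs also accept larger tolerance angles, lowering the achievable concentration further.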
Different types of concentrators produce different peak temperatures and correspondingly varying thermodynamic efficiencies, due to differences in the way they track the sun and focus light. Innovations in CSP technology are making systems increasingly cost-effective.
In 2023, Australia's national science agency CSIRO tested a CSP arrangement in which tiny ceramic particles fall through a beam of concentrated solar energy; the ceramic particles can store more heat than molten salt and do not require a container that would diminish heat transfer.
Parabolic trough
A parabolic trough consists of a linear parabolic reflector that concentrates light onto a receiver positioned along the reflector's focal line. The receiver is a tube positioned at the longitudinal focal line of the parabolic mirror and filled with a working fluid. The reflector follows the sun during daylight hours by tracking along a single axis. The working fluid (e.g. molten salt) is heated to 150–350 °C as it flows through the receiver and is then used as a heat source for a power generation system. Trough systems are the most developed CSP technology. Representative plants include the Solar Energy Generating Systems (SEGS) in California, among the longest-running solar thermal plants in the world until their closure in 2021; Acciona's Nevada Solar One near Boulder City, Nevada; Andasol, Europe's first commercial parabolic trough plant; and the SSPS-DCS test facilities at the Plataforma Solar de Almería in Spain.
Enclosed trough
The design encapsulates the solar thermal system within a greenhouse-like glasshouse. The glasshouse creates a protected environment to withstand the elements that can negatively impact the reliability and efficiency of the solar thermal system. Lightweight curved solar-reflecting mirrors are suspended from the ceiling of the glasshouse by wires. A single-axis tracking system positions the mirrors to collect the optimal amount of sunlight. The mirrors concentrate the sunlight and focus it on a network of stationary steel pipes, also suspended from the glasshouse structure. Water is carried throughout the length of the pipes and is boiled to generate steam when intense solar radiation is applied. Sheltering the mirrors from the wind allows them to reach higher temperatures and prevents dust from building up on them.
GlassPoint Solar, the company that created the enclosed trough design, states its technology can produce heat for enhanced oil recovery (EOR) for about $5 per million British thermal units in sunny regions, compared to between $10 and $12 for other conventional solar thermal technologies.
Solar power tower
A solar power tower consists of an array of dual-axis tracking reflectors (heliostats) that concentrate sunlight on a central receiver atop a tower; the receiver contains a heat-transfer fluid, which can consist of water-steam or molten salt. Optically, a solar power tower is the same as a circular Fresnel reflector. The working fluid in the receiver is heated to 500–1,000 °C (932–1,832 °F) and then used as a heat source for a power generation or energy storage system. An advantage of the solar tower is that the individual reflectors can be adjusted instead of the whole tower. Power-tower development is less advanced than trough systems, but towers offer higher efficiency and better energy-storage capability. A beam-down tower configuration, in which heliostats heat the working fluid, is also feasible. CSP plants with dual towers have also been used to enhance conversion efficiency by nearly 24%.
The Solar Two in Daggett, California, and the CESA-1 at the Plataforma Solar de Almería in Almería, Spain, are the most representative demonstration plants. The Planta Solar 10 (PS10) in Sanlúcar la Mayor, Spain, is the first commercial utility-scale solar power tower in the world. The 377 MW Ivanpah Solar Power Facility, located in the Mojave Desert, was the largest CSP facility in the world, and uses three power towers. Ivanpah generated only 0.652 TWh (63%) of its energy from solar means; the other 0.388 TWh (37%) was generated by burning natural gas.
Supercritical carbon dioxide can be used instead of steam as heat-transfer fluid for increased electricity production efficiency. However, because of the high temperatures in arid areas where solar power is usually located, it is impossible to cool down carbon dioxide below its critical temperature in the compressor inlet. Therefore, supercritical carbon dioxide blends with higher critical temperatures are currently in development.
Fresnel reflectors
Fresnel reflectors are made of many thin, flat mirror strips that concentrate sunlight onto tubes through which a working fluid is pumped. Flat mirrors allow more reflective surface in the same amount of space than a parabolic reflector, thus capturing more of the available sunlight, and they are much cheaper than parabolic reflectors. Fresnel reflectors can be used in CSP plants of various sizes.
Fresnel reflectors are sometimes regarded as a technology with lower output than other methods, but their cost efficiency is what leads some to use them instead of designs with higher output ratings. Some new models of Fresnel reflectors designed with ray tracing have begun to be tested and have initially proved to yield higher output than the standard version.
Dish Stirling
A dish Stirling or dish engine system consists of a stand-alone parabolic reflector that concentrates light onto a receiver positioned at the reflector's focal point. The reflector tracks the Sun along two axes. The working fluid in the receiver is heated to 250–700 °C and then used by a Stirling engine to generate power. Parabolic-dish systems provide high solar-to-electric efficiency (between 31% and 32%), and their modular nature provides scalability. The Stirling Energy Systems (SES), United Sun Systems (USS) and Science Applications International Corporation (SAIC) dishes at UNLV, and the Australian National University's Big Dish in Canberra, Australia, are representative of this technology. A world record for solar-to-electric efficiency was set at 31.25% by SES dishes at the National Solar Thermal Test Facility (NSTTF) in New Mexico on 31 January 2008, a cold, bright day. According to its developer, Ripasso Energy, a Swedish firm, in 2015 its dish Stirling system tested in the Kalahari Desert in South Africa showed 34% efficiency. The SES installation in Maricopa, Arizona, was the largest Stirling dish power installation in the world until it was sold to United Sun Systems; large parts of the installation have since been moved to China to help satisfy its large energy demand.
CSP with thermal energy storage
In a CSP plant that includes storage, the solar energy is first used to heat molten salt or synthetic oil, which is stored at high temperature in insulated tanks. Later, the hot molten salt (or oil) is used in a steam generator to produce steam that generates electricity via a steam turbine generator, as required. Solar energy, which is available only in daylight, is thus used to generate electricity round the clock on demand, as a load-following power plant or solar peaker plant. The thermal storage capacity is indicated in hours of power generation at nameplate capacity. Unlike solar PV or CSP without storage, the power generation from solar thermal storage plants is dispatchable and self-sustaining, similar to coal- or gas-fired power plants, but without the pollution. CSP plants with thermal energy storage can also be used as cogeneration plants to supply both electricity and process steam round the clock. As of December 2018, generation costs for CSP plants with thermal energy storage ranged between €0.05/kWh and €0.07/kWh, depending on whether a location receives good or medium solar radiation. Unlike solar PV plants, CSP with thermal energy storage can also be used economically around the clock to produce process steam, replacing polluting fossil fuels. CSP plants can also be integrated with solar PV for better synergy.
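To make the "hours of power generation at nameplate capacity" convention concrete, here is a minimal illustrative sketch; the plant sizes and dispatch levels are hypothetical examples, not figures from the article:

```python
# Minimal sketch: interpret a CSP plant's thermal-storage rating.
# All figures below are illustrative assumptions, not data from the article.

def storage_energy_mwh(nameplate_mw: float, storage_hours: float) -> float:
    """Storage rated in 'hours at nameplate capacity' -> deliverable MWh."""
    return nameplate_mw * storage_hours

def hours_at_partial_load(nameplate_mw: float, storage_hours: float,
                          dispatch_mw: float) -> float:
    """How long the plant can keep running after sunset at a reduced level."""
    return storage_energy_mwh(nameplate_mw, storage_hours) / dispatch_mw

# Example: a hypothetical 100 MW plant rated with 10 hours of storage
print(storage_energy_mwh(100, 10))         # 1000 MWh of dispatchable energy
print(hours_at_partial_load(100, 10, 50))  # 20.0 hours when dispatched at 50 MW
```

This is also why the same thermal tank supports either peaker operation (full power for the rated hours) or load-following operation (lower power for longer).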
CSP systems with thermal storage are also available using Brayton-cycle generators, which use air instead of steam to generate electricity and/or steam round the clock; these CSP plants are equipped with gas turbines to generate electricity. They are small in capacity (<0.4 MW) and flexible enough to install on a few acres of land. Waste heat from the power plant can also be used for process steam generation and HVAC needs. Where land availability is not a limitation, any number of these modules can be installed, up to 1,000 MW, with reliability, availability, maintainability and safety (RAMS) and cost advantages, since the per-MW costs of these units are lower than those of larger solar thermal stations.
Centralized district heating round the clock is also feasible with concentrated solar thermal storage plants.
Deployment around the world
An early plant operated in Sicily at Adrano. US deployment of CSP plants started by 1984 with the SEGS plants; the last SEGS plant was completed in 1990. From 1991 to 2005, no CSP plants were built anywhere in the world. Global installed CSP capacity increased nearly tenfold between 2004 and 2013, growing at an average of 50 percent per year during the last five of those years as the number of countries with installed CSP grew. In 2013, worldwide installed capacity increased by 36%, or nearly 0.9 gigawatts (GW), to more than 3.4 GW. The record year for newly installed capacity was 2014, with 925 MW; a decline followed, caused by policy changes, the global financial crisis, and the rapid decrease in the price of photovoltaic cells. Nevertheless, total capacity reached 6,800 MW in 2021.
Spain accounted for almost one third of the world's capacity, at 2,300 MW, despite no new capacity entering commercial operation in the country since 2013.
The United States follows with 1,740 MW. Interest is also notable in North Africa and the Middle East, as well as China and India. There is a notable trend towards developing countries and regions with high solar radiation with several large plants under construction in 2017.
The global market was initially dominated by parabolic-trough plants, which accounted for 90% of CSP plants at one point.
Since about 2010, central power tower CSP has been favored in new plants due to its higher-temperature operation – up to 565 °C, vs. the trough's maximum of about 400 °C – which promises greater efficiency.
Among the larger CSP projects are the Ivanpah Solar Power Facility (392 MW) in the United States, which uses solar power tower technology without thermal energy storage, and the Ouarzazate Solar Power Station in Morocco, which combines trough and tower technologies for a total of 510 MW with several hours of energy storage.
Cost
On generation cost alone, bulk power from CSP today is much more expensive than solar PV or wind power; however, PV and wind are intermittent sources, and comparing costs at the level of the electricity grid gives a different conclusion. Developers hope that CSP with energy storage can be a cheaper alternative to PV with battery energy storage (BESS). Research has found that PV with BESS is competitive for short storage durations, while CSP with thermal energy storage (TES) gains economic advantages for long storage periods. The tipping point lies at 2–10 hours of storage, depending on the costs of the component blocks: CSP, PV, TES, and BESS.
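A rough way to see where such a tipping point comes from is to compare capital cost per kW of firm capacity as a function of storage hours: batteries cost far more per kWh than molten-salt tanks, while the CSP power block costs more per kW than PV. The sketch below uses entirely hypothetical unit costs (placeholders, not figures from the article or any market report):

```python
# Illustrative cost-crossover sketch. All unit costs are hypothetical
# placeholders, not data from the article or any market source.

PV_PER_KW    = 800.0   # $/kW for the PV block (assumed)
BESS_PER_KWH = 350.0   # $/kWh for battery storage (assumed)
CSP_PER_KW   = 3500.0  # $/kW for the CSP field + power block (assumed)
TES_PER_KWH  = 30.0    # $/kWh for molten-salt thermal storage (assumed)

def pv_bess_cost(storage_hours: float) -> float:
    """Capital cost per kW of firm capacity for PV + batteries."""
    return PV_PER_KW + BESS_PER_KWH * storage_hours

def csp_tes_cost(storage_hours: float) -> float:
    """Capital cost per kW of firm capacity for CSP + thermal storage."""
    return CSP_PER_KW + TES_PER_KWH * storage_hours

# Find the first storage duration where CSP+TES undercuts PV+BESS.
for h in range(1, 25):
    if csp_tes_cost(h) < pv_bess_cost(h):
        print(f"With these assumptions, CSP+TES is cheaper beyond ~{h} h")
        break
```

With these particular assumptions the crossover lands near 9 hours; the printed value shifts with every unit cost, which is exactly why the research cited above quotes a 2–10 hour range rather than a single number.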
As early as 2011, the rapid decline of the price of photovoltaic systems led to projections that CSP (without TES) would no longer be economically viable. As of 2020, the least expensive utility-scale concentrated solar power stations in the United States and worldwide were five times more expensive than utility-scale photovoltaic power stations, with a projected minimum price of 7 cents per kilowatt-hour for the most advanced CSP stations (with TES) against record lows of 1.32 cents per kWh for utility-scale PV (without BESS). This five-fold price difference has been maintained since 2018. Some PV-CSP plants in China have sought to operate profitably on the regional coal tariff of 5 US cents per kWh in 2021.
Even though overall deployment of CSP remains limited in the early 2020s, the levelized cost of power from commercial-scale plants has decreased significantly since the 2010s. With a learning rate estimated at around 20% cost reduction for every doubling of capacity, costs were approaching the upper end of the fossil-fuel cost range at the beginning of the 2020s, driven by support schemes in several countries, including Spain, the US, Morocco, South Africa, China, and the UAE.
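Expressed as a worked equation (the standard experience-curve form, not given explicitly in the article), a learning rate $LR = 0.20$ means that as cumulative capacity grows from $x_0$ to $x$, cost follows

$$C(x) = C(x_0)\left(\frac{x}{x_0}\right)^{b}, \qquad b = \log_2(1 - LR) \approx -0.322,$$

so, for example, two doublings (a fourfold capacity increase) would cut costs to $0.8^2 = 64\%$ of the starting level.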
CSP deployment has slowed considerably in OECD countries, as most of the above-mentioned markets have cancelled their support, but CSP is one of the few renewable electricity technologies that can generate fully dispatchable or even fully baseload power at very large scale. It may therefore have an important role to play in decarbonizing power grids as a dispatchable electricity source that balances intermittent renewables such as wind power and PV. CSP in combination with thermal energy storage (TES) is expected by some to become cheaper than PV with lithium batteries for storage durations above 4 hours per day, while NREL expects that by 2030 PV with 10-hour lithium battery storage will cost the same as PV with 4-hour storage did in 2020. Countries with no PV cell production capability and low labour costs may substantially reduce the local CSP/PV cost gap.
Efficiency
The efficiency of a concentrating solar power system depends on the technology used to convert the solar power to electrical energy, the operating temperatures of the receiver and of the heat rejection, thermal losses in the system, and the presence or absence of other system losses. In addition to the conversion efficiency, the optical system that concentrates the sunlight adds further losses.
Real-world systems claim a maximum conversion efficiency of 23–35% for "power tower" type systems, operating at temperatures from 250 to 565 °C, with the higher efficiency number assuming a combined-cycle turbine. Dish Stirling systems, operating at temperatures of 550–750 °C, claim an efficiency of about 30%. Due to variation in sun incidence during the day, the average conversion efficiency achieved is not equal to these maximum efficiencies, and the net annual solar-to-electricity efficiencies are 7–20% for pilot power tower systems, and 12–25% for demonstration-scale Stirling dish systems.
Conversion efficiency is a significant consideration only where land costs are not low; where land is cheap, a lower efficiency can be offset by a larger collector field.
Theory
The maximum conversion efficiency of any thermal to electrical energy system is given by the Carnot efficiency, which represents a theoretical limit to the efficiency that can be achieved by any system, set by the laws of thermodynamics. Real-world systems do not achieve the Carnot efficiency.
The conversion efficiency of the incident solar radiation into mechanical work depends on the thermal radiation properties of the solar receiver and on the heat engine (e.g. steam turbine).
Solar irradiation is first converted into heat by the solar receiver with efficiency $\eta_{\text{receiver}}$, and subsequently the heat is converted into mechanical energy by the heat engine with efficiency $\eta_{\text{mech}}$, limited by Carnot's principle. The mechanical energy is then converted into electrical energy by a generator.
For a solar receiver with a mechanical converter (e.g., a turbine), the overall conversion efficiency can be defined as follows:

$$\eta_{\text{system}} = \eta_{\text{optics}} \cdot \eta_{\text{receiver}} \cdot \eta_{\text{mech}} \cdot \eta_{\text{generator}}$$

where $\eta_{\text{optics}}$ represents the fraction of incident light concentrated onto the receiver, $\eta_{\text{receiver}}$ the fraction of light incident on the receiver that is converted into heat energy, $\eta_{\text{mech}}$ the efficiency of conversion of heat energy into mechanical energy, and $\eta_{\text{generator}}$ the efficiency of converting the mechanical energy into electrical power.

$\eta_{\text{receiver}}$ is:

$$\eta_{\text{receiver}} = \frac{Q_{\text{absorbed}} - Q_{\text{lost}}}{Q_{\text{solar}}}$$

with $Q_{\text{solar}}$, $Q_{\text{absorbed}}$, $Q_{\text{lost}}$ respectively the incoming solar flux and the fluxes absorbed and lost by the system solar receiver.
The conversion efficiency $\eta_{\text{mech}}$ is at most the Carnot efficiency, which is determined by the temperature of the receiver $T$ and the temperature of the heat rejection ("heat sink temperature") $T_0$:

$$\eta_{\text{Carnot}} = 1 - \frac{T_0}{T}$$
The real-world efficiencies of typical engines achieve 50% to at most 70% of the Carnot efficiency due to losses such as heat loss and windage in the moving parts.
Ideal case
For a solar flux $I$ (e.g. $I = 1000\ \mathrm{W/m^2}$) concentrated $C$ times with an efficiency $\eta_{\text{optics}}$ on the system solar receiver with a collecting area $A$ and an absorptivity $\alpha$:

$$Q_{\text{solar}} = \eta_{\text{optics}}\, I\, C\, A,$$

$$Q_{\text{absorbed}} = \alpha\, Q_{\text{solar}}.$$

For simplicity's sake, one can assume that the losses are only radiative ones (a fair assumption for high temperatures); thus for a reradiating area $A$ and an emissivity $\epsilon$, applying the Stefan–Boltzmann law yields:

$$Q_{\text{lost}} = A\, \epsilon\, \sigma\, T^4.$$

Simplifying these equations by considering perfect optics ($\eta_{\text{optics}} = 1$), ignoring the ultimate conversion step into electricity by a generator, and taking collecting and reradiating areas equal with maximum absorptivity and emissivity ($\alpha = 1$, $\epsilon = 1$), then substituting in the first equation gives

$$\eta = \left(1 - \frac{\sigma T^4}{I C}\right)\left(1 - \frac{T_0}{T}\right).$$
The graph shows that the overall efficiency does not increase steadily with the receiver's temperature. Although the heat engine's efficiency (Carnot) increases with higher temperature, the receiver's efficiency does not. On the contrary, the receiver's efficiency decreases, as the amount of energy it cannot absorb ($Q_{\text{lost}}$) grows with the fourth power of temperature. Hence, there is a maximum reachable temperature. When the receiver efficiency is null (blue curve on the figure below), $T_{\max}$ is:

$$T_{\max} = \left(\frac{I C}{\sigma}\right)^{1/4}.$$
There is a temperature $T_{\text{opt}}$ for which the efficiency is maximum, i.e. when the derivative of the efficiency with respect to the receiver temperature is null:

$$\left.\frac{d\eta}{dT}\right|_{T = T_{\text{opt}}} = 0.$$

Consequently, this leads us to the following equation:

$$4T_{\text{opt}}^5 - 3T_0T_{\text{opt}}^4 - \frac{I C T_0}{\sigma} = 0.$$

Solving this equation numerically allows us to obtain the optimum process temperature according to the solar concentration ratio (red curve on the figure below).
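As an illustration not in the original text, the optimum can be computed with a standard root finder. This minimal Python sketch assumes $I = 1000\ \mathrm{W/m^2}$ and $T_0 = 300\ \mathrm{K}$ as example inputs:

```python
# Minimal sketch: find the optimum receiver temperature T_opt by solving
#   4*T**5 - 3*T0*T**4 - I*C*T0/sigma = 0
# which comes from d(eta)/dT = 0 for the ideal-case efficiency
#   eta(T) = (1 - sigma*T**4/(I*C)) * (1 - T0/T)
from scipy.optimize import brentq

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def eta(T, I=1000.0, C=500.0, T0=300.0):
    """Ideal-case solar-to-mechanical efficiency at receiver temperature T (K)."""
    return (1.0 - SIGMA * T**4 / (I * C)) * (1.0 - T0 / T)

def t_max(I=1000.0, C=500.0):
    """Temperature at which the receiver re-radiates everything it absorbs."""
    return (I * C / SIGMA) ** 0.25

def t_opt(I=1000.0, C=500.0, T0=300.0):
    """Optimum receiver temperature: root of the derivative condition."""
    f = lambda T: 4 * T**5 - 3 * T0 * T**4 - I * C * T0 / SIGMA
    return brentq(f, T0, t_max(I, C))  # the optimum lies between T0 and T_max

for conc in (100, 500, 1000, 5000):
    T = t_opt(C=conc)
    print(f"C = {conc:5d} suns: T_opt = {T:7.1f} K, eta = {eta(T, C=conc):.3f}")
```

Consistent with the red curve described above, the optimum temperature rises with the concentration ratio, which is one reason point-focus systems (towers, dishes) target higher temperatures than linear-focus systems.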
Theoretical efficiencies aside, real-world experience of CSP reveals a 25%–60% shortfall in projected production, a good part of which is due to the practical Carnot cycle losses not included in the above analysis.
Incentives and markets
Spain
In 2008, Spain launched the first commercial-scale CSP market in Europe. Until 2012, solar-thermal electricity generation was eligible for feed-in tariff payments (art. 2 RD 661/2007), leading to the creation of the largest CSP fleet in the world, which at 2.3 GW of installed capacity contributes about 5 TWh of power to the Spanish grid every year.
The initial requirements for plants in the FiT were:
Systems registered in the register of systems prior to 29 September 2008: 50 MW for solar-thermal systems.
Systems registered after 29 September 2008 (PV only).
The capacity limits for the different system types were re-defined during the review of the application conditions every quarter (art. 5 RD 1578/2008, Annex III RD 1578/2008). Prior to the end of an application period, the market caps specified for each system type are published on the website of the Ministry of Industry, Tourism and Trade (art. 5 RD 1578/2008). Because of cost concerns, Spain halted acceptance of new projects for the feed-in tariff on 27 January 2012. Already-accepted projects were affected by a 6% "solar tax" on feed-in tariffs, effectively reducing the tariff.
In this context, the Spanish Government enacted the Royal Decree-Law 9/2013 in 2013, aimed at the adoption of urgent measures to guarantee the economic and financial stability of the electric system, laying the foundations of the new Law 24/2013 of the Spanish electricity sector. This new retroactive legal-economic framework applied to all the renewable energy systems was developed in 2014 by the RD 413/2014, which abolished the former regulatory frameworks set by the RD 661/2007 and the RD 1578/2008 and defined a new remuneration scheme for these assets.
After a lost decade for CSP in Europe, Spain announced in its National Energy and Climate Plan the intention of adding 5 GW of CSP capacity between 2021 and 2030. Towards this end, bi-annual auctions of 200 MW of CSP capacity were expected starting in October 2022, but details are not yet known.
Australia
Several CSP dishes have been set up in remote Aboriginal settlements in the Northern Territory: Hermannsburg, Yuendumu and Lajamanu.
So far, no commercial-scale CSP project has been commissioned in Australia, though several have been proposed. In 2017, the now-bankrupt American CSP developer SolarReserve was awarded a PPA to realize the 150 MW Aurora Solar Thermal Power Project in South Australia at a record low rate of just AUD 0.08/kWh, or close to USD 0.06/kWh; however, the company failed to secure financing, and the project was cancelled. Another promising application for CSP in Australia is mines that need 24/7 electricity but often have no grid connection. Vast Solar, a startup aiming to commercialize a novel modular third-generation CSP design, was looking to start construction of a 50 MW combined CSP and PV facility at Mt. Isa in north-west Queensland in 2021.
At the federal level, under the Large-scale Renewable Energy Target (LRET), in operation under the Renewable Energy Electricity Act 2000, large-scale solar thermal electricity generation from accredited RET power stations may be entitled to create large-scale generation certificates (LGCs). These certificates can then be sold and transferred to liable entities (usually electricity retailers) to meet their obligations under this tradeable certificates scheme. However, as this legislation is technology neutral in its operation, it tends to favour more established RE technologies with a lower levelised cost of generation, such as large-scale onshore wind, rather than solar thermal and CSP.
At state level, renewable energy feed-in laws typically are capped by maximum generation capacity in kWp, and are open only to micro or medium scale generation and in a number of instances are only open to solar photovoltaic (PV) generation. This means that larger scale CSP projects would not be eligible for payment for feed-in incentives in many of the State and Territory jurisdictions.
China
In 2024, China is offering second-generation CSP technology to compete, without any direct or indirect subsidies, with other on-demand electricity generation methods, whether renewable or fossil-fuelled. In the current 14th five-year plan, CSP projects are being developed in several provinces alongside GW-scale solar PV and wind projects.
In 2016, China announced its intention to build a batch of 20 technologically diverse CSP demonstration projects in the context of the 13th five-year plan, with the aim of building up an internationally competitive CSP industry. Since the first plants were completed in 2018, electricity generated by the plants with thermal storage has been supported with an administratively set FiT of RMB 1.5 per kWh. At the end of 2020, China operated a total of 545 MW in 12 CSP plants: seven plants (320 MW) are molten-salt towers, two plants (150 MW) use the proven Eurotrough 150 parabolic trough design, and three plants (75 MW) use linear Fresnel collectors. Plans to build a second batch of demonstration projects were never enacted, and further technology-specific support for CSP in the upcoming 14th five-year plan is unknown. Central support for the demonstration-batch projects ran out at the end of 2021.
India
In March 2024, the Solar Energy Corporation of India (SECI) announced that a request for quotation (RfQ) for 500 MW would be issued in 2024.
Solar thermal reactors
CSP has uses other than electricity generation. Researchers are investigating solar thermal reactors for the production of solar fuels, which would make solar energy a fully transportable form of energy in the future. These researchers use the solar heat of CSP to drive thermochemical reactions that break apart molecules of H2O, creating hydrogen (H2) from solar energy with no carbon emissions. By splitting both H2O and CO2, other widely used hydrocarbons – for example, the jet fuel used to fly commercial airplanes – could also be created with solar energy rather than from fossil fuels.
Heat from the sun can be used to provide steam used to make heavy oil less viscous and easier to pump. This process is called solar thermal enhanced oil recovery. Solar power towers and parabolic troughs can be used to provide the steam, which is used directly, so no generators are required and no electricity is produced. Solar thermal enhanced oil recovery can extend the life of oilfields with very thick oil which would not otherwise be economical to pump.
Carbon-neutral synthetic fuel production using concentrated solar thermal energy at temperatures of nearly 1500 °C is technically feasible and will be commercially viable if the costs of CSP plants decline. Carbon-neutral hydrogen can also be produced with solar thermal energy (CSP) using the sulfur–iodine cycle, hybrid sulfur cycle, iron oxide cycle, copper–chlorine cycle, zinc–zinc oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle, or an alternative.
Gigawatt-scale solar power plants
Around the turn of the millennium and up to about 2010, there were several proposals for gigawatt-size, very-large-scale solar power plants using CSP. They include the Euro-Mediterranean Desertec proposal and Project Helios in Greece (10 GW), both now canceled. A 2003 study concluded that the world could generate 2,357,840 TWh each year from very-large-scale solar power plants using 1% of each of the world's deserts; total consumption worldwide was 15,223 TWh/year (in 2003). The gigawatt-size projects would have been arrays of standard-sized single plants. In 2012, the BLM made available a large area of land in the southwestern United States for solar projects, enough for between 10,000 and 20,000 GW. The largest single plant in operation is the 510 MW Noor Solar Power Station. In 2022, the 700 MW CSP fourth phase of the 5 GW Mohammed bin Rashid Al Maktoum Solar Park in Dubai was set to become the largest solar complex featuring CSP.
Suitable sites
The locations with highest direct irradiance are dry, at high altitude, and located in the tropics. These locations have a higher potential for CSP than areas with less sun.
Abandoned opencast mines, moderate hill slopes, and crater depressions may be advantageous in the case of power tower CSP, as the power tower can be located on the ground integral with the molten salt storage tank.
Environmental effects
CSP has a number of environmental impacts, particularly by the use of water and land.
Water is generally used for cooling and to clean mirrors. Some projects are looking into various approaches to reduce the water and cleaning agents used, including the use of barriers, non-stick coatings on mirrors, water misting systems, and others.
Water use
Concentrating solar power plants with wet-cooling systems have the highest water-consumption intensities of any conventional type of electric power plant; only fossil-fuel plants with carbon capture and storage may have higher water intensities. A 2013 study comparing various sources of electricity found that the median operational water consumption of concentrating solar power plants with wet cooling, for both power tower and trough plants, was higher than the operational water consumption (with cooling towers) of nuclear, coal, or natural gas plants. A 2011 study by the National Renewable Energy Laboratory came to similar conclusions: among power plants with cooling towers, water consumption during operations was higher for CSP trough and tower plants than for coal, nuclear, and natural gas plants. The Solar Energy Industries Association has noted that the Nevada Solar One trough CSP plant is similarly water-intensive. The issue of water consumption is heightened because CSP plants are often located in arid environments where water is scarce.
In 2007, the US Congress directed the Department of Energy to report on ways to reduce water consumption by CSP. The subsequent report noted that dry cooling technology was available that, although more expensive to build and operate, could reduce water consumption by CSP by 91 to 95 percent. A hybrid wet/dry cooling system could reduce water consumption by 32 to 58 percent. A 2015 report by NREL noted that of the 24 operating CSP power plants in the US, 4 used dry cooling systems. The four dry-cooled systems were the three power plants at the Ivanpah Solar Power Facility near Barstow, California, and the Genesis Solar Energy Project in Riverside County, California. Of 15 CSP projects under construction or development in the US as of March 2015, 6 were wet systems, 7 were dry systems, 1 hybrid, and 1 unspecified.
Although many older thermoelectric power plants with once-through cooling or cooling ponds use more water than CSP, meaning that more water passes through their systems, most of that cooling water is returned to the water body and remains available for other uses, and less is consumed by evaporation. For instance, the median US coal power plant with once-through cooling withdraws a large volume of water per unit of generation, but less than one percent of it is lost through evaporation.
Effects on wildlife
Insects can be attracted to the bright light caused by concentrated solar technology, and as a result birds that hunt them can be killed by being burned if they fly near the point where light is being focused. This can also affect raptors that hunt the birds. Federal wildlife officials were quoted by opponents as calling the Ivanpah power towers "mega traps" for wildlife.
Some media sources have reported that concentrated solar power plants have injured or killed large numbers of birds due to intense heat from the concentrated sunrays. Some of the claims may have been overstated or exaggerated.
According to rigorous reporting, over six months of its first year of operation, 321 bird fatalities were counted at Ivanpah, of which 133 were related to sunlight being reflected onto the boilers. Over a full year, this figure rose to a total count of 415 bird fatalities from known causes and 288 from unknown causes. Taking into account the search efficiency for dead bird carcasses, total avian mortality for the first year was estimated at 1,492 from known causes and 2,012 from unknown causes. Of the bird deaths due to known causes, 47.4% were burned, 51.9% died of collision effects, and 0.7% died from other causes. Mitigation actions can reduce these numbers, such as focusing no more than four mirrors on any one place in the air during standby, as was done at the Crescent Dunes Solar Energy Project. Over the 2020–2021 period, 288 bird fatalities were directly accounted for at Ivanpah, a figure consistent with the ranges found in previous annual assessments. To put this in perspective, in Germany alone up to 2 million birds die each year interacting with overhead power lines. In more general terms, a 2016 preliminary study assessed that annual bird mortality per MW of installed power was similar between U.S. concentrated solar power plants and wind power plants, and higher for fossil-fuel power plants.
Nodule (geology)

In geology and particularly in sedimentology, a nodule is a small, irregularly rounded knot, mass, or lump of a mineral or mineral aggregate that typically has a contrasting composition from the enclosing sediment or sedimentary rock. Examples include pyrite nodules in coal, a chert nodule in limestone, or a phosphorite nodule in marine shale. Normally, a nodule has a warty or knobby surface and exists as a discrete mass within the host strata. In general, they lack any internal structure except for the preserved remnants of original bedding or fossils. Nodules are closely related to concretions and sometimes these terms are used interchangeably. Minerals that typically form nodules include calcite, chert, apatite (phosphorite), anhydrite, and pyrite.
Nodular is used to describe a sediment or sedimentary rock composed of scattered to loosely packed nodules in a matrix of like or unlike character. It is also used to describe mineral aggregates that occur in the form of nodules, e.g. a colloform mineral aggregate with a bulbed surface.
Nodule is also used for widely scattered concretionary lumps of manganese, cobalt, iron, and nickel found on the floors of the world's oceans. This is especially true of manganese nodules. Manganese and phosphorite nodules form on the seafloor and are syndepositional in origin. Thus, technically speaking, they are concretions instead of nodules.
Chert and flint nodules are often found in beds of limestone and chalk. They form from the redeposition of amorphous silica, which arises from the dissolution of siliceous sponge spicules or of radiolarian debris, and from the postdepositional replacement of the enclosing limestone or chalk by this silica.
Turritellidae

Turritellidae, with the common name "tower shells" or "tower snails", is a taxonomic family of small- to medium-sized sea snails, marine gastropod molluscs in the Sorbeoconcha clade.
They are filter feeders; this method of feeding is somewhat unusual among gastropod mollusks, but is very common in bivalves.
Shell description
The shells of turritellid species have more convex whorls and more circular apertures than those of the auger shells, which are similarly high-spired. The columella is curved, and the thin operculum has many horns.
Anatomy of the soft parts
These snails burrow into mud or sand; their feet are relatively small.
Taxonomy
The following genera are recognised in the family Turritellidae:
†Omalaxinae
†Omalaxis Deshayes, 1832
Orectospirinae
Orectospira Dall, 1925
Pareorinae
†Batillona Finlay, 1927
†Eligmostoma Cossmann, 1888
Mesalia Gray, 1847
Pareora Marwick, 1931
†Sigmesalia H. J. Finlay & Marwick, 1937
Protominae
†Allmonia Harzhauser & Landau, 2019
Protoma Baird, 1870
Turritellinae
Archimediella
Armatus
†Asiella
Banzarecolpus
Broderiptella
Callostracum
†Calvertitella
Caviturritella
†Colposigma
Colpospira
†Costacolpus
†Cristispira
Gazameda
Haustator
Helminthia
Incatella
†Kapalmerella
Maoricolpus
†Mariacolpus
†Nairiella
Neohaustator
†Nodosella
†Oligodia
†Peyrotia
†Ptychidia
†Roamerella
†Spirocolpus
Stiracolpus
†Tachyrhinchella
Tachyrhynchus
Torcula
†Torquesia
†Torquesiella
Tropicolpus
Turritella - the type genus of the family
Turritellinella
Vermicularia
†Viennella
Zeacolpus
Other
†Arcotia Stoliczka, 1867
†Leptocolpus H. J. Finlay & Marwick, 1937
Palaeontological locations
The Turritellenplatte of Ermingen ("Erminger Turritellenplatte" near Ulm, Germany) is situated in the northern part of the North Alpine Foreland Basin (NAFB) and is of interest for its abundance of Turritella turris gastropod shells within sedimentary deposits. The fauna of the gastropod-rich sandstone reflects mainly near-coastal and shallow-marine conditions. Petrographical and palaeontological data allow for a correlation of this area with the Burdigalian age (Lower Miocene epoch). Based on the Sr-isotope composition of shark teeth in the area, its age is about 18.5 Ma.
Otitis

Otitis is a general term for inflammation or infection of the ear, in both humans and other animals. When infection is present, it may be viral or bacterial. When inflammation is due to fluid build-up in the middle ear without infection, the condition is considered otitis media with effusion. It is subdivided into the following:
Otitis externa, or external otitis, involves inflammation (either infectious or non-infectious) of the external auditory canal, sometimes extending to the pinna or tragus. Otitis externa can be acute or chronic, and it can be fungal or bacterial. The most common aetiology of acute otitis externa is bacterial infection, while chronic cases are often associated with underlying skin diseases such as eczema or psoriasis. A third form, malignant otitis externa (necrotising otitis externa), is a potentially life-threatening, invasive infection of the external auditory canal and skull. Usually associated with Pseudomonas aeruginosa infection, this form typically occurs in older people with diabetes mellitus or in immunocompromised people. Otomycosis, the fungal form of otitis externa, is more common in coastal regions.
Otitis media, or middle ear infection, involves the middle ear. In otitis media, the normally air-filled middle-ear space behind the ear drum is infected or clogged with fluid. This is the most common form of otitis and is very common in babies younger than 6 months. The condition sometimes requires a surgical procedure called myringotomy and tube insertion.
Otitis interna, or labyrinthitis, involves the inner ear. The inner ear includes sensory organs for balance and hearing. When the inner ear is inflamed, vertigo is a common symptom. Other symptoms in adults include pain and drainage from ear or problems with hearing. Symptoms in children can include excessive crying, touching at ears, drainage, and fever.
Treatment can range from increasing fluids and over-the-counter medicine to manage symptoms to antibiotics prescribed by medical providers.
Agricultural cooperative

An agricultural cooperative, also known as a farmers' co-op, is a producer cooperative in which farmers pool their resources in certain areas of activity.
A broad typology of agricultural cooperatives distinguishes between agricultural service cooperatives, which provide various services to their individually-farming members, and agricultural production cooperatives in which production resources (land, machinery) are pooled and members farm jointly.
Agricultural production cooperatives are relatively rare in the world. They include collective farms in former socialist countries, the kibbutzim in Israel, collectively-governed community shared agriculture, Longo Maï co-operatives in Costa Rica, France, and some other countries, CPAs in Cuba, and Nicaraguan production cooperatives.
The default meaning of "agricultural cooperative" in English is usually an agricultural service cooperative, the numerically dominant form in the world. There are two primary types of agricultural service cooperatives: supply cooperatives and marketing cooperatives. Supply cooperatives supply their members with inputs for agricultural production, including seeds, fertilizers, fuel, and machinery services. Marketing cooperatives are established by farmers to undertake transportation, packaging, pricing, distribution, sales and promotion of farm products (both crop and livestock). Farmers also widely rely on credit cooperatives as a source of financing for both working capital and investments.
Notable examples of agricultural cooperatives include Dairy Farmers of America, the largest dairy company in the US; Amul, the largest food product marketing organization in India; and Zen-Noh, a federation of agricultural cooperatives that handles 70% of the sales of chemical fertilizers in Japan.
Purpose
Cooperatives as a form of business organization are distinct from the more common investor-owned firms (IOFs). Both are organized as corporations, but IOFs pursue profit maximization objectives, whereas cooperatives strive to maximize the benefits they generate for their members (which usually involves zero-profit operation). Agricultural cooperatives are therefore created in situations where farmers cannot obtain essential services from IOFs (because the provision of these services is judged to be unprofitable by the IOFs), or when IOFs provide the services at disadvantageous terms to the farmers (i.e., the services are available, but the profit-motivated prices are too high for the farmers). The former situations are characterized in economic theory as market failure or missing services motive. The latter drive the creation of cooperatives as a competitive yardstick or as a means of allowing farmers to build countervailing market power to oppose the IOFs. The concept of competitive yardstick implies that farmers, faced with an unsatisfactory performance by IOFs, may form a cooperative firm whose purpose is to force the IOFs, through competition, to improve their service to farmers.
A practical motivation for the creation of agricultural cooperatives is related to the ability of farmers to pool production and/or resources. In many situations within agriculture, it is simply too expensive for farmers to manufacture products or undertake a service alone. Cooperatives provide a method for farmers to join in an 'association', through which a group of farmers can acquire a better outcome, typically financial, than by going alone. This approach aligns with the concept of economies of scale and can also be seen as a form of economic synergy, where "two or more agents working together produce a result not obtainable by any of the agents independently". While it may seem reasonable to conclude that the larger the cooperative the better, this is not necessarily true. Cooperatives exist across a broad membership base, with some having fewer than 20 members while others have over 10,000.
While the economic benefits are a strong driver in forming cooperatives, it is not the sole consideration. In fact, it is possible for the economic benefits from a cooperative to be replicated in other organisational forms, such as an IOF. An important strength of a cooperative for the farmer is that they retain the governance of the association, thereby ensuring they have ultimate ownership and control. This ensures that the profit reimbursement (either through the dividend payout or rebate) is shared only amongst the farmer members, rather than shareholders as in an IOF.
As agricultural production is often the main source of employment and income in rural and impoverished areas, agricultural cooperatives play an instrumental role in socio-economic development, food security and poverty reduction. They provide smallholder farmers with access to natural and educational resources, tools, and otherwise inaccessible marketplaces. Producer organisations can also empower smallholders to become more resilient; in other words, they build the capacity of farmers to prepare for and react to economic and environmental stressors and shocks in a way that limits vulnerability and promotes their sustainability. Research suggests that membership in a producer organisation is more highly correlated with farmer output or income than other standalone investments such as training, certification, or credit.
In agriculture, there are broadly three types of cooperatives: a machinery pool, a manufacturing/marketing cooperative, and a credit union.
Machinery pool: A family farm may be too small to justify the purchase of expensive farm machinery, which may be only used irregularly, say only during harvest; instead local farmers may get together to form a machinery pool that purchases the necessary equipment for all the members to use.
Manufacturing/marketing cooperative: A farm does not always have the means of transportation necessary for delivering its produce to the market, or else the small volume of its production may put it in an unfavorable negotiating position with respect to intermediaries and wholesalers; a cooperative will act as an integrator, collecting the output from members, sometimes undertaking manufacturing, and delivering it in large aggregated quantities downstream through the marketing channels.
Credit Union: Farmers, especially in developing countries, can be charged relatively high interest rates by commercial banks, or credit may not even be available for farmers to access. When providing loans, these banks are often mindful of high transaction costs on small loans, or may refuse credit altogether due to lack of collateral – something very acute in developing countries. To provide a source of credit, farmers can group together funds that can be loaned out to members. Alternatively, the credit union can raise loans at better rates from commercial banks due to the cooperative having a larger associative size than an individual farmer. Often members of a credit union will provide mutual or peer-pressure guarantees for repayment of loans. In some instances, manufacturing/marketing cooperatives may have credit unions as part of their broader business. Such an approach allows farmers to have a more direct access to critical farm inputs, such as seeds and implements. The loans for these inputs are repaid when the farmer sends produce to the manufacturing/marketing cooperative.
Origins and history
The first agricultural cooperatives were created in Europe in the seventeenth century in the Military Frontier, where the wives and children of the border guards lived together in organized agricultural cooperatives next to a funfair and a public bath.
During the eighteenth and nineteenth centuries, in certain areas of Greece then under Ottoman rule, a particular form of cooperative organization developed. Networks of adjacent rural communities were organized as local production systems designed to produce specific agricultural or craft products destined for international markets. Derived from the Byzantine guilds, they enabled better control of production and tax collection by the Ottoman administration.
One of the first civil cooperatives was the Rochdale Society, formed in 1844 in Rochdale, England. While it was a society of textile workers, and thus not an agricultural cooperative in the strict sense, it also aimed to rent land to be cultivated by members "who may be out of employment or whose labour may be badly remunerated". The Society's first enterprise was a retail store, but it very soon also established a corn mill.
The first civil agricultural cooperatives were also created in Europe, in the second half of the nineteenth century. They spread later to North America and the other continents, and have become one of the tools of agricultural development in emerging countries.
Farmers also cooperated to form mutual farm insurance societies. Also related are rural credit unions, created in the same period with the initial purpose of offering farm loans; some became universal banks, such as Crédit Agricole or Rabobank.
Supply cooperatives
Agricultural supply cooperatives aggregate purchases, storage, and distribution of farm inputs for their members. By taking advantage of volume discounts and utilizing other economies of scale, supply cooperatives bring down the cost of the inputs that the members purchase from the cooperative compared with direct purchases from commercial suppliers. Supply cooperatives provide inputs required for agricultural production including seeds, fertilizers, chemicals, fuel, and farm machinery. Some supply cooperatives operate machinery pools that provide mechanical field services (e.g., plowing, harvesting) to their members.
Examples
Australia
Co-operative Bulk Handling Limited
Westralian Farmers Co-operative Limited
Canada
Farmers' Storehouse Company
United Farmers of Alberta
Farmers of North America
France
Agrial (Normandy)
Terrena (Pays de la Loire)
Vivescia
Israel
Granot central cooperative
Japan
Japan Agricultural Cooperatives
Korea (South)
National Agricultural Cooperative Federation
Ukraine
Ukrainian cooperative movement
United States
Landisville Produce Co-op, established 1914
Rockingham Cooperative, established in 1921
MFA Incorporated
Darigold
Organic Valley
National Council of Farmer Cooperatives
Southern States Cooperative
Farmers Cooperative Association, Inc.; Frederick, Maryland
Ocean Spray (cooperative)
Land O'Lakes
Michigan Sugar
Sunkist
Wilco stores (Oregon)
Grange Cooperative
Saline Valley Farm Cooperative, established 1931
Netherlands
Avebe
Agrico
Agrifirm
Marketing cooperatives
Agricultural marketing cooperatives are cooperative businesses owned by farmers, to undertake transformation, packaging, distribution, and marketing of farm products (both crop and livestock.)
New Zealand
New Zealand has a strong history of agricultural cooperatives, dating back to the late 19th century. The first was the small Otago Peninsula Co-operative Cheese Factory Co. Ltd, started in 1871 at Highcliff on the Otago Peninsula. With active support by the New Zealand government, and small cooperatives being suitable in isolated areas, cooperatives quickly began to dominate the industry. By 1905, dairy cooperatives were the main organisational structure in the industry. In the 1920s–'30s, there were around 500 co-operative dairy companies compared to less than 70 that were privately owned.
However, after World War II, with the advent of improved transportation, processing technologies and energy systems, a trend to merge dairy cooperatives occurred. By the late 1990s, there were two major cooperatives: the Waikato-based New Zealand Dairy Group and the Taranaki-based Kiwi Co-operative Dairies. In 2001 these two cooperatives, together with the New Zealand Dairy Board, merged to form Fonterra. This mega-merger was supported by the New Zealand Government as part of broader dairy industry deregulation, which allowed other companies to directly export dairy products. Two smaller cooperatives did not join Fonterra, preferring to remain independent – the Morrinsville-based Tatua Dairy Company and Westland Milk Products on the West Coast of the South Island.
The other main agricultural co-operatives in New Zealand are in the meat and fertiliser industries. The meat industry, which has struggled at times, has proposed various mergers similar to the creation of Fonterra; however, these have failed to gain the necessary member support.
Canada
In Canada, the most important cooperatives of this kind were the wheat pools. These farmer-owned cooperatives bought and transported grain throughout Western Canada. They replaced the earlier privately and often foreign-owned grain buyers and came to dominate the market in the post-war period. By the 1990s, most had demutualized (privatized), and several mergers occurred. Now all the former wheat pools are part of the Viterra corporation.
Former wheat pools include:
Alberta Wheat Pool
Manitoba Pool Elevators
Saskatchewan Wheat Pool
United Grain Growers
Other agricultural marketing cooperatives in Canada include:
Organic Meadow Cooperative (organic dairy)
Gay Lea Foods Co-operative Limited (dairy)
Agropur
Ecuador
The Amazon region of Ecuador is known for producing world-renowned cacao beans. In the Napo region, 850 Kichwa families have come together, with help from the American biologist Judy Logback, to form an agricultural marketing cooperative, the Kallari Association. This cooperative has helped increase benefits for the families involved, as well as protect and defend their Kichwa culture and the Amazon rainforest.
India
In India, there are networks of cooperatives at the local, regional, state and national levels that assist in agricultural marketing. The commodities mostly handled are food grains, jute, cotton, sugar, milk, and nuts.
Dairy farming based on the Anand Pattern, with a single marketing cooperative, is India's largest self-sustaining industry and its largest rural employment provider. Successful implementation of the Anand model has made India the world's largest milk producer. Small, marginal farmers with one or two head of milch cattle queue up twice daily to pour milk from their small containers into the village union collection points. After processing at the district unions, the milk is marketed nationally by the state cooperative federation under the Amul brand name, India's largest food brand. Under the Anand pattern, three-fourths of the price paid by the mainly urban consumers goes to millions of small dairy farmers, who are the owners of both the brand and the cooperative. The cooperative hires professionals for their expertise and skills, and uses modern research laboratories, processing plants, and cold-chain transport to ensure the quality of the produce and to add value to the milk.
Production of sugar from sugarcane mostly takes place at cooperative sugar mills owned by local farmers. The shareholders include all farmers, small and large, supplying sugarcane to the mill. Over the last sixty years, the local sugar mills have played a crucial part in encouraging rural political participation and have served as a stepping stone for aspiring politicians. This is particularly true in the state of Maharashtra, where a large number of politicians belonging to the Congress party or the NCP have had ties to sugar cooperatives from their respective local areas. Mismanagement and manipulation of the cooperative principles have made a number of these operations inefficient.
Israel
Tnuva Central Cooperative for the Marketing of Agricultural Produce in Israel Ltd.
Netherlands
Coöperatieve Nederlandse Bloembollencentrale (CNB)
Coforta
Royal Cosun
ZON
FloraHolland
FrieslandCampina
Ukraine
Ukrainian cooperative movements
United States
American Legend Cooperative (mink fur) "Blackglama" brand
Blue Diamond Growers (almonds)
Cabot Creamery (dairy)
Darigold
Diamond of California (nuts), formerly a cooperative
Dairylea Cooperative Inc. (Dairy), formerly Dairymen's League
Dairy Farmers of America
Edible Garden
Florida's Natural Growers (citrus fruit)
Humboldt Creamery (dairy), formerly a cooperative
Land O'Lakes (dairy and farm supply)
Maine's Own Organic Milk Company (dairy)
Michigan Milk Producers Association (dairy)
Michigan Sugar Company (sugar beets)
Ocean Spray (cranberries and citrus fruit)
Organic Valley (organic milk, cheese, eggs, soy, butter, yogurt, snack items)
Riceland Foods (rice, soybeans, corn and wheat)
Snokist Growers (pears, apples, cherries)
Sunkist Growers, Incorporated (citrus fruit)
Sun-Maid (raisins)
Sunsweet Growers Incorporated (dried fruit, especially prunes)
Tillamook County Creamery Association (dairy)
Lone Star Milk Producers (dairy)
United Egg Producers
Welch Foods Inc. (Welch's)
Mexico
Zapatista coffee cooperatives
| Technology | Agriculture, labor and economy | null |
2330207 | https://en.wikipedia.org/wiki/Army%20ant | Army ant | The name army ant (or legionary ant or marabunta) is applied to over 200 ant species in different lineages. It refers to their aggressive predatory foraging groups, known as "raids", in which huge numbers of ants forage simultaneously over a limited area.
Another shared feature is that, unlike most ant species, army ants do not construct permanent nests; an army ant colony moves almost incessantly over the time it exists. All species are members of the true ant family, Formicidae, but several groups have independently evolved the same basic behavioural and ecological syndrome. This syndrome is often referred to as "legionary behaviour", and may be an example of convergent evolution.
Most New World army ants belong to the genera Cheliomyrmex, Neivamyrmex, Nomamyrmex, Labidus, and Eciton. The largest genus is Neivamyrmex, which contains more than 120 species; the most predominant species, Eciton burchellii, is commonly called "army ant" and is considered the archetype of the group. Most Old World army ants are divided between the tribes Aenictini and Dorylini. Aenictini contains more than 50 species of army ants in the single genus Aenictus, while the Dorylini contain the genus Dorylus, the most aggressive group of driver ants, with 70 known species.
Originally, some of the Old World and New World lineages of army ants were thought to have evolved independently, in an example of convergent evolution. In 2003, though, genetic analysis of various species suggested that several of these groups evolved from a single common ancestor, which lived approximately 100 million years ago at the time of the separation of the continents of Africa and South America, while other army ant lineages (Leptanillinae, plus members of Ponerinae, Amblyoponinae, and Myrmicinae) are still considered to represent independent evolutionary events. Army ant taxonomy remains in flux, and genetic analysis will likely continue to provide more information about the relatedness of the various taxa.
Morphology
Workers
The workers of army ants are usually blind, or have compound eyes reduced to a single lens. In some species the worker caste shows polymorphism, with physical differences corresponding to job allocations; other species show no polymorphism at all. The worker caste is usually composed of sterile females.
Soldiers
The soldiers of army ants are larger than the workers and have much larger mandibles, with older soldiers possessing larger heads and stronger mandibles than younger ones. They protect the colony and help carry the heaviest loads of prey back to the colony bivouac.
Males
Males are large, with a long cylindrical abdomen, highly modified mandibles, and genitalia not seen in other ants. They have 13 segments on their antennae and are alate (winged), and so can resemble wasps. Males hatch as part of a sexual brood and fly off soon afterwards in search of a queen to mate with. When a male seeks to mate with a queen of an existing colony, the receiving workers forcibly remove his wings in order to accommodate him in the colony for mating. Because of their size, males are sometimes called "sausage flies" or "sausage ants."
Queen
Colonies of true army ants always have a single queen, whereas some other ant species can have several queens. The queen is a dichthadiigyne, a blind female with a large gaster, though she may sometimes possess vestigial eyes. Army ant queens are unusual in lacking wings and in having an enlarged gaster and an extended cylindrical abdomen. They are significantly larger than the workers and possess 10–12 segments on their antennae. A queen mates with multiple males and, because of her enlarged gaster, can produce 3 to 4 million eggs a month, resulting in synchronized brood cycles and colonies composed of millions of individuals all related to a single queen.
Behaviour
Army ant syndrome
The term "army ant syndrome" refers to behavioral and reproductive traits such as obligate collective foraging, nomadism and highly specialized queens that allow these organisms to become the most ferocious social hunters.
Most ant species send individual scouts to find food sources and later recruit others from the colony to help; army ants, however, dispatch a cooperative, leaderless group of foragers to detect and overwhelm prey at once. Army ants do not have a permanent nest but instead form successive bivouacs as they travel. The constant traveling is due to the need to hunt large amounts of prey to feed their enormous colony populations. Their queens are wingless and have abdomens that expand significantly during egg production, allowing the production of 3–4 million eggs every month and often resulting in synchronized brood cycles; each colony is thus formed of millions of individuals descended from a single queen. These three traits are found in all army ant species and are their defining traits.
Nomadic and stationary phase
Army ants have two phases of activity, a nomadic (wandering) phase and a stationary (statary) phase, that constantly cycle; this cycle is found throughout all army ant species.
The nomadic phase begins around 10 days after the queen lays her eggs. This phase will last approximately 15 days to let the larvae develop. The ants move during the day, capturing insects, spiders, and small vertebrates to feed their brood. At dusk, they will form their nests or bivouac, which they change almost daily. At the end of the nomadic phase, the larvae will spin pupal cases and no longer require food. The colony can then live in the same bivouac site for around 20 days, foraging only on approximately two-thirds of these days. This pattern of diurnal activity does not apply to all army ants: there are also species that forage at night (nocturnal) or at both day and night (cathemeral).
The stationary phase, which lasts about two to three weeks, begins when the larvae pupate. From this point on, the prey that were previously fed to the larvae are now fed exclusively to the queen. The abdomen (gaster) of the queen swells significantly, and she lays her eggs. At the end of the stationary phase, both the pupae emerge from their cocoons (eclosion) and the next generation of eggs hatch so the colony has a new group of workers and larvae. After this, the ants resume the nomadic phase.
Colony fission
Army ants split into groups when the colony has reached a size threshold, which happens approximately every three years. Wingless virgin queens hatch first, with a male sexual brood hatching at a later date. When the colony fissions, new queens are decided in one of two ways. One possible outcome is that a new queen stays at the original nest with a portion of the workers and the male brood, while the old queen leaves with the rest of the workers to find a new nest. Another possibility is that the workers reject the old queen, and new queens each head a newly divided colony. The workers affiliate with individual queens based on pheromone cues unique to each queen. When new bivouacs are formed, communication between the original colony and the new bivouacs ceases.
Queen behaviour
Army ant queens, such as those of the African Dorylus, are the largest ants on Earth and have the greatest reproductive potential among insects, with an egg-laying capacity of several million per month. Army ant queens never have to leave the protection of the colony; they mate with incoming foreign males, which disperse on nuptial flights. The exact mating behaviour of the army ant queen is still unknown, but observations imply that queens may be fertilized by multiple males. Because of the queen's large reproductive potential, an entire colony of army ants can be descended from a single queen.
When the queen ant dies, there is no replacement and army ants cannot rear emergency queens. Most of the time, if the queen dies, the colony will likely die too. Queen loss can occur due to accidents during emigrations, predator attack, old age or illness. However, there are possibilities to avoid colony death. When a colony loses its queen, the worker ants will usually fuse with another colony that has a queen, within a few days. Sometimes, the workers will backtrack along the paths of prior emigrations to search for a queen that has been lost or merge with a sister colony. By merging with a related colony, the workers would increase their overall inclusive fitness. The workers that merge into a new colony may cause the colony to increase in size by 50%.
Sexual selection by workers
Workers in army ant species have a unique role in selecting both the queen and the male mate.
When new queens emerge, the workers in the colony form two "systems", or arms, extending in opposite directions. The newly hatched queens move down one arm or the other, and only two queens will succeed, one per branch; any remaining new queens are left in the middle and abandoned. Two new bivouacs then form and break off in different directions, with workers surrounding each prospective queen to ensure her survival. The workers that surround the queens are guided by the cuticular hydrocarbon (CHC) pheromone profile emitted by each new queen.
When males hatch from their brood, they will fly off to find a mate. For males to access the queen and mate, they must run through the workers in the colony. Males that are favoured are superficially similar in size and shape to the queen. The males also produce large quantities of pheromones to pacify the worker ants.
Reproduction responsibilities and problems
In a colony, the queen is the primary individual responsible for reproduction. Analyses of genotypes have confirmed that workers are, on average, more closely related to the offspring of the queen than to that of other workers, and that workers rarely, if ever, reproduce. Three factors have been suggested to rationalize the loss of worker reproduction in the presence of a queen. First, if a worker reproduces, it lowers the general performance of the colony because it is not working. Second, workers increase their inclusive fitness by policing other workers, because they are more related to the queen's offspring than to other workers' offspring. Lastly, the large male larvae become too large to be transported, forcing colonies with a sexual brood to nest for a period of 41–56 days, as compared to non-reproductive colonies that remain in the nest an average of 17 days before returning to a nomadic phase. This suggests that if workers produced male offspring, those males might hatch out of sync with the queen's sexual brood and would be unlikely to be reared successfully to adulthood.
Ant mills
While foraging, army ants can lose the pheromone track and begin to follow one another in a continuous circle, known as an ant mill, which can potentially cause them to die of exhaustion.
Foraging
A whole colony of army ants can consume up to 500,000 prey animals each day, and so can have a significant influence on the population, diversity, and behaviour of their prey. Prey selection differs with the species. Underground species prey primarily on ground-dwelling arthropods and their larvae, earthworms, and occasionally also the young of vertebrates, turtle eggs, or oily seeds. The majority of species, the "colony robbers", specialize in the offspring of other ants and wasps. Only a few species seem to have the very broad spectrum of prey seen in the raiding species, and even these do not eat every kind of animal. Although small vertebrates caught in a raid will be killed, the jaws of the American Eciton are not suited to this type of prey, in contrast to the African Dorylus; such undesired prey are simply left behind and consumed by scavengers or by the flies that accompany the ant swarm. Only a few species hunt primarily on the surface of the earth, seeking their prey mainly in leaf litter and in low vegetation. About five species hunt in higher trees, where they can attack birds and their eggs, although they focus on hunting other social insects along with their eggs and larvae. Colonies of army ants are large compared to those of other Formicidae: they can have over 15 million workers and can transport 3,000 prey items per hour during the raid period.
When army ants forage, the trails that form can be both wide and long. The ants stay on the path through a concentration gradient of pheromones: the concentration is highest in the middle of the trail, dividing it into two distinct regions, an area of high pheromone concentration flanked by two areas of low concentration. The outbound ants occupy the outer two lanes and the returning ants occupy the central lane. Returning workers have also been found to emit more pheromone than those leaving the nest, causing the difference in concentration along the trail. The pheromones make foraging much more efficient by allowing the army ants to avoid their own former paths and those of their conspecifics. Scaffold structures have also been observed when workers carry heavy prey across inclined surfaces: some ants anchor themselves in place, preventing the walking ants from falling.
While foraging, army ants cause many invertebrates to flee from their hiding places under leaves of the forest floor, under tree bark, and other such locations, thereby allowing predators to catch them more easily. For example, in the tropical rainforests of Panama, swarms of army ants attract many species of birds to this feast of scrambling insects, spiders, scorpions, worms, and other animals. Some of these birds are named "antbirds" due to this tendency. While focused on feeding on these invertebrates, birds at army-ant swarms typically allow very close approach by people, in many cases providing the best opportunities to see many of these species. Depending on the size of the ant swarm and the amount of prey the ants stir up, birds can number from a few to dozens of individuals. Birds that frequent army-ant swarms include the white-whiskered puffbird, rufous motmot, rufous-vented ground cuckoo, grey-cowled wood rail, plain-brown woodcreeper, northern barred woodcreeper, cocoa woodcreeper, black-striped woodcreeper, fasciated antshrike, black-crowned antshrike, spotted antbird, bicolored antbird, ocellated antbird, chestnut-backed antbird, black-faced antthrush, and gray-headed tanager.
Nesting
Army ants do not build a nest like most other ants. Instead, they build a living nest with their bodies, known as a bivouac. Bivouacs tend to be found in tree trunks or in burrows dug by the ants. The members of the bivouac hold onto each other's legs and so build a sort of ball, which may look unstructured to a layman's eyes, but is actually a well-organized structure. The older female workers are located on the exterior; in the interior are the younger female workers. At the smallest disturbance, soldiers gather on the top surface of the bivouac, ready to defend the nest with powerful mandibles and (in the case of the Ecitoninae) stingers. Inside the nest, there are numerous passages that have 'chambers' of food, larvae, eggs, and most importantly, the queen.
Symbionts
Many species of army ants are widely considered to be keystone species due to their important ecological role as arthropod predators and due to their large number of vertebrate and invertebrate associates that rely on army ant colonies for nutrition or protection. During their hunt, many surface-raiding army ants are accompanied by various birds, such as antbirds, thrushes, ovenbirds and wrens, which devour the insects that are flushed out by the ants, a behavior known as kleptoparasitism. A wide variety of arthropods including staphylinid beetles, histerid beetles, spiders, silverfish, isopods, and mites also follow colonies. While some guests follow the colony emigrations on foot, many others are phoretically transported, for example by attaching themselves on army ant workers such as the histerid beetle Nymphister kronaueri. The Neotropical army ant Eciton burchellii has an estimated 350 to 500 animal associates, the most of any one species known to science.
It has been speculated that the nocturnal foraging of some army ant species is done to reduce kleptoparasitism by birds, since the bird kleptoparasites of army ants are diurnal.
Taxonomy
Historically, "army ant" in the broad sense referred to various members of five different ant subfamilies. In two of these cases, the Ponerinae and Myrmicinae, only a few species and genera exhibit legionary behavior; in the other three lineages, Ecitoninae, Dorylinae, and Leptanillinae, all of the constituent species were considered to be legionary. More recent ant classifications recognize an additional New World subfamily, Leptanilloidinae, which also consists of obligate legionary species and so is now included among the army ants.
A 2003 study of thirty species (by Sean Brady of Cornell University) indicated that army ants of the subfamilies Ecitoninae (South America), Dorylinae (Africa) and Aenictinae (Asia) together formed a monophyletic group, based on data from three nuclear genes and one mitochondrial gene. Brady concluded that these groups are, therefore, a single lineage that evolved in the mid-Cretaceous period in Gondwana, and these subfamilies are now generally united into a single subfamily, Dorylinae, though this is still not universally recognized. However, the unification of these lineages means that the only subfamily composed solely of legionary species is Leptanillinae, as Dorylinae contains many non-legionary genera.
Accordingly, the "army ants" as presently recognized consist of legionary species in these genera:
Subfamily Dorylinae (Aenictinae, Aenictogitoninae, Cerapachyinae, Ecitoninae and Leptanilloidinae, 2014)
Aenictus
Asphinctanilloides
Cheliomyrmex
Dorylus
Eciton
Labidus
Leptanilloides
Neivamyrmex
Nomamyrmex
Subfamily Leptanillinae
Anomalomyrma
Leptanilla
Phaulomyrma
Protanilla
Yavnella
Subfamily Myrmicinae
Pheidologeton
Subfamily Ponerinae
Leptogenys (some species)
Simopelta
Subfamily Amblyoponinae
Onychomyrmex
| Biology and health sciences | Hymenoptera | Animals |
2332677 | https://en.wikipedia.org/wiki/Archive%20file | Archive file | In computing, an archive file is a computer file that is composed of one or more files along with metadata. Many archive formats also support compression of member files. Archive files are used to collect multiple data files together into a single file for easier portability and storage, or simply to compress files to use less storage space. Archive files often store directory structures, error detection and correction information, comments, and some use built-in encryption.
Applications
Portability
Archive files are particularly useful in that they store file system data and metadata within the contents of a single file, and thus can be stored on systems or sent over channels that do not support the file system in question, only file contents. Examples include sending a directory structure over email, storing files whose names are unsupported on the target file system due to length or characters, and retaining files' date and time information.
A single archive file may contain multiple member files; bundling them can speed up file transfers and other operations that incur per-file processing overhead, in addition to any gains from compression.
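As an illustration, a minimal Python sketch of such bundling using only the standard library; the member file names are hypothetical:

```python
# Minimal sketch: bundle files into one gzip-compressed tar archive so a
# transfer handles a single stream instead of many small files.
import tarfile

members = ["report.txt", "data.csv", "notes/readme.md"]  # hypothetical names

with tarfile.open("bundle.tar.gz", "w:gz") as tar:
    for name in members:
        tar.add(name)  # records the path, size, mtime, and permissions

# Extraction restores the directory structure and file metadata.
with tarfile.open("bundle.tar.gz", "r:gz") as tar:
    tar.extractall("restored")
```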
Software distribution
Beyond archival purposes, archive files are frequently used for packaging software for distribution, as software contents are often naturally spread across several files; the archive is then known as a package. While the archival file format is the same, there are additional conventions about contents, such as requiring a manifest file, and the resulting format is known as a package format. Examples include deb for Debian, JAR for Java, APK for Android, and self-extracting Windows Installer executables.
Features
Features supported by various kinds of archives include:
converting metadata into data stored inside a file (e.g., file name, permissions, etc.)
checksums to detect errors
data compression
file concatenation to store multiple files in a single file
file patches / updates (when recording changes since a previous archive)
encryption
error correction code to fix errors
splitting a large file into many equal-sized files for storage or transmission (a minimal sketch follows this list)
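A minimal Python sketch of the splitting feature mentioned above; the file name and part size are illustrative choices, not from the text:

```python
# Split a file into fixed-size parts (the last part may be smaller),
# as multi-volume archivers do for storage or transmission.
CHUNK = 1024 * 1024  # 1 MiB per part (illustrative)

with open("big.bin", "rb") as src:
    index = 0
    while True:
        data = src.read(CHUNK)
        if not data:
            break
        with open(f"big.bin.{index:03d}", "wb") as part:
            part.write(data)
        index += 1

# Reassembly is simple concatenation of the parts in order,
# e.g. `cat big.bin.* > big.bin` on a Unix shell.
```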
Some archive programs also offer self-extraction, self-installation, source volume and medium information, and package notes or descriptions.
The file extension or file header of the archive file are indicators of the file format used. Computer archive files are created by file archiver software, optical disc authoring software, and disk image software.
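A hedged sketch of such header-based detection in Python; the signatures are well-known published magic numbers, while the function and file handling are illustrative:

```python
# Identify an archive format from its leading "magic bytes".
SIGNATURES = {
    b"PK\x03\x04": "zip",
    b"\x1f\x8b": "gzip",
    b"7z\xbc\xaf\x27\x1c": "7z",
    b"Rar!\x1a\x07": "rar",
}

def sniff_format(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, name in SIGNATURES.items():
        if head.startswith(magic):
            return name
    # Note: tar's "ustar" magic sits at offset 257, so this simple
    # prefix check will not detect tar files.
    return "unknown"
```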
Archive formats
An archive format is the file format of an archive file. Some formats are well-defined by their authors and have become conventions supported by multiple vendors and communities.
Types
Archiving only formats store metadata and concatenate files.
Compression only formats only compress files.
Multi-function formats can store metadata, concatenate, compress, encrypt, create error detection and recovery information, and package the archive into self-extracting and self-expanding files.
Software packaging formats are used to create software packages that may be self-installing files.
Disk image formats are used to create disk images of mass storage volumes.
Examples
Filename extensions used to distinguish different types of archives include zip, rar, 7z, and tar, the first of which is the most widely implemented.
Java also introduced a family of archive extensions such as jar and war (j for Java, w for web). They are used to distribute entire bytecode deployments, and sometimes also to exchange source code and other text, HTML, and XML files. By default they are all compressed.
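Because jar and war files use the ZIP container format, an ordinary ZIP reader can inspect them; a minimal sketch with a hypothetical file name:

```python
# List the members of a Java archive with Python's zipfile module.
import zipfile

with zipfile.ZipFile("app.jar") as jar:
    for info in jar.infolist():
        print(info.filename, info.file_size, info.compress_size)
```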
Error detection and recovery
Archive files often include parity checks and other checksums for error detection, for instance zip files use a cyclic redundancy check (CRC). RAR archives may include additional error correction data (called recovery records).
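A minimal sketch of this check in Python, assuming a file named archive.zip; note that the zipfile module performs the same CRC-32 verification internally whenever a member is read:

```python
import zipfile
import zlib

with zipfile.ZipFile("archive.zip") as zf:
    # testzip() returns the name of the first corrupt member, or None.
    print("first bad member:", zf.testzip())

    # The same check made explicit for one member:
    info = zf.infolist()[0]
    data = zf.read(info.filename)      # raises BadZipFile on CRC mismatch
    assert zlib.crc32(data) == info.CRC
```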
Archive files that do not natively support recovery records can use separate parchive (PAR) files, which allow additional error correction and recovery of missing files in a multi-file archive.
| Technology | File formats | null |
2333272 | https://en.wikipedia.org/wiki/Typha%20latifolia | Typha latifolia | Typha latifolia is a perennial, herbaceous flowering wetland plant in the family Typhaceae. It is known commonly as bulrush (sometimes as common bulrush, to distinguish from other species of Typha); in North America, it is often referred to as broadleaf cattail, or simply as cat-tail or cattail reed. It is native throughout most of temperate Eurasia and North America, and found more locally in Africa and South America. The genome of Typha latifolia was published in 2022.
Other names
Typha latifolia is sometimes referred to as "great reedmace" (mainly an historical name, but occasionally still in modern use), common "cat-tail", "cat-o'-nine-tails", "Cooper's reed", or "cumbungi".
Description
Typha latifolia grows 1.5 to 3 metres (5 to 10 feet) high and has broad leaves. It generally grows in water 0.75 to 1 m (2 to 3 ft) deep. The foliage is deciduous, regrowing in spring and dying down in the autumn.
The flowers appear in a dense cluster at the top of the main stem; they are divided into a female portion below, and a tassel of male flowers above. The female and male reproductive parts are contiguous, in contrast to other species, such as T. angustifolia, which has a 3-8 cm gap of bare stem between the female and male flowers.
Flowering is in June and July; following this, the male portion falls off, leaving the female portion to form a fruiting head which matures into the familiar brown "sausage"-shaped cat-tail spikes. The seed heads persist through the winter before breaking up in spring, releasing their tiny seeds onto the wind for dispersal.
Distribution and habitat
It is found as a native plant species widely in the northern hemisphere, across Eurasia and North America, and more locally in Africa and South America. In Canada, it occurs in all provinces, including in the Yukon and Northwest Territories, and portions of southern Nunavut. In the United States, it is native to all states, including Alaska, except Hawaii. However, it is an introduced and invasive species (and is considered a noxious weed) in the latter state, and also in Australia. It has further been reported in Indonesia, Malaysia, New Zealand, Papua New Guinea and the Philippines (where it is referred to as soli-soli).
The species has been found in a variety of climates, including tropical, subtropical, southern and northern temperate, humid coastal, and dry continental. It is found at elevations from sea level to .
T. latifolia is an "obligate wetland" species, meaning that it is always found in or near water. The species generally grows in flooded areas where the water depth does not exceed , but has also been reported growing in floating mats in slightly deeper water. It grows mostly in freshwater but also occurs in slightly brackish marshes. The species can displace other species native to salt marshes upon reduction in salinity. Under such conditions, the plant may be considered aggressive, as it interferes with long-term preservation of salt marsh habitat.
T. latifolia shares its range with other, related species, and hybridizes with T. angustifolia (lesser bulrush or narrow-leaf cattail) to form Typha × glauca (T. angustifolia × T. latifolia). T. latifolia is usually found in shallower water than T. angustifolia.
Uses
Traditionally, the plant has been a part of certain indigenous cultures of British Columbia, as a source of food, medicine, and for other uses. The rhizomes are edible after cooking and removing the skin, while peeled stems and leaf bases can be eaten raw or cooked. The young flower spikes, young shoots, and sprouts at the end of the rootstocks are edible as well. The pollen from the mature cones can be used as a flavouring. The starchy rootstalks are ground into meal by certain tribes of Native Americans.
It is not advisable to eat specimens growing in polluted water, as the plant readily absorbs contaminants; indeed, it is used as a bioremediator. Specimens with a very bitter or spicy taste should not be eaten.
In Greece, the plant is used in a dried form for traditional chair making, namely in the woven seat of the chair. To prepare the material, the plant is collected in the summer and left to dry for 40–50 days.
In San Francisco, a town on Pacijan Island in the Camotes Islands of Cebu, Philippines, the plant is known as soli-soli and is used as a weaving fibre and material for making floor mats, bags, hats, and other handmade items and ornaments. Soli-soli weaving is considered one of the main livelihoods of the townspeople, showcasing the local crafts of the San Franciscohanons and offering a viable outlet for cultural expression and eco-tourism. The town celebrates the abundance of this plant on the island, showcasing the weaving industry through a local soli-soli festival, an event of thanksgiving also dedicated to Saint Joseph, the patron saint of the town. The festival is celebrated around the 19th of March, the solemnity of St. Joseph, the Spouse of Mary. The townspeople incorporate the plant in their festival costumes, often wearing outfits made entirely of woven soli-soli.
| Biology and health sciences | Poales | Plants |
2333728 | https://en.wikipedia.org/wiki/Haumea | Haumea | Haumea (minor-planet designation: 136108 Haumea) is a dwarf planet located beyond Neptune's orbit. It was discovered in 2004 by a team headed by Mike Brown of Caltech at the Palomar Observatory, and formally announced in 2005 by a team headed by José Luis Ortiz Moreno at the Sierra Nevada Observatory in Spain, who had identified it that year in images the team had taken in 2003. From that announcement, it received the provisional designation 2003 EL61.
On 17 September 2008, it was named after Haumea, the Hawaiian goddess of childbirth, under the expectation by the International Astronomical Union (IAU) that it would prove to be a dwarf planet. Nominal estimates make it the third-largest known trans-Neptunian object, after Eris and Pluto, and approximately the size of Uranus's moon Titania. Precovery images of Haumea have been identified back to 22 March 1955.
Haumea's mass is about one-third that of Pluto and 1/1400 that of Earth. Although its shape has not been directly observed, calculations from its light curve are consistent with it being a Jacobi ellipsoid (the shape it would be if it were a dwarf planet), with its major axis twice as long as its minor. In October 2017, astronomers announced the discovery of a ring system around Haumea, representing the first ring system discovered for a trans-Neptunian object and a dwarf planet.
Haumea's gravity was until recently thought to be sufficient for it to have relaxed into hydrostatic equilibrium, though that is now unclear. Haumea's elongated shape together with its rapid rotation, rings, and high albedo (from a surface of crystalline water ice), are thought to be the consequences of a giant collision, which left Haumea the largest member of a collisional family (the Haumea family) that includes several large trans-Neptunian objects and Haumea's two known moons, Hiʻiaka and Namaka.
History
Discovery
Two teams claim credit for the discovery of Haumea. A team consisting of Mike Brown of Caltech, David Rabinowitz of Yale University, and Chad Trujillo of Gemini Observatory in Hawaii discovered Haumea on 28 December 2004, on images they had taken on 6 May 2004. On 20 July 2005, they published an online abstract of a report intended to announce the discovery at a conference in September 2005.
At around this time, José Luis Ortiz Moreno and his team at the Instituto de Astrofísica de Andalucía at Sierra Nevada Observatory in Spain found Haumea on images taken on 7–10 March 2003. Ortiz emailed the Minor Planet Center with their discovery on the night of 27 July 2005.
Brown initially conceded discovery credit to Ortiz, but came to suspect the Spanish team of fraud upon learning that the Spanish observatory had accessed Brown's observation logs the day before the discovery announcement, a fact that they did not disclose in the announcement as would be customary. Those logs included enough information to allow the Ortiz team to precover Haumea in their 2003 images, and they were accessed again just before Ortiz scheduled telescope time to obtain confirmation images for a second announcement to the MPC on 29 July. Ortiz later admitted he had accessed the Caltech observation logs but denied any wrongdoing, stating he was merely verifying whether they had discovered a new object.
IAU protocol is that discovery credit for a minor planet goes to whoever first submits a report to the MPC (Minor Planet Center) with enough positional data for a decent determination of its orbit, and that the credited discoverer has priority in choosing a name. However, the IAU announcement on 17 September 2008, that Haumea had been named by a dual committee established for bodies expected to be dwarf planets, did not mention a discoverer. The location of discovery was listed as the Sierra Nevada Observatory of the Spanish team, but the chosen name, Haumea, was the Caltech proposal. Ortiz's team had proposed "Ataecina", the ancient Iberian goddess of spring; as a chthonic deity, it would have been appropriate for a plutino, which Haumea was not.
Name and symbol
Until it was given a permanent name, the Caltech discovery team used the nickname "Santa" among themselves, because they had discovered Haumea on 28 December 2004, just after Christmas. The Spanish team were the first to file a claim for discovery to the Minor Planet Center, in July 2005. On 29 July 2005, Haumea was given the provisional designation 2003 EL61, based on the date of the Spanish discovery image. On 7 September 2006, it was numbered and admitted into the official minor planet catalog as (136108) 2003 EL61.
Following guidelines established at the time by the IAU that classical Kuiper belt objects be given names of mythological beings associated with creation, in September 2006 the Caltech team submitted formal names from Hawaiian mythology to the IAU for both (136108) 2003 EL61 and its moons, in order "to pay homage to the place where the satellites were discovered". The names were proposed by David Rabinowitz of the Caltech team. Haumea is the matron goddess of the island of Hawaiʻi, where the Mauna Kea Observatory is located. In addition, she is identified with Papa, the goddess of the earth and wife of Wākea (space), which, at the time, seemed appropriate because Haumea was thought to be composed almost entirely of solid rock, without the thick ice mantle over a small rocky core typical of other known Kuiper belt objects. Lastly, Haumea is the goddess of fertility and childbirth, with many children who sprang from different parts of her body; this corresponds to the swarm of icy bodies thought to have broken off the main body during an ancient collision. The two known moons, also believed to have formed in this manner, are thus named after two of Haumea's daughters, Hiʻiaka and Nāmaka.
The proposal by the Ortiz team, Ataecina, did not meet IAU naming requirements, because the names of chthonic deities were reserved for stably resonant trans-Neptunian objects such as plutinos that resonate 3:2 with Neptune, whereas Haumea was in an intermittent 7:12 resonance and so by some definitions was not a resonant body. The naming criteria would be clarified in late 2019, when the IAU decided that chthonic figures were to be used specifically for plutinos. (See Ataecina § Dwarf planet.)
A planetary symbol for Haumea, , is included in Unicode at U+1F77B. Planetary symbols are no longer much used in astronomy, and 🝻 is mostly used by astrologers, but has also been used by NASA. The symbol was designed by Denis Moskowitz, a software engineer in Massachusetts; it combines and simplifies Hawaiian petroglyphs meaning 'woman' and 'childbirth'.
Orbit
Haumea has an orbital period of 284 Earth years, a perihelion of 35 AU, and an orbital inclination of 28°. It passed aphelion in early 1992, and is currently more than 50 AU from the Sun. It will come to perihelion in 2133. Haumea's orbit has a slightly greater eccentricity than that of the other members of its collisional family. This is thought to be due to Haumea's weak 7:12 orbital resonance with Neptune gradually modifying its initial orbit over the course of a billion years, through the Kozai effect, which allows the exchange of an orbit's inclination for increased eccentricity.
With a visual magnitude of 17.3, Haumea is the third-brightest object in the Kuiper belt after Pluto and Makemake, and easily observable with a large amateur telescope. However, because the planets and most small Solar System bodies share a common orbital alignment from their formation in the primordial disk of the Solar System, most early surveys for distant objects focused on the projection on the sky of this common plane, called the ecliptic. As the region of sky close to the ecliptic became well explored, later sky surveys began looking for objects that had been dynamically excited into orbits with higher inclinations, as well as more distant objects with slower mean motions across the sky. These surveys eventually covered the location of Haumea, with its high orbital inclination and current position far from the ecliptic.
Possible resonance with Neptune
Haumea is thought to be in an intermittent 7:12 orbital resonance with Neptune. Its ascending node Ω precesses with a period of about 4.6 million years, and the resonance is broken twice per precession cycle, or every 2.3 million years, only to return a hundred thousand years or so later. As this is not a simple resonance, Marc Buie qualifies it as non-resonant.
Rotation
Haumea displays large fluctuations in brightness over a period of 3.9 hours, which can only be explained by a rotational period of this length. This is faster than any other known equilibrium body in the Solar System, and indeed faster than any other known body larger than 100 km in diameter. While most rotating bodies in equilibrium are flattened into oblate spheroids, Haumea rotates so quickly that it is distorted into a triaxial ellipsoid. If Haumea were to rotate much more rapidly, it would distort itself into a dumbbell shape and split in two. This rapid rotation is thought to have been caused by the impact that created its satellites and collisional family.
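To make the breakup argument concrete, a rough order-of-magnitude sketch, not from the source: for a strengthless body of density rho, the shortest stable rotation period is about sqrt(3*pi/(G*rho)). The density used below is an assumed round value near published estimates for Haumea:

```python
# Order-of-magnitude estimate of the critical (breakup) rotation period.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho = 1900           # assumed mean density, kg/m^3 (approximate)

T_min = math.sqrt(3 * math.pi / (G * rho))
print(f"critical period ~ {T_min / 3600:.1f} h")  # ~2.4 h
# Haumea's 3.9 h rotation is above this limit, but close enough
# that strong centrifugal distortion is expected.
```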
The plane of Haumea's equator is oriented nearly edge-on from Earth at present and is also slightly offset to the orbital planes of its ring and its outermost moon Hiʻiaka. Although initially assumed to be coplanar to Hiʻiaka's orbital plane by Ragozzine and Brown in 2009, their models of the collisional formation of Haumea's satellites consistently suggested Haumea's equatorial plane to be at least aligned with Hiʻiaka's orbital plane by approximately 1°. This was supported with observations of a stellar occultation by Haumea in 2017, which revealed the presence of a ring approximately coincident with the plane of Hiʻiaka's orbit and Haumea's equator. A mathematical analysis of the occultation data by Kondratyev and Kornoukhov in 2018 placed constraints on the relative inclination angles of Haumea's equator to the orbital planes of its ring and Hiʻiaka, which were found to be inclined and relative to Haumea's equator, respectively.
Physical characteristics
Size, shape, and composition
The size of a Solar System object can be deduced from its optical magnitude, its distance, and its albedo. Objects appear bright to Earth observers either because they are large or because they are highly reflective. If their reflectivity (albedo) can be ascertained, then a rough estimate can be made of their size. For most distant objects, the albedo is unknown, but Haumea is large and bright enough for its thermal emission to be measured, which has given an approximate value for its albedo and thus its size. However, the calculation of its dimensions is complicated by its rapid rotation. The rotational physics of deformable bodies predicts that over as little as a hundred days, a body rotating as rapidly as Haumea will be distorted into the equilibrium form of a triaxial ellipsoid. Most of the fluctuation in Haumea's brightness is thought to be caused not by local differences in albedo but by the alternation between side-on and end-on views as seen from Earth.
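The standard relation behind this deduction, a well-known minor-planet formula not quoted in the text, links the diameter D to the geometric albedo p_V and absolute magnitude H:

```latex
D \approx \frac{1329\ \mathrm{km}}{\sqrt{p_V}} \times 10^{-H/5}
```

With illustrative inputs of H ≈ 0.2 and p_V ≈ 0.66, close to figures reported for Haumea, this gives D ≈ 1,500 km, of the same order as the equivalent circular diameters discussed below.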
The rotation and amplitude of Haumea's light curve were argued to place strong constraints on its composition. If Haumea were in hydrostatic equilibrium and had a low density like Pluto, with a thick mantle of ice over a small rocky core, its rapid rotation would have elongated it to a greater extent than the fluctuations in its brightness allow. Such considerations constrained its density to a range of 2.6–3.3 g/cm3. By comparison, the Moon, which is rocky, has a density of 3.3 g/cm3, whereas Pluto, which is typical of icy objects in the Kuiper belt, has a density of 1.86 g/cm3. Haumea's possible high density covered the values for silicate minerals such as olivine and pyroxene, which make up many of the rocky objects in the Solar System. This also suggested that the bulk of Haumea was rock covered with a relatively thin layer of ice. A thick ice mantle more typical of Kuiper belt objects may have been blasted off during the impact that formed the Haumean collisional family.
Because Haumea has moons, the mass of the system can be calculated from their orbits using Kepler's third law. The result is , 28% the mass of the Plutonian system and 6% that of the Moon. Nearly all of this mass is in Haumea.
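A rough numerical cross-check of this calculation, assuming approximate published figures for Hiʻiaka's orbit (semi-major axis near 49,880 km, period near 49.1 days); these inputs are illustrative rather than quoted from the text:

```python
# Estimate the Haumea system mass from a satellite orbit via
# Kepler's third law: M = 4*pi^2 * a^3 / (G * T^2).
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
a = 49_880e3             # Hi'iaka semi-major axis, m (approximate)
T = 49.1 * 86400         # Hi'iaka orbital period, s (approximate)

M = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"system mass ~ {M:.2e} kg")   # ~4e21 kg, roughly 1/1400 Earth mass
```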
Several ellipsoid-model calculations of Haumea's dimensions have been made. The first model produced after Haumea's discovery was calculated from ground-based observations of Haumea's light curve at optical wavelengths: it provided a total length of 1,960 to 2,500 km and a visual albedo (pv) greater than 0.6. The most likely shape is a triaxial ellipsoid with approximate dimensions of 2,000 × 1,500 × 1,000 km, with an albedo of 0.71. Observations by the Spitzer Space Telescope gave a diameter of and an albedo of , from photometry at infrared wavelengths of 70 μm. Subsequent light-curve analyses have suggested an equivalent circular diameter of 1,450 km. In 2010 an analysis of measurements taken by Herschel Space Telescope together with the older Spitzer Telescope measurements yielded a new estimate of the equivalent diameter of Haumea—about 1300 km. These independent size estimates overlap at an average geometric mean diameter of roughly 1,400 km. In 2013 the Herschel Space Telescope measured Haumea's equivalent circular diameter to be roughly .
However, the observations of a stellar occultation in January 2017 cast doubt on all those conclusions. The measured shape of Haumea, while elongated as presumed before, appeared to have significantly larger dimensions according to the data obtained from the occultation: Haumea is approximately the diameter of Pluto along its longest axis and about half that at its poles. The density calculated from the observed shape was about , more in line with the densities of other large TNOs. This shape appeared to be inconsistent with a homogeneous body in hydrostatic equilibrium, though Haumea nonetheless appears to be one of the largest trans-Neptunian objects discovered, smaller than , , similar to , and possibly , and larger than , , and .
A 2019 study attempted to resolve the conflicting measurements of Haumea's shape and density using numerical modeling of Haumea as a differentiated body. It found that dimensions of ≈ 2,100 × 1,680 × 1,074 km (modeling the long axis at intervals of 25 km) were a best-fit match to the observed shape of Haumea during the 2017 occultation, while also being consistent with both surface and core scalene ellipsoid shapes in hydrostatic equilibrium. The revised solution for Haumea's shape implies that it has a core of approximately 1,626 × 1,446 × 940 km, with a relatively high density of ≈ , indicative of a composition largely of hydrated silicates such as kaolinite. The core is surrounded by an icy mantle that ranges in thickness from about 70 km at the poles to 170 km along its longest axis, comprising up to 17% of Haumea's mass. Haumea's mean density is estimated at ≈ , with an albedo of ≈ 0.66.
Surface
In 2005, the Gemini and Keck telescopes obtained spectra of Haumea which showed strong crystalline water ice features similar to the surface of Pluto's moon Charon. This is peculiar, because crystalline ice forms at temperatures above 110 K, whereas Haumea's surface temperature is below 50 K, a temperature at which amorphous ice is formed. In addition, the structure of crystalline ice is unstable under the constant rain of cosmic rays and energetic particles from the Sun that strike trans-Neptunian objects. The timescale for the crystalline ice to revert to amorphous ice under this bombardment is on the order of ten million years, yet trans-Neptunian objects have been in their present cold-temperature locations for timescales of billions of years.
Radiation damage should also redden and darken the surface of trans-Neptunian objects where the common surface materials of organic ices and tholin-like compounds are present, as is the case with Pluto. Therefore, the spectra and colour suggest Haumea and its family members have undergone recent resurfacing that produced fresh ice. However, no plausible resurfacing mechanism has been suggested.
Haumea is as bright as snow, with an albedo in the range of 0.6–0.8, consistent with crystalline ice. Other large TNOs such as appear to have albedos as high or higher. Best-fit modeling of the surface spectra suggested that 66% to 80% of the Haumean surface appears to be pure crystalline water ice, with one contributor to the high albedo possibly hydrogen cyanide or phyllosilicate clays. Inorganic cyanide salts such as copper potassium cyanide may also be present.
However, further studies of the visible and near infrared spectra suggest a homogeneous surface covered by an intimate 1:1 mixture of amorphous and crystalline ice, together with no more than 8% organics. The absence of ammonia hydrate excludes cryovolcanism and the observations confirm that the collisional event must have happened more than 100 million years ago, in agreement with the dynamic studies.
The absence of measurable methane in the spectra of Haumea is consistent with a warm collisional history that would have removed such volatiles, in contrast to .
In addition to the large fluctuations in Haumea's light curve due to the body's shape, which affect all colours equally, smaller independent colour variations seen in both visible and near-infrared wavelengths show a region on the surface that differs both in colour and in albedo. More specifically, a large dark red area on Haumea's bright white surface was seen in September 2009, possibly an impact feature, which indicates an area rich in minerals and organic (carbon-rich) compounds, or possibly a higher proportion of crystalline ice. Thus Haumea may have a mottled surface reminiscent of Pluto, if not as extreme.
Ring
A stellar occultation observed on 21 January 2017, and described in an October 2017 Nature article indicated the presence of a ring around Haumea. This represents the first ring system discovered for a TNO. The ring has a radius of about 2,287 km, a width of ~70 km and an opacity of 0.5. It is well within Haumea's Roche limit, which would be at a radius of about 4,400 km if it were spherical (being nonspherical pushes the limit out farther).
The ring plane is inclined with respect to Haumea's equatorial plane and approximately coincides with the orbital plane of its larger, outer moon Hiʻiaka. The ring is also close to the 1:3 orbit-spin resonance with Haumea's rotation (which is at a radius of 2,285 ± 8 km from Haumea's center). The ring is estimated to contribute 5% to the total brightness of Haumea.
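The quoted resonance radius can be sanity-checked with Kepler's third law, since a ring particle at the 1:3 orbit-spin resonance completes one orbit for every three rotations of Haumea. A short sketch with an assumed system mass:

```python
# Radius of the 1:3 orbit-spin resonance: the orbital period equals
# three rotation periods, so a = (G*M*T^2 / (4*pi^2))**(1/3).
import math

G = 6.674e-11
M = 4.0e21                    # approximate Haumea system mass, kg (assumed)
T = 3 * 3.9155 * 3600         # three rotation periods, s

a = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"resonance radius ~ {a / 1e3:.0f} km")  # ~2,300 km, near the quoted 2,285 km
```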
In a study about the dynamics of ring particles published in 2019, Othon Cabo Winter and colleagues have shown that the 1:3 resonance with Haumea's rotation is dynamically unstable, but that there is a stable region in the phase space consistent with the location of Haumea's ring. This indicates that the ring particles originate on circular, periodic orbits that are close to, but not inside, the resonance.
Satellites
Two small satellites have been discovered orbiting Haumea, (136108) Haumea I Hiʻiaka and (136108) Haumea II Namaka. Darin Ragozzine and Michael Brown discovered both in 2005, through observations of Haumea using the W. M. Keck Observatory.
Hiʻiaka, at first nicknamed "Rudolph" by the Caltech team, was discovered 26 January 2005. It is the outer and, at roughly 310 km in diameter, the larger and brighter of the two, and orbits Haumea in a nearly circular path every 49 days. Strong absorption features at 1.5 and 2 micrometres in the infrared spectrum are consistent with nearly pure crystalline water ice covering much of the surface. The unusual spectrum, along with similar absorption lines on Haumea, led Brown and colleagues to conclude that capture was an unlikely model for the system's formation, and that the Haumean moons must be fragments of Haumea itself.
Namaka, the smaller, inner satellite of Haumea, was discovered on 30 June 2005, and nicknamed "Blitzen". It is a tenth the mass of Hiʻiaka, orbits Haumea in 18 days in a highly elliptical, non-Keplerian orbit, and is inclined 13° from the larger moon, which perturbs its orbit.
The relatively large eccentricities together with the mutual inclination of the orbits of the satellites are unexpected as they should have been damped by the tidal effects. A relatively recent passage by a 3:1 resonance with Hiʻiaka might explain the current excited orbits of the Haumean moons.
From around 2008 to 2011, the orbits of the Haumean moons appeared almost exactly edge-on from Earth, with Namaka periodically occulting Haumea. Observation of such transits would have provided precise information on the size and shape of Haumea and its moons, as happened in the late 1980s with Pluto and Charon. The tiny change in brightness of the system during these occultations would have required at least a medium-aperture professional telescope for detection. Hiʻiaka last occulted Haumea in 1999, a few years before discovery, and will not do so again for some 130 years. However, in a situation unique among regular satellites, Namaka's orbit was being greatly torqued by Hiʻiaka, which preserved the viewing angle of Namaka–Haumea transits for several more years. One occultation event was observed on 19 June 2009, from the Pico dos Dias Observatory in Brazil.
Collisional family
Haumea is the largest member of its collisional family, a group of astronomical objects with similar physical and orbital characteristics thought to have formed when a larger progenitor was shattered by an impact. This family is the first to be identified among TNOs and includes—beside Haumea and its moons— (≈364 km), (≈174 km), (≈200 km), (≈230 km), and (≈252 km). Brown and colleagues proposed that the family were a direct product of the impact that removed Haumea's ice mantle, but a second proposal suggests a more complicated origin: that the material ejected in the initial collision instead coalesced into a large moon of Haumea, which was later shattered in a second collision, dispersing its shards outwards. This second scenario appears to produce a dispersion of velocities for the fragments that is more closely matched to the measured velocity dispersion of the family members.
The presence of the collisional family could imply that Haumea and its "offspring" might have originated in the scattered disc. In today's sparsely populated Kuiper belt, the chance of such a collision occurring over the age of the Solar System is less than 0.1 percent. The family could not have formed in the denser primordial Kuiper belt because such a close-knit group would have been disrupted by Neptune's migration into the belt—the believed cause of the belt's current low density. Therefore, it appears likely that the dynamic scattered disc region, in which the possibility of such a collision is far higher, is the place of origin for the object that generated Haumea and its kin.
Because it would have taken at least a billion years for the group to have diffused as far as it has, the collision which created the Haumea family is believed to have occurred very early in the Solar System's history.
Exploration
Haumea was observed from afar by the New Horizons spacecraft in October 2007, January 2017, and May 2020, from distances of 49 AU, 59 AU, and 63 AU, respectively. The spacecraft's outbound trajectory permitted observations of Haumea at high phase angles that are otherwise unobtainable from Earth, enabling the determination of the light scattering properties and phase curve behavior of Haumea's surface.
Joel Poncy and colleagues calculated that a flyby mission to Haumea could take 14.25 years using a gravity assist from Jupiter, based on a launch date of 25 September 2025. Haumea would be 48.18 AU from the Sun when the spacecraft arrives. A flight time of 16.45 years can be achieved with launch dates on 1 November 2026, 23 September 2037, and 29 October 2038.
Haumea could become a target for an exploration mission, and an example of this work is a preliminary study on a probe to Haumea and its moons (at 35–51 AU). Probe mass, power source, and propulsion systems are key technology areas for this type of mission.
| Physical sciences | Solar System | Astronomy |
7570884 | https://en.wikipedia.org/wiki/Megalaimidae | Megalaimidae | Megalaimidae, the Asian barbets, are a family of birds comprising two genera with 35 species native to the forests of the Indomalayan realm, from Tibet to Indonesia. They were once grouped with all other barbets in the family Capitonidae, but the Old World species have since been found to be distinct and are now considered, together with the Lybiidae and Ramphastidae, to form separate sister families.
Taxonomy
In the past the species were placed in three genera, Caloramphus, Megalaima and Psilopogon, but studies showed Psilopogon to be nested within the clade of Megalaima. Since the members of this clade are better treated under a single genus, they have been moved to Psilopogon, which was described and erected earlier than Megalaima and therefore takes precedence under the principle of taxonomic priority. Nearly all members of the family are now in the genus Psilopogon, with the exception of those in Caloramphus, which are thought to have diverged from the common ancestor around 21.32 million years ago. The latter species are distinct enough to warrant placement in a subfamily, Caloramphinae. The family name is derived from that of the genus Megalaima, which means ‘large throat’, from the Greek μέγας (mégas, ‘large, great’) and λαιμός (laimós, ‘throat’).
The phylogenetic relationship between the Asian barbets and the eight other families in the order Piciformes is shown in the cladogram below. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Classification
Subfamily Megalaiminae
Subfamily Caloramphinae
| Biology and health sciences | Piciformes | Animals |
7571211 | https://en.wikipedia.org/wiki/Lybiidae | Lybiidae | Lybiidae is a family of birds also known as the African barbets. There are 42 species ranging from the type genus Lybius of forest interior to the tinkerbirds (Pogoniulus) of forest and scrubland. They are found throughout sub-Saharan Africa, with the exception of the far south-west of South Africa.
The African terrestrial barbets, Trachyphoninae, range from the southern Sahara to South Africa. Members of one genus, Trachyphonus, are the most open-country species of barbets. The subfamily Lybiinae contains the African arboreal barbets. There are 37 species of Lybiinae in 6 genera.
Taxonomy
The phylogenetic relationship between the African barbets and the eight other families in the order Piciformes is shown in the cladogram below. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Description and ecology
Most African barbets are plump-looking birds with large heads, and their heavy bill is fringed with bristles; the tinkerbirds are smaller, the smallest being the red-rumped tinkerbird (Pogoniulus atroflavus).
They are mainly solitary birds, eating insects and fruit. Figs and numerous other species of fruiting tree and bush are visited. An individual barbet may feed on as many as 60 different species in its range. They will also visit plantations and take cultivated fruit and vegetables. Fruit is eaten whole and indigestible material such as seed pits regurgitated later (often before singing). Regurgitation does not usually happen in the nest (as happens with toucans), although tinkerbirds do place sticky mistletoe seeds around the entrances of their nests, possibly to deter predators. Like other barbets, they are thought to be important agents in seed dispersal in tropical forests.
As well as taking fruit, African barbets also take arthropod prey, gleaned from the branches and trunks of trees. A wide range of insects are taken, including ants, cicadas, dragonflies, crickets, locusts, beetles, moths and mantids. Scorpions and centipedes are also taken, and a few species will take small vertebrates such as lizards, frogs and geckos.
The precise nesting details of many African barbets are not yet known, although, unusually among the Piciformes, some sociable species nest in riverbanks or termite nests. Like those of many other members of their order, their nests are usually in holes bored into a tree, and they usually lay between 2 and 4 eggs (except for the yellow-breasted barbet, which lays up to 6), incubated for 13–15 days. Nesting duties are shared by both parents.
African barbets have generally experienced little interference from humans. Some species that require primary woodland are declining due to deforestation, occasionally to the benefit of close relatives. For example, the loss of highland woods in Kenya has seen the moustached tinkerbird almost disappear while the red-fronted tinkerbird has expanded its range.
Systematics
Subfamily Lybiinae
Subfamily Trachyphoninae
It is not entirely resolved whether the Early to Middle Miocene genus Capitonides from Europe belongs to this family or the Asian barbets (now Megalaimidae). Indeed, given that the prehistoric birds somewhat resembled a primitive toucan (without these birds' present autapomorphies), they might occupy a more basal position among the barbet-toucan clade altogether. On the other hand, they show some similarities to Trachyphonus in particular and have even been placed into this genus, but this move is not widely accepted.
"CMC 152", a distal carpometacarpus similar to that of barbets and found at the Middle Miocene locality of Grive-Saint-Alban (France) was considered to differ from Capitonides in the initial description, being closer to extant (presumably Old World) barbets. This fossil is sometimes lumped into Trachyphonus too; in this case it may have more merit.
Supposed fossil remains of Late Miocene Pogoniulus were found at Kohfidisch (Austria) but are not yet thoroughly studied. It is not clear whether they belong to the extant genus but given the late date this may well be so.
| Biology and health sciences | Piciformes | Animals |
12232494 | https://en.wikipedia.org/wiki/Underground%20power%20line | Underground power line | An underground power line provides electrical power with underground cables. Compared to overhead power lines, underground lines have lower risk of starting a wildfire and reduce the risk of the electrical supply being interrupted by outages during high winds, thunderstorms or heavy snow or ice storms. An added benefit of undergrounding is the aesthetic quality of the landscape without the powerlines. Undergrounding can increase the capital cost of electric power transmission and distribution but may decrease operating costs over the lifetime of the cables.
History
Early undergrounding had a basis in the detonation of mining explosives and in undersea telegraph cables. Electric cables were used in Russia to detonate mining explosives in 1812, and to carry telegraph signals across the English Channel in 1850.
With the spread of early electrical power systems, undergrounding began to increase as well. Thomas Edison used underground DC “street pipes” in his early electric power distribution networks; they were insulated first with jute in 1880, and progressed to rubber insulation in 1882.
Subsequent developments occurred in both insulation and fabrication techniques:
1925: Pressurized paper insulation used on cables
1930: PVC insulation used on cables
1942: Polyethylene insulation first used on cables
1962: Ethylene propylene rubber-insulated cables become commercially available
1963: Preformed cable accessories become available
1970s: Shrinkable cable accessories become available
During the 20th century direct-buried cable became commonplace.
Comparison
Aerial cables that carry high-voltage electricity and are supported by large pylons are generally considered an unattractive feature of the countryside. Underground cables can transmit power across densely populated areas or areas where land is costly, environmentally sensitive, or aesthetically sensitive. Underground and underwater crossings may be a practical alternative to crossing rivers.
For example, as of 2024, the Public Service Commission of Wisconsin determined that the installation cost of a 69-kilovolt aboveground power line is $284,000 per mile. In contrast, an equivalent underground line costs $1.5 million per mile. As ratepayers ultimately bear these costs, utilities exercise discretion in selecting which lines to bury.
Advantages
Less subject to damage from severe weather conditions (mainly lightning, hurricanes/cyclones/typhoons, tornados, other winds, and freezing)
Decreased risk of fire. Overhead power lines can draw high fault currents from vegetation-to-conductor, conductor-to-conductor, or conductor-to-ground contact, which result in large, hot arcs.
Reduced range of electromagnetic field (EMF) emission into the surrounding area. However, depending on the depth of the underground cable, greater EMF may be experienced at the surface directly above it. The electric current in the cable conductor produces a magnetic field, but the closer grouping of underground power cables reduces the resultant external magnetic field, and further magnetic shielding may be provided. See Electromagnetic radiation and health.
Underground cables need a narrower surrounding strip of about 1–10 meters to install (up to 30 m for 400 kV cables during construction), whereas an overhead line requires a surrounding strip of about 20–200 meters wide to be kept permanently clear for safety, maintenance, and repair.
Underground cables pose no hazard to low-flying aircraft or to wildlife.
Underground cables have a much-reduced risk of damage caused by human activity such as theft, illegal connections, sabotage, and damage from accidents.
Burying utility lines makes room for more large trees on sidewalks, providing environmental benefits and increasing property values.
Disadvantages
Undergrounding is more expensive: the cost of burying cables at transmission voltages is several times greater than for overhead power lines, and the life-cycle cost of an underground power cable is two to four times that of an overhead power line. Above-ground lines cost around $10 per unit length, with underground lines costing $20 to $40 over the same distance. In highly urbanized areas, the cost of underground transmission can be 10–14 times that of overhead. However, these calculations may neglect the cost of power interruptions. The lifetime cost difference is smaller for lower-voltage distribution networks, in the range of 12–28% higher than overhead lines of equivalent voltage.
Whereas finding and repairing overhead wire breaks can be accomplished in hours, underground repairs can take days or weeks, and for this reason redundant lines are run.
Underground cable locations are not always obvious, which can lead to unwary diggers damaging cables or being electrocuted.
Operations are more difficult, since the high capacitance of underground cables produces large charging currents, making voltage control harder and limiting how long an AC cable run can be (see the sketch after this list). To avoid capacitance issues when undergrounding long-distance transmission lines, HVDC links can be used, as they do not suffer from the same issue.
Whereas overhead lines can easily be uprated by modifying line clearances and power poles to carry more power, underground cables cannot be uprated and must be supplemented or replaced to increase capacity. Transmission and distribution companies generally future-proof underground lines by installing the highest-rated cables while being still cost-effective.
Underground cables are more subject to damage by ground movement. The 2011 Christchurch earthquake in New Zealand damaged high-voltage underground cables and subsequently cut power to large parts of Christchurch city, whereas only a few kilometres of overhead lines were damaged, largely due to pole foundations being compromised by liquefaction.
As underground repair and check-ups require street digging, they create patches and potholes, leading to bumpy and unsafe rides for cars and bicycles. Utility work also increases lane closure, which leads to traffic jams and increases the cost of resurfacing work by the local government.
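As a rough illustration of the charging-current limit mentioned in the list above, the sketch below computes the capacitive charging current per kilometre for an assumed 220 kV AC cable and the length at which that current alone would consume the cable's entire thermal rating. All parameter values here are assumptions chosen for illustration, not figures from the source:

```python
# Illustrative sketch of why cable capacitance limits AC underground line
# length: charging current grows with length until it uses up the cable's
# thermal current rating, leaving no capacity for useful load.
import math

F_HZ = 50.0        # assumed system frequency
V_LL = 220e3       # assumed line-to-line voltage, volts
C_PER_KM = 0.2e-6  # assumed shunt capacitance per km (typical order for XLPE cable)
I_RATING = 800.0   # assumed thermal current rating of the conductor, amperes

V_PHASE = V_LL / math.sqrt(3)
i_charge_per_km = 2 * math.pi * F_HZ * C_PER_KM * V_PHASE  # A per km

max_km = I_RATING / i_charge_per_km
print(f"Charging current: {i_charge_per_km:.2f} A/km")        # ~8 A/km
print(f"Length at which charging current fills the rating: {max_km:.0f} km")
# An HVDC cable draws no steady-state charging current, so no such limit applies.
```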
Methods
Horizontal boring – a method in which a drill bit bores horizontally, starting at one point on the surface, following an arc underground, and re-emerging at another point on the surface. This method is used when minimal damage to the surface is preferred.
Trench undergrounding – digging trenches, laying power lines into the trench, and covering them back up, for the full length of the power line.
Duct bank – parallel conduits held by spacers, with sand or concrete filled in between the conduits. Installation methods include placing conduit and spacers directly into a trench, placing conduit and spacers in concrete forms, or using pre-made sections of concrete and conduit.
Regulations
Europe
The UK regulator, the Office of Gas and Electricity Markets (OFGEM), permits transmission companies to recoup the cost of some undergrounding in their prices to consumers. To qualify, the undergrounding must be in a National Park or a designated Area of Outstanding Natural Beauty (AONB). In 2021, work started on a project to bury a section of 400 kV overhead power line running from near Winterbourne Abbas to Friar Waddon, north-west of Weymouth, in the Dorset AONB. Similar schemes are planned for Snowdonia, the Peak District and the North Wessex Downs.
The most visually intrusive overhead cables of the core transmission network are excluded from the scheme. Some undergrounding projects are funded by the proceeds of the national lottery.
All low and medium voltage electrical power (<50 kV) in the Netherlands is now supplied underground.
In Germany, 73% of medium-voltage cables and 87% of low-voltage cables are underground. The high percentage of underground cables contributes to the country's very high grid reliability, with a SAIDI value (minutes without electricity per customer per year) below 20. In comparison, the SAIDI value in the Netherlands is about 30, and in the UK it is about 70.
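For readers unfamiliar with the index, SAIDI is simply the total customer-minutes of interruption divided by the number of customers served. A minimal sketch, with invented outage records:

```python
# Minimal sketch of how a SAIDI figure like those quoted above is computed:
# total customer-minutes of interruption divided by total customers served.
# The outage records and customer count below are invented for illustration.

outages = [
    # (customers_affected, minutes_out)
    (1200, 45),
    (300, 120),
    (5000, 10),
]
TOTAL_CUSTOMERS = 100_000  # assumed size of the utility's customer base

customer_minutes = sum(n * mins for n, mins in outages)
saidi = customer_minutes / TOTAL_CUSTOMERS
print(f"SAIDI: {saidi:.2f} minutes per customer per year")  # 1.40 here
```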
California
In the United States, the California Public Utilities Commission (CPUC) Rule 20 permits the undergrounding of electrical power cables under certain situations. Rule 20A projects are paid for by all customers of the utility companies. Rule 20B projects are only partially funded this way, with ratepayer funding covering the cost of an equivalent overhead system. Rule 20C projects enable property owners to fund the undergrounding.
Japan
Most electrical power in Japan is still distributed by aerial cables. In Tokyo's 23 wards, according to Japan's Construction and Transport Ministry, just 7.3 percent of cables were laid underground as of March 2008.
Variants
A compromise between undergrounding and using overhead lines is the use of aerial cables: insulated cables spun between poles and used for power transmission or telecommunication services. An advantage of aerial cables is that their insulation removes the danger of electric shock (unless the cables are damaged). Another advantage is that they avoid the costs of burying, which are particularly high in rocky areas. The disadvantages of aerial cables are that they have the same aesthetic issues as standard overhead lines and that they can be affected by storms. However, if the insulation is not destroyed during pylon failure or when hit by a tree, there is no interruption of service. Electrical hazards are minimised and re-hanging the cables may be possible without power interruption.
| Technology | Earthworks | null |
3211372 | https://en.wikipedia.org/wiki/Screening%20%28medicine%29 | Screening (medicine) | Screening, in medicine, is a strategy used to look for as-yet-unrecognised conditions or risk markers. This testing can be applied to individuals or to a whole population without symptoms or signs of the disease being screened.
Screening interventions are designed to identify conditions which could at some future point turn into disease, thus enabling earlier intervention and management in the hope to reduce mortality and suffering from a disease. Although screening may lead to an earlier diagnosis, not all screening tests have been shown to benefit the person being screened; overdiagnosis, misdiagnosis, and creating a false sense of security are some potential adverse effects of screening. Additionally, some screening tests can be inappropriately overused. For these reasons, a test used in a screening program, especially for a disease with low incidence, must have good sensitivity in addition to acceptable specificity.
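The requirement for good sensitivity and acceptable specificity can be made concrete with Bayes' rule: at low prevalence, even a fairly accurate test produces mostly false positives. The following sketch uses illustrative numbers (99% sensitivity, 95% specificity, prevalence of 1 in 1,000), which are assumptions rather than figures from the source:

```python
# Sketch of why specificity matters when screening for a rare disease:
# even an accurate test yields mostly false positives at low prevalence.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive screening result is a true positive."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed: 99% sensitive, 95% specific test, condition affecting 1 in 1,000.
ppv = positive_predictive_value(0.99, 0.95, 0.001)
print(f"PPV: {ppv:.3f}")  # ~0.019 -- about 98% of positives are false alarms
```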
Several types of screening exist: universal screening involves screening of all individuals in a certain category (for example, all children of a certain age). Case finding involves screening a smaller group of people based on the presence of risk factors (for example, because a family member has been diagnosed with a hereditary disease). Screening interventions are not designed to be diagnostic, and often have significant rates of both false positive and false negative results.
Frequently updated recommendations for screening are provided by the United States Preventive Services Task Force, an independent panel of experts.
Principles
In 1968, the World Health Organization published guidelines on the Principles and practice of screening for disease, which is often referred to as the Wilson and Jungner criteria. The principles are still broadly applicable today:
The condition should be an important health problem.
There should be a treatment for the condition.
Facilities for diagnosis and treatment should be available.
There should be a latent stage of the disease.
There should be a test or examination for the condition.
The test should be acceptable to the population.
The natural history of the disease should be adequately understood.
There should be an agreed policy on whom to treat.
The total cost of finding a case should be economically balanced in relation to medical expenditure as a whole.
Case-finding should be a continuous process, not just a "once and for all" project.
In 2008, with the emergence of new genomic technologies, the WHO synthesised and modified these with the new understanding as follows:
Synthesis of emerging screening criteria proposed over the past 40 years
The screening programme should respond to a recognized need.
The objectives of screening should be defined at the outset.
There should be a defined target population.
There should be scientific evidence of screening programme effectiveness.
The programme should integrate education, testing, clinical services and programme management.
There should be quality assurance, with mechanisms to minimize potential risks of screening.
The programme should ensure informed consent, confidentiality and respect for personal, bodily autonomy.
The programme should promote equity and access to screening for the entire target population.
Programme evaluation should be planned from the outset.
The overall benefits of screening should outweigh the harm.
In summation, "when it comes to the allocation of scarce resources, economic considerations must be considered alongside 'notions of justice, equity, personal freedom, political feasibility, and the constraints of current law'."
Types
Mass screening: The screening of a whole population or subgroup. It is offered to all, irrespective of the risk status of the individual.
High risk or selective screening: High risk screening is conducted only among high-risk people.
Multiphasic screening: The application of two or more screening tests to a large population at one time, instead of carrying out separate screening tests for single diseases.
When done thoughtfully and based on research, identification of risk factors can be a strategy for medical screening.
Examples
Common programs
In many countries there are population-based screening programmes. In some countries, such as the UK, policy is made nationally and programmes are delivered nationwide to uniform quality standards. Common screening programmes include:
Cancer screening
Pap smear or liquid-based cytology to detect potentially precancerous lesions and prevent cervical cancer
Mammography to detect breast cancer
Colonoscopy and fecal occult blood test to detect colorectal cancer
Dermatological check to detect melanoma
PSA to detect prostate cancer
PPD test to screen for exposure to tuberculosis
Beck Depression Inventory to screen for depression
SPAI-B, the Liebowitz Social Anxiety Scale and Social Phobia Inventory to screen for social anxiety disorder
Alpha-fetoprotein, blood tests and ultrasound scans for pregnant women to detect fetal abnormalities
Bitewing radiographs to screen for interproximal dental caries
Ophthalmoscopy or digital photography and image grading for diabetic retinopathy
Ultrasound scan for abdominal aortic aneurysm
SARI Screening Tool for COVID-19 and MERS
Screening of potential sperm bank donors
Screening for metabolic syndrome
Screening for potential hearing loss in newborns
School-based
Most public school systems in the United States screen students periodically for hearing and vision deficiencies and dental problems. Screening for spinal and posture issues such as scoliosis is sometimes carried out, but is controversial as scoliosis (unlike vision or dental issues) is found in only a very small segment of the general population and because students must remove their shirts for screening. Many states no longer mandate scoliosis screenings, or allow them to be waived with parental notification. Bills are currently being introduced in various U.S. states to mandate mental health screenings for students attending public schools, in the hope of preventing self-harm as well as harm to peers. Those proposing these bills hope to diagnose and treat mental illnesses such as depression and anxiety.
Screening for social determinants of health
The social determinants of health are the economic and social conditions that influence individual and group differences in health status. These conditions may have adverse effects on health and well-being. To mitigate those adverse effects, certain health policies, such as the United States Affordable Care Act (2010), gave increased traction to preventive programs, including those that routinely screen for social determinants of health. Screening is believed to be a valuable tool for identifying patients' basic needs in a social determinants of health framework so that they can be better served.
Policy background in the United States
When established in the United States, the Affordable Care Act was able to bridge the gap between community-based health and healthcare as a medical treatment, leading to programs that screened for social determinants of health. The Affordable Care Act established several services with an eye for social determinants or an openness to more diverse clientele, such as Community Transformation Grants, which were delegated to the community in order to establish "preventive community health activities" and "address health disparities".
Clinical programs
Social determinants of health include social status, gender, ethnicity, economic status, education level, access to services, immigrant status, upbringing, and many other factors. Several clinics across the United States have employed a system in which they screen patients for certain risk factors related to social determinants of health. In such cases, it is done as a preventive measure to mitigate the detrimental effects of prolonged exposure to certain risk factors, or to begin remedying the adverse effects already faced by certain individuals. The programs can be structured in different ways, for example online or in person, and yield different outcomes based on the patient's responses. Some programs, like the FIND Desk at UCSF Benioff Children's Hospital, use screening for social determinants of health to connect their patients with social services and community resources that may provide them greater autonomy and mobility.
Medical equipment used
Medical equipment used in screening tests is usually different from equipment used in diagnostic tests. Screening tests are used to indicate the likely presence or absence of a disease or condition in people not presenting symptoms, while diagnostic medical equipment is used to make quantitative physiological measurements to confirm and determine the progress of a suspected disease or condition. Medical screening equipment must be capable of fast processing of many cases, but may not need to be as precise as diagnostic equipment.
Limitations
Screening can detect medical conditions at an early stage, before symptoms present, when treatment is more effective than after later detection. In the best of cases lives are saved. Like any medical test, the tests used in screening are not perfect. The test result may incorrectly show positive for those without disease (false positive), or negative for people who have the condition (false negative). Limitations of screening programmes can include:
Screening can involve cost and use of medical resources on a majority of people who do not need treatment.
Adverse effects of screening procedure (e.g. stress and anxiety, discomfort, radiation exposure, chemical exposure).
Stress and anxiety caused by prolonging knowledge of an illness without any improvement in outcome. This problem is referred to as overdiagnosis (see also below).
Stress and anxiety caused by a false positive screening result.
Unnecessary investigation and treatment of false positive results (namely misdiagnosis with Type I error).
A false sense of security caused by false negatives, which may delay final diagnosis (namely misdiagnosis with Type II error).
Screening for dementia in the English NHS is controversial because it could cause undue anxiety in patients and stretch support services. A GP reported: "The main issue really seems to be centred around what the consequences of such a diagnosis are and what is actually available to help patients."
Analysis
To many people, screening instinctively seems like an appropriate thing to do, because catching something earlier seems better. However, no screening test is perfect. There will always be the problems with incorrect results and other issues listed above. It is an ethical requirement for balanced and accurate information to be given to participants at the point when screening is offered, in order that they can make a fully informed choice about whether or not to accept.
Before a screening program is implemented, it should be evaluated to ensure that putting it in place would do more good than harm. The best studies for assessing whether a screening test will increase a population's health are rigorous randomized controlled trials. When studying a screening program using case-control or, more usually, cohort studies, various factors can cause the screening test to appear more successful than it really is. A number of different biases, inherent in the study method, will skew results.
Overdiagnosis
Screening may identify abnormalities that would never cause a problem in a person's lifetime.
An example of this is prostate cancer screening; it has been said that "more men die with prostate cancer than of it". Autopsy studies have shown that between 14 and 77% of elderly men who have died of other causes are found to have had prostate cancer.
Aside from issues with unnecessary treatment (prostate cancer treatment is by no means without risk), overdiagnosis makes a study look good at picking up abnormalities, even though they are sometimes harmless.
Overdiagnosis occurs when all of these people with harmless abnormalities are counted as "lives saved" by the screening, rather than as "healthy people needlessly harmed by overdiagnosis". This can lead to an endless cycle: the greater the overdiagnosis, the more people will think screening is more effective than it is, which encourages people to undergo more screening tests, leading to even more overdiagnosis. Raffle, Mackie and Gray call this the popularity paradox of screening: "The greater the harm through overdiagnosis and overtreatment from screening, the more people there are who believe they owe their health, or even their life, to the programme" (p. 56, Box 3.4).
The screening for neuroblastoma, the most common malignant solid tumor in children, in Japan is a very good example of why a screening program must be evaluated rigorously before it is implemented. In 1981, Japan started a program of screening for neuroblastoma by measuring homovanillic acid and vanillylmandelic acid in urine samples of six-month-old infants. In 2003, a special committee was organized to evaluate the motivation for the neuroblastoma screening program. That same year, the committee concluded that there was sufficient evidence that the screening method used at the time led to overdiagnosis, but not enough evidence that the program reduced neuroblastoma deaths. As such, the committee recommended against screening, and the Ministry of Health, Labour and Welfare decided to stop the screening program.
Another example of overdiagnosis involves thyroid cancer: its incidence tripled in the United States between 1975 and 2009, while mortality remained constant. In South Korea, the situation was even worse, with a 15-fold increase in incidence from 1993 to 2011 (the world's greatest increase in thyroid cancer incidence) while mortality remained stable. The increase in incidence was associated with the introduction of ultrasonography screening.
The problem with overdiagnosis in cancer screening is that at the time of diagnosis it is not possible to differentiate between a harmless lesion and a lethal one, unless the patient is left untreated and later dies of other causes. As a result, almost all patients tend to be treated, leading to what is called overtreatment. As researchers Welch and Black put it, "Overdiagnosis—along with the subsequent unneeded treatment with its attendant risks—is arguably the most important harm associated with early cancer detection."
Lead time bias
If screening works, it must diagnose the target disease earlier than it would be diagnosed without screening (that is, when symptoms appear).
Even if in both cases (with and without screening) patients die at the same time, the survival time since diagnosis is longer in screened people than in people who were not screened, simply because the disease was diagnosed earlier. This happens even when life span has not been prolonged. As the diagnosis was made earlier without life being prolonged, patients may also be more anxious, as they must live with knowledge of the diagnosis for longer.
If screening works, it must introduce a lead time. So statistics of survival time since diagnosis tend to increase with screening because of the lead time introduced, even when screening offers no benefit. If we do not think about what survival time actually means in this context, we might attribute success to a screening test that does nothing but advance diagnosis. Because survival statistics suffer from this and other biases, comparing disease mortality (or even all-cause mortality) between screened and unscreened populations gives more meaningful information.
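A minimal numeric sketch of the effect, with invented ages: two identical disease courses end at age 70, yet earlier diagnosis alone makes "survival since diagnosis" in the screened case look much longer.

```python
# Numeric illustration of lead time bias (invented numbers): two identical
# patients die at age 70. Screening detects the disease at age 62 instead
# of 67, so "survival since diagnosis" grows from 3 to 8 years even though
# no life was prolonged.
age_at_death = 70
age_dx_symptoms = 67   # diagnosis when symptoms appear (no screening)
age_dx_screening = 62  # earlier diagnosis via screening

survival_unscreened = age_at_death - age_dx_symptoms  # 3 years
survival_screened = age_at_death - age_dx_screening   # 8 years
lead_time = age_dx_symptoms - age_dx_screening        # 5 years

print(f"Unscreened survival since diagnosis: {survival_unscreened} years")
print(f"Screened survival since diagnosis:   {survival_screened} years")
print(f"Apparent benefit is entirely lead time: {lead_time} years")
```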
Length time bias
Many screening tests involve the detection of cancers. Screening is more likely to detect slower-growing tumors (due to their longer pre-clinical sojourn time), which are less likely to cause harm. Aggressive cancers, by contrast, tend to produce symptoms in the gaps between scheduled screenings and are less likely to be detected by screening. So the cases screening detects automatically have a better prognosis, on average, than symptomatic cases. The consequence is that more slowly progressing cases are classified as cancers, which increases the incidence, and, due to their better prognosis, the survival rates of screened people will be better than those of non-screened people even if screening makes no difference.
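A toy simulation can make the mechanism concrete. In the sketch below, fast- and slow-growing tumours are equally common, but a screen run at a fixed interval catches far more of the slow growers; the sojourn times and screening interval are invented for illustration.

```python
# Toy simulation (assumptions throughout) of length time bias: tumours with
# longer pre-clinical sojourn times are more likely to be caught by a
# periodic screen, so screen-detected cases are skewed toward slow growers.
import random

random.seed(1)
SCREEN_INTERVAL = 2.0  # assumed years between screening rounds

slow_detected = fast_detected = 0
for _ in range(100_000):
    # Sojourn time: how long the tumour is detectable before causing symptoms.
    sojourn = random.choice([0.5, 4.0])  # fast-growing vs slow-growing, equally common
    # Tumour onset is uniform within a screening interval; it is screen-detected
    # if the next scheduled screen falls inside its sojourn window.
    onset = random.uniform(0, SCREEN_INTERVAL)
    if (SCREEN_INTERVAL - onset) < sojourn:
        if sojourn == 4.0:
            slow_detected += 1
        else:
            fast_detected += 1

print(f"Screen-detected slow growers: {slow_detected}")   # ~50,000
print(f"Screen-detected fast growers: {fast_detected}")   # ~12,500
# Slow growers dominate the screen-detected cases, even though both types
# arise equally often in the simulated population.
```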
Selection bias
Not everyone will partake in a screening program. There are factors that differ between those willing to get tested and those who are not.
If people with a higher risk of a disease are more likely to be screened, for instance women with a family history of breast cancer are more likely than other women to join a mammography program, then a screening test will look worse than it really is: negative outcomes among the screened population will be higher than for a random sample.
Selection bias may also make a test look better than it really is. If a test is more available to young and healthy people (for instance if people have to travel a long distance to get checked) then fewer people in the screening population will have negative outcomes than for a random sample, and the test will seem to make a positive difference.
Studies have shown that people who attend screening tend to be healthier than those who do not. This has been called the healthy screenee effect, a form of selection bias. The reason seems to be that people who are healthy, affluent, physically fit, non-smokers with long-lived parents are more likely to come and get screened than those on low incomes who have existing health and social problems. One example of selection bias occurred in the Edinburgh trial of mammography screening, which used cluster randomisation. The trial found reduced cardiovascular mortality in those who were screened for breast cancer. That happened because of baseline differences in socio-economic status between the groups: 26% of the women in the control group and 53% in the study group belonged to the highest socioeconomic level.
Study design for the research of screening programs
The best way to minimize selection bias is to use a randomized controlled trial, though observational, naturalistic, or retrospective studies can be of some value and are typically easier to conduct. Any study must be sufficiently large (include many patients) and sufficiently long (follow patients for many years) to have the statistical power to assess the true value of a screening program. For rare diseases, hundreds of thousands of patients may be needed to realize the value of screening (find enough treatable disease), and to assess the effect of the screening program on mortality a study may have to follow the cohort for decades. Such studies take a long time and are expensive, but can provide the most useful data with which to evaluate the screening program and practice evidence-based medicine.
All-cause mortality vs disease-specific mortality
The main outcome of cancer screening studies is usually the number of deaths caused by the disease being screened for, called disease-specific mortality. To give an example: in trials of mammography screening for breast cancer, the main outcome reported is often breast cancer mortality. However, disease-specific mortality might be biased in favor of screening. In the breast cancer example, women overdiagnosed with breast cancer might receive radiotherapy, which increases mortality due to lung cancer and heart disease. Those deaths are often classified under other causes and might even outnumber the breast cancer deaths avoided by screening. The unbiased outcome, therefore, is all-cause mortality. However, much larger trials are needed to detect a significant reduction in all-cause mortality. In 2016, researcher Vinay Prasad and colleagues published an article in BMJ titled "Why cancer screening has never been shown to save lives", noting that cancer screening trials have not shown all-cause mortality reduction.
| Biology and health sciences | Diagnostics | Health |
3212996 | https://en.wikipedia.org/wiki/Pseudevernia%20furfuracea | Pseudevernia furfuracea | Pseudevernia furfuracea, commonly known as tree moss, is a lichenized species of fungus that grows on the bark of firs and pines. The lichen is rather sensitive to air pollution, so its presence usually indicates good air quality where it grows. The species has numerous human uses, including in perfume, embalming, and medicine. Large amounts of tree moss are processed annually in France for the perfume industry.
Description
Pseudevernia furfuracea is associated with photobionts from the green algae genus Trebouxia. It reproduces asexually by isidia. The ontogeny of isidia development and its role in CO2 gas exchange in P. furfuracea has been investigated.
The preferred growing surfaces for P. furfuracea are the so-called "nutrient poor" bark trees, including birch, pine and spruce.
The species has two morphologically identical varieties that are distinguished by the secondary metabolites they produce: var. ceratea Zopf. produces olivetoric acid and other physodic acids, while var. furfuracea produces physodic but not olivetoric acid. Some authors (e.g., Hale 1968) have separated the chemotypes at the species level, designating the olivetoric acid-containing specimens as Pseudevernia olivetorina, but more recent literature separates them at the varietal level.
Uses
Perfumes
Large amounts of tree moss (approximately 1900 tons in 1997) are processed in Grasse, France for the perfume industry.
Embalming
In ancient Egyptian embalming, P. furfuracea was found packed into the body cavity of mummies, although it is not certain whether this was done because of the supposed preservative properties or the aromatic properties of the lichen.
Antimicrobial activity
Soluble extracts from P. furfuracea var. furfuracea and var. ceratea, as well as specific compounds found therein, have antimicrobial activity against a variety of microorganisms.
Medicinal use
In Alfacar and Viznar, Andalucia (Spain), P. furfuracea is used for respiratory complaints. The thallus is washed and boiled for a long time to prepare a decoction that is drunk.
Water extracts of this species have been shown to have a potent protective effect on genotoxicity caused by bismuth compounds such as colloidal bismuth subcitrate.
Heavy metal sorption
Pseudevernia furfuracea has been investigated for its ability to absorb heavy metals from solution. The metal-binding biosorption for copper(II) and nickel(II) was shown to follow the Langmuir and Freundlich isotherm models, suggesting it may have potential as a biosorbent for treatment of heavy metal wastes.
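For reference, the two isotherm models mentioned have the standard forms below, where $q_e$ is the metal uptake at equilibrium, $C_e$ the equilibrium solution concentration, $q_{\max}$ and $K_L$ the Langmuir capacity and affinity constants, and $K_F$ and $n$ the Freundlich constants:

$$q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e} \quad \text{(Langmuir)}, \qquad q_e = K_F \, C_e^{1/n} \quad \text{(Freundlich)}$$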
Pollution monitors
Because specimens of P. furfuracea tend to bioaccumulate heavy metals such as Cr, Zn, Cd, Pb, Ni, Fe, Mn and Cu in proportion to their concentration in airborne particulates, they may be used as biomonitors of air quality, although it has been noted that both trace metal accumulation and major element accumulation are partly dependent on the hydration level of the specimen. The species is also sensitive to ozone concentrations: ozone fumigation results in biophysical, physiological, and structural impairment of specimens.
P. furfuracea has also been used to monitor the levels of radionuclides such as Cesium-137 in Austria after the Chernobyl nuclear accident.
Conservation status
In Iceland, P. furfuracea is found in only two locations and is classified as critically endangered (CR).
Bioactive compounds
In addition to the physodic acid mentioned above, P. furfuracea also contains 2-hydroxy-4-methoxy-3,6-dimethyl benzoic acid, atranorin, oxyphysodic acid, and virensic acid. Of these compounds, atranorin showed the highest inhibition of proteolytic enzymes trypsin and porcine pancreatic elastase. Research suggests that the biosynthesis of both atranorin and physodic acid is influenced by the cooperation of epiphytic bacteria.
A number of sterol compounds have been identified from P. furfuracea, including ergosterol peroxide, ergosterol and lichosterol.
| Biology and health sciences | Lichens | Plants |
16637905 | https://en.wikipedia.org/wiki/Lattice%20phase%20equaliser | Lattice phase equaliser | A lattice phase equaliser or lattice filter is an example of an all-pass filter. That is, the attenuation of the filter is constant at all frequencies but the relative phase between input and output varies with frequency. The lattice filter topology has the particular property of being a constant-resistance network and for this reason is often used in combination with other constant-resistance filters such as bridge-T equalisers. The topology of a lattice filter, also called an X-section, is identical to bridge topology. The lattice phase equaliser was invented by Otto Zobel using a filter topology proposed by George Campbell.
Characteristics
The characteristic impedance of this structure is given by

$$Z_0 = \sqrt{Z_a Z_b}$$

and the transfer function is given by

$$H(s) = \frac{Z_b - Z_0}{Z_b + Z_0},$$

where Z_a is the impedance of the series arms and Z_b that of the diagonal (lattice) arms.
Applications
The lattice filter has an important application on lines used by broadcasters for stereo audio feeds. Phase distortion on a monophonic line does not have a serious effect on the quality of the sound unless it is very large. The same is true of the absolute phase distortion on each leg (left and right channels) of a stereo pair of lines. However, the differential phase between legs has a very dramatic effect on the stereo image. This is because the formation of the stereo image in the brain relies on the phase difference information from the two ears. A phase difference translates to a delay, which in turn can be interpreted as a direction the sound came from. Consequently, landlines used by broadcasters for stereo transmissions are equalised to very tight differential phase specifications.
Another property of the lattice filter is that it is an intrinsically balanced topology. This is useful when used with landlines which invariably use a balanced format. Many other types of filter section are intrinsically unbalanced and have to be transformed into a balanced implementation in these applications, which increases the component count. This is not required in the case of lattice filters.
Design
The essential requirement for a lattice filter is that for it to be constant resistance, the lattice element of the filter must be the dual of the series element with respect to the characteristic impedance. That is,

$$Z \, Z' = R_0^2,$$

where Z is the series element, Z' the lattice element, and R_0 the characteristic resistance.
Such a network, when terminated in R_0, will have an input resistance of R_0 at all frequencies. If the impedance Z is purely reactive such that Z = iX, then the phase shift, φ, inserted by the filter is given by

$$\varphi = -2 \tan^{-1}\!\left(\frac{X}{R_0}\right).$$
The prototype lattice filter shown here passes low frequencies without modification but phase-shifts high frequencies. That is, it provides phase correction for the high end of the band. At low frequencies the phase shift is 0°, but as the frequency increases the phase shift approaches 180°. It can be seen qualitatively that this is so by replacing the inductors with open circuits and the capacitors with short circuits, which is what they become at high frequencies. At high frequencies the lattice filter becomes a cross-over network and produces 180° of phase shift. A 180° phase shift is the same as an inversion in the frequency domain, but is a delay in the time domain. At an angular frequency of ω = 1 rad/s the phase shift is exactly 90°, and this is the midpoint of the filter's transfer function.
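These properties are easy to verify numerically. The sketch below evaluates the prototype's transfer function H(jω) = (R0 − jωL)/(R0 + jωL) with normalised values R0 = 1 Ω and L = 1 H, confirming unit gain at every frequency and a 90° phase shift at the midpoint; the check itself is illustrative, not from the source.

```python
# Quick numerical check that the all-pass prototype has unit gain at all
# frequencies and passes through -90 degrees of phase at the normalised
# midpoint frequency of 1 rad/s.
import cmath, math

R0, L = 1.0, 1.0  # normalised prototype values
for w in (0.1, 1.0, 10.0):
    h = (R0 - 1j * w * L) / (R0 + 1j * w * L)
    print(f"w = {w:5.1f}  |H| = {abs(h):.3f}  "
          f"phase = {math.degrees(cmath.phase(h)):8.2f} deg")
# |H| is 1.000 everywhere; phase runs from ~0 toward -180 degrees,
# passing through -90 degrees at w = 1 rad/s.
```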
Low-in-phase section
The prototype section can be scaled and transformed to the desired frequency, impedance and bandform by applying the usual prototype filter transforms. A filter which is in-phase at low frequencies (that is, one that is correcting phase at high frequencies) can be obtained from the prototype with simple scaling factors.
The phase response of a scaled filter is given by

$$\varphi = -2 \tan^{-1}\!\left(\frac{\omega}{\omega_m}\right),$$

where ω_m is the midpoint frequency and is given by

$$\omega_m = \frac{1}{\sqrt{LC}}.$$
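A worked example of such a scaling (the target values here, R0 = 600 Ω and a 1 kHz midpoint, are assumptions chosen for illustration): solving R0 = √(L/C) and ω_m = 1/√(LC) simultaneously gives L = R0/ω_m and C = 1/(R0 ω_m).

```python
# Worked scaling example with assumed target values: find L and C for a
# low-in-phase lattice section with R0 = 600 ohms and a 1 kHz midpoint.
import math

R0 = 600.0    # ohms, a common audio line impedance (assumed target)
f_m = 1000.0  # desired midpoint frequency, Hz (assumed target)
w_m = 2 * math.pi * f_m

L = R0 / w_m        # series-arm inductance
C = 1 / (R0 * w_m)  # lattice-arm capacitance

print(f"L = {L*1e3:.1f} mH, C = {C*1e9:.1f} nF")  # 95.5 mH, 265.3 nF
# Check: sqrt(L/C) recovers R0 and 1/sqrt(LC) recovers w_m.
assert math.isclose(math.sqrt(L / C), R0)
assert math.isclose(1 / math.sqrt(L * C), w_m)
```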
High-in-phase section
A filter that is in-phase at high frequencies (that is, a filter to correct low-end phase) can be obtained by applying the high-pass transformation to the prototype filter. However, it can be seen that due to the lattice topology this is also equivalent to a crossover on the output of the corresponding low-in-phase section. This second method may not only make calculation easier but it is also a useful property where lines are being equalised on a temporary basis, for instance for outside broadcasts. It is desirable to keep the number of different types of adjustable sections to a minimum for temporary work and being able to use the same section for both high end and low end correction is a distinct advantage.
Band equalise section
A filter that corrects a limited band of frequencies (that is, a filter that is in-phase everywhere except in the band being corrected) can be obtained by applying the band-stop transformation to the prototype filter. This results in resonant elements appearing in the filter's network.
An alternative, and possibly more accurate, view of this filter's response is to describe it as a phase change that varies from 0° to 360° with increasing frequency. At 360° phase shift, of course, the input and output are now back in phase with each other.
Resistance compensation
With ideal components there is no need to use resistors in the design of lattice filters. However, practical considerations of properties of real components leads to resistors being incorporated. Sections designed to equalise low audio frequencies will have larger inductors with a high number of turns. This results in significant resistance being in the inductive branches of the filter, which in turn causes attenuation at low frequencies.
In the example diagram, the resistors placed in series with the capacitors, R1, are made equal to the unwanted stray resistance present in the inductors. This ensures that the attenuation at high frequency is the same as the attenuation at low frequency and brings the filter back to a flat response. The purpose of the shunt resistors, R2, is to bring the image impedance of the filter back to the original design R0. The resulting filter is the equivalent of a box attenuator formed from the R1's and R2's connected in cascade with an ideal lattice filter as shown in the diagram.
Unbalanced topology
The lattice phase equaliser cannot be directly transformed into T-section topology without introducing active components. However, a T-section is possible if ideal transformers are introduced. Transformer action can be conveniently achieved in the low-in-phase T-section by winding both inductors on a common core. The response of this section is identical to that of the original lattice, albeit with a non-constant-resistance input. This circuit was first used by George Washington Pierce, who needed a delay line as part of the improved sonar he developed between the world wars. Pierce used a cascade of these sections to provide the required delay. The circuit can be considered a low-pass m-derived filter with a value of m that puts the transmission zero on the jω axis of the complex frequency plane. Other unbalanced transformations utilising ideal transformers are possible; one such is shown on the right.
| Technology | Signal processing | null |
11280915 | https://en.wikipedia.org/wiki/Frailty%20syndrome | Frailty syndrome | Frailty or frailty syndrome refers to a state of health in which older adults gradually lose their bodies' in-built reserves and functioning. This makes them more vulnerable, less able to recover and even apparently minor events (infections, environmental changes) can have drastic impacts on their physical and mental health.
Frailty can have various symptoms including muscle weakness (reduced grip strength), slower walking speed, exhaustion, unintentional weight loss, and frequent falls. Older people with certain medical conditions such as diabetes, heart disease and dementia, are also more likely to have frailty. In addition, adults living with frailty face more symptoms of anxiety and depression than those who do not.
Frailty is not an inevitable part of aging. Its development can be prevented, delayed and its progress slowed. The most effective ways of preventing or improving frailty are regular physical activity and a healthy diet.
The prevalence of frailty varies based on countries and the assessment technique but it is estimated to range from 12% to 24% in people over 50.
Frailty has impacts on public health because the factors that comprise the syndrome affect physical and mental health outcomes. There are several ways to identify, prevent, and mitigate the prevalence of frailty, and frailty can be evaluated through clinical assessments designed to combine its recognized signs and symptoms.
Definitions
Frailty refers to an age-related functional decline and heightened state of vulnerability: a worsening of functional status beyond the normal physiological process of aging. It involves a combined decline in the physical and physiological capacities of the body. The reduced reserve capacity of organ systems, muscle, and bone creates a state in which the body cannot cope with stressors such as illness or falls. Frailty can lead to an increased risk of adverse side effects, complications, and mortality.
Older age by itself is not what defines frailty; it is, however, a syndrome found in older adults, and many adults over 65 are not living with frailty. Frailty is not one specific disease but a combination of many factors. There are no specific universal criteria by which frailty is diagnosed; rather, a combination of signs and symptoms can lead to a diagnosis. Evaluations can be made of physical status, weight fluctuations, or subjective symptoms. Frailty most commonly refers to physical status and is not a syndrome of mental capacity such as dementia, which is a decline in cognitive function, although frailty can be a risk factor for the development of dementia.
Although no universal diagnostic criteria exist, some clinical screening tools are commonly used to identify frailty. These include the Fried frailty phenotype and a deficit-accumulation frailty index. The Fried frailty phenotype assesses five domains commonly affected by frailty: exhaustion, weakness, slowness, physical inactivity, and weight loss. The presence of 1–2 findings is classified as "pre-frailty", 3 or more as frailty, and the presence of all 5 indicates "end-stage frailty", which is associated with a poor prognosis. The deficit-accumulation characterization of frailty tallies deficits present in a variety of clinical areas (including nutritional deficiency, laboratory abnormalities, disability index, and cognitive and physical impairment) to create a frailty index. A higher number of deficits is associated with a worse prognosis.
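A minimal sketch of the Fried phenotype scoring just described, counting how many of the five criteria are present and mapping the count to a category (the clinical thresholds behind each criterion are not encoded here):

```python
# Minimal sketch of Fried phenotype scoring: count how many of the five
# criteria are present and map the count to a category, per the cutoffs
# described above (1-2 pre-frail, 3+ frail, all 5 end-stage).

FRIED_CRITERIA = ("exhaustion", "weakness", "slowness",
                  "physical_inactivity", "weight_loss")

def fried_category(findings: set[str]) -> str:
    """Classify by the number of Fried criteria present."""
    n = sum(1 for c in FRIED_CRITERIA if c in findings)
    if n == 0:
        return "robust"
    if n <= 2:
        return "pre-frail"
    if n == 5:
        return "end-stage frail"
    return "frail"

print(fried_category({"weakness", "slowness"}))                # pre-frail
print(fried_category({"weakness", "slowness", "exhaustion"}))  # frail
```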
Signs and symptoms
Frailty is a complex condition that is a result of multiple body systems experiencing decline in function, and the more body systems that are affected, the higher the risk is for developing frailty. There is a variety of risk factors and signs that can suggest an older person having frailty. However, the development of any of these risk factors or signs alone does not establish frailty as they can be symptoms of numerous other health conditions. For establishing that a person has frailty multiple factors or signs need to be present at the same time.
Most often frailty is identified by having three out of five of the following symptoms: unintentional weight loss, muscle weakness, self-reported exhaustion, slowness, and low physical activity. At the same time, many other risk factors, signs, and symptoms can be part of frailty. The presence of some factors is thought to increase the likelihood of having or developing frailty more than others. In general, risk factors, signs, and symptoms can be biological, psychological, or social.
Health-related
Decreases in skeletal muscle mass (sarcopenia) and bone density (osteopenia and osteoporosis) are two major contributors to developing frailty in older adults. In early to middle age, bone density and muscle mass are closely related. As adults age, skeletal muscle mass or bone density may begin to decline. This decline can lead to frailty and both have been identified as contributors to disability.
Sarcopenia is the degenerative loss of skeletal muscle mass, quality, and strength associated with aging. The rate of muscle loss is dependent on exercise level, co-existing health conditions, nutrition and other factors. Sarcopenia can lead to reduction in functional status and cause significant disability from increased muscle weakness. Aging, lower levels of DHEA, testosterone, IGF-1 and increased levels of cortisol are thought to contribute to muscle wasting in those with frailty.
Osteopenia and osteoporosis are diseases of bone mineral density loss (usually age related) that lead to an increased risk of bone fractures, especially with falls. Frailty is associated with an increased risk of osteoporosis related bone fractures.
Frailty is also common in those with heart failure. Both frailty and heart failure share similar methods of progressive health decline and often lead to worsened health conditions when combined.
There are many other health-related factors that can be present in frailty including incontinence, lung disease, having multiple long-term health conditions, taking multiple medications regularly, malnutrition, cognitive impairment, diabetes, and obesity. Poor oral health, difficulties with chewing and swallowing, dry mouth and pain in the mouth are also signs of frailty in some people.
Conditions and symptoms related to mental health that can increase the likelihood of frailty include depression and loneliness.
Lifestyle
Lifestyle factors and behaviors that increase the likelihood of having or developing frailty include smoking, a sedentary lifestyle, and a low level of physical exercise. Dietary factors include low intake of certain vitamins (D, E, C, folate, carotenoids, α-tocopherol) and having a higher score on the Dietary Inflammatory Index.
Demographic characteristics
People in certain demographic groups have a higher risk of frailty than others either due to direct or indirect reasons. Demographic factors include older age, being female, having lower level of education, and having low income.
Social
Certain factors in social background and situation, interpersonal relationships can also be risk factors for frailty. Such factors include living alone, being single or widowed, having lower family income or having suffered abuse.
Living in poor neighborhood conditions, in a rural area, and having low social support are also potential risk factors for frailty.
Mechanism
The causes of frailty are multifactorial, involving dysregulation across many physiological systems. Frailty may be related to a proinflammatory state. IL-6, a pro-inflammatory cytokine, is commonly elevated in this state and has been found to be elevated in older adults with frailty. IL-6 is typically up-regulated by inflammatory mediators, such as C-reactive protein, released in the presence of chronic disease. Increased levels of inflammatory mediators are often associated with chronic disease; however, they may be elevated even in its absence.
Sarcopenia, anemia, anabolic hormone deficiencies, and excess exposure to catabolic hormones such as cortisol have been associated with an increased likelihood of frailty. Other mechanisms associated with frailty include insulin resistance, increased glucose levels, compromised immune function, micronutrient deficiencies, and oxidative stress.
Mitochondrial dysfunction, including mitochondrial DNA mutations, cellular respiration dysfunction, and changes in mitochondrial hemostasis is thought to contribute to reduced cellular energy, production of reactive oxygen species and inflammation. This mitochondrial dysfunction is thought to contribute to the signs of frailty.
Researchers have found that individual abnormal body functions may not be the best predictors of the risk of frailty; however, once the number of abnormal conditions reaches a certain threshold, the risk of frailty increases. This finding suggests that treatment of frailty syndrome should focus not on a single condition but on multiple conditions, to increase the likelihood of better treatment results.
Theoretical understanding
Declines in physiologic reserves and resilience contribute to frailty. The risk of frailty increases with age and with the incidence of diseases. The development of frailty is also thought to involve declines in energy production, energy utilization and repair systems in the body, resulting in declines in the function of many different physiological systems. This decline in multiple systems affects the normal complex adaptive behavior that is essential to health and eventually results in frailty.
A comparison of peripheral blood mononuclear cells from frail older individuals to cells from healthy younger individuals showed evidence in the frail older individuals of increased oxidative stress, increased apurinic/apyrimidinic sites in DNA, increased accumulation of endogenous DNA damage, and reduced ability to repair DNA double-strand breaks.
Diagnosis
The syndrome of geriatric frailty is hypothesized to reflect impairments in the regulation of multiple physiologic systems, embodying a lack of resilience to physiologic challenges and thus elevated risk for a range of deleterious endpoints. Generally speaking, the empirical assessment of geriatric frailty in individuals seeks ultimately to capture this or related features, though distinct approaches to such assessment have been developed in the literature.
The two most widely used approaches, which differ in nature and scope, are the physical frailty phenotype and the frailty index (deficit accumulation) model.
Physical frailty phenotype
A popular approach to the assessment of geriatric frailty encompasses the assessment of five dimensions that are hypothesized to reflect systems whose impaired regulation underlies the syndrome. These five dimensions are:
unintentional weight loss
exhaustion
muscle weakness
slowness while walking
low levels of activity
These five dimensions form specific criteria indicating adverse functioning, which are implemented using a combination of self-reported and performance-based measures. Those who meet at least three of the criteria are defined as "frail", while those not matching any of the five criteria are defined as "robust".
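As a minimal Python sketch of these cut-points (the "pre-frail" label for one or two criteria is a common convention, also used later in this article, rather than part of the definition above):

def classify_phenotype(criteria_met: int) -> str:
    """Classify frailty from the five phenotype criteria above.

    criteria_met: how many of the five criteria (weight loss,
    exhaustion, weakness, slowness, low activity) are present.
    """
    if not 0 <= criteria_met <= 5:
        raise ValueError("criteria_met must be between 0 and 5")
    if criteria_met >= 3:
        return "frail"       # meets at least three criteria
    if criteria_met == 0:
        return "robust"      # matches none of the criteria
    return "pre-frail"       # intermediate label; a common convention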
Frailty index/deficit accumulation
Another common approach to the assessment of geriatric frailty views frailty in terms of the number of health "deficits" manifest in the individual, leading to a continuous measure of frailty. This score is based on the presence of deficits in many areas related to frailty, including symptoms of cognitive or physical impairment, laboratory abnormalities, nutritional deficits, and disability.
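Since the index is simply the proportion of assessed deficits that are present, any fixed list of deficits yields a continuous 0–1 score; a hedged sketch (illustrative, not a standardized instrument):

def frailty_index(deficits_present: int, deficits_assessed: int) -> float:
    """Deficit-accumulation frailty index: fraction of assessed
    health deficits that are present (continuous, 0 to 1)."""
    if deficits_assessed <= 0:
        raise ValueError("must assess at least one deficit")
    if not 0 <= deficits_present <= deficits_assessed:
        raise ValueError("deficits_present out of range")
    return deficits_present / deficits_assessed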
Comprehensive geriatric assessment
Comprehensive geriatric assessment (CGA) is a method to assess frailty typically in a healthcare environment. CGA looks at multiple domains of potential risk factors including physical, psychological, and social health.
CGAs for older people with frailty who do not live in a long-term care institution could improve medication adherence, patient functioning, and quality of care, and reduce the risk of unplanned hospital admissions. At the same time, CGA for this demographic seems to have no impact on death or nursing home admissions.
Older people with moderate or severe frailty who are admitted to a hospital due to an unexpected emergency have an increased risk of a prolonged length of stay, death, and being discharged to a place other than their home. However, those who undergo a comprehensive geriatric assessment on admission are more likely to survive and be discharged to their homes.
In the United Kingdom, best practice guidelines recommend a medical review based on CGA to establish the management plan for people with frailty.
Four domains of frailty
A model consisting of four domains of frailty was proposed in response to an article in the BMJ. This conceptualisation can be viewed as blending the phenotypic and index models. Researchers tested this model for signal in routinely collected hospital data and then used this signal in the development of a frailty model, finding even predictive capability across three outcomes of care. In the care home setting, one study indicated that not all four domains of frailty were routinely assessed in residents, giving evidence to suggest that frailty may still primarily be viewed only in terms of physical health.
SHARE Frailty Index
The SHARE-Frailty Index (SHARE-FI) assesses frailty based on five domains of the frailty phenotype:
Fatigue
Loss of appetite
Grip strength
Functional difficulties
Physical activity
Clinical Frailty Scale
The Clinical Frailty Scale (CFS), which evolved from the Canadian Study of Health and Aging, is a 9-point scale used to assess a person's frailty level: a score of 1 means a person is very fit and robust, while a score of 9 means the person is severely frail and terminally ill.
Edmonton Frail Scale
The Edmonton Frail Scale (EFS) is another method used to screen for frailty, scored out of a maximum of 17 points. It has been assessed as covering all domains of frailty and is said to be easy for clinicians to perform. Specific tests used in this scale include a walking test and clock drawing.
Electronic Frailty Index (eFI)
The electronic Frailty Index (eFI) is a scale based on 36 possible deficits, where a higher score represents greater frailty or greater proneness to frailty. Each frailty-related deficit the person has is given a point, and the total number of deficits present is divided by 36. A frailty category is then assigned: a score of 0.00–0.12 is "Fit", 0.13–0.24 is "Mild", 0.25–0.36 is "Moderate", and above 0.36 is "Severe".
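The banding just described is straightforward to express in code; a small sketch, with the cut-points taken directly from the text:

def efi(deficits_present: int, total_deficits: int = 36):
    """electronic Frailty Index: deficits present divided by the 36
    assessed deficits, then banded into the categories above."""
    score = deficits_present / total_deficits
    if score <= 0.12:
        category = "Fit"
    elif score <= 0.24:
        category = "Mild"
    elif score <= 0.36:
        category = "Moderate"
    else:
        category = "Severe"
    return score, category

# e.g. efi(9) returns (0.25, "Moderate")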
Assessment for surgical outcomes
Frail elderly people are at significant risk of post-surgical complications and the need for extended care. Frailty more than doubles the risk of morbidity and mortality from surgery and cardiovascular conditions. Assessment of older patients before elective surgeries can accurately predict the patients' recovery trajectories. One frailty scale consists of five items:
unintentional weight loss >4.5 kg in the past year
self-reported exhaustion
<20th population percentile for grip strength
slowed walking speed, defined as lowest population quartile on 4-minute walking test
low physical activity such that persons would only rarely undertake a short walk
A healthy person scores 0; a very frail person scores 5. Compared to non-frail elderly people, people with intermediate frailty scores (2 or 3) are twice as likely to have post-surgical complications, spend 50% more time in the hospital, and are three times as likely to be discharged to a skilled nursing facility instead of to their own homes. Frail elderly patients (score of 4 or 5) have even worse outcomes, with the risk of being discharged to a nursing home rising to twenty times the rate for non-frail elderly people.
Another tool that has been used to predict post-surgical outcomes in people with frailty is the Modified Frailty Index, or mFI-5. This scale consists of five key comorbidities:
Congestive heart failure within 1 month of surgery
Diabetes mellitus
Chronic Obstructive Pulmonary Disease or a history of pneumonia
Individuals needing additional assistance to perform everyday activities of living
High blood pressure that is controlled with medication
Each condition present is scored as 1 point, and each condition absent as 0. In an initial study using the mFI-5 scale, individuals with a total score of 2 or greater were predicted to experience post-surgery complications due to frailty, a prediction supported by the study's results.
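A minimal sketch of this scoring (the flag names are illustrative paraphrases of the five comorbidities above, not standardized field names):

MFI5_ITEMS = (
    "chf_within_1_month_of_surgery",
    "diabetes_mellitus",
    "copd_or_prior_pneumonia",
    "needs_assistance_with_daily_activities",
    "hypertension_on_medication",
)

def mfi5_score(flags: dict) -> int:
    """One point per mFI-5 comorbidity present; a total of 2 or more
    predicted post-surgical complications in the initial study."""
    return sum(1 for item in MFI5_ITEMS if flags.get(item, False))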
Frailty scales can be used to predict the risk of complications in patients before and after surgery. There is an association between frailty and delayed transplant function after a kidney transplant. Other studies note that frailty scales alone may be inaccurate in predicting outcomes for people undergoing surgical procedures, and other factors such as co-morbid medical conditions need to be considered.
Prevention
Frailty is not an inevitable part of aging, and its development (or worsening) can be prevented or delayed.
When considering prevention of frailty, it is important to understand the risk factors that contribute to an individual's frailty and to identify them early. Some of these risk factors can be changed or controlled (for example, an unhealthy diet), so early identification allows for preventative action, reducing the risk of future complications.
Exercise
Physical activity is a significant part of the prevention of frailty. As people age, physical activity markedly drops, with the steepest declines seen in adolescence and continuing on throughout life. Lower levels of physical activity are a key component of developing frailty. Therefore, regular exercise such as walking, strength training, and self-directed physical activity is an important way to prevent frailty.
Nutrition
Having a healthy diet and balanced nutrition also plays a major role in preventing frailty. A healthy dietary pattern consisting of high consumption of healthy fats, fruits, vegetables, low-fat dairy products, and whole grains can contribute to maintaining a healthy weight and prevent or postpone frailty.
Specifically, adherence to the Mediterranean diet may help decrease the risk of frailty. A higher protein intake and a higher intake of certain vitamins and micronutrients (B6, C, D, α-carotene, β-carotene, α-tocopherol, and folate) might also support prevention.
Taking part in dietary counseling, dietary education, or cooking classes can also help older people to prevent frailty.
Social factors
Some social risk factors commonly seen in people with frailty can also be improved. Physical activity may help to improve social functioning in addition to its health benefits. Receiving training in how to use a computer and the internet, and using the internet to communicate with other people (for example, through video calls), can also help reduce loneliness and social isolation.
Management
Through management and interventions, it is possible to decrease frailty or slow its progress. Physical activity and nutritional supplementation are the most effective ways of decreasing and managing frailty. There are currently no pharmacological interventions available for frailty.
As frailty comes with a heightened vulnerability to stressors, avoiding known stressors (e.g., surgery or infection) and understanding mechanisms that reduce frailty can help older adults prevent worsening of their frail status. Currently, preventative interventions focus on minimizing muscle loss and improving overall well-being in older adults and individuals with chronic illnesses.
Specific approaches to frailty management depend largely on an individual's classification (i.e., pre-frail or frail) and treatment needs. Physicians need to work closely with people who have frailty to develop a realistic management plan that supports adherence, leading to better health outcomes. Providing personalised care for local communities using the primary care medical home (PCMH) model could improve health-related quality of life, mental health, and self-management, and reduce hospital admissions. Providing care at home (using the hospital at home model) might reduce admissions into residential care and result in the same or potentially reduced death rate compared to inpatient care in a hospital.
Exercise
Physical activity is the most effective way of decreasing frailty and increasing the quality of life.
Individualized physical therapy programs developed by physicians can help improve frail status. For example, progressive resistance strength training for older adults can be used in clinical practice or at home as a way to regain mobility. Mobility training can increase mobility and functioning in older adults living in the community or in residential settings such as nursing homes.
Nutritional supplementation
Nutritional supplementation (including protein supplementation) is another effective way of managing frailty. Frailty can involve changes such as weight loss, and people might have difficulties with supplementation and diet. For those who may be undernourished and not acquiring adequate calories, oral nutritional supplements in between meals may decrease nutritional deficits. Nutritional supplementation is even more effective when coupled with regular physical activity.
Vitamin D, omega-3 fatty acid, sex hormone (such as testosterone) or growth hormone supplementation have not shown benefits in physical functioning, activities of daily living or frailty.
Occupational therapy
Occupational therapy might provide some improvements in older adults living at home or in residential settings such as nursing homes. It can improve mobility and social participation, provide empowerment, and help with activities of daily living (brushing teeth, bathing, dressing, etc.).
Palliative care
Palliative care may be helpful for individuals who are experiencing an advanced state of frailty with possible other co-existing health conditions. The goal of palliative care in people with frailty is improving quality of life by reducing pain and other harmful symptoms.
Epidemiology
Frailty is a common geriatric syndrome. Due to the absence of international diagnostic criteria, prevalence estimates may not be accurate. Estimates of frailty prevalence in older populations vary according to a number of factors, including the setting in which the prevalence is being estimated — e.g., nursing home (higher prevalence) vs. community (lower prevalence) — and the definition used for frailty. Using the widely used frailty phenotype framework, prevalence estimates of 7–16% have been reported in non-institutionalized, community-dwelling older adults. In a systematic review exploring the prevalence of frailty by geographical location, Africa had the highest prevalence at 22%, followed by North and South America at 17%; Europe had the lowest prevalence at 8%.
Frailty is more common in those with mental health conditions including anxiety disorders, bipolar disorder and depression. The presence of frailty with these mental disorders is also associated with a poor prognosis and increased mortality.
Research comparing case management trials to standard care for people living with frailty in high-income countries found that there was no difference in reducing cost or improving patient outcomes between the two approaches.
Sex and ethnicity differences in frailty
Frailty is more common in older female adults than in older male adults. This difference is influenced by various biological, social, and environmental factors. Studies have found that the incidence of frailty is higher in females with more medical comorbidities. Frailty-related physical changes in muscle also show sex-specific differences.
In a population-based study, non-Hispanic Black Americans and Hispanic Americans had a higher incidence of frailty than non-Hispanic White Americans.
Research directions
Ongoing clinical trials on frailty syndrome in the US include:
the impact of frailty on clinical outcomes of patients treated for abdominal aortic aneurysms
the use of "pre-habilitation," an exercise regimen used before transplant surgery, to prevent the frailty effects of kidney transplant in recipients
defining the acute changes in frailty following sepsis in the abdomen
the efficacy of the anti-inflammatory drug fisetin in reducing frailty markers in elderly adults
Physical Performance Testing and Frailty in Prediction of Early Postoperative Course After Cardiac Surgery (Cardiostep)
| Biology and health sciences | Health and fitness: General | Health |
11281160 | https://en.wikipedia.org/wiki/CIELUV | CIELUV | In colorimetry, the CIE 1976 L*, u*, v* color space, commonly known by its abbreviation CIELUV, is a color space adopted by the International Commission on Illumination (CIE) in 1976, as a simple-to-compute transformation of the 1931 CIE XYZ color space, but which attempted perceptual uniformity. It is extensively used for applications such as computer graphics which deal with colored lights. Although additive mixtures of different colored lights will fall on a line in CIELUV's uniform chromaticity diagram (called the CIE 1976 UCS), such additive mixtures will not, contrary to popular belief, fall along a line in the CIELUV color space unless the mixtures are constant in lightness.
Historical background
CIELUV is an Adams chromatic valence color space and is an update of the CIE 1964 (U*, V*, W*) color space (CIEUVW). The differences include a slightly modified lightness scale and a modified uniform chromaticity scale, in which one of the coordinates, v′, is 1.5 times as large as v in its 1960 predecessor. CIELUV and CIELAB were adopted simultaneously by the CIE when no clear consensus could be formed behind only one or the other of these two color spaces.
CIELUV uses Judd-type (translational) white point adaptation (in contrast with CIELAB, which uses a von Kries transform). This can produce useful results when working with a single illuminant, but can predict imaginary colors (i.e., outside the spectral locus) when attempting to use it as a chromatic adaptation transform. The translational adaptation transform used in CIELUV has also been shown to perform poorly in predicting corresponding colors.
XYZ → CIELUV and CIELUV → XYZ conversions
By definition, 0 ≤ L* ≤ 100.
The forward transformation
CIELUV is based on CIEUVW and is another attempt to define an encoding with uniformity in the perceptibility of color differences. The non-linear relations for L*, u*, and v* are given below:
L* = 116 · (Y/Yn)^(1/3) − 16,  for Y/Yn > (6/29)^3
L* = (29/3)^3 · (Y/Yn),  for Y/Yn ≤ (6/29)^3
u* = 13 L* · (u′ − u′n)
v* = 13 L* · (v′ − v′n)
The quantities u′n and v′n are the chromaticity coordinates of a "specified white object" – which may be termed the white point – and Yn is its luminance. In reflection mode, this is often (but not always) taken as the Y of the perfect reflecting diffuser under that illuminant. (For example, for the 2° observer and standard illuminant C, u′n = 0.2009, v′n = 0.4610.) Equations for u′ and v′ are given below:
u′ = 4X / (X + 15Y + 3Z)
v′ = 9Y / (X + 15Y + 3Z)
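A minimal Python sketch of this forward transformation, using the D65 white point for the 2° observer as an illustrative default (the function name and defaults are assumptions, not a standard API):

def xyz_to_luv(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    """Convert CIE XYZ to CIELUV using the relations above."""
    def uv_prime(x, y, z):
        d = x + 15.0 * y + 3.0 * z
        return 4.0 * x / d, 9.0 * y / d

    up, vp = uv_prime(X, Y, Z)        # chromaticity of the color
    unp, vnp = uv_prime(Xn, Yn, Zn)   # chromaticity of the white point

    yr = Y / Yn
    if yr > (6.0 / 29.0) ** 3:
        L = 116.0 * yr ** (1.0 / 3.0) - 16.0
    else:
        L = (29.0 / 3.0) ** 3 * yr

    return L, 13.0 * L * (up - unp), 13.0 * L * (vp - vnp)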
The reverse transformation
The transformation from (u′, v′) to (x, y) is:
x = 9u′ / (6u′ − 16v′ + 12)
y = 4v′ / (6u′ − 16v′ + 12)
The transformation from CIELUV to XYZ is performed as follows:
u′ = u* / (13 L*) + u′n
v′ = v* / (13 L*) + v′n
Y = Yn · L* · (3/29)^3,  for L* ≤ 8
Y = Yn · ((L* + 16) / 116)^3,  for L* > 8
X = Y · 9u′ / (4v′)
Z = Y · (12 − 3u′ − 20v′) / (4v′)
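The reverse direction under the same assumptions (illustrative helper with a D65/2° default white point; L* = 0 is mapped straight to black to avoid dividing by zero):

def luv_to_xyz(L, u, v, Xn=95.047, Yn=100.0, Zn=108.883):
    """Convert CIELUV back to CIE XYZ using the relations above."""
    if L == 0:
        return 0.0, 0.0, 0.0

    dn = Xn + 15.0 * Yn + 3.0 * Zn
    unp, vnp = 4.0 * Xn / dn, 9.0 * Yn / dn

    up = u / (13.0 * L) + unp
    vp = v / (13.0 * L) + vnp

    if L > 8.0:
        Y = Yn * ((L + 16.0) / 116.0) ** 3
    else:
        Y = Yn * L * (3.0 / 29.0) ** 3

    X = Y * 9.0 * up / (4.0 * vp)
    Z = Y * (12.0 - 3.0 * up - 20.0 * vp) / (4.0 * vp)
    return X, Y, Z

A round trip through xyz_to_luv and luv_to_xyz should reproduce the input to within floating-point error.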
Cylindrical representation (CIELCh)
CIELChuv, or HCL color space (hue–chroma–luminance) is increasingly seen in the information visualization community as a way to help with presenting data without the bias implicit in using varying saturation.
The cylindrical version of CIELUV is known as CIELCh(uv), also written CIELChuv or CIEHLCuv, where C*uv is the chroma and huv is the hue:
C*uv = √((u*)² + (v*)²)
huv = atan2(v*, u*)
where atan2, a "two-argument arctangent", computes the polar angle from a Cartesian coordinate pair.
Furthermore, the saturation correlate can be defined as
suv = C*uv / L* = 13 · √((u′ − u′n)² + (v′ − v′n)²)
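In code the cylindrical form is a direct change to polar coordinates; a small sketch (hue returned in degrees in [0, 360)):

import math

def luv_to_lch(L, u, v):
    """(L*, u*, v*) -> (L*, C*uv, h_uv), per the formulas above."""
    C = math.hypot(u, v)                        # chroma: radial distance
    h = math.degrees(math.atan2(v, u)) % 360.0  # hue: polar angle
    return L, C, h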
Similar correlates of chroma and hue, but not saturation, exist for CIELAB. See Colorfulness for more discussion on saturation.
Color and hue difference
The color difference can be calculated using the Euclidean distance of the (L*, u*, v*) coordinates:
ΔE*uv = √((ΔL*)² + (Δu*)² + (Δv*)²)
It follows that a chromaticity distance of √((Δu′)² + (Δv′)²) = 1/13 corresponds to the same ΔE*uv as a lightness difference of ΔL* = 1, in direct analogy to CIEUVW.
The Euclidean metric can also be used in CIELCh, with that component of ΔE*uv attributable to difference in hue as ΔH*uv = 2 · √(C*1 · C*2) · sin(Δhuv/2), where Δhuv = h2 − h1.
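Since the difference is a plain Euclidean distance, it is one line in Python (math.dist is available from Python 3.8):

import math

def delta_e_uv(luv1, luv2):
    """Euclidean CIELUV color difference between two (L*, u*, v*) triples."""
    return math.dist(luv1, luv2)

# e.g. delta_e_uv((50, 10, 10), (51, 10, 10)) == 1.0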
| Physical sciences | Basics | Physics |
4370004 | https://en.wikipedia.org/wiki/Lepton%20epoch | Lepton epoch | In cosmological models of the Big Bang, the lepton epoch was the period in the evolution of the early universe in which the leptons dominated the mass of the Universe. It started roughly 1 second after the Big Bang, after the majority of hadrons and anti-hadrons annihilated each other at the end of the hadron epoch. During the lepton epoch, the temperature of the Universe was still high enough to create neutrino and electron-positron pairs. Approximately 10 seconds after the Big Bang, the temperature of the universe had fallen to the point where electron-positron pairs were gradually annihilated. A small residue of electrons needed to charge-neutralize the Universe remained along with free streaming neutrinos: an important aspect of this epoch is the neutrino decoupling. The Big Bang nucleosynthesis epoch follows, overlapping with the photon epoch.
| Physical sciences | Physical cosmology | Astronomy |
4370036 | https://en.wikipedia.org/wiki/Electroweak%20epoch | Electroweak epoch | In physical cosmology, the electroweak epoch was the period in the evolution of the early universe when the temperature of the universe had fallen enough that the strong force separated from the electronuclear interaction, but was still high enough for electromagnetism and the weak interaction to remain merged into a single electroweak interaction above the critical temperature for electroweak symmetry breaking (159.5±1.5 GeV in the Standard Model of particle physics). Some cosmologists place the electroweak epoch at the start of the inflationary epoch, approximately 10^−36 seconds after the Big Bang. Others place it at approximately 10^−32 seconds after the Big Bang, when the potential energy of the inflaton field that had driven the inflation of the universe during the inflationary epoch was released, filling the universe with a dense, hot quark–gluon plasma.
Particle interactions in this phase were energetic enough to create large numbers of exotic particles, including W and Z bosons and Higgs bosons. As the universe expanded and cooled, interactions became less energetic, and when the universe was about 10^−12 seconds old, W and Z bosons ceased to be created at observable rates. The remaining W and Z bosons decayed quickly, and the weak interaction became a short-range force in the following quark epoch.
The electroweak epoch ended with an electroweak phase transition, the nature of which is unknown. If first order, this could source a gravitational wave background. The electroweak phase transition is also a potential source of baryogenesis, provided the Sakharov conditions are satisfied.
In the minimal Standard Model, the transition during the electroweak epoch was not a first- or a second-order phase transition but a continuous crossover, preventing any baryogenesis or the production of an observable gravitational wave background.
However, many extensions to the Standard Model including supersymmetry and the two-Higgs-doublet model have a first-order electroweak phase transition (but require additional CP violation).
| Physical sciences | Physical cosmology | Astronomy |
4370125 | https://en.wikipedia.org/wiki/Grand%20unification%20epoch | Grand unification epoch | In physical cosmology, the grand unification epoch is a poorly understood period in the evolution of the early universe following the Planck epoch and preceding cosmic inflation. It spans roughly 10^−43 to 10^−35 seconds after the Big Bang, when the temperature of the universe was comparable to the characteristic temperatures of grand unified theories. However, these theories have not been successful in producing quantitative agreement with the results of modern astrophysical observations.
If the grand unification energy is taken to be 10^15 GeV, this corresponds to temperatures higher than 10^27 K. During this period, three of the four fundamental interactions — electromagnetism, the strong interaction, and the weak interaction — were unified as the electronuclear force. Gravity had separated from the electronuclear force at the end of the Planck era. During the grand unification epoch, physical characteristics such as mass, charge, flavour and colour charge were meaningless.
The grand unification epoch ended at approximately 10^−36 seconds after the Big Bang. At this point several key events took place. The strong force separated from the other fundamental forces.
It is possible that some part of this decay process violated the conservation of baryon number and gave rise to a small excess of matter over antimatter (see baryogenesis). This phase transition is also thought to have triggered the process of cosmic inflation that dominated the development of the universe during the following inflationary epoch.
| Physical sciences | Physical cosmology | Astronomy |
193031 | https://en.wikipedia.org/wiki/Stamen | Stamen | The stamen (pl.: stamina or stamens) is the part of a flower consisting of the male reproductive organs. Collectively, the stamens form the androecium.
Morphology and terminology
A stamen typically consists of a stalk called the filament and an anther, which contains microsporangia. Most commonly, anthers are two-lobed (each lobe is termed a locule) and are attached to the filament either at the base or in the middle area of the anther. The sterile tissue between the lobes is called the connective, an extension of the filament containing conducting strands. It can be seen as an extension on the dorsal side of the anther. A pollen grain develops from a microspore in the microsporangium and contains the male gametophyte. The size of anthers differs greatly, from a tiny fraction of a millimeter in Wolffia spp. up to five inches (13 centimeters) in Canna iridiflora and Strelitzia nicolai.
The stamens in a flower are collectively called the androecium. The androecium can consist of as few as one-half stamen (i.e. a single locule) as in Canna species or as many as 3,482 stamens which have been counted in the saguaro (Carnegiea gigantea). The androecium in various species of plants forms a great variety of patterns, some of them highly complex. It generally surrounds the gynoecium and is surrounded by the perianth. A few members of the family Triuridaceae, particularly Lacandonia schismatica and Lacandonia brasiliana, along with a few species of Trithuria (family Hydatellaceae) are exceptional in that their gynoecia surround their androecia.
Etymology
Stamen is the Latin word meaning "thread" (originally thread of the warp, in weaving).
Filament derives from classical Latin filum, meaning "thread".
Anther derives from French anthère, from classical Latin anthera, meaning "medicine extracted from the flower", in turn from Ancient Greek ἀνθηρά (anthēra), feminine of ἀνθηρός (anthēros) meaning "flowery", from ἄνθος (anthos) meaning "flower".
Androecium (pl.: androecia) derives from Ancient Greek ἀνήρ (anēr) meaning "man", and οἶκος (oikos) meaning "house" or "chamber/room".
Variation in morphology
Depending on the species of plant, some or all of the stamens in a flower may be attached to the petals or to the floral axis. They also may be free-standing or fused to one another in many different ways, including fusion of some but not all stamens. The filaments may be fused and the anthers free, or the filaments free and the anthers fused. Rather than there being two locules, one locule of a stamen may fail to develop, or alternatively the two locules may merge late in development to give a single locule. Extreme cases of stamen fusion occur in some species of Cyclanthera in the family Cucurbitaceae and in section Cyclanthera of genus Phyllanthus (family Euphorbiaceae) where the stamens form a ring around the gynoecium, with a single locule. Plants having a single stamen are referred to as "monandrous."
Pollen production
A typical anther contains four microsporangia. The microsporangia form sacs or pockets (locules) in the anther (anther sacs or pollen sacs). The two separate locules on each side of an anther may fuse into a single locule. Each microsporangium is lined with a nutritive tissue layer called the tapetum and initially contains diploid pollen mother cells. These undergo meiosis to form haploid spores. The spores may remain attached to each other in a tetrad or separate after meiosis. Each microspore then divides mitotically to form an immature microgametophyte called a pollen grain.
The pollen is eventually released when the anther forms openings (dehisces). These may consist of longitudinal slits, pores, as in the heath family (Ericaceae), or by valves, as in the barberry family (Berberidaceae). In some plants, notably members of Orchidaceae and Asclepiadoideae, the pollen remains in masses called pollinia, which are adapted to attach to particular pollinating agents such as birds or insects. More commonly, mature pollen grains separate and are dispensed by wind or water, pollinating insects, birds or other pollination vectors.
Pollen of angiosperms must be transported to the stigma, the receptive surface of the carpel, of a compatible flower, for successful pollination to occur. After arriving, the pollen grain (an immature microgametophyte) typically completes its development. It may grow a pollen tube and undergo mitosis to produce two sperm nuclei.
Sexual reproduction in plants
In the typical flower (that is, in the majority of flowering plant species) each flower has both carpels and stamens. In some species, however, the flowers are unisexual with only carpels or stamens. (monoecious = both types of flowers found on the same plant; dioecious = the two types of flower found only on different plants). A flower with only stamens is called androecious. A flower with only carpels is called gynoecious.
A pistil consists of one or more carpels. A flower with functional stamens but no functional pistil is called a staminate flower, or (inaccurately) a male flower. A flower with a functional pistil but no functional stamens is called a pistillate flower, or (inaccurately) a female flower.
An abortive or rudimentary stamen is called a staminodium or staminode, such as in Scrophularia nodosa.
The carpels and stamens of orchids are fused into a column. The top part of the column is formed by the anther, which is covered by an anther cap.
Terminology
Stamen
Stamens can also be adnate (fused or joined from more than one whorl):
epipetalous: adnate to the corolla
epiphyllous: adnate to undifferentiated tepals (as in many Liliaceae)
They can have different lengths from each other:
didymous: two equal pairs
didynamous: occurring in two pairs, a long pair and a shorter pair
tetradynamous: occurring as a set of six stamens with four long and two shorter ones
or respective to the rest of the flower (perianth):
exserted: extending beyond the corolla
included: not extending beyond the corolla
They may be arranged in one of two different patterns:
spiral; or
whorled: one or more discrete whorls (series)
They may be arranged, with respect to the petals:
diplostemonous: in two whorls, the outer alternating with the petals, while the inner is opposite the petals.
haplostemonous: having a single series of stamens, equal in number to the proper number of petals and alternating with them
obdiplostemonous: in two whorls, with twice the number of stamens as petals, the outer opposite the petals, inner opposite the sepals, e.g. Simaroubaceae (see diagram)
Connective
Where the connective is very small, or imperceptible, the anther lobes are close together, and the connective is referred to as discrete, e.g. Euphorbia spp., Adhatoda zeylanica. Where the connective separates the anther lobes, it is called divaricate, e.g. Tilia, Justicia gendarussa. The connective may also be long and stalk-like, crosswise on the filament; this is a distractile connective, e.g. Salvia. The connective may also bear appendages, and is called appendiculate, e.g. Nerium odorum and some other species of Apocynaceae. In Nerium, the appendages are united as a staminal corona.
Filament
A column formed from the fusion of multiple filaments is known as an androphore. Stamens can be connate (fused or joined in the same whorl) as follows:
extrorse: anther dehiscence directed away from the centre of the flower. Cf. introrse, directed inwards, and latrorse towards the side.
monadelphous: fused into a single, compound structure
declinate: curving downwards, then up at the tip (also – declinate-descending)
diadelphous: joined partially into two androecial structures
pentadelphous: joined partially into five androecial structures
synandrous: only the anthers are connate (such as in the Asteraceae). The fused stamens are referred to as a synandrium.
Anther
Anther shapes are variously described by terms such as linear, rounded, sagittate, sinuous, or reniform.
The anther can be attached to the filament's connective in two ways:
basifixed: attached at its base to the filament
pseudobasifixed: a somewhat misleadingly named configuration in which connective tissue extends in a tube around the filament tip
dorsifixed: attached at its center to the filament, usually versatile (able to move)
Gallery
| Biology and health sciences | Plant anatomy and morphology: General | Biology |
193284 | https://en.wikipedia.org/wiki/Baking%20powder | Baking powder | Baking powder is a dry chemical leavening agent, a mixture of a carbonate or bicarbonate and a weak acid. The base and acid are prevented from reacting prematurely by the inclusion of a buffer such as cornstarch. Baking powder is used to increase the volume and lighten the texture of baked goods. It works by releasing carbon dioxide gas into a batter or dough through an acid–base reaction, causing bubbles in the wet mixture to expand and thus leavening the mixture.
The first single-acting baking powder (meaning that it releases all of its carbon dioxide as soon as it is dampened) was developed by food manufacturer Alfred Bird in England in 1843. The first double-acting baking powder, which releases some carbon dioxide when dampened and later releases more of the gas when heated by baking, was developed by Eben Norton Horsford in the U.S. in the 1860s.
Baking powder is used instead of yeast for end-products where fermentation flavors would be undesirable, or where the batter lacks the elastic structure to hold gas bubbles for more than a few minutes, and to speed the production of baked goods. Because carbon dioxide is released at a faster rate through the acid-base reaction than through fermentation, breads made by chemical leavening are called quick breads. The introduction of baking powder was revolutionary in minimizing the time and labor required to make breadstuffs. It led to the creation of new types of cakes, cookies, biscuits, and other baked goods.
Formulation and mechanism
Baking powder is made up of a base, an acid, and a buffering material to prevent the acid and base from reacting before their intended use.
Most commercially available baking powders are made up of sodium bicarbonate (NaHCO3, also known as baking soda or bicarbonate of soda) and one or more acid salts.
Acid-base reactions
When combined with water, the sodium bicarbonate and acid salts react to produce gaseous carbon dioxide.
Whether commercially or domestically prepared, the principles behind baking powder formulations remain the same. The acid-base reaction can be generically represented as shown:
NaHCO3 + H+ → Na+ + CO2 + H2O
The real reactions are more complicated because the acids are complicated. For example, starting with baking soda and monocalcium phosphate, the reaction produces carbon dioxide by the following stoichiometry:
14 NaHCO3 + 5 Ca(H2PO4)2 → 14 CO2 + Ca5(PO4)3OH + 7 Na2HPO4 + 13 H2O
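As a worked example of this stoichiometry: the 14:14 mole ratio means one mole of CO2 is released per mole of NaHCO3 consumed, so the gas yield per gram of baking soda follows from standard molar masses (the constants below are textbook values):

M_NAHCO3 = 84.01  # g/mol, sodium bicarbonate
M_CO2 = 44.01     # g/mol, carbon dioxide

def co2_released(grams_baking_soda: float) -> float:
    """Grams of CO2 produced, assuming enough leavening acid is present
    to consume all of the NaHCO3 (one mole of CO2 per mole of NaHCO3)."""
    return grams_baking_soda / M_NAHCO3 * M_CO2

print(round(co2_released(1.0), 2))  # about 0.52 g of CO2 per gram of soda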
A typical formulation (by weight) could call for 30% sodium bicarbonate, 5–12% monocalcium phosphate, and 21–26% sodium aluminium sulfate.
Alternately, a commercial baking powder might use sodium acid pyrophosphate as one of the two acidic components instead of sodium aluminium sulfate.
Another typical acid in such formulations is cream of tartar (KC4H5O6), a derivative of tartaric acid.
Single- and double-acting baking powders
The use of two acidic components is the basis of the term "double acting".
The acid in a baking powder can be either fast-acting or slow-acting.
A fast-acting acid reacts in a wet mixture with baking soda at room temperature, and a slow-acting acid does not react until heated. When the chemical reactions in baking powders involve both fast- and slow-acting acids, they are known as "double-acting"; those that contain only one acid are "single-acting".
By providing a second rise in the oven, double-acting baking powders increase the reliability of baked goods by rendering the time elapsed between mixing and baking less critical. This is the type of baking powder most widely available to consumers today. Double-acting baking powders work in two phases; once when cold, and once when hot.
For example, Rumford Baking Powder is a double-acting product that contains only monocalcium phosphate as a leavening acid. With this acid, about two-thirds of the available gas is released within about two minutes of mixing at room temperature. It then becomes dormant because an intermediate species, dicalcium phosphate, is generated during the initial mixing. A further release of gas requires the batter to be heated above 140 °F (60 °C).
Common low-temperature acid salts include cream of tartar and monocalcium phosphate (also called calcium acid phosphate). High-temperature acid salts include sodium aluminium sulfate, sodium aluminium phosphate, and sodium acid pyrophosphate.
Starch component
Baking powders also include components to improve their stability and consistency. Cornstarch, flour, or potato starch are often used as buffers.
An inert starch serves several functions in baking powder. Primarily it is used to absorb moisture, and so prolong shelf life of the compound by keeping the powder's alkaline and acidic components dry so as not to react with each other prematurely. A dry powder also flows and mixes more easily. Finally, the added bulk allows for more accurate measurements.
Commonly used bases and acids
Baking powder is made of two main components: an acid and a bicarbonate base. When they are hydrated, an acid–base reaction occurs, releasing carbon dioxide. Commonly used acids and bases for baking powders are:
Bases
Sodium bicarbonate
Ammonium bicarbonate
Potassium bicarbonate
Acids
Potassium acid tartrate (cream of tartar)
Monocalcium phosphate (MCP)
Sodium acid pyrophosphate (SAPP)
Sodium aluminium phosphate (SALP)
Dicalcium phosphate dihydrate
Sodium aluminium sulfate
Glucono delta-lactone (GDL)
Fumaric acid
Dimagnesium phosphate (DMP)
Neutralizing value
The neutralizing value (NV) is defined as the amount of baking soda required to neutralize 100 parts of a leavening acid (by weight).
Neutralizing value can be expressed through the following formula:
NV = g of NaHCO3 neutralized by 100 g leavening acid
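Expressed as a small sketch (the numbers in the example are illustrative, not measured values for any particular acid):

def neutralizing_value(g_soda_neutralized: float, g_acid: float) -> float:
    """NV: grams of NaHCO3 neutralized per 100 g of leavening acid."""
    return 100.0 * g_soda_neutralized / g_acid

# If 50 g of a leavening acid neutralizes 40 g of baking soda:
print(neutralizing_value(40.0, 50.0))  # 80.0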
Rate of reaction
The rate of reaction (ROR) is represented by the percentage of carbon dioxide released by the acid-base reaction.
Other subcategories exist to classify the reaction rates during mixing and holding, called "Dough Reaction Rate (DRR)", and during baking at a given temperature, called "Batter Reaction Rate (BRR)".
The ROR of baking powders is impacted by many factors, including:
Acid type: moisture and/or heat reactivity are influenced by its physicochemical properties, such as solubility and dissociation extent.
Granulometry
Temperature of dough or batter
Concentration of leavening acid and base
Hydration
Presence of water-binding ingredients (e.g. sugars, alcohols, starches, gums, salt)
Presence of cations (calcium)
History
Before baking powder
When Amelia Simmons published American Cookery (1796), the first American cookbook, three known types of leavening were used in its recipes: baker's yeast, emptins (from the leavings of brewer's yeast), and pearlash.
At that time, the mechanisms underlying the action of yeasts and other leavenings were not understood, and reliable commercial products were not available. Bakers obtained yeasts from brewers or distillers or made their own by exposing mixtures of flour and water to the open air. If lucky, they could capture useful wild yeast and keep some alive, regularly feeding it for ongoing use and trying to avoid contamination. Women who made their own ale could use the brewing dregs or "emptins" in their baking.
The effectiveness of such leavenings varied widely. Resulting baked goods often had a sour or bitter taste. Breads were made of grain, water, yeast, and sometimes salt. Cooks also made yeast, sponge and pound cakes. Yeast cakes were similar to breads but included fancier ingredients, like sugar, spices, fruits or nuts. Sponge cakes used beaten egg whites for leavening. Pound cakes combined butter, sugar, and flour and eggs, and were particularly dense. Making cakes was even more laborious than making bread: to prepare a cake, a manservant might have to beat the ingredients together as long as an hour.
Pearlash
The third type of leavening, pearlash, was the precursor to modern baking powder. Pearlash was a purified form of potash.
According to Joop Witteveen (1985), pearlash was used in Europe by professional bakers in the mid-seventeenth century. The Oxford English Dictionary credits the first written use of the term "pearl ash" to 1703 and the writing of Abel Boyer.
By the mid-1700s, practical treatises on the calcination of pearlash were available in both England and the United States. Pearlash was the subject of the first patent in the United States, issued in April 1790. Its preparation was time-consuming, but could be accomplished with a cast-iron kettle: it involved soaking fireplace ashes in water to make lye, and then boiling the lye to remove water and obtain "salts".
The active ingredient in pearlash was potassium carbonate (K2CO3). Combining it with an acidic ingredient like sour milk or lemon juice resulted in a chemical reaction that produced carbon dioxide.
Once prepared, the white powder was much more stable than yeast. Small amounts could be used on a daily basis, rather than baking a week or two weeks' worth of bread at one time. American Cookery was the first cookbook to call for its use, but by no means the last. With pearlash, cooks were able to create new recipes for new types of cakes, cookies, and biscuits that were quicker and easier to make than yeast-based recipes.
Experimentation
Between the publication of American Cookery in 1796 and the mid-1800s, cooks experimented with a variety of acids, alkalis, and mineral salts as possible chemical leaveners. Many were already available in households as medicinal, cleaning or solvent products. Smelling salts, hartshorn, and sal volatile were all ammonia inhalants, containing forms of ammonium carbonate. The term "saleratus" was applied confusingly to both potassium bicarbonate and to sodium bicarbonate (NaHCO3, what we now call baking soda). Baking soda and cream of tartar were relatively new ingredients for cooks: soda may have been introduced to American cooking by female Irish immigrants who found work as kitchen help. Cream of tartar, also known as potassium bitartrate and derived from tartaric acid, was a by-product of wine-making and had to be imported from France and Italy.
The first edition of Catherine Beecher's cookbook, Miss Beecher's Domestic Receipt Book (1846), included a recipe for an early prototype of baking powder biscuits that used both baking soda and cream of tartar.
Several recipes in the compilation cookbook Practical American Cookery (1855) used baking soda and cream of tartar to form new types of dough. There were recipes for a "crust" similar to modern dumplings or cobbler, several for cakes, and one for "soda doughnuts". When the third edition of Miss Beecher's Domestic Receipt Book appeared in 1858, it included 8 types of leaveners, only two of which could be made at home.
Baking soda and cream of tartar were sold by chemists rather than in grocery stores. Pharmacists purchased the materials in bulk and then dispensed them individually in small amounts in paper packaging. At least one contributor to Practical American Cookery provided instructions on how to handle baking soda and cream of tartar. Even with instructions, early leaveners could be difficult to obtain, awkward to store, unstandardized, and unpredictable to use.
The chemical leavening effects were accomplished by the activating of a base such as baking soda in the presence of liquid(s) and an acid such as sour milk, vinegar, lemon juice, or cream of tartar. Because these acidulants react with baking soda quickly, retention of gas bubbles was dependent on batter viscosity. It was critical for the batter to be baked quickly, before the gas escaped. The next step, the development of baking powder, created a system where the gas-producing reactions could be delayed until needed.
The rise of baking powder
The creation of shelf-stable chemical combinations of sodium bicarbonate and cream of tartar is seen as marking the true introduction of baking powder. Although cooks had used both sodium bicarbonate and cream of tartar in recipes, they had to purchase the ingredients individually and store them separately to prevent them from spoiling or reacting prematurely. As chemists developed more uniform constituents, they also began to experiment with ways of combining them. In the mid-late 1800s, chemists introduced the first modern baking powders.
Alfred Bird
The first to create a form of baking powder was English chemist and food manufacturer Alfred Bird in 1843. Bird was motivated to develop a yeast-free leavener because his wife Elizabeth was allergic to eggs and yeast. His formulation included bicarbonate of soda and tartaric acid, mixed with starch to absorb moisture and prevent the other ingredients from reacting. A single-action form of baking powder, Alfred Bird's Baking Powder reacted as soon as it became damp.
Bird focused on selling his baking powder to the British Army during the Crimean War, and to explorers like Captain Sir Francis Leopold McClintock, rather than the domestic market.
Nonetheless, Bird's creation of baking powder enabled cooks to take recipes for cakes such as the patriotically named Victoria sponge and make them rise higher.
He did not patent his discovery, and others such as Henry Jones of Bristol soon produced and patented similar products. In 1845, Jones patented "A new preparation of flour" (self-raising flour) that included sodium bicarbonate and tartaric acid to obtain a leavening effect.
Eben Norton Horsford
In America, Eben Norton Horsford, a student of Justus von Liebig, set out to create a flour fortifier and leavening agent. In 1856, he was awarded a patent for "pulverulent phosphoric acid", a process for extracting monocalcium pyrophosphate from bones. Combined with baking soda, monocalcium pyrophosphate provided a double-acting form of leavening. Its initial reaction, when exposed to water, released carbon dioxide and produced dicalcium phosphate, which then reacted under heat to release second-stage carbon dioxide.
In 1859, Horsford and George Wilson formed the Rumford Chemical Works, named in honor of Count Rumford. In 1861, Horsford published The theory and art of breadmaking: A new process without the use of ferment, describing his innovations. In 1864, he obtained a patent for a self-rising flour or "Bread preparation" in which calcium acid phosphate and sodium bicarbonate acted as a leavener.
Horsford's research was interrupted by the American Civil War, but in 1869 Horsford finally created an already-mixed leavening agent by using cornstarch as a buffer. Rumford Chemical Works then began the manufacture of what can be considered a true baking powder. Throughout his career, Horsford continued to experiment extensively with possible techniques and preparations for baking powder. Horsford's leavening products were marketed originally as "Horsford's Yeast Powder" and later as "Rumford Baking Powder". They were packaged in glass bottles and later in metal cans. In 2006 the Rumford Chemical Works in East Providence, Rhode Island were designated a National Historic Chemical Landmark in recognition of baking powder's impact in making baking easier, quicker, and more reliable.
In the 1860s, Horsford shared his formula for baking powder with his former teacher, Justus von Liebig, who in turn shared it with Ludwig Clamor Marquart and Carl Zimmer in Germany. Baking powders based on Horsford's formula were sold in England as "Horsford-Liebig Baking Powder". They were also sold by several companies in Germany, beginning with Marquart and with Zimmer. However, baking powder was not successful in Germany at that time. Much of German baking occurred in guild-based bakeries, rather than in private homes, and the guilds were not interested in replacing centuries-old craft skills with a new technology. Nonetheless, Liebig clearly saw the importance of Horsford's work.
Dr. Oetker's Baking Powder
In the 1890s, the German pharmacist August Oetker began to market a baking powder directly to housewives. It became popular in Germany as "Dr. Oetker's Baking Powder" and as "Backin". Oetker started the mass production of phosphate-based baking powder in 1898 and patented his technique in 1903.
Research by Paul R. Jones in 1993 has shown that Oetker's original recipe was a descendant of Horsford's phosphate-based recipe, obtained from Louis Marquand, a son of Ludwig Clamor Marquart. Dr. Oetker Baking Powder continues to be sold, currently listing its ingredients as sodium acid pyrophosphate, sodium bicarbonate and corn starch.
Royal Baking Powder
In the U.S., in 1866, Joseph C. Hoagland and his brother Cornelius developed a baking powder product with the help of Thomas M. Biddle. They sold a single-action baking powder containing cream of tartar, bicarbonate of soda and starch. Their formula became known as Royal Baking Powder.
Initially in partnership as Biddle & Hoagland, the Hoaglands moved from Fort Wayne, Indiana, to Chicago, leaving Biddle behind, and then to New York. They incorporated there as the Royal Baking Powder Company in 1868. Various battles for control ensued between the Hoagland brothers and their one-time employee William Ziegler. Finally, on March 2, 1899, Ziegler established the New Jersey–based Royal Baking Powder Corporation which combined the three major cream of tartar baking powder companies then in existence in the United States: Dr. Price (Ziegler), Royal (Joseph Hoagland) and Cleveland (Cornelius Nevius Hoagland).
Alum-based baking powders
Cream of tartar was an expensive ingredient in the United States, since it had to be imported from France. In the 1880s, several companies developed double-action baking powders containing cheaper alternative acids known as alums, a class of compounds involving double sulfates of aluminium.
The use of various types of alum in medicines and dyes is mentioned in Pliny the Elder's Natural History.
However, the actual composition of alum was not determined until 1798, when Louis Vauquelin demonstrated that common alum is a double salt, composed of sulfuric acid, alumina, and potash, and Jean-Antoine Chaptal published the analysis of four different kinds of alum.
In 1888, William Monroe Wright (a former salesman for Dr. Price) and George Campbell Rew in Chicago introduced a new form of baking powder, which they called Calumet. Calumet Baking Powder contained baking soda, a cornstarch buffer, sodium aluminium sulfate (NaAl(SO4)2·12H2O) as a leavening agent, and albumen. In 1899, after years of experimentation with various possible formulae beginning in the 1870s, Herman Hulman of Terre Haute also introduced a baking powder made with sodium aluminium sulfate. He called his baking powder Clabber, referencing a German baking tradition in which soured milk was used for leavening.
Cream of tartar vs. alum
Sodium aluminium sulfate baking powders were double-acting, and much less expensive to produce than cream of tartar-based baking powders. Cooks also needed less alum-based baking powder to produce a comparable effect. As a result, alum-based baking powders were severe competition for Royal Baking Powder's cream of tartar-based products. William Ziegler of the Royal Baking Powder Company used a variety of tactics, ranging from false advertising and industrial espionage to bribery, to try to convince consumers and legislators that aluminium-based baking powders were harmful. He suggested (without actual evidence) that alum was unnatural and poisonous, while cream of tartar was natural and healthful. He attempted, and in Missouri briefly succeeded, in convincing legislators to ban aluminium compounds from use in baking powders. At the same time, he changed his own "Dr. Price" baking powder to an aluminium-based formula that cornered two-thirds of the baking powder market in the southern states. Eventually, after a number of legal and commercial battles that included bribery charges against Ziegler and a grand jury hearing, Royal lost the baking powder wars.
The idea that aluminium in baking powder is dangerous can be traced to Ziegler's attack advertising, and has little if any scientific support. Aluminium is a commonly-found metal that appears in trace quantities in most foods.
By the 1970s Royal had ceased to produce a cream of tartar baking powder. For those who wanted something similar, James Beard suggested combining two parts cream of tartar to one part baking soda just before using it, since the mixture would not keep.
Instead of cream of tartar, modern Royal baking powder contains a mixture of Hulman's sodium aluminium sulfate and Horsford's monocalcium phosphate.
Bakewell Baking Powder
One more type of baking powder was introduced during World War II under the brand name Bakewell. Faced with wartime shortages of cream of tartar and baking powder, Byron H. Smith, a U.S. inventor in Bangor, Maine, created substitute products for American housewives. Bakewell Cream was introduced as a replacement for cream of tartar. It contained sodium acid pyrophosphate and cornstarch and was labeled as a leavening agent. It could be substituted for cream of tartar or mixed with baking soda to replace baking powder.
Smith also sold a baking powder replacement, in which sodium acid pyrophosphate was already mixed with bicarbonate of soda and cornstarch. Somewhat confusingly, it was marketed as "Bakewell Baking Powder" or "Bakewell Cream Baking Powder". Some packaging uses the phrase "The Original Bakewell Cream". A product labelled "Bakewell Cream" may be either the cream of tartar substitute or the baking powder substitute, depending on whether it is additionally identified as "Double acting" baking powder. A modern version, containing sodium acid pyrophosphate, sodium bicarbonate and redried starch, is sold as being both aluminium-free and gluten-free.
Original preparations
Over time, most baking powder manufacturers have experimented with their products, combining or even replacing what were once key ingredients. Information in the following table reflects the original ingredients as introduced by different individuals and companies. The ingredients used may be very different from later formulations and current products. Where an ingredient had multiple names, they are all listed together in the first occurrence, and the most common name listed thereafter.
The base for all these products is sodium bicarbonate, also known as baking soda.
The formulation of a brand's current baking powder may be very different from the original formula they produced, shown above. Brands may now use combinations of acids, or different acids altogether.
As of 2010, the two main baking powder companies in the United States were Clabber Girl and Calumet. Calumet held about 1/3 of the American baking powder market, with Clabber Girl (which produces the Clabber Girl, Rumford, and Davis brands of baking powder, among others) dominating 2/3. (Davis baking powder is commonly found in the northeastern United States.)
How much to use
Generally, one teaspoon (5 g or 1/6 oz) of baking powder is used to raise a mixture of one cup (120 g or 4 oz) of flour, one cup of liquid, and one egg. However, if the mixture is acidic, baking powder's additional acids remain unconsumed in the chemical reaction and often lend an unpleasant taste to food. High acidity can be caused by ingredients such as buttermilk, lemon juice, yogurt, citrus, or honey. When excessive acid is present, some of the baking powder should be replaced with baking soda. For example, one cup of flour, one egg, and one cup of buttermilk requires only ½ teaspoon of baking powder; the remaining leavening is caused by buttermilk acids reacting with ¼ teaspoon of baking soda.
However, with baking powders that contain sodium acid pyrophosphate, excess alkaline substances can sometimes deprotonate the acid in two steps instead of the one that normally occurs, resulting in an offensive bitter taste to baked goods. Calcium compounds and aluminium compounds do not have that problem, though, since calcium compounds that deprotonate twice are insoluble and aluminium compounds do not deprotonate in that fashion.
Moisture and heat can cause baking powder to lose its effectiveness over time, and commercial varieties have a somewhat arbitrary expiration date printed on the container. Regardless of the expiration date, the effectiveness can be tested by placing a teaspoon of the powder into a small container of hot water. If it bubbles vigorously, it is still active and usable.
Comparisons
Different brands of baking powder can perform quite differently in the oven. Early baking powder companies published their own cookbooks, to promote their new products, to educate cooks about exactly how and when to use them, and because cooks could not easily adapt recipes that were developed using different types of baking powder. Baking powders using cream-of-tartar, phosphates, or alums could behave very differently, and required different amounts for a desired rising effect.
In 2015, Cook's Country, an American TV show and magazine, evaluated six baking powders marketed to consumers. In one test, six U.S. brands were used to bake white cake, cream biscuits, and chocolate cookies. Depending on the brand, the thickness of the cakes varied by up to about 40% (from 0.89 to 1.24 in). It was also found that the lower-rising products made what were judged to be better chocolate cookies. Also, 30% of the testers (n=21) noted a metallic flavor in cream biscuits made with brands containing aluminium.
Substituting in recipes
Substitute acids
As described above, baking powder is mainly just baking soda mixed with an acid. In principle, a number of kitchen acids may be combined with baking soda to simulate commercial baking powders. Vinegar (dilute acetic acid), especially white vinegar, is also a common acidifier in baking; for example, many heirloom chocolate cake recipes call for a tablespoon or two of vinegar. Where a recipe already uses buttermilk or yogurt, baking soda can be used without cream of tartar (or with less). Alternatively, lemon juice can be substituted for some of the liquid in the recipe, to provide the required acidity to activate the baking soda. The main variable with the use of these kitchen acids is the rate of leavening.
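For cream of tartar specifically, a commonly cited kitchen approximation (a rule of thumb rather than an exact equivalence, since commercial formulations vary) is:
1 teaspoon baking powder ≈ 1/4 teaspoon baking soda + 1/2 teaspoon cream of tartar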
| Physical sciences | Carbonic oxyanions | Chemistry |
193287 | https://en.wikipedia.org/wiki/Heartburn | Heartburn | Heartburn, also known as pyrosis, cardialgia or acid indigestion, is a burning sensation in the central chest or upper central abdomen. Heartburn is usually due to regurgitation of gastric acid (gastric reflux) into the esophagus. It is the major symptom of gastroesophageal reflux disease (GERD).
Other common descriptors for heartburn (besides burning) are belching, nausea, squeezing, stabbing, or a sensation of pressure on the chest. The pain often rises in the chest (directly behind the breastbone) and may radiate to the neck, throat, or angle of the jaw. Because the chest houses other important organs besides the esophagus (including the heart and lungs), not all symptoms related to heartburn are esophageal in nature.
The cause will vary depending on one's family and medical history, genetics, whether a person is pregnant or lactating, and age. As a result, the diagnosis will vary depending on the suspected organ and the inciting disease process. Work-up will vary depending on the clinical suspicion of the provider seeing the patient, but generally includes endoscopy and a trial of antacids to assess for relief.
Treatment for heartburn may include medications and dietary changes. Medications include antacids. Dietary changes may involve avoiding foods that are high in fat, spicy, or high in artificial flavors, as well as heavily reducing NSAID use, avoiding heavy alcohol consumption, and decreasing peppermint intake. Lifestyle changes, such as weight reduction, may also help.
Definition
The term indigestion includes heartburn along with a number of other symptoms. Indigestion is sometimes defined as a combination of epigastric pain and heartburn. Heartburn is commonly used interchangeably with gastroesophageal reflux disease (GERD) rather than just to describe a symptom of burning in one's chest.
Differential diagnosis
Heartburn-like symptoms may indicate disease. Of greatest concern, heartburn (generally related to the esophagus) may mimic symptoms of a heart attack, as these organs share a common nerve supply. Numerous abdominal and thoracic organs are present in that region of the body. Many different organ systems might explain the discomfort called heartburn.
Heart
The most common symptom for a heart attack is chest pain. However, as many as 30% of people who receive cardiac catheterization for chest pain have findings that do not account for their chest discomfort. These are often defined as having "atypical chest pain" or chest pain of undetermined origin. Women experiencing heart attacks may also deny classic signs and symptoms and instead complain of GI symptoms. One article estimates that ischemic heart disease may appear to be GERD in 0.6% of people.
Esophagus
GERD (most common cause of heartburn) – occurs when acid refluxes from the stomach and inflames the esophagus.
Esophageal spasms – typically occur after eating or drinking and may be combined with difficulty swallowing.
Esophageal strictures
Esophageal cancers
Mallory–Weiss tears – tears of the superficial mucosa of the esophagus that are subsequently exposed to gastric acid, commonly due to vomiting and/or retching
Eosinophilic esophagitis – a disease commonly associated with other atopic diseases such as asthma, food allergies, seasonal allergies, and atopic skin disease
Chemical esophagitis – related to the intake of caustic substances, excessive amounts of hot liquids, alcohol, or tobacco smoke
Infectious esophagitis – especially CMV and certain fungal infections, most common in immunocompromised persons
Stomach
Peptic ulcer disease – can be secondary to Helicobacter pylori infection or heavy NSAID use that weakens stomach mucosal layer. Pain often worsens with eating.
Stomach cancer
Intestines
Intestinal ulcers – generally secondary to other conditions such as H. pylori infection or cancers of the gastrointestinal tract. Pain often improves with eating.
Duodenitis – inflammation of the duodenum, the first section of the small intestine. May be the result of several conditions.
Gallbladder
Gallstones
Pancreas
Pancreatitis – can be autoimmune, due to a gallstone obstructing the pancreatic duct, or related to alcohol consumption.
Hematology
Pernicious anemia – can be autoimmune, due to atrophic gastritis.
Pregnancy
Heartburn is common during pregnancy, having been reported in as many as 80% of pregnancies. It is most often due to GERD and results from relaxation of the lower esophageal sphincter (LES), changes in gastric motility, and/or increasing intra-abdominal pressure. The onset of symptoms can be during any trimester of pregnancy.
Hormonal – related to the increasing amounts of estrogen and progesterone and their effect on the LES
Mechanical – the enlarging uterus increasing intra-abdominal pressure, inducing reflux of gastric acid
Behavioral – as with other instances of heartburn, behavioral modifications can exacerbate or alleviate symptoms
Unknown origin
Functional heartburn is heartburn of unknown cause. It is commonly associated with psychiatric conditions like depression and anxiety. It is also seen with other functional gastrointestinal disorders like irritable bowel syndrome and is the primary cause of lack of improvement post treatment with proton pump inhibitors (PPIs). Despite this, PPIs are still the primary treatment with response rates in about 50% of people. The diagnosis is one of elimination, based upon the Rome III criteria. It was found to be present in 22.3% of Canadians in one survey.
Diagnostic approach
Heartburn can be caused by several conditions and a preliminary diagnosis of GERD is based on additional signs and symptoms. The chest pain caused by GERD has a distinct 'burning' sensation, occurs after eating or at night, and worsens when a person lies down or bends over. It also is common in pregnant women, and may be triggered by consuming food in large quantities, or specific foods containing certain spices, high fat content, or high acid content. In young persons (typically <40 years) who present with heartburn symptoms consistent with GERD (onset after eating, when lying down, when pregnant), a physician may begin a course of PPIs to assess clinical improvement before additional testing is undertaken. Resolution or improvement of symptoms on this course may result in a diagnosis of GERD.
Other tests or symptoms suggesting acid reflux is causing heartburn include:
Onset of symptoms after eating or drinking, at night, and/or with pregnancy, and improvement with PPIs
Endoscopy looking for erosive changes of the esophagus consistent with prolonged acid exposure (e.g., Barrett's esophagus)
Upper GI series looking for the presence of acid reflux
GI cocktail
Relief of symptoms 5 to 10 minutes after the administration of viscous lidocaine and an antacid increases the suspicion that the pain is esophageal in origin. This, however, does not rule out a potential cardiac cause, as 10% of cases of discomfort due to cardiac causes are improved with antacids.
Biochemical
Esophageal pH monitoring: a probe can be placed via the nose into the esophagus to record the level of acidity in the lower esophagus. Because some degree of variation in acidity is normal, and small reflux events are relatively common, esophageal pH monitoring can be used to document reflux in real-time. Patients are able to record symptom onset to correlate lower esophageal pH with time of symptom onset.
Mechanical
Manometry: in this test, a pressure sensor (manometer) is passed via the mouth into the esophagus and measures the pressure of the LES directly.
Endoscopy: the esophageal mucosa can be visualized directly by passing a thin, lighted tube with a tiny camera (an endoscope) through the mouth to examine the esophagus and stomach. In this way, evidence of esophageal inflammation can be detected, and biopsies taken if necessary. Since an endoscopy allows a doctor to visually inspect the upper digestive tract, the procedure may help identify any additional damage to the tract that may not have been detected otherwise.
Biopsy: a small sample of tissue from the esophagus is removed. It is then studied to check for inflammation, cancer, or other problems.
Treatment
Treatment plans are tailored to the specific diagnosis and etiology of the heartburn. Management of heartburn can be sorted into various categories.
Pharmacologic management
Antacids (e.g., calcium carbonate and sodium bicarbonate) are often taken to treat the immediate problem
H2 receptor antagonists or proton pump inhibitors are effective for the two most common causes of heartburn (i.e., gastritis and GERD)
Antibiotics are used if H. pylori is present.
Behavioral management
Taking medications 30–45 minutes before eating suppresses the stomach's acid-generating response to food
Avoiding chocolate, peppermint, caffeine intake, and foods high in fats
Limiting big meals, instead consuming smaller, more frequent meals
Avoiding reclining for 2.5–3.5 hours after a meal to prevent the reflux of stomach contents
Lifestyle modifications
Early studies suggest that diets high in fiber may decrease symptoms of dyspepsia.
Weight loss can decrease abdominal pressure that both delays gastric emptying and increases gastric acid reflux into the esophagus
Smoking cessation
Alternative and complementary therapies
Symptoms of heartburn may not always be the result of an organic cause. Patients may respond better to therapies targeting anxiety, including medications aimed at a psychiatric etiology, osteopathic manipulation, and acupuncture.
Psychotherapy may show a positive role in treatment of heartburn and the reduction of distress experienced during symptoms.
Acupuncture – in cases of PPI failure, adding acupuncture may be more effective than doubling the dose of PPIs.
Surgical management
In the case of GERD causing heartburn symptoms, surgery may be required if PPIs are not effective. Surgery is not performed if functional heartburn is the leading diagnosis.
Epidemiology
About 42% of the United States population has had heartburn at some point.
| Biology and health sciences | Symptoms and signs | Health |
193305 | https://en.wikipedia.org/wiki/Magnesium%20hydroxide | Magnesium hydroxide | Magnesium hydroxide is an inorganic compound with the chemical formula Mg(OH)2. It occurs in nature as the mineral brucite. It is a white solid with low solubility in water. Magnesium hydroxide is a common component of antacids, such as milk of magnesia.
Preparation
Treating a solution of a soluble magnesium salt with alkaline water induces the precipitation of the solid hydroxide Mg(OH)2:
Mg2+ + 2 OH− → Mg(OH)2
As Mg2+ is the second most abundant cation present in seawater after Na+, it can be economically extracted directly from seawater by alkalinisation as described above. On an industrial scale, Mg(OH)2 is produced by treating seawater with lime (Ca(OH)2). Ca(OH)2 is far more soluble than Mg(OH)2 and drastically increases the pH value of seawater from 8.2 to 12.5. The less soluble Mg(OH)2 precipitates because of the common ion effect, due to the OH− added by the dissolution of Ca(OH)2:
Mg2+ + Ca(OH)2 → Mg(OH)2 + Ca2+
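A rough, illustrative calculation shows why the pH values differ so sharply. Taking typical handbook solubility products of roughly $K_{sp} \approx 5 \times 10^{-6}$ for Ca(OH)2 and $K_{sp} \approx 5.6 \times 10^{-12}$ for Mg(OH)2 (approximate values, quoted here only for illustration), the molar solubility of each hydroxide is $s = (K_{sp}/4)^{1/3}$, so that
$$\mathrm{Ca(OH)_2}: \; s \approx 1.1 \times 10^{-2}\,\mathrm{M}, \quad [\mathrm{OH^-}] \approx 2.2 \times 10^{-2}\,\mathrm{M}, \quad \mathrm{pH} \approx 12.3,$$
$$\mathrm{Mg(OH)_2}: \; s \approx 1.1 \times 10^{-4}\,\mathrm{M}, \quad [\mathrm{OH^-}] \approx 2.2 \times 10^{-4}\,\mathrm{M}, \quad \mathrm{pH} \approx 10.3,$$
in line with the values quoted above and in the wastewater treatment section below.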
For seawater brines, precipitating agents other than NaOH can be utilized, each with their own nuances:
Use of Ca(OH)2 can yield calcium-containing by-products, which reduce the final purity of the Mg(OH)2.
Use of ammonia (NH4OH) can produce explosive nitrogen trichloride when the brine is used for chlorine production.
Some precipitating agents give longer settling times and a precipitate that is difficult to filter.
It has been demonstrated that sodium hydroxide, NaOH, is the better precipitating agent compared to Ca(OH)2 and NH4OH due to higher recovery and purity rates, and the settling and filtration time can be improved at low temperatures and higher concentrations of precipitates. Methods involving the use of precipitating agents are typically batch processes.
It is also possible to obtain Mg(OH)2 from seawater using electrolysis chambers separated with a cation exchange membrane. This process is continuous and lower-cost, and it produces oxygen gas, hydrogen gas, an acid by-product such as sulfuric acid (the acid obtained depends on the electrolyte used), and Mg(OH)2 of 98% or higher purity. It is crucial to deaerate the seawater to mitigate co-precipitation of calcium compounds.
Uses
Precursor to MgO
Most Mg(OH)2 that is produced industrially, as well as the small amount that is mined, is converted to fused magnesia (MgO). Magnesia is valuable because it is both a poor electrical conductor and an excellent thermal conductor.
Medical
Only a small amount of the magnesium from magnesium hydroxide is usually absorbed by the intestine (unless one is deficient in magnesium). However, magnesium is mainly excreted by the kidneys; so long-term, daily consumption of milk of magnesia by someone suffering from kidney failure could lead in theory to hypermagnesemia. Unabsorbed magnesium is excreted in feces; absorbed magnesium is rapidly excreted in urine.
Applications
Antacid
As an antacid, magnesium hydroxide is dosed at approximately 0.5–1.5 g in adults. It works by simple neutralization: the hydroxide ions from the Mg(OH)2 combine with acidic H+ ions (or hydronium ions), produced in the form of hydrochloric acid by parietal cells in the stomach, to produce water.
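The overall neutralization of stomach acid can be written as:
Mg(OH)2 + 2 HCl → MgCl2 + 2 H2O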
Laxative
As a laxative, magnesium hydroxide is dosed at higher levels than as an antacid, and works in a number of ways. First, Mg2+ is poorly absorbed from the intestinal tract, so it draws water from the surrounding tissue by osmosis. Not only does this increase in water content soften the feces, it also increases the volume of feces in the intestine (intraluminal volume), which naturally stimulates intestinal motility. Furthermore, Mg2+ ions cause the release of cholecystokinin (CCK), which results in intraluminal accumulation of water and electrolytes, and increased intestinal motility. Some sources claim that the hydroxide ions themselves do not play a significant role in the laxative effects of milk of magnesia, as alkaline solutions (i.e., solutions of hydroxide ions) are not strongly laxative, and non-alkaline Mg2+ solutions, like MgSO4, are equally strong laxatives, mole for mole.
History of milk of magnesia
On May 4, 1818, American inventor Koen Burrows received a patent (No. X2952) for magnesium hydroxide. In 1829, Sir James Murray used a "condensed solution of fluid magnesia" preparation of his own design to treat the Lord Lieutenant of Ireland, the Marquess of Anglesey, for stomach pain. This was so successful (advertised in Australia and approved by the Royal College of Surgeons in 1838) that he was appointed resident physician to Anglesey and two subsequent Lords Lieutenant, and knighted. His fluid magnesia product was patented two years after his death, in 1873.
The term milk of magnesia was first used by Charles Henry Phillips in 1872 for a suspension of magnesium hydroxide formulated at about 8% w/v. It was sold under the brand name Phillips' Milk of Magnesia for medicinal usage.
USPTO registrations show that the terms "Milk of Magnesia" and "Phillips' Milk of Magnesia" have both been assigned to Bayer since 1995. In the UK, the non-brand (generic) name of "Milk of Magnesia" and "Phillips' Milk of Magnesia" is "Cream of Magnesia" (Magnesium Hydroxide Mixture, BP).
As food additive
It is added directly to human food, and is affirmed as generally recognized as safe by the FDA. It is known as E number E528.
Magnesium hydroxide is marketed for medical use as chewable tablets, as capsules, powder, and as liquid suspensions, sometimes flavored. These products are sold as antacids to neutralize stomach acid and relieve indigestion and heartburn. It also is a laxative to alleviate constipation. As a laxative, the osmotic force of the magnesia acts to draw fluids from the body. High doses can lead to diarrhea, and can deplete the body's supply of potassium, sometimes leading to muscle cramps.
Some magnesium hydroxide products sold for antacid use (such as Maalox) are formulated to minimize unwanted laxative effects through the inclusion of aluminum hydroxide, which inhibits the contractions of smooth muscle cells in the gastrointestinal tract, thereby counterbalancing the contractions induced by the osmotic effects of the magnesium hydroxide.
Other niche uses
Magnesium hydroxide is also a component of antiperspirant.
Waste water treatment
Magnesium hydroxide powder is used industrially to neutralize acidic wastewaters. It is also a component of the Biorock method of building artificial reefs. The main advantage of Mg(OH)2 over Ca(OH)2 is that it imposes a lower pH, better compatible with that of seawater and sea life: pH 10.5 for Mg(OH)2 in place of pH 12.5 with Ca(OH)2.
Fire retardant
Natural magnesium hydroxide (brucite) is used commercially as a fire retardant. Most industrially used magnesium hydroxide is produced synthetically. Like aluminum hydroxide, solid magnesium hydroxide has smoke-suppressing and flame-retardant properties. This property is attributable to the endothermic decomposition it undergoes at about 330 °C:
Mg(OH)2 → MgO + H2O
The heat absorbed by the reaction retards the fire by delaying ignition of the associated substance. The water released dilutes combustible gases. Common uses of magnesium hydroxide as a flame retardant include additives to cable insulation, insulation plastics, roofing, and various flame retardant coatings.
Mineralogy
Brucite, the mineral form of Mg(OH)2 commonly found in nature, also occurs in the 2:1:1 clay minerals, for example in chlorite, in which it occupies the interlayer position normally filled by monovalent and divalent cations such as Na+, K+, Mg2+ and Ca2+. As a consequence, chlorite interlayers are cemented by brucite and can neither swell nor shrink.
Brucite in which some of the Mg2+ cations have been substituted by Al3+ cations becomes positively charged and constitutes the main basis of layered double hydroxide (LDH). LDH minerals such as hydrotalcite are powerful anion sorbents but are relatively rare in nature.
Brucite may also crystallize in cement and concrete in contact with seawater. Indeed, the Mg2+ cation is the second-most-abundant cation in seawater, just behind Na+ and before Ca2+. Because brucite is a swelling mineral, it causes a local volumetric expansion responsible for tensile stress in concrete. This leads to the formation of cracks and fissures in concrete, accelerating its degradation in seawater.
For the same reason, dolomite cannot be used as construction aggregate for making concrete. The reaction of magnesium carbonate with the free alkali hydroxides present in the cement porewater also leads to the formation of expansive brucite.
MgCO3 + 2 NaOH → Mg(OH)2 + Na2CO3
This reaction, one of the two main alkali–aggregate reactions (AAR), is also known as the alkali–carbonate reaction.
| Physical sciences | Hydroxy anion | Chemistry |
193308 | https://en.wikipedia.org/wiki/Cryolite | Cryolite | Cryolite (Na3AlF6, sodium hexafluoroaluminate) is an uncommon mineral identified with the once-large deposit at Ivittuut on the west coast of Greenland, mined commercially until 1987.
It is used in the reduction ("smelting") of aluminium, in pest control, and as a dye.
History
Cryolite was first described in 1798 by Danish veterinarian and physician Peder Christian Abildgaard (1740–1801), from rock samples obtained from Eskimos who used the mineral for washing their hides; the actual source of the ore was later discovered in 1806 by the explorer Karl Ludwig Giesecke, who found the deposit at Ivigtut (old spelling) and nearby Arsuk Fjord, Southwest Greenland. The name is derived from the Greek language words κρύος (cryos) = frost, and λίθος (lithos) = stone.
The Pennsylvania Salt Manufacturing Company used large amounts of cryolite to make caustic soda and fluorine compounds, including hydrofluoric acid at its Natrona, Pennsylvania, works, and at its integrated chemical plant in Cornwells Heights, Pennsylvania, during the 19th and 20th centuries.
It was historically used as an ore of aluminium and later in the electrolytic processing of the aluminium-rich oxide ore bauxite (itself a combination of aluminium oxide minerals such as gibbsite, boehmite and diaspore). The difficulty of separating aluminium from oxygen in the oxide ores was overcome by the use of cryolite as a flux to dissolve the oxide mineral(s). Pure cryolite itself melts at 1012 °C (1285 K), and it can dissolve the aluminium oxides sufficiently well to allow easy extraction of the aluminium by electrolysis. Substantial energy is still needed for both heating the materials and the electrolysis, but it is much more energy-efficient than melting the oxides themselves. As natural cryolite is now too rare to be used for this purpose, synthetic sodium aluminium fluoride is produced from the common mineral fluorite.
In 1940, before entering World War II, the United States became involved in protecting the world's largest cryolite mine, at Ivittuut, Greenland, from falling under Nazi Germany's control.
Source locations
Besides Ivittuut, on the west coast of Greenland, where cryolite was once found in commercial quantities, small deposits of cryolite have also been reported in some areas of Spain, at the foot of Pikes Peak in Colorado, at the Francon Quarry near Montreal in Quebec, Canada, and in Miass, Russia.
Uses
Molten cryolite is used as a solvent for aluminium oxide (Al2O3) in the Hall–Héroult process, used in the refining of aluminium. It decreases the melting point of aluminium oxide from 2000–2500 °C to 900–1000 °C, and increases its conductivity thus making the extraction of aluminium more economical.
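With the consumable carbon anodes used industrially, the overall cell reaction of the Hall–Héroult process can be summarized as follows (the cryolite acts only as the solvent and is not consumed):
2 Al2O3 + 3 C → 4 Al + 3 CO2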
Cryolite is used as an insecticide and a pesticide. It is also used to give fireworks a yellow color.
It is used in glass manufacturing as a "powerful opaliser."
Physical properties
Cryolite occurs as glassy, colorless, white-reddish to gray-black prismatic monoclinic crystals. It has a Mohs hardness of 2.5 to 3 and a specific gravity of about 2.95 to 3.0. It is translucent to transparent with a very low refractive index of about 1.34, which is very close to that of water; thus if immersed in water, cryolite becomes essentially invisible.
| Physical sciences | Minerals | Earth science |
193366 | https://en.wikipedia.org/wiki/Asafoetida | Asafoetida | Asafoetida (; also spelled asafetida) is the dried latex (gum oleoresin) exuded from the rhizome or tap root of several species of Ferula, perennial herbs of the carrot family. It is produced in Iran, Afghanistan, Central Asia, northern India and Northwest China (Xinjiang). Different regions have different botanical sources.
Asafoetida has a pungent smell, as reflected in its name, lending it the common name of "stinking gum". The odour dissipates upon cooking; in cooked dishes, it delivers a smooth flavour reminiscent of leeks or other onion relatives. Asafoetida is also known colloquially as "devil's dung" in English (and similar expressions in many other languages).
Etymology and other names
The English name is derived from asa, a Latinised form of Persian azā 'mastic', and Latin foetida 'stinky'.
Its pungent odour has resulted in many unpleasant names in other languages.
Composition
Typical asafoetida contains about 40–64% resin, 25% endogenous gum, 10–17% volatile oil, and 1.5–10% ash. The resin portion contains asaresinotannols A and B, ferulic acid, umbelliferone, and four unidentified compounds. The volatile oil component is rich in various organosulfide compounds, such as 2-butyl-propenyl-disulfide, diallyl sulfide, diallyl disulfide (also present in garlic) and dimethyl trisulfide, which is also responsible for the odour of cooked onions. The organosulfides are primarily responsible for the odour and flavour of asafoetida.
Botanical sources
Many Ferula species are utilised as the sources of asafoetida. Most of them are characterised by abundant sulphur-containing compounds in the essential oil.
Ferula foetida is the source of asafoetida in Eastern Iran, western Afghanistan, western Pakistan and Central Asia (Karakum Desert, Kyzylkum Desert). It is one of the most widely distributed asafoetida-producing species and often mistaken for F. assa-foetida. It has sulphur-containing compounds in the essential oil.
Ferula assa-foetida is endemic to Southern Iran and is the source of asafoetida there. It has sulphur-containing compounds in the essential oil. Although it is often considered the main source of asafoetida on the international market, this notion is attributable to the fact that several Ferula species acting as the major sources are often misidentified as F. assa-foetida. In fact, the production of asafoetida from F. assa-foetida is confined to its native range, namely Southern Iran, outside which the sources of asafoetida are other species.
Ferula pseudalliacea and Ferula rubricaulis are endemic to western and southwestern Iran. They are sometimes considered conspecific with F. assa-foetida.
Ferula lutensis and Ferula alliacea are the sources of asafoetida in Eastern Iran. They have sulphur-containing compounds in the essential oil.
Ferula latisecta is the source of asafoetida in Eastern Iran and southern Turkmenistan. It has sulphur-containing compounds in the essential oil.
Ferula sinkiangensis and Ferula fukanensis are endemic to Xinjiang, China. They are the sources of asafoetida in China. They have sulphur-containing compounds in the essential oil.
Ferula narthex is native to Afghanistan, northern Pakistan and Kashmir. Although it is often listed as the source of asafoetida, one report states that it lacks sulphur-containing compounds in the essential oil.
Uses
Cooking
This spice is used as a digestive aid, in food as a condiment, and in pickling. It plays a critical flavouring role in Indian vegetarian cuisine by acting as a savory enhancer. Used along with turmeric, it is a standard component of lentil curries, such as dal, chickpea curries, and vegetable dishes, especially those based on potato and cauliflower. It is sometimes used to harmonise sweet, sour, salty, and spicy components in food. The spice is added to the food as it is tempered: asafoetida is quickly heated in hot oil before being sprinkled on the food.
In its pure form, it is sold in the form of chunks of resin, small quantities of which are scraped off for use. The odour of the pure resin is so strong that the pungent smell will contaminate other spices stored nearby if it is not stored in an airtight container.
When adapting recipes for those with garlic allergy or intolerance, asafoetida can be used as a substitute.
Cultivation and manufacture
The resin-like gum comes from the dried sap extracted from the stem and roots, and is used as a spice. The resin is greyish-white when fresh, but dries to a dark amber colour. The asafoetida resin is difficult to grate and is traditionally crushed between stones or with a hammer. Today, the most commonly available form is compounded asafoetida, a fine powder containing 30% asafoetida resin, along with rice flour or maida (white wheat flour) and gum arabic.
Ferula assa-foetida is a monoecious, herbaceous, perennial plant of the family Apiaceae. It grows tall, with a circular mass of leaves. Stem leaves have wide sheathing petioles. Flowering stems are tall, thick, and hollow, with a number of schizogenous ducts in the cortex containing the resinous gum. Flowers are pale greenish yellow, produced in large compound umbels. Fruits are oval, flat, thin, reddish brown, and have a milky juice. Roots are thick, massive, and pulpy. They yield a resin similar to that of the stems. All parts of the plant have the distinctive fetid smell.
History
Asafoetida was familiar in the early Mediterranean, having come by land across Iran. It was brought to Europe by an expedition of Alexander the Great, who, after returning from a trip to northeastern ancient Persia, thought that he had found a plant almost identical to the famed silphium of Cyrene in North Africa—though less tasty. Dioscorides, in the first century, wrote, "the Cyrenaic kind, even if one just tastes it, at once arouses a humour throughout the body and has a very healthy aroma, so that it is not noticed on the breath, or only a little; but the Median [Iranian] is weaker in power and has a nastier smell." Nevertheless, it could be substituted for silphium in cooking, which was fortunate, because a few decades after Dioscorides' time, the true silphium of Cyrene became extinct, and asafoetida became more popular amongst physicians, as well as cooks.
Asafoetida is also mentioned numerous times in Jewish literature, such as the Mishnah. Maimonides also writes in the Mishneh Torah: "In the rainy season, one should eat warm food with much spice, but a limited amount of mustard and asafoetida."
While it is generally forgotten now in Europe, it is widely used in India. Asafoetida is mentioned in the Bhagavata Purana (7:5:23-24), which states that one must not have eaten hing before worshipping the deity. Asafoetida is eaten by Brahmins and Jains. Devotees of the Hare Krishna movement also use hing in their food, as they are not allowed to consume onions or garlic. Their food has to be presented to Lord Krishna for sanctification (to become Prasadam) before consumption and onions and garlic cannot be offered to Krishna.
Asafoetida was described by a number of Arab and Islamic scientists and pharmacists. Avicenna discussed the effects of asafoetida on digestion. Ibn al-Baitar and Fakhr al-Din al-Razi described some positive medicinal effects on the respiratory system.
After the fall of Rome and until the 16th century, asafoetida was rare in Europe, and if ever encountered, it was viewed as a medicine. "If used in cookery, it would ruin every dish because of its dreadful smell", asserted Garcia de Orta's European guest. "Nonsense", Garcia replied, "nothing is more widely used in every part of India, both in medicine and in cookery."
During the Italian Renaissance, asafoetida was used as part of the exorcism ritual.
| Biology and health sciences | Herbs and spices | Plants |
193398 | https://en.wikipedia.org/wiki/Traumatology | Traumatology | In medicine, traumatology (from Greek trauma, meaning injury or wound) is the study of wounds and injuries caused by accidents or violence to a person, and the surgical therapy and repair of the damage. Traumatology is a branch of medicine. It is often considered a subset of surgery and in countries without the specialty of trauma surgery it is most often a sub-specialty to orthopedic surgery. Traumatology may also be known as accident surgery.
Branches
Branches of traumatology include medical traumatology and psychological traumatology. Medical traumatology can be defined as the study of, and specialization in, the treatment of wounds and injuries caused by violence or general accidents. This type of traumatology focuses on the surgical procedures and future physical therapy a patient needs to repair the damage and recover properly. Psychological traumatology concerns damage to one's mind due to a distressing event; this type of trauma can also be the result of overwhelming amounts of stress in one's life. Psychological trauma usually involves some type of physical trauma that poses a threat to one's sense of security and survival. Psychological trauma often leaves people feeling overwhelmed, anxious, and threatened.
Trauma can also be classified as:
Acute: It results from a single stressful or dangerous situation.
Chronic: It results from repeated and prolonged exposure to highly stressful situations.
Complex: It results from exposure to multiple traumatic events.
Secondary or vicarious trauma, is another form of trauma in which a person develops trauma symptoms from close contact with someone who has experienced a traumatic event.
Types of trauma
When it comes to types of trauma, medical and psychological traumatology go hand in hand. Types of trauma include car accidents, gunshot wounds, concussions, PTSD from incidents, etc. Medical traumas are repaired with surgeries; however, they can still cause psychological trauma and other stress factors. For example, a teenager in a car accident who broke his wrist and needed extensive surgery to save his arm may experience anxiety when driving in a car post-accident. PTSD can be diagnosed after a person experiences one or more intense and traumatic events, reacts with fear, and has complaints from three symptom categories lasting one month or longer. These categories are: re-experiencing the traumatic event, avoiding anything associated with the trauma, and symptoms of increased psychological arousal.
Guidelines for essential trauma care
Airway management, monitoring, and management of injuries are all key guidelines when it comes to medical trauma care. Airway management is a key component of emergency on-scene care. Using a systematic approach, first responders must check that a patient's airway is not blocked in order to ensure the patient gets enough circulation and remains as calm as possible. Monitoring patients and making sure their body does not go into shock is another essential guideline of medical trauma care. Nurses are required to watch over patients and check blood pressure, heart rate, etc. to make sure that patients are stable and not deteriorating. When it comes to managing injuries, head and neck injuries require the most care after surgery. Head injuries are one of the major causes of trauma-related death and disability worldwide. It is important for patients with head trauma to get CT scans after surgery to ensure that there are no complications.
Guidelines for psychological trauma care
There is a range of approaches to assist victims in overcoming the anxiety and stress that follow psychological trauma. Affected persons can also practice self-care, such as exercise and socializing with familiar and safe associates and family members. Trauma disturbs the body's natural equilibrium by putting it in a state of fear and hyper-arousal; exercising for thirty minutes a day can help the nervous system "unfreeze" from this traumatic state. Being surrounded by a good support system is a powerful factor in treating psychological trauma. Participating in social activities, volunteering, and making new friends are all ways to help forget about or cope with traumatic events. Coming to terms with childhood trauma is especially challenging.
Patient assessment
Advanced trauma life support, training for medical doctors dealing with trauma
Revised Trauma Score
Injury Severity Score
Abbreviated Injury Scale
Triage
Wound assessment
Factors in the assessment of wounds are:
the nature of the wound, whether it is a laceration, abrasion, bruise or burn
the size of the wound in length, width and depth
the extent of the overall area of tissue damage caused by the impact of a mechanical force, or the reaction to chemical agents in, for example, fires or exposure to caustic substances.
Forensic physicians, as well as pathologists, may also be required to examine (traumatic) wounds on people.
| Biology and health sciences | Fields of medicine | Health |
193577 | https://en.wikipedia.org/wiki/Lynx%20%28constellation%29 | Lynx (constellation) | Lynx is a constellation named after the animal, usually observed in the Northern Celestial Hemisphere. The constellation was introduced in the late 17th century by Johannes Hevelius. It is a faint constellation, with its brightest stars forming a zigzag line. The orange giant Alpha Lyncis is the brightest star in the constellation, and the semiregular variable star Y Lyncis is a target for amateur astronomers. Six star systems have been found to contain planets. Those of 6 Lyncis and HD 75898 were discovered by the Doppler method; those of XO-2, XO-4, XO-5 and WASP-13 were observed as they passed in front of the host star.
Within the constellation's borders lie NGC 2419, an unusually remote globular cluster; the galaxy NGC 2770, which has hosted three recent Type Ib supernovae; the distant quasar APM 08279+5255, whose light is magnified and split into multiple images by the gravitational lensing effect of a foreground galaxy; and the Lynx Supercluster, which was the most distant supercluster known at the time of its discovery in 1999.
History
Polish astronomer Johannes Hevelius formed the constellation in 1687 from 19 faint stars between the constellations Ursa Major and Auriga that earlier had been part of the obsolete constellation Jordanus Fluvius. Naming it Lynx because of its faintness, he challenged future stargazers to see it, declaring that only the lynx-eyed (those with good sight) would have been able to recognize it. Hevelius also used the name Tigris (Tiger) in his catalog but kept the former name only in his atlas. English astronomer John Flamsteed adopted the constellation in his catalog, published in 1712, and his subsequent atlas. According to 19th-century amateur astronomer Richard Hinckley Allen, the chief stars in Lynx "might well have been utilized by the modern constructor, whoever he was, of our Ursa Major to complete the quartette of feet."
Characteristics
Lynx is bordered by Camelopardalis to the north, Auriga to the west, Gemini to the southwest, Cancer to the south, Leo to the east and Ursa Major to the northeast. Covering 545.4 square degrees and 1.322% of the night sky, it ranks 28th of the 88 constellations in size, surpassing better known constellations such as Gemini. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Lyn". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 20 segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between roughly 6h 16m and 9h 43m, and the declination coordinates are between +32.97° and +61.96°. On dark nights, the brighter stars can be seen as a crooked line extending roughly between Camelopardalis and Leo, and north of the bright star Castor. Lynx is most readily observed from the late winter to late summer for northern hemisphere observers, with midnight culmination occurring on 20 January. The whole constellation is visible to observers north of latitude 28°S.
Notable features
Stars
English astronomer Francis Baily gave a single star a Bayer designation—Alpha Lyncis—while Flamsteed numbered 44 stars, though several lie across the boundary in Ursa Major. Overall, there are 97 stars within the constellation's borders brighter than or equal to apparent magnitude 6.5.
The brightest star in this constellation is Alpha Lyncis, with an apparent (visual) magnitude of 3.14. It is an orange giant of spectral type K7III located 203 ± 2 light-years distant from Earth. Around twice as massive as the Sun, it has exhausted the hydrogen at its core and has evolved away from the main sequence. The star has swollen to about 55 times the Sun's radius and is emitting roughly 673 times the luminosity of the Sun. The stellar atmosphere has cooled, giving it a surface temperature of 3,880 K. The only star with a proper name is Alsciaukat (from the Arabic for thorn), also known as 31 Lyncis, located 380 ± 10 light-years from Earth. This star is also an evolved giant with around twice the Sun's mass that has swollen and cooled since exhausting its core hydrogen. It is anywhere from 59 to 75 times as wide as the Sun, and 740 times as luminous. Alsciaukat is also a variable star, ranging in brightness by 0.05 magnitude over 25 to 30 days from its baseline magnitude of 4.25.
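The radius, temperature, and luminosity quoted above for Alpha Lyncis are tied together by the Stefan–Boltzmann law; as a rough consistency check (taking a solar effective temperature of about 5772 K),
$$\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2} \left(\frac{T}{T_\odot}\right)^{4} \approx 55^2 \times \left(\frac{3880}{5772}\right)^{4} \approx 6 \times 10^{2},$$
which agrees with the quoted luminosity to within the rounding of the input values.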
Lynx is rich in double stars. The second brightest star in the constellation is 38 Lyncis at magnitude 3.8. When viewed through a moderate telescope, the two components—a brighter blue-white star of magnitude 3.9 and a fainter star of magnitude 6.1 that has been described as lilac as well as blue-white—can be seen. 15 Lyncis is another star that is found to be a double system when viewed through a telescope, separating into two yellowish stars of magnitudes 4.7 and 5.8 that are 0.9 arcseconds apart. The components are a yellow giant of spectral type G8III that is around 4.01 times as massive as the Sun, and a yellow-white main sequence star of spectral type F8V that is around 3.73 times as massive as the Sun. Orbiting each other every 262 years, the stars are 178 ± 2 light years distant from Earth. 12 Lyncis has a combined apparent magnitude of 4.87. When seen through a telescope, it can be separated into three stars: two components with magnitudes 5.4 and 6.0 that lie at an angular separation of 1.8″, and a yellow-hued star of magnitude 7.2 at a separation of 8.6″ (as of 1990). The two brighter stars are estimated to orbit each other with a period that is poorly known but estimated to be roughly 700 to 900 years. The 12 Lyncis system is 210 ± 10 light years distant from Earth.
10 Ursae Majoris is the third-brightest star in Lynx. Originally in the neighbouring constellation Ursa Major, it became part of Lynx with the official establishment of the constellation's borders. Appearing to be of magnitude 3.97, a telescope reveals a yellow-white main sequence star of spectral type F4V of magnitude 4.11 and a star very similar to the Sun of spectral type G5V and magnitude 6.18. The two are 10.6 astronomical units (au) apart and orbit each other every 21.78 years. The system is 52.4 ± 0.6 light-years distant from Earth. Likewise 16 Lyncis was originally known as Psi10 Aurigae and conversely, 37, 39, 41 and 44 Lyncis became part of Ursa Major.
Y Lyncis is a popular target among amateur astronomers, as it is a semiregular variable ranging in brightness from magnitude 6.2 to 8.9. These shifts in brightness are complex, with a shorter period of 110 days due to the star's pulsations, and a longer period of 1400 days possibly due to the star's rotation or regular cycles in its convection. A red supergiant, it has an estimated diameter around 580 times that of the Sun, is around 1.5 to 2 times as massive, and has a luminosity around 25,000 times that of the Sun. 1 Lyncis and UX Lyncis are red giants that are also semiregular variables with complex fluctuations in brightness.
Exoplanets
Six star systems have been found to contain exoplanets, of which two were discovered by the Doppler method and four by the transit method. 6 Lyncis, an orange subgiant that spent much of its life as an A-type or F-type main sequence star, is orbited by a planet with a minimum mass of 2.4 Jupiter masses and an orbital period of 899 days. HD 75898 is a 3.8 ± 0.8 billion-year-old yellow star of spectral type G0V that has just begun expanding and cooling off the main sequence. It has a planet at least 2.51 times as massive as Jupiter orbiting with a period of around 418 days. The centre of mass of the system is accelerating, indicating there is a third, more distant, component at least the size of Jupiter. Three star systems were found to have planets that were observed by the XO Telescope in Hawaii as they passed in front of them. XO-2 is a binary star system, both the stars of which are slightly less massive and cooler than the Sun and have planetary companions: XO-2S has a Saturn-mass planet at 0.13 au distance with a period of around 18 days, and one a little more massive than Jupiter at a distance of 0.48 au and with a period of around 120 days, and XO-2N has a hot Jupiter with around half Jupiter's mass that has an orbit of only 2.6 days. XO-4 is an F-type main sequence star that is a little hotter and more massive than the Sun that has a hot Jupiter orbiting with a period of around 4.1 days. XO-5 is a Sun-like star with a hot Jupiter about as massive as Jupiter that takes around 4.2 days to complete an orbit. WASP-13, a Sun-like star that has begun to swell and cool off the main sequence, had a transiting planet discovered by the SuperWASP program in 2009. The planet is around half as massive as Jupiter and takes 4.35 days to complete a revolution.
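The transit detections above rely on the fact that, to first order, a crossing planet dims its host star by the ratio of the two discs' areas:
$$\frac{\Delta F}{F} \approx \left(\frac{R_p}{R_\star}\right)^{2}.$$
A hot Jupiter around a Sun-like star has $R_p \approx 0.1\, R_\star$, giving a dip of roughly 1%, which is the signal level that surveys such as XO and SuperWASP are built to detect.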
Deep-sky objects
Lynx's most notable deep sky object is NGC 2419, also called the "Intergalactic Wanderer" as it was assumed to lie outside the Milky Way. At a distance of between 275,000 and 300,000 light-years from Earth, it is one of the most distant known globular clusters within our galaxy. NGC 2419 is likely in a highly elliptical orbit around the Milky Way. It has a magnitude of +9.06 and is a Shapley class VII cluster. Originally thought to be a star, NGC 2419 was discovered to be a globular cluster by American astronomer Carl Lampland.
NGC 2537, known as the Bear's Paw Galaxy, lies about 3 degrees north-northwest of 31 Lyncis. It is a blue compact dwarf galaxy that is somewhere between 17 and 30 million light-years away from Earth. Close by is IC 2233, a very flat and thin spiral galaxy that is between 26 and 40 million light-years away from Earth. A comparatively quiet galaxy with a low rate of star formation (less than one solar mass every twenty years), it was long suspected to be interacting with the Bear's Paw galaxy. This is now considered highly unlikely as observations with the Very Large Array showed the two galaxies lie at different distances.
The NGC 2841 group is a group of galaxies that lie both in Lynx and neighbouring Ursa Major. It includes the loose triplet NGC 2541, NGC 2500, and NGC 2552 within Lynx. Using cepheids of NGC 2541 as standard candles, the distance to that galaxy (and the group) has been estimated at around 40 million light-years. NGC 2841 itself lies in Ursa Major.
NGC 2770 is a type SASc spiral galaxy located about 88 million light-years away that has hosted three Type Ib supernovae: SN 1999eh, SN 2007uy, and SN 2008D. The last of these is famous for being the first supernova detected by the X-rays released very early on in its formation, rather than by the optical light emitted during later stages, which allowed the first moments of the outburst to be observed. It is possible that NGC 2770's interactions with a suspected companion galaxy may have created the massive stars causing this activity. UGC 4904 is a galaxy located about 77 million light-years from Earth. On 20 October 2004, a supernova impostor was observed by Japanese amateur astronomer Kōichi Itagaki within the galaxy. Observations of its spectrum suggest that it shed massive amounts of material in a two-year period, transforming from an LBV star to a Wolf–Rayet star, before it was observed erupting as hypernova SN 2006jc on 11 October 2006.
APM 08279+5255 is a very distant, broad absorption line quasar discovered in 1998 and initially considered the most luminous object yet found. It is magnified and split into multiple images by the gravitational lensing effect of a foreground galaxy through which its light passes. It appears to be a giant elliptical galaxy with a supermassive black hole around 23 billion times as massive as the Sun and an associated accretion disk that has a diameter of 3600 light years. The galaxy possesses large regions of hot dust and molecular gas, as well as regions with starburst activity. It has a cosmological redshift of 3.911. While observing the quasar in 2008, astronomers using ESA's XMM Newton and the Large Binocular Telescope (LBT) in Arizona discovered the huge galaxy cluster 2XMM J083026+524133.
The Lynx Supercluster is a remote supercluster with a redshift of 1.26–1.27. It was the most distant supercluster known at the time of its discovery in 1999. It is made up of two main clusters of galaxies—RX J0849+4452 or Lynx E and RX J0848+4453 or Lynx W—and several smaller clumps. Further still lies the Lynx Arc, located around 12 billion light years away (a redshift of 3.357). It is a distant region containing a million extremely hot, young blue stars with surface temperatures of 80,000–100,000 K that are twice as hot as similar stars in the Milky Way galaxy. Only visible through gravitational lensing produced by a closer cluster of galaxies, the Arc is a feature of the early days of the universe, when "furious firestorms of star birth" were more common.
Meteor showers
The September Lyncids are a minor meteor shower that appears around 6 September. They were historically more prominent, described as such by Chinese observers in 1037 and 1063, and Korean astronomers in 1560. The Alpha Lyncids were discovered in 1971 by Malcolm Currie, and appear between 10 December and 3 January.
| Physical sciences | Other | Astronomy |
193735 | https://en.wikipedia.org/wiki/Poisson%27s%20equation | Poisson's equation | Poisson's equation is an elliptic partial differential equation of broad utility in theoretical physics. For example, the solution to Poisson's equation is the potential field caused by a given electric charge or mass density distribution; with the potential field known, one can then calculate the corresponding electrostatic or gravitational (force) field. It is a generalization of Laplace's equation, which is also frequently seen in physics. The equation is named after French mathematician and physicist Siméon Denis Poisson who published it in 1823.
Statement of the equation
Poisson's equation is
$$\Delta \varphi = f,$$
where $\Delta$ is the Laplace operator, and $f$ and $\varphi$ are real or complex-valued functions on a manifold. Usually, $f$ is given, and $\varphi$ is sought. When the manifold is Euclidean space, the Laplace operator is often denoted as $\nabla^2$, and so Poisson's equation is frequently written as
$$\nabla^2 \varphi = f.$$
In three-dimensional Cartesian coordinates, it takes the form
$$\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right)\varphi(x,y,z) = f(x,y,z).$$
When $f = 0$ identically, we obtain Laplace's equation.
Poisson's equation may be solved using a Green's function:
$$\varphi(\mathbf{r}) = -\frac{1}{4\pi} \iiint \frac{f(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d^3 r',$$
where the integral is over all of space. A general exposition of the Green's function for Poisson's equation is given in the article on the screened Poisson equation. There are various methods for numerical solution, such as the relaxation method, an iterative algorithm.
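To illustrate the relaxation method, the following minimal Python sketch (an illustrative example, not taken from a specific reference) solves the two-dimensional Poisson equation on the unit square with zero boundary values by Jacobi iteration:

import numpy as np

def solve_poisson_jacobi(f, n_iter=5000):
    # Solve the 2D Poisson equation (nabla^2 phi = f) on the unit square
    # with phi = 0 on the boundary, using Jacobi relaxation.
    n = f.shape[0]        # grid points per side, boundaries included
    h = 1.0 / (n - 1)     # grid spacing
    phi = np.zeros_like(f)
    for _ in range(n_iter):
        # Each interior point becomes the average of its four neighbours,
        # corrected by the source term: phi = (sum of neighbours - h^2 f) / 4.
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2]
                                  - h**2 * f[1:-1, 1:-1])
    return phi

# Example: an approximate point source in the middle of a 51 x 51 grid.
n = 51
f = np.zeros((n, n))
f[n // 2, n // 2] = 1.0 / (1.0 / (n - 1))**2   # discrete delta of unit strength
phi = solve_poisson_jacobi(f)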
Applications in physics and engineering
Newtonian gravity
In the case of a gravitational field g due to an attracting massive object of density ρ, Gauss's law for gravity in differential form can be used to obtain the corresponding Poisson equation for gravity. Gauss's law for gravity is
$$\nabla \cdot \mathbf{g} = -4\pi G \rho.$$
Since the gravitational field is conservative (and irrotational), it can be expressed in terms of a scalar potential ϕ:
$$\mathbf{g} = -\nabla \phi.$$
Substituting this into Gauss's law,
$$\nabla \cdot (-\nabla \phi) = -4\pi G \rho,$$
yields Poisson's equation for gravity:
$$\nabla^2 \phi = 4\pi G \rho.$$
If the mass density is zero, Poisson's equation reduces to Laplace's equation. The corresponding Green's function can be used to calculate the potential at distance $r$ from a central point mass $m$ (i.e., the fundamental solution). In three dimensions the potential is
$$\phi(r) = -\frac{Gm}{r},$$
which is equivalent to Newton's law of universal gravitation.
Electrostatics
Many problems in electrostatics are governed by the Poisson equation, which relates the electric potential $\varphi$ to the free charge density $\rho_f$, such as those found in conductors. The mathematical details of Poisson's equation, commonly expressed in SI units (as opposed to Gaussian units), describe how the distribution of free charges generates the electrostatic potential in a given region.
Starting with Gauss's law for electricity (also one of Maxwell's equations) in differential form, one has
$$\nabla \cdot \mathbf{D} = \rho_f,$$
where $\nabla \cdot$ is the divergence operator, D is the electric displacement field, and ρf is the free-charge density (describing charges brought from outside).
Assuming the medium is linear, isotropic, and homogeneous (see polarization density), we have the constitutive equation
$$\mathbf{D} = \varepsilon \mathbf{E},$$
where $\varepsilon$ is the permittivity of the medium, and E is the electric field.
Substituting this into Gauss's law and assuming that $\varepsilon$ is spatially constant in the region of interest yields
$$\nabla \cdot \mathbf{E} = \frac{\rho_f}{\varepsilon}.$$
In electrostatics, we assume that there is no magnetic field (the argument that follows also holds in the presence of a constant magnetic field).
Then, we have that
$$\nabla \times \mathbf{E} = 0,$$
where $\nabla \times$ is the curl operator. This equation means that we can write the electric field as the gradient of a scalar function $\varphi$ (called the electric potential), since the curl of any gradient is zero. Thus we can write
$$\mathbf{E} = -\nabla \varphi,$$
where the minus sign is introduced so that $\varphi$ is identified as the electric potential energy per unit charge.
The derivation of Poisson's equation under these circumstances is straightforward. Substituting the potential gradient for the electric field,
$$\nabla \cdot \mathbf{E} = \nabla \cdot (-\nabla \varphi) = -\nabla^2 \varphi,$$
directly produces Poisson's equation for electrostatics, which is
$$\nabla^2 \varphi = -\frac{\rho_f}{\varepsilon}.$$
Specifying the Poisson's equation for the potential requires knowing the charge density distribution. If the charge density is zero, then Laplace's equation results. If the charge density follows a Boltzmann distribution, then the Poisson–Boltzmann equation results. The Poisson–Boltzmann equation plays a role in the development of the Debye–Hückel theory of dilute electrolyte solutions.
Using a Green's function, the potential at distance $r$ from a central point charge $Q$ (i.e., the fundamental solution) is
$$\varphi(r) = \frac{Q}{4\pi\varepsilon r},$$
which is Coulomb's law of electrostatics. (For historical reasons, and unlike gravity's model above, the factor $4\pi$ appears here and not in Gauss's law.)
The above discussion assumes that the magnetic field is not varying in time. The same Poisson equation arises even if it does vary in time, as long as the Coulomb gauge is used. In this more general class of cases, computing $\varphi$ is no longer sufficient to calculate E, since E also depends on the magnetic vector potential A, which must be independently computed. See Maxwell's equation in potential formulation for more on $\varphi$ and A in Maxwell's equations and how an appropriate Poisson's equation is obtained in this case.
Potential of a Gaussian charge density
If there is a static spherically symmetric Gaussian charge density
$$\rho_f(r) = \frac{Q}{\sigma^3\sqrt{2\pi}^{\,3}}\, e^{-r^2/(2\sigma^2)},$$
where $Q$ is the total charge, then the solution $\varphi(r)$ of Poisson's equation
$$\nabla^2 \varphi = -\frac{\rho_f}{\varepsilon}$$
is given by
$$\varphi(r) = \frac{Q}{4\pi\varepsilon r}\,\operatorname{erf}\!\left(\frac{r}{\sqrt{2}\,\sigma}\right),$$
where $\operatorname{erf}$ is the error function. This solution can be checked explicitly by evaluating $\nabla^2 \varphi$.
Note that for $r$ much greater than $\sigma$, $\operatorname{erf}(r/\sqrt{2}\,\sigma)$ approaches unity, and the potential $\varphi(r)$ approaches the point-charge potential,
$$\varphi(r) \approx \frac{Q}{4\pi\varepsilon r},$$
as one would expect. Furthermore, the error function approaches 1 extremely quickly as its argument increases; in practice, for $r > 3\sigma$ the relative error is smaller than one part in a thousand.
Surface reconstruction
Surface reconstruction is an inverse problem. The goal is to digitally reconstruct a smooth surface based on a large number of points pi (a point cloud) where each point also carries an estimate of the local surface normal ni. Poisson's equation can be utilized to solve this problem with a technique called Poisson surface reconstruction.
The goal of this technique is to reconstruct an implicit function f whose value is zero at the points pi and whose gradient at the points pi equals the normal vectors ni. The set of (pi, ni) is thus modeled as a continuous vector field V. The implicit function f is found by integrating the vector field V. Since not every vector field is the gradient of a function, the problem may or may not have a solution: the necessary and sufficient condition for a smooth vector field V to be the gradient of a function f is that the curl of V must be identically zero. In case this condition is difficult to impose, it is still possible to perform a least-squares fit to minimize the difference between V and the gradient of f.
In order to effectively apply Poisson's equation to the problem of surface reconstruction, it is necessary to find a good discretization of the vector field V. The basic approach is to bound the data with a finite-difference grid. For a function valued at the nodes of such a grid, its gradient can be represented as valued on staggered grids, i.e. on grids whose nodes lie in between the nodes of the original grid. It is convenient to define three staggered grids, each shifted in one and only one direction corresponding to the components of the normal data. On each staggered grid we perform trilinear interpolation on the set of points. The interpolation weights are then used to distribute the magnitude of the associated component of ni onto the nodes of the particular staggered grid cell containing pi. Kazhdan and coauthors give a more accurate method of discretization using an adaptive finite-difference grid, i.e. the cells of the grid are smaller (the grid is more finely divided) where there are more data points. They suggest implementing this technique with an adaptive octree.
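A minimal two-dimensional sketch of the linear step at the heart of this method (a deliberate simplification of my own; real implementations use sparse solvers on adaptive octrees, as Kazhdan and coauthors describe): given a sampled vector field V, solve $\nabla^2 f = \nabla \cdot V$, which is the least-squares condition for $\nabla f \approx V$. The test field below, the gradient of a hypothetical $f_0 = x^2 + y^2$, is chosen so the correct answer is known in advance.

```python
import numpy as np

# Toy Poisson reconstruction: given V = (Vx, Vy) on an n x n grid, find f
# minimizing ||grad f - V||^2 by solving laplacian(f) = div(V).
n, h = 32, 1.0 / 31
y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
Vx, Vy = 2 * x, 2 * y                 # gradient of the known f0 = x^2 + y^2

# Right-hand side: discrete divergence of V.
div = np.gradient(Vx, h, axis=1) + np.gradient(Vy, h, axis=0)

# Dense 5-point Laplacian; boundary rows pin f to f0 to fix the free constant.
N = n * n
L = np.zeros((N, N))
b = div.ravel().copy()
f0 = (x ** 2 + y ** 2).ravel()
for i in range(n):
    for j in range(n):
        k = i * n + j
        if i in (0, n - 1) or j in (0, n - 1):
            L[k, k] = 1.0
            b[k] = f0[k]
        else:
            L[k, k] = -4.0 / h ** 2
            for dk in (k - 1, k + 1, k - n, k + n):
                L[k, dk] = 1.0 / h ** 2

f = np.linalg.solve(L, b).reshape(n, n)
print(np.abs(f - x ** 2 - y ** 2).max())   # tiny: f recovers the potential f0
```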
Fluid dynamics
For the incompressible Navier–Stokes equations, given by
$$\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^2\mathbf{v}, \qquad \nabla\cdot\mathbf{v} = 0,$$
the equation for the pressure field $p$ is an example of a nonlinear Poisson equation:
$$\nabla^2 p = -\rho\,\nabla\cdot\big((\mathbf{v}\cdot\nabla)\mathbf{v}\big) = -\rho\,\operatorname{tr}\big((\nabla\mathbf{v})(\nabla\mathbf{v})\big).$$
Notice that the above trace is not sign-definite.
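As a rough illustration (a sketch under assumed data, not a full solver), the nonlinear source term $-\rho\,\operatorname{tr}\big((\nabla\mathbf{v})(\nabla\mathbf{v})\big)$ can be evaluated for a sampled two-dimensional velocity field; a pressure solver would feed this right-hand side into a Poisson solve like the one sketched earlier. The Taylor–Green-style vortex below is an arbitrary divergence-free test field.

```python
import numpy as np

# Evaluate -rho * tr((grad v)(grad v)) for a sampled 2-D velocity v = (u, w).
rho, n = 1.0, 64
h = 2 * np.pi / (n - 1)
y, x = np.meshgrid(np.linspace(0, 2 * np.pi, n), np.linspace(0, 2 * np.pi, n), indexing="ij")
u = np.cos(x) * np.sin(y)             # divergence-free test field
w = -np.sin(x) * np.cos(y)

ux, uy = np.gradient(u, h, axis=1), np.gradient(u, h, axis=0)
wx, wy = np.gradient(w, h, axis=1), np.gradient(w, h, axis=0)

# tr((grad v)^2) = ux^2 + 2*uy*wx + wy^2; not sign-definite, as the text notes.
rhs = -rho * (ux ** 2 + 2 * uy * wx + wy ** 2)
print(rhs.min(), rhs.max())           # both signs occur
```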
| Mathematics | Multivariable and vector calculus | null |
193748 | https://en.wikipedia.org/wiki/Integration%20by%20substitution | Integration by substitution | In calculus, integration by substitution, also known as u-substitution, reverse chain rule or change of variables, is a method for evaluating integrals and antiderivatives. It is the counterpart to the chain rule for differentiation, and can loosely be thought of as using the chain rule "backwards."
Substitution for a single variable
Introduction (indefinite integrals)
Before stating the result rigorously, consider a simple case using indefinite integrals.
Compute
$$\int (2x^3+1)^7 x^2\,dx.$$
Set $u = 2x^3+1.$ This means $\frac{du}{dx} = 6x^2,$ or as a differential form, $du = 6x^2\,dx.$ Now:
$$\int (2x^3+1)^7 x^2\,dx = \frac{1}{6}\int u^7\,du = \frac{1}{6}\cdot\frac{u^8}{8} + C = \frac{(2x^3+1)^8}{48} + C,$$
where $C$ is an arbitrary constant of integration.
This procedure is frequently used, but not all integrals are of a form that permits its use. In any event, the result should be verified by differentiating and comparing to the original integrand.
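A short sympy sketch (assuming the example integral reconstructed above) carries out the substitution and then verifies the result by differentiating, as the text recommends:

```python
import sympy as sp

# Verify the substitution result above by differentiating.
x, u = sp.symbols("x u")
integrand = x**2 * (2 * x**3 + 1)**7

# With u = 2x^3 + 1 we have du = 6x^2 dx, so the integral becomes (1/6) u^7 du.
F = sp.integrate(u**7 / 6, u).subs(u, 2 * x**3 + 1)   # (2x^3 + 1)^8 / 48

assert sp.simplify(sp.diff(F, x) - integrand) == 0    # matches the integrand
print(F)
```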
For definite integrals, the limits of integration must also be adjusted, but the procedure is mostly the same.
Statement for definite integrals
Let $\varphi : [a,b] \to I$ be a differentiable function with a continuous derivative, where $I \subseteq \mathbb{R}$ is an interval. Suppose that $f : I \to \mathbb{R}$ is a continuous function. Then:
$$\int_a^b f(\varphi(x))\,\varphi'(x)\,dx = \int_{\varphi(a)}^{\varphi(b)} f(u)\,du.$$
In Leibniz notation, the substitution $u = \varphi(x)$ yields:
$$\frac{du}{dx} = \varphi'(x).$$
Working heuristically with infinitesimals yields the equation
$$du = \varphi'(x)\,dx,$$
which suggests the substitution formula above. (This equation may be put on a rigorous foundation by interpreting it as a statement about differential forms.) One may view the method of integration by substitution as a partial justification of Leibniz's notation for integrals and derivatives.
The formula is used to transform one integral into another integral that is easier to compute. Thus, the formula can be read from left to right or from right to left in order to simplify a given integral. When used in the former manner, it is sometimes known as u-substitution or w-substitution in which a new variable is defined to be a function of the original variable found inside the composite function multiplied by the derivative of the inner function. The latter manner is commonly used in trigonometric substitution, replacing the original variable with a trigonometric function of a new variable and the original differential with the differential of the trigonometric function.
Proof
Integration by substitution can be derived from the fundamental theorem of calculus as follows. Let $f$ and $\varphi$ be two functions satisfying the above hypothesis that $f$ is continuous on $I$ and $\varphi'$ is integrable on the closed interval $[a,b]$. Then the function $f(\varphi(x))\,\varphi'(x)$ is also integrable on $[a,b]$. Hence the integrals
$$\int_a^b f(\varphi(x))\,\varphi'(x)\,dx$$
and
$$\int_{\varphi(a)}^{\varphi(b)} f(u)\,du$$
in fact exist, and it remains to show that they are equal.
Since $f$ is continuous, it has an antiderivative $F$. The composite function $F \circ \varphi$ is then defined. Since $\varphi$ is differentiable, combining the chain rule and the definition of an antiderivative gives:
$$(F \circ \varphi)'(x) = F'(\varphi(x))\,\varphi'(x) = f(\varphi(x))\,\varphi'(x).$$
Applying the fundamental theorem of calculus twice gives:
$$\int_a^b f(\varphi(x))\,\varphi'(x)\,dx = (F\circ\varphi)(b) - (F\circ\varphi)(a) = F(\varphi(b)) - F(\varphi(a)) = \int_{\varphi(a)}^{\varphi(b)} f(u)\,du,$$
which is the substitution rule.
Examples: Antiderivatives (indefinite integrals)
Substitution can be used to determine antiderivatives. One chooses a relation between $x$ and $u$, determines the corresponding relation between $dx$ and $du$ by differentiating, and performs the substitutions. An antiderivative for the substituted function can hopefully be determined; the original substitution between $x$ and $u$ is then undone.
Example 1
Consider the integral:
$$\int x \cos(x^2+1)\,dx.$$
Make the substitution $u = x^2+1$ to obtain $du = 2x\,dx,$ meaning $x\,dx = \tfrac{1}{2}\,du.$ Therefore:
$$\int x\cos(x^2+1)\,dx = \frac{1}{2}\int \cos u\,du = \frac{1}{2}\sin u + C = \frac{1}{2}\sin(x^2+1) + C,$$
where $C$ is an arbitrary constant of integration.
Example 2: Antiderivatives of tangent and cotangent
The tangent function can be integrated using substitution by expressing it in terms of the sine and cosine: $\tan x = \frac{\sin x}{\cos x}$.
Using the substitution $u = \cos x$ gives $du = -\sin x\,dx$ and
$$\int \tan x\,dx = \int \frac{\sin x}{\cos x}\,dx = -\int \frac{du}{u} = -\ln|u| + C = -\ln|\cos x| + C = \ln|\sec x| + C.$$
The cotangent function can be integrated similarly by expressing it as $\cot x = \frac{\cos x}{\sin x}$ and using the substitution $u = \sin x,$ $du = \cos x\,dx$:
$$\int \cot x\,dx = \int \frac{\cos x}{\sin x}\,dx = \int \frac{du}{u} = \ln|u| + C = \ln|\sin x| + C.$$
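Both antiderivatives can be cross-checked symbolically; a two-line sympy sketch (sympy reports the equivalent forms $-\log(\cos x)$ and $\log(\sin x)$, with the constant of integration omitted):

```python
import sympy as sp

# Cross-check the tangent and cotangent antiderivatives.
x = sp.symbols("x")
print(sp.integrate(sp.tan(x), x))   # -log(cos(x))
print(sp.integrate(sp.cot(x), x))   # log(sin(x))
```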
Examples: Definite integrals
When evaluating definite integrals by substitution, one may calculate the antiderivative fully first, then apply the boundary conditions; in that case, there is no need to transform the boundary terms. Alternatively, one may transform the limits of integration along with the substitution and evaluate the resulting definite integral directly, which becomes especially handy when multiple substitutions are used.
Example 1
Consider the integral:
$$\int_0^2 x \cos(x^2+1)\,dx.$$
Make the substitution $u = x^2+1$ to obtain $du = 2x\,dx,$ meaning $x\,dx = \tfrac{1}{2}\,du.$ Therefore:
$$\int_0^2 x\cos(x^2+1)\,dx = \frac{1}{2}\int_1^5 \cos u\,du = \frac{1}{2}\left(\sin 5 - \sin 1\right).$$
Since the lower limit $x = 0$ was replaced with $u = 0^2+1 = 1$ and the upper limit $x = 2$ with $u = 2^2+1 = 5,$ a transformation back into terms of $x$ was unnecessary.
Example 2: Trigonometric substitution
For the integral
$$\int_0^1 \sqrt{1-x^2}\,dx,$$
a variation of the above procedure is needed. The substitution $x = \sin\theta$ implying $dx = \cos\theta\,d\theta$ is useful because $\sqrt{1-\sin^2\theta} = \cos\theta.$ We thus have:
$$\int_0^1 \sqrt{1-x^2}\,dx = \int_0^{\pi/2} \sqrt{1-\sin^2\theta}\,\cos\theta\,d\theta = \int_0^{\pi/2} \cos^2\theta\,d\theta.$$
The resulting integral can be computed using integration by parts or a double angle formula, followed by one more substitution. One can also note that the function being integrated is the upper right quarter of a circle with a radius of one, and hence integrating the upper right quarter from zero to one is the geometric equivalent to the area of one quarter of the unit circle, or $\pi/4.$
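A numerical cross-check of the geometric argument (illustrative only): scipy's quadrature of $\sqrt{1-x^2}$ over $[0,1]$ reproduces $\pi/4$.

```python
from math import pi, sqrt
from scipy.integrate import quad

# Area under sqrt(1 - x^2) on [0, 1]: one quarter of the unit circle.
area, _ = quad(lambda x: sqrt(1.0 - x * x), 0.0, 1.0)
print(area, pi / 4)   # both ~0.7853981...
```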
Substitution for multiple variables
One may also use substitution when integrating functions of several variables.
Here, the substitution function $(u_1,\ldots,u_n) = \varphi(x_1,\ldots,x_n)$ needs to be injective and continuously differentiable, and the differentials transform as:
$$du_1 \cdots du_n = \left|\det(D\varphi)(x_1,\ldots,x_n)\right|\,dx_1 \cdots dx_n,$$
where $\det(D\varphi)$ denotes the determinant of the Jacobian matrix of partial derivatives of $\varphi$ at the point $(x_1,\ldots,x_n)$. This formula expresses the fact that the absolute value of the determinant of a matrix equals the volume of the parallelotope spanned by its columns or rows.
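The classic instance of this formula is the change to polar coordinates, where the Jacobian determinant works out to $r$, so $dx\,dy = r\,dr\,d\theta$; a small sympy sketch confirms it:

```python
import sympy as sp

# Jacobian determinant of (r, theta) -> (x, y) = (r cos theta, r sin theta).
r, th = sp.symbols("r theta", positive=True)
J = sp.Matrix([r * sp.cos(th), r * sp.sin(th)]).jacobian([r, th])
print(sp.simplify(J.det()))   # r
```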
More precisely, the change of variables formula is stated in the next theorem:
The conditions on the theorem can be weakened in various ways. First, the requirement that $\varphi$ be continuously differentiable can be replaced by the weaker assumption that $\varphi$ be merely differentiable and have a continuous inverse. This is guaranteed to hold if $\varphi$ is continuously differentiable by the inverse function theorem. Alternatively, the requirement that $\det(D\varphi) \neq 0$ can be eliminated by applying Sard's theorem.
For Lebesgue measurable functions, the theorem can be stated in the following form:
Another very general version in measure theory is the following:
In geometric measure theory, integration by substitution is used with Lipschitz functions. A bi-Lipschitz function is a Lipschitz function which is injective and whose inverse function is also Lipschitz. By Rademacher's theorem, a bi-Lipschitz mapping is differentiable almost everywhere. In particular, the Jacobian determinant of a bi-Lipschitz mapping is well-defined almost everywhere. The following result then holds:
The above theorem was first proposed by Euler when he developed the notion of double integrals in 1769. Although generalized to triple integrals by Lagrange in 1773, and used by Legendre, Laplace, and Gauss, and first generalized to variables by Mikhail Ostrogradsky in 1836, it resisted a fully rigorous formal proof for a surprisingly long time, and was first satisfactorily resolved 125 years later, by Élie Cartan in a series of papers beginning in the mid-1890s.
Application in probability
Substitution can be used to answer the following important question in probability: given a random variable $X$ with probability density $p_X$ and another random variable $Y$ such that $Y = \phi(X)$ for injective (one-to-one) $\phi,$ what is the probability density $p_Y$?
It is easiest to answer this question by first answering a slightly different question: what is the probability that $Y$ takes a value in some particular subset $S$? Denote this probability $P(Y \in S).$ Of course, if $Y$ has probability density $p_Y$, then the answer is:
$$P(Y \in S) = \int_S p_Y(y)\,dy,$$
but this is not really useful because we do not know $p_Y$; it is what we are trying to find. We can make progress by considering the problem in the variable $X$. $Y$ takes a value in $S$ whenever $X$ takes a value in $\phi^{-1}(S),$ so:
$$P(Y \in S) = \int_{\phi^{-1}(S)} p_X(x)\,dx.$$
Changing from variable $x$ to $y = \phi(x)$ gives:
$$P(Y \in S) = \int_{\phi^{-1}(S)} p_X(x)\,dx = \int_S p_X\big(\phi^{-1}(y)\big)\left|\frac{d\phi^{-1}}{dy}\right| dy.$$
Combining this with our first equation gives:
$$\int_S p_Y(y)\,dy = \int_S p_X\big(\phi^{-1}(y)\big)\left|\frac{d\phi^{-1}}{dy}\right| dy,$$
so:
$$p_Y(y) = p_X\big(\phi^{-1}(y)\big)\left|\frac{d\phi^{-1}}{dy}\right|.$$
In the case where $X$ and $Y$ depend on several uncorrelated variables (i.e., $p_X = p_X(x_1,\ldots,x_n)$ and $y = \phi(x)$), $p_Y$ can be found by substitution in several variables discussed above. The result is:
$$p_Y(y) = p_X\big(\phi^{-1}(y)\big)\left|\det D\phi^{-1}(y)\right|.$$
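A Monte Carlo sanity check of this formula (my own illustration; the distribution and evaluation point are arbitrary choices): taking $X$ standard normal and $Y = e^X,$ the formula predicts $p_Y(y) = p_X(\ln y)/y,$ which matches the empirical density.

```python
import numpy as np

# Check: X ~ N(0,1), Y = exp(X)  =>  p_Y(y) = p_X(ln y) * |d/dy ln y| = p_X(ln y)/y.
rng = np.random.default_rng(0)
ys = np.exp(rng.standard_normal(1_000_000))

y0, eps = 1.5, 0.01
empirical = np.mean(np.abs(ys - y0) < eps) / (2 * eps)   # density near y0
p_X = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
formula = p_X(np.log(y0)) / y0
print(empirical, formula)   # both ~0.245
```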
| Mathematics | Integral calculus | null |
193752 | https://en.wikipedia.org/wiki/Parasympathetic%20nervous%20system | Parasympathetic nervous system | The parasympathetic nervous system (PSNS) is one of the three divisions of the autonomic nervous system, the others being the sympathetic nervous system and the enteric nervous system.
The autonomic nervous system is responsible for regulating the body's unconscious actions. The parasympathetic system is responsible for stimulation of "rest-and-digest" or "feed-and-breed" activities that occur when the body is at rest, especially after eating, including sexual arousal, salivation, lacrimation (tears), urination, digestion, and defecation. Its action is described as being complementary to that of the sympathetic nervous system, which is responsible for stimulating activities associated with the fight-or-flight response.
Nerve fibres of the parasympathetic nervous system arise from the central nervous system. Specific nerves include several cranial nerves, specifically the oculomotor nerve, facial nerve, glossopharyngeal nerve, and vagus nerve. Three spinal nerves in the sacrum (S2–4), commonly referred to as the pelvic splanchnic nerves, also act as parasympathetic nerves.
Owing to its location, the parasympathetic system is commonly referred to as having "craniosacral outflow", which stands in contrast to the sympathetic nervous system, which is said to have "thoracolumbar outflow".
Structure
The parasympathetic nerves are autonomic or visceral branches of the peripheral nervous system (PNS). Parasympathetic nerve supply arises through three primary areas:
Certain cranial nerves in the cranium, namely the preganglionic parasympathetic nerves (CN III, CN VII, CN IX and CN X) usually arise from specific nuclei in the central nervous system (CNS) and synapse at one of four parasympathetic ganglia: ciliary, pterygopalatine, otic, or submandibular. From these four ganglia the parasympathetic nerves complete their journey to target tissues via trigeminal branches (ophthalmic nerve, maxillary nerve, mandibular nerve).
The vagus nerve (CN X) does not participate in these cranial ganglia as most of its parasympathetic fibers are destined for a broad array of ganglia on or near thoracic viscera (esophagus, trachea, heart, lungs) and abdominal viscera (stomach, pancreas, liver, kidneys, small intestine, and about half of the large intestine). The vagus innervation ends at the junction between the midgut and hindgut, just before the splenic flexure of the transverse colon.
The pelvic splanchnic efferent preganglionic nerve cell bodies reside in the lateral gray horn of the spinal cord at the T12–L1 vertebral levels (the spinal cord terminates at the L1–L2 vertebrae with the conus medullaris), and their axons exit the vertebral column as S2–S4 spinal nerves through the sacral foramina. Their axons continue away from the CNS to synapse at an autonomic ganglion. The parasympathetic ganglion where these preganglionic neurons synapse will be close to the organ of innervation. This differs from the sympathetic nervous system, where synapses between pre- and post-ganglionic efferent nerves in general occur at ganglia that are farther away from the target organ.
As in the sympathetic nervous system, efferent parasympathetic nerve signals are carried from the central nervous system to their targets by a system of two neurons. The first neuron in this pathway is referred to as the preganglionic or presynaptic neuron. Its cell body sits in the central nervous system and its axon usually extends to synapse with the dendrites of a postganglionic neuron somewhere else in the body. The axons of presynaptic parasympathetic neurons are usually long, extending from the CNS into a ganglion that is either very close to or embedded in their target organ. As a result, the postsynaptic parasympathetic nerve fibers are very short.
Cranial nerves
The oculomotor nerve is responsible for a number of parasympathetic functions related to the eye. The oculomotor PNS fibers originate in the Edinger-Westphal nucleus in the central nervous system and travel through the superior orbital fissure to synapse in the ciliary ganglion located just behind the orbit (eye). From the ciliary ganglion the postganglionic parasympathetic fibers leave via short ciliary nerve fibers, a continuation of the nasociliary nerve (a branch of ophthalmic division of the trigeminal nerve (CN V1)). The short ciliary nerves innervate the orbit to control the ciliary muscle (responsible for accommodation) and the iris sphincter muscle, which is responsible for miosis or constriction of the pupil (in response to light or accommodation). There are two motors that are part of the oculomotor nerve known as the somatic motor and visceral motor. The somatic motor is responsible for moving the eye in precise motions and for keeping the eye fixated on an object. The visceral motor helps constrict the pupil.
The parasympathetic aspect of the facial nerve controls secretion of the sublingual and submandibular salivary glands, the lacrimal gland, and the glands associated with the nasal cavity. The preganglionic fibers originate within the CNS in the superior salivatory nucleus and leave as the intermediate nerve (which some consider a separate cranial nerve altogether) to connect with the facial nerve just distal to (further out from) its exit from the central nervous system. Just after the facial nerve geniculate ganglion (general sensory ganglion) in the temporal bone, the facial nerve gives off two separate parasympathetic nerves. The first is the greater petrosal nerve and the second is the chorda tympani. The greater petrosal nerve travels through the middle ear and eventually combines with the deep petrosal nerve (sympathetic fibers) to form the nerve of the pterygoid canal. The parasympathetic fibers of the nerve of the pterygoid canal synapse at the pterygopalatine ganglion, which is closely associated with the maxillary division of the trigeminal nerve (CN V2). The postganglionic parasympathetic fibers leave the pterygopalatine ganglion in several directions. One division leaves on the zygomatic division of CN V2 and travels on a communicating branch to unite with the lacrimal nerve (branch of the ophthalmic nerve of CN V1) before synapsing at the lacrimal gland. These parasympathetic fibers to the lacrimal gland control tear production.
A separate group of parasympathetic fibers leaving the pterygopalatine ganglion are the descending palatine nerves (CN V2 branch), which include the greater and lesser palatine nerves. The greater palatine parasympathetic fibers synapse on the hard palate and regulate mucous glands located there. The lesser palatine nerve synapses at the soft palate and controls sparse taste receptors and mucous glands. Yet another set of divisions from the pterygopalatine ganglion are the posterior, superior, and inferior lateral nasal nerves; and the nasopalatine nerves (all branches of CN V2, maxillary division of the trigeminal nerve) that bring parasympathetic innervation to glands of the nasal mucosa. The second parasympathetic branch that leaves the facial nerve is the chorda tympani. This nerve carries secretomotor fibers to the submandibular and sublingual glands. The chorda tympani travels through the middle ear and attaches to the lingual nerve (mandibular division of trigeminal, CN V3). After joining the lingual nerve, the preganglionic fibers synapse at the submandibular ganglion and send postganglionic fibers to the sublingual and submandibular salivary glands.
The glossopharyngeal nerve has parasympathetic fibers that innervate the parotid salivary gland. The preganglionic fibers depart CN IX as the tympanic nerve and continue to the middle ear where they make up a tympanic plexus on the cochlear promontory of the mesotympanum. The tympanic plexus of nerves rejoin and form the lesser petrosal nerve and exit through the foramen ovale to synapse at the otic ganglion. From the otic ganglion postganglionic parasympathetic fibers travel with the auriculotemporal nerve (mandibular branch of trigeminal, CN V3) to the parotid salivary gland.
Vagus nerve
The vagus nerve, named after the Latin word vagus (because the nerve controls such a broad range of target tissues – vagus in Latin literally means "wandering"), contains parasympathetic fibers that originate in the dorsal nucleus of the vagus nerve and the nucleus ambiguus in the CNS. The vagus nerve can be readily identified in the neck both on ultrasound and magnetic resonance imaging. It has several branches. The largest branch is the recurrent laryngeal nerve. From the left vagus nerve the recurrent laryngeal nerve hooks around the aorta to travel back up to the larynx and proximal esophagus while, from the right vagus nerve, the recurrent laryngeal nerve hooks around the right subclavian artery to travel back up to the same location as its counterpart. These different paths are a direct result of embryological development of the circulatory system. Each recurrent laryngeal nerve supplies the larynx, the heart, the trachea and the esophagus.
Another set of nerves that come off the vagus nerves approximately at the level of entering the thorax are the cardiac branches of the vagus nerve. These cardiac branches go on to form cardiac and pulmonary plexuses around the heart and lungs. As the main vagus nerves continue into the thorax they become intimately linked with the esophagus and sympathetic nerves from the sympathetic trunks to form the esophageal plexus. This is very efficient, as the major function of the vagus nerve from there on will be control of the gut smooth muscles and glands. As the esophageal plexus enters the abdomen through the esophageal hiatus, anterior and posterior vagus trunks form. The vagus trunks then join with preaortic sympathetic ganglia around the aorta to disperse with the blood vessels and sympathetic nerves throughout the abdomen. The extent of parasympathetic innervation in the abdomen includes the pancreas, kidneys, liver, gall bladder, stomach and gut tube. The vagus contribution of parasympathetic fibers continues down the gut tube until the end of the midgut. The midgut ends two thirds of the way across the transverse colon near the splenic flexure.
Pelvic splanchnic nerves
The pelvic splanchnic nerves, S2–4, work in tandem to innervate the pelvic viscera. Unlike in the cranium, where one parasympathetic is in charge of one particular tissue or region, for the most part the pelvic splanchnics each contribute fibers to pelvic viscera by traveling to one or more plexuses before being dispersed to the target tissue. These plexuses are composed of mixed autonomic nerve fibers (parasympathetic and sympathetic) and include the vesical, prostatic, rectal, uterovaginal, and inferior hypogastric plexuses. The preganglionic neurons in the pathway do not synapse in a ganglion as in the cranium but rather in the walls of the tissues or organs that they innervate. The fiber paths are variable and each individual's autonomic nervous system in the pelvis is unique. The visceral tissues in the pelvis that the parasympathetic nerve pathway controls include those of the urinary bladder, ureters, urinary sphincter, anal sphincter, uterus, prostate, glands, vagina, and penis. Unconsciously, the parasympathetic will cause peristaltic movements of the ureters and intestines, moving urine from the kidneys into the bladder and food down the intestinal tract and, upon necessity, the parasympathetic will assist in excreting urine from the bladder or defecation. Stimulation of the parasympathetic will cause the detrusor muscle (urinary bladder wall) to contract and simultaneously relax the internal sphincter muscle between the bladder and the urethra, allowing the bladder to void. Also, parasympathetic stimulation of the internal anal sphincter will relax this muscle to allow defecation. There are other skeletal muscles involved with these processes but the parasympathetic plays a huge role in continence and bowel retention.
A study published in 2016 suggests that all sacral autonomic output may be sympathetic, indicating that the rectum, bladder and reproductive organs may be innervated only by the sympathetic nervous system. This suggestion is based on detailed analysis of 15 phenotypic and ontogenetic factors differentiating sympathetic from parasympathetic neurons in the mouse. Assuming that the reported findings most likely apply to other mammals as well, this perspective suggests a simplified, bipartite architecture of the autonomic nervous system, in which the parasympathetic nervous system receives input from cranial nerves exclusively and the sympathetic nervous system from thoracic to sacral spinal nerves.
Function
Sensation
The afferent fibers of the autonomic nervous system, which transmit sensory information from the internal organs of the body back to the central nervous system, are not divided into parasympathetic and sympathetic fibers as the efferent fibers are. Instead, autonomic sensory information is conducted by general visceral afferent fibers.
General visceral afferent sensations are mostly unconscious visceral motor reflex sensations from hollow organs and glands that are transmitted to the CNS. While the unconscious reflex arcs normally are undetectable, in certain instances they may send pain sensations to the CNS masked as referred pain. If the peritoneal cavity becomes inflamed or if the bowel is suddenly distended, the body will interpret the afferent pain stimulus as somatic in origin. This pain is usually non-localized. The pain is also usually referred to dermatomes that are at the same spinal nerve level as the visceral afferent synapse.
Vascular effects
Heart rate is largely controlled by the heart's internal pacemaker activity. Considering a healthy heart, the main pacemaker is a collection of cells on the border of the atria and vena cava called the sinoatrial node. Heart cells have the ability to generate electrical activity independent of external stimulation. As a result, the cells of the node spontaneously generate electrical activity that is subsequently conducted throughout the heart, resulting in a regular heart rate.
In the absence of any external stimuli, sinoatrial pacing maintains the heart rate in the range of 60–100 beats per minute (bpm). At the same time, the two branches of the autonomic nervous system act in a complementary way, increasing or slowing the heart rate. In this context, the vagus nerve acts on the sinoatrial node, slowing its conduction and thus modulating heart rate in accordance with vagal tone. This modulation is mediated by the neurotransmitter acetylcholine and downstream changes to the ionic and calcium currents of heart cells.
The vagus nerve plays a crucial role in heart rate regulation by modulating the response of sinoatrial node; vagal tone can be quantified by investigating heart rate modulation induced by vagal tone changes. As a general consideration, increased vagal tone (and thus vagal action) is associated with a diminished and more variable heart rate. The main mechanism by which the parasympathetic nervous system acts on vascular and cardiac control is the so-called respiratory sinus arrhythmia (RSA). RSA is described as the physiological and rhythmical fluctuation of heart rate at the respiration frequency, characterized by heart rate increase during inspiration and decrease during expiration.
Sexual activity
Another role that the parasympathetic nervous system plays is in sexual activity. In males, the cavernous nerves from the prostatic plexus stimulate smooth muscles in the fibrous trabeculae of the coiled helicine arteries of the penis to relax and allow blood to fill the two corpora cavernosa and the corpus spongiosum of the penis, making it rigid to prepare for sexual activity. Upon emission of ejaculate, the sympathetics participate and cause peristalsis of the ductus deferens and closure of the internal urethral sphincter to prevent semen from entering the bladder. At the same time, parasympathetics cause peristalsis of the urethral muscle, and the pudendal nerve causes contraction of the bulbospongiosus (a skeletal muscle, which is not innervated by the PN), to forcibly emit the semen. During remission the penis becomes flaccid again. In the female, there is erectile tissue analogous to that of the male, yet less substantial, that plays a large role in sexual stimulation. The PN causes the release of secretions in the female that decrease friction. Also in the female, the parasympathetics innervate the fallopian tubes, which aids peristaltic contractions and the movement of the oocyte to the uterus for implantation. The secretions from the female genital tract aid in sperm migration. The PN (and SN to a lesser extent) play a significant role in reproduction.
Receptors
The parasympathetic nervous system uses chiefly acetylcholine (ACh) as its neurotransmitter, although peptides (such as cholecystokinin) can be used. The ACh acts on two types of receptors, the muscarinic and nicotinic cholinergic receptors. Most transmissions occur in two stages: when stimulated, the preganglionic neuron releases ACh at the ganglion, which acts on nicotinic receptors of postganglionic neurons. The postganglionic neuron then releases ACh to stimulate the muscarinic receptors of the target organ. Nicotinic receptors transmit outgoing signals from the presynaptic to the postsynaptic cells within the sympathetic and parasympathetic nervous system, and are the receptors used in the somatic nervous system for signalling muscular contraction in the neuromuscular junction. The muscarinic receptors are mainly present in the parasympathetic nervous system but also appear in the sweat glands of the sympathetic nervous system.
Types of muscarinic receptors
The five main types of muscarinic receptors:
The M1 muscarinic receptors are located in the neural system.
The M2 muscarinic receptors are located in the heart, and act to bring the heart back to normal after the actions of the sympathetic nervous system: slowing down the heart rate, reducing contractile forces of the atrial cardiac muscle, and reducing conduction velocity of the sinoatrial node and atrioventricular node. They have a minimal effect on the contractile forces of the ventricular muscle due to sparse innervation of the ventricles from the parasympathetic nervous system.
The M3 muscarinic receptors are located at many places in the body, such as the endothelial cells of blood vessels, as well as the lungs causing bronchoconstriction. The net effect of innervated M3 receptors on blood vessels is vasodilation, as acetylcholine causes endothelial cells to produce nitric oxide, which diffuses to smooth muscle and results in vasodilation. They are also in the smooth muscles of the gastrointestinal tract, which help in increasing intestinal motility and dilating sphincters. The M3 receptors are also located in many glands that help to stimulate secretion in salivary glands and other glands of the body. They are also located on the detrusor muscle and urothelium of the bladder, causing contraction.
The M4 muscarinic receptors: Postganglionic cholinergic nerves, possible CNS effects
The M5 muscarinic receptors: Possible effects on the CNS
Types of nicotinic receptors
In vertebrates, nicotinic receptors are broadly classified into two subtypes based on their primary sites of expression: muscle-type nicotinic receptors (N1) primarily for somatic motor neurons; and neuronal-type nicotinic receptors (N2) primarily for autonomic nervous system.
Relationship to sympathetic nervous system
Sympathetic and parasympathetic divisions typically function in opposition to each other. The sympathetic division typically functions in actions requiring quick responses. The parasympathetic division functions with actions that do not require immediate reaction. A mnemonic to summarize the functions of the parasympathetic nervous system is SSLUDD (sexual arousal, salivation, lacrimation, urination, digestion and defecation).
Clinical significance
The functions promoted by activity in the parasympathetic nervous system are associated with our day-to-day living. The parasympathetic nervous system promotes digestion and the synthesis of glycogen, and allows for normal function and behavior.
Parasympathetic action helps in digestion and absorption of food by increasing the activity of the intestinal musculature, increasing gastric secretion, and relaxing the pyloric sphincter. It is called the “rest and digest” division of the ANS.
The parasympathetic nervous system decreases respiration and heart rate and increases digestion. Stimulation of the parasympathetic nervous system results in:
Constriction of pupils
Decreased heart rate and blood pressure
Constriction of bronchial muscles
Stimulation of digestion and gastric emptying
Increased production of saliva and mucus
Increase in urine secretion
History
The term ‘parasympathetic nervous system’ was introduced by John Newport Langley in 1921; he was the first to put forward the concept of the PSNS as the second division of the autonomic nervous system.
| Biology and health sciences | Nervous system | Biology |
193753 | https://en.wikipedia.org/wiki/Sympathetic%20nervous%20system | Sympathetic nervous system | The sympathetic nervous system (SNS or SANS, sympathetic autonomic nervous system, to differentiate it from the somatic nervous system) is one of the three divisions of the autonomic nervous system, the others being the parasympathetic nervous system and the enteric nervous system. The enteric nervous system is sometimes considered part of the autonomic nervous system, and sometimes considered an independent system.
The autonomic nervous system functions to regulate the body's unconscious actions. The sympathetic nervous system's primary process is to stimulate the body's fight or flight response. It is, however, constantly active at a basic level to maintain homeostasis. The sympathetic nervous system is described as being antagonistic to the parasympathetic nervous system. The latter stimulates the body to "feed and breed" and to (then) "rest-and-digest".
The SNS has a major role in various physiological processes such as blood glucose levels, body temperature, cardiac output, and immune system function. Sympathetic neurons form at the embryonic stage of life, and their continued development during aging shows the system's significance for health; its dysfunction has been shown to be linked to various health disorders.
Structure
There are two kinds of neurons involved in the transmission of any signal through the sympathetic system: pre-ganglionic and post-ganglionic. The shorter preganglionic neurons originate in the thoracolumbar division of the spinal cord, specifically from T1 to L2–L3, and travel to a ganglion, often one of the paravertebral ganglia, where they synapse with a postganglionic neuron. From there, the long postganglionic neurons extend across most of the body.
At the synapses within the ganglia, preganglionic neurons release acetylcholine, a neurotransmitter that activates nicotinic acetylcholine receptors on postganglionic neurons. In response to this stimulus, the postganglionic neurons release norepinephrine, which activates adrenergic receptors that are present on the peripheral target tissues. The activation of target tissue receptors causes the effects associated with the sympathetic system. However, there are three important exceptions:
Postganglionic neurons of sweat glands release acetylcholine for the activation of muscarinic receptors, except for areas of thick skin, the palms and the plantar surfaces of the feet, where norepinephrine is released and acts on adrenergic receptors. This leads to the activation of sudomotor function, which is assessed by electrochemical skin conductance.
Chromaffin cells of the adrenal medulla are analogous to post-ganglionic neurons; the adrenal medulla develops in tandem with the sympathetic nervous system and acts as a modified sympathetic ganglion. Within this endocrine gland, pre-ganglionic neurons synapse with chromaffin cells, triggering the release of two transmitters: a small proportion of norepinephrine, and more substantially, epinephrine. The synthesis and release of epinephrine as opposed to norepinephrine is another distinguishing feature of chromaffin cells compared to postganglionic sympathetic neurons.
Postganglionic sympathetic nerves terminating in the kidney release dopamine, which acts on dopamine D1 receptors of blood vessels to control how much blood the kidney filters. Dopamine is the immediate metabolic precursor to norepinephrine, but is nonetheless a distinct signaling molecule.
Organization
Sympathetic nerves arise from near the middle of the spinal cord in the intermediolateral nucleus of the lateral grey column, beginning at the first thoracic vertebra of the vertebral column and are thought to extend to the second or third lumbar vertebra. Because its cells begin in the thoracolumbar division – the thoracic and lumbar regions of the spinal cord – the sympathetic nervous system is said to have a thoracolumbar outflow. Axons of these nerves leave the spinal cord through the anterior root. They pass near the spinal (sensory) ganglion, where they enter the anterior rami of the spinal nerves. However, unlike somatic innervation, they quickly separate out through white rami communicantes (so called from the shiny white sheaths of myelin around each axon) that connect to either the paravertebral (which lie near the vertebral column) or prevertebral (which lie near the aortic bifurcation) ganglia extending alongside the spinal column.
To reach target organs and glands, the axons must travel long distances in the body, and, to accomplish this, many axons relay their message to a second cell through synaptic transmission. The ends of the axons link across a space, the synapse, to the dendrites of the second cell. The first cell (the presynaptic cell) sends a neurotransmitter across the synaptic cleft, where it activates the second cell (the postsynaptic cell). The message is then carried to the final destination.
Presynaptic nerves' axons terminate in either the paravertebral ganglia or prevertebral ganglia. There are four different paths an axon can take before reaching its terminal. In all cases, the axon enters the paravertebral ganglion at the level of its originating spinal nerve. After this, it can then either synapse in this ganglion, ascend to a more superior or descend to a more inferior paravertebral ganglion and synapse there, or it can descend to a prevertebral ganglion and synapse there with the postsynaptic cell.
The postsynaptic cell then goes on to innervate the targeted end effector (i.e. gland, smooth muscle, etc.). Because paravertebral and prevertebral ganglia are close to the spinal cord, presynaptic neurons are much shorter than their postsynaptic counterparts, which must extend throughout the body to reach their destinations.
A notable exception to the routes mentioned above is the sympathetic innervation of the suprarenal (adrenal) medulla. In this case, presynaptic neurons pass through paravertebral ganglia, on through prevertebral ganglia and then synapse directly with suprarenal tissue. This tissue consists of cells that have pseudo-neuron like qualities in that when activated by the presynaptic neuron, they will release their neurotransmitter (epinephrine) directly into the bloodstream.
In the sympathetic nervous system and other peripheral nervous system components, these synapses are made at sites called ganglia. The cell that sends its fiber is called a preganglionic cell, while the cell whose fiber leaves the ganglion is called a postganglionic cell. As mentioned previously, the preganglionic cells of the sympathetic nervous system are located between the first thoracic segment and the third lumbar segments of the spinal cord. Postganglionic cells have their cell bodies in the ganglia and send their axons to target organs or glands.
The ganglia include not just the sympathetic trunks but also the cervical ganglia (superior, middle and inferior), which send sympathetic nerve fibers to the head and thorax organs, and the celiac and mesenteric ganglia, which send sympathetic fibers to the gut.
Information transmission
Messages travel through the sympathetic nervous system in a bi-directional flow. Efferent messages can simultaneously trigger changes in different body parts. For example, the sympathetic nervous system can accelerate heart rate; widen bronchial passages; decrease motility (movement) of the large intestine; constrict blood vessels; increase peristalsis in the oesophagus; cause pupillary dilation, piloerection (goose bumps) and perspiration (sweating); and raise blood pressure. One exception is with certain blood vessels, such as those in the cerebral and coronary arteries, which dilate (rather than constrict) with increased sympathetic tone. This is because of a proportional increase in the presence of β2 adrenergic receptors rather than α1 receptors. β2 receptors promote vessel dilation instead of constriction like α1 receptors. An alternative explanation is that the primary (and direct) effect of sympathetic stimulation on coronary arteries is vasoconstriction followed by a secondary vasodilation caused by the release of vasodilatory metabolites due to the sympathetically increased cardiac inotropy and heart rate. This secondary vasodilation caused by the primary vasoconstriction is termed functional sympatholysis, the overall effect of which on coronary arteries is dilation.
The target synapse of the postganglionic neuron is mediated by adrenergic receptors and is activated by either norepinephrine (noradrenaline) or epinephrine (adrenaline).
Function
The sympathetic nervous system is responsible for up- and down-regulating many homeostatic mechanisms in living organisms. Fibers from the SNS innervate tissues in almost every organ system, providing at least some regulation of functions as diverse as pupil diameter, gut motility, and urinary system output and function. It is perhaps best known for mediating the neuronal and hormonal stress response commonly known as the fight-or-flight response. This response is also known as sympatho-adrenal response of the body, as the preganglionic sympathetic fibers that end in the adrenal medulla (but also all other sympathetic fibers) secrete acetylcholine, which activates the great secretion of adrenaline (epinephrine) and to a lesser extent noradrenaline (norepinephrine) from it. Therefore, this response that acts primarily on the cardiovascular system is mediated directly via impulses transmitted through the sympathetic nervous system and indirectly via catecholamines secreted from the adrenal medulla.
The sympathetic nervous system is responsible for priming the body for action, particularly in situations threatening survival. One example of this priming is in the moments before waking, in which sympathetic outflow spontaneously increases in preparation for action.
Sympathetic nervous system stimulation causes vasoconstriction of most blood vessels, including many of those in the skin, the digestive tract, and the kidneys. This occurs due to the activation of alpha-1 adrenergic receptors by norepinephrine released by post-ganglionic sympathetic neurons. These receptors exist throughout the vasculature of the body but are inhibited and counterbalanced by beta-2 adrenergic receptors (stimulated by epinephrine release from the adrenal glands) in the skeletal muscles, the heart, the lungs, and the brain during a sympathoadrenal response. The net effect of this is a shunting of blood away from the organs not necessary to the immediate survival of the organism and an increase in blood flow to those organs involved in intense physical activity.
Sensation
The afferent fibers of the autonomic nervous system, which transmit sensory information from the internal organs of the body back to the central nervous system (or CNS), are not divided into parasympathetic and sympathetic fibers as the efferent fibers are. Instead, autonomic sensory information is conducted by general visceral afferent fibers.
General visceral afferent sensations are mostly unconscious visceral motor reflex sensations from hollow organs and glands that are transmitted to the CNS. While the unconscious reflex arcs normally are undetectable, in certain instances they may send pain sensations to the CNS masked as referred pain. If the peritoneal cavity becomes inflamed or if the bowel is suddenly distended, the body will interpret the afferent pain stimulus as somatic in origin. This pain is usually non-localized. The pain is also usually referred to dermatomes that are at the same spinal nerve level as the visceral afferent synapse.
Relationship with the parasympathetic nervous system
Together with the other component of the autonomic nervous system, the parasympathetic nervous system, the sympathetic nervous system aids in the control of most of the body's internal organs. Reaction to stress—as in the flight-or-fight response—is thought to be elicited by the sympathetic nervous system and to counteract the parasympathetic system, which works to promote maintenance of the body at rest. The comprehensive functions of both the parasympathetic and sympathetic nervous systems are not so straightforward, but this is a useful rule of thumb.
Origins
It was originally believed that the sympathetic nervous system arose with jawed vertebrates. However, the sea lamprey (Petromyzon marinus), a jawless vertebrate, has been found to contain the key building blocks and developmental controls of a sympathetic nervous system. Nature described this research as a "landmark study" whose findings "point to a remarkable diversification of sympathetic neuron populations among vertebrate classes and species".
Disorders
The dysfunction of the sympathetic nervous system is linked to many health disorders, such as heart failure, gastrointestinal problems and immune dysfunction, as well as metabolic disorders like hypertension and diabetes, highlighting the importance of the sympathetic nervous system for health.
The sympathetic stimulation of metabolic tissues is required for the maintenance of metabolic regulation and feedback loops. The dysregulation of this system leads to an increased risk of neuropathy within metabolic tissues and therefore can worsen or precipitate metabolic disorders. An example of this includes the retraction of sympathetic neurons due to leptin resistance, which is linked to obesity. Another example, although more research is required, is the observed link that diabetes results in the impairment of synaptic transmission due to the inhibition of acetylcholine receptors as a result of high blood glucose levels. The loss of sympathetic neurons is also associated with the reduction of insulin secretion and impaired glucose tolerance, further exacerbating the disorder.
The sympathetic nervous system holds a major role in long-term regulation of hypertension, whereby the central nervous system stimulates sympathetic nerve activity in specific target organs or tissues via neurohumoral signals. In terms of hypertension, the overactivation of the sympathetic system results in vasoconstriction and increased heart rate resulting in increased blood pressure. In turn, increasing the potential of the development of cardiovascular disease.
In heart failure, the sympathetic nervous system increases its activity, leading to increased force of muscular contractions that in turn increases the stroke volume, as well as peripheral vasoconstriction to maintain blood pressure. However, these effects accelerate disease progression, eventually increasing mortality in heart failure.
Sympathicotonia is a stimulated condition of the sympathetic nervous system, marked by vascular spasm, elevated blood pressure, and goose bumps.
Heightened sympathetic nervous system activity is also linked to various mental health disorders such as anxiety disorders and post-traumatic stress disorder (PTSD). It is suggested that overactivation of the SNS results in increased severity of PTSD symptoms. As with the hypertension and cardiovascular disease discussed above, PTSD is also linked with an increased risk of developing those diseases, further supporting the link between these disorders and the SNS.
The sympathetic nervous system is sensitive to stress, and studies suggest that chronic dysfunction of the sympathetic system contributes to migraines, owing to the vascular changes associated with tension headaches. Individuals with migraine attacks exhibit symptoms associated with sympathetic dysfunction, including reduced plasma norepinephrine levels and heightened sensitivity of the peripheral adrenergic receptors.
Insomnia is a sleeping disorder that makes falling or staying asleep difficult; the resulting disruption causes sleep deprivation and various symptoms, with severity depending on whether the insomnia is acute or chronic. The most favoured hypothesis for the cause of insomnia is the hyperarousal hypothesis, a collective over-activation of various systems in the body that includes hyperactivity of the SNS, whereby during sleep-cycle disruption sympathetic baroreflex function and neural cardiovascular responses become impaired.
However, more research is still required, as methods for measuring SNS activity are not especially reliable: owing to the sensitivity of the SNS, many factors easily affect its activity, such as stress, environment, time of day, and disease. These factors can affect results significantly, and more accurate measurements require highly invasive methods such as microneurography. The difficulty of measuring SNS activity applies not only to insomnia but also to the various disorders previously discussed. Over time, however, advances in technology and research techniques should allow the disruption of the SNS and its impact on the human body to be explored further.
History and etymology
The name of this system can be traced to the concept of sympathy, in the sense of "connection between parts", first used medically by Galen. In the 18th century, Jacob B. Winslow applied the term specifically to nerves.
The concept that an independent part of the nervous system coordinates body functions had its origin in the works of Galen (129–199), who proposed that nerves distributed spirits throughout the body. From animal dissections he concluded that there were extensive interconnections from the spinal cord to the viscera and from one organ to another. He proposed that this system fostered a concerted action or 'sympathy' of the organs. Little changed until the Renaissance when Bartolomeo Eustacheo (1545) depicted the sympathetic nerves, the vagus and adrenal glands in anatomical drawings. Jacobus Winslow (1669–1760), a Danish-born professor working in Paris, popularised the term 'sympathetic nervous system' in 1732 to describe the chain of ganglia and nerves which were connected to the thoracic and lumbar spinal cord.
| Biology and health sciences | Nervous system | Biology |
193957 | https://en.wikipedia.org/wiki/Spaghetti%20squash | Spaghetti squash | Spaghetti squash or vegetable spaghetti is a group of cultivars of Cucurbita pepo subsp. pepo. They are available in a variety of shapes, sizes, and colours, including ivory, yellow and orange, with orange having the highest amount of carotene. Its center contains many large seeds. When raw, the flesh is solid and similar to other raw squash. When cooked, the flesh of the fruit falls away from the rind in ribbons or strands that look like, and can be used as an alternative to, spaghetti.
Preparation
Spaghetti squash can be cooked in a variety of ways, including baking, boiling, steaming, air frying, or microwaving. Once cooked, the flesh of this fruit can be prepared so that its “strands” look like and are as long as traditional spaghetti noodles. It can be served with or without sauce as a substitute for pasta, and its seeds can be roasted, similar to pumpkin seeds.
Nutrition
Spaghetti squash contains many nutrients, including folic acid, potassium, and beta carotene. It is low in calories, averaging 42 calories per 1-cup (155 grams) serving.
Cultivation
Spaghetti squash is relatively easy to grow, thriving in gardens or pots.
The plants are monoecious, with male and female flowers on the same plant. Male flowers have long, thin stems that extend upwards from the vine. Female flowers are shorter, with a small round growth underneath the petals. This round growth turns into the squash if the flower is successfully pollinated.
| Biology and health sciences | Botanical fruits used as culinary vegetables | Plants |
193975 | https://en.wikipedia.org/wiki/Edible%20mushroom | Edible mushroom | Edible mushrooms are the fleshy fruit bodies of numerous species of macrofungi (fungi that bear fruiting structures large enough to be seen with the naked eye). Edibility may be defined by criteria including the absence of poisonous effects on humans and desirable taste and aroma. Mushrooms that have a particularly desirable taste are described as "choice". Edible mushrooms are consumed for their nutritional and culinary value. Mushrooms, especially dried shiitake, are sources of umami flavor.
To ensure safety, wild mushrooms must be correctly identified before their edibility can be assumed. Deadly poisonous mushrooms that are frequently confused with edible mushrooms include several species of the genus Amanita, particularly A. phalloides, the death cap. Some mushrooms that are edible for most people can cause allergic reactions in others; old or improperly stored specimens can go rancid and cause food poisoning. Additionally, mushrooms can absorb chemicals within polluted locations, accumulating pollutants and heavy metals including arsenic and iron—sometimes in lethal concentrations.
Several varieties of fungi contain psychedelic compounds—the magic mushrooms—while variously resembling non-psychoactive species. The most commonly consumed for recreational use are Amanita muscaria (the fly agaric) and Psilocybe cubensis, with the former containing alkaloids such as muscimol and the latter predominantly psilocybin.
Edible mushrooms include many fungal species that are either harvested wild or cultivated. Easily cultivated and common wild mushrooms are often available in markets; those that are more difficult to obtain (such as the prized truffle, matsutake, and morel) may be collected on a smaller scale and are sometimes available at farmers' markets or other local grocers. Despite long-term use in folk medicine, there is no scientific evidence that consuming "medicinal mushrooms" cures or lowers the risk of human diseases.
Description
Mushrooms can appear either below ground (hypogeous) or above ground (epigeous) and can be picked by hand. Edibility may be defined by criteria including the absence of poisonous effects on humans and desirable taste and aroma. Edible mushrooms are consumed for their nutritional and culinary value. Mushrooms, especially dried shiitake, are sources of umami flavor.
List of edible mushrooms
Commercially cultivated
Agaricus bisporus dominates the edible mushroom market in North America and Europe, in several forms. It is an edible basidiomycete mushroom native to grasslands in Europe and North America. As it ages, this mushroom turns from small, white and smooth to large and light brown. In its youngest form, it is known as the 'common mushroom', 'button mushroom', 'cultivated mushroom', and 'champignon mushroom'. Its semi-mature form is known variously as 'cremini', 'baby-bella', 'Swiss brown' mushroom, 'Roman brown' mushroom, 'Italian brown' mushroom, or 'chestnut' mushroom. Its fully mature form is known as 'portobello'.
Pleurotus species, the oyster mushrooms, are commonly grown at industrial scale.
Morchella species, (morel family) morels belong to the ascomycete grouping of fungi. Morels are difficult to grow commercially, but there are ongoing efforts to make cultivating morels at scale a reality. Since 2014, some farmers in China have been cultivating morels outdoors in the spring; however, yields are variable. Morels must be cooked before eating.
Lentinula edodes, the Shiitake mushroom
Auricularia heimuer, wood ear mushroom
Volvariella volvacea, the paddy straw mushroom or straw mushroom
Volvariella bombycina, the silky rosegill mushroom
Flammulina filiformis, the enoki mushroom, golden needle mushroom, seafood mushroom, lily mushroom, or winter mushroom
Flammulina velutipes
Tremella fuciformis, the snow fungus, snow ear, silver ear fungus and white jelly mushroom
Hypsizygus tessellatus, aka Hypsizygus marmoreus, the beech mushroom, also known in its white and brown varieties as Bunapi-shimeji and Buna-shimeji, respectively
Stropharia rugosoannulata, the wine cap mushroom, burgundy mushroom, garden giant mushroom or king stropharia
Cyclocybe aegerita, the pioppino, velvet pioppini, poplar or black poplar mushroom
Hericium erinaceus, the lion's mane, monkey head, bearded tooth, satyr's beard, bearded hedgehog, or pom pom mushroom.
Phallus indusiatus, the bamboo mushrooms, bamboo pith, long net stinkhorn, crinoline stinkhorn or veiled lady mushroom.
Commercially harvested wild fungi
Boletus edulis or edible Boletus, native to Europe, known in Italian as porcino (plural porcini) (pig mushroom), in German as Steinpilz (stone mushroom), in Russian as bely grib (white mushroom), in French as the cèpe, and in the UK as the penny bun. It is also known as the king bolete, and is renowned for its delicious flavor. It is sought after worldwide, and can be found in a variety of culinary dishes.
Boletus griseus, the gray bolete
Boletus variipes
Boletus pinophilus, the pinewood king bolete
Calbovista subsculpta, commonly known as the sculptured giant puffball is a common puffball of the Rocky Mountains and Pacific Coast ranges of western North America. The puffball is more or less round with a diameter of up to , white becoming brownish in age, and covered with shallow pyramid-shaped plates or scales. It fruits singly or in groups along roads and in open woods at high elevations, from summer to autumn. It is considered a choice edible species while its interior flesh (the gleba) is still firm and white. As the puffball matures, its insides become dark brown and powdery from mature spores.
Calvatia gigantea, the giant puffball. Giant puffballs are considered a choice edible species and are commonly found in meadows, fields, and deciduous forests usually in late summer and autumn. It is found in temperate areas throughout the world. They can reach diameters up to and weights of . The inside of mature giant puffballs is greenish brown, whereas the interior of immature puffballs is white. The large white mushrooms are edible when young.
Cantharellus cibarius (the chanterelle). The yellow chanterelle is one of the best and most easily recognizable mushrooms and can be found in Asia, Europe, North America and Australia. There are poisonous mushrooms that resemble it, though these can be confidently distinguished if one is familiar with the chanterelle's identifying features.
Craterellus tubaeformis, the tube chanterelle, yellow foot chanterelle or yellow-leg
Clitocybe nuda, blewit (or blewitt)
Cortinarius caperatus, the Gypsy mushroom
Craterellus cornucopioides, (trumpet of death) or horn of plenty
Grifola frondosa, known in Japan as maitake (also "hen of the woods" or "sheep's head"), a large, hearty mushroom commonly found on or near stumps and bases of oak trees, and believed to have medicinal properties.
Hericium erinaceus, a tooth fungus; also called "lion's mane mushroom"
Hydnum repandum, sweet tooth fungus, hedgehog mushroom or hedgehog fungus, urchin of the woods
Lactarius deliciosus, saffron milk cap, consumed around the world and prized in Russia
Morchella genus (morels). Morels belong to the ascomycete grouping of fungi. They are usually found in open scrub, woodland or open ground in late spring. When collecting this fungus, care must be taken to distinguish it from the poisonous false morels, including Gyromitra esculenta. Morels must be cooked before eating.
Morchella conica var. deliciosa
Morchella esculenta var. rotunda
Morchella crassipes
Morchella elata
Pleurotus species are sometimes commercially harvested despite the ease of cultivation.
Pleurotus ostreatus
Termitomyces species are symbiotes of termites and the mushrooms grow out of termite mounds. This genus includes the largest edible mushroom, Termitomyces titanicus, with a cap that averages 1 m in diameter, though most species are much smaller. Research is underway to determine how to cultivate these mushrooms.
Tricholoma matsutake, the matsutake, a mushroom highly prized in Japanese cuisine.
Tuber genus (truffles). Truffles have long eluded the modern techniques of domestication known as trufficulture. Although the field of trufficulture has greatly expanded since its inception in 1808, several species still remain uncultivated. Domesticated truffles include:
Tuber aestivum, black summer truffle
Tuber borchii, bianchetto truffle
Tuber brumale, muscat truffle
Tuber indicum, Chinese black truffle
Tuber macrosporum, smooth black truffle
Tuber mesentericum, the Bagnoli truffle
Other edible wild species
Agaricus arvensis (horse mushroom)
Agaricus silvaticus (pinewood mushroom)
Agaricus campestris (field mushroom)
Aleuria aurantia (orange peel fungus)
Amanita caesarea (Caesar's mushroom)
Armillaria mellea (honey mushroom)
Boletus badius (bay bolete)
Calocybe gambosa (St George's mushroom)
Calvatia utriformis (syn. Lycoperdon caelatum)
Calvatia cyathiformis (purple-spored puffball)
Chroogomphus genus (pine-spikes or spike-caps)
Clavariaceae genus (coral fungus family)
Clavulinaceae genus (coral fungus family)
Coprinus comatus, the shaggy mane, shaggy inkcap or lawyer's wig. Must be cooked as soon as possible after harvesting or the caps will first turn dark and unappetizing, then deliquesce and turn to ink. Not found in markets for this reason.
Corn smut (Ustilago maydis), an economically important pathogen of cereals. Known in Mexico as huitlacoche, where it is considered a delicacy. Corn smut can be used as a filling in quesadillas, tacos and soups.
Cyttaria espinosae
Fistulina hepatica (beefsteak polypore or ox tongue)
Flammulina velutipes (velvet shank or winter fungus)
Gomphidius glutinosus (slimy spike-cap)
Hygrophorus chrysodon (gold flecked woodwax)
Kalaharituber pfeilii
Lactarius deterrimus (orange milkcap)
Lactarius salmonicolor
Lactarius subdulcis (mild milkcap)
Lactarius volemus (fishy milkcap), also known as weeping milkcap
Laetiporus sulphureus (sulphur shelf), also known by names such as "chicken mushroom", "chicken fungus"; a distinct bracket fungus popular among mushroom hunters
Leccinum aurantiacum (red-capped scaber stalk)
Leccinum scabrum (birch bolete)
Leccinum versipelle (orange birch bolete / Boletus testaceoscaber)
Macrolepiota procera (parasol mushroom); globally, it is widespread in temperate regions
Marasmius oreades (fairy ring champignon)
Polyporus mylittae (blackfellow's bread)
Polyporus squamosus (dryad's saddle and pheasant's back mushroom)
Pseudohydnum gelatinosum (toothed jelly fungus)
Ramariaceae genus (coral fungus family)
Rhizopogon luteolus
Russula; some members of this genus, such as R. laeta, are edible
Sparassis crispa, also known as "cauliflower mushroom"
Suillus bovinus (bovine bolete)
Suillus granulatus (weeping bolete), also known as "granulated bolete"
Suillus grevillei (tamarack jack)
Suillus luteus (slippery jack)
Suillus tomentosus (woolly-capped suillus)
Suillus brevipes (short-stemmed slippery Jack)
Suillus pictus (painted suillus)
Suillus decipiens
Tricholoma portentosum
Conditionally edible species
Amanita fulva (tawny grisette) must be cooked before eating.
Amanita muscaria is edible if parboiled to leach out toxins; fresh mushrooms cause vomiting, twitching, drowsiness, and hallucinations due to the presence of muscimol. Although present in A. muscaria, ibotenic acid is not in high enough concentration to produce any physical or psychological effects unless massive amounts are ingested.
Amanita rubescens (the blusher) must be cooked before eating.
Coprinopsis atramentaria (Coprinus atramentarius, common inkcap) is edible without special preparation, but consumption with alcohol is toxic due to the presence of coprine. Some other Coprinus spp. share this property.
Gyromitra esculenta (false morel, turban, brain mushroom) is eaten by some after it has been parboiled, but many mycologists do not recommend it. Raw Gyromitra are toxic due to the presence of gyromitrin, and it is not known whether all of the toxin can be removed by parboiling.
Lactarius spp. Apart from Lactarius deliciosus (saffron milkcap), which is universally considered edible, other Lactarius spp. that are considered toxic elsewhere in the world are eaten in some Eastern European countries and Russia after pickling or parboiling.
Lactarius indigo
Lactarius paradoxus
Lactarius corrugis
Lactarius volemus
Lactarius hygrophoroides
Lepista saeva (field blewit, blue leg, or Tricholoma personatum) must be cooked before eating.
Morchella esculenta (morel) must be cooked before eating.
Verpa bohemica is considered choice by some, and can even be found for sale as a "morel", but cases of toxicity have been reported. Verpas appear to contain monomethylhydrazine, and the same precautions apply to them as to Gyromitra species.
Tricholoma terreum (grey knight) might cause rhabdomyolysis
Cultivation
Mushroom cultivation has a long history, with over twenty species commercially cultivated in at least 60 countries. Only a fraction of the many fungi consumed by humans is currently cultivated and sold commercially. Commercial cultivation is also important ecologically: there have been concerns about the depletion of larger wild fungi such as chanterelles in Europe, which have grown popular yet remain difficult to cultivate.
Some mushrooms, particularly mycorrhizal species, have not yet been successfully cultivated.
In 2019, world production of commercial mushrooms and recorded truffle collection reported to the Food and Agriculture Organization was 11.9 million tonnes, led by China with 75% of the total.
Safety concerns
Some wild species are toxic, or at least indigestible, when raw. Failure to identify poisonous mushrooms and confusing them with edible ones has resulted in death. Although digital identification applications exist, they are unreliable, and some inexperienced hunters relying upon them have been seriously poisoned.
Deadly poisonous mushrooms that are frequently confused with edible mushrooms and responsible for many fatal poisonings include several species of the genus Amanita, particularly Amanita phalloides, the death cap. Some mushrooms that are edible for most people can cause allergic reactions in some individuals with no prior knowledge of an allergy; old or improperly stored specimens can go rancid quickly and cause food poisoning. Great care should therefore be taken when eating any fungus for the first time, and only small quantities should be consumed in case of individual allergies or reactions. Even normally edible species of mushrooms may be dangerous, as mushrooms growing in polluted locations can act as chemical absorbers, accumulating pollutants and heavy metals, including arsenic and iron, sometimes in lethal concentrations. Conversely, some cooking preparations may reduce the toxicity of slightly poisonous mushrooms enough for them to be consumed as survival food; many prized fungi, such as those of the Morchella genus, cause gastric upset when eaten uncooked.
Additionally, several varieties of fungi are known and documented to contain psychedelic drugs (the so-called magic mushrooms) yet resemble perfectly edible, non-psychoactive species. While not necessarily lethal, an accidentally induced psychedelic experience can, for the uninitiated, run the gamut from benign to terrifying, even depressing or psychotic. The species most commonly consumed for recreational psychoactive use are Amanita muscaria (the fly agaric) and Psilocybe cubensis, the former containing alkaloids such as muscimol and the latter predominantly psilocybin. Both have the potential to induce feelings of awe, wonder at nature, striking visual hallucinations and inner peace (even in mild doses), but excessive or accidental consumption can create feelings of insanity, helplessness and fear, usually persisting for a few hours.
Nutrition
Higher mushroom consumption has been associated with lower risk of breast cancer. To date, mushroom consumption has not been shown to conclusively affect risk factors for cardiovascular diseases.
A commonly eaten mushroom is the white mushroom (Agaricus bisporus). In a reference serving, Agaricus mushrooms supply little food energy and are 92% water, 3% carbohydrates, 3% protein, and 0.3% fat. They contain high levels of riboflavin, niacin, and pantothenic acid, with moderate content of phosphorus. Otherwise, raw white mushrooms generally have low amounts of essential nutrients. Although boiling lowers mushroom water content by only 1%, the contents per 100 grams of several nutrients, especially dietary minerals, increase appreciably.
Vitamin D content is absent or low unless mushrooms are exposed to sunlight or purposely treated with artificial ultraviolet light, even after harvesting and processing into dry powder.
Vitamin D
When exposed to UV light before or after harvest, mushrooms convert their large concentrations of ergosterol into vitamin D2. This is similar to the reaction in humans, where vitamin D3 is synthesized after exposure to sunlight.
Testing showed an hour of UV light exposure before harvesting made a serving of mushrooms contain twice the FDA's daily recommendation of vitamin D. With 5 minutes of artificial UV light exposure after harvesting, a serving of mushrooms contained four times as much. Analysis also demonstrated that natural sunlight produced vitamin D2.
The form of vitamin D found in UV-irradiated mushrooms is ergocalciferol, or vitamin D2. This is not the same as cholecalciferol, called vitamin D3, which is produced by UV-irradiation of human or animal skin, fur, and feathers. Although vitamin D2 has vitamin-D activity in humans, and is widely used in food fortification and nutritional supplements, vitamin D3 is more commonly used in dairy and cereal products.
Uses
Edible mushrooms include many fungal species that are either harvested wild or cultivated. Easily cultivated and common wild mushrooms are often available in markets; those that are more difficult to obtain (such as the prized truffle, matsutake, and morel) may be collected on a smaller scale by private gatherers, and are sometimes available at farmers' markets or other local grocers. Mushrooms can be purchased fresh when in season, and many species are also sold dried.
Before assuming that any wild mushroom is edible, it should be correctly identified. Accurate determination of and proper identification of a species is the only safe way to ensure edibility, and the only safeguard against possible poisoning. Some edible species cannot be identified without the use of advanced techniques such as chemistry or microscopy.
History
Mycophagy, the act of consuming mushrooms, dates back to ancient times. Edible mushroom species have been found in association with 13,000-year-old archaeological sites in Chile. Ötzi, the mummy of a man who lived between 3400 and 3100 BCE in Europe, was found with two types of mushroom. The Chinese value mushrooms for their supposed medicinal properties as well as for food. Ancient Romans and Greeks, particularly the upper classes, used mushrooms for culinary purposes. Food tasters were employed by Roman emperors to ensure that mushrooms were safe to eat. The Forme of Cury, a 14th-century compilation of medieval English recipes, features a recipe for mushrooms and leeks cooked in broth.
Culinary
Cooking
Mushrooms may be cooked before consumption to improve texture and lower trace levels of toxic hydrazines. Frying, roasting, baking, and microwaving are all used to prepare mushrooms. Cooking lowers the amount of water present in the food. Mushrooms do not go mushy with long-term cooking, because the chitin that gives a mushroom most of its structure does not break down until temperatures well above those reached in normal cooking.
Storage
Mushrooms will usually last a few days, longer if refrigerated. Mushrooms can be frozen, but are best cooked first. They can also be dried or pickled.
In traditional medicine
Medicinal mushrooms are mushrooms or extracts from mushrooms that are thought to be treatments for diseases, yet remain unconfirmed in mainstream science and medicine, and so are not approved as drugs or medical treatments. Such use of mushrooms therefore falls into the domain of traditional medicine for which there is no direct high-quality clinical evidence of efficacy.
Preliminary research on mushroom extracts has been conducted to determine if anti-disease properties exist, such as for polysaccharide-K or lentinan. Some extracts have widespread use in Japan, Korea and China, as potential adjuvants for radiation treatments and chemotherapy.
| Biology and health sciences | Edible fungi | Plants |
194031 | https://en.wikipedia.org/wiki/Nuclear%20fuel%20cycle | Nuclear fuel cycle | The nuclear fuel cycle, also called nuclear fuel chain, is the progression of nuclear fuel through a series of differing stages. It consists of steps in the front end, which are the preparation of the fuel, steps in the service period in which the fuel is used during reactor operation, and steps in the back end, which are necessary to safely manage, contain, and either reprocess or dispose of spent nuclear fuel. If spent fuel is not reprocessed, the fuel cycle is referred to as an open fuel cycle (or a once-through fuel cycle); if the spent fuel is reprocessed, it is referred to as a closed fuel cycle.
Basic concepts
Nuclear power relies on fissionable material that can sustain a chain reaction with neutrons. Examples of such materials include uranium and plutonium. Most nuclear reactors use a moderator to lower the kinetic energy of the neutrons and increase the probability that fission will occur. This allows reactors to use material with far lower concentration of fissile isotopes than are needed for nuclear weapons. Graphite and heavy water are the most effective moderators, because they slow the neutrons through collisions without absorbing them. Reactors using heavy water or graphite as the moderator can operate using natural uranium.
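To make the moderator comparison concrete, the following sketch (an illustration using the standard logarithmic energy decrement from reactor physics, not material from this article) estimates how many elastic collisions are needed to slow a roughly 2 MeV fission neutron to thermal energy in hydrogen, deuterium and carbon:

import math

def xi(A):
    # Mean logarithmic energy loss per elastic collision for a nucleus
    # of mass number A (standard reactor-physics result).
    if A == 1:
        return 1.0  # limiting value of the formula below
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1 + alpha * math.log(alpha) / (1 - alpha)

E0, Eth = 2.0e6, 0.025  # fission and thermal neutron energies, eV
for name, A in [("hydrogen (light water)", 1),
                ("deuterium (heavy water)", 2),
                ("carbon (graphite)", 12)]:
    n = math.log(E0 / Eth) / xi(A)
    print(f"{name}: about {n:.0f} collisions to thermalise")
# prints roughly 18, 25 and 115 collisions respectively

Hydrogen slows neutrons in the fewest collisions but also absorbs some of them, which is why heavy water and graphite, despite needing more collisions, permit operation on natural uranium.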
A light water reactor (LWR) uses water in the form that occurs in nature, and requires fuel enriched to higher concentrations of fissile isotopes. Typically, LWRs use uranium enriched to 3–5% U-235, the only fissile isotope that is found in significant quantity in nature. One alternative to this low-enriched uranium (LEU) fuel is mixed oxide (MOX) fuel produced by blending plutonium with natural or depleted uranium, and these fuels provide an avenue to utilize surplus weapons-grade plutonium. Another type of MOX fuel involves mixing LEU with thorium, which generates the fissile isotope U-233. Both plutonium and U-233 are produced from the absorption of neutrons by irradiating fertile materials in a reactor, in particular the common uranium isotope U-238 and thorium, respectively, and can be separated from spent uranium and thorium fuels in reprocessing plants.
Some reactors do not use moderators to slow the neutrons. Like nuclear weapons, which also use unmoderated or "fast" neutrons, these fast-neutron reactors require much higher concentrations of fissile isotopes in order to sustain a chain reaction. They are also capable of breeding fissile isotopes from fertile materials; a breeder reactor is one that generates more fissile material in this way than it consumes.
During the nuclear reaction inside a reactor, the fissile isotopes in nuclear fuel are consumed, producing more and more fission products, most of which are considered radioactive waste. The buildup of fission products and consumption of fissile isotopes eventually stop the nuclear reaction, causing the fuel to become a spent nuclear fuel. When 3% enriched LEU fuel is used, the spent fuel typically consists of roughly 1% U-235, 95% U-238, 1% plutonium and 3% fission products. Spent fuel and other high-level radioactive waste is extremely hazardous, although nuclear reactors produce orders of magnitude smaller volumes of waste compared to other power plants because of the high energy density of nuclear fuel. Safe management of these byproducts of nuclear power, including their storage and disposal, is a difficult problem for any country using nuclear power.
Front end
Exploration
A deposit of uranium, such as uraninite, discovered by geophysical techniques, is evaluated and sampled to determine the amounts of uranium materials that are extractable at specified costs from the deposit. Uranium reserves are the amounts of ore that are estimated to be recoverable at stated costs.
Naturally occurring uranium consists primarily of two isotopes U-238 and U-235, with 99.28% of the metal being U-238 while 0.71% is U-235, and the remaining 0.01% is mostly U-234. The number in such names refers to the isotope's atomic mass number, which is the number of protons plus the number of neutrons in the atomic nucleus.
The atomic nucleus of U-235 will nearly always fission when struck by a free neutron, and the isotope is therefore said to be a "fissile" isotope. The nucleus of a U-238 atom on the other hand, rather than undergoing fission when struck by a free neutron, will nearly always absorb the neutron and yield an atom of the isotope U-239. This isotope then undergoes natural radioactive decay to yield Pu-239, which, like U-235, is a fissile isotope. The atoms of U-238 are said to be fertile, because, through neutron irradiation in the core, some eventually yield atoms of fissile Pu-239.
Mining
Uranium ore can be extracted through conventional mining in open pit and underground methods similar to those used for mining other metals. In-situ leach mining methods also are used to mine uranium in the United States. In this technology, uranium is leached from the in-place ore through an array of regularly spaced wells and is then recovered from the leach solution at a surface plant. Uranium ores in the United States typically range from about 0.05 to 0.3% uranium oxide (U3O8). Some uranium deposits developed in other countries are of higher grade and are also larger than deposits mined in the United States. Uranium is also present in very low-grade amounts (50 to 200 parts per million) in some domestic phosphate-bearing deposits of marine origin. Because very large quantities of phosphate-bearing rock are mined for the production of wet-process phosphoric acid used in high analysis fertilizers and other phosphate chemicals, at some phosphate processing plants the uranium, although present in very low concentrations, can be economically recovered from the process stream.
Milling
Uranium ore as mined does not contain enough uranium per unit mass to be used directly. Milling extracts the usable uranium from the rest of the material; the rejected remainder is known as tailings. To begin the milling process, the ore is ground into fine dust, either wet or dry. The ground material is then treated chemically by leaching in acid; hydrochloric and nitric acids are used, but sulfuric acid is the most common. If the host rock is particularly resistant to acid, an alkaline leach is used instead. Leaching dissolves the uranium into solution, which is then filtered to separate the remaining solids from the uranium-bearing liquid; the undesirable solids are disposed of as tailings. The uranium is then extracted from the remaining solution in one of two ways: solvent extraction or ion exchange. In solvent extraction, a solvent is mixed into the solution; the dissolved uranium binds to the solvent and separates out, while the other dissolved materials remain in the mixture. In ion exchange, a solid exchange material is mixed into the solution, the uranium binds to it, and the material is then separated and washed. These steps are repeated to recover as much usable uranium as possible, and the product is dried. The milling process commonly yields a dry powder of natural uranium, "yellowcake", which is sold on the uranium market as U3O8. The material is not always yellow.
Uranium conversion
Usually milled uranium oxide, U3O8 (triuranium octoxide) is then processed into either of two substances depending on the intended use.
For use in most reactors, U3O8 is usually converted to uranium hexafluoride (UF6), the input stock for most commercial uranium enrichment facilities. A solid at room temperature, uranium hexafluoride becomes gaseous at 57 °C (134 °F). At this stage of the cycle, the uranium hexafluoride conversion product still has the natural isotopic mix (99.28% of U-238 plus 0.71% of U-235).
There are two routes for converting uranium oxide into its usable forms, uranium dioxide and uranium hexafluoride: the wet option and the dry option. In the wet option, the yellowcake is dissolved in nitric acid and the uranium is extracted using tributyl phosphate. The resulting mixture is then dried and washed, yielding uranium trioxide. The uranium trioxide is reduced with hydrogen, giving uranium dioxide and water. The uranium dioxide is then reacted with four parts of hydrogen fluoride, yielding more water and uranium tetrafluoride. Finally, the uranium hexafluoride end product is created by reacting the uranium tetrafluoride with fluorine.
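As a rough illustration of the mass balance along this chain (a sketch using standard molar masses; the batch size is an assumption, not a figure from the article), the mass of each intermediate can be followed for a fixed uranium content:

# Molar masses in g/mol; reactions: UO3 + H2 -> UO2 + H2O,
# UO2 + 4 HF -> UF4 + 2 H2O, UF4 + F2 -> UF6.
M_U, M_O, M_F = 238.03, 16.00, 19.00
M_UO3 = M_U + 3 * M_O
M_UO2 = M_U + 2 * M_O
M_UF4 = M_U + 4 * M_F
M_UF6 = M_U + 6 * M_F

u_kg = 1000.0              # kg of uranium metal content (assumed batch size)
kmol_U = u_kg / M_U        # uranium is conserved through every step
for label, m in [("UO3", M_UO3), ("UO2", M_UO2), ("UF4", M_UF4), ("UF6", M_UF6)]:
    print(f"{label}: {kmol_U * m:7.1f} kg")
# one tonne of uranium content ends up in roughly 1.48 t of UF6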
For use in reactors such as CANDU which do not require enriched fuel, the U3O8 may instead be converted to uranium dioxide (UO2) which can be included in ceramic fuel elements.
In the current nuclear industry, the volume of material converted directly to UO2 is typically quite small compared to that converted to UF6.
Enrichment
The natural concentration (0.71%) of the fissile isotope U-235 is less than that required to sustain a nuclear chain reaction in light water reactor cores. Accordingly, UF6 produced from natural uranium sources must be enriched to a higher concentration of the fissionable isotope before being used as nuclear fuel in such reactors. The level of enrichment for a particular nuclear fuel order is specified by the customer according to the application they will use it for: light-water reactor fuel normally is enriched to 3.5% U-235, but uranium enriched to lower concentrations is also required. Enrichment is accomplished using any of several methods of isotope separation. Gaseous diffusion and gas centrifuge are the commonly used uranium enrichment methods, but new enrichment technologies are currently being developed.
The bulk (96%) of the byproduct from enrichment is depleted uranium (DU), which can be used for armor, kinetic energy penetrators, radiation shielding and ballast. As of 2008, there were vast quantities of depleted uranium in storage; the United States Department of Energy alone held 470,000 tonnes. About 95% of depleted uranium is stored as uranium hexafluoride (UF6).
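How much natural feed and separative work a given order requires follows from the standard value-function calculation sketched below; the assays (3.5% product, 0.711% feed, 0.25% tails) are illustrative assumptions, not figures from this article:

import math

def V(x):
    # Standard separative potential (value function).
    return (2 * x - 1) * math.log(x / (1 - x))

def enrich(product_kg, xp=0.035, xf=0.00711, xw=0.0025):
    # Feed and separative work to make product at assay xp from feed xf,
    # leaving tails (depleted uranium) at xw. Assays are mass fractions.
    feed = product_kg * (xp - xw) / (xf - xw)
    tails = feed - product_kg
    swu = product_kg * V(xp) + tails * V(xw) - feed * V(xf)
    return feed, tails, swu

feed, tails, swu = enrich(1.0)
print(f"feed {feed:.2f} kg, tails {tails:.2f} kg, work {swu:.2f} SWU per kg of product")
# about 7 kg of natural uranium and 4.8 SWU per kg of 3.5%-enriched product,
# with roughly 86% of the feed mass leaving as depleted uranium at these assays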
Fabrication
For use as nuclear fuel, enriched uranium hexafluoride is converted into uranium dioxide (UO2) powder that is then processed into pellet form. The pellets are then fired in a high temperature sintering furnace to create hard, ceramic pellets of enriched uranium. The cylindrical pellets then undergo a grinding process to achieve a uniform pellet size. The pellets are stacked, according to each nuclear reactor core's design specifications, into tubes of corrosion-resistant metal alloy. The tubes are sealed to contain the fuel pellets: these tubes are called fuel rods. The finished fuel rods are grouped in special fuel assemblies that are then used to build up the nuclear fuel core of a power reactor.
The alloy used for the tubes depends on the design of the reactor. Stainless steel was used in the past, but most reactors now use a zirconium alloy. For the most common types of reactors, boiling water reactors (BWR) and pressurized water reactors (PWR), the tubes are assembled into bundles with the tubes spaced precise distances apart. These bundles are then given a unique identification number, which enables them to be tracked from manufacture through use and into disposal.
Service period
Transport of radioactive materials
Transport is an integral part of the nuclear fuel cycle. There are nuclear power reactors in operation in several countries but uranium mining is viable in only a few areas. Also, in the course of over forty years of operation by the nuclear industry, a number of specialized facilities have been developed in various locations around the world to provide fuel cycle services and there is a need to transport nuclear materials to and from these facilities. Most transports of nuclear fuel material occur between different stages of the cycle, but occasionally a material may be transported between similar facilities. With some exceptions, nuclear fuel cycle materials are transported in solid form, the exception being uranium hexafluoride (UF6) which is considered a gas. Most of the material used in nuclear fuel is transported several times during the cycle. Transports are frequently international, and are often over large distances. Nuclear materials are generally transported by specialized transport companies.
Since nuclear materials are radioactive, it is important to ensure that radiation exposure of those involved in the transport of such materials and of the general public along transport routes is limited. Packaging for nuclear materials includes, where appropriate, shielding to reduce potential radiation exposures. In the case of some materials, such as fresh uranium fuel assemblies, the radiation levels are negligible and no shielding is required. Other materials, such as spent fuel and high-level waste, are highly radioactive and require special handling. To limit the risk in transporting highly radioactive materials, containers known as spent nuclear fuel shipping casks are used which are designed to maintain integrity under normal transportation conditions and during hypothetical accident conditions.
While transport casks vary in design, material, size, and purpose, they are typically long tubes made of stainless steel or concrete with the ends sealed shut to prevent leaks. Frequently the casks' shell will have at least one layer of radiation-resistant material, such as lead. The inside of the tube will also vary depending on what is being transported. For example casks that are transporting depleted or unused fuel rods will have sleeves that keep the rods separate, while casks that transport uranium hexafluoride typically have no internal organization. Depending on the purpose and radioactivity of the materials some casks have systems of ventilation, thermal protection, impact protection, and other features more specific to the route and cargo.
In-core fuel management
A nuclear reactor core is composed of a few hundred "assemblies", arranged in a regular array of cells, each cell being formed by a fuel or control rod surrounded, in most designs, by a moderator and coolant, which is water in most reactors.
Because of the fission process that consumes the fuel, the old fuel rods must be replaced periodically with fresh ones (this is called a replacement cycle). During a given replacement cycle only some of the assemblies (typically one-third) are replaced, since fuel depletion occurs at different rates at different places within the reactor core. Furthermore, for efficiency reasons, it is not good policy to put the new assemblies exactly at the locations of the removed ones. Even bundles of the same age will have different burn-up levels due to their previous positions in the core. Thus the available bundles must be arranged in such a way that the yield is maximized while safety limitations and operational constraints are satisfied. Consequently, reactor operators are faced with the so-called optimal fuel reloading problem: arranging all the assemblies, old and fresh, so as to maximize the reactivity of the reactor core, maximize fuel burn-up and minimize fuel-cycle costs.
This is a discrete optimization problem that is computationally infeasible to solve exactly by current combinatorial methods, due to the huge number of permutations and the cost of evaluating each one. Many numerical methods have been proposed for solving it, and many commercial software packages have been written to support fuel management. It remains an open issue in reactor operations; no definitive solution has been found, and operators use a combination of computational and empirical techniques to manage the problem.
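A toy version shows why heuristic search is used in practice. In the sketch below, assemblies with different reactivity worths are placed on weighted core positions and the arrangement is improved by annealed swaps; the worths, position weights and peaking penalty are invented stand-ins for the neutronics a real fuel-management code would evaluate:

import math, random

random.seed(1)
N = 16
worth = [random.uniform(0.6, 1.4) for _ in range(N)]        # per-assembly reactivity (invented)
weight = [1.0 - 0.05 * abs(i - N // 2) for i in range(N)]   # central positions count more (invented)

def objective(perm):
    # Reward total worth, softly penalise local power peaking.
    contrib = [worth[a] * weight[p] for p, a in enumerate(perm)]
    score, peak = sum(contrib), max(contrib)
    return score - (10 * (peak - 1.3) if peak > 1.3 else 0.0)

perm = list(range(N))
current, T = objective(perm), 1.0
for _ in range(20000):                       # simulated annealing over pairwise swaps
    i, j = random.sample(range(N), 2)
    perm[i], perm[j] = perm[j], perm[i]
    cand = objective(perm)
    if cand >= current or random.random() < math.exp((cand - current) / T):
        current = cand                       # accept the swap
    else:
        perm[i], perm[j] = perm[j], perm[i]  # revert it
    T = max(1e-3, T * 0.9997)
print(f"annealed loading score: {current:.3f}")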
The study of used fuel
Used nuclear fuel is studied in post-irradiation examination, where used fuel is examined to learn more about the processes that occur in fuel during use, and how these might alter the outcome of an accident. For example, during normal use, the fuel expands due to thermal expansion, which can cause cracking. Most nuclear fuel is uranium dioxide, which is a cubic solid with a structure similar to that of calcium fluoride. In used fuel, the solid-state structure of most of the solid remains the same as that of pure cubic uranium dioxide. SIMFUEL is the name given to simulated spent fuel, which is made by mixing finely ground metal oxides, grinding as a slurry, and spray-drying it before heating in hydrogen/argon to 1700 °C. In SIMFUEL, 4.1% of the volume of the solid was in the form of metal nanoparticles made of molybdenum, ruthenium, rhodium and palladium. Most of these metal particles are of the ε phase (hexagonal) of Mo-Ru-Rh-Pd alloy, while smaller amounts of the α (cubic) and σ (tetragonal) phases of these metals were found in the SIMFUEL. Also present within the SIMFUEL was a cubic perovskite phase, a barium strontium zirconate (BaxSr1−xZrO3).
Uranium dioxide is minimally soluble in water, but after oxidation it can be converted to uranium trioxide or another uranium(VI) compound which is much more soluble. Uranium dioxide (UO2) can be oxidised to an oxygen-rich hyperstoichiometric oxide (UO2+x) which can be further oxidised to U4O9, U3O7, U3O8 and UO3·2H2O.
Because used fuel contains alpha emitters (plutonium and the minor actinides), the effect of adding an alpha emitter (238Pu) to uranium dioxide on the leaching rate of the oxide has been investigated. For the crushed oxide, adding 238Pu tended to increase the rate of leaching, but the difference in the leaching rate between 0.1 and 10% 238Pu was very small.
The concentration of carbonate in the water which is in contact with the used fuel has a considerable effect on the rate of corrosion, because uranium(VI) forms soluble anionic carbonate complexes such as [UO2(CO3)2]2− and [UO2(CO3)3]4−. When carbonate ions are absent, and the water is not strongly acidic, the hexavalent uranium compounds which form on oxidation of uranium dioxide often form insoluble hydrated uranium trioxide phases.
Thin films of uranium dioxide can be deposited upon gold surfaces by "sputtering", using uranium metal and an argon/oxygen gas mixture. These gold surfaces modified with uranium dioxide have been used for both cyclic voltammetry and AC impedance experiments, offering insight into the likely leaching behaviour of uranium dioxide.
Fuel cladding interactions
The study of the nuclear fuel cycle includes the study of the behaviour of nuclear materials both under normal conditions and under accident conditions. For example, there has been much work on how uranium dioxide based fuel interacts with the zirconium alloy tubing used to cover it. During use, the fuel swells due to thermal expansion and then starts to react with the surface of the zirconium alloy, forming a new layer which contains both fuel and zirconium (from the cladding). Then, on the fuel side of this mixed layer, there is a layer of fuel which has a higher caesium to uranium ratio than most of the fuel. This is because xenon isotopes are formed as fission products that diffuse out of the lattice of the fuel into voids such as the narrow gap between the fuel and the cladding. After diffusing into these voids, they decay to caesium isotopes. Because of the thermal gradient which exists in the fuel during use, the volatile fission products tend to be driven from the centre of the pellet to the rim area. Modelling the temperature of uranium metal, uranium nitride and uranium dioxide as a function of distance from the centre of a 20 mm diameter pellet with a rim temperature of 200 °C shows that the uranium dioxide (because of its poor thermal conductivity) will overheat at the centre of the pellet, while the other, more thermally conductive forms of uranium remain below their melting points.
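The centre-line overheating follows from steady-state conduction in a cylinder with uniform volumetric heating, where the temperature rise is q''' R^2 / (4 k). A minimal sketch, assuming a volumetric heat rate and rough literature-style conductivities (neither value is given in this article):

R = 0.010        # pellet radius in m (20 mm diameter, as in the text)
q = 3.0e8        # volumetric heat rate in W/m^3 (assumed)
T_rim = 200.0    # rim temperature in deg C (as in the text)
k = {"uranium metal": 27.0, "uranium nitride": 13.0, "uranium dioxide": 3.0}  # W/(m K), rough values
for fuel, kf in k.items():
    print(f"{fuel}: centre temperature about {T_rim + q * R**2 / (4 * kf):.0f} deg C")
# the dioxide centre comes out far hotter than the metal or nitride,
# approaching its melting point while the others stay well below theirs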
Normal and abnormal conditions
The nuclear chemistry associated with the nuclear fuel cycle can be divided into two main areas; one area is concerned with operation under the intended conditions while the other area is concerned with maloperation conditions where some alteration from the normal operating conditions has occurred or (more rarely) an accident is occurring.
The releases of radioactivity from normal operations are the small planned releases from uranium ore processing, enrichment, power reactors, reprocessing plants and waste stores. These can be in different chemical/physical form from releases which could occur under accident conditions. In addition the isotope signature of a hypothetical accident may be very different from that of a planned normal operational discharge of radioactivity to the environment.
Just because a radioisotope is released it does not mean it will enter a human and then cause harm. For instance, the migration of radioactivity can be altered by the binding of the radioisotope to the surfaces of soil particles. For example, caesium (Cs) binds tightly to clay minerals such as illite and montmorillonite, hence it remains in the upper layers of soil where it can be accessed by plants with shallow roots (such as grass). Hence grass and mushrooms can carry a considerable amount of 137Cs which can be transferred to humans through the food chain. But 137Cs is not able to migrate quickly through most soils and thus is unlikely to contaminate well water. Colloids of soil minerals can migrate through soil so simple binding of a metal to the surfaces of soil particles does not completely fix the metal.
According to Jiří Hála's textbook, the distribution coefficient Kd is the ratio of the soil's radioactivity (Bq g−1) to that of the soil water (Bq ml−1). If the radioisotope is tightly bound to the minerals in the soil, then less radioactivity can be absorbed by crops and grass growing on the soil; a minimal partitioning calculation is sketched after the list below.
Cs-137 Kd = 1000
Pu-239 Kd = 10000 to 100000
Sr-90 Kd = 80 to 150
I-131 Kd = 0.007 to 50
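A minimal batch-partitioning sketch using these Kd values (taking the lower end of each range) and an assumed mix of 1 kg of soil per litre of soil water shows why caesium and plutonium stay in the soil while iodine remains mobile:

# Sorbed fraction in a soil/water slurry: Kd*m / (Kd*m + V), with Kd in mL/g.
m, V = 1000.0, 1000.0                                     # g of soil, mL of water (assumed mix)
kd = {"Cs-137": 1000, "Pu-239": 10000, "Sr-90": 80, "I-131": 0.007}
for iso, k in kd.items():
    f = k * m / (k * m + V)
    print(f"{iso}: {100 * f:.2f}% of the activity held on the soil")
# Cs, Pu and Sr come out over 98% soil-bound; I-131 under 1%, so it follows the water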
In dairy farming, one of the best countermeasures against 137Cs is to mix up the soil by ploughing it deeply. This puts the 137Cs out of reach of the shallow roots of the grass, so the level of radioactivity in the grass will be lowered. Also, after a nuclear war or serious accident, the removal of the top few centimetres of soil and its burial in a shallow trench will reduce the long-term gamma dose to humans due to 137Cs, as the gamma photons will be attenuated by their passage through the soil.
Even after the radioactive element arrives at the roots of the plant, the metal may be rejected by the biochemistry of the plant. The details of the uptake of 90Sr and 137Cs into sunflowers grown under hydroponic conditions has been reported. The caesium was found in the leaf veins, in the stem and in the apical leaves. It was found that 12% of the caesium entered the plant, and 20% of the strontium. This paper also reports details of the effect of potassium, ammonium and calcium ions on the uptake of the radioisotopes.
In livestock farming, an important countermeasure against 137Cs is to feed animals a small amount of Prussian blue. This iron potassium cyanide compound acts as an ion-exchanger. The cyanide is so tightly bonded to the iron that it is safe for a human to eat several grams of Prussian blue per day. The Prussian blue reduces the biological half-life (distinct from the nuclear half-life) of the caesium. The physical or nuclear half-life of 137Cs is about 30 years and is a constant that cannot be changed, but the biological half-life is not constant: it varies with the nature and habits of the organism for which it is expressed. Caesium in humans normally has a biological half-life of between one and four months. An added advantage of Prussian blue is that the caesium stripped from the animal in the droppings is in a form that is not available to plants, which prevents the caesium from being recycled. The form of Prussian blue required for the treatment of humans or animals is a special grade; attempts to use the pigment grade used in paints have not been successful. A source of data on caesium in Chernobyl fallout is the Ukrainian Research Institute for Agricultural Radiology.
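The interplay of the two half-lives follows the standard relation 1/T_eff = 1/T_phys + 1/T_bio; the sketch below uses the figures quoted in this paragraph:

# Effective half-life of Cs-137 in the body, combining radioactive decay
# (about 30 years) with biological elimination (one to four months).
T_phys = 30 * 365.25                 # days
for T_bio in (30.0, 120.0):          # one month and four months
    T_eff = 1 / (1 / T_phys + 1 / T_bio)
    print(f"biological {T_bio:.0f} d -> effective {T_eff:.1f} d")
# elimination is dominated by the biological term, which is exactly
# the term that Prussian blue shortens further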
Release of radioactivity from fuel during normal use and accidents
The IAEA assumes that under normal operation the coolant of a water-cooled reactor will contain some radioactivity, but during a reactor accident the coolant radioactivity level may rise. The IAEA states that under a series of different conditions, different amounts of the core inventory can be released from the fuel. The four conditions the IAEA considers are: normal operation; a spike in coolant activity due to a sudden shutdown/loss of pressure (the core remains covered with water); a cladding failure resulting in the release of the activity in the fuel/cladding gap (this could be due to the fuel being uncovered by the loss of water for 15–30 minutes, where the cladding reaches a temperature of 650–1250 °C); and a melting of the core (the fuel would have to be uncovered for at least 30 minutes, with the cladding reaching a temperature in excess of 1650 °C).
Based upon the assumption that a Pressurized water reactor contains 300 tons of water, and that the activity of the fuel of a 1 GWe reactor is as the IAEA predicts, then the coolant activity after an accident such as the Three Mile Island accident (where a core is uncovered and then recovered with water) can be predicted.
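The prediction is essentially a dilution calculation: released activity divided by coolant mass. In the sketch below, the core inventory and release fractions are placeholders that only illustrate the structure of the estimate; they are not IAEA figures:

coolant_kg = 300e3                 # 300 t of primary coolant, as in the text
inventory_Bq = 1.0e18              # placeholder inventory of one isotope in the core
cases = [("normal operation", 1e-7), ("shutdown spike", 1e-5),
         ("gap release", 1e-2), ("core melt", 0.5)]  # placeholder release fractions
for name, frac in cases:
    print(f"{name}: about {frac * inventory_Bq / coolant_kg:.1e} Bq per kg of coolant")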
Releases from reprocessing under normal conditions
It is normal to allow used fuel to stand after irradiation so that the short-lived and radiotoxic iodine isotopes can decay away. In one experiment in the US, fresh fuel that had not been allowed to decay was reprocessed (the "Green Run") to investigate the effects of a large iodine release from the reprocessing of short-cooled fuel. In reprocessing plants it is normal to scrub the off-gases from the dissolver to prevent the emission of iodine. In addition to iodine, the noble gases and tritium are released from the fuel when it is dissolved. It has been proposed that by voloxidation (heating the fuel in a furnace under oxidizing conditions) the majority of the tritium can be recovered from the fuel.
A paper was written on the radioactivity in oysters found in the Irish Sea. These were found by gamma spectroscopy to contain 141Ce, 144Ce, 103Ru, 106Ru, 137Cs, 95Zr and 95Nb. Additionally, a zinc activation product (65Zn) was found, which is thought to be due to the corrosion of magnox fuel cladding in spent fuel pools. It is likely that modern releases of all these isotopes from the Windscale site are smaller.
On-load reactors
Some reactor designs, such as RBMKs or CANDU reactors, can be refueled without being shut down. This is achieved through the use of many small pressure tubes to contain the fuel and coolant, as opposed to one large pressure vessel as in pressurized water reactor (PWR) or boiling water reactor (BWR) designs. Each tube can be individually isolated and refueled by an operator-controlled fueling machine, typically at a rate of up to 8 channels per day out of roughly 400 in CANDU reactors. On-load refueling allows for the optimal fuel reloading problem to be dealt with continuously, leading to more efficient use of fuel. This increase in efficiency is partially offset by the added complexity of having hundreds of pressure tubes and the fueling machines to service them.
Interim storage
After its operating cycle, the reactor is shut down for refueling. The fuel discharged at that time (spent fuel) is stored either at the reactor site (commonly in a spent fuel pool) or potentially in a common facility away from reactor sites. If on-site pool storage capacity is exceeded, it may be desirable to store the now cooled aged fuel in modular dry storage facilities known as Independent Spent Fuel Storage Installations (ISFSI) at the reactor site or at a facility away from the site. The spent fuel rods are usually stored in water or boric acid, which provides both cooling (the spent fuel continues to generate decay heat as a result of residual radioactive decay) and shielding to protect the environment from residual ionizing radiation, although after at least a year of cooling they may be moved to dry cask storage.
Transportation
Reprocessing
Spent fuel discharged from reactors contains appreciable quantities of fissile (U-235 and Pu-239), fertile (U-238), and other radioactive materials, including reaction poisons, which is why the fuel had to be removed. These fissile and fertile materials can be chemically separated and recovered from the spent fuel. The recovered uranium and plutonium can, if economic and institutional conditions permit, be recycled for use as nuclear fuel. This is currently not done for civilian spent nuclear fuel in the United States, but it is done in Russia. Russia aims to maximise recycling of fissile materials from used fuel; hence reprocessing used fuel is a basic practice there, with reprocessed uranium being recycled and plutonium used in MOX, at present only for fast reactors.
Mixed oxide, or MOX fuel, is a blend of reprocessed uranium and plutonium and depleted uranium which behaves similarly, although not identically, to the enriched uranium feed for which most nuclear reactors were designed. MOX fuel is an alternative to low-enriched uranium (LEU) fuel used in the light water reactors which predominate nuclear power generation.
Currently, plants in Europe are reprocessing spent fuel from utilities in Europe and Japan. Reprocessing of spent commercial-reactor nuclear fuel is currently not permitted in the United States due to the perceived danger of nuclear proliferation. The Bush Administration's Global Nuclear Energy Partnership proposed that the U.S. form an international partnership to see spent nuclear fuel reprocessed in a way that renders the plutonium in it usable for nuclear fuel but not for nuclear weapons.
Partitioning and transmutation
As an alternative to the disposal of the PUREX raffinate in glass or Synroc matrix, the most radiotoxic elements could be removed through advanced reprocessing. After separation, the minor actinides and some long-lived fission products could be converted to short-lived or stable isotopes by either neutron or photon irradiation. This is called transmutation. Strong and long-term international cooperation, many decades of research, and huge investments remain necessary to reach a mature industrial scale at which the safety and economic feasibility of partitioning and transmutation (P&T) could be demonstrated.
Waste disposal
A current concern in the nuclear power field is the safe disposal and isolation of either spent fuel from reactors or, if the reprocessing option is used, wastes from reprocessing plants. These materials must be isolated from the biosphere until the radioactivity contained in them has diminished to a safe level. In the U.S., under the Nuclear Waste Policy Act of 1982 as amended, the Department of Energy has responsibility for the development of the waste disposal system for spent nuclear fuel and high-level radioactive waste. Current plans call for the ultimate disposal of the wastes in solid form in a licensed deep, stable geologic structure called a deep geological repository. The Department of Energy chose Yucca Mountain as the location for the repository. Its opening has been repeatedly delayed. Since 1999 thousands of nuclear waste shipments have been stored at the Waste Isolation Pilot Plant in New Mexico.
Fast-neutron reactors can fission all actinides, while the thorium fuel cycle produces low levels of transuranics. Unlike LWRs, in principle these fuel cycles could recycle their plutonium and minor actinides and leave only fission products and activation products as waste. The highly radioactive medium-lived fission products Cs-137 and Sr-90 diminish by a factor of 10 each century; while the long-lived fission products have relatively low radioactivity, often compared favorably to that of the original uranium ore.
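The factor-of-10-per-century figure can be checked directly from the roughly 30-year half-lives of these two nuclides:

# 100 years is about 3.3 half-lives, and 2**3.3 is about 10.
for iso, t_half in [("Cs-137", 30.1), ("Sr-90", 28.8)]:
    print(f"{iso}: activity falls {2 ** (100 / t_half):.1f}-fold per century")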
Horizontal drillhole disposal describes proposals to drill over one kilometer vertically, and two kilometers horizontally in the Earth's crust, for the purpose of disposing of high-level waste forms such as spent nuclear fuel, Caesium-137, or Strontium-90. After the emplacement and the retrievability period, drillholes would be backfilled and sealed. A series of tests of the technology were carried out in November 2018 and then again publicly in January 2019 by a U.S. based private company. The test demonstrated the emplacement of a test-canister in a horizontal drillhole and retrieval of the same canister. There was no actual high-level waste used in this test.
Fuel cycles
Although the most common terminology is fuel cycle, some argue that the term fuel chain is more accurate, because the spent fuel is never fully recycled. Spent fuel includes fission products, which generally must be treated as waste, as well as uranium, plutonium, and other transuranic elements. Where plutonium is recycled, it is normally reused once in light water reactors, although fast reactors could lead to more complete recycling of plutonium.
Once-through nuclear fuel cycle
Not a cycle per se, this option sees fuel used once and then sent to storage without further processing, save additional packaging to provide better isolation from the biosphere. This method is favored by six countries: the United States, Canada, Sweden, Finland, Spain and South Africa. Some countries, notably Finland, Sweden and Canada, have designed repositories to permit future recovery of the material should the need arise, while others plan for permanent sequestration in a geological repository like the Yucca Mountain nuclear waste repository in the United States.
Plutonium cycle
Several countries, including Japan, Switzerland, and previously Spain and Germany, are using or have used the reprocessing services offered by Areva NC and previously THORP. Fission products, minor actinides, activation products, and reprocessed uranium are separated from the reactor-grade plutonium, which can then be fabricated into MOX fuel. Because the proportion of the non-fissile even-mass isotopes of plutonium rises with each pass through the cycle, there are currently no plans to reuse plutonium from used MOX fuel for a third pass in a thermal reactor. If fast reactors become available, they may be able to burn these, or almost any other actinide isotopes.
The use of a medium-scale reprocessing facility onsite, together with pyroprocessing rather than present-day aqueous reprocessing, is claimed to be able to considerably reduce the nuclear proliferation potential or possible diversion of fissile material, as the processing facility is in-situ. In the pyroprocessing cycle the plutonium is never separated on its own; rather, all actinides are "electro-won" or "refined" from the spent fuel together, so the plutonium comes over into the new fuel mixed with gamma- and alpha-emitting actinides, species that "self-protect" it in numerous possible theft scenarios.
Beginning in 2016, Russia has been testing, and is now deploying, REMIX fuel, in which spent nuclear fuel is put through a process, similar to pyroprocessing, that separates the reactor-grade plutonium and remaining uranium from the fission products and fuel cladding. This mixed metal is then combined with a small quantity of medium-enriched uranium of approximately 17% U-235 concentration to make a new combined metal-oxide fuel with 1% reactor-grade plutonium and a U-235 concentration of 4%. These fuel rods are suitable for use in standard PWR reactors, as the plutonium content is no higher than that which exists at the end of cycle in spent nuclear fuel. As of February 2020, Russia was deploying this fuel in some of its fleet of VVER reactors.
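A simple blending balance reproduces the quoted make-up fraction; the 1% U-235 assay assumed here for the recycled uranium follows the typical spent-fuel composition given earlier in this article:

# Mass fraction f of 17%-enriched make-up uranium needed so that
# recycled uranium at ~1% U-235 blends to 4% U-235 overall.
x_rep, x_meu, x_target = 0.01, 0.17, 0.04
f = (x_target - x_rep) / (x_meu - x_rep)
print(f"about {100 * f:.0f}% of the blend is medium-enriched make-up uranium")
# roughly one part of 17% material to four parts of recycled uranium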
Minor actinides recycling
It has been proposed that in addition to the use of plutonium, the minor actinides could be used in a critical power reactor. Tests are already being conducted in which americium is being used as a fuel.
A number of reactor designs, like the Integral Fast Reactor, have been designed for this rather different fuel cycle. In principle, it should be possible to derive energy from the fission of any actinide nucleus. With a careful reactor design, all the actinides in the fuel can be consumed, leaving only lighter elements with short half-lives. Whereas this has been done in prototype plants, no such reactor has ever been operated on a large scale.
It so happens that the neutron cross-section of many actinides decreases with increasing neutron energy, but the ratio of fission to simple activation (neutron capture) changes in favour of fission as the neutron energy increases. Thus with a sufficiently high neutron energy, it should be possible to destroy even curium without the generation of the transcurium metals. This could be very desirable as it would make it significantly easier to reprocess and handle the actinide fuel.
One promising alternative from this perspective is an accelerator-driven sub-critical reactor / subcritical reactor. Here a beam of either protons (United States and European designs) or electrons (Japanese design) is directed into a target. In the case of protons, very fast neutrons will spall off the target, while in the case of the electrons, very high energy photons will be generated. These high-energy neutrons and photons will then be able to cause the fission of the heavy actinides.
Such reactors compare very well to other neutron sources in terms of neutron energy:
Thermal 0 to 100 eV
Epithermal 100 eV to 100 keV
Fast (from nuclear fission) 100 keV to 3 MeV
DD fusion 2.5 MeV
DT fusion 14 MeV
Accelerator driven core 200 MeV (lead driven by 1.6 GeV protons)
Muon-catalyzed fusion 7 GeV.
As an alternative, the curium-244, with a half-life of 18 years, could be left to decay into plutonium-240 before being used in fuel in a fast reactor.
Fuel or targets for this actinide transmutation
To date, the nature of the fuel (targets) for actinide transmutation has not been chosen.
If actinides are transmuted in a subcritical reactor, it is likely that the fuel will have to tolerate more thermal cycles than conventional fuel. Because current particle accelerators are not optimized for long continuous operation, at least the first generation of accelerator-driven subcritical reactors is unlikely to maintain constant operation for periods as long as a critical reactor, and each time the accelerator stops the fuel will cool down.
On the other hand, if actinides are destroyed using a fast reactor, such as an Integral Fast Reactor, then the fuel will most likely not be exposed to many more thermal cycles than in a normal power station.
Depending on the matrix, the process can generate more transuranics from the matrix itself. This could be viewed either as good (generating more fuel) or as bad (generating more radiotoxic transuranic elements). A series of different matrices exists which can control this production of heavy actinides.
Fissile nuclei (such as 233U, 235U, and 239Pu) respond well to delayed neutrons and are thus important to keep a critical reactor stable; this limits the amount of minor actinides that can be destroyed in a critical reactor. As a consequence, it is important that the chosen matrix allows the reactor to keep the ratio of fissile to non-fissile nuclei high, as this enables it to destroy the long-lived actinides safely. In contrast, the power output of a sub-critical reactor is limited by the intensity of the driving particle accelerator, and thus it need not contain any uranium or plutonium at all. In such a system, it may be preferable to have an inert matrix that does not produce additional long-lived isotopes. Having a low fraction of delayed neutrons is not only not a problem in a subcritical reactor, it may even be slightly advantageous as criticality can be brought closer to unity, while still staying subcritical.
Actinides in an inert matrix
The actinides would be mixed with a material that does not breed more actinides; for instance, a dispersion of actinides in a solid such as zirconia could be used.
The raison d'être of the Initiative for Inert Matrix Fuel (IMF) is to contribute to research and development on inert matrix fuels that could be used to utilise, reduce and dispose of both weapons-grade and light water reactor-grade plutonium excesses. In addition to plutonium, the amounts of minor actinides are also increasing, and these actinides must be disposed of in a safe, ecological and economical way. The promising strategy of consuming plutonium and minor actinides in a once-through fuel approach within existing commercial nuclear power reactors (e.g. US, European, Russian or Japanese light water reactors, or Canadian pressurised heavy water reactors) or in future transmutation units has been emphasised since the beginning of the initiative. The inert matrix fuel approach is now studied by several groups around the world. This option has the advantage of reducing the plutonium, and potentially the minor actinide, content prior to geological disposal. The second option is based on a uranium-free fuel that is leachable for reprocessing, following a multi-recycling strategy. In both cases, the advanced fuel material produces energy while consuming plutonium or the minor actinides. This material must, however, be robust. The selected material must result from a careful system study including, as minimum components, an inert matrix, a burnable absorber and fissile material, with the addition of a stabiliser. This yields a single-phase solid solution or, if that option is not selected, a composite of inert matrix and fissile component. In screening studies, pre-selected elements were identified as suitable. In the 1990s an IMF once-through strategy was adopted considering the following properties:
neutron properties i.e. low absorption cross-section, optimal constant reactivity, suitable Doppler coefficient,
phase stability, chemical inertness, and compatibility,
acceptable thermo-physical properties i.e. heat capacity, thermal conductivity,
good behaviour under irradiation i.e. phase stability, minimum swelling,
retention of fission products or residual actinides, and
optimal properties after irradiation with insolubility for once through then out.
This once-through-then-out strategy may be adapted as a last cycle after multi-recycling if the fission yield is not large enough, in which case a further property is required: good leaching properties for reprocessing and multi-recycling.
Actinides in a thorium matrix
Upon neutron bombardment, thorium can be converted to uranium-233. 233U is fissile, and has a larger fission cross section than both 235U and 238U, and thus it is far less likely to produce higher actinides through neutron capture.
Actinides in a uranium matrix
If the actinides are incorporated into a uranium-metal or uranium-oxide matrix, then the neutron capture of 238U is likely to generate new plutonium-239. An advantage of mixing the actinides with uranium and plutonium is that the large fission cross sections of 235U and 239Pu for the less energetic delayed neutrons could make the reaction stable enough to be carried out in a critical fast reactor, which is likely to be both cheaper and simpler than an accelerator driven system.
Mixed matrix
It is also possible to create a matrix made from a mix of the above-mentioned materials. This is most commonly done in fast reactors, where one may wish to keep the breeding ratio of new fuel high enough to keep powering the reactor, but still low enough that the generated actinides can be safely destroyed without transporting them to another site. One way to do this is to use fuel in which actinides and uranium are mixed with inert zirconium, producing fuel elements with the desired properties.
Uranium cycle in renewable mode
To fulfill the conditions required for a nuclear renewable energy concept, one has to explore a combination of processes going from the front end of the nuclear fuel cycle to fuel production and energy conversion using specific fluid fuels and reactors, as reported by Degueldre et al. (2019). Extraction of uranium from a diluted fluid ore such as seawater has been studied in various countries worldwide. This extraction should be carried out parsimoniously, as suggested by Degueldre (2017). An extraction rate of kilotons of U per year over centuries would not significantly modify the equilibrium concentration of uranium in the oceans (3.3 ppb). This equilibrium results from the input of 10 kilotons of U per year by river waters and its scavenging on the sea floor from the 1.37 exatons of water in the oceans. For a renewable uranium extraction, the use of a specific biomass material is suggested to adsorb uranium and subsequently other transition metals. The uranium loading on the biomass would be around 100 mg per kg. After the contact time, the loaded material would be dried and burned (CO2 neutral) with heat conversion into electricity. 'Burning' the uranium in a molten salt fast reactor helps to optimise the energy conversion by burning all actinide isotopes with an excellent yield, producing a maximum amount of thermal energy from fission and converting it into electricity. This optimisation can be reached by reducing the moderation and the fission product concentration in the liquid fuel/coolant. These effects can be achieved by using a maximum amount of actinides and a minimum amount of alkaline/alkaline-earth elements, yielding a harder neutron spectrum. Under these optimal conditions the consumption of natural uranium would be 7 tons per year and per gigawatt (GW) of produced electricity. The coupling of uranium extraction from the sea and its optimal utilisation in a molten salt fast reactor should allow nuclear energy to gain the label renewable. In addition, the amount of seawater used by a nuclear power plant to cool the last coolant fluid and the turbine would be about 2.1 gigatons per year for a fast molten salt reactor, corresponding to the 7 tons of natural uranium extractable per year. This practice justifies the label renewable.
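These figures can be checked against each other with a short calculation (a minimal sketch in Python, using only the 3.3 ppb concentration, the 7 t/yr requirement and the 1.37 exaton ocean inventory quoted above, and idealising the extraction as complete recovery of the dissolved uranium):

```python
# Rough mass balance for uranium extraction from seawater,
# built only from the figures quoted in the text (not a cited model).

U_CONC = 3.3e-9          # uranium mass fraction of seawater (3.3 ppb)
U_PER_GW_YEAR_T = 7.0    # tonnes of natural uranium per GW of electricity per year

# Seawater to process per GW-year, assuming complete recovery (an idealisation).
seawater_t = U_PER_GW_YEAR_T / U_CONC
print(f"Seawater processed: {seawater_t:.2e} t/yr")   # ~2.1e9 t, i.e. ~2.1 gigatons

# Total dissolved inventory implied by 1.37 exatons of ocean water at 3.3 ppb.
ocean_water_t = 1.37e18
print(f"Ocean uranium inventory: {ocean_water_t * U_CONC:.2e} t")  # ~4.5e9 t
```

Both printed values reproduce the orders of magnitude in the text: about 2.1 gigatons of seawater per gigawatt-year, drawn from an implied ocean inventory of roughly 4.5 billion tons of uranium.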
Thorium cycle
In the thorium fuel cycle thorium-232 absorbs a neutron in either a fast or thermal reactor. The thorium-233 beta decays to protactinium-233 and then to uranium-233, which in turn is used as fuel. Hence, like uranium-238, thorium-232 is a fertile material.
$\overset{\text{neutron}}{n} + {}^{232}_{90}\text{Th} \longrightarrow {}^{233}_{90}\text{Th} \xrightarrow{\beta^-} {}^{233}_{91}\text{Pa} \xrightarrow{\beta^-} \overset{\text{fuel}}{{}^{233}_{92}\text{U}}$
After starting the reactor with existing U-233 or some other fissile material such as U-235 or Pu-239, a breeding cycle similar to but more efficient than that with U-238 and plutonium can be created. The Th-232 absorbs a neutron to become Th-233 which quickly decays to protactinium-233. Protactinium-233 in turn decays with a half-life of 27 days to U-233. In some molten salt reactor designs, the Pa-233 is extracted and protected from neutrons (which could transform it to Pa-234 and then to U-234), until it has decayed to U-233. This is done in order to improve the breeding ratio which is low compared to fast reactors.
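The effect of the out-of-core holding time can be illustrated with the 27-day half-life quoted above (a minimal sketch; the specific holding times shown are arbitrary examples, not design values):

```python
import math

HALF_LIFE_DAYS = 27.0  # half-life of Pa-233, as quoted above

def u233_fraction(t_days: float) -> float:
    """Fraction of an initial Pa-233 inventory that has decayed to U-233 after t days."""
    return 1.0 - math.exp(-math.log(2.0) * t_days / HALF_LIFE_DAYS)

# After one half-life, half the inventory is usable fuel; after two, 75%.
for t in (27, 54, 108):
    print(f"{t:>3} days out of core: {u233_fraction(t):.0%} converted to U-233")
```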
Thorium is at least 4-5 times more abundant in nature than all uranium isotopes combined, and it is fairly evenly spread around the Earth, with many countries having large supplies of it. Preparation of thorium fuel does not require difficult and expensive enrichment processes. The thorium fuel cycle creates mainly uranium-233 contaminated with uranium-232, which makes it harder to use in a normal, pre-assembled nuclear weapon that is stable over long periods of time (these drawbacks are much smaller for weapons intended for immediate use, or whose final assembly occurs just prior to usage time). Elimination of at least the transuranic portion of the nuclear waste problem is possible in MSR and other breeder reactor designs.
One of the earliest efforts to use a thorium fuel cycle took place at Oak Ridge National Laboratory in the 1960s. An experimental reactor was built based on molten salt reactor technology to study the feasibility of such an approach, using thorium fluoride salt kept hot enough to be liquid, thus eliminating the need for fabricating fuel elements. This effort culminated in the Molten-Salt Reactor Experiment that used 232Th as the fertile material and 233U as the fissile fuel. Due to a lack of funding, the MSR program was discontinued in 1976.
Thorium was first used commercially in the Indian Point Unit 1 reactor which began operation in 1962. The cost of recovering U-233 from the spent fuel was deemed uneconomical, since less than 1% of the thorium was converted to U-233. The plant's owner switched to uranium fuel, which was used until the reactor was permanently shut down in 1974.
Current industrial activity
Currently the only isotopes used as nuclear fuel are uranium-235 (U-235), uranium-238 (U-238) and plutonium-239, although the proposed thorium fuel cycle has advantages. Some modern reactors, with minor modifications, can use thorium. Thorium is approximately three times more abundant in the Earth's crust than uranium (and 550 times more abundant than uranium-235). There has been little exploration for thorium resources, and thus the proven reserves are comparatively small. Thorium is more plentiful than uranium in some countries, notably India. The main thorium-bearing mineral, monazite, is currently mostly of interest for its content of rare earth elements, and most of the thorium is simply dumped on spoil tips, similar to uranium mine tailings. As mining for rare earth elements occurs mainly in China, and as it is not associated in the public consciousness with the nuclear fuel cycle, thorium-containing mine tailings - despite their radioactivity - are not commonly seen as a nuclear waste issue and are not treated as such by regulators.
Virtually all heavy water reactors and some graphite-moderated reactors ever deployed can use natural uranium, but the vast majority of the world's reactors require enriched uranium, in which the ratio of U-235 to U-238 is increased. In civilian reactors the enrichment is increased to 3-5% U-235 (the remainder being U-238), but in naval reactors there is as much as 93% U-235. The fissile content in spent fuel from most light water reactors is high enough to allow its use as fuel for reactors capable of using natural-uranium-based fuel. However, this would require at least mechanical and/or thermal reprocessing (forming the spent fuel into a new fuel assembly) and is thus not currently widely done.
The term nuclear fuel is not normally used in respect to fusion power, which fuses isotopes of hydrogen into helium to release energy.
| Technology | Fuel | null |
194068 | https://en.wikipedia.org/wiki/Blue-ringed%20octopus | Blue-ringed octopus | Blue-ringed octopuses, comprising the genus Hapalochlaena, are four extremely venomous species of octopus that are found in tide pools and coral reefs in the Pacific and Indian oceans, from Japan to Australia. They can be identified by their yellowish skin and characteristic blue and black rings that can change color dramatically when the animal is threatened. They eat small crustaceans, including crabs, hermit crabs, shrimp, and other small sea animals.
They are among the world's most venomous marine animals. Despite their small size and relatively docile nature, they are very dangerous if provoked or handled, because their venom contains a powerful neurotoxin called tetrodotoxin.
The species tends to have a lifespan of approximately two to three years. This may vary depending on factors such as nutrition, temperature, and the intensity of light within its environment.
Classification
The genus was described by British zoologist Guy Coburn Robson in 1929. There are four confirmed species of Hapalochlaena, and six possible but still undescribed species being researched:
Greater blue-ringed octopus (Hapalochlaena lunulata)
Southern blue-ringed octopus or lesser blue-ringed octopus (Hapalochlaena maculosa)
Blue-lined octopus (Hapalochlaena fasciata)
Hapalochlaena nierstraszi was documented and described in 1938 from a single specimen found in the Bay of Bengal, with a second specimen caught and described in 2013.
Behavior
Blue-ringed octopuses spend most of their time hiding in crevices while displaying effective camouflage patterns with their dermal chromatophore cells. Like all octopuses, they can change shape easily, which allows them to squeeze into crevices much smaller than themselves. This, along with piling up rocks outside the entrance to its lair, helps safeguard the octopus from predators.
If they are provoked, they quickly change color, becoming bright yellow with each of the 50–60 rings flashing bright iridescent blue within a third of a second as an aposematic warning display. In the greater blue-ringed octopus (Hapalochlaena lunulata), the rings contain multi-layer light reflectors called iridophores. These are arranged to reflect blue–green light in a wide viewing direction. Beneath and around each ring are dark pigmented chromatophores which can be expanded within 1 second to enhance the contrast of the rings. There are no chromatophores above the ring, which is unusual for cephalopods as they typically use chromatophores to cover or spectrally modify iridescence. The fast flashes of the blue rings are achieved by using muscles which are under neural control. Under normal circumstances, each ring is hidden by contraction of muscles above the iridophores. When these relax and muscles outside the ring contract, the iridescence is exposed, thereby revealing the blue color.
Similar to other Octopoda, the blue-ringed octopus swims by expelling water from a funnel in a form of jet propulsion.
Feeding
The blue-ringed octopus feeds on fish and crustaceans. It pounces on its prey, seizing it with its arms and pulling it towards its mouth. It uses its horny beak to pierce through the tough crab or shrimp exoskeleton, releasing its venom. The venom paralyzes the muscles required for movement, which effectively kills the prey.
Reproduction
The mating ritual for the blue-ringed octopus begins when a male approaches a female and begins to caress her with his modified arm, the hectocotylus. A male mates with a female by grabbing her, which sometimes completely obscures the female's vision, then transferring sperm packets by inserting his hectocotylus into her mantle cavity repeatedly. Mating continues until the female has had enough, and in at least one species, the female has to remove the over-enthusiastic male by force. Males will attempt copulation with members of their own species regardless of sex or size, but interactions between males are most often shorter in duration and end with the mounting octopus withdrawing the hectocotylus without packet insertion or struggle.
Blue-ringed octopus females lay only one clutch of about 50 eggs in their lifetimes, towards the end of fall. Eggs are laid and then incubated underneath the female's arms for about six months. During this process, the female does not eat. After the eggs hatch, the female dies, and the new offspring will reach maturity and be able to mate by the next year.
Mating behavior
In the southern blue-ringed octopus, body mass is observed to be the strongest factor influencing copulation rates. There is evidence that females prefer larger males, although no male preference for particular females has been shown. In this species, males appear to expend more effort than females to initiate copulation, and it is unlikely that males use odor cues to identify females with which to mate. Male-male mounting attempts are common in H. maculosa, suggesting that males do not discriminate between sexes. Male blue-ringed octopuses adjust mating duration based on the female's recent mating history: copulation is less likely to be terminated with a female who has not yet mated with another male, and mating durations are longer in these cases.
Toxicity
The blue-ringed octopus, despite its small size, carries enough venom to kill 26 adult humans within minutes. Its bites are tiny and often painless, and many victims do not realize they have been envenomated until respiratory depression and paralysis begin. No blue-ringed octopus antivenom is available.
Venom
The octopus produces venom containing tetrodotoxin, histamine, tryptamine, octopamine, taurine, acetylcholine, and dopamine. The venom can result in nausea, respiratory arrest, heart failure, severe and sometimes total paralysis, blindness, and can lead to death within minutes if not treated. Death is usually from suffocation due to paralysis of the diaphragm.
The venom is produced in the posterior salivary gland of the octopus by endosymbiotic bacteria. The salivary glands possess a tubuloacinar exocrine structure and are located in the intestinal blood space.
The major neurotoxin component of the blue-ringed octopus is a compound originally known as "maculotoxin"; in 1978, this maculotoxin was found to be tetrodotoxin, a neurotoxin also found in pufferfish, rough-skinned newts, and some poison dart frogs; the blue-ringed octopus is the first reported instance in which tetrodotoxin is used as a venom. Tetrodotoxin blocks sodium channels, causing motor paralysis and respiratory arrest within minutes of exposure. The octopus's own sodium channels are adapted to be resistant to tetrodotoxin.
Direct contact is necessary to be envenomated. Faced with danger, the octopus's first instinct is to flee. If the threat persists, the octopus will go into a defensive stance, and display its blue rings. If the octopus is cornered and touched, it may bite and envenomate its attacker.
Estimates of the number of recorded human fatalities caused by blue-ringed octopuses vary, ranging from seven to sixteen deaths; most scholars agree that there have been at least eleven.
Tetrodotoxin can be found in nearly every organ and gland of its body. Even sensitive areas such as the Needham's sac, branchial heart, nephridia, and gills have been found to contain tetrodotoxin, and it has no effect on the octopus's normal functions. This may be possible through a unique blood transport. The mother will inject the neurotoxin (and perhaps the toxin-producing bacteria) into her eggs to make them generate their own venom before hatching.
Effects
Tetrodotoxin causes severe and often total body paralysis. Tetrodotoxin envenomation can result in victims being fully aware of their surroundings but unable to move. Because of the paralysis, they have no way of signaling for help or indicating distress. The victim remains conscious and alert in a manner similar to the effect of curare or pancuronium bromide. This effect is temporary and will fade over a period of hours as the tetrodotoxin is metabolized and excreted by the body.
The symptoms vary in severity, with children being the most at risk because of their small body size.
Treatment
First aid treatment is pressure on the wound and artificial respiration once the paralysis has disabled the victim's respiratory muscles, which often occurs within minutes of being bitten. Because the venom primarily kills through paralysis, victims are frequently saved if artificial respiration is started and maintained before marked cyanosis and hypotension develop. Respiratory support until medical assistance arrives will improve the victim's chances of survival. Definitive hospital treatment involves placing the patient on a ventilator until the toxin is removed by the body. Victims who survive the first 24 hours usually recover completely.
Conservation
Currently, the blue-ringed octopus is listed as Least Concern by the International Union for Conservation of Nature (IUCN). However, bioprospecting, habitat fragmentation and degradation, overfishing, human disturbance, and collection for the aquarium trade may threaten population numbers. Hapalochlaena may also offer several benefits to marine conservation: the genus contributes to the stability of habitat biodiversity and to the balance of marine food webs, and various species may help control populations of Asian date mussels. Additionally, future research on the tetrodotoxins produced by the blue-ringed octopus may lead to new medicinal discoveries.
In popular culture
In the 1983 James Bond film Octopussy, the blue-ringed octopus is the prominent symbol of the secret order of female bandits and smugglers, appearing in an aquarium tank, on silk robes, and as a tattoo on women in the order. The Adventure Zone featured a blue-ringed octopus in its "Petals to the Metal" series.
| Biology and health sciences | Cephalopods | Animals |
194227 | https://en.wikipedia.org/wiki/Heat%20capacity | Heat capacity | Heat capacity or thermal capacity is a physical property of matter, defined as the amount of heat to be supplied to an object to produce a unit change in its temperature. The SI unit of heat capacity is joule per kelvin (J/K).
Heat capacity is an extensive property. The corresponding intensive property is the specific heat capacity, found by dividing the heat capacity of an object by its mass. Dividing the heat capacity by the amount of substance in moles yields its molar heat capacity. The volumetric heat capacity measures the heat capacity per volume. In architecture and civil engineering, the heat capacity of a building is often referred to as its thermal mass.
Definition
Basic definition
The heat capacity of an object, denoted by $C$, is the limit
$C = \lim_{\Delta T \to 0} \frac{\Delta Q}{\Delta T},$
where $\Delta Q$ is the amount of heat that must be added to the object (of mass $M$) in order to raise its temperature by $\Delta T$.
The value of this parameter usually varies considerably depending on the starting temperature of the object and the pressure applied to it. In particular, it typically varies dramatically with phase transitions such as melting or vaporization (see enthalpy of fusion and enthalpy of vaporization). Therefore, it should be considered a function of those two variables.
Variation with temperature
The variation can be ignored in contexts when working with objects in narrow ranges of temperature and pressure. For example, the heat capacity of a block of iron weighing one pound is about 204 J/K when measured from a starting temperature T = 25 °C and P = 1 atm of pressure. That approximate value is adequate for temperatures between 15 °C and 35 °C, and surrounding pressures from 0 to 10 atmospheres, because the exact value varies very little in those ranges. One can trust that the same heat input of 204 J will raise the temperature of the block from 15 °C to 16 °C, or from 34 °C to 35 °C, with negligible error.
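A short numeric sketch shows how such a tabulated value is used in practice (assuming, as the text does, that the 204 J/K figure can be treated as constant over the stated range):

```python
C_BLOCK = 204.0  # J/K, heat capacity of the one-pound iron block quoted above

def heat_required(t_start_c: float, t_end_c: float) -> float:
    """Heat in joules to take the block from t_start_c to t_end_c, using Q = C * dT."""
    return C_BLOCK * (t_end_c - t_start_c)

print(heat_required(15.0, 16.0))  # 204.0 J for a one-degree rise
print(heat_required(15.0, 35.0))  # 4080.0 J across the whole quoted range
```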
Heat capacities of a homogeneous system undergoing different thermodynamic processes
At constant pressure, δQ = dU + pdV (isobaric process)
At constant pressure, heat supplied to the system contributes to both the work done and the change in internal energy, according to the first law of thermodynamics. The heat capacity is called $C_p$ and defined as:
$C_p = \left( \frac{\delta Q}{dT} \right)_p$
From the first law of thermodynamics follows $\delta Q = dU + p\,dV$, and the internal energy as a function of $p$ and $T$ is:
$dU = \left( \frac{\partial U}{\partial T} \right)_p dT + \left( \frac{\partial U}{\partial p} \right)_T dp$
For constant pressure ($dp = 0$) the equation simplifies to:
$C_p = \left( \frac{\partial U}{\partial T} \right)_p + p \left( \frac{\partial V}{\partial T} \right)_p = \left( \frac{\partial H}{\partial T} \right)_p,$
where the final equality follows from the appropriate Maxwell relations, and $\left( \frac{\partial H}{\partial T} \right)_p$ is commonly used as the definition of the isobaric heat capacity.
At constant volume, dV = 0, δQ = dU (isochoric process)
A system undergoing a process at constant volume implies that no expansion work is done, so the heat supplied contributes only to the change in internal energy. The heat capacity obtained this way is denoted $C_V$. The value of $C_V$ is always less than the value of $C_p$ ($C_V < C_p$).
Expressing the internal energy as a function of the variables $T$ and $V$ gives:
$dU = \left( \frac{\partial U}{\partial T} \right)_V dT + \left( \frac{\partial U}{\partial V} \right)_T dV$
For a constant volume ($dV = 0$) the heat capacity reads:
$C_V = \left( \frac{\partial U}{\partial T} \right)_V$
The relation between $C_V$ and $C_p$ is then:
$C_p - C_V = \left( \left( \frac{\partial U}{\partial V} \right)_T + p \right) \left( \frac{\partial V}{\partial T} \right)_p$
Calculating Cp and CV for an ideal gas
Mayer's relation:
$C_p - C_V = nR,$
together with the heat capacity ratio:
$\gamma = \frac{C_p}{C_V},$
where:
$n$ is the number of moles of the gas,
$R$ is the universal gas constant,
$\gamma$ is the heat capacity ratio (which can be calculated by knowing the number of degrees of freedom of the gas molecule).
Using the above two relations, the specific heats can be deduced as follows:
$C_V = \frac{nR}{\gamma - 1}, \qquad C_p = \gamma \frac{nR}{\gamma - 1}$
Following from the equipartition of energy, it is deduced that an ideal gas has the isochoric heat capacity
$C_V = nR \frac{N_f}{2} = nR \frac{3 + N_i}{2},$
where $N_f$ is the number of degrees of freedom of each individual particle in the gas and $N_i$ is the number of internal degrees of freedom; the number 3 comes from the three translational degrees of freedom (for a gas in 3D space). This means that a monoatomic ideal gas (with zero internal degrees of freedom) will have isochoric heat capacity $C_V = \frac{3nR}{2}$.
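The relations above can be collected into a short numeric sketch (a minimal example assuming classical ideal-gas behaviour; the degrees-of-freedom counts are the standard ones for monoatomic and rigid diatomic molecules):

```python
R = 8.314  # universal gas constant, J/(mol*K)

def ideal_gas_heat_capacities(dof: int, n: float = 1.0):
    """Return (C_V, C_p, gamma) for n moles of an ideal gas with
    dof degrees of freedom per particle (equipartition + Mayer's relation)."""
    c_v = n * R * dof / 2.0   # equipartition: C_V = n R f / 2
    c_p = c_v + n * R         # Mayer's relation: C_p - C_V = n R
    return c_v, c_p, c_p / c_v

print(ideal_gas_heat_capacities(3))  # monoatomic: ~(12.47, 20.79, 1.67)
print(ideal_gas_heat_capacities(5))  # rigid diatomic: ~(20.79, 29.10, 1.40)
```

For a monoatomic gas this reproduces $C_V = \frac{3}{2}nR \approx 12.5$ J/K per mole and $\gamma = 5/3$.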
At constant temperature (Isothermal process)
Since the temperature of the system does not change during the process, the internal energy stays constant and all of the supplied heat goes into work done by the system. An infinite amount of heat would therefore be required to increase the temperature of the system by a unit, so the heat capacity of an isothermal process is infinite (or undefined).
At the time of phase change (Phase transition)
Heat capacity of a system undergoing phase transition is infinite, because the heat is utilized in changing the state of the material rather than raising the overall temperature.
Heterogeneous objects
The heat capacity may be well-defined even for heterogeneous objects, with separate parts made of different materials; such as an electric motor, a crucible with some metal, or a whole building. In many cases, the (isobaric) heat capacity of such objects can be computed by simply adding together the (isobaric) heat capacities of the individual parts.
However, this computation is valid only when all parts of the object are at the same external pressure before and after the measurement. That may not be possible in some cases. For example, when heating an amount of gas in an elastic container, its volume and pressure will both increase, even if the atmospheric pressure outside the container is kept constant. Therefore, the effective heat capacity of the gas, in that situation, will have a value intermediate between its isobaric and isochoric capacities and .
For complex thermodynamic systems with several interacting parts and state variables, or for measurement conditions that are neither constant pressure nor constant volume, or for situations where the temperature is significantly non-uniform, the simple definitions of heat capacity above are not useful or even meaningful. The heat energy that is supplied may end up as kinetic energy (energy of motion) and potential energy (energy stored in force fields), both at macroscopic and atomic scales. Then the change in temperature will depend on the particular path that the system followed through its phase space between the initial and final states. Namely, one must somehow specify how the positions, velocities, pressures, volumes, etc. changed between the initial and final states; and use the general tools of thermodynamics to predict the system's reaction to a small energy input. The "constant volume" and "constant pressure" heating modes are just two among infinitely many paths that a simple homogeneous system can follow.
Measurement
The heat capacity can usually be measured by the method implied by its definition: start with the object at a known uniform temperature, add a known amount of heat energy to it, wait for its temperature to become uniform, and measure the change in its temperature. This method can give moderately accurate values for many solids; however, it cannot provide very precise measurements, especially for gases.
Units
International system (SI)
The SI unit for heat capacity of an object is joule per kelvin (J/K or J⋅K⁻¹). Since an increment of temperature of one degree Celsius is the same as an increment of one kelvin, that is the same unit as J/°C.
The heat capacity of an object is an amount of energy divided by a temperature change, which has the dimension L²⋅M⋅T⁻²⋅Θ⁻¹. Therefore, the SI unit J/K is equivalent to kilogram meter squared per second squared per kelvin (kg⋅m²⋅s⁻²⋅K⁻¹).
English (Imperial) engineering units
Professionals in construction, civil engineering, chemical engineering, and other technical disciplines, especially in the United States, may use the so-called English Engineering units, which include the pound (lb = 0.45359237 kg) as the unit of mass, the degree Fahrenheit or Rankine (exactly 5/9 K, about 0.55556 K) as the unit of temperature increment, and the British thermal unit (BTU ≈ 1055.06 J) as the unit of heat. In those contexts, the unit of heat capacity is 1 BTU/°R ≈ 1900 J/K. The BTU was in fact defined so that the average heat capacity of one pound of water would be 1 BTU/°F. In this regard, with respect to mass, note the conversion 1 BTU/(lb⋅°R) ≈ 4187 J/(kg⋅K) and see the calorie (below).
Calories
In chemistry, heat amounts are often measured in calories. Confusingly, two units with that name, denoted "cal" or "Cal", have been commonly used to measure amounts of heat:
The "small calorie" (or "gram-calorie", "cal") is exactly 4.184 J. It was originally defined so that the heat capacity of 1 gram of liquid water would be 1 cal/°C.
The "grand calorie" (also "kilocalorie", "kilogram-calorie", or "food calorie"; "kcal" or "Cal") is 1000 cal, that is, exactly 4184 J. It was originally defined so that the heat capacity of 1 kg of water would be 1 kcal/°C.
With these units of heat energy, the units of heat capacity are
1 cal/°C = 4.184 J/K ;
1 kcal/°C = 4184 J/K.
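These conversions can be summarised in a small helper (a sketch; the constants are the values given above, and the unit keys are illustrative names, not a standard API):

```python
# Heat-capacity unit conversions built from the values quoted above.
CAL_TO_J = 4.184          # small calorie, exact
KCAL_TO_J = 4184.0        # kilocalorie, exact
BTU_TO_J = 1055.06        # British thermal unit, approximate
DEG_R_TO_K = 5.0 / 9.0    # a Rankine (or Fahrenheit) increment, in kelvins

FACTORS_TO_J_PER_K = {
    "cal/degC": CAL_TO_J,                # 4.184 J/K
    "kcal/degC": KCAL_TO_J,              # 4184 J/K
    "BTU/degR": BTU_TO_J / DEG_R_TO_K,   # ~1899 J/K
}

def to_j_per_k(value: float, unit: str) -> float:
    """Convert a heat capacity to J/K; unit keys are illustrative names."""
    return value * FACTORS_TO_J_PER_K[unit]

print(to_j_per_k(1.0, "BTU/degR"))  # ~1899, matching the ~1900 J/K figure above
```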
Physical basis
Negative heat capacity
Most physical systems exhibit a positive heat capacity; constant-volume and constant-pressure heat capacities, rigorously defined as partial derivatives, are always positive for homogeneous bodies. However, even though it can seem paradoxical at first, there are some systems for which the heat capacity $\Delta Q / \Delta T$ is negative. Examples include a reversibly and nearly adiabatically expanding ideal gas, which cools ($\Delta T < 0$) while a small amount of heat ($\Delta Q > 0$) is put in, or methane combusting with increasing temperature ($\Delta T > 0$) while giving off heat ($\Delta Q < 0$). Others are inhomogeneous systems that do not meet the strict definition of thermodynamic equilibrium. They include gravitating objects such as stars and galaxies, and also some nano-scale clusters of a few tens of atoms close to a phase transition. A negative heat capacity can result in a negative temperature.
Stars and black holes
According to the virial theorem, for a self-gravitating body like a star or an interstellar gas cloud, the average potential energy $U_\text{pot}$ and the average kinetic energy $U_\text{kin}$ are locked together in the relation
$U_\text{pot} = -2 U_\text{kin}.$
The total energy $U$ ($= U_\text{pot} + U_\text{kin}$) therefore obeys
$U = -U_\text{kin}.$
If the system loses energy, for example, by radiating energy into space, the average kinetic energy actually increases. If a temperature is defined by the average kinetic energy, then the system therefore can be said to have a negative heat capacity.
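A one-line derivation makes the sign explicit (a sketch that models the body's interior as a classical monoatomic ideal gas, so that temperature is defined by the average kinetic energy):

```latex
% Virial theorem: U_pot = -2 U_kin, hence U = U_pot + U_kin = -U_kin.
% For a classical monoatomic ideal gas, U_kin = (3/2) N k_B T, so
U = -U_{\mathrm{kin}} = -\tfrac{3}{2} N k_B T
\quad\Longrightarrow\quad
C = \frac{\mathrm{d}U}{\mathrm{d}T} = -\tfrac{3}{2} N k_B < 0 .
```

Under this model, $\mathrm{d}U < 0$ implies $\mathrm{d}T > 0$: losing energy raises the temperature, which is the behaviour described above.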
A more extreme version of this occurs with black holes. According to black-hole thermodynamics, the more mass and energy a black hole absorbs, the colder it becomes. In contrast, if it is a net emitter of energy, through Hawking radiation, it will become hotter and hotter until it boils away.
Consequences
According to the second law of thermodynamics, when two systems with different temperatures interact via a purely thermal connection, heat will flow from the hotter system to the cooler one (this can also be understood from a statistical point of view). Therefore, if such systems have equal temperatures, they are at thermal equilibrium. However, this equilibrium is stable only if the systems have positive heat capacities. For such systems, when heat flows from a higher-temperature system to a lower-temperature one, the temperature of the first decreases and that of the latter increases, so that both approach equilibrium. In contrast, for systems with negative heat capacities, the temperature of the hotter system will further increase as it loses heat, and that of the colder will further decrease, so that they will move farther from equilibrium. This means that the equilibrium is unstable.
For example, according to theory, the smaller (less massive) a black hole is, the smaller its Schwarzschild radius will be, and therefore the greater the curvature of its event horizon will be, as well as its temperature. Thus, the smaller the black hole, the more thermal radiation it will emit and the more quickly it will evaporate by Hawking radiation.
| Physical sciences | Thermodynamics | null |
194277 | https://en.wikipedia.org/wiki/Mangonel | Mangonel | The mangonel, also called the traction trebuchet, was a type of trebuchet used in Ancient China starting from the Warring States period, and later across Eurasia by the 6th century AD. Unlike the later counterweight trebuchet, the mangonel operated on manpower: crews pulled cords attached to a lever and sling to launch projectiles.
Although the mangonel required more men to function, it was also less complex and faster to reload than the torsion-powered onager which it replaced in early Medieval Europe. It was replaced as the primary siege weapon in the 12th and 13th centuries by the counterweight trebuchet. A common misconception about the mangonel is that it was a torsion siege engine.
Etymology
The word mangonel was first attested in English in the 13th century; it is borrowed from Old French mangonel, mangonelle (> French mangonneau). The French word is from Medieval Latin manganellus, mangonellus, a diminutive form of Late Latin manganum, itself probably derived from the Greek mangana, "a generic term for construction machinery", or mágganon, "engine of war, axis of a pulley".
Mangonel was a general term for medieval stone-throwing artillery and was used more specifically to refer to manually powered (traction) weapons. It is sometimes wrongly used to refer to the onager. Modern military historians coined the term "traction trebuchet" to distinguish it from earlier torsion machines such as the onager.
The mangonel was called al-manjanīq, arrada, shaytani, or sultani in Arabic. In China, the mangonel was called the pào (砲).
History
China
The mangonel originated in ancient China. Torsion-based siege weapons such as the ballista and onager are not known to have been used in China.
The first recorded use of mangonels was in ancient China. They were probably used by the Mohists as early as the 4th century BC; descriptions can be found in the Mozi (compiled in the 4th century BC). According to the Mozi, the mangonel was high with buried below ground, the fulcrum attached was constructed from the wheels of a cart, the throwing arm was long with three quarters above the pivot and a quarter below to which the ropes are attached, and the sling long. The ranges given for projectiles are , , and . They were used as defensive weapons stationed on walls and sometimes hurled hollowed-out logs filled with burning charcoal to destroy enemy siege works. By the 1st century AD, commentators were interpreting other passages in texts such as the Zuo zhuan and Classic of Poetry as references to the mangonel: "the guai is 'a great arm of wood on which a stone is laid, and this by means of a device [ji] is shot off and so strikes down the enemy.'" The Records of the Grand Historian say that "The flying stones weigh 12 catties and by devices [ji] are shot off 300 paces." Mangonels went into decline during the Han dynasty due to long periods of peace but became a common siege weapon again during the Three Kingdoms period. They were commonly called stone-throwing machines, thunder carriages, and stone carriages in the following centuries. They were used as ship-mounted weapons by 573 for attacking enemy fortifications. It seems that during the early 7th century improvements were made to mangonels, although it is not explicitly stated what these were. According to a stele in Barkul celebrating Tang Taizong's conquest of what is now Ejin Banner, the engineer Jiang Xingben made great advancements on mangonels that were unknown in ancient times. Jiang Xingben participated in the construction of siege engines for Taizong's campaigns against the Western Regions.
In 617 Li Mi (Sui dynasty) constructed 300 mangonels for his assault on Luoyang, and in 621 Li Shimin did the same at Luoyang. Their use continued into the Song dynasty: in 1161, mangonels operated by Song soldiers fired bombs of lime and sulphur against the ships of the Jin dynasty navy during the Battle of Caishi. During the Jingde period (1004–1007), many young men rose in office due to their military accomplishments, and one such man, Zhang Cun, was said to have possessed no knowledge except how to operate a Whirlwind mangonel. When the Jurchen Jin dynasty (1115–1234) laid siege to Kaifeng in 1126, they attacked with 5,000 mangonels.
Chinese mangonels
The Wujing Zongyao lists various types of the mangonel:
Whirlwind – a swivel mangonel for shooting small missiles that could be turned to face any direction
Whirlwind battery – five whirlwind mangonels combined on a single turntable
Pao che (catapult cart) – a whirlwind mangonel on wheels
Crouching tiger – medium-sized mangonel considered stronger than the whirlwind type but weaker than the four-footed
Four-footed – a trestle-frame mangonel for shooting heavier projectiles
Two-seven component – different weight classes for the four-footed type indicated by the number of poles bound together to create the swinging arm
Spread
The mangonel was adopted by various peoples west of China such as the Byzantines, Persians, Arabs, and Avars by the sixth to seventh centuries AD. Some scholars suggest that Avars carried the mangonel westward while others claim that the Byzantines already possessed knowledge of the mangonel beforehand. Regardless of the vector of transmission, it appeared in the eastern Mediterranean by the late 6th century AD, where it replaced torsion powered siege engines such as the ballista and onager. The rapid displacement of torsion siege engines was probably due to a combination of reasons. The mangonel is simpler in design, has a faster rate of fire, increased accuracy, and comparable range and power. It was probably also safer than the twisted cords of torsion weapons, "whose bundles of taut sinews stored up huge amounts of energy even in resting state and were prone to catastrophic failure when in use." At the same time, the late Roman Empire seems to have fielded "considerably less artillery than its forebears, organised now in separate units, so the weaponry that came into the hands of successor states might have been limited in quantity." Evidence from Gaul and Germania suggests there was substantial loss of skills and techniques in artillery further west.
According to the Miracles of Saint Demetrius, probably written around 620 by John, Archbishop of Thessaloniki, the Avaro-Slavs attacked Thessaloniki in 586 with more than 50 mangonels. The bombardment lasted for hours, but the operators were inaccurate and most of the shots missed their target. When one stone did reach its target, it "demolished the top of the rampart down to the walkway." The Miracles does not provide a clear date of the siege, which could have been in 586 or 597. An argument has been made that the Byzantines were already acquainted with mangonels prior to this, based on the History written by Theophylact Simocatta in the late 620s. The account describes a captured Byzantine soldier named Busas who taught the Avars how to construct a "besieging machine", which led to their conquest of Appiaria in 587. The word used for the machine is helepolis, which does not indicate a specific siege engine; it has been variously interpreted as a battering ram, a stone-throwing trebuchet, and a siege tower. Theophylact's account is vague on descriptions of the device and on why it allowed the Avars to take Appiaria when they had already taken many Roman cities beforehand. The Greek term manganikon, from which the Arabic word for trebuchet, mandjanik, is derived, was also first used to describe Avar machines used against Constantinople in 626. Peter Purton notes that the account by Theophylact is not contemporary and was likely written when the mangonel was more common. David Graff and Purton argue that the account by Theophylact has chronological problems and does not explain why the machine used by the Avars in the Miracles was treated as a novelty in either 586 or 597, since the Byzantines would have known about it in both cases. Yet there are no descriptions of the mangonel in the west prior to the encounter with the Avars.
Purton considers it equally likely for the Avars, Byzantines, or Persians to have learned of the mangonel first in the western world. Michael Fulton says it is at least equally likely that the Avars or some other vector transmitted the technology to the Byzantines, but expressed skepticism that the mangonel was complex enough to require explanation by a captured Byzantine soldier. He described Theophylact's account as a "racially motivated explanation of how a supposedly 'barbaric' people were able to replicate and incorporate a piece of 'civilised' technology". Others like Stephen McCotter and John Haldon consider the Avar theory to be the most likely. As McCotter puts it, "there is no good reason to doubt that the Avars may have brought it and the Byzantines copied it." According to Georgios Kardaras, the idea that the Avars directly learned siegecraft from a Byzantine captive is not credible, as they had been perfectly capable of taking walled Byzantine towns beforehand and had been in contact with other tribes who engaged in siege warfare.
The Byzantines may have used the mangonel in 587 against a Persian fort near Akbas, although it does not seem to have been operated effectively, suggesting that it was still a new weapon. The Persians may have used mangonels against Dara in the early 7th century and against Jerusalem in 614. The Arabs had ship-mounted mangonels by 653 and used them at Mecca in 683. The Franks and Saxons adopted the weapon in the 8th century. The Life of Louis the Pious contains the earliest western European reference to mangonels in its account of the siege of Tortosa (808–809). In the 890s, Abbo Cernuus described mango or manganaa used at the Siege of Paris (885–886) which had high posts, presumably meaning they used trebuchet-type throwing arms. In 1173, the Republic of Pisa tried to capture an island castle with mangonels on galleys. Mangonels were also used in India.
Independent invention
According to Leif Inge Ree Peterson, a mangonel could have been used at Theodosiopolis in 421, but it was "likely an onager". Peterson says that mangonels may have been independently invented or at least known in the Eastern Mediterranean by 500 AD, based on records of different and better artillery weapons; however, there is no explicit description of a mangonel. According to Peterson's timeline and his presumption that the mangonel became widespread throughout the Roman Empire by the mid-6th century, mangonels would also have been used in Spain and Italy by the mid-6th century, in Africa by the 7th century, and by the Franks in the 8th century. Tracy Rihll suggests that the mangonel was independently invented through an evolution of the Byzantine staff-sling, although this has received little support. There are no sources indicating whether Byzantium received the mangonel from East Asia or invented it independently.
Notable uses in history
The mangonel was most efficient as an anti-personnel weapon, used in a supportive position alongside archers and slingers. Most accounts of mangonels describe them as light artillery weapons while actual penetration of defenses was the result of mining or siege towers. At the Siege of Kamacha in 766, Byzantine defenders used wooden cover to protect themselves from the enemy artillery while inflicting casualties with their stone throwers. Michael the Syrian noted that at the siege of Balis in 823 it was the defenders that suffered from bombardment rather than the fortifications. At the siege of Kaysum, Abdallah ibn Tahir al-Khurasani used artillery to damage houses in the town. The Sack of Amorium in 838 saw the use of mangonels to drive away defenders and destroy wooden defenses. At the siege of Marand in 848, mangonels were used, "reportedly killing 100 and wounding 400 on each side during the eight-month siege." During the siege of Baghdad in 865, defensive artillery was responsible for repelling an attack on the city gate while mangonels on boats claimed a hundred of the defenders' lives.
Some exceptionally large and powerful mangonels have been described during the 11th century or later. At the Siege of Manzikert (1054), the Seljuks' initial siege artillery was countered by the defenders' own, which shot stones at the besieging machine. In response, the Seljuks constructed another one requiring 400 men to pull and throw stones weighing . A breach was created on the first shot but the machine was burnt down by the defenders. According to Matthew of Edessa, this machine weighed and caused several casualties to the city's defenders. Ibn al-Adim describes a mangonel capable of throwing a man in 1089. At the siege of Haizhou in 1161, a mangonel was reported to have had a range of 200 paces (over ).
Decline
West of China, the mangonel remained the primary siege engine until the late 12th century when it was replaced by the counterweight trebuchet. In China the mangonel was the primary siege engine until the counterweight trebuchet was introduced during the Mongol conquest of the Song dynasty in the 13th century. The counterweight trebuchet did not completely replace the mangonel. Despite its greater range, counterweight trebuchets had to be constructed close to the site of the siege unlike mangonels, which were smaller, lighter, cheaper, and easier to take apart and put back together again where necessary. The superiority of the counterweight trebuchet was not clear cut. Of this, the Hongwu Emperor stated in 1388: "The old type of trebuchet was really more convenient. If you have a hundred of those machines, then when you are ready to march, each wooden pole can be carried by only four men. Then when you reach your destination, you encircle the city, set them up, and start shooting!" The mangonel continued to serve as an anti-personnel weapon. The Norwegian text of 1240, Speculum regale, explicitly states this division of functions. Mangonels were to be used for hitting people in undefended areas. As late as the Siege of Acre (1291), where the Mamluk Sultanate fielded 72 or 92 trebuchets, the majority were still mangonels while 14 or 15 were counterweight trebuchets. The counterweight trebuchets were unable to create a breach in Acre's walls and the Mamluks entered the city by sapping the northeast corner of the outer wall. The Templar of Tyre described the faster firing mangonels as more dangerous to the defenders than the counterweight trebuchets.
| Technology | Artillery | null |
194422 | https://en.wikipedia.org/wiki/Ball%20lightning | Ball lightning | Ball lightning is a rare and unexplained phenomenon described as luminescent, spherical objects that vary from pea-sized to several meters in diameter. Though usually associated with thunderstorms, the observed phenomenon is reported to last considerably longer than the split-second flash of a lightning bolt, and is a phenomenon distinct from St. Elmo's fire and will-o'-the-wisp.
Some 19th-century reports describe balls that eventually explode and leave behind an odor of sulfur. Descriptions of ball lightning appear in a variety of accounts over the centuries and have received attention from scientists. An optical spectrum of what appears to have been a ball lightning event was published in January 2014, along with high-frame-rate video.
Nevertheless, scientific data on ball lightning remain scarce.
Although laboratory experiments have produced effects that are visually similar to reports of ball lightning, how these relate to the supposed phenomenon remains unclear.
Characteristics
Descriptions of ball lightning vary widely. It has been described as moving up and down, sideways or in unpredictable trajectories, hovering and moving with or against the wind; attracted to, unaffected by, or repelled from buildings, people, cars and other objects. Some accounts describe it as moving through solid masses of wood or metal without effect, while others describe it as destructive and melting or burning those substances. Its appearance has also been linked to power lines, altitudes of and higher, and during thunderstorms and calm weather. Ball lightning has been described as transparent, translucent, multicolored, evenly lit, radiating flames, filaments or sparks, with shapes that vary between spheres, ovals, tear-drops, rods, or disks.
Ball lightning is often erroneously identified as St. Elmo's fire. They are separate and distinct phenomena.
The balls have been reported to disperse in many different ways, such as suddenly vanishing, gradually dissipating, being absorbed into an object, "popping," exploding loudly, or even exploding with force, which is sometimes reported as damaging. Accounts also vary on their alleged danger to humans, from lethal to harmless.
A review of the available literature published in 1972 identified the properties of a "typical" ball lightning, whilst cautioning against over-reliance on eye-witness accounts:
They frequently appear almost simultaneously with cloud-to-ground lightning discharge
They are generally spherical or pear-shaped with fuzzy edges
Their diameters range from , most commonly
Their brightness corresponds to roughly that of a domestic lamp, so they can be seen clearly in daylight
A wide range of colors has been observed, with red, orange, and yellow being the most common
The lifetime of each event is from one second to over a minute with the brightness remaining fairly constant during that time
They tend to move at a few meters per second, most often in a horizontal direction, but may also move vertically, remain stationary, or wander erratically
Many are described as having rotational motion
It is rare that observers report the sensation of heat, although in some cases the disappearance of the ball is accompanied by the liberation of heat
Some display an affinity for metal objects and may move along conductors such as wires, metal fences, or railroad tracks
Some appear within buildings passing through closed doors and windows
Some have appeared within metal aircraft and have entered and left without causing damage
The disappearance of a ball is generally rapid and may be either silent or explosive
Odors resembling ozone, burning sulphur, or nitrogen oxides are often reported
Historical accounts
Ball lightning is a possible source of legends that describe luminous balls, such as the mythological Anchimayen from Argentinean and Chilean Mapuche culture.
A statistical investigation carried out in 1960 found that 5.6% of the 1,962 Oak Ridge National Laboratory personnel on the monthly roll, and 3.1% of all 15,923 Union Carbide Nuclear Company personnel in Oak Ridge, reported having seen ball lightning. A Scientific American article summarized the study as having found that ball lightning had been seen by 5% of the population of the Earth. Another study analyzed reports of more than 2,000 cases.
Gervase of Canterbury
The chronicle of Gervase of Canterbury, an English monk, contains what is possibly the earliest known reference to ball lightning, dated 7 June 1195. He states, "A marvellous sign descended near London", consisting of a dense and dark cloud, emitting a white substance that grew into a spherical shape under the cloud, from which a fiery globe fell towards the river.
Physicist Emeritus Professor Brian Tanner and historian Giles Gasper of Durham University identified the chronicle entry as probably describing ball lightning, and noted its similarity to other accounts:
Great Thunderstorm of Widecombe-in-the-Moor
One early account reports on the Great Thunderstorm at a church in Widecombe-in-the-Moor, Devon, in England, on 21 October 1638. Four people died and approximately 60 suffered injuries during a severe storm. Witnesses described a ball of fire striking and entering the church, nearly destroying it. Large stones from the church walls were hurled onto the ground and through large wooden beams. The ball of fire allegedly smashed the pews and many windows, and filled the church with a foul sulphurous odour and dark, thick smoke.
The ball of fire reportedly divided into two segments, one exiting through a window by smashing it open, the other disappearing somewhere inside the church. Because of the fire and sulphur smell, contemporaries explained the ball of fire as "the devil" or as the "flames of hell". Later, some blamed the entire incident on two people who had been playing cards in the pews during the sermon, thereby incurring God's wrath.
The sloop Catherine and Mary
In December 1726, a number of British newspapers printed an extract of a letter from John Howell of the sloop Catherine and Mary:
HMS Montague
One particularly large example was reported "on the authority of Dr. Gregory" in 1749:
Admiral Chambers on board the Montague, 4 November 1749, was taking an observation just before noon...he observed a large ball of blue fire about distant from them. They immediately lowered their topsails, but it came up so fast upon them, that, before they could raise the main tack, they observed the ball rise almost perpendicularly, and not above from the main chains when it went off with an explosion, as great as if a hundred cannons had been discharged at the same time, leaving behind it a strong sulphurous smell. By this explosion the main top-mast was shattered into pieces and the main mast went down to the keel.
Five men were knocked down and one of them very bruised. Just before the explosion, the ball seemed to be the size of a large mill-stone.
Georg Richmann
A 1753 report recounts lethal ball lightning when professor Georg Richmann of Saint Petersburg, Russia, constructed a kite-flying apparatus similar to Benjamin Franklin's proposal a year earlier. Richmann was attending a meeting of the Academy of Sciences when he heard thunder and ran home with his engraver to capture the event for posterity. While the experiment was under way, ball lightning appeared, travelled down the string, struck Richmann's forehead and killed him. The ball had left a red spot on Richmann's forehead, his shoes were blown open, and his clothing was singed. His engraver was knocked unconscious. The door-frame of the room was split and the door was torn from its hinges.
HMS Warren Hastings
An English journal reported that during an 1809 storm, three "balls of fire" appeared and "attacked" the British ship HMS Warren Hastings. The crew watched one ball descend, killing a man on deck and setting the main mast on fire. A crewman went out to retrieve the fallen body and was struck by a second ball, which knocked him back and left him with mild burns. A third man was killed by contact with the third ball. Crew members reported a persistent, sickening sulphur smell afterward.
Ebenezer Cobham Brewer
Ebenezer Cobham Brewer, in his 1864 US edition of A Guide to the Scientific Knowledge of Things Familiar, discusses "globular lightning". He describes it as slow-moving balls of fire or explosive gas that sometimes fall to the earth or run along the ground during a thunderstorm. He said that the balls sometimes split into smaller balls and may explode "like a cannon".
Wilfrid de Fonvielle
In his book Thunder and Lightning, translated into English in 1875, French science-writer Wilfrid de Fonvielle wrote that there had been about 150 reports of globular lightning:
Globular lightning seems to be particularly attracted to metals; thus it will seek the railings of balconies, or else water or gas pipes etc., It has no peculiar tint of its own but will appear of any colour as the case may be ... at Coethen in the Duchy of Anhalt it appeared green. M. Colon, Vice-President of the Geological Society of Paris, saw a ball of lightning descend slowly from the sky along the bark of a poplar tree; as soon as it touched the earth it bounced up again, and disappeared without exploding. On 10th of September 1845 a ball of lightning entered the kitchen of a house in the village of Salagnac in the valley of Correze. This ball rolled across without doing any harm to two women and a young man who were here; but on getting into an adjoining stable it exploded and killed a pig which happened to be shut up there, and which, knowing nothing about the wonders of thunder and lightning, dared to smell it in the most rude and unbecoming manner.
The motion of such balls is far from being very rapid – they have even been observed occasionally to pause in their course, but they are not the less destructive for all that. A ball of lightning which entered the church of Stralsund, on exploding, projected a number of balls which exploded in their turn like shells.
Tsar Nicholas II
Nicholas II, the final tsar of the Russian Empire, reported witnessing a fiery ball as a child attending church in the company of his grandfather Alexander II.
Once my parents were away, and I was at the all-night vigil with my grandfather in the small church in Alexandria. During the service there was a powerful thunderstorm, streaks of lightning flashed one after the other, and it seemed as if the peals of thunder would shake even the church and the whole world to its foundations. Suddenly it became quite dark, a blast of wind from the open door blew out the flame of the candles which were lit in front of the iconostasis, there was a long clap of thunder, louder than before, and I suddenly saw a fiery ball flying from the window straight towards the head of the Emperor. The ball (it was of lightning) whirled around the floor, then passed the chandelier and flew out through the door into the park. My heart froze, I glanced at my grandfather – his face was completely calm. He crossed himself just as calmly as he had when the fiery ball had flown near us, and I felt that it was unseemly and not courageous to be frightened as I was. I felt that one had only to look at what was happening and believe in the mercy of God, as he, my grandfather, did. After the ball had passed through the whole church, and suddenly gone out through the door, I again looked at my grandfather. A faint smile was on his face, and he nodded his head at me. My panic disappeared, and from that time I had no more fear of storms.
Aleister Crowley
British occultist Aleister Crowley reported witnessing what he referred to as "globular electricity" during a thunderstorm on Lake Pasquaney in New Hampshire, United States, in 1916. He was sheltered in a small cottage when he, in his own words,
...noticed, with what I can only describe as calm amazement, that a dazzling globe of electric fire, apparently between in diameter, was stationary about below and to the right of my right knee. As I looked at it, it exploded with a sharp report quite impossible to confuse with the continuous turmoil of the lightning, thunder and hail, or that of the lashed water and smashed wood which was creating a pandemonium outside the cottage. I felt a very slight shock in the middle of my right hand, which was closer to the globe than any other part of my body.
R. C. Jennison
Jennison, of the Electronics Laboratory at the University of Kent, described his own observation of ball lightning in an article published in Nature in 1969:
I was seated near the front of the passenger cabin of an all-metal airliner (Eastern Airlines Flight EA 539) on a late night flight from New York to Washington. The aircraft encountered an electrical storm during which it was enveloped in a sudden bright and loud electrical discharge (0005 h EST, March 19, 1963). Some seconds after this a glowing sphere a little more than in diameter emerged from the pilot's cabin and passed down the aisle of the aircraft approximately from me, maintaining the same height and course for the whole distance over which it could be observed.
Other accounts
Willy Ley discussed a sighting in Paris on 5 July 1852 "for which sworn statements were filed with the French Academy of Science". During a thunderstorm, a tailor living next to the Church of the Val-de-Grâce saw a ball the size of a human head come out of the fireplace. It flew around the room, reentered the fireplace, and exploded inside it, destroying the top of the chimney.
On 30 April 1877 a ball of lightning entered the Golden Temple at Amritsar, India, and exited through a side door. Several people observed the ball, and the incident is inscribed on the front wall of Darshani Deori.
On 22 November 1894, an unusually prolonged display of ball lightning occurred in Golden, Colorado. It played around a building housing an electrical plant, which has been taken to suggest that the phenomenon can be drawn from the atmosphere by electrical equipment. The Golden Globe newspaper reported:
A beautiful yet strange phenomenon was seen in this city on last Monday night. The wind was high and the air seemed to be full of electricity. In front of, above and around the new Hall of Engineering of the School of Mines, balls of fire played tag for half an hour, to the wonder and amazement of all who saw the display. In this building is situated the dynamos and electrical apparatus of perhaps the finest electrical plant of its size in the state. There was probably a visiting delegation from the clouds, to the captives of the dynamos on last Monday night, and they certainly had a fine visit and a roystering game of romp.
On 22 May 1901 in the Kazakh city of Ouralsk in the Russian Empire (now Oral, Kazakhstan), "a dazzlingly brilliant ball of fire" descended gradually from the sky during a thunderstorm, then entered a house where 21 people had taken refuge, "wreaked havoc with the apartment, broke through the wall into a stove in the adjoining room, smashed the stove-pipe, and carried it off with such violence that it was dashed against the opposite wall, and went out through the broken window". The incident was reported in the Bulletin de la Société astronomique de France the following year.
In July 1907 ball lightning hit the Cape Naturaliste Lighthouse in Western Australia. Lighthouse-keeper Patrick Baird was in the tower at the time and was knocked unconscious. His daughter Ethel recorded the event.
Ley discussed another incident in Bischofswerda, Germany. On 29 April 1925 multiple witnesses saw a silent ball land near a mailman, move along a telephone wire to a school, knock back a teacher who was using a telephone, and bore perfectly round coin-sized holes through a glass pane. A length of wire was melted, several telephone poles were damaged, an underground cable was broken, and several workmen were thrown to the ground but left unhurt.
An early reference to ball lightning appears in a children's book set in the 19th century by Laura Ingalls Wilder. The books are considered historical fiction, but the author always insisted they were descriptive of actual events in her life. In Wilder's description, three separate balls of lightning appear during a winter blizzard near a cast-iron stove in the family's kitchen. They are described as appearing near the stovepipe, then rolling across the floor, only to disappear as the mother (Caroline Ingalls) chases them with a willow-branch broom.
Pilots in World War II (1939–1945) described an unusual phenomenon for which ball lightning has been suggested as an explanation. The pilots saw small balls of light moving in strange trajectories, which came to be referred to as foo fighters.
Submariners in World War II gave the most frequent and consistent accounts of small ball lightning in the confined submarine atmosphere. There are repeated accounts of the inadvertent production of floating explosive balls when battery banks were switched in or out, especially if mis-switched, or when the highly inductive electric motors were misconnected or disconnected. A later attempt to duplicate those balls with a surplus submarine battery resulted in several failures and an explosion.
On 6 August 1994, ball lightning is believed to have passed through a closed window in Uppsala, Sweden, leaving a circular hole about in diameter. The hole was found days later and was thought to date from the thunderstorm; a lightning strike had been witnessed by residents in the area and recorded by a lightning-strike tracking system at the Division for Electricity and Lightning Research at Uppsala University.
In 2005 an incident occurred in Guernsey, where an apparent lightning-strike on an aircraft led to multiple fireball sightings on the ground.
On 10 July 2011, during a powerful thunderstorm, a ball of light with a tail entered through a window into the control room of the local emergency services in Liberec, Czech Republic. The ball bounced from the window to the ceiling, then to the floor and back to the ceiling, along which it rolled for two or three meters before dropping to the floor and disappearing. The staff in the control room were frightened, smelled electricity and burning cables, and thought something was on fire. The computers froze (though they did not crash), and all communications equipment was knocked out for the night until restored by technicians. Apart from the disrupted equipment, only one computer monitor was destroyed.
On 15 December 2014, Loganair Flight 6780 in Scotland experienced ball lightning in the forward cabin just before lightning struck the aircraft's nose. The plane fell several thousand feet, coming within 1,100 feet of the North Sea, before making an emergency landing at Aberdeen Airport.
On June 24, 2022, during the passage of a massive thunderstorm front, a retired woman in Liebenberg, Lower Austria, saw blinding cloud-to-ground lightning to the northeast and, within a minute, spotted a yellowish "burning object with licking flames" that followed a wavy trajectory along the local road about 15 m above the ground and was lost from sight after about 2 seconds. It occurred at the end of a local thunderstorm cell, and the European Severe Storms Laboratory recorded the event as ball lightning.
Direct measurements of natural ball lightning
In January 2014, scientists from Northwest Normal University in Lanzhou, China, published the results of recordings made in July 2012 of the optical spectrum of what was thought to be natural ball lightning made by chance during the study of ordinary cloud–ground lightning on the Tibetan Plateau. At a distance of , a total of 1.64 seconds of digital video of the ball lightning and its spectrum was made, from the formation of the ball lightning after the ordinary lightning struck the ground, up to the optical decay of the phenomenon. Additional video was recorded by a high-speed (3000 frames/sec) camera, which captured only the last 0.78 seconds of the event, due to its limited recording capacity. Both cameras were equipped with slitless spectrographs. The researchers detected emission lines of neutral atomic silicon, calcium, iron, nitrogen, and oxygen—in contrast with mainly ionized nitrogen emission lines in the spectrum of the parent lightning. The ball lightning traveled horizontally across the video frame at an average speed equivalent of . It had a diameter of and covered a distance of about within those 1.64 s.
Oscillations at a frequency of 100 hertz were observed in the light intensity and in the oxygen and nitrogen emission, possibly caused by the electromagnetic field of a 50 Hz high-voltage power transmission line in the vicinity. From the spectrum, the temperature of the ball lightning was assessed as being lower than the temperature of the parent lightning (<15,000 to 30,000 K). The observed data are consistent with vaporization of soil as well as with ball lightning's sensitivity to electric fields.
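The 100 Hz modulation is itself telling: an emission intensity that responds to the square of a 50 Hz driving field oscillates at exactly twice the field frequency, since a squared sinusoid has two peaks per cycle. A minimal numerical sketch of this frequency doubling, in Python with NumPy (the quadratic response is an assumption made here for illustration, not a claim from the study):

import numpy as np

fs = 10_000                             # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)         # one second of signal
field = np.sin(2 * np.pi * 50 * t)      # 50 Hz driving field

# Assume the optical intensity follows the square of the field.
intensity = field ** 2

spectrum = np.abs(np.fft.rfft(intensity))
freqs = np.fft.rfftfreq(len(intensity), d=1.0 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
print(f"dominant modulation: {peak:.0f} Hz")  # prints: 100 Hz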
Laboratory experiments
Scientists have long attempted to produce ball lightning in laboratory experiments. While some experiments have produced effects that are visually similar to reports of natural ball lightning, it has not yet been determined whether there is any relation.
Nikola Tesla reportedly produced such balls artificially and gave some demonstrations of his ability, but he was more interested in higher voltages and powers and in the remote transmission of power, so the balls he made were just a curiosity.
The International Committee on Ball Lightning (ICBL) held regular symposia on the subject; a related group uses the generic name "Unconventional Plasmas". The last ICBL symposium was tentatively scheduled for July 2012 in San Marcos, Texas, but was cancelled for lack of submitted abstracts.
Wave-guided microwaves
Ohtsuki and Ofuruton described producing "plasma fireballs" by microwave interference within an air-filled cylindrical cavity fed by a rectangular waveguide using a 2.45 GHz, 5 kW (maximum power) microwave oscillator.
Water discharge experiments
Some scientific groups, including the Max Planck Institute, have reportedly produced a ball lightning-type effect by discharging a high-voltage capacitor in a tank of water.
Home microwave oven experiments
Many modern experiments involve using a microwave oven to produce small rising glowing balls, often referred to as plasma balls.
Generally, the experiments are conducted by placing a lit or recently extinguished match or other small object in a microwave oven. The burnt portion of the object flares up into a large ball of fire, while "plasma balls" float near the ceiling of the oven chamber. Some experimenters cover the match with an inverted glass jar, which contains both the flame and the balls so that they do not damage the chamber walls, although the jar itself may eventually shatter, rather than the oven interior merely suffering charred paint or melted metal. Experiments by Eli Jerby and Vladimir Dikhtyar in Israel revealed that microwave plasma balls are made up of nanoparticles with an average radius of . The team demonstrated the phenomenon with copper, salts, water and carbon.
Silicon experiments
Experiments in 2007 involved shocking silicon wafers with electricity, which vaporizes the silicon and induces oxidation in the vapors. The visual effect can be described as small glowing, sparkling orbs that roll around a surface. Two Brazilian scientists, Antonio Pavão and Gerson Paiva of the Federal University of Pernambuco, have reportedly produced small, long-lasting balls consistently by this method. These experiments stemmed from the theory that ball lightning is actually oxidized silicon vapor (see the vaporized silicon hypothesis, below).
Proposed scientific explanations
There is at present no widely accepted explanation for ball lightning. Several hypotheses have been advanced since the phenomenon was brought into the scientific realm by the English physician and electrical researcher William Snow Harris in 1843, and French Academy scientist François Arago in 1855.
Vaporized silicon hypothesis
This hypothesis suggests that ball lightning consists of vaporized silicon burning through oxidation. Lightning striking Earth's soil could vaporize the silica contained within it and somehow separate the oxygen from the silicon dioxide, leaving pure silicon vapor. As it cools, the silicon could condense into a floating aerosol, bound together by its charge and glowing from the heat of silicon recombining with oxygen. An experimental investigation of this effect, published in 2007, reported producing "luminous balls with lifetime in the order of seconds" by evaporating pure silicon with an electric arc. Videos and spectrographs of this experiment have been made available. The hypothesis gained significant supporting data in 2014, when the first recorded spectra of natural ball lightning were published. The theorized forms of silicon storage in soil include nanoparticles of Si, SiO, and SiC.
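For concreteness, the chemistry usually invoked (the hypothesis traces to a 2000 Nature paper by John Abrahamson and James Dinniss) is a carbothermal reduction in the heat of the strike followed by slow, glowing re-oxidation. In LaTeX notation, and with soil carbon assumed to be the reducing agent:

\mathrm{SiO_2 + 2\,C \longrightarrow Si + 2\,CO} \qquad \text{(in the stroke channel)}

\mathrm{Si + O_2 \longrightarrow SiO_2} + \text{heat} \qquad \text{(the luminous, cooling aerosol)}

The exothermic second step is the one the 2007 arc experiments reproduced by evaporating pure silicon.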
Matthew Francis has dubbed this the "dirt clod hypothesis", since the recorded spectrum of ball lightning shows that it shares chemistry with the soil.
Electrically charged solid-core model
In this model ball lightning is assumed to have a solid, positively charged core. According to this underlying assumption, the core is surrounded by a thin electron layer with a charge nearly equal in magnitude to that of the core. A vacuum exists between the core and the electron layer containing an intense electromagnetic (EM) field, which is reflected and guided by the electron layer. The microwave EM field applies a ponderomotive force (radiation pressure) to the electrons preventing them from falling into the core.
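The ponderomotive force invoked here is the standard time-averaged force on a charge in a spatially nonuniform oscillating field. For an electron of charge e and mass m_e in a field of local amplitude E and angular frequency \omega, the textbook expression (quoted only to make the balance argument concrete) is

\mathbf{F}_p = -\frac{e^2}{4 m_e \omega^2} \nabla E^2 .

Because this force points from strong field toward weak field, a field concentrated in the gap between core and shell pushes the electron layer outward, countering the electrostatic pull of the positive core.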
Microwave cavity hypothesis
Pyotr Kapitsa proposed that ball lightning is a glow discharge driven by microwave radiation that is guided to the ball along lines of ionized air from lightning clouds where it is produced. The ball serves as a resonant microwave cavity, automatically adjusting its radius to the wavelength of the microwave radiation so that resonance is maintained.
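A quick way to see what the resonance condition implies for size: if the ball behaves as the lowest TM mode of a spherical cavity, the standard eigenvalue is ka ≈ 2.744, so the radius scales directly with the microwave wavelength. The Python sketch below evaluates this for a few assumed frequencies; the frequencies are illustrative choices, not values taken from Kapitsa:

import math

C = 299_792_458.0   # speed of light, m/s
KA = 2.744          # lowest TM-mode eigenvalue of a spherical cavity

def resonant_diameter(freq_hz):
    """Diameter (m) of a spherical cavity resonant at freq_hz."""
    wavelength = C / freq_hz
    return 2 * KA * wavelength / (2 * math.pi)

for ghz in (1.0, 2.45, 5.0):
    d_cm = 100 * resonant_diameter(ghz * 1e9)
    print(f"{ghz:.2f} GHz -> ball diameter ~ {d_cm:.0f} cm")

The output (roughly 26 cm at 1 GHz, 11 cm at 2.45 GHz, and 5 cm at 5 GHz) falls in the centimetre-to-decimetre range commonly reported for ball lightning, which is the attraction of tying the ball's size to the radiation's wavelength.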
The Handel Maser-Soliton theory of ball lightning hypothesizes that the energy source generating the ball lightning is a large (several cubic kilometers) atmospheric maser. The ball lightning appears as a plasma caviton at the antinodal plane of the microwave radiation from the maser.
In 2017, researchers from Zhejiang University in Hangzhou, China, proposed that the bright glow of lightning balls is created when microwaves become trapped inside a plasma bubble. At the tip of a lightning stroke reaching the ground, a relativistic electron bunch can be produced, which in turn emits intense microwave radiation.
The radiation ionizes the local air, and its radiation pressure evacuates the resulting plasma, forming a spherical plasma bubble that stably traps the radiation. Microwaves trapped inside the ball continue to generate plasma for a moment, maintaining the bright flashes described in observer accounts. The ball eventually fades as the radiation held within the bubble decays and microwaves leak from the sphere; the ball can also explode dramatically if the structure destabilizes. The theory could explain many of the strange characteristics of ball lightning: for instance, microwaves can pass through glass, which would explain why balls have been reported forming indoors.
Soliton hypothesis
Julio Rubinstein, David Finkelstein, and James R. Powell proposed between 1964 and 1970 that ball lightning is a detached St. Elmo's fire. St. Elmo's fire arises when a sharp conductor, such as a ship's mast, amplifies the atmospheric electric field to breakdown. For a sphere the amplification factor is 3, and a free ball of ionized air can amplify the ambient field this much by its own conductivity. When this amplification maintains the ionization, the ball is a soliton in the flow of atmospheric electricity.
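The "amplification factor 3" is the textbook electrostatics result for a conducting sphere in a uniform ambient field E_0: the induced surface charge makes the local surface field

E(\theta) = 3 E_0 \cos\theta , \qquad E_{\max} = 3 E_0 \ \text{at the poles.}

Taking the breakdown strength of air as roughly 3 MV/m, a conductive ball could therefore sustain ionization at its poles in an ambient field of only about 1 MV/m; this is the sense in which a free ball of ionized air "amplifies" the ambient field through its own conductivity.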
Powell's kinetic-theory calculation found that the ball size is set by the second Townsend coefficient (the mean free path of conduction electrons) near breakdown. Wandering glow discharges are found to occur within certain industrial microwave ovens and continue to glow for several seconds after the power is shut off. Arcs drawn from high-power low-voltage microwave generators also exhibit afterglow. Powell measured their spectra and found that the afterglow comes mostly from metastable NO ions, which are long-lived at low temperatures. It occurred in air and in nitrous oxide, which possess such metastable ions, and not in atmospheres of argon, carbon dioxide, or helium, which do not.
The soliton model of ball lightning has been developed further. It has been suggested that ball lightning is based on spherically symmetric nonlinear oscillations of charged particles in plasma – the analogue of a spatial Langmuir soliton. These oscillations have been described in both classical and quantum approaches, and the most intense plasma oscillations were found to occur in the central regions of the ball. It has also been suggested that bound states of radially oscillating charged particles with oppositely oriented spins – the analogue of Cooper pairs – can appear inside the ball, which in turn could lead to a superconducting phase; the idea of superconductivity in ball lightning had been considered earlier. The possibility of ball lightning with a composite core has also been discussed within this model.
Hydrodynamic vortex ring antisymmetry
One theory that may account for the wide spectrum of observational evidence is combustion inside the low-velocity region of the spherical vortex breakdown of a natural vortex (e.g., Hill's spherical vortex).
Nanobattery hypothesis
Oleg Meshcheryakov suggests that ball lightning is made of composite nano- or submicrometer-sized particles, each particle constituting a battery. A surface discharge shorts these batteries, causing a current that forms the ball. His model is described as an aerosol model that explains all the observable properties and processes of ball lightning.
Buoyant plasma hypothesis
The declassified Project Condign report concludes that buoyant charged plasma formations similar to ball lightning are formed by novel physical, electrical, and magnetic phenomena, and that these charged plasmas are capable of being transported at enormous speeds under the influence and balance of electrical charges in the atmosphere. These plasmas appear to originate due to more than one set of weather and electrically charged conditions, the scientific rationale for which is incomplete or not fully understood. One suggestion is that meteoroids breaking up in the atmosphere and forming charged plasmas as opposed to burning completely or impacting as meteorites could explain some instances of the phenomena, in addition to other unknown atmospheric events. However, according to Stenhoff, this explanation is considered insufficient to explain the ball lightning phenomenon, and would likely not withstand peer review.
Hallucinations induced by magnetic field
Cooray and Cooray (2008) stated that the features of hallucinations experienced by patients having epileptic seizures in the occipital lobe are similar to the observed features of ball lightning. The study also showed that the rapidly changing magnetic field of a close lightning flash is strong enough to excite the neurons in the brain. This strengthens the possibility of lightning-induced seizure in the occipital lobe of a person close to a lightning strike, establishing the connection between epileptic hallucination mimicking ball lightning and thunderstorms.
More recent research using transcranial magnetic stimulation has reproduced similar hallucinations in the laboratory (termed magnetophosphenes), and the conditions required have been shown to occur in nature near lightning strikes.
This hypothesis fails to explain observed physical damage caused by ball lightning or simultaneous observation by multiple witnesses. (At the very least, observations would differ substantially.)
Theoretical calculations from University of Innsbruck researchers suggest that the magnetic fields involved in certain types of lightning strikes could potentially induce visual hallucinations resembling ball lightning. Such fields, which are found within close distances to a point in which multiple lightning strikes have occurred over a few seconds, can directly cause the neurons in the visual cortex to fire, resulting in magnetophosphenes (magnetically induced visual hallucinations).
Rydberg matter concept
Manykin et al. have suggested atmospheric Rydberg matter as an explanation of ball lightning. Rydberg matter is a condensed form of highly excited atoms, in many respects similar to electron–hole droplets in semiconductors; in contrast to electron–hole droplets, however, Rydberg matter has an extended lifetime of up to hours. This condensed excited state of matter is supported by experiments, mainly by a group led by Holmlid. It resembles a liquid or solid state of matter with extremely low (gas-like) density. Lumps of atmospheric Rydberg matter could result from the condensation of highly excited atoms formed by atmospheric electrical phenomena, mainly linear lightning. Stimulated decay of Rydberg matter clouds can, however, take the form of an avalanche and so appear as an explosion.
Vacuum hypothesis
In December 1899, Nikola Tesla theorized that the balls consisted of a highly rarefied hot gas.
Electron-ion model
Fedosin presented a model in which charged ions are located inside the ball lightning, and electrons rotate in the shell, creating a magnetic field.
The long-term stability of ball lightning is ensured in this model by the balance of electric and magnetic forces. The electric force exerted on the electrons by the positive volume charge of the ions is the centripetal force that holds the electrons in place as they rotate; in turn, the ions are held by the magnetic field, which makes them rotate around the magnetic field lines. The model predicts a maximum diameter of 34 cm for ball lightning, with the ball carrying a positive charge of about 10 microcoulombs and an energy reaching 11 kilojoules.
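As a rough plausibility check on these figures (plain Coulomb's law applied to the quoted numbers, not part of the model's own derivation), the field at the surface of a 34 cm ball carrying 10 microcoulombs comes out near the dielectric strength of air:

K = 8.9875e9          # Coulomb constant, N*m^2/C^2
Q = 10e-6             # quoted charge, C
R = 0.34 / 2          # quoted maximum diameter -> radius, m

E_surface = K * Q / R ** 2
print(f"surface field ~ {E_surface / 1e6:.1f} MV/m")   # ~3.1 MV/m

Air breaks down at roughly 3 MV/m, so a ball carrying much more charge at this size would simply discharge into the surrounding air, consistent with the model treating these values as maxima.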
The electron-ion model describes not only ball lightning but also bead lightning, which usually occurs when linear lightning disintegrates. From the known dimensions of the beads of bead lightning, it is possible to calculate the electric charge of a single bead and its magnetic field. The electric repulsion between neighboring beads is balanced by the magnetic attraction between them, and since these electromagnetic forces significantly exceed the force of wind pressure, the beads hold their positions until the bead lightning is extinguished.
Other hypotheses
Several other hypotheses have been proposed to explain ball lightning:
Spinning electric dipole hypothesis. A 1976 article by V. G. Endean postulated that ball lightning could be described as an electric field vector spinning in the microwave frequency region.
Electrostatic Leyden jar models. Stanley Singer discussed this type of hypothesis in 1971 and suggested that the electrical recombination time would be too short to account for the ball lightning lifetimes often reported.
Smirnov proposed (1987) a fractal aerogel hypothesis.
M. I. Zelikin proposed (2006) an explanation, with a rigorous mathematical foundation, based on the hypothesis of plasma superconductivity.
A. Meessen presented a theory at the 10th International Symposium on Ball Lightning (June 21–27, 2010, Kaliningrad, Russia) explaining all known properties of ball lightning in terms of collective oscillations of free electrons. The simplest case corresponds to radial oscillations in a spherical plasma membrane, sustained by parametric amplification resulting from the regular "inhalation" of charged particles present at lower densities in the ambient air. Ball lightning thus vanishes by silent extinction when the available density of charged particles is too low, and disappears with a loud and sometimes very violent explosion when that density is too high. Electronic oscillations are also possible as stationary waves in a plasma ball or thick plasma membrane, yielding concentric luminous bubbles.
| Physical sciences | Storms | Earth science |
194538 | https://en.wikipedia.org/wiki/Thumb | Thumb | The thumb is the first digit of the hand, next to the index finger. When a person is standing in the medical anatomical position (where the palm faces to the front), the thumb is the outermost digit. The medical Latin name for the thumb is pollex (compare hallux for the big toe), and the corresponding adjective is pollical.
Definition
Thumb and fingers
The English word finger has two senses, even in the context of appendages of a single typical human hand:
1) Any of the five terminal members of the hand.
2) Any of the four terminal members of the hand, other than the thumb.
Linguistically, it appears that the original sense was the first of these two: the word goes back, in the inferred Proto-Indo-European language, to a suffixed form of the root for 'five', which has given rise to many Indo-European-family words (tens of them defined in English dictionaries) that involve, or stem from, concepts of fiveness.
The thumb shares the following with each of the other four fingers:
Having a skeleton of phalanges, joined by hinge-like joints that provide flexion toward the palm of the hand
Having a dorsal surface that features hair and a nail, and a hairless palmar aspect with fingerprint ridges
The thumb contrasts with each of the other four fingers by being the only one that:
Is opposable to the other four fingers
Has two phalanges rather than three (though there have recently been reports that the thumb, like the other fingers, in fact has three phalanges and instead lacks a metacarpal bone)
Has greater breadth in the distal phalanx than in the proximal phalanx
Is attached to such a mobile metacarpus (which produces most of the opposability)
Curls horizontally instead of vertically
and hence the etymology of the word: the Proto-Indo-European root means 'swelling' (cf. 'tumor' and 'thigh'), the thumb being the stoutest of the fingers.
Opposition and apposition
Humans
Anatomists and other researchers focused on human anatomy have hundreds of definitions of opposition. Some anatomists restrict opposition to when the thumb is approximated to the fifth finger (little finger) and refer to other approximations between the thumb and other fingers as apposition. To anatomists, this makes sense as two intrinsic hand muscles are named for this specific movement (the opponens pollicis and opponens digiti minimi respectively).
Other researchers use another definition, referring to opposition-apposition as the transition between flexion-abduction and extension-adduction; the side of the distal thumb phalanx thus approximated to the palm or the hand's radial side (side of index finger) during apposition and the pulp or "palmar" side of the distal thumb phalanx approximated to either the palm or other fingers during opposition.
Moving a limb back to its neutral position is called reposition and a rotary movement is referred to as circumduction.
Primatologists and hand research pioneers John and Prudence Napier defined opposition as: "A movement by which the pulp surface of the thumb is placed squarely in contact with or diametrically opposite to the terminal pads of one or all of the remaining fingers." For this true, pulp-to-pulp opposition to be possible, the thumb must rotate about its long axis (at the carpometacarpal joint). Arguably, this definition was chosen to underline what is unique to the human thumb.
Other primates
Primates fall into one of six groups:
Thumbless: spider monkey and colobus
Nonopposable thumbs: tarsiers (which are found in the islands of Southeast Asia), marmosets (which are New World monkeys)
Pseudo-opposable thumbs: all strepsirrhines (lemurs, pottos and lorises) and Cebidae (capuchin and squirrel monkeys, which are New World monkeys)
Opposable thumbs: Old World monkeys (Cercopithecidae) except colobus, and all great apes
Opposable with comparatively long thumbs: gibbons (or lesser apes)
Yet to be classified: other New World monkeys (tamarins, Aotidae: night or owl monkeys, Pitheciidae: titis, sakis and uakaris, Atelidae: howler and woolly monkeys)
The spider monkey compensates for being virtually thumbless by using the hairless part of its long, prehensile tail for grabbing objects. In apes and Old World monkeys, the thumb can be rotated around its axis, but the extensive area of contact between the pulps of the thumb and index finger is a human characteristic.
Darwinius masillae, an Eocene primate transitional fossil between prosimian and simian, had hands and feet with highly flexible digits featuring opposable thumbs and halluces.
Other placental mammals
Giant pandas — five clawed fingers plus an extra-long sesamoid bone beside the true first finger that, though not a true finger, works like an opposable thumb.
Most rodents have a partly opposable toe on each front paw, letting them grasp.
In some mice, the hallux ("big toe") is clawless and fully opposable, including arboreal species such as Hapalomys, Chiropodomys, Vandeleuria, and Chiromyscus; and saltatorial, bipedal species such as Notomys and possibly some Gerbillinae.
The East African maned rat (Lophiomys imhausi), an arboreal, porcupine-like rodent, has four fingers on its hands and feet and a partially opposable thumb.
Additionally, in many polydactyl cats, both the innermost toe and outermost toe (pinky) may become opposable, allowing the cat to perform more complex tasks.
Marsupials
In most phalangerid marsupials (a family of possums) except species Trichosurus and Wyulda, the first and second toes of the forefoot are opposable to the other three. In the hind foot, the first toe is clawless but opposable and provides firm grip on branches. The second and third toes are partly syndactylous, united by skin at the top joint while the two separate nails serve as hair combs. The fourth and fifth toes are the largest of the hind foot.
Koalas have five toes on their fore and hind feet with sharp curved claws except for the first toe of the hind foot. The first and second toes of the forefeet are opposable to the other three, which enables the koala to grip smaller branches and search for fresh leaves in the outer canopy. Similar to the phalangerids, the second and third toes of the hind foot are fused but have separate claws.
Opossums are New World marsupials with opposable thumbs in the hind feet giving these animals their characteristic grasping capability (with the exception of the water opossum, the webbed feet of which restrict opposability).
The mouse-like microbiotheres were a group of South American marsupials most closely related to Australian marsupials. The only extant member, Dromiciops gliroides, is not closely related to opossums but has paws similar to these animals, each having opposable toes adapted for gripping.
Reptiles
The front feet of chameleons are organized into a medial bundle of toes 1, 2 and 3, and a lateral bundle of toes 4 and 5, and the hind feet are organized into a medial bundle of toes 1 and 2, and a lateral bundle of toes 3, 4 and 5.
Dinosaurs
Dinosaurs belonging to the bird-like family Troodontidae had a partially opposable finger. It is possible that this adaptation was used to better manipulate objects on the ground or to move undergrowth branches when searching for prey.
The small predatory dinosaur Bambiraptor may have had mutually opposable first and third fingers and a forelimb manoeuvrability that would allow the hand to reach its mouth. Its forelimb morphology and range of motion enabled two-handed prehension, one-handed clutching of objects to the chest, and use of the hand as a hook.
Nqwebasaurus — a coelurosaur with a long, three-fingered hand which included a partially opposable thumb (a "killer claw").
In addition to these, some other dinosaurs may have had partially or completely opposed toes in order to manipulate food and/or grasp prey.
Birds
Most birds have at least one opposable toe on the foot, in various configurations, though these are seldom called "thumbs". They are more often known simply as halluxes.
Pterosaurs
The wukongopterid pterosaur Kunpengopterus bore an opposable first digit on each wing. The presence of opposable thumbs in this taxon is thought to be an arboreal adaptation.
Amphibians
Phyllomedusa, a genus of frogs native to South America, has opposable thumbs.
Human anatomy
Skeleton
The skeleton of the thumb consists of the first metacarpal bone which articulates proximally with the carpus at the carpometacarpal joint and distally with the proximal phalanx at the metacarpophalangeal joint. This latter bone articulates with the distal phalanx at the interphalangeal joint. Additionally, there are two sesamoid bones at the metacarpophalangeal joint.
Muscles
The muscles of the thumb can be compared to guy-wires supporting a flagpole; tension from these muscular guy-wires must be provided in all directions to maintain stability in the articulated column formed by the bones of the thumb. Because this stability is actively maintained by muscles rather than by articular constraints, most muscles attached to the thumb tend to be active during most thumb motions.
The muscles acting on the thumb can be divided into two groups: The extrinsic hand muscles, with their muscle bellies located in the forearm, and the intrinsic hand muscles, with their muscle bellies located in the hand proper.
Extrinsic
A ventral forearm muscle, the flexor pollicis longus (FPL), originates on the anterior side of the radius distal to the radial tuberosity and from the interosseous membrane. It passes through the carpal tunnel in a separate tendon sheath, after which it lies between the heads of the flexor pollicis brevis. It finally attaches onto the base of the distal phalanx of the thumb. It is innervated by the anterior interosseous branch of the median nerve (C7-C8). It is a persistence of one of the former contrahentes muscles that pulled the fingers or toes together.
Three dorsal forearm muscles act on the thumb:
The abductor pollicis longus (APL) originates on the dorsal sides of both the ulna and the radius, and from the interosseous membrane. Passing through the first tendon compartment, it inserts onto the base of the first metacarpal bone. A part of the tendon reaches the trapezium, while another fuses with the tendons of the extensor pollicis brevis and the abductor pollicis brevis. Besides abducting the thumb, it flexes the hand towards the palm and abducts it radially. It is innervated by the deep branch of the radial nerve (C7-C8).
The extensor pollicis longus (EPL) originates on the dorsal side of the ulna and the interosseous membrane. Passing through the third tendon compartment, it is inserted onto the base of the distal phalanx of the thumb. It uses the dorsal tubercle on the lower extremity of the radius as a fulcrum to extend the thumb and also dorsiflexes and abducts the hand at the wrist. It is innervated by the deep branch of the radial nerve (C7-C8).
The extensor pollicis brevis (EPB) originates on the ulna distal to the abductor pollicis longus, from the interosseus membrane, and from the dorsal side of the radius. Passing through the first tendon compartment together with the abductor pollicis longus, it is attached to the base of the proximal phalanx of the thumb. It extends the thumb and, because of its close relationship to the long abductor, also abducts the thumb. It is innervated by the deep branch of the radial nerve (C7-T1).
The tendons of the extensor pollicis longus and extensor pollicis brevis form what is known as the anatomical snuffbox (an indentation on the lateral aspect of the thumb at its base). The radial artery can be palpated anteriorly at the wrist (not in the snuffbox).
Intrinsic
There are three thenar muscles:
The abductor pollicis brevis (APB) originates on the scaphoid tubercle and the flexor retinaculum. It inserts to the radial sesamoid bone and the proximal phalanx of the thumb. It is innervated by the median nerve (C8-T1).
The flexor pollicis brevis (FPB) has two heads. The superficial head arises on the flexor retinaculum, while the deep head originates on three carpal bones: the trapezium, trapezoid, and capitate. The muscle is inserted onto the radial sesamoid bone of the metacarpophalangeal joint. It acts to flex, adduct, and abduct the thumb, and is therefore also able to oppose the thumb. The superficial head is innervated by the median nerve, while the deep head is innervated by the ulnar nerve (C8-T1).
The opponens pollicis originates on the tubercle of the trapezium and the flexor retinaculum. It is inserted onto the radial side of the first metacarpal. It opposes the thumb and assists in adduction. It is innervated by the median nerve.
Other muscles involved are:
The adductor pollicis also has two heads. The transversal head originates along the entire third metacarpal bone, while the oblique head originates on the carpal bones proximal to the third metacarpal. The muscle is inserted onto the ulnar sesamoid bone of the metacarpophalangeal joint. It adducts the thumb, and assists in opposition and flexion. It is innervated by the deep branch of the ulnar nerve (C8-T1).
The first dorsal interosseous, one of the central muscles of the hand, extends from the base of the thumb metacarpal to the radial side of the proximal phalanx of the index finger.
Variations
There is a variation of the human thumb where the angle between the first and second (proximal and distal) phalanges varies between 0° and almost 90° when the thumb is in a thumbs-up gesture.
It has been suggested that the variation is an autosomal recessive trait, called hitchhiker's thumb, with homozygous carriers having an angle close to 90°. However, this theory has been disputed, since the variation in thumb angle is known to fall on a continuum and shows little evidence of the bimodality seen in other recessive genetic traits.
Other variations of the thumb include brachydactyly type D (a congenitally short distal phalanx), triphalangeal thumb (three phalanges instead of the usual two), and polysyndactyly (a combination of radial polydactyly and syndactyly).
Grips
One of the earlier significant contributors to the study of hand grips was orthopedic primatologist and paleoanthropologist John Napier, who proposed organizing the movements of the hand by their anatomical basis as opposed to work done earlier that had only used arbitrary classification. Most of this early work on hand grips had a pragmatic basis as it was intended to narrowly define compensable injuries to the hand, which required an understanding of the anatomical basis of hand movement. Napier proposed two primary prehensile grips: the precision grip and the power grip. The precision and power grip are defined by the position of the thumb and fingers where:
The power grip is when the fingers (and sometimes palm) clamp down on an object with the thumb making counter pressure. Examples of the power grip are gripping a hammer, opening a jar using both your palm and fingers, and during pullups.
The precision grip is when the intermediate and distal phalanges ("fingertips") and the thumb press against each other. Examples of a precision grip are writing with a pencil, opening a jar with the fingertips alone, and gripping a ball (only if the ball is not tight against the palm).
Opposability of the thumb should not be confused with a precision grip, as some animals possess semi-opposable thumbs yet are known to have extensive precision grips (tufted capuchins, for example). Nevertheless, precision grips are usually found only in higher apes, and only in degrees significantly more restricted than in humans.
The pad-to-pad pinch between the thumb and index finger is made possible because of the human ability to passively hyperextend the distal phalanx of the index finger. Most non-human primates have to flex their long fingers in order for the small thumb to reach them.
In humans, the distal pads are wider than in other primates because the soft tissues of the finger tip are attached to a horseshoe-shaped edge on the underlying bone, and, in the grasping hand, the distal pads can therefore conform to uneven surfaces while pressure is distributed more evenly in the finger tips. The distal pad of the human thumb is divided into a proximal and a distal compartment, the former more deformable than the latter, which allows the thumb pad to mold around an object.
In robotics, almost all robotic hands have a long, strong opposable thumb. As in the human hand, the thumb of a robotic hand plays a key role in gripping an object, and one inspiring approach to robotic grip planning is to mimic human thumb placement.
In a sense, human thumb placement indicates which surface or part of the object is good for gripping. The robot therefore places its thumb at the same location and plans the other fingers around that thumb placement, as sketched below.
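A minimal sketch of this thumb-first strategy in Python with NumPy (the function name and the antipodality score are hypothetical illustrations, not any particular robotics library): fix the thumb at the demonstrated contact, then choose the opposing contact whose outward normal best opposes the thumb's across the grasp axis.

import numpy as np

def plan_two_finger_grasp(points, normals, thumb_point):
    """Thumb-first grasp sketch.

    points      -- (N, 3) candidate contact points on the object surface
    normals     -- (N, 3) outward unit normals at those points
    thumb_point -- (3,) thumb contact taken from a human demonstration
    Returns (thumb index, opposing finger index).
    """
    # 1. Fix the thumb at the candidate nearest the demonstrated contact.
    thumb = int(np.argmin(np.linalg.norm(points - thumb_point, axis=1)))

    # 2. Score remaining candidates by antipodality: the grasp axis should
    #    run against the outward normal at the thumb and along the outward
    #    normal at the other finger.
    best, best_score = -1, -np.inf
    for i in range(len(points)):
        if i == thumb:
            continue
        axis = points[i] - points[thumb]
        axis = axis / np.linalg.norm(axis)
        score = -np.dot(normals[thumb], axis) * np.dot(normals[i], axis)
        if score > best_score:
            best, best_score = i, score
    return thumb, best

# Toy usage: points on a unit sphere stand in for an object surface,
# so each outward normal equals the point itself.
rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
thumb, finger = plan_two_finger_grasp(pts, pts, np.array([1.0, 0.0, 0.0]))
print("thumb:", pts[thumb], "opposing finger:", pts[finger])

A real planner would add force-closure, collision, and kinematic-feasibility checks; the point of the sketch is only the ordering, with the thumb placed first and the remaining fingers planned around it.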
The function of the thumb declines physiologically with aging. This can be demonstrated by assessing the motor sequencing of the thumb.
Human evolution
A primitive autonomization of the first carpometacarpal joint (CMC) may have occurred in dinosaurs. A real differentiation appeared an estimated 70 mya in early primates, while the shape of the human thumb CMC finally appears about 5 mya. The result of this evolutionary process is a human CMC joint positioned at 80° of pronation, 40° of abduction, and 50° of flexion in relation to an axis passing through the second and third CMC joints.
Opposable thumbs are shared by some primates, including most catarrhines. The climbing and suspensory behaviour in orthograde apes, such as chimpanzees, has resulted in elongated hands while the thumb has remained short. As a result, these primates are unable to perform the pad-to-pad grip associated with opposability. However, in pronograde monkeys such as baboons, an adaptation to a terrestrial lifestyle has led to reduced finger length and thus hand proportions similar to those of humans. Consequently, these primates have dexterous hands and are able to grasp objects using a pad-to-pad grip. It can thus be difficult to identify hand adaptations to manipulation-related tasks based solely on thumb proportions.
The evolution of the fully opposable thumb is usually associated with Homo habilis, a forerunner of Homo sapiens. This, however, is the suggested result of evolution from Homo erectus (around 1 mya) via a series of intermediate anthropoid stages, and is therefore a much more complicated link.
Modern humans are unique in the musculature of their forearm and hand. Yet these muscles are not strictly autapomorphic: each is found in one or more non-human primates. The extensor pollicis brevis and flexor pollicis longus give modern humans their great manipulative skill and strong flexion in the thumb.
However, a more likely scenario may be that the specialized precision gripping hand (equipped with opposable thumb) of Homo habilis preceded walking, with the specialized adaptation of the spine, pelvis, and lower extremities preceding a more advanced hand. And, it is logical that a conservative, highly functional adaptation be followed by a series of more complex ones that complement it. With Homo habilis, an advanced grasping-capable hand was accompanied by facultative bipedalism, possibly implying, assuming a co-opted evolutionary relationship exists, that the latter resulted from the former as obligate bipedalism was yet to follow. Walking may have been a by-product of busy hands and not vice versa.
HACNS1 (also known as Human Accelerated Region 2) is a gene enhancer "that may have contributed to the evolution of the uniquely opposable human thumb, and possibly also modifications in the ankle or foot that allow humans to walk on two legs". Evidence to date shows that of the 110,000 gene enhancer sequences identified in the human genome, HACNS1 has undergone the most change during the human evolution since the chimpanzee–human last common ancestor.
| Biology and health sciences | Human anatomy | Health |
194578 | https://en.wikipedia.org/wiki/Ricinulei | Ricinulei | Ricinulei is a small order of arachnids. Like most arachnids, they are predatory, eating small arthropods. They occur today in west-central Africa (Ricinoides) and the Americas (Cryptocellus and Pseudocellus), from South America as far north as Texas, where they inhabit either leaf litter or caves. As of 2022, 103 extant species of ricinuleids have been described worldwide, all in the single family Ricinoididae. In older works they are sometimes referred to as Podogona. Owing to their obscurity they have no well-established common name, though in academic literature they are occasionally called hooded tickspiders.
In addition to the three living genera, Ricinulei has a fossil record spanning over 300 million years, including fossils from the Late Carboniferous of Euramerica and from Cretaceous Burmese amber.
Anatomy and physiology
The most important general account of ricinuleid anatomy remains the 1904 monograph by Hans Jacob Hansen and William Sørensen. Useful further studies can be found in, e.g., the work of Pittard and Mitchell, Gerald Legg and L. van der Hammen.
Body
Ricinulei are typically about long. The largest ricinuleid known to have existed is the Late Carboniferous Curculioides bohemondi, with a body length of . The cuticle (or exoskeleton) of both the legs and body is remarkably thick. Their most notable feature is a "hood" (or cucullus) which can be raised and lowered over the head; when lowered, it covers the mouth and the chelicerae. Living ricinuleids have no eyes, although two pairs of lateral eyes can be seen in fossils, and even living species retain light-sensitive areas of cuticle in this position.
The heavy-bodied abdomen (or opisthosoma) exhibits a narrow pedicel, or waist, where it attaches to the prosoma. Curiously, there is a complex coupling mechanism between the prosoma and opisthosoma. The front margin of the opisthosoma tucks into a corresponding fold at the back of the carapace. The advantages of this unusual system are not well understood, and since the genital opening is located on the pedicel (another rather unusual feature) the animals have to 'unlock' themselves in order to mate. The abdomen is divided dorsally into a series of large plates or tergites, each of which is subdivided into a median and lateral plate.
Appendages
The mouthparts, or chelicerae, are composed of two segments forming a fixed and a moveable digit. Sensory organs are also found associated with the mouthparts; presumably for tasting the food. The chelicerae can be retracted and at rest they are normally hidden beneath the cucullus.
Ricinuleid pedipalps are complex appendages. They are typically used to manipulate food items, but also bear many sensory structures and are used as 'short range' sensory organs. The pedipalps end in pincers that are small relative to their bodies, when compared to those of the related orders of scorpions and pseudoscorpions. Similar pincers on the pedipalps have now been found in the extinct order Trigonotarbida (see Relationships).
As in many harvestmen, the second pair of legs is longest in ricinuleids and these limbs are used to feel ahead of the animal, almost like antennae. If the pedipalps are 'short range' sensory organs, the second pair of legs are the corresponding 'long range' ones. Sensilla on the tarsi at the ends of legs I and II (which are used more frequently to sense the surroundings) differ from those of legs III and IV. In male ricinuleids, the third pair of legs are uniquely modified to form copulatory organs. The shape of these organs is very important for taxonomy and can be used to tell males of different species apart.
Internal anatomy
An older summary of ricinuleid internal anatomy was published by Jacques Millot. The midgut has been described, while the excretory system consists of Malpighian tubules and a pair of coxal glands. Female ricinuleids have spermathecae, presumably for storing sperm. The male genitalia, sperm cells and sperm production have also been studied intensively. Gas exchange takes place through tracheae, which open through a single pair of spiracles on the prosoma. At least one Brazilian species appears to have a plastron, which may keep it from being wetted and allow it to continue breathing even if inundated with water.
Behavior and life history
Ricinuleids inhabit the leaf litter of rainforest floors, as well as caves, where they search for prey with their elongate, sensory second pair of legs. They feed on other small invertebrates, although details of their natural prey are sparse. Relatively little is known about their courtship and mating habits, but males have been observed using their modified third pair of legs to transfer a spermatophore to the female. The eggs are carried under the mother's hood until the young hatch into a six-legged larva, which later molts into the eight-legged adult form. The six-legged larva is a feature they share with Acari (see Relationships). Despite the scarcity of studies on the biology of this group, recent work has reported nocturnal habits as well as novel behaviors, including interactions between individuals other than mating. Ricinuleids are often found in large congregations, the exact purpose of which is unknown.
Fossil record
Ricinulei are unique among arachnids in that the first one to be discovered was a fossil, described in 1837 by the noted English geologist William Buckland, albeit misinterpreted as a beetle. Further fossil species were added in subsequent years by, among others, Samuel Hubbard Scudder, Reginald Innes Pocock and Alexander Petrunkevitch.
Fifteen of the twenty species of fossil ricinuleids discovered so far originate from the late Carboniferous (Pennsylvanian) coal measures of Europe and North America. They were revised in detail in 1992 by Paul Selden, who placed them in a separate suborder, Palaeoricinulei.
The fossils are divided into four families: Curculioididae, Poliocheridae, Primoricinuleidae, and Sigillaricinuleidae. The poliocherids are more like modern ricinuleids in having an opisthosoma with a series of three large, divided tergites. Curculioidids, by contrast, have an opisthosoma without obvious tergites but with a single median sulcus, a dividing line running down the middle of the back. This superficially resembles the elytra of a beetle and explains why Buckland originally misidentified the first fossil species. Six species (?Poliochera cretacea, Primoricinuleus pugio, Hirsutisoma acutiformis, H. bruckschi, H. grimaldii and H. dentata) are known from the Cenomanian (~99 million years old) Burmese amber of Myanmar. Curculioides bohemondi, the largest of all Ricinulei, was a member of the Curculioididae. Monooculricinuleus incisus and M. semiglobosus from Burmese amber were originally described as members of Ricinulei, but they might belong to Opiliones instead.
Some Carboniferous genera of Palaeoricinulei exceed modern Ricinulei in size, with bodies in length, and many appear to have had eyes, unlike modern representatives, which are completely blind. They likely had a surface-dwelling ecology, unlike that of modern Ricinulei. The fossil genera from the Cretaceous Burmese amber are referred to the extinct order Primoricinulei and are thought to have had a different ecology from modern species, as tree-dwelling predators that crawled on bark.
Genera
The World Ricinulei Catalog accepts the following genera:
Ricinoididae Ewing, 1929 (103+ species)
Cryptocellus Westwood, 1874
Pseudocellus Platnick, 1980
Ricinoides Ewing, 1929
† Curculioididae Cockerell, 1916 (12 species, Carboniferous)
† Amarixys Selden, 1992
† Curculioides Buckland, 1837
† Hirsutisomidae Wunderlich, 2017 (4 species, Burmese amber)
† Hirsutisoma Wunderlich, 2017
† Poliocheridae Scudder, 1884 (5 species, Carboniferous, ?Burmese amber)
† Poliochera Scudder, 1884
† Terpsicroton Selden, 1992
† Primoricinuleidae Wunderlich, 2015 (1 species, Burmese amber)
† Primoricinuleus Wunderlich, 2015
† Sigillaricinuleidae Wunderlich, 2022 (1 species, Burmese amber)
† Sigillaricinuleus Wunderlich, 2022
Relationships
Early work
In 1665, Robert Hooke described a large crab-like mite he had observed with a microscope, publishing the description in his book Micrographia. The first living ricinuleid described using Linnaean taxonomy was from West Africa, by Félix Édouard Guérin-Méneville in 1838, i.e. one year after the first fossil. This was followed by a second living example, collected by Henry Walter Bates in Brazil and described by John Obadiah Westwood in 1874, and a third from Sierra Leone by Tamerlan Thorell in 1892. In these early studies ricinuleids were thought to be unusual harvestmen (Opiliones), and in his 1892 paper Thorell introduced the name "Ricinulei" for these animals as a suborder of the harvestmen. Ricinuleids were subsequently recognized as an arachnid order in their own right in the 1904 monograph by Hansen & Soerensen. These authors recognised a group comprising spiders, whip spiders, whip scorpions and ricinuleids, which they defined as having a rather narrow join between the prosoma and opisthosoma and a small 'tail end' to the opisthosoma.
Ricinuleids and mites
Morphological studies of arachnid relationships have largely concluded that ricinuleids are most closely related to Acari (mites and ticks), though more recent phylogenomic studies refute this. L. van der Hammen placed ricinuleids in a group called "Cryptognomae", together with the anactinotrichid mites only. Peter Weygoldt and Hannes Paulus referred to ricinuleids and all mites as "Acarinomorpha"; Jeffrey Shultz used the name "Acaromorpha". This hypothesis recognizes that both ricinuleids and mites hatch with a larval stage having only six legs, rather than the usual eight seen in arachnids; the additional pair of legs appears later during development. Some authors have further suggested that the gnathosoma, a separate part of the body bearing the mouthparts, is a unique character uniting ricinuleids and mites, but this feature is complex and difficult to interpret, and other authors would restrict the presence of a gnathosoma sensu stricto to mites only.
Ricinuleids and trigonotarbids
In 1892, Ferdinand Karsch suggested that ricinuleids were the last living descendants of the extinct arachnid order Trigonotarbida. This hypothesis was widely overlooked but was reintroduced by Jason Dunlop in 1996. Characteristics shared by ricinuleids and trigonotarbids include the division of the tergites on the opisthosoma into median and lateral plates and the presence of an unusual 'locking mechanism' between the two halves of the body. A further study subsequently recognised that the tip of the pedipalp in both ricinuleids and trigonotarbids ends in a similar small claw. Ricinuleids as the sister group of trigonotarbids was also recovered in the 2002 study by Gonzalo Giribet and colleagues.
Phylogenomic studies
Recent phylogenomic studies have recovered different relationships than those previously suggested. An analysis in early 2019 suggested the sister group of the ricinuleids may be Xiphosura, the arthropod order containing horseshoe crabs. In response to this work, a more recent study placed Ricinulei and Opiliones as sister taxa.
| Biology and health sciences | Arachnids | Animals |
194603 | https://en.wikipedia.org/wiki/Screwdriver | Screwdriver | A screwdriver is a tool, manual or powered, used for turning screws.
Description
A typical simple screwdriver has a handle and a shaft, ending in a tip the user puts into the screw head before turning the handle. This form of the screwdriver has been replaced in many workplaces and homes by a more modern and versatile tool, the power drill, which is quicker, easier to use, and can also drill holes. The shaft is usually made of tough steel to resist bending or twisting. The tip may be hardened to resist wear, treated with a dark coating for improved visual contrast between tip and screw, or ridged or otherwise treated for additional "grip".
Handles are typically wood, metal, or plastic and usually hexagonal, square, or oval in cross-section to improve grip and prevent the tool from rolling when set down. Some manual screwdrivers have interchangeable tips that fit into a socket on the end of the shaft and are held in mechanically or magnetically. These often have a hollow handle that contains various types and sizes of tips, and a reversible ratchet action that allows multiple full turns without repositioning the tip or the user's hand.
A screwdriver is classified by its tip, which is shaped to fit the driving surfaces (slots, grooves, recesses, etc.) on the corresponding screw head. Proper use requires that the screwdriver's tip engage the head of a screw of the same size and type designation as the screwdriver tip. Screwdriver tips are available in a wide variety of types and sizes (List of screw drives). The two most common are the simple 'blade'-type for slotted screws, and Phillips, generically called "cross-recess", "cross-head", or "cross-point".
A wide variety of power screwdrivers ranges from a simple "stick"-type with batteries, a motor, and a tip holder all inline, to powerful "pistol" type VSR (variable-speed reversible) cordless drills that also function as screwdrivers. This is particularly useful as drilling a pilot hole before driving a screw is a common operation. Special combination drill-driver bits and adapters let an operator rapidly alternate between the two. Variations include impact drivers, which provide two types of 'hammering' force for improved performance in certain situations, and "right-angle" drivers for use in tight spaces. Many options and enhancements, such as built-in bubble levels, high/low gear selection, magnetic screw holders, adjustable-torque clutches, keyless chucks, "gyroscopic" control, etc., are available.
History
The earliest documented screwdrivers were used in the late Middle Ages. They were probably invented in the late 15th century, either in Germany or France. The tool's original names in German and French were Schraubenzieher (screw-tightener) and tournevis (turnscrew), respectively. The first documentation of the tool is in the medieval Housebook of Wolfegg Castle, a manuscript written sometime between 1475 and 1490. These earliest screwdrivers had pear-shaped handles and were made for slotted screws (diversification of the many types of screwdrivers did not emerge until the Gilded Age). The screwdriver remained inconspicuous, however, as evidence of its existence throughout the next 300 years is based primarily on the presence of screws.
Screws were used in the 15th century to construct screw-cutting lathes, for securing breastplates, backplates, and helmets on medieval jousting armor—and eventually for multiple parts of the emerging firearms, particularly the matchlock. Screws, hence screwdrivers, were not used in full combat armor, most likely to give the wearer freedom of movement.
The jaws that hold the pyrites inside wheellock guns were secured with screws, and the need to constantly replace the pyrites resulted in a considerable refinement of the screwdriver. The tool is more documented in France, and took on many shapes and sizes, though all for slotted screws. There were large, heavy-duty screwdrivers for building and repairing large machines, and smaller screwdrivers for refined cabinet work.
The screwdriver depended entirely on the screw, and it took several advances to make the screw easy enough to produce to become popular and widespread. The most popular door hinge at the time was the butt-hinge, but it was considered a luxury. The butt-hinge was handmade, and its constant motion required the security of a screw.
Screws were very hard to produce before the First Industrial Revolution, requiring the manufacture of a conical helix. The brothers Job and William Wyatt found a way to produce a screw on a novel machine that first cut the slotted head, and then cut the helix. Though their business ultimately failed, their contribution to low-cost manufacturing of the screw ultimately led to a vast increase in the screw and the screwdriver's popularity. The increase in popularity gradually led to refinement and eventually diversification of the screwdriver. Refinement of the precision of screws also significantly contributed to the boom in production, mostly by increasing its efficiency and standardizing sizes, important precursors to industrial manufacture.
Canadian P.L. Robertson, though he was not the first person to patent the idea of socket-head screws, was the first to successfully commercialize them, starting in 1908. Socket screws rapidly grew in popularity, and are still used for their resistance to wear and tear, compatibility with hex keys, and ability to stop a power tool when set. Though immensely popular, Robertson had trouble marketing his invention to the newly booming auto industry, for he was unwilling to relinquish his patents.
Meanwhile, in Portland, Oregon, Henry F. Phillips patented his own invention, an improved version of a deep socket with a cruciform slot, today known as the Phillips Screw. Phillips offered his screw to the American Screw Company, and after a successful trial on the 1936 Cadillac, it quickly swept through the American auto industry. With the Industrial Revival at the end of the Great Depression and the upheaval of World War II, the Phillips screw quickly became, and remains, the most popular screw in the world. A main attraction for the screw was that conventional slotted screwdrivers could also be used on them, which was not possible with the Robertson Screw.
Gunsmiths still call a screwdriver a turnscrew, under which name it is an important part of a set of pistols. The name was common in earlier centuries, used by cabinetmakers, shipwrights, and perhaps other trades. The cabinetmaker's screwdriver is one of the longest-established handle forms, somewhat oval or ellipsoid in cross-section. This is variously attributed to improving grip or preventing the tool rolling off the bench. The shape has been popular for a couple of hundred years. It is usually associated with a plain head for slotted screws, but has been used with many head forms. Modern plastic screwdrivers use a handle with a roughly hexagonal cross-section to achieve these same two goals, a far cry from the pear-shaped handle of the original 15th-century screwdriver.
Handle
The handle and shaft of screwdrivers have changed considerably over time. The design is influenced by both purpose and manufacturing requirements. The "Perfect Pattern Handle" screwdriver was first manufactured by HD Smith & Company, which operated from 1850 to 1900. Many manufacturers adopted this handle design. At the time, the "flat bladed" screw type was prevalent and was the fastener with which they were designed to be used. Another popular design was composed of drop-forged steel with riveted wood handles.
The shape and material of many modern screwdriver handles are designed to fit comfortably in the user's hand, for user comfort and to facilitate maximum control and torque. Designs include indentations for the user's fingers, and surfaces of a soft material such as thermoplastic elastomer to increase comfort and grip. Composite handles of rigid plastic and rubber are also common. Many screwdriver handles are not smooth and often not round, but have flats or other irregularities to improve grip and to prevent the tool from rolling when on a flat surface.
Some screwdrivers have a short hexagonal section at the top of the blade, adjacent to the handle, so that a ring spanner or open wrench can be used to increase the applied torque. Another option are "cabinet" screwdrivers which are made of flat bar stock and while the shaft may be rounded, will have a large flat section adjacent to the handle which a wrench (often an adjustable) may be used on for additional leverage. The offset screwdriver has a handle set at right angles to the small blade, providing access to narrow spaces and giving extra torque.
Drive tip
Screwdrivers come in a large range of sizes to accommodate various screws—from tiny jeweller's screwdrivers up. A screwdriver that is not the right size and type for the screw may damage the screw in the process of tightening it.
Some screwdriver tips are magnetic, so that the screw (unless non-magnetic) remains attached to the screwdriver. This is particularly useful for small screws, which are otherwise very difficult to handle. Many screwdriver designs have a handle with a detachable tip (the part of the screwdriver that engages the screw), called a bit, as with drill bits. This provides a set of one handle and several bits that can drive a variety of screw sizes and types.
Slotted
The tool used to drive a slotted screw head is called a standard, common blade, flat-blade, slot-head, straight, flat, flat-tip, or "flat-head" screwdriver. This last usage can be confusing, because the term flat-head also describes a screw with a flat top, designed to install in a countersunk hole. Before the development of the newer bit types, the flat-blade was called the "Common-Blade", because it was the most common one. Depending on the application, the name of this screwdriver may differ. Within the automotive/heavy electric industries, it is known as a "flat head screwdriver"; within the avionics and mining industries, it is known as a "standard screwdriver". Though there are many names, the original device from 1908 was known as a "flat-head screw turner".
Among slotted screwdrivers, variations at the blade or bit end involve the profile of the blade as viewed face-on (from the side of the tool). The more common type is sometimes called keystone, where the blade profile is slightly flared before tapering off at the end, which provides extra stiffness to the workface and makes it capable of withstanding more torque by gripping deeper in the screw slot. To maximize access in space-restricted applications, the cabinet variant screwdriver blade sides are straight and parallel, reaching the end of the blade at a right angle. This design is also frequently used in jeweler's screwdrivers.
Many textbooks and vocational schools instruct mechanics to grind down the tip of the blade, which, because of the blade's taper, increases its thickness and consequently allows more precise engagement with the slot in the screw. This approach creates a set of graduated slotted screwdrivers that fit a particular screw more tightly and reduce screw head deformation. However, many better-quality screwdriver blades are already induction-hardened (surface heat-treated) or coated with black oxide, black phosphate, or diamond to increase friction between the screwdriver tip and the screw. Grinding the tip after manufacture will likely compromise this durability, so it is best to select the proper tip in the first place and avoid weakening the manufacturer's treatments.
Phillips
Phillips screwdrivers come in several standard sizes, ranging from tiny "jeweler's" to those used for automobile frame assembly—or #000 to #4 respectively. This size number is usually stamped onto the shank (shaft) or handle for identification. Each bit size fits a range of screw sizes, more or less well. Each Phillips screwdriver size also has a related shank diameter. The driver has a 57° point and tapered, unsharp (rounded) flutes. The #1 and smaller bits come to a blunt point, but the #2 and above have no point, instead ending in a nearly squared-off tip, making each size incompatible with the other.
The design is often criticized for its tendency to cam out at lower torque levels than other "cross head" designs, an effect caused by the tapered profile of the flutes which makes them easier to insert into the screw than other similar styles. There has long been a popular belief that this was actually a deliberate feature of the design. Evidence is lacking for this specific narrative and the feature is not mentioned in the original patents. However, a subsequent refinement to the original design, described in US Patent #2,474,994, does include this feature.
Robertson
Robertson, also known as a square, or Scrulox screw drive has a square-shaped socket in the screw head and a square protrusion on the tool. Both the tool and the socket have a taper, which makes inserting the tool easier, and also tends to help keep the screw on the tool tip. (The taper's earliest reason for being was to make the manufacture of the screws practical using cold forming of the heads, but its other advantages helped popularize the drive.) Robertson screws are commonplace in Canada, though they have been used elsewhere, and have become much more common in other countries in recent decades. Robertson screwdrivers are easy to use one-handed, because the tapered socket tends to retain the screw, even if it is shaken. They also allow for the use of angled screw drivers and trim head screws. The socket-headed Robertson screws are self-centering, reduce cam out, stop a power tool when set, and can be removed if painted over or old and rusty. In industry, they speed up production and reduce product damage. One of their first major industrial uses was the Ford Motor Company's Model A & Model T production. Henry Ford found them highly reliable, and they saved considerable production time, but he could not secure licensing for them in the United States, so he limited their use solely to his Canadian division. Robertson-head screwdrivers are available in a standard range of tip sizes, from 1.77 mm to 4.85 mm.
Reed and Prince
Reed and Prince, also called Frearson, is another historic cross-head screw configuration. The cross in the screw head is sharper and less rounded than a Phillips, and the bit has 45° flukes and a sharper, pointed end. Also, the Phillips screw slot is not as deep as the Reed and Prince slot. In theory, different size R&P screws fit any R&P bit size.
Pozidriv
Pozidriv and the related Supadriv are widely used in Europe and most of the Far East. While Pozidriv screws have cross heads like Phillips and are sometimes thought effectively the same, the Pozidriv design allows higher torque application than Phillips. It is often claimed that they can apply more torque than any of the other commonly used cross-head screwdriver systems, due to a complex fluting (mating) configuration.
Japanese Industrial Standard (JIS)
Japanese Industrial Standard (JIS) cross-head screwdrivers are still another standard, often inaccurately called Japanese Phillips. Compatible screw heads are usually identifiable by a single depressed dot or an "X" to one side of the cross slot. This is a screw standard throughout the Asia market and Japanese imports. The driver has a 57° point with a flat tip.
Other types
Many modern electrical appliances, if they contain screws, use screws with heads other than the typical slotted or Phillips styles. Torx is one such pattern that has become widespread. It is a spline tip with a corresponding recess in the screw head. The main cause of this trend is manufacturing efficiency: Torx screwdriver tips do not slip out of the fastener as easily as would a Phillips or slotted driver.
Non-typical fasteners are commonplace in consumer devices for their ability to make disassembly more difficult. In microwave ovens, such screws deny casual access to the high-power kilovolt electrical components.
Torx and other drivers have become widely available to the consumer due to their increasing use in the industry. Some other styles fit a three-pointed star recess, and a five-lobed spline with rounded edges instead of the square edges of the Torx. This is called a Pentalobe.
Specialized patterns of security screws are also used, such as the Line Head (LH) style by OSG System Products, Japan, as used in many Nintendo consoles, though drivers for the more common security heads are, again, readily available. Another type of security head has smooth curved surfaces instead of the slot edges that would permit loosening the screw; it is found in public rest room privacy partitions, and cannot be removed by conventional screwdrivers.
Variations
Torque screwdrivers
Screwdrivers are available—manual, electric, and pneumatic—with a clutch that slips at a preset torque. This helps the user tighten screws to a specified torque without damage or over-tightening. Cordless drills designed to use as screwdrivers often have such a clutch.
Powered screwdrivers
Interchangeable bits allow the use of powered screwdrivers, commonly using an electric or air motor to rotate the bit. Cordless drills with speed and torque control are commonly used as power screwdrivers.
Ratcheting screwdrivers
Some manual screwdrivers have a ratchet action whereby the screwdriver blade locks to the handle for clockwise rotation, but uncouples for counterclockwise rotation when set for tightening screws—and vice versa for loosening.
Spiral ratchet screw drivers, often colloquially called Yankee screwdrivers (a brand name), provide a special mechanism that transforms linear motion into rotational motion. Originally the "Yankee" name was used on all tools sold by the North Brothers Manufacturing Company but later, after Stanley purchased the company, it became synonymous with only this type of screwdriver. The user pushes the handle toward the workpiece, causing a pawl in a spiral groove to rotate the shank and the removable bit. The ratchet can be set to rotate left or right with each push, or can be locked so that the tool can be used like a conventional screwdriver. One disadvantage of this design is that if the bit slips out of the screw, the resultant sudden extension of the spring may cause the bit to scratch or otherwise damage the workpiece.
Once very popular, versions of these spiral ratchet drivers using proprietary bits have been largely discontinued by manufacturers such as Stanley. Some companies now offer a modernized version that uses standard 1/4-inch hex-shank power tool bits. Since a wide variety of drill bits are available in this format, the tool can do double duty as a "push drill" or Persian drill.
| Technology | Hand tools | null |
194634 | https://en.wikipedia.org/wiki/Moment-generating%20function | Moment-generating function | In probability theory and statistics, the moment-generating function of a real-valued random variable is an alternative specification of its probability distribution. Thus, it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the moment-generating functions of distributions defined by the weighted sums of random variables. However, not all random variables have moment-generating functions.
As its name implies, the moment-generating function can be used to compute a distribution’s moments: the nth moment about 0 is the nth derivative of the moment-generating function, evaluated at 0.
In addition to real-valued distributions (univariate distributions), moment-generating functions can be defined for vector- or matrix-valued random variables, and can even be extended to more general cases.
The moment-generating function of a real-valued distribution does not always exist, unlike the characteristic function. There are relations between the behavior of the moment-generating function of a distribution and properties of the distribution, such as the existence of moments.
Definition
Let $X$ be a random variable with CDF $F_X$. The moment generating function (mgf) of $X$ (or $F_X$), denoted by $M_X(t)$, is
$$M_X(t) = \operatorname{E}\left[e^{tX}\right],$$
provided this expectation exists for $t$ in some open neighborhood of 0. That is, there is an $h > 0$ such that for all $t$ in $(-h, h)$, $\operatorname{E}\left[e^{tX}\right]$ exists. If the expectation does not exist in an open neighborhood of 0, we say that the moment generating function does not exist.
In other words, the moment-generating function of $X$ is the expectation of the random variable $e^{tX}$. More generally, when $\mathbf{X} = (X_1, \ldots, X_n)^{\mathrm{T}}$ is an $n$-dimensional random vector and $\mathbf{t}$ is a fixed vector, one uses $\mathbf{t} \cdot \mathbf{X} = \mathbf{t}^{\mathrm{T}}\mathbf{X}$ instead of $tX$:
$$M_{\mathbf{X}}(\mathbf{t}) = \operatorname{E}\left[e^{\mathbf{t}^{\mathrm{T}}\mathbf{X}}\right].$$
$M_X(0)$ always exists and is equal to 1. However, a key problem with moment-generating functions is that moments and the moment-generating function may not exist, as the integrals need not converge absolutely. By contrast, the characteristic function or Fourier transform always exists (because it is the integral of a bounded function on a space of finite measure), and for some purposes may be used instead.
The moment-generating function is so named because it can be used to find the moments of the distribution. The series expansion of $e^{tX}$ is
$$e^{tX} = 1 + tX + \frac{t^2 X^2}{2!} + \frac{t^3 X^3}{3!} + \cdots.$$
Hence
$$M_X(t) = \operatorname{E}\left[e^{tX}\right] = 1 + t m_1 + \frac{t^2 m_2}{2!} + \frac{t^3 m_3}{3!} + \cdots,$$
where $m_n$ is the $n$th moment. Differentiating $M_X(t)$ $i$ times with respect to $t$ and setting $t = 0$, we obtain the $i$th moment about the origin, $m_i$;
see Calculations of moments below.
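For instance (a standard worked illustration, not part of the original text), if $X$ is exponentially distributed with rate $\lambda > 0$, then for $t < \lambda$
$$M_X(t) = \int_0^\infty e^{tx}\,\lambda e^{-\lambda x}\,dx = \frac{\lambda}{\lambda - t} = \sum_{n=0}^{\infty} \frac{n!}{\lambda^n}\,\frac{t^n}{n!},$$
so the $n$th moment is $m_n = M_X^{(n)}(0) = n!/\lambda^n$.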
If $X$ is a continuous random variable, the following relation between its moment-generating function $M_X(t)$ and the two-sided Laplace transform of its probability density function $f_X(x)$ holds:
$$M_X(t) = \mathcal{L}\{f_X\}(-t),$$
since the PDF's two-sided Laplace transform is given as
$$\mathcal{L}\{f_X\}(s) = \int_{-\infty}^{\infty} e^{-sx} f_X(x)\,dx,$$
and the moment-generating function's definition expands (by the law of the unconscious statistician) to
$$M_X(t) = \operatorname{E}\left[e^{tX}\right] = \int_{-\infty}^{\infty} e^{tx} f_X(x)\,dx.$$
This is consistent with the characteristic function of $X$ being a Wick rotation of $M_X(t)$ when the moment generating function exists, as the characteristic function of a continuous random variable $X$ is the Fourier transform of its probability density function $f_X(x)$, and in general when a function $f(x)$ is of exponential order, the Fourier transform of $f$ is a Wick rotation of its two-sided Laplace transform in the region of convergence. See the relation of the Fourier and Laplace transforms for further information.
Examples
Here are some examples of the moment-generating function and the characteristic function for comparison. It can be seen that the characteristic function is a Wick rotation of the moment-generating function when the latter exists.
{|class="wikitable"
|-
! Distribution
! Moment-generating function
! Characteristic function
|-
|Degenerate
|
|
|-
| Bernoulli
|
|
|-
| Binomial
|
|
|-
| Geometric
|
|
|-
|Negative binomial
|
|
|-
| Poisson
|
|
|-
| Uniform (continuous)
|
|
|-
| Uniform (discrete)
|
|
|-
|Laplace
|
|
|-
| Normal
|
|
|-
| Chi-squared
|
|
|-
|Noncentral chi-squared
|
|
|-
| Gamma
|
|
|-
| Exponential
|
|
|-
|Beta
|
| (see Confluent hypergeometric function)
|-
| Multivariate normal
|
|
|-
| Cauchy
|Does not exist
|
|-
|Multivariate Cauchy
|Does not exist
|
|-
|}
Calculation
The moment-generating function is the expectation of a function of the random variable; it can be written in the following ways.
For a discrete probability mass function: $M_X(t) = \sum_{i=0}^{\infty} e^{t x_i}\, p_i$.
For a continuous probability density function: $M_X(t) = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx$.
In the general case: $M_X(t) = \int_{-\infty}^{\infty} e^{tx}\,dF(x)$, using the Riemann–Stieltjes integral, and where $F$ is the cumulative distribution function. This is simply the Laplace–Stieltjes transform of $F$, but with the sign of the argument reversed.
Note that for the case where $X$ has a continuous probability density function $f(x)$, $M_X(-t)$ is the two-sided Laplace transform of $f(x)$:
$$M_X(t) = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx = \sum_{n=0}^{\infty} \frac{t^n m_n}{n!},$$
where $m_n$ is the $n$th moment.
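As an illustrative sketch (an addition, assuming the SymPy library is available), the discrete case can be checked symbolically for a Poisson distribution:

```python
import sympy as sp

t = sp.symbols('t', real=True)
k = sp.symbols('k', integer=True, nonnegative=True)
mu = sp.symbols('mu', positive=True)

# Poisson pmf: p(k) = exp(-mu) * mu**k / k!
p = sp.exp(-mu) * mu**k / sp.factorial(k)

# Discrete case: M_X(t) = sum over k of exp(t*k) * p(k)
M = sp.simplify(sp.summation(sp.exp(t * k) * p, (k, 0, sp.oo)))
print(M)  # exp(mu*(exp(t) - 1))

# Moments from derivatives at t = 0
print(sp.diff(M, t, 1).subs(t, 0))             # mu
print(sp.expand(sp.diff(M, t, 2).subs(t, 0)))  # mu**2 + mu
```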
Linear transformations of random variables
If random variable $X$ has moment generating function $M_X(t)$, then $\alpha X + \beta$ has moment generating function $M_{\alpha X + \beta}(t) = e^{\beta t} M_X(\alpha t)$.
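This follows directly from the definition (a one-line check added here for completeness):
$$M_{\alpha X + \beta}(t) = \operatorname{E}\left[e^{t(\alpha X + \beta)}\right] = e^{\beta t}\,\operatorname{E}\left[e^{(\alpha t)X}\right] = e^{\beta t} M_X(\alpha t).$$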
Linear combination of independent random variables
If $S_n = \sum_{i=1}^{n} a_i X_i$, where the $X_i$ are independent random variables and the $a_i$ are constants, then the probability density function for $S_n$ is the convolution of the probability density functions of each of the $X_i$, and the moment-generating function for $S_n$ is given by
$$M_{S_n}(t) = \prod_{i=1}^{n} M_{X_i}(a_i t).$$
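Written out (an added step; the third equality is where independence is used):
$$M_{S_n}(t) = \operatorname{E}\left[e^{t\sum_{i=1}^{n} a_i X_i}\right] = \operatorname{E}\left[\prod_{i=1}^{n} e^{t a_i X_i}\right] = \prod_{i=1}^{n} \operatorname{E}\left[e^{(a_i t) X_i}\right] = \prod_{i=1}^{n} M_{X_i}(a_i t).$$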
Vector-valued random variables
For vector-valued random variables $\mathbf{X}$ with real components, the moment-generating function is given by
$$M_{\mathbf{X}}(\mathbf{t}) = \operatorname{E}\left[e^{\langle \mathbf{t}, \mathbf{X} \rangle}\right],$$
where $\mathbf{t}$ is a vector and $\langle \cdot, \cdot \rangle$ is the dot product.
Important properties
Moment generating functions are positive and log-convex, with M(0) = 1.
An important property of the moment-generating function is that it uniquely determines the distribution. In other words, if $X$ and $Y$ are two random variables and $M_X(t) = M_Y(t)$ for all values of $t$, then
$$F_X(x) = F_Y(x)$$
for all values of $x$ (or equivalently, $X$ and $Y$ have the same distribution). This statement is not equivalent to the statement "if two distributions have the same moments, then they are identical at all points." This is because in some cases, the moments exist and yet the moment-generating function does not, because the limit
$$\lim_{n \to \infty} \sum_{k=0}^{n} \frac{t^k m_k}{k!}$$
may not exist. The log-normal distribution is an example of when this occurs.
Calculations of moments
The moment-generating function is so called because if it exists on an open interval around $t = 0$, then it is the exponential generating function of the moments of the probability distribution:
$$m_n = \operatorname{E}\left[X^n\right] = M_X^{(n)}(0) = \left.\frac{\mathrm{d}^n M_X}{\mathrm{d}t^n}\right|_{t=0}.$$
That is, with n being a nonnegative integer, the nth moment about 0 is the nth derivative of the moment generating function, evaluated at t = 0.
Other properties
Jensen's inequality provides a simple lower bound on the moment-generating function:
$$M_X(t) \geq e^{\mu t},$$
where $\mu$ is the mean of $X$.
The moment-generating function can be used in conjunction with Markov's inequality to bound the upper tail of a real random variable $X$. This statement is also called the Chernoff bound. Since $x \mapsto e^{xt}$ is monotonically increasing for $t > 0$, we have
$$P(X \geq a) = P\left(e^{tX} \geq e^{ta}\right) \leq e^{-at}\,\operatorname{E}\left[e^{tX}\right] = e^{-at} M_X(t)$$
for any $t > 0$ and any $a$, provided $M_X(t)$ exists. For example, when $X$ is a standard normal distribution and $a > 0$, we can choose $t = a$ and recall that $M_X(t) = e^{t^2/2}$. This gives $P(X \geq a) \leq e^{-a^2/2}$, which is within a factor of $1 + a$ of the exact value.
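A quick numerical check of this example (an added sketch, assuming SciPy is available for the exact normal tail):

```python
import math
from scipy.stats import norm

# Chernoff bound for a standard normal: P(X >= a) <= exp(-a**2 / 2),
# obtained by choosing t = a in exp(-a*t) * M_X(t) with M_X(t) = exp(t**2 / 2).
for a in (1.0, 2.0, 3.0):
    bound = math.exp(-a * a / 2)
    exact = norm.sf(a)           # exact upper-tail probability P(X >= a)
    print(f"a={a}: exact={exact:.5f}  bound={bound:.5f}")
```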
Various lemmas, such as Hoeffding's lemma or Bennett's inequality provide bounds on the moment-generating function in the case of a zero-mean, bounded random variable.
When $X$ is non-negative, the moment generating function gives a simple, useful bound on the moments:
$$\operatorname{E}\left[X^n\right] \leq \left(\frac{n}{te}\right)^n M_X(t),$$
for any $X \geq 0$, $n \geq 0$ and $t > 0$.
This follows from the inequality $1 + x' \leq e^{x'}$, into which we can substitute $x' = \tfrac{tx}{n} - 1$, which implies $\tfrac{tx}{n} \leq e^{\tfrac{tx}{n} - 1}$ for any $x, t, n > 0$.
Now, if $t > 0$ and $x, n \geq 0$, this can be rearranged to $x^n \leq \left(\tfrac{n}{te}\right)^n e^{tx}$.
Taking the expectation on both sides gives the bound on $\operatorname{E}\left[X^n\right]$ in terms of $\operatorname{E}\left[e^{tX}\right]$.
As an example, consider $X \sim \chi^2_k$, chi-squared with $k$ degrees of freedom. Then from the examples above, $M_X(t) = (1 - 2t)^{-k/2}$ for $t < \tfrac{1}{2}$.
Picking $t = \tfrac{n}{2n + k}$ and substituting into the bound:
$$\operatorname{E}\left[X^n\right] \leq \left(1 + \frac{2n}{k}\right)^{k/2} e^{-n} (k + 2n)^n.$$
We know that in this case the correct bound is $\operatorname{E}\left[X^n\right] \leq 2^n \frac{\Gamma(n + k/2)}{\Gamma(k/2)}$.
To compare the bounds, we can consider the asymptotics for large $k$.
Here the moment-generating function bound is $k^n\left(1 + \frac{n^2}{k} + O\left(\frac{1}{k^2}\right)\right)$,
where the real bound is $k^n\left(1 + \frac{(n-1)n}{k} + O\left(\frac{1}{k^2}\right)\right)$.
The moment-generating function bound is thus very strong in this case.
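The comparison can be reproduced numerically (an added sketch using only the Python standard library):

```python
import math

def mgf_bound(n, k):
    # (n/(t*e))**n * M_X(t) with M_X(t) = (1 - 2t)**(-k/2), at t = n/(2n + k)
    t = n / (2 * n + k)
    return (n / (t * math.e)) ** n * (1 - 2 * t) ** (-k / 2)

def exact_moment(n, k):
    # E[X^n] = 2**n * Gamma(n + k/2) / Gamma(k/2) for X ~ chi-squared(k)
    return 2 ** n * math.gamma(n + k / 2) / math.gamma(k / 2)

n = 2
for k in (10, 100, 1000):
    print(k, exact_moment(n, k), mgf_bound(n, k))
# The ratio of the bound to the exact moment tends to 1 as k grows.
```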
Relation to other functions
Related to the moment-generating function are a number of other transforms that are common in probability theory:
Characteristic function: The characteristic function $\varphi_X(t)$ is related to the moment-generating function via $\varphi_X(t) = M_{iX}(t) = M_X(it)$: the characteristic function is the moment-generating function of $iX$, or the moment generating function of $X$ evaluated on the imaginary axis. This function can also be viewed as the Fourier transform of the probability density function, which can therefore be deduced from it by inverse Fourier transform.
Cumulant-generating function: The cumulant-generating function is defined as the logarithm of the moment-generating function; some instead define the cumulant-generating function as the logarithm of the characteristic function, while others call this latter the second cumulant-generating function.
Probability-generating function: The probability-generating function is defined as $G(z) = \operatorname{E}\left[z^X\right]$. This immediately implies that $G(e^t) = \operatorname{E}\left[e^{tX}\right] = M_X(t)$.
| Mathematics | Probability | null |
194637 | https://en.wikipedia.org/wiki/Acetate | Acetate | An acetate is a salt formed by the combination of acetic acid with a base (e.g. alkaline, earthy, metallic, nonmetallic or radical base). "Acetate" also describes the conjugate base or ion (specifically, the negatively charged ion called an anion) typically found in aqueous solution and written with the chemical formula CH3COO−. The neutral molecules formed by the combination of the acetate ion and a positive ion (called a cation) are also commonly called "acetates" (hence, acetate of lead, acetate of aluminium, etc.). The simplest of these is hydrogen acetate (called acetic acid) with corresponding salts, esters, and the polyatomic anion CH3CO2−, or CH3COO−.
Most of the approximately 5 million tonnes of acetic acid produced annually in industry are used in the production of acetates, which usually take the form of polymers. In nature, acetate is the most common building block for biosynthesis.
Nomenclature and common formula
When part of a salt, the formula of the acetate ion is written as CH3CO2−, C2H3O2−, or CH3COO−. Chemists often represent acetate as OAc− or, less commonly, AcO−. Thus, HOAc is the symbol for acetic acid, NaOAc for sodium acetate, and EtOAc for ethyl acetate (as Ac is a common symbol for the acetyl group CH3CO). The pseudoelement symbol "Ac" is also sometimes encountered in chemical formulas as indicating the entire acetate ion (CH3CO2−). It is not to be confused with the symbol of actinium, the first element of the actinide series; context guides disambiguation. For example, the formula for sodium acetate might be abbreviated as "NaOAc", rather than "NaC2H3O2". Care should also be taken to avoid confusion with peracetic acid when using the OAc abbreviation; for clarity and to avoid errors when translated, HOAc should be avoided in literature mentioning both compounds.
Although its systematic name is ethanoate, the common name acetate remains the preferred IUPAC name.
Salts
The acetate anion, [CH3COO]− (or [C2H3O2]−), is a member of the carboxylate family. It is the conjugate base of acetic acid. Above a pH of 5.5, acetic acid converts to acetate:
CH3COOH ⇌ CH3COO− + H+
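To make the pH threshold concrete (an added illustration, assuming the standard literature value pKa ≈ 4.76 for acetic acid), the Henderson–Hasselbalch equation gives the ratio of acetate to acetic acid:
$$\frac{[\mathrm{CH_3COO^-}]}{[\mathrm{CH_3COOH}]} = 10^{\,\mathrm{pH} - \mathrm{p}K_a} \approx 10^{5.5 - 4.76} \approx 5.5,$$
so at pH 5.5 roughly 85% of the total is in the acetate form, and the ratio grows tenfold with each additional pH unit.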
Many acetate salts are ionic, indicated by their tendency to dissolve well in water. A commonly encountered acetate in the home is sodium acetate, a white solid that can be prepared by combining vinegar and sodium bicarbonate ("bicarbonate of soda"):
CH3COOH + NaHCO3 → CH3COO−Na+ + H2O + CO2
Transition metals can be complexed by acetate. Examples of acetate complexes include chromium(II) acetate and basic zinc acetate.
Commercially important acetate salts are aluminium acetate, used in dyeing, ammonium acetate, a precursor to acetamide, and potassium acetate, used as a diuretic. All three salts are colourless and highly soluble in water.
Esters
Acetate esters have the general formula CH3CO2R, where R is an organyl group. The esters are the dominant forms of acetate in the marketplace. Unlike the acetate salts, acetate esters are often liquids, lipophilic, and sometimes volatile. They are popular because they have inoffensive, often sweet odors, they are inexpensive, and they are usually of low toxicity.
Almost half of acetic acid production is consumed in the production of vinyl acetate, precursor to polyvinyl alcohol, which is a component of many paints. The second largest use of acetic acid is consumed in the production of cellulose acetate. In fact, "acetate" is jargon for cellulose acetate, which is used in the production of fibres or diverse products, e.g. the acetate discs used in audio record production. Cellulose acetate can be found in many household products. Many industrial solvents are acetates, including methyl acetate, ethyl acetate, isopropyl acetate, and ethylhexyl acetate. Butyl acetate is a fragrance used in food products.
Acetate in biology
Acetate is a common anion in biology. It is mainly utilized by organisms in the form of acetyl coenzyme A.
Intraperitoneal injection of sodium acetate (20 or 60 mg per kg body mass) was found to induce headache in sensitized rats, and it has been proposed that acetate resulting from oxidation of ethanol is a major factor in causing hangovers. Increased serum acetate levels lead to accumulation of adenosine in many tissues including the brain, and administration of the adenosine receptor antagonist caffeine to rats after ethanol was found to decrease nociceptive behavior.
Acetate has known immunomodulatory properties and can affect the innate immune response to pathogenic bacteria such as the respiratory pathogen Haemophilus influenzae.
Fermentation acetyl CoA to acetate
Pyruvate is converted into acetyl-coenzyme A (acetyl-CoA) by the enzyme pyruvate dehydrogenase. This acetyl-CoA is then converted into acetate in E. coli, whilst producing ATP by substrate-level phosphorylation. Acetate formation requires two enzymes: phosphate acetyltransferase and acetate kinase.
acetyl-CoA + phosphate → acetyl-phosphate + CoA
acetyl-phosphate + ADP → acetate + ATP
Fermentation of acetate
Acetic acid can also undergo a dismutation reaction to produce methane and carbon dioxide:
CH3COO− + H+ → CH4 + CO2 ΔG° = −36 kJ/mol
This disproportionation reaction is catalysed by methanogen archaea in their fermentative metabolism. One electron is transferred from the carbonyl function (e− donor) of the carboxylic group to the methyl group (e− acceptor) of acetic acid to respectively produce CO2 and methane gas.
Structures
| Physical sciences | Acetates | Chemistry |
194873 | https://en.wikipedia.org/wiki/Allspice | Allspice | Allspice, also known as Jamaica pepper, myrtle pepper, pimenta, or pimento, is the dried unripe berry of Pimenta dioica, a midcanopy tree native to the Greater Antilles, southern Mexico, and Central America, now cultivated in many warm parts of the world. The name allspice was coined as early as 1621 by the English, who valued it as a spice that combined the flavours of cinnamon, nutmeg, and clove. Contrary to common misconception, it is not a mixture of spices.
Several unrelated fragrant shrubs are called "Carolina allspice" (Calycanthus floridus), "Japanese allspice" (Chimonanthus praecox), or "wild allspice" (Lindera benzoin).
Production
Allspice is the dried fruit of the Pimenta dioica plant. The fruits are picked when green and unripe, and are traditionally dried in the sun. When dry, they are brown and resemble large, smooth peppercorns. Fresh leaves are similar in texture to bay leaves and similarly used in cooking. Leaves and wood are often used for smoking meats where allspice is a local crop.
Care must be taken during drying to ensure that the volatile oils in the fruit, such as eugenol, remain in the end products rather than being driven out by the drying process.
Uses
Allspice is one of the most important ingredients of Jamaican cuisine. Under the name pimento, it is used in Jamaican jerk seasoning, and traditionally its wood was used to smoke jerk in Jamaica. In the West Indies, an allspice liqueur is produced under the name "pimento dram". In Mexican cuisine, it is used in many dishes, where it is known as pimienta gorda.
Allspice is also indispensable in Middle Eastern cuisine, particularly in the Levant, where it is used to flavour a variety of stews and meat dishes, as well as tomato sauce. In Arab cuisine, for example, many main dishes use allspice as the only spice.
In Northern European and North American cooking, it is an ingredient in commercial sausage preparations and curry powders, and in pickling.
In the United States, it is used mostly in desserts, but it is also responsible for giving Cincinnati-style chili its distinctive aroma and flavor. Allspice is commonly used in Great Britain, and appears in many dishes. In Portugal, whole allspice is used heavily in traditional stews cooked in large terracotta pots in the Azores islands.
In the United Kingdom it is a dominant flavour in the condiment Brown sauce.
Allspice is also one of the most used spices in Polish cuisine (used in most dishes, soups and stews) and is commonly known under the name English herb () since Britain was its major exporter.
Allspice is an important part of Swedish and Finnish cuisine. Whole allspice is used to flavour soups as well as stews such as Karelian hot pot. Ground allspice is also used in various dishes, such as minced meat sauces, Swedish meatballs, lutefisk and different cakes.
Cultivation, trade and origin
The allspice tree, classified as an evergreen shrub, can reach in height. Allspice can be a small, scrubby tree, quite similar to the bay laurel in size and form. It can also be a tall canopy tree, sometimes grown to provide shade for coffee trees planted underneath it. It can be grown outdoors in the tropics and subtropics with normal garden soil and watering. Smaller plants can be killed by frost; larger plants are more tolerant. It adapts well to container culture and can be kept as a houseplant or in a greenhouse.
Christopher Columbus became aware of allspice on his second New World voyage, and the plant soon became part of European diets. At the time, it was found only on the island of Jamaica, where birds readily spread the seeds. To protect the pimenta trade, Jamaican growers guarded against export of the plant. Many attempts at growing the pimenta from seeds were reported, but all failed. Eventually, passage through the avian digestive tract, whether due to the acidity or the elevated temperature, was found to be essential for germinating the seeds, which enabled successful germination elsewhere. Today, pimenta grows in Tonga and in Hawaii, where it has become naturalized on Kauai and Maui. Jamaica remains the leading source of the plant, although some is grown by other countries in the same region. In modern times, Central American countries such as Guatemala, Mexico, Honduras, and Belize also play a large role in world exports of allspice.
| Biology and health sciences | Herbs and spices | Plants |
194883 | https://en.wikipedia.org/wiki/Cymbopogon | Cymbopogon | Cymbopogon, also known as lemongrass, barbed wire grass, silky heads, oily heads, Cochin grass, Malabar grass, citronella grass or fever grass, is a genus of Asian, African, Australian, and tropical island plants in the grass family.
Some species (particularly Cymbopogon citratus) are commonly cultivated as culinary and medicinal herbs because of their scent, resembling that of lemons (Citrus limon).
The name Cymbopogon derives from the Greek words κυμβή (kymbē, 'boat') and πώγων (pōgōn, 'beard') "which mean [that] in most species, the hairy spikelets project from boat-shaped spathes." Lemongrass and its oil are believed to possess therapeutic properties.
Uses
Citronella grass (Cymbopogon nardus and Cymbopogon winterianus) grow to about and have magenta-colored base stems. These species are used for the production of citronella oil, which is used in soaps, as an insect repellent (especially mosquitoes and houseflies) in insect sprays and candles, and aromatherapy. The principal chemical constituents of citronella, geraniol and citronellol, are antiseptics, hence their use in household disinfectants and soaps. Besides oil production, citronella grass is also used for culinary purposes as a flavoring.
Culinary
East Indian lemongrass (Cymbopogon flexuosus), also called Cochin grass or Malabar grass, is native to Cambodia, Vietnam, Laos, India, Sri Lanka, Burma, and Thailand, while West Indian lemongrass (Cymbopogon citratus) is native to maritime Southeast Asia. While both can be used interchangeably, C. citratus is more suitable for cooking.
Folk medicine
In India, C. citratus is used as a medical herb and in perfumes. C. citratus is consumed as a tea for anxiety in Brazilian folk medicine, but a study in humans found no effect. The tea caused a recurrence of contact dermatitis in one case. Samoans and Tongans use mashed C. citratus (called moegalo and moengālō respectively) leaves as a traditional remedy for oral infections.
FDA classification
Lemongrass essential oil has been declared generally recognized as safe in food by the United States Food and Drug Administration.
Folk magic
In Hoodoo, lemongrass is the primary ingredient of van van oil, one of the most popular oils used in conjure. Lemongrass is used in this preparation and on its own in hoodoo to protect against evil, spiritually clean a house, and to bring good luck in love affairs.
Insect
In beekeeping, lemongrass oil imitates the pheromone emitted by a honeybee's Nasonov gland to attract bees to a hive or a swarm.
Species
Species in the genus include:
Cymbopogon ambiguus (Australian lemon-scented grass) – Australia, Timor
Cymbopogon annamensis – Yunnan, Laos, Vietnam, Thailand
Cymbopogon bhutanicus – Bhutan
Cymbopogon bombycinus (silky oilgrass) – Australia
Cymbopogon caesius – Sub-Saharan Africa, Indian Subcontinent, Yemen, Afghanistan, Madagascar, Comoros, Réunion
Cymbopogon calcicola – Thailand, Kedah
Cymbopogon calciphilus – Thailand
Cymbopogon cambogiensis – Thailand, Cambodia, Vietnam
Cymbopogon citratus (lemon grass or West Indian lemon grass) – Indonesia, Malaysia, Brunei, Philippines
Cymbopogon clandestinus – Thailand, Myanmar, Andaman Islands
Cymbopogon coloratus – Madhya Pradesh, Tamil Nadu, Myanmar, Vietnam
Cymbopogon commutatus – Sahel, East Africa, Arabian Peninsula, Iraq, Iran, Afghanistan, India, Pakistan
Cymbopogon densiflorus – central + south-central Africa
Cymbopogon dependens – Australia
Cymbopogon dieterlenii – Lesotho, Namibia, South Africa
Cymbopogon distans – Gansu, Guizhou, Shaanxi, Sichuan, Tibet, Yunnan, Nepal, northern Pakistan, Jammu & Kashmir
Cymbopogon exsertus – Nepal, Assam
Cymbopogon flexuosus (East Indian lemon grass) – Indian Subcontinent, Indochina
Cymbopogon gidarba – Indian Subcontinent, Myanmar, Yunnan
Cymbopogon giganteus – Africa, Madagascar
Cymbopogon globosus – Maluku, New Guinea, Queensland
Cymbopogon goeringii – China, Korea, Japan incl Ryukyu Islands, Vietnam
Cymbopogon gratus – Queensland
Cymbopogon jwarancusa – Socotra, Turkey, Middle East, Arabian Peninsula, Iraq, Iran, Afghanistan, Indian Subcontinent, Tibet, Sichuan, Yunnan, Vietnam
Cymbopogon khasianus – Yunnan, Guangxi, Assam, Bhutan, Bangladesh, Myanmar, Thailand
Cymbopogon liangshanensis – Sichuan
Cymbopogon mandalaiaensis – Myanmar
Cymbopogon marginatus – Cape Province of South Africa
Cymbopogon martini (palmarosa) – Indian Subcontinent, Myanmar, Vietnam
Cymbopogon mekongensis – China, Indochina
Cymbopogon microstachys – Indian Subcontinent, Myanmar, Thailand, Yunnan
Cymbopogon microthecus – Nepal, Bhutan, Assam, West Bengal, Bangladesh
Cymbopogon minor – Yunnan
Cymbopogon minutiflorus – Sulawesi
Cymbopogon nardus (citronella grass) – Indian Subcontinent, Indochina, central + southern Africa, Madagascar, Seychelles
Cymbopogon nervatus – Myanmar, Thailand, central Africa
Cymbopogon obtectus (silky-heads) – Australia
Cymbopogon osmastonii – India, Bangladesh
Cymbopogon pendulus – Yunnan, eastern Himalayas, Myanmar, Vietnam
Cymbopogon polyneuros – Tamil Nadu, Sri Lanka, Myanmar
Cymbopogon pospischilii – eastern + southern Africa, Oman, Yemen, Himalayas, Tibet, Yunnan
Cymbopogon procerus – Australia, New Guinea, Maluku, Lesser Sunda Islands, Sulawesi
Cymbopogon pruinosus – islands of Indian Ocean
Cymbopogon queenslandicus – Queensland
Cymbopogon quinhonensis – Vietnam
Cymbopogon rectus – Lesser Sunda Islands, Java
Cymbopogon refractus (barbed wire grass) – Australia incl Norfolk Island
Cymbopogon schoenanthus (camel hay or camel grass) – Sahara, Sahel, eastern Africa, Arabian Peninsular, Iran
Cymbopogon tortilis – China incl Taiwan, Ryukyu + Bonin Is, Philippines, Vietnam, Maluku
Cymbopogon tungmaiensis – Sichuan, Tibet, Yunnan
Cymbopogon winterianus (Java citronella, citronella grass) – Borneo, Java, Sumatra
Cymbopogon xichangensis – Sichuan
Formerly included
Numerous species are now regarded as better suited to other genera, including Andropogon, Exotheca, Hyparrhenia, Iseilema, Schizachyrium, and Themeda.
Images
| Biology and health sciences | Herbs and spices | Plants |
194926 | https://en.wikipedia.org/wiki/Petersen%20graph | Petersen graph | In the mathematical field of graph theory, the Petersen graph is an undirected graph with 10 vertices and 15 edges. It is a small graph that serves as a useful example and counterexample for many problems in graph theory. The Petersen graph is named after Julius Petersen, who in 1898 constructed it to be the smallest bridgeless cubic graph with no three-edge-coloring.
Although the graph is generally credited to Petersen, it had in fact first appeared 12 years earlier, in an 1886 paper by Alfred Kempe. Kempe observed that its vertices can represent the ten lines of the Desargues configuration, and its edges represent pairs of lines that do not meet at one of the ten points of the configuration.
Donald Knuth states that the Petersen graph is "a remarkable configuration that serves as a counterexample to many optimistic predictions about what might be true for graphs in general."
The Petersen graph also makes an appearance in tropical geometry. The cone over the Petersen graph is naturally identified with the moduli space of five-pointed rational tropical curves.
Constructions
The Petersen graph is the complement of the line graph of K5. It is also the Kneser graph KG(5,2); this means that it has one vertex for each 2-element subset of a 5-element set, and two vertices are connected by an edge if and only if the corresponding 2-element subsets are disjoint from each other. As a Kneser graph of the form KG(2n−1, n−1) it is an example of an odd graph.
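This construction is small enough to verify directly (an added sketch, assuming the NetworkX library is available):

```python
from itertools import combinations
import networkx as nx

# Kneser graph KG(5,2): vertices are the 2-element subsets of {0,...,4};
# edges join pairs of disjoint subsets.
pairs = [frozenset(c) for c in combinations(range(5), 2)]
K = nx.Graph()
K.add_nodes_from(pairs)
K.add_edges_from((a, b) for a, b in combinations(pairs, 2) if not (a & b))

print(nx.is_isomorphic(K, nx.petersen_graph()))  # True
```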
Geometrically, the Petersen graph is the graph formed by the vertices and edges of the hemi-dodecahedron, that is, a dodecahedron with opposite points, lines and faces identified together.
Embeddings
The Petersen graph is nonplanar. Any nonplanar graph has as minors either the complete graph K5 or the complete bipartite graph K3,3, but the Petersen graph has both as minors. The K5 minor can be formed by contracting the edges of a perfect matching, for instance the five short edges in the first picture. The K3,3 minor can be formed by deleting one vertex (for instance the central vertex of the 3-symmetric drawing) and contracting an edge incident to each neighbor of the deleted vertex.
The most common and symmetric plane drawing of the Petersen graph, as a pentagram within a pentagon, has five crossings. However, this is not the best drawing for minimizing crossings; there exists another drawing (shown in the figure) with only two crossings. Because it is nonplanar, it has at least one crossing in any drawing, and if a crossing edge is removed from any drawing it remains nonplanar and has another crossing; therefore, its crossing number is exactly 2. Each edge in this drawing is crossed at most once, so the Petersen graph is 1-planar. On a torus the Petersen graph can be drawn without edge crossings; it therefore has orientable genus 1.
The Petersen graph can also be drawn (with crossings) in the plane in such a way that all the edges have equal length. That is, it is a unit distance graph.
The simplest non-orientable surface on which the Petersen graph can be embedded without crossings is the projective plane. This is the embedding given by the hemi-dodecahedron construction of the Petersen graph (shown in the figure). The projective plane embedding can also be formed from the standard pentagonal drawing of the Petersen graph by placing a cross-cap within the five-point star at the center of the drawing, and routing the star edges through this cross-cap; the resulting drawing has six pentagonal faces. This construction forms a regular map and shows that the Petersen graph has non-orientable genus 1.
Symmetries
The Petersen graph is strongly regular (with signature srg(10,3,0,1)). It is also symmetric, meaning that it is edge transitive and vertex transitive. More strongly, it is 3-arc-transitive: every directed three-edge path in the Petersen graph can be transformed into every other such path by a symmetry of the graph.
It is one of only 13 cubic distance-regular graphs.
The automorphism group of the Petersen graph is the symmetric group S5; the action of S5 on the Petersen graph follows from its construction as a Kneser graph. The Petersen graph is a core: every homomorphism of the Petersen graph to itself is an automorphism. As shown in the figures, the drawings of the Petersen graph may exhibit five-way or three-way symmetry, but it is not possible to draw the Petersen graph in the plane in such a way that the drawing exhibits the full symmetry group of the graph.
Despite its high degree of symmetry, the Petersen graph is not a Cayley graph. It is the smallest vertex-transitive graph that is not a Cayley graph.
Hamiltonian paths and cycles
The Petersen graph has a Hamiltonian path but no Hamiltonian cycle. It is the smallest bridgeless cubic graph with no Hamiltonian cycle. It is hypohamiltonian, meaning that although it has no Hamiltonian cycle, deleting any vertex makes it Hamiltonian, and is the smallest hypohamiltonian graph.
As a finite connected vertex-transitive graph that does not have a Hamiltonian cycle, the Petersen graph is a counterexample to a variant of the Lovász conjecture, but the canonical formulation of the conjecture asks for a Hamiltonian path and is verified by the Petersen graph.
Only five connected vertex-transitive graphs with no Hamiltonian cycles are known: the complete graph K2, the Petersen graph, the Coxeter graph and two graphs derived from the Petersen and Coxeter graphs by replacing each vertex with a triangle. If G is a 2-connected, r-regular graph with at most 3r + 1 vertices, then G is Hamiltonian or G is the Petersen graph.
To see that the Petersen graph has no Hamiltonian cycle, consider the edges in the cut disconnecting the inner 5-cycle from the outer one. If there is a Hamiltonian cycle C, it must contain an even number of these edges. If it contains only two of them, their end-vertices must be adjacent in the two 5-cycles, which is not possible. Hence, it contains exactly four of them. Assume that the top edge of the cut is not contained in C (all the other cases are the same by symmetry). Of the five edges in the outer cycle, the two top edges must be in C, the two side edges must not be in C, and hence the bottom edge must be in C. The top two edges in the inner cycle must be in C, but this completes a non-spanning cycle, which cannot be part of a Hamiltonian cycle.

Alternatively, we can also describe the ten-vertex 3-regular graphs that do have a Hamiltonian cycle and show that none of them is the Petersen graph, by finding a cycle in each of them that is shorter than any cycle in the Petersen graph. Any ten-vertex Hamiltonian 3-regular graph consists of a ten-vertex cycle C plus five chords. If any chord connects two vertices at distance two or three along C from each other, the graph has a 3-cycle or 4-cycle, and therefore cannot be the Petersen graph. If two chords connect opposite vertices of C to vertices at distance four along C, there is again a 4-cycle. The only remaining case is a Möbius ladder formed by connecting each pair of opposite vertices by a chord, which again has a 4-cycle. Since the Petersen graph has girth five, it cannot be formed in this way and has no Hamiltonian cycle.
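Because the graph has only ten vertices, the non-Hamiltonicity argued above can also be confirmed by exhaustive search (an added sketch, assuming NetworkX):

```python
from itertools import permutations
import networkx as nx

G = nx.petersen_graph()
first, *rest = list(G.nodes)

# Fix one starting vertex and try every ordering of the remaining nine:
# 9! = 362,880 candidate cycles.
def has_hamiltonian_cycle():
    for perm in permutations(rest):
        cycle = (first, *perm)
        if all(G.has_edge(cycle[i], cycle[(i + 1) % 10]) for i in range(10)):
            return True
    return False

print(has_hamiltonian_cycle())  # False
```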
Coloring
The Petersen graph has chromatic number 3, meaning that its vertices can be colored with three colors — but not with two — such that no edge connects vertices of the same color. It has a list coloring with 3 colors, by Brooks' theorem for list colorings.
The Petersen graph has chromatic index 4; coloring the edges requires four colors. As a connected bridgeless cubic graph with chromatic index four, the Petersen graph is a snark. It is the smallest possible snark, and was the only known snark from 1898 until 1946. The snark theorem, a result conjectured by W. T. Tutte and announced in 2001 by Robertson, Sanders, Seymour, and Thomas, states that every snark has the Petersen graph as a minor.
Additionally, the graph has fractional chromatic index 3, proving that the difference between the chromatic index and fractional chromatic index can be as large as 1. The long-standing Goldberg-Seymour Conjecture proposes that this is the largest gap possible.
The Thue number (a variant of the chromatic index) of the Petersen graph is 5.
The Petersen graph requires at least three colors in any (possibly improper) coloring that breaks all of its symmetries; that is, its distinguishing number is three. Except for the complete graphs, it is the only Kneser graph whose distinguishing number is not two.
Other properties
The Petersen graph:
is 3-connected and hence 3-edge-connected and bridgeless. See the glossary.
has independence number 4 and is 3-partite. See the glossary.
is cubic, has domination number 3, and has a perfect matching and a 2-factor.
has 6 distinct perfect matchings.
is the smallest cubic graph of girth 5. (It is the unique (3,5)-cage. In fact, since it has only 10 vertices, it is the unique (3,5)-Moore graph.)
is such that every cubic bridgeless graph without a Petersen graph minor has a cycle double cover.
is the smallest cubic graph with Colin de Verdière graph invariant μ = 5.
is the smallest graph of cop number 3.
has radius 2 and diameter 2. It is the largest cubic graph with diameter 2.
has 2000 spanning trees, the most of any 10-vertex cubic graph.
has chromatic polynomial t(t − 1)(t − 2)(t^7 − 12t^6 + 67t^5 − 230t^4 + 529t^3 − 814t^2 + 775t − 352).
has characteristic polynomial (t − 3)(t − 1)^5(t + 2)^4, making it an integral graph—a graph whose spectrum consists entirely of integers.
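The integral spectrum is easy to check numerically (an added sketch, assuming NetworkX and NumPy):

```python
import networkx as nx
import numpy as np

# Eigenvalues of the adjacency matrix; eigvalsh applies since it is symmetric.
A = nx.to_numpy_array(nx.petersen_graph())
eigs = np.round(np.linalg.eigvalsh(A), 6)

vals, counts = np.unique(eigs, return_counts=True)
print(dict(zip(vals, counts)))  # {-2.0: 4, 1.0: 5, 3.0: 1}
```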
Petersen coloring conjecture
An Eulerian subgraph of a graph G is a subgraph consisting of a subset of the edges of G, touching every vertex of G an even number of times. These subgraphs are the elements of the cycle space of G and are sometimes called cycles. If G and H are any two graphs, a function from the edges of G to the edges of H is defined to be cycle-continuous if the pre-image of every cycle of H is a cycle of G. A conjecture of Jaeger asserts that every bridgeless graph has a cycle-continuous mapping to the Petersen graph. Jaeger showed this conjecture implies the 5-cycle-double-cover conjecture and the Berge-Fulkerson conjecture.
Related graphs
The generalized Petersen graph G(n, k) is formed by connecting the vertices of a regular n-gon to the corresponding vertices of a star polygon with Schläfli symbol {n/k}. For instance, in this notation, the Petersen graph is G(5, 2): it can be formed by connecting corresponding vertices of a pentagon and five-point star, and the edges in the star connect every second vertex. The generalized Petersen graphs also include the n-prism G(n, 1), the Dürer graph G(6, 2), the Möbius-Kantor graph G(8, 3), the dodecahedron G(10, 2), the Desargues graph G(10, 3) and the Nauru graph G(12, 5).
The Petersen family consists of the seven graphs that can be formed from the Petersen graph by zero or more applications of Δ-Y or Y-Δ transforms. The complete graph K6 is also in the Petersen family. These graphs form the forbidden minors for linklessly embeddable graphs, graphs that can be embedded into three-dimensional space in such a way that no two cycles in the graph are linked.
The Clebsch graph contains many copies of the Petersen graph as induced subgraphs: for each vertex v of the Clebsch graph, the ten non-neighbors of v induce a copy of the Petersen graph.
| Mathematics | Graph theory | null |
195100 | https://en.wikipedia.org/wiki/Voskhod%20programme | Voskhod programme | The Voskhod programme (Восход, "Ascent" or "Dawn") was the second Soviet human spaceflight project. Two one-day crewed missions were flown using the Voskhod spacecraft and rocket, one in 1964 and one in 1965, and two dogs flew on a 22-day mission in 1966.
Voskhod development was both a follow-on to the Vostok programme and a recycling of components left over from that programme's cancellation following its first six flights. The Voskhod programme was superseded by the Soyuz programme.
Design
The Voskhod spacecraft was basically a Vostok spacecraft that had a backup, solid-fueled retrorocket added to the top of the descent module. As it was much heavier, the launch vehicle would be the 11A57, a Molniya 8K78M with the Blok L stage removed and later the basis of the Soyuz booster. The ejection seat was removed and two or three crew couches were added to the interior at a 90-degree angle to that of the Vostok crew position. However, the position of the in-flight controls was not changed, so the crew had to crane their heads 90° to see the instruments.
In the case of Voskhod 2, an inflatable exterior airlock was also added to the descent module opposite the entry hatch. The airlock was jettisoned after use. This apparatus was needed because the vehicle avionics and environmental systems were air-cooled, and depressurization in orbit would cause overheating. A solid-fueled braking rocket was also added to the parachute lines to provide for a softer landing at touchdown. This was necessary because, unlike the Vostok, the Voskhod descent module landed with the crewmen still inside.
Unlike Vostok and the later Soyuz, Voskhod had no launch abort system, meaning that the crew lacked any means of escape from a malfunctioning launch vehicle.
Voskhod had a solid-fueled backup retrorocket on top of the capsule in case the main one failed (as it did on Voskhod 2). While Vostok lacked this feature, it was not considered a problem since the spacecraft would decay from orbit within 10 days. Relatively lightweight, Voskhod was well below the 11A57 booster's lift capacity, meaning that it launched into a much higher orbit and would not decay as quickly.
Flights
The Voskhod flights, with launch dates:
Uncrewed
Kosmos 47 – Uncrewed test flight of the Voskhod hardware.
Kosmos 57 – Uncrewed test flight, unsuccessful.
Kosmos 110 – Uncrewed, sent two dogs, Veterok and Ugolyok, on a 22-day flight, launched 22 February 1966 and landed 16 March.
Crewed
Cancelled
Voskhod 3 – 19-day two-man mission to study long-term weightlessness with artificial gravity, medical, military and other experiments
Voskhod 4 – 20-day single-man mission to study long-term weightlessness with artificial gravity, medical, military, and other experiments
Voskhod 5 – 10-day two-woman mission with medical and other experiments and first female EVA-spacewalk
Voskhod 6 – 15-day two-man mission with military and other experiments and multiple spacewalks to test new EVA jet belt
Results
While the Vostok programme was dedicated more toward understanding the effects of space travel and microgravity on the human body, Voskhod's two flights were more aimed towards spectacular firsts. Although achieving the first EVA ("spacewalk") became the main success of the programme, beating the American Project Gemini to put the first multiman crew in orbit was the objective that initially motivated the programme. After those goals were realized, the programme planned to focus on other advances the spacecraft could accomplish, such as longer duration and a second female flight. However, there were delays preparing for Voskhod 3, and during that time the Gemini programme accomplished most of what had been planned for future Voskhods. In the end, the Voskhod programme was abandoned, aided by a change in Soviet leadership which was less concerned about stunt and prestige flights, and this allowed the Soviet designers to concentrate on the Soyuz programme.
| Technology | Programs and launch sites | null |
195137 | https://en.wikipedia.org/wiki/Oil%20refinery | Oil refinery | An oil refinery or petroleum refinery is an industrial process plant where petroleum (crude oil) is transformed and refined into products such as gasoline (petrol), diesel fuel, asphalt base, fuel oils, heating oil, kerosene, liquefied petroleum gas and petroleum naphtha. Petrochemical feedstock like ethylene and propylene can also be produced directly by cracking crude oil without the need of using refined products of crude oil such as naphtha. The crude oil feedstock has typically been processed by an oil production plant. There is usually an oil depot at or near an oil refinery for the storage of incoming crude oil feedstock as well as bulk liquid products. In 2020, the total capacity of global refineries for crude oil was about 101.2 million barrels per day.
Oil refineries are typically large, sprawling industrial complexes with extensive piping running throughout, carrying streams of fluids between large chemical processing units, such as distillation columns. Oil refineries use many different technologies and can be thought of as a type of chemical plant. Since December 2008, the world's largest oil refinery has been the Jamnagar Refinery owned by Reliance Industries, located in Gujarat, India, with a processing capacity of about 1.24 million barrels per day.
Oil refineries are an essential part of the petroleum industry's downstream sector.
History
The Chinese were among the first civilizations to refine oil. As early as the first century, the Chinese were refining crude oil for use as an energy source. Between 512 and 518, in the late Northern Wei dynasty, the Chinese geographer, writer and politician Li Daoyuan introduced the process of refining oil into various lubricants in his famous work Commentary on the Water Classic.
Crude oil was often distilled by Persian chemists, with clear descriptions given in handbooks such as those of Muhammad ibn Zakarīya Rāzi. The streets of Baghdad were paved with tar, derived from petroleum that became accessible from natural fields in the region. In the 9th century, oil fields were exploited in the area around modern Baku, Azerbaijan. These fields were described by the Arab geographer Abu al-Hasan 'Alī al-Mas'ūdī in the 10th century, and by Marco Polo in the 13th century, who described the output of those wells as hundreds of shiploads. Arab and Persian chemists also distilled crude oil in order to produce flammable products for military purposes. Through Islamic Spain, distillation became available in Western Europe by the 12th century.
In the Northern Song dynasty (960–1127), a workshop called the "Fierce Oil Workshop", was established in the city of Kaifeng to produce refined oil for the Song military as a weapon. The troops would then fill iron cans with refined oil and throw them toward the enemy troops, causing a fire – effectively the world's first "fire bomb". The workshop was one of the world's earliest oil refining factories where thousands of people worked to produce Chinese oil-powered weaponry.
Prior to the nineteenth century, petroleum was known and utilized in various fashions in Babylon, Egypt, China, the Philippines, Rome and Azerbaijan. However, the modern history of the petroleum industry is said to have begun in 1846 when Abraham Gesner of Nova Scotia, Canada devised a process to produce kerosene from coal. Shortly thereafter, in 1854, Ignacy Łukasiewicz began producing kerosene from hand-dug oil wells near the town of Krosno, Poland.
Romania was registered as the first country in world oil production statistics, according to the Academy of World Records.
In North America, the first oil well was drilled in 1858 by James Miller Williams in Oil Springs, Ontario, Canada. In the United States, the petroleum industry began in 1859 when Edwin Drake found oil near Titusville, Pennsylvania. The industry grew slowly in the 1800s, primarily producing kerosene for oil lamps. In the early twentieth century, the introduction of the internal combustion engine and its use in automobiles created a market for gasoline that was the impetus for fairly rapid growth of the petroleum industry. The early finds of petroleum like those in Ontario and Pennsylvania were soon outstripped by large oil "booms" in Oklahoma, Texas and California.
Samuel Kier established America's first oil refinery in Pittsburgh on Seventh Avenue near Grant Street, in 1853. Polish pharmacist and inventor Ignacy Łukasiewicz established an oil refinery in Jasło, then part of the Austro-Hungarian Empire (now in Poland) in 1854.
The first large refinery opened at Ploiești, Romania, in 1856–1857.
It was in Ploiești that, 51 years later, in 1908, Lazăr Edeleanu, a Romanian chemist of Jewish origin who had earned his PhD in 1887 with the discovery of amphetamine, invented, patented and tested at industrial scale the first modern method of liquid extraction for refining crude oil, the Edeleanu process. This increased refining efficiency compared to pure fractional distillation and allowed a massive development of refining plants. The process was subsequently implemented in France, Germany and the U.S., and within a few decades it had spread worldwide. In 1910 Edeleanu founded the "Allgemeine Gesellschaft für Chemische Industrie" in Germany, which, given the success of the name, was renamed Edeleanu GmbH in 1930. During the Nazi era, the company was bought by Deutsche Erdöl-AG, and Edeleanu, being of Jewish origin, moved back to Romania. After the war, the trademark was used by the successor company EDELEANU Gesellschaft mbH Alzenau (RWE) for many petroleum products, while the company itself was later integrated as EDL into the Pörner Group.
The Ploiești refineries, after being taken over by Nazi Germany, were bombed in the 1943 Operation Tidal Wave by the Allies, during the Oil Campaign of World War II.
Another close contender for the title of hosting the world's oldest oil refinery is Salzbergen in Lower Saxony, Germany. Salzbergen's refinery was opened in 1860.
At one point, the refinery in Ras Tanura, Saudi Arabia, owned by Saudi Aramco was claimed to be the largest oil refinery in the world. For most of the 20th century, the largest refinery was the Abadan Refinery in Iran. This refinery suffered extensive damage during the Iran–Iraq War. Since 25 December 2008, the world's largest refinery complex has been the Jamnagar Refinery Complex, consisting of two refineries side by side operated by Reliance Industries Limited in Jamnagar, India, with a combined production capacity of about 1.24 million barrels per day. PDVSA's Paraguaná Refinery Complex on the Paraguaná Peninsula, Venezuela, whose effective run rates have been dramatically lower than its nameplate capacity due to the impact of 20 years of sanctions, and SK Energy's Ulsan refinery in South Korea are the second and third largest, respectively.
Until the early 1940s, most petroleum refineries in the United States consisted simply of crude oil distillation units (often referred to as atmospheric crude oil distillation units). Some refineries also had vacuum distillation units as well as thermal cracking units such as visbreakers (viscosity breakers, units to lower the viscosity of the oil). All of the many other refining processes discussed below were developed during World War II or within a few years after it. They became commercially available within 5 to 10 years after the war ended, and the worldwide petroleum industry experienced very rapid growth. The driving force for that growth in technology and in the number and size of refineries worldwide was the growing demand for automotive gasoline and aircraft fuel.
In the United States, for various complex economic and political reasons, the construction of new refineries came to a virtual stop in about the 1980s. However, many of the existing refineries in the United States have revamped many of their units and/or constructed add-on units in order to increase their crude oil processing capacity, increase the octane rating of their product gasoline, lower the sulfur content of their diesel fuel and home heating fuels, and comply with environmental air pollution and water pollution requirements.
United States
In the 19th century, refineries in the U.S. processed crude oil primarily to recover the kerosene. There was no market for the more volatile fraction, including gasoline, which was considered waste and was often dumped directly into the nearest river. The invention of the automobile shifted the demand to gasoline and diesel, which remain the primary refined products today.
Today, national and state legislation require refineries to meet stringent air and water cleanliness standards. In fact, oil companies in the U.S. perceive obtaining a permit to build a modern refinery to be so difficult and costly that no new refineries were built (though many have been expanded) in the U.S. from 1976 until 2014, when the small Dakota Prairie Refinery in North Dakota began operation. More than half the refineries that existed in 1981 are now closed due to low utilization rates and accelerating mergers. As a result of these closures, total US refinery capacity fell between 1981 and 1995, though operating capacity stayed fairly constant over that period. Increases in facility size and improvements in efficiencies have offset much of the lost physical capacity of the industry. In 1982 (the earliest data provided), the United States operated 301 refineries; by 2010 there were 149 operable refineries, and by 2014 the count had fallen to 140 even as total capacity increased. In order to reduce operating costs and depreciation, refining has been concentrated at fewer sites of larger capacity.
In 2009 and 2010, as revenue streams in the oil business dried up and the profitability of oil refineries fell due to lower demand for product and high reserves of supply preceding the economic recession, oil companies began to close or sell their less profitable refineries.
Operation
Raw or unprocessed crude oil is not generally useful in industrial applications, although "light, sweet" (low viscosity, low sulfur) crude oil has been used directly as a burner fuel to produce steam for the propulsion of seagoing vessels. The lighter elements, however, form explosive vapors in the fuel tanks and are therefore hazardous, especially in warships. Instead, the hundreds of different hydrocarbon molecules in crude oil are separated in a refinery into components that can be used as fuels, lubricants, and feedstocks in petrochemical processes that manufacture such products as plastics, detergents, solvents, elastomers, and fibers such as nylon and polyesters.
Petroleum fossil fuels are burned in internal combustion engines to provide power for ships, automobiles, aircraft engines, lawn mowers, dirt bikes, and other machines. Different boiling points allow the hydrocarbons to be separated by distillation. Since the lighter liquid products are in great demand for use in internal combustion engines, a modern refinery will convert heavy hydrocarbons and lighter gaseous elements into these higher-value products.
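Since separation by boiling point is the organizing idea here, a small sketch may help make it concrete. The cut points below are approximate, generic textbook values rather than figures from this article, and the function is purely illustrative; real refineries set cut points to match their crude slate and product demand.

    # Approximate atmospheric boiling ranges (degrees C) for common crude
    # fractions; illustrative textbook values only, and deliberately
    # overlapping, since adjacent cuts overlap in practice.
    FRACTIONS = [
        ("refinery gas / LPG", None, 30),
        ("naphtha / gasoline", 30, 200),
        ("kerosene / jet fuel", 150, 270),
        ("diesel / gas oil", 180, 360),
        ("residuum (to vacuum unit)", 360, None),
    ]

    def fractions_for(boiling_point_c):
        """Return the names of fractions whose range contains the boiling point."""
        return [name for name, lo, hi in FRACTIONS
                if (lo is None or boiling_point_c >= lo)
                and (hi is None or boiling_point_c <= hi)]

    print(fractions_for(190))  # ['naphtha / gasoline', 'kerosene / jet fuel', 'diesel / gas oil']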
Oil can be used in a variety of ways because it contains hydrocarbons of varying molecular masses, forms and lengths, such as paraffins, aromatics, naphthenes (or cycloalkanes), alkenes, dienes, and alkynes. While the molecules in crude oil include different atoms such as sulfur and nitrogen, the hydrocarbons are the most common form of molecules; these are molecules of varying lengths and complexity made of hydrogen and carbon atoms, with a small number of oxygen atoms. The differences in the structure of these molecules account for their varying physical and chemical properties, and it is this variety that makes crude oil useful in a broad range of applications.
Once separated and purified of any contaminants and impurities, the fuel or lubricant can be sold without further processing. Smaller molecules such as isobutane and propylene or butylenes can be recombined to meet specific octane requirements by processes such as alkylation, or more commonly, dimerization. The octane grade of gasoline can also be improved by catalytic reforming, which involves removing hydrogen from hydrocarbons to produce compounds with higher octane ratings, such as aromatics. Intermediate products such as gasoils can even be reprocessed to break a heavy, long-chained oil into a lighter, short-chained one by various forms of cracking, such as fluid catalytic cracking, thermal cracking, and hydrocracking. The final step in gasoline production is the blending of fuels with different octane ratings, vapor pressures, and other properties to meet product specifications. Another method for reprocessing and upgrading these intermediate products (residual oils) uses a devolatilization process to separate usable oil from the waste asphaltene material. Certain cracked streams are particularly suitable for producing petrochemicals such as polypropylene, heavier polymers, and block polymers, depending on the molecular weight and characteristics of the olefin species cracked from the source feedstock.
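The blending step just described can be illustrated with a volume-weighted calculation. Treating octane as blending linearly by volume is a deliberate simplification (octane blending is actually non-linear, and refiners use empirical blending indices), and the component volumes and octane numbers below are hypothetical.

    # Hypothetical gasoline blend: (component, volume fraction, research octane number).
    # Linear volumetric averaging of RON is a simplification of real,
    # non-linear blending behaviour.
    components = [
        ("reformate", 0.40, 98.0),
        ("FCC gasoline", 0.35, 92.0),
        ("alkylate", 0.15, 96.0),
        ("isomerate", 0.10, 88.0),
    ]

    assert abs(sum(vol for _, vol, _ in components) - 1.0) < 1e-9
    blend_ron = sum(vol * ron for _, vol, ron in components)
    print(f"approximate blend RON: {blend_ron:.1f}")  # 94.6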
Oil refineries are large-scale plants, processing about a hundred thousand to several hundred thousand barrels of crude oil a day. Because of the high capacity, many of the units operate continuously, as opposed to processing in batches, at steady state or nearly steady state for months to years. The high capacity also makes process optimization and advanced process control very desirable.
Major products
Petroleum products are materials derived from crude oil (petroleum) as it is processed in oil refineries. The majority of petroleum is converted to petroleum products, which includes several classes of fuels.
Oil refineries also produce various intermediate products such as hydrogen, light hydrocarbons, reformate and pyrolysis gasoline. These are not usually transported but instead are blended or processed further on-site. Chemical plants are thus often adjacent to oil refineries, or a number of further chemical processes are integrated into the refinery itself. For example, light hydrocarbons are steam-cracked in an ethylene plant, and the produced ethylene is polymerized to produce polyethene.
To ensure both proper separation and environmental protection, a very low sulfur content is necessary in all but the heaviest products. The crude sulfur contaminant is transformed to hydrogen sulfide via catalytic hydrodesulfurization and removed from the product stream via amine gas treating. Using the Claus process, the hydrogen sulfide is then converted to elemental sulfur to be sold to the chemical industry. The considerable heat released by this process is used directly in other parts of the refinery. Often an electrical power plant is combined into the whole refinery process to take up the excess heat.
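The Claus chemistry mentioned above proceeds in two overall steps: a thermal stage burns roughly a third of the hydrogen sulfide to sulfur dioxide, and a catalytic stage reacts the remaining hydrogen sulfide with that sulfur dioxide. The standard overall equations are:

    \[ 2\,\mathrm{H_2S} + 3\,\mathrm{O_2} \rightarrow 2\,\mathrm{SO_2} + 2\,\mathrm{H_2O} \quad \text{(thermal stage)} \]
    \[ 2\,\mathrm{H_2S} + \mathrm{SO_2} \rightarrow 3\,\mathrm{S} + 2\,\mathrm{H_2O} \quad \text{(catalytic stage)} \]
    \[ 2\,\mathrm{H_2S} + \mathrm{O_2} \rightarrow 2\,\mathrm{S} + 2\,\mathrm{H_2O} \quad \text{(net reaction)} \]

Both stages are exothermic, which is why the process releases the large amount of heat noted above.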
According to the composition of the crude oil and depending on the demands of the market, refineries can produce different shares of petroleum products. The largest share of oil products is used as "energy carriers", i.e. various grades of fuel oil and gasoline. These fuels include or can be blended to give gasoline, jet fuel, diesel fuel, heating oil, and heavier fuel oils. Heavier (less volatile) fractions can also be used to produce asphalt, tar, paraffin wax, lubricating and other heavy oils. Refineries also produce other chemicals, some of which are used in chemical processes to produce plastics and other useful materials. Since petroleum often contains a few percent sulfur-containing molecules, elemental sulfur is also often produced as a petroleum product. Carbon, in the form of petroleum coke, and hydrogen may also be produced as petroleum products. The hydrogen produced is often used as an intermediate product for other oil refinery processes such as hydrocracking and hydrodesulfurization.
Petroleum products are usually grouped into four categories: light distillates (LPG, gasoline, naphtha), middle distillates (kerosene, jet fuel, diesel), heavy distillates, and residuum (heavy fuel oil, lubricating oils, wax, asphalt). These require blending various feedstocks, mixing appropriate additives, providing short-term storage, and preparation for bulk loading to trucks, barges, product ships, and railcars. This classification is based on the way crude oil is distilled and separated into fractions.
Gaseous fuels such as liquefied petroleum gas and propane, stored and shipped in liquid form under pressure.
Lubricants (produces light machine oils, motor oils, and greases, adding viscosity stabilizers as required), usually shipped in bulk to an offsite packaging plant.
Paraffin wax, used in the candle industry, among others. May be shipped in bulk to a site to prepare as packaged blocks. Used for wax emulsions, candles, matches, rust protection, vapor barriers, construction board, and packaging of frozen foods.
Sulfur (or sulfuric acid), byproducts of sulfur removal from petroleum which may have up to a couple of percent sulfur as organic sulfur-containing compounds. Sulfur and sulfuric acid are useful industrial materials. Sulfuric acid is usually prepared and shipped as the acid precursor oleum.
Bulk tar shipping for offsite unit packaging for use in tar-and-gravel roofing.
Asphalt used as a binder for gravel to form asphalt concrete, which is used for paving roads, lots, etc. An asphalt unit prepares bulk asphalt for shipment.
Petroleum coke, used in specialty carbon products like electrodes or as solid fuel.
Petrochemicals are organic compounds that serve as ingredients for the chemical industry, for products ranging from polymers to pharmaceuticals. They include ethylene and benzene-toluene-xylenes ("BTX"), which are often sent to petrochemical plants for further processing in a variety of ways. The petrochemicals may be olefins or their precursors, or various types of aromatic petrochemicals.
Gasoline
Naphtha
Kerosene and related jet aircraft fuels
Diesel fuel and fuel oils
Heat
Electricity
Over 6,000 items are made from petroleum waste by-products, including fertilizer, floor coverings, perfume, insecticide, petroleum jelly, soap, and vitamin capsules.
Chemical processes
Desalter unit washes out salt and other water-soluble contaminants from the crude oil before it enters the atmospheric distillation unit.
Pre-flash and/or pre-distillation units, which are found in most atmospheric crude oil units of more than 100,000 bpsd capacity.
Crude oil distillation unit distills the incoming crude oil into various fractions for further processing in other units. See continuous distillation.
Vacuum distillation further distills the residue oil from the bottom of the crude oil distillation unit. The vacuum distillation is performed at a pressure well below atmospheric pressure.
Naphtha hydrotreater unit uses hydrogen to desulfurize naphtha from atmospheric distillation. Naphtha must be desulfurized before sending it to a catalytic reformer unit.
Catalytic reformer converts the desulfurized naphtha molecules into higher-octane molecules to produce reformate (reformer product). The reformate has higher content of aromatics and cyclic hydrocarbons which is a component of the end-product gasoline or petrol. An important byproduct of a reformer is hydrogen released during the catalyst reaction. The hydrogen is used either in the hydrotreaters or the hydrocracker.
Distillate hydrotreater desulfurizes distillates (such as diesel) after atmospheric distillation, using hydrogen to desulfurize the distillate fractions from the crude oil distillation unit or other units within the refinery. Distillate hydrotreaters that operate above 700 psi are also capable of removing nitrogen contaminants from feedstocks if given adequate liquid hourly space velocity.
Fluid catalytic cracker (FCC) upgrades the heavier, higher-boiling fractions from the crude oil distillation by converting them into lighter and lower boiling, more valuable products.
Hydrocracker uses hydrogen to upgrade heavy residual oils from the vacuum distillation unit by catalytically cracking them into lighter, more valuable, reduced-viscosity products.
Merox units treat LPG, kerosene or jet fuel by oxidizing unwanted mercaptans to organic disulfides (sweetening).
Alternative processes for removing mercaptans are known, e.g. doctor sweetening process and caustic washing.
Coking units (delayed coker, fluid coker, and flexicoker) process very heavy residual oils into gasoline and diesel fuel, leaving petroleum coke as a residual product.
Alkylation unit uses sulfuric acid or hydrofluoric acid to produce high-octane components for gasoline blending. The "alky" unit converts light end isobutane and butylenes from the FCC process into alkylate, a very high-octane component of the end-product gasoline or petrol.
Dimerization unit converts olefins into higher-octane gasoline blending components. For example, butenes can be dimerized into isooctene, which may subsequently be hydrogenated to form isooctane (the overall reactions are sketched after this list). There are also other uses for dimerization. Gasoline produced through dimerization is highly unsaturated and very reactive; it tends to form gums spontaneously. For this reason, the effluent from the dimerization unit needs to be blended into the finished gasoline pool immediately or hydrogenated.
Isomerization converts linear molecules such as normal pentane to higher-octane branched molecules for blending into gasoline or feed to alkylation units. Also used to convert linear normal butane into isobutane for use in the alkylation unit.
Steam reforming converts natural gas into hydrogen for the hydrotreaters and/or the hydrocracker.
Liquefied gas storage vessels store propane and similar gaseous fuels at a pressure sufficient to maintain them in liquid form. These are usually spherical vessels or "bullets" (i.e., horizontal vessels with rounded ends).
Amine gas treater, Claus unit, and tail gas treatment convert hydrogen sulfide from hydrodesulfurization into elemental sulfur. The large majority of the 64,000,000 metric tons of sulfur produced worldwide in 2005 was byproduct sulfur from petroleum refining and natural gas processing plants.
Sour water stripper uses steam to remove hydrogen sulfide gas from various wastewater streams for subsequent conversion into end-product sulfur in the Claus unit.
Utility units: cooling towers circulate cooling water, boiler plants generate steam, instrument air systems supply pneumatically operated control valves, and an electrical substation provides power.
Wastewater collection and treating systems consist of API separators, dissolved air flotation (DAF) units and further treatment units such as an activated sludge biotreater to make water suitable for reuse or for disposal.
Solvent refining uses a solvent such as cresol or furfural to remove unwanted constituents, mainly aromatics, from lubricating oil stock or diesel stock.
Solvent dewaxing removes heavy waxy constituents (petrolatum) from vacuum distillation products.
Storage tanks for storing crude oil and finished products, usually vertical, cylindrical vessels with some sort of vapor emission control and surrounded by an earthen berm to contain spills.
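As promised in the dimerization entry above, the overall chemistry of that example can be written in two steps: two butene molecules combine into a C8 olefin (isooctene), which is then hydrogenated to isooctane. The standard overall equations for that route are:

    \[ 2\,\mathrm{C_4H_8} \rightarrow \mathrm{C_8H_{16}} \quad \text{(dimerization to isooctene)} \]
    \[ \mathrm{C_8H_{16}} + \mathrm{H_2} \rightarrow \mathrm{C_8H_{18}} \quad \text{(hydrogenation to isooctane)} \]

The intermediate product retains a double bond, which is why the unhydrogenated stream is reactive and gum-prone as described above.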
Flow diagram of typical refinery
The image below is a schematic flow diagram of a typical oil refinery that depicts the various unit processes and the flow of intermediate product streams that occurs between the inlet crude oil feedstock and the final end products. The diagram depicts only one of the literally hundreds of different oil refinery configurations. The diagram also does not include any of the usual refinery facilities providing utilities such as steam, cooling water, and electric power as well as storage tanks for crude oil feedstock and for intermediate products and end products.
There are many process configurations other than that depicted above. For example, the vacuum distillation unit may also produce fractions that can be refined into end products such as spindle oil used in the textile industry, light machine oil, motor oil, and various waxes.
Crude oil distillation unit
The crude oil distillation unit (CDU) is the first processing unit in virtually all petroleum refineries. The CDU distills the incoming crude oil into various fractions of different boiling ranges, each of which is then processed further in the other refinery processing units. The CDU is often referred to as the atmospheric distillation unit because it operates at slightly above atmospheric pressure.
Below is a schematic flow diagram of a typical crude oil distillation unit. The incoming crude oil is preheated by exchanging heat with some of the hot, distilled fractions and other streams. It is then desalted to remove inorganic salts (primarily sodium chloride).
Following the desalter, the crude oil is further heated by exchanging heat with some of the hot, distilled fractions and other streams. It is then heated in a fuel-fired furnace (fired heater) to a temperature of about 398 °C and routed into the bottom of the distillation unit.
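As a rough illustration of the furnace step just described, a sensible-heat balance Q = m · cp · ΔT estimates the fired-heater duty. Only the 398 °C outlet temperature comes from the text above; the flow rate, inlet temperature, and specific heat below are assumptions for illustration, and a real design calculation would also account for partial vaporization and temperature-dependent properties.

    # Rough fired-heater duty estimate for a crude distillation unit,
    # using Q = m * cp * dT (sensible heat only).
    BARRELS_PER_DAY = 100_000      # assumed unit capacity
    KG_PER_BARREL = 136            # assumed crude density (~0.86 kg/L x 159 L/bbl)
    CP_KJ_PER_KG_K = 2.3           # assumed average specific heat of hot crude
    T_IN_C, T_OUT_C = 250.0, 398.0 # assumed preheat outlet; furnace outlet per text

    mass_flow_kg_s = BARRELS_PER_DAY * KG_PER_BARREL / 86_400
    duty_mw = mass_flow_kg_s * CP_KJ_PER_KG_K * (T_OUT_C - T_IN_C) / 1000
    print(f"mass flow ~ {mass_flow_kg_s:.0f} kg/s, sensible-heat duty ~ {duty_mw:.0f} MW")

Even this crude estimate shows why the preheat train matters: every degree recovered from hot product streams is a degree the furnace does not have to supply.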
The cooling and condensing of the distillation tower overhead is provided partially by exchanging heat with the incoming crude oil and partially by either an air-cooled or water-cooled condenser. Additional heat is removed from the distillation column by a pumparound system as shown in the diagram below.
As shown in the flow diagram, the overhead distillate fraction from the distillation column is naphtha. The fractions removed from the side of the distillation column at various points between the column top and bottom are called sidecuts. Each of the sidecuts (i.e., the kerosene, light gas oil, and heavy gas oil) is cooled by exchanging heat with the incoming crude oil. All of the fractions (i.e., the overhead naphtha, the sidecuts, and the bottom residue) are sent to intermediate storage tanks before being processed further.
Location of refineries
A party searching for a site to construct a refinery or a chemical plant needs to consider the following issues:
The site has to be reasonably far from residential areas.
Infrastructure should be available for the supply of raw materials and shipment of products to markets.
Energy to operate the plant should be available.
Facilities should be available for waste disposal.
Factors affecting site selection for oil refinery:
Availability of land
Conditions of traffic and transportation
Conditions of utilities – power supply, water supply
Availability of labour and resources
Refineries that use a large amount of steam and cooling water need an abundant source of water. Oil refineries, therefore, are often located near navigable rivers or on a seashore, near a port. Such locations also give access to transportation by river or by sea. The advantages of transporting crude oil by pipeline are evident, and oil companies often transport a large volume of fuel to distribution terminals by pipeline. A pipeline may not be practical for products with small output, so railcars, road tankers, and barges are used.
Petrochemical plants and solvent manufacturing (fine fractionating) plants need spaces for further processing of a large volume of refinery products, or to mix chemical additives with a product at source rather than at blending terminals.
Safety and environment
The refining process releases a number of different chemicals into the atmosphere (see AP 42 Compilation of Air Pollutant Emission Factors) and a notable odor normally accompanies the presence of a refinery. Aside from air pollution impacts there are also wastewater concerns, risks of industrial accidents such as fire and explosion, and noise health effects due to industrial noise.
Many governments worldwide have mandated restrictions on contaminants that refineries release, and most refineries have installed the equipment needed to comply with the requirements of the pertinent environmental protection regulatory agencies. In the United States, there is strong pressure to prevent the development of new refineries, and no major refinery has been built in the country since Marathon's Garyville, Louisiana facility in 1976. However, many existing refineries have been expanded during that time. Environmental restrictions and pressure to prevent the construction of new refineries may have also contributed to rising fuel prices in the United States. Additionally, many refineries (more than 100 since the 1980s) have closed due to obsolescence and/or merger activity within the industry itself.
Environmental and safety concerns mean that oil refineries are sometimes located some distance away from major urban areas. Nevertheless, there are many instances where refinery operations are close to populated areas and pose health risks. In California's Contra Costa County and Solano County, a shoreline necklace of refineries, built in the early 20th century before this area was populated, and associated chemical plants are adjacent to urban areas in Richmond, Martinez, Pacheco, Concord, Pittsburg, Vallejo and Benicia, with occasional accidental events that require "shelter in place" orders to the adjacent populations. A number of refineries are located in Sherwood Park, Alberta, directly adjacent to the City of Edmonton, which has a population of over 1,000,000 residents.
NIOSH criteria for occupational exposure to refined petroleum solvents have been available since 1977.
Worker health
Background
Modern petroleum refining involves a complicated system of interrelated chemical reactions that produce a wide variety of petroleum-based products. Many of these reactions require precise temperature and pressure parameters. The equipment and monitoring required to ensure the proper progression of these processes is complex, and has evolved through the advancement of the scientific field of petroleum engineering.
The wide array of high pressure and/or high temperature reactions, along with the necessary chemical additives or extracted contaminants, produces an astonishing number of potential health hazards to the oil refinery worker. Through the advancement of technical chemical and petroleum engineering, the vast majority of these processes are automated and enclosed, thus greatly reducing the potential health impact to workers. However, depending on the specific process in which a worker is engaged, as well as the particular method employed by the refinery in which he/she works, significant health hazards remain.
Although occupational injuries in the United States were not routinely tracked and reported at the time, reports of the health impacts of working in an oil refinery can be found as early as the 1800s. For instance, an explosion in a Chicago refinery killed 20 workers in 1890. Since then, numerous fires, explosions, and other significant events have from time to time drawn the public's attention to the health of oil refinery workers. Such events continue in the 21st century, with explosions reported in refineries in Wisconsin and Germany in 2018.
However, there are many less visible hazards that endanger oil refinery workers.
Chemical exposures
Given the highly automated and technically advanced nature of modern petroleum refineries, nearly all processes are contained within engineering controls and represent a substantially decreased risk of exposure to workers compared to earlier times. However, certain situations or work tasks may subvert these safety mechanisms, and expose workers to a number of chemical (see table above) or physical (described below) hazards. Examples of these scenarios include:
System failures (leaks, explosions, etc.).
Standard inspection, product sampling, process turnaround, or equipment maintenance/cleaning activities.
A 2021 systematic review associated working in the petrochemical industry with an increased risk of various cancers, such as mesothelioma. It also found reduced risks of other cancers, such as stomach and rectal cancers. The review noted that several of the associations were not due to factors directly related to the petroleum industry, but rather to lifestyle factors such as smoking. Evidence for adverse health effects among nearby residents was also weak, with the evidence primarily centering on neighborhoods in developed countries.
BTX stands for benzene, toluene, and xylene, a group of common volatile organic compounds (VOCs) found in the oil refinery environment. They serve as a paradigm for more in-depth discussion of occupational exposure limits, chemical exposure, and surveillance among refinery workers.
The most important route of exposure for BTX chemicals is inhalation due to the low boiling point of these chemicals. The majority of the gaseous production of BTX occurs during tank cleaning and fuel transfer, which causes offgassing of these chemicals into the air. Exposure can also occur through ingestion via contaminated water, but this is unlikely in an occupational setting. Dermal exposure and absorption is also possible, but is again less likely in an occupational setting where appropriate personal protective equipment is in place.
In the United States, the Occupational Safety and Health Administration (OSHA), National Institute for Occupational Safety and Health (NIOSH), and American Conference of Governmental Industrial Hygienists (ACGIH) have all established occupational exposure limits (OELs) for many of the chemicals above that workers may be exposed to in petroleum refineries.
Benzene, in particular, has multiple biomarkers that can be measured to determine exposure. Benzene itself can be measured in the breath, blood, and urine, and metabolites such as phenol, t,t-muconic acid (t,tMA) and S-phenylmercapturic acid (sPMA) can be measured in urine. In addition to monitoring the exposure levels via these biomarkers, employers are required by OSHA to perform regular blood tests on workers to test for early signs of some of the feared hematologic outcomes, of which the most widely recognized is leukemia. Required testing includes complete blood count with cell differentials and peripheral blood smear "on a regular basis". The utility of these tests is supported by formal scientific studies.
Potential chemical exposure by process
Physical hazards
Workers are at risk of physical injuries due to a large number of high-powered machines in the relatively close proximity of the oil refinery. The high pressure required for many of the chemical reactions also presents the possibility of localized system failures resulting in blunt or penetrating trauma from exploding system components.
Heat is also a hazard. The temperature required for the proper progression of certain reactions in the refining process can reach several hundred degrees Celsius. As with chemicals, the operating system is designed to safely contain this hazard without injury to the worker. However, in system failures this is a potent threat to workers' health. Concerns include both direct injury through a heat illness or injury, as well as the potential for devastating burns should the worker come in contact with super-heated reagents or equipment.
Noise is another hazard. Refineries can be very loud environments and have previously been shown to be associated with hearing loss among workers. The interior environment of an oil refinery can reach levels in excess of 90 dB. In the United States, an average of 90 dB is the permissible exposure limit (PEL) for an 8-hour workday. Noise exposures that average greater than 85 dB over an 8-hour workday require a hearing conservation program to regularly evaluate workers' hearing and to promote its protection. Regular evaluation of workers' auditory capacity and faithful use of properly vetted hearing protection are essential parts of such programs.
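The 90 dB PEL quoted above comes with a 5-dB exchange rate under OSHA's occupational noise standard (29 CFR 1910.95): the permitted exposure time halves for every 5 dB increase, T = 8 / 2^((L − 90)/5) hours. A short sketch follows; the work shift in it is hypothetical.

    # Permissible exposure time under OSHA's 5-dB exchange rate
    # (29 CFR 1910.95): T = 8 / 2**((L - 90) / 5) hours at level L dBA.
    def permissible_hours(level_dba):
        return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

    # Daily noise dose: 100 * sum(actual time / permitted time).
    # A dose over 100% exceeds the PEL. This shift is hypothetical.
    shift = [(4.0, 85.0), (3.0, 92.0), (1.0, 97.0)]  # (hours, dBA)
    dose = 100 * sum(hours / permissible_hours(level) for hours, level in shift)
    print(f"noise dose: {dose:.0f}% of the OSHA PEL")  # ~108%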
While not specific to the industry, oil refinery workers may also be at risk for hazards such as vehicle-related accidents, machinery-associated injuries, work in a confined space, explosions/fires, ergonomic hazards, shift-work related sleep disorders, and falls.
Hazard controls
The theory of hierarchy of controls can be applied to petroleum refineries and their efforts to ensure worker safety.
Elimination and substitution are unlikely in petroleum refineries, as many of the raw materials, waste products, and finished products are hazardous in one form or another (e.g. flammable, carcinogenic).
Examples of engineering controls include a fire detection/extinguishing system, pressure/chemical sensors to detect/predict loss of structural integrity, and adequate maintenance of piping to prevent hydrocarbon-induced corrosion (leading to structural failure). Other examples employed in petroleum refineries include the post-construction protection of steel components with vermiculite to improve heat/fire resistance. Compartmentalization can help to prevent a fire or other systems failure from spreading to affect other areas of the structure, and may help prevent dangerous reactions by keeping different chemicals separate from one another until they can be safely combined in the proper environment.
Administrative controls include careful planning and oversight of the refinery cleaning, maintenance, and turnaround processes. These occur when many of the engineering controls are shut down or suppressed and may be especially dangerous to workers. Detailed coordination is necessary to ensure that maintenance of one part of the facility will not cause dangerous exposures to those performing the maintenance, or to workers in other areas of the plant. Due to the highly flammable nature of many of the involved chemicals, smoking areas are tightly controlled and carefully placed.
Personal protective equipment (PPE) may be necessary depending on the specific chemical being processed or produced. Particular care is needed during sampling of the partially completed product, tank cleaning, and other high-risk tasks as mentioned above. Such activities may require the use of impervious outerwear, an acid hood, disposable coveralls, etc. More generally, all personnel in operating areas should use appropriate hearing and vision protection, avoid clothes made of flammable material (nylon, Dacron, acrylic, or blends), and wear full-length pants and sleeves.
Regulations
United States
Worker health and safety in oil refineries is closely monitored at a national level by both the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH). In addition to federal monitoring, California's CalOSHA has been particularly active in protecting worker health in the industry, and adopted a policy in 2017 that requires petroleum refineries to perform a "Hierarchy of Hazard Controls Analysis" (see the "Hazard controls" section above) for each process safety hazard. Safety regulations have resulted in a below-average injury rate for refining industry workers. A 2018 report by the US Bureau of Labor Statistics indicates that petroleum refinery workers have a significantly lower rate of occupational injury (0.4 OSHA-recordable cases per 100 full-time workers) than all industries (3.1 cases), oil and gas extraction (0.8 cases), and petroleum manufacturing in general (1.3 cases).
Below is a list of the most common regulations referenced in petroleum refinery safety citations issued by OSHA:
Flammable and Combustible Liquids
The Hazard Communication (HazCom) standard
Permit-Required Confined Spaces
Hazardous (Classified) Locations
The Personal Protective Equipment (PPE) standard
The Control of Hazardous Energy (Lockout/Tagout) standard
Corrosion
Corrosion of metallic components is a major factor of inefficiency in the refining process. Because it leads to equipment failure, it is a primary driver for the refinery maintenance schedule. Corrosion-related direct costs in the U.S. petroleum industry as of 1996 were estimated at US$3.7 billion.
Corrosion occurs in various forms in the refining process, such as pitting corrosion from water droplets, embrittlement from hydrogen, and stress corrosion cracking from sulfide attack. From a materials standpoint, carbon steel is used for upwards of 80 percent of refinery components, which is beneficial due to its low cost. Carbon steel is resistant to the most common forms of corrosion, particularly from hydrocarbon impurities at temperatures below 205 °C, but other corrosive chemicals and environments prevent its use everywhere. Common replacement materials are low alloy steels containing chromium and molybdenum, with stainless steels containing more chromium dealing with more corrosive environments. More expensive materials commonly used are nickel, titanium, and copper alloys. These are primarily saved for the most problematic areas where extremely high temperatures and/or very corrosive chemicals are present.
Corrosion is fought by a complex system of monitoring, preventative repairs, and careful use of materials. Monitoring methods include both offline checks taken during maintenance and online monitoring. Offline checks measure corrosion after it has occurred, telling the engineer when equipment must be replaced based on the historical information they have collected. This is referred to as preventative management.
Online systems are a more modern development and are revolutionizing the way corrosion is approached. There are several types of online corrosion monitoring technologies such as linear polarization resistance, electrochemical noise and electrical resistance. Online monitoring has generally had slow reporting rates in the past (minutes or hours) and been limited by process conditions and sources of error but newer technologies can report rates up to twice per minute with much higher accuracy (referred to as real-time monitoring). This allows process engineers to treat corrosion as another process variable that can be optimized in the system. Immediate responses to process changes allow the control of corrosion mechanisms, so they can be minimized while also maximizing production output. In an ideal situation having on-line corrosion information that is accurate and real-time will allow conditions that cause high corrosion rates to be identified and reduced. This is known as predictive management.
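One of the online techniques named above, linear polarization resistance, can be sketched numerically. The Stern–Geary relation gives a corrosion current density i_corr = B / R_p, and the ASTM G102 form converts that to a penetration rate. The probe readings and constants below are illustrative assumptions for carbon steel, not values from this article.

    # Linear polarization resistance (LPR) sketch: convert a probe's
    # polarization resistance into an approximate corrosion rate.
    # Stern-Geary: i_corr = B / Rp ; penetration rate (ASTM G102 form):
    # CR [mm/yr] = 3.27e-3 * i_corr [uA/cm^2] * EW / density [g/cm^3].
    # All numeric inputs are illustrative assumptions.
    B_VOLTS = 0.026            # Stern-Geary constant, assumed ~26 mV
    EW_IRON = 27.92            # equivalent weight of iron (Fe -> Fe2+)
    DENSITY = 7.87             # g/cm^3, carbon steel

    def corrosion_rate_mm_per_year(rp_ohm_cm2):
        i_corr_ua_cm2 = B_VOLTS / rp_ohm_cm2 * 1e6  # A/cm^2 -> uA/cm^2
        return 3.27e-3 * i_corr_ua_cm2 * EW_IRON / DENSITY

    # A falling Rp reading means corrosion is speeding up:
    for rp in (50_000.0, 5_000.0):  # ohm*cm^2, hypothetical probe values
        print(f"Rp = {rp:>8.0f} ohm*cm2 -> {corrosion_rate_mm_per_year(rp):.3f} mm/yr")

Treating each new R_p reading this way is what lets corrosion rate be handled as just another process variable, as described above.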
Materials methods include selecting the proper material for the application. In areas of minimal corrosion, cheap materials are preferable, but when bad corrosion can occur, more expensive but longer-lasting materials should be used. Other materials methods come in the form of protective barriers between corrosive substances and the equipment metals. These can be either a lining of refractory material such as standard Portland cement or other special acid-resistant cement that is shot onto the inner surface of the vessel. Also available are thin overlays of more expensive metals that protect cheaper metal against corrosion without requiring much material.
| Technology | Energy and fuel | null |
195164 | https://en.wikipedia.org/wiki/Old%20World%20warbler | Old World warbler | The Old World warblers are a large group of birds formerly grouped together in the bird family Sylviidae. They are not closely related to the New World warblers. The family held over 400 species in over 70 genera, and were the source of much taxonomic confusion. Two families were split out initially, the cisticolas into Cisticolidae and the kinglets into Regulidae. In the past 20–30 years they have been the subject of much research and many species are now placed into other families, including the Acrocephalidae, Cettiidae, Phylloscopidae, and Megaluridae. In addition some species have been moved into existing families or have not yet had their placement fully resolved. Only a small number of warblers, in just two genera, are now retained in the family Sylviidae.
Characteristics
Most Old World warblers are of generally undistinguished appearance, though some species are boldly marked. The sexes are often identical, but may be clearly distinct, notably in the genera Sylvia and Curruca. They are of small to medium size, varying from 9 to 20 centimetres in length, with a slender, finely pointed bill. Almost all species are primarily insectivorous, although many will also eat soft fruit, nectar, or tiny seeds.
The majority of species are monogamous and build simple, cup-shaped nests in dense vegetation. They lay between two and six eggs per clutch, depending on species. Both parents typically help in raising the young, which are able to fly at around two weeks of age.
Systematics
In the late 20th century, the Sylviidae were thought to unite nearly 300 small insectivorous bird species in nearly 50 genera, a huge family with few clear patterns of relationships recognisable. Though not as diverse as the Timaliidae (Old World babblers; another "wastebin taxon" containing more thrush-like forms), the frontiers were much blurred. The largely tropical warbler family Cisticolidae was at that time traditionally included in the Sylviidae. The kinglets, now a small genus in a monotypic family Regulidae, were also sometimes placed in this family, including by the influential List of Recent Holarctic Bird Species. The American Ornithologists' Union then also included the gnatcatchers, as subfamily Polioptilinae, in the Sylviidae.
Sibley & Ahlquist (1990) united the "Old World warblers" with the babblers and other taxa in a superfamily Sylvioidea as a result of DNA–DNA hybridisation studies. This demonstrated that the Sylviidae as initially defined was a form taxon which collected unrelated songbirds. Consequently, the monophyly of the individual "songster" lineages themselves was increasingly being questioned.
More recently, analysis of DNA sequence data has provided information on the Sylvioidea. Usually, the scope of the clade was underestimated and only one or two specimens were sampled for each presumed "family". Minor or little-known groups such as the parrotbills were left out entirely (e.g. Ericson & Johansson 2003, Barker et al. 2004). These could only confirm that the Cisticolidae were indeed distinct, and suggested that bulbuls (Pycnonotidae) were apparently the closest relatives of a group containing Sylviidae, Timaliidae, cisticolids and white-eyes.
In 2003, a study of Timaliidae relationships (Cibois 2003a) using mtDNA cytochrome b and 12S/16S rRNA data indicated that the Sylviidae and Old World babblers were not reciprocally monophyletic to each other. Moreover, Sylvia, the type genus of the Sylviidae, turned out to be closer to taxa such as the yellow-eyed babbler (Chrysomma sinense, traditionally held to be an atypical timaliid) and the wrentit (Chamaea fasciata), an enigmatic species generally held to be the only American Old World babbler. The parrotbills (Paradoxornithidae, roughly "puzzling birds"), whose affiliations were then unclear, also formed part of what was apparently a quite distinct clade.
Cibois suggested that the Sylviidae should officially be suppressed by the ICZN as a taxon and the genus Sylvia merged into the Timaliidae (Cibois 2003b), but this was rejected. Clearly, the sheer extent of the groups concerned made it necessary to study a wide range of taxa. This was begun by Beresford et al. (2005) and Alström et al. (2006). They determined that the late-20th-century Sylviidae united at least four, but probably as many as seven major distinct lineages. The authors propose the creation of several new families (Phylloscopidae, Cettiidae, Acrocephalidae, and Megaluridae, this last turning out to be a synonym of the older-published Locustellidae) to better reflect the evolutionary history of the sylvioid group.
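The redistribution of genera described above amounts to a lookup from a genus to its post-split family. A minimal sketch, using only placements stated elsewhere in this article:

    # Post-split family placements for some genera formerly lumped in the
    # broad Sylviidae, as given in this article's species listing below.
    NEW_FAMILY = {
        "Sylvia": "Sylviidae",          # retained, sensu stricto
        "Curruca": "Sylviidae",         # retained, split from Sylvia
        "Acrocephalus": "Acrocephalidae",
        "Hippolais": "Acrocephalidae",
        "Locustella": "Locustellidae",  # "Megaluridae" is a junior synonym
        "Cettia": "Cettiidae",
        "Phylloscopus": "Phylloscopidae",
        "Sylvietta": "Macrosphenidae",
        "Leptopoecile": "Aegithalidae",
        "Paradoxornis": "Paradoxornithidae",
    }

    def family_of(genus):
        return NEW_FAMILY.get(genus, "incertae sedis / unresolved")

    print(family_of("Phylloscopus"))  # Phylloscopidae
    print(family_of("Eremomela"))     # incertae sedis / unresolved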
Species
Family Sylviidae sensu stricto
Typical warblers (or sylviid warblers). A fairly diverse group of smallish taxa with longish tails, now containing 33 species in two genera. Mostly in Europe and the Mediterranean region, with a few extending to central Asia and in tropical Africa.
Genus Sylvia – typical warblers (6 species)
Eurasian blackcap, Sylvia atricapilla
Garden warbler, Sylvia borin
Dohrn's warbler, Sylvia dohrni
Abyssinian catbird, Sylvia galinieri
Bush blackcap, Sylvia nigricapillus
African hill babbler, Sylvia abyssinica
Genus Curruca – 27 species. Formerly in Sylvia (Sylviidae)
Barred warbler, Curruca nisoria
Layard's warbler, Curruca layardi
Banded parisoma, Curruca boehmi
Chestnut-vented warbler, Curruca subcoerulea
Desert whitethroat, Curruca minula
Lesser whitethroat, Curruca curruca
Hume's whitethroat, Curruca althaea
Brown parisoma, Curruca lugens
Yemen warbler, Curruca buryi
Arabian warbler, Curruca leucomelaena
Western orphean warbler, Curruca hortensis
Eastern orphean warbler, Curruca crassirostris
African desert warbler, Curruca deserti
Asian desert warbler, Curruca nana
Tristram's warbler, Curruca deserticola
Menetries's warbler, Curruca mystacea
Rüppell's warbler, Curruca ruppeli
Cyprus warbler, Curruca melanothorax
Sardinian warbler, Curruca melanocephala
Western subalpine warbler, Curruca iberiae
Moltoni's warbler, Curruca subalpina
Eastern subalpine warbler, Curruca cantillans
Common whitethroat, Curruca communis
Spectacled warbler, Curruca conspicillata
Marmora's warbler, Curruca sarda
Dartford warbler, Curruca undata
Balearic warbler, Curruca balearica
Moved to family Paradoxornithidae
Genus Lioparus – golden-breasted fulvetta
Genus Chrysomma
Yellow-eyed babbler, Chrysomma sinense
Jerdon's babbler, Chrysomma altirostre
Genus Rhopophilus
Tarim babbler, Rhopophilus albosuperciliaris
Beijing babbler, Rhopophilus pekinensis
Genus Fulvetta
Spectacled fulvetta, Fulvetta ruficapilla
Indochinese fulvetta, Fulvetta danisi
Chinese fulvetta, Fulvetta striaticollis
White-browed fulvetta, Fulvetta vinipectus
Brown-throated fulvetta, Fulvetta ludlowi
Manipur fulvetta, Fulvetta manipurensis
Grey-hooded fulvetta, Fulvetta cinereiceps
Genus Chamaea – wrentit
Genus Paradoxornis
Black-breasted parrotbill, Paradoxornis flavirostris
Spot-breasted parrotbill, Paradoxornis guttaticollis
Genus Conostoma – great parrotbill
Moved to family Pellorneidae
Genus Graminicola
Rufous-rumped grassbird (or grass-babbler), Graminicola bengalensis
Moved to family Cisticolidae
Genus Bathmocercus – rufous-warblers
Black-capped rufous-warbler, Bathmocercus cerviniventris
Black-faced rufous-warbler, Bathmocercus rufus
Genus Sceptomycter – sometimes merged into Bathmocercus
Mrs Moreau's warbler, Sceptomycter winifredae
Genus Poliolais – Cisticolidae or more basal like bulbuls?
White-tailed warbler, Poliolais lopezi
Two to 14 of the 15 tailorbirds
Moved to family Acrocephalidae
Marsh and tree warblers or acrocephalid warblers. Usually rather large "warblers", most are olivaceous brown above with much yellow to beige below. Usually in open woodland, reed beds or tall grass. Mainly southern Asia to western Europe and surroundings ranging far into Pacific, some in Africa.
Genus Acrocephalus – marsh warblers (about 38 living species, 5 recently extinct)
Genus Iduna – olivaceous warblers (6 species)
Genus Hippolais – tree warblers (4 species)
Genus Arundinax – thick-billed warbler
Genus Calamonastides – yellow warblers (2 species)
Genus Nesillas – brush warblers (5 living species, 1 recently extinct)
Moved to Malagasy warblers
See Cibois et al. (2001)
Genus Thamnornis
Thamnornis, Thamnornis chloropetoides
Genus Cryptosylvicola
Cryptic warbler, Cryptosylvicola randriansoloi
Moved to family Locustellidae
Grass warblers and allies. Mid-sized and usually long-tailed species; sometimes strongly patterned but generally very drab in overall colouration. Often forage in dense low vegetation. Old World and into Australian region, centred on the Indian Ocean.
Genus Bradypterus – megalurid warblers (12 species, including the former genus Dromaeocercus)
Genus Locustella – grass warblers (more than 20 species)
Genus Megalurus – typical grassbirds (10 species)
Genus Amphilais – grey emutail
Genus Elaphrornis – Sri Lanka bush warbler
Genus Schoenicola – (2 species)
Genus Buettikoferella – buff-banded thicketbird
Genus Chaetornis – bristled grassbird
Moved to family Donacobiidae
The black-capped donacobius, Donacobius atricapillus, which was long considered an aberrant wren or mockingbird, is apparently quite closely related, and is the only South American species in the superfamily Sylvioidea.
Moved to family Cettiidae
Typical bush warblers and relatives or cettiid warblers. Another group of generally very drab species, tend to be smaller and shorter-tailed than Megaluridae. Usually frequent shrubland and undergrowth. Continental Asia, and surrounding regions, ranging into Africa and southern Europe.
Genus Pholidornis – formerly in Remizidae; tentatively placed here
Tit hylia, Pholidornis rushiae
Genus Hylia – tentatively placed here
Green hylia, Hylia prasina
Genus Abroscopus – Abroscopus warblers
Rufous-faced warbler, Abroscopus albogularis
Yellow-bellied warbler, Abroscopus superciliaris
Black-faced warbler, Abroscopus schisticeps
Genus Erythrocercus – monarch-warblers. Formerly Monarchinae.
Chestnut-capped flycatcher, Erythrocercus mccallii
Little yellow flycatcher, Erythrocercus holochlorus
Livingstone's flycatcher, Erythrocercus livingstonei
Genus Urosphena – stubtails
Timor stubtail, Urosphena subulata
Babar stubtail, Urosphena subulata advena – extinct (mid-20th century)
Bornean stubtail, Urosphena whiteheadi
Asian stubtail, Urosphena squameiceps
Pale-footed bush warbler, Urosphena pallidipes
Neumann's warbler, Urosphena neumanni
Genus Tesia – tesias
Javan tesia, Tesia superciliaris
Slaty-bellied tesia, Tesia olivea
Grey-bellied tesia, Tesia cyaniventer
Russet-capped tesia, Tesia everetti
Genus Horornis – bush warblers (some 13 species).
Genus Cettia – bush warblers (4 species).
Genus Tickellia
Broad-billed warbler, Tickellia hodgsoni
Genus Phyllergates
Mountain tailorbird, Phyllergates cucullatus
Rufous-headed tailorbird, Phyllergates heterolaemus
Moved to family Aegithalidae
Genus Leptopoecile – tit-warblers.
White-browed tit-warbler, Leptopoecile sophiae
Crested tit-warbler, Leptopoecile elegans
Moved to family Phylloscopidae
Leaf warblers. A group variable in size, generally dull to vivid green above and whitish or yellow below, or more subdued with greyish-green to greyish-brown plumage. Catch food on the wing fairly often. Eurasia, ranging into Wallacea and Africa.
Genus Phylloscopus – leaf warblers (c. 55 species). Includes the former genus Seicercus.
Green-crowned warbler, Phylloscopus burkii
Grey-crowned warbler, Phylloscopus tephrocephalus
Whistler's warbler, Phylloscopus whistleri
Bianchi's warbler, Phylloscopus valentini
Martens's warbler, Phylloscopus omeiensis
Alström's warbler, Phylloscopus soror
White-spectacled warbler, Phylloscopus affinis – paraphyletic
Bar-winged white-spectacled warbler, Phylloscopus (affinis) intermedius
Grey-cheeked warbler, Phylloscopus poliogenys
Chestnut-crowned warbler, Phylloscopus castaniceps
Yellow-breasted warbler, Phylloscopus montis
Sunda warbler, Phylloscopus grammiceps
Moved to family Macrosphenidae
African warblers. Also "Sphenoeacus group". An assemblage of usually species-poor and apparently rather ancient "odd warblers" from Africa. Ecomorphologically quite variable. Monophyly requires confirmation.
Genus Sylvietta – crombecs
Green crombec, Sylvietta virens
Lemon-bellied crombec, Sylvietta denti
White-browed crombec, Sylvietta leucophrys
Chapin's crombec, Sylvietta (leucophrys) chapini – possibly extinct (late 20th century?)
Northern crombec, Sylvietta brachyura
Philippa's crombec, Sylvietta philippae
Red-capped crombec, Sylvietta ruficapilla
Red-faced crombec, Sylvietta whytii
Somali crombec, Sylvietta isabellina
Long-billed crombec, Sylvietta rufescens
Genus Melocichla
Moustached grass warbler, Melocichla mentalis
Genus Achaetops
Rockrunner, Achaetops pycnopygius
Genus Sphenoeacus
Cape grassbird, Sphenoeacus afer
Genus Cryptillas.
Victorin's warbler, Cryptillas victorini
Genus Macrosphenus – longbills
Kemp's longbill, Macrosphenus kempi
Yellow longbill, Macrosphenus flavicans
Grey longbill, Macrosphenus concolor
Pulitzer's longbill, Macrosphenus pulitzeri
Kretschmer's longbill, Macrosphenus kretschmeri
"Sylviidae" incertae sedis
Taxa that have not been studied. Most are likely to belong to one of Sylvioidea families listed above. Those in the Australian-Pacific region are probably Megaluridae. These taxa are listed in the sequence used in recent years.
Genus Phyllolais – Cisticolidae?
Buff-bellied warbler, Phyllolais pulchella
Genus Graueria
Grauer's warbler, Graueria vittata
Genus Eremomela – eremomelas. Cettiidae?
Salvadori's eremomela, Eremomela salvadorii
Yellow-vented eremomela, Eremomela flavicrissalis
Yellow-bellied eremomela, Eremomela icteropygialis
Senegal eremomela, Eremomela canescens
Green-backed eremomela, Eremomela pusilla
Green-capped eremomela, Eremomela scotops
Yellow-rumped eremomela, Eremomela gregalis
Rufous-crowned eremomela, Eremomela badiceps
Turner's eremomela, Eremomela turneri
Western Turner's eremomela, Eremomela turneri kalindei – probably extinct (early 1980s?)
Black-necked eremomela, Eremomela atricollis
Burnt-neck eremomela, Eremomela usticollis
Genus Randia – Malagasy warblers?
Rand's warbler, Randia pseudozosterops
Genus Bowdleria – fernbirds. Sometimes merged into Megalurus. Locustellidae?
Fernbird, Bowdleria punctata
Chatham fernbird, Bowdleria rufescens – extinct (c. 1900)
Genus Chaetornis – bristled grassbird. Locustellidae?
Genus Schoenicola – grassbirds. Basal Locustellidae?
Broad-tailed grassbird, Schoenicola platyura
Fan-tailed grassbird, Schoenicola brevirostris
Genus Cincloramphus – songlarks. Basal Locustellidae?
Brown songlark, Cincloramphus cruralis
Rufous songlark, Cincloramphus mathewsi
Genus Buettikoferella – probably Locustellidae
Buff-banded bushbird, Buettikoferella bivittata
Genus Megalurulus – thicketbirds. Probably Locustellidae
New Caledonian grassbird, Megalurulus mariei
Bismarck thicketbird, Megalurulus grosvenori
Bougainville thicketbird, Megalurulus llaneae
Santo thicketbird, Megalurulus whitneyi
Rusty thicketbird, Megalurulus rubiginosus
Genus Trichocichla – long-legged warbler
Not in Sylvioidea
Entirely unrelated songbirds hitherto placed in Sylviidae
Genus Amaurocichla – Now placed in Passeroidea in the Motacillidae
Bocage's longbill or São Tomé short-tail, Amaurocichla bocagei
Genus Stenostira – Together with some "odd flycatchers", they form the new family Stenostiridae. They are closely related to Paridae (Beresford et al. 2005)
Fairy flycatcher, Stenostira scita
Genus Hyliota – hyliotas. Basal Passerida with no known relatives, perhaps somewhat closer to Promeropidae (sugarbirds)
Yellow-bellied hyliota, Hyliota flavigaster
Southern hyliota, Hyliota australis
Usambara hyliota, Hyliota usambarae
Violet-backed hyliota, Hyliota violacea
Genus Newtonia – newtonias. Now in Vangidae (vangas); possibly polyphyletic (Yamagishi et al. 2001)
Dark newtonia, Newtonia amphichroa
Common newtonia, Newtonia brunneicauda
Archbold's newtonia, Newtonia archboldi
Red-tailed newtonia, Newtonia fanovanae – tentatively placed here
| Biology and health sciences | Passerida | Animals |
195187 | https://en.wikipedia.org/wiki/Zidovudine | Zidovudine | Zidovudine (ZDV), also known as azidothymidine (AZT), was the first antiretroviral medication used to prevent and treat HIV/AIDS. It is generally recommended for use in combination with other antiretrovirals. It may be used to prevent mother-to-child spread during birth or after a needlestick injury or other potential exposure. It is sold both by itself and together as lamivudine/zidovudine and abacavir/lamivudine/zidovudine. It can be used by mouth or by slow injection into a vein.
Common side effects include headaches, fever, and nausea. Serious side effects include liver problems, muscle damage, and high blood lactate levels. It is commonly used in pregnancy and appears to be safe for the fetus. ZDV is of the nucleoside analog reverse-transcriptase inhibitor (NRTI) class. It works by inhibiting the enzyme reverse transcriptase that HIV uses to make DNA and therefore decreases replication of the virus.
Zidovudine was first described in 1964. It was resynthesized from a public-domain formula by Burroughs Wellcome. It was approved in the United States in 1987 and was the first treatment for HIV. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication.
Medical uses
HIV treatment
AZT was usually dosed twice a day in combination with other antiretroviral therapies. This approach is referred to as Highly Active Antiretroviral Therapy (HAART) and is used to reduce the likelihood of HIV developing drug resistance. As of 2019, the standard is a three-drug once-daily oral treatment that can include AZT.
HIV prevention
AZT has been used for post-exposure prophylaxis (PEP) in combination with another antiretroviral drug called lamivudine. Together they work to substantially reduce the risk of HIV infection following a single exposure to the virus. More recently, AZT has been replaced for PEP by other antiretrovirals such as tenofovir.
Before tenofovir, AZT was a principal part of the clinical pathway for both pre-exposure prophylaxis and post-exposure treatment of mother-to-child transmission of HIV during pregnancy, labor, and delivery, and has been proven to be integral to uninfected infants' perinatal and neonatal development. Without AZT, 10–15% of infants born to HIV-infected mothers will themselves become infected. AZT has been shown to reduce this risk to 8% when given in a three-part regimen: during pregnancy, at delivery, and for six weeks after delivery. Consistent and proactive precautionary measures, such as the rigorous use of antiretroviral medications, cesarean section, face masks, heavy-duty rubber gloves, clinically segregated disposable diapers, and avoidance of mouth contact, can further reduce mother-to-child transmission of HIV to as little as 1–2%.
During 1994 to 1999, AZT was the primary form of prevention of mother-to-child HIV transmission. AZT prophylaxis prevented more than 1000 parental and infant deaths from AIDS in the United States. In the U.S. at that time, the accepted standard of care for HIV-positive mothers was known as the 076 regimen and involved five daily doses of AZT from the second trimester onwards, as well as AZT intravenously administered during labour. As this treatment was lengthy and expensive, it was deemed unfeasible in the Global South, where mother-to-child transmission was a significant problem. A number of studies were initiated in the late 1990s that sought to test the efficacy of a shorter, simpler regimen for use in 'resource-poor' countries. This AZT short course was an inferior standard of care and would have been considered malpractice if trialed in the US; however, it was nonetheless a treatment that would improve the care and survival of impoverished subjects.
Antibacterial properties
Zidovudine also has antibacterial properties, though it is not routinely used for this purpose in clinical settings. It acts on bacteria through a mechanism of action that is still not fully explained. Promising results from in vitro and in vivo studies have shown the efficacy of AZT against multidrug-resistant gram-negative bacteria (including mcr-1-carrying and metallo-β-lactamase-producing isolates), especially in combination with other active agents (e.g. fosfomycin, colistin, tigecycline).
Side effects
Most common side effects include nausea, vomiting, acid reflux (heartburn), headache, cosmetic reduction in abdominal body fat, trouble sleeping, and loss of appetite. Less common side effects include faint discoloration of fingernails and toenails, mood elevation, occasional tingling or transient numbness of the hands or feet, and minor skin discoloration. Allergic reactions are rare.
Early long-term higher-dose therapy with AZT was initially associated with side effects that sometimes limited therapy, including anemia, neutropenia, hepatotoxicity, cardiomyopathy, and myopathy. All of these conditions were generally found to be reversible upon reduction of AZT dosages. They have been attributed to several possible causes, including transient depletion of mitochondrial DNA, sensitivity of the γ-DNA polymerase in some cell mitochondria, the depletion of thymidine triphosphate, oxidative stress, reduction of intracellular L-carnitine, or apoptosis of the muscle cells. Anemia due to AZT was successfully treated using erythropoietin to stimulate red blood cell production. Drugs that inhibit hepatic glucuronidation, such as indomethacin, nordazepam, acetylsalicylic acid (aspirin), and trimethoprim, decreased the elimination rate and increased the therapeutic strength of the medication. Today, side effects are much less common with the use of lower doses of AZT.
According to IARC, there is sufficient evidence in experimental animals for the carcinogenicity of zidovudine; it is possibly carcinogenic to humans (Group 2B). In 2009, the State of California added zidovudine to its list of chemicals "known to the state of California to cause cancer and other reproductive harm."
Viral resistance
Even at the highest doses that can be tolerated in patients, AZT is not potent enough to prevent all HIV replication and may only slow the replication of the virus and progression of the disease. Prolonged AZT treatment can lead to HIV developing resistance to AZT by mutation of its reverse transcriptase. To slow the development of resistance, physicians generally recommend that AZT be given in combination with another reverse-transcriptase inhibitor and an antiretroviral from another group, such as a protease inhibitor, non-nucleoside reverse-transcriptase inhibitor, or integrase inhibitor; this type of therapy is known as HAART (Highly Active Anti Retroviral Therapy).
Mechanism of action
AZT is a thymidine analogue. AZT works by selectively inhibiting HIV's reverse transcriptase, the enzyme that the virus uses to make a DNA copy of its RNA. Reverse transcription is necessary for production of HIV's double-stranded DNA, which would be subsequently integrated into the genetic material of the infected cell (where it is called a provirus).
Cellular enzymes convert AZT into the effective 5'-triphosphate form. Studies have shown that the termination of HIV's forming DNA chains is the specific factor in the inhibitory effect.
At very high doses, AZT's triphosphate form may also inhibit DNA polymerase used by human cells to undergo cell division, but regardless of dosage AZT has an approximately 100-fold greater affinity for HIV's reverse transcriptase. The selectivity has been suggested to be due to the cell's ability to quickly repair its own DNA chain if it is disrupted by AZT during its formation, whereas the HIV virus lacks that ability. Thus AZT inhibits HIV replication without affecting the function of uninfected cells. At sufficiently high dosages, AZT begins to inhibit the cellular DNA polymerase used by mitochondria to replicate, accounting for its potentially toxic but reversible effects on cardiac and skeletal muscles, causing myositis.
Chemistry
Enantiopure AZT crystallizes in the monoclinic space group P21. The primary intermolecular bonding motif is a hydrogen bonded dimeric ring formed from two N-H...O interactions.
History
Initial cancer research
In the 1960s, the theory that most cancers were caused by environmental retroviruses gained clinical support and funding. It had recently become known, due to the work of Nobel laureates Howard Temin and David Baltimore, that nearly all avian cancers were caused by bird retroviruses, but corresponding human retroviruses had not yet been found.
In parallel work, other compounds that successfully blocked the synthesis of nucleic acids had been proven to be both antibacterial, antiviral, and anticancer agents, the leading work being done at the laboratory of Nobel laureates George H. Hitchings and Gertrude Elion, leading to the development of the antitumor agent 6-mercaptopurine.
Richard E. Beltz first synthesized AZT in 1961, but did not publish his research. Jerome Horwitz of the Barbara Ann Karmanos Cancer Institute and Wayne State University School of Medicine synthesized AZT in 1964 under a US National Institutes of Health (NIH) grant. Development was shelved after it proved biologically inert in mice. In 1974, Wolfram Ostertag of the Max Planck Institute for Experimental Medicine in Göttingen, Germany, reported that AZT specifically targeted Friend virus (a strain of murine leukemia virus).
This report attracted little interest from other researchers as the Friend leukemia virus is a retrovirus, and at the time, there were no known human diseases caused by retroviruses.
HIV/AIDS research
In 1983, researchers at the Institut Pasteur in Paris identified the retrovirus now known as the Human Immunodeficiency Virus (HIV) as the cause of acquired immunodeficiency syndrome (AIDS) in humans. Shortly thereafter, Samuel Broder, Hiroaki Mitsuya, and Robert Yarchoan of the United States National Cancer Institute (NCI) initiated a program to develop therapies for HIV/AIDS. Using a line of CD4+ T cells that they had made, they developed an assay to screen drugs for their ability to protect CD4+ T cells from being killed by HIV. In order to expedite the process of discovering a drug, the NCI researchers actively sought collaborations with pharmaceutical companies having access to libraries of compounds with potential antiviral activity. This assay could simultaneously test both the anti-HIV effect of the compounds and their toxicity against infected T cells.
In June 1984, Burroughs-Wellcome virologist Marty St. Clair set up a program to discover drugs with the potential to inhibit HIV replication. Burroughs-Wellcome had expertise in nucleoside analogs and viral diseases, led by researchers including George Hitchings, Gertrude Elion, David Barry, Paul (Chip) McGuirt Jr., Philip Furman, Martha St. Clair, Janet Rideout, Sandra Lehrman and others. Their research efforts were focused in part on the viral enzyme reverse transcriptase. Reverse transcriptase is an enzyme that retroviruses, including HIV, utilize to replicate themselves. Secondary testing was performed in mouse cells infected with the retroviruses Friend virus or Harvey sarcoma virus, as the Wellcome group did not have a viable in-house HIV antiviral assay in place at that time, and these other retroviruses were believed to represent reasonable surrogates. AZT proved to be a remarkably potent inhibitor of both Friend virus and Harvey sarcoma virus, and a search of the company's records showed that it had demonstrated low toxicity when tested for its antibacterial activity in rats many years earlier. Based in part on these results, AZT was selected by nucleoside chemist Janet Rideout as one of 11 compounds to send to the NCI for testing in that organization's HIV antiviral assay.
In February 1985, the NCI scientists found that AZT had potent efficacy in vitro.
Several months later, a phase 1 clinical trial of AZT was initiated at the NCI and Duke University.
In doing this Phase I trial, they built on their experience in doing an earlier trial with suramin, another drug that had shown effective anti-HIV activity in the laboratory. This initial trial of AZT proved that the drug could be safely administered to patients with HIV, that it increased their CD4 counts, restored T cell immunity as measured by skin testing, and that it showed strong evidence of clinical effectiveness, such as inducing weight gain in AIDS patients. It also showed that levels of AZT that worked in vitro could be achieved in the serum of patients, and that the drug penetrated into the brain.
Patent filed and FDA approval
A double-blind, placebo-controlled randomized trial of AZT was subsequently conducted by Burroughs-Wellcome; it was stopped early when significantly fewer deaths occurred in the group receiving the drug than in the placebo group, indicating that AZT safely prolongs the lives of people with HIV. Burroughs-Wellcome filed for a patent for AZT in 1985. The Anti-Infective Advisory Committee to the United States Food and Drug Administration (FDA) voted ten to one to recommend the approval of AZT. The FDA approved the drug (via the then-new FDA accelerated approval system) for use against HIV, AIDS, and AIDS Related Complex (ARC, a now-obsolete medical term for pre-AIDS illness) on March 20, 1987. The time between the first demonstration that AZT was active against HIV in the laboratory and its approval was 25 months.
AZT was subsequently approved unanimously for infants and children in 1990. AZT was initially administered in significantly higher dosages than today, typically 400 mg every four hours, day and night, compared to the modern dosage of 300 mg twice daily. The paucity of alternatives for treating HIV/AIDS at that time left little doubt about the risk/benefit ratio: the inevitable slow, disfiguring, and painful death from untreated HIV outweighed the drug's side effects of transient anemia and malaise.
Society and culture
Until 1991, 80% of the $420 million allocated to the National Institutes of Health's AIDS Clinical Trials Group went toward studies of AZT. From the approval of the drug until 1993, aside from two similarly designed chemotherapies, ddI and ddC, no other drugs against AIDS were approved, leading to criticism that research preoccupation with AZT and its close relatives, and the massive diverting of funds to them, had delayed the development of more efficacious drugs.
In 1991, the advocacy group Public Citizen filed a lawsuit claiming that the patents were invalid. Subsequently, Barr Laboratories and Novopharm Ltd. also challenged the patent, in part based on the assertion that NCI scientists Samuel Broder, Hiroaki Mitsuya, and Robert Yarchoan should have been named as inventors, and those two companies applied to the FDA to sell AZT as a generic drug. In response, Burroughs Wellcome Co. filed a lawsuit against the two companies. The United States Court of Appeals for the Federal Circuit ruled in 1992 in favor of Burroughs Wellcome, holding that even though the company had never tested AZT against HIV, its scientists had conceived of it working before they sent it to the NCI scientists. The suit was appealed up to the Supreme Court of the US, but in 1996 the Court declined to review it. The case, Burroughs Wellcome Co. v. Barr Laboratories, was a landmark in US law of inventorship.
In 2002, another lawsuit was filed challenging the patent by the AIDS Healthcare Foundation, which also filed an antitrust case against GSK. The patent case was dismissed in 2003 and AHF filed a new case challenging the patent.
GSK's patents on AZT expired in 2005, and in September 2005, the FDA approved three generic versions.
| Biology and health sciences | Antiviral drugs | Health |
195193 | https://en.wikipedia.org/wiki/Sky | Sky | The sky is an unobstructed view upward from the surface of the Earth. It includes the atmosphere and outer space. It may also be considered a place between the ground and outer space, thus distinct from outer space.
In the field of astronomy, the sky is also called the celestial sphere. This is an abstract sphere, concentric to the Earth, on which the Sun, Moon, planets, and stars appear to be drifting. The celestial sphere is conventionally divided into designated areas called constellations.
Usually, the term sky informally refers to a perspective from the Earth's surface; however, the meaning and usage can vary. An observer on the surface of the Earth can see a small part of the sky, which resembles a dome (sometimes called the sky bowl) appearing flatter during the day than at night. In some cases, such as in discussing the weather, the sky refers to only the lower, denser layers of the atmosphere.
The daytime sky appears blue because air molecules scatter shorter wavelengths of sunlight more than longer ones (redder light). The night sky appears to be a mostly dark surface or region spangled with stars. The Sun and sometimes the Moon are visible in the daytime sky unless obscured by clouds. At night, the Moon, planets, and stars are similarly visible in the sky.
Some of the natural phenomena seen in the sky are clouds, rainbows, and aurorae. Lightning and precipitation are also visible in the sky. Certain birds and insects, as well as human inventions like aircraft and kites, can fly in the sky. Due to human activities, smog during the day and light pollution during the night are often seen above large cities.
Etymology
The word sky comes from the Old Norse ský, meaning 'cloud, abode of God'. The Norse term is also the source of the Old English scēo, which shares the same Indo-European base as the classical Latin obscūrus, meaning 'obscure'.
In Old English, the term heaven was used to describe the observable expanse above the earth. During the period of Middle English, "heaven" began shifting toward its current, religious meaning.
During daytime
Except for direct sunlight, most of the light in the daytime sky is caused by scattering, which is dominated by a small-particle limit called Rayleigh scattering. The scattering due to molecule-sized particles (as in air) is greater in the directions both toward and away from the source of light than it is in directions perpendicular to the incident path. Scattering is significant for light at all visible wavelengths, but is stronger at the shorter (bluer) end of the visible spectrum, meaning that the scattered light is bluer than its source: the Sun. The remaining direct sunlight, having lost some of its shorter-wavelength components, appears slightly less blue.
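The wavelength dependence can be made concrete: in the Rayleigh regime, scattered intensity scales roughly as the inverse fourth power of wavelength. A minimal sketch (the 450 nm and 700 nm values are illustrative choices for blue and red light, not figures from this article):

```python
def rayleigh_ratio(lambda_a_nm: float, lambda_b_nm: float) -> float:
    """How much more strongly light at wavelength a scatters than at b,
    using the Rayleigh approximation: intensity ~ 1 / wavelength**4."""
    return (lambda_b_nm / lambda_a_nm) ** 4

# Blue (~450 nm) vs. red (~700 nm): blue scatters roughly 6 times more.
print(rayleigh_ratio(450, 700))  # ~5.85
```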
Scattering also occurs even more strongly in clouds. Individual water droplets refract white light into a set of colored rings. If a cloud is thick enough, scattering from multiple water droplets will wash out the colored rings, producing a flat white color.
The sky can turn a multitude of colors such as red, orange, purple, and yellow (especially near sunset or sunrise) when the light must travel a much longer path (or optical depth) through the atmosphere. Scattering effects also partially polarize light from the sky and are most pronounced at an angle 90° from the Sun. Scattered light from the horizon travels through as much as 38 times the air mass as does light from the zenith, so the blue of the sky appears vivid at the zenith and pale near the horizon. Red light is also scattered if there is enough air between the source and the observer, causing parts of the sky to change color as the Sun rises or sets. As the air mass nears infinity, scattered daylight appears whiter and whiter.
Apart from the Sun, distant clouds or snowy mountaintops may appear yellow. The effect is not very obvious on clear days, but is very pronounced when clouds cover the line of sight, reducing the blue hue from scattered sunlight. At higher altitudes, the sky tends toward darker colors since scattering is reduced due to lower air density. An extreme example is the Moon, where no atmospheric scattering occurs, making the lunar sky black even when the Sun is visible.
Sky luminance distribution models have been recommended by the International Commission on Illumination (CIE) for the design of daylighting schemes. Recent developments relate to "all sky models" for modelling sky luminance under weather conditions ranging from clear to overcast.
During twilight
The brightness and color of the sky vary greatly over the course of a day, and the primary cause of these properties differs as well. When the Sun is well above the horizon, direct scattering of sunlight (Rayleigh scattering) is the overwhelmingly dominant source of light. However, during twilight, the period between sunset and night or between night and sunrise, the situation is more complex.
Green flashes and green rays are optical phenomena that occur shortly after sunset or before sunrise, when a green spot is visible above the Sun, usually for no more than a second or two, or it may resemble a green ray shooting up from the sunset point. Green flashes are a group of phenomena that stem from different causes, most of which occur when there is a temperature inversion (when the temperature increases with altitude rather than the normal decrease in temperature with altitude). Green flashes may be observed from any altitude (even from an aircraft). They are usually seen above an unobstructed horizon, such as over the ocean, but are also seen above clouds and mountains. Green flashes may also be observed at the horizon in association with the Moon and bright planets, including Venus and Jupiter.
Earth's shadow is the shadow that the planet casts through its atmosphere and into outer space. This atmospheric phenomenon is visible during civil twilight (after sunset and before sunrise). When the weather conditions and the observing site permit a clear view of the horizon, the shadow's fringe appears as a dark or dull bluish band just above the horizon, in the low part of the sky opposite of the (setting or rising) Sun's direction. A related phenomenon is the Belt of Venus (or antitwilight arch), a pinkish band that is visible above the bluish band of Earth's shadow in the same part of the sky. No defined line divides Earth's shadow and the Belt of Venus; one colored band fades into the other in the sky.
Twilight is divided into three stages according to the Sun's depth below the horizon, measured in segments of 6°. After sunset, the civil twilight sets in; it ends when the Sun drops more than 6° below the horizon. This is followed by the nautical twilight, when the Sun is between 6° and 12° below the horizon (depth between −6° and −12°), after which comes the astronomical twilight, defined as the period between −12° and −18°. When the Sun drops more than 18° below the horizon, the sky generally attains its minimum brightness.
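Since the three stages are defined by fixed 6° steps of solar depression, classifying them is a direct threshold check. A minimal sketch (the function name and the use of elevation in degrees are illustrative choices):

```python
def twilight_stage(sun_elevation_deg: float) -> str:
    """Classify the sky by the Sun's elevation (negative values = below horizon)."""
    if sun_elevation_deg >= 0:
        return "day"
    if sun_elevation_deg > -6:
        return "civil twilight"
    if sun_elevation_deg > -12:
        return "nautical twilight"
    if sun_elevation_deg > -18:
        return "astronomical twilight"
    return "night"  # Sun more than 18 degrees below the horizon

print(twilight_stage(-8))  # nautical twilight
```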
Several sources contribute to the intrinsic brightness of the sky, namely airglow, indirect scattering of sunlight, scattering of starlight, and artificial light pollution.
During the night
The term night sky refers to the sky as seen at night. The term is usually associated with skygazing and astronomy, with reference to views of celestial bodies such as stars, the Moon, and planets that become visible on a clear night after the Sun has set. Natural light sources in a night sky include moonlight, starlight, and airglow, depending on location and timing. The fact that the sky is not completely dark at night can be easily observed. Were the sky (in the absence of moon and city lights) absolutely dark, one would not be able to see the silhouette of an object against the sky.
The night sky and studies of it have a historical place in both ancient and modern cultures. In the past, for instance, farmers have used the state of the night sky as a calendar to determine when to plant crops. The ancient belief in astrology is generally based on the belief that relationships between heavenly bodies influence or convey information about events on Earth. The scientific study of the night sky and bodies observed within it, meanwhile, takes place in the science of astronomy.
Within visible-light astronomy, the visibility of celestial objects in the night sky is affected by light pollution. The presence of the Moon in the night sky has historically hindered astronomical observation by increasing the amount of ambient lighting. With the advent of artificial light sources, however, light pollution has been a growing problem for viewing the night sky. Special filters and modifications to light fixtures can help to alleviate this problem, but for the best views, both professional and amateur optical astronomers seek viewing sites located far from major urban areas.
Use in weather forecasting
Along with pressure tendency, the condition of the sky is one of the more important parameters used to forecast weather in mountainous areas. Thickening of cloud cover or the invasion of a higher cloud deck is indicative of rain in the near future. At night, high thin cirrostratus clouds can lead to halos around the Moon, which indicate the approach of a warm front and its associated rain. Morning fog portends fair conditions and can be associated with a marine layer, an indication of a stable atmosphere. Rainy conditions are preceded by wind or clouds which prevent fog formation. The approach of a line of thunderstorms could indicate the approach of a cold front. Cloud-free skies are indicative of fair weather for the near future. The use of sky cover in weather prediction has led to various weather lore over the centuries.
Tropical cyclones
Within 36 hours of the passage of a tropical cyclone's center, the pressure begins to fall and a veil of white cirrus clouds approaches from the cyclone's direction. Within 24 hours of the closest approach to the center, low clouds begin to move in, also known as the bar of a tropical cyclone, as the barometric pressure begins to fall more rapidly and the winds begin to increase. Within 18 hours of the center's approach, squally weather is common, with sudden increases in wind accompanied by rain showers or thunderstorms. Within six hours of the center's arrival, rain becomes continuous. Within an hour of the center, the rain becomes very heavy and the highest winds within the tropical cyclone are experienced. When the center arrives with a strong tropical cyclone, weather conditions improve and the sun becomes visible as the eye moves overhead. Once the system departs, winds reverse and, along with the rain, suddenly increase. One day after the center's passage, the low overcast is replaced with a higher overcast, and the rain becomes intermittent. By 36 hours after the center's passage, the high overcast breaks and the pressure begins to level off.
Use in transportation
Flight is the process by which an object moves through or beyond the sky (as in the case of spaceflight), whether by generating aerodynamic lift, propulsive thrust, aerostatically using buoyancy, or by ballistic movement, without any direct mechanical support from the ground. The engineering aspects of flight are studied in aerospace engineering which is subdivided into aeronautics, which is the study of vehicles that travel through the air, and astronautics, the study of vehicles that travel through space, and in ballistics, the study of the flight of projectiles. While human beings have been capable of flight via hot air balloons since 1783, other species have used flight for significantly longer. Animals, such as birds, bats, and insects are capable of flight. Spores and seeds from plants use flight, via use of the wind, as a method of propagating their species.
Significance in mythology
Many mythologies have deities especially associated with the sky. In Egyptian religion, the sky was deified as the goddess Nut and as the god Horus. Dyeus is reconstructed as the god of the sky, or the sky personified, in Proto-Indo-European religion, whence Zeus, the god of the sky and thunder in Greek mythology and the Roman god of sky and thunder Jupiter.
In Australian Aboriginal mythology, Altjira (or Arrernte) is the main sky god and also the creator god. In Iroquois mythology, Atahensic was a sky goddess who fell down to the ground during the creation of the Earth. Many cultures have drawn constellations between stars in the sky, using them in association with legends and mythology about their deities.
| Physical sciences | Atmospheric optics | null |
195198 | https://en.wikipedia.org/wiki/Mist | Mist | Mist is a phenomenon caused by small droplets of water suspended in the cold air, usually by condensation. Physically, it is an example of a dispersion. It is most commonly seen where water vapor in warm, moist air meets sudden cooling, such as in exhaled air in the winter, or when throwing water onto the hot stove of a sauna. It can be created artificially with aerosol canisters if the humidity and temperature conditions are right. It can also occur as part of natural weather, when humid air cools rapidly, notably when the air comes into contact with surfaces that are much cooler than the air (e.g. mountains).
The formation of mist, as of other suspensions, is greatly aided by the presence of nucleation sites on which the suspended water phase can congeal. Thus even such unusual sources of nucleation as small particulates from volcanic eruptions, releases of strongly polar gases, and even the magnetospheric ions associated with polar lights can in right conditions trigger condensation and the formation of mist.
Mist is commonly confused with fog, which resembles a stratus cloud lying at ground level. These two phenomena differ, but share some commonalities; similar processes form both fog and mist. Fog is denser, more opaque, and generally lasts a longer time, while mist is thinner and more transparent.
Description
Cloud cover is often referred to as "mist" when encountered on surfaces of mountains, whereas moisture suspended above a body of water or marsh area is usually called "fog". One main difference between mist and fog is visibility. The phenomenon is called fog if the visibility is one kilometre (0.62 mi) or less. In the United Kingdom, the definition of fog for driving purposes is visibility of less than 100 metres at the surface, while for pilots the threshold is 1 km at cruising height. Otherwise, it is known as mist.
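The visibility thresholds above translate directly into a simple classification rule. A minimal sketch, assuming the general 1 km convention rather than the stricter UK driving definition:

```python
def obscuration_type(visibility_m: float) -> str:
    """Label suspended water droplets as fog (visibility <= 1 km) or mist."""
    return "fog" if visibility_m <= 1000 else "mist"

print(obscuration_type(800))   # fog
print(obscuration_type(3000))  # mist
```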
Mist makes a light beam visible from the side via refraction and scattering on the suspended water droplets, and it can produce rainbows.
"Scotch mist" is a light steady drizzle that appears like mist.
Mist usually occurs near shores and is often associated with fog. Mist can reach as high as mountain tops when temperatures are very low and strong condensation occurs.
Freezing mist is similar to freezing fog, only the density is less and the visibility greater. When fog occurs at temperatures below 0°C it is known as freezing fog; the supercooled droplets nonetheless remain suspended.
| Physical sciences | Clouds | Earth science |
195243 | https://en.wikipedia.org/wiki/Riemannian%20geometry | Riemannian geometry | Riemannian geometry is the branch of differential geometry that studies Riemannian manifolds, defined as smooth manifolds with a Riemannian metric (an inner product on the tangent space at each point that varies smoothly from point to point). This gives, in particular, local notions of angle, length of curves, surface area and volume. From those, some other global quantities can be derived by integrating local contributions.
Riemannian geometry originated with the vision of Bernhard Riemann expressed in his inaugural lecture "Über die Hypothesen, welche der Geometrie zu Grunde liegen" ("On the Hypotheses on which Geometry is Based"). It is a very broad and abstract generalization of the differential geometry of surfaces in R3. Development of Riemannian geometry resulted in synthesis of diverse results concerning the geometry of surfaces and the behavior of geodesics on them, with techniques that can be applied to the study of differentiable manifolds of higher dimensions. It enabled the formulation of Einstein's general theory of relativity, made profound impact on group theory and representation theory, as well as analysis, and spurred the development of algebraic and differential topology.
Introduction
Riemannian geometry was first put forward in generality by Bernhard Riemann in the 19th century. It deals with a broad range of geometries whose metric properties vary from point to point, including the standard types of non-Euclidean geometry.
Every smooth manifold admits a Riemannian metric, which often helps to solve problems of differential topology. It also serves as an entry level for the more complicated structure of pseudo-Riemannian manifolds, which (in four dimensions) are the main objects of the theory of general relativity. Other generalizations of Riemannian geometry include Finsler geometry.
There exists a close analogy of differential geometry with the mathematical structure of defects in regular crystals. Dislocations and disclinations produce torsions and curvature.
The following articles provide some useful introductory material:
Metric tensor
Riemannian manifold
Levi-Civita connection
Curvature
Riemann curvature tensor
List of differential geometry topics
Glossary of Riemannian and metric geometry
Classical theorems
What follows is an incomplete list of the most classical theorems in Riemannian geometry. The theorems are chosen for their importance and the elegance of their formulation. Most of the results can be found in the classic monograph by Jeff Cheeger and D. Ebin (see below).
The formulations given are far from being very exact or the most general. This list is oriented to those who already know the basic definitions and want to know what these definitions are about.
General theorems
Gauss–Bonnet theorem The integral of the Gauss curvature on a compact 2-dimensional Riemannian manifold is equal to 2πχ(M), where χ(M) denotes the Euler characteristic of M. This theorem has a generalization to any compact even-dimensional Riemannian manifold; see the generalized Gauss–Bonnet theorem.
Nash embedding theorems. They state that every Riemannian manifold can be isometrically embedded in a Euclidean space Rn.
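As a quick numerical illustration of the Gauss–Bonnet theorem above, consider the unit sphere: its Gaussian curvature is K = 1 everywhere, its area is 4π, and its Euler characteristic is χ = 2. A minimal sketch of the check:

```python
import math

K = 1.0                    # Gaussian curvature of the unit sphere
area = 4 * math.pi         # surface area of the unit sphere
chi = 2                    # Euler characteristic of the sphere

print(K * area)            # integral of K over the sphere: ~12.566
print(2 * math.pi * chi)   # 2*pi*chi(M): ~12.566, matching Gauss-Bonnet
```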
Geometry in large
In all of the following theorems we assume some local behavior of the space (usually formulated using a curvature assumption) to derive some information about the global structure of the space, including either some information on the topological type of the manifold or on the behavior of points at "sufficiently large" distances.
Pinched sectional curvature
Sphere theorem. If M is a simply connected compact n-dimensional Riemannian manifold with sectional curvature strictly pinched between 1/4 and 1 then M is diffeomorphic to a sphere.
Cheeger's finiteness theorem. Given constants C, D and V, there are only finitely many (up to diffeomorphism) compact n-dimensional Riemannian manifolds with sectional curvature |K| ≤ C, diameter ≤ D and volume ≥ V.
Gromov's almost flat manifolds. There is an εn > 0 such that if an n-dimensional Riemannian manifold has a metric with sectional curvature |K| ≤ εn and diameter ≤ 1 then its finite cover is diffeomorphic to a nil manifold.
Sectional curvature bounded below
Cheeger–Gromoll's soul theorem. If M is a non-compact complete non-negatively curved n-dimensional Riemannian manifold, then M contains a compact, totally geodesic submanifold S such that M is diffeomorphic to the normal bundle of S (S is called the soul of M). In particular, if M has strictly positive curvature everywhere, then it is diffeomorphic to Rn. In 1994, G. Perelman gave an astonishingly short and elegant proof of the Soul Conjecture: M is diffeomorphic to Rn if it has positive curvature at only one point.
Gromov's Betti number theorem. There is a constant C = C(n) such that if M is a compact connected n-dimensional Riemannian manifold with positive sectional curvature then the sum of its Betti numbers is at most C.
Grove–Petersen's finiteness theorem. Given constants C, D and V, there are only finitely many homotopy types of compact n-dimensional Riemannian manifolds with sectional curvature K ≥ C, diameter ≤ D and volume ≥ V.
Sectional curvature bounded above
The Cartan–Hadamard theorem states that a complete simply connected Riemannian manifold M with nonpositive sectional curvature is diffeomorphic to the Euclidean space Rn with n = dim M via the exponential map at any point. It implies that any two points of a simply connected complete Riemannian manifold with nonpositive sectional curvature are joined by a unique geodesic.
The geodesic flow of any compact Riemannian manifold with negative sectional curvature is ergodic.
If M is a complete Riemannian manifold with sectional curvature bounded above by a strictly negative constant k then it is a CAT(k) space. Consequently, its fundamental group Γ = π1(M) is Gromov hyperbolic. This has many implications for the structure of the fundamental group:
it is finitely presented;
the word problem for Γ has a positive solution;
the group Γ has finite virtual cohomological dimension;
it contains only finitely many conjugacy classes of elements of finite order;
the abelian subgroups of Γ are virtually cyclic, so that it does not contain a subgroup isomorphic to Z×Z.
Ricci curvature bounded below
Myers theorem. If a complete Riemannian manifold has positive Ricci curvature then its fundamental group is finite.
Bochner's formula. If a compact Riemannian n-manifold has non-negative Ricci curvature, then its first Betti number is at most n, with equality if and only if the Riemannian manifold is a flat torus.
Splitting theorem. If a complete n-dimensional Riemannian manifold has nonnegative Ricci curvature and a straight line (i.e. a geodesic that minimizes distance on each interval) then it is isometric to a direct product of the real line and a complete (n-1)-dimensional Riemannian manifold that has nonnegative Ricci curvature.
Bishop–Gromov inequality. The volume of a metric ball of radius r in a complete n-dimensional Riemannian manifold with nonnegative Ricci curvature is at most the volume of a ball of the same radius r in Euclidean space.
Gromov's compactness theorem. The set of all Riemannian manifolds with positive Ricci curvature and diameter at most D is pre-compact in the Gromov–Hausdorff metric.
Negative Ricci curvature
The isometry group of a compact Riemannian manifold with negative Ricci curvature is discrete.
Any smooth manifold of dimension n ≥ 3 admits a Riemannian metric with negative Ricci curvature. (This is not true for surfaces.)
Positive scalar curvature
The n-dimensional torus does not admit a metric with positive scalar curvature.
If the injectivity radius of a compact n-dimensional Riemannian manifold is ≥ π then the average scalar curvature is at most n(n-1).
| Mathematics | Geometry | null |
195351 | https://en.wikipedia.org/wiki/Jacobian%20matrix%20and%20determinant | Jacobian matrix and determinant | In vector calculus, the Jacobian matrix (, ) of a vector-valued function of several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant. Both the matrix and (if applicable) the determinant are often referred to simply as the Jacobian in literature. They are named after Carl Gustav Jacob Jacobi.
Motivation
The Jacobian can be understood by considering a unit area in the new coordinate space and examining how that unit area transforms when mapped into xy coordinate space, in which the integral is visually understood. The process involves taking partial derivatives with respect to the new coordinates, then applying the determinant and hence obtaining the Jacobian.
Definition
Suppose f : Rn → Rm is a function such that each of its first-order partial derivatives exists on Rn. This function takes a point x ∈ Rn as input and produces the vector f(x) ∈ Rm as output. Then the Jacobian matrix of f, denoted Jf, is defined as the m×n matrix whose (i, j) entry is ∂fi/∂xj, or explicitly

Jf = [ ∂f/∂x1 ⋯ ∂f/∂xn ], whose i-th row is ∇Tfi,

where ∇Tfi is the transpose (row vector) of the gradient of the i-th component.
The Jacobian matrix, whose entries are functions of x, is denoted in various ways; other common notations include Df, ∇f, and ∂(f1, ..., fm)/∂(x1, ..., xn). Some authors define the Jacobian as the transpose of the form given above.
The Jacobian matrix represents the differential of f at every point where f is differentiable. In detail, if h is a displacement vector represented by a column matrix, the matrix product Jf(x) · h is another displacement vector, which is the best linear approximation of the change of f in a neighborhood of x, if f is differentiable at x. This means that the function that maps y to f(x) + Jf(x) · (y − x) is the best linear approximation of f(y) for all points y close to x. The linear map Jf(x) is known as the derivative or the differential of f at x.
When m = n, the Jacobian matrix is square, so its determinant is a well-defined function of x, known as the Jacobian determinant of f. It carries important information about the local behavior of f. In particular, the function f has a differentiable inverse function in a neighborhood of a point x if and only if the Jacobian determinant is nonzero at x (see inverse function theorem for an explanation of this and Jacobian conjecture for a related problem of global invertibility). The Jacobian determinant also appears when changing the variables in multiple integrals (see substitution rule for multiple variables).
When m = 1, that is when f : Rn → R is a scalar-valued function, the Jacobian matrix reduces to the row vector ∇Tf; this row vector of all first-order partial derivatives of f is the transpose of the gradient of f, i.e. Jf = ∇Tf. Specializing further, when m = n = 1, that is when f : R → R is a scalar-valued function of a single variable, the Jacobian matrix has a single entry; this entry is the derivative of the function f.
These concepts are named after the mathematician Carl Gustav Jacob Jacobi (1804–1851).
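Because the Jacobian collects first-order partial derivatives, it can be approximated numerically by perturbing one coordinate at a time. A minimal sketch (the helper name, step size, and test function are illustrative choices):

```python
import numpy as np

def numerical_jacobian(f, x, h=1e-6):
    """Approximate the Jacobian of f: R^n -> R^m at x by central differences."""
    x = np.asarray(x, dtype=float)
    m = np.asarray(f(x)).size
    J = np.zeros((m, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = h
        # Column j holds the partial derivatives with respect to x_j.
        J[:, j] = (np.asarray(f(x + step)) - np.asarray(f(x - step))) / (2 * h)
    return J

f = lambda v: np.array([v[0] ** 2 * v[1], 5 * v[0] + np.sin(v[1])])
print(numerical_jacobian(f, [1.0, 2.0]))  # approx [[4, 1], [5, cos 2]]
```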
Jacobian matrix
The Jacobian of a vector-valued function in several variables generalizes the gradient of a scalar-valued function in several variables, which in turn generalizes the derivative of a scalar-valued function of a single variable. In other words, the Jacobian matrix of a scalar-valued function in several variables is (the transpose of) its gradient and the gradient of a scalar-valued function of a single variable is its derivative.
At each point where a function is differentiable, its Jacobian matrix can also be thought of as describing the amount of "stretching", "rotating" or "transforming" that the function imposes locally near that point. For example, if f(x, y) is used to smoothly transform an image, the Jacobian matrix Jf(x, y) describes how the image in the neighborhood of (x, y) is transformed.
If a function is differentiable at a point, its differential is given in coordinates by the Jacobian matrix. However, a function does not need to be differentiable for its Jacobian matrix to be defined, since only its first-order partial derivatives are required to exist.
If f is differentiable at a point p in Rn, then its differential is represented by Jf(p). In this case, the linear transformation represented by Jf(p) is the best linear approximation of f near the point p, in the sense that

f(x) = f(p) + Jf(p)(x − p) + o(‖x − p‖),

where o(‖x − p‖) is a quantity that approaches zero much faster than the distance between x and p does as x approaches p. This approximation specializes to the approximation of a scalar function of a single variable by its Taylor polynomial of degree one, namely

f(x) = f(p) + f′(p)(x − p) + o(x − p).
In this sense, the Jacobian may be regarded as a kind of "first-order derivative" of a vector-valued function of several variables. In particular, this means that the gradient of a scalar-valued function of several variables may too be regarded as its "first-order derivative".
Composable differentiable functions f : Rn → Rm and g : Rm → Rk satisfy the chain rule, namely Jg∘f(x) = Jg(f(x)) Jf(x) for x in Rn.
The Jacobian of the gradient of a scalar function of several variables has a special name: the Hessian matrix, which in a sense is the "second derivative" of the function in question.
Jacobian determinant
If m = n, then f is a function from Rn to itself and the Jacobian matrix is a square matrix. We can then form its determinant, known as the Jacobian determinant. The Jacobian determinant is sometimes simply referred to as "the Jacobian".
The Jacobian determinant at a given point gives important information about the behavior of f near that point. For instance, the continuously differentiable function f is invertible near a point p if the Jacobian determinant at p is non-zero. This is the inverse function theorem. Furthermore, if the Jacobian determinant at p is positive, then f preserves orientation near p; if it is negative, f reverses orientation. The absolute value of the Jacobian determinant at p gives us the factor by which the function expands or shrinks volumes near p; this is why it occurs in the general substitution rule.
The Jacobian determinant is used when making a change of variables when evaluating a multiple integral of a function over a region within its domain. To accommodate for the change of coordinates, the magnitude of the Jacobian determinant arises as a multiplicative factor within the integral. This is because the n-dimensional volume element dV is in general a parallelepiped in the new coordinate system, and the n-volume of a parallelepiped is the determinant of its edge vectors.
The Jacobian can also be used to determine the stability of equilibria for systems of differential equations by approximating behavior near an equilibrium point.
Inverse
According to the inverse function theorem, the matrix inverse of the Jacobian matrix of an invertible function f is the Jacobian matrix of the inverse function. That is, the Jacobian matrix of the inverse function f⁻¹ at a point p is

Jf⁻¹(p) = [Jf(f⁻¹(p))]⁻¹,

and the Jacobian determinant is

det(Jf⁻¹(p)) = 1 / det(Jf(f⁻¹(p))).
If the Jacobian is continuous and nonsingular at the point p in Rn, then f is invertible when restricted to some neighbourhood of p. In other words, if the Jacobian determinant is not zero at a point, then the function is locally invertible near this point.
The (unproved) Jacobian conjecture is related to global invertibility in the case of a polynomial function, that is a function defined by n polynomials in n variables. It asserts that, if the Jacobian determinant is a non-zero constant (or, equivalently, that it does not have any complex zero), then the function is invertible and its inverse is a polynomial function.
Critical points
If f : Rn → Rm is a differentiable function, a critical point of f is a point where the rank of the Jacobian matrix is not maximal. This means that the rank at the critical point is lower than the rank at some neighbouring point. In other words, let k be the maximal dimension of the open balls contained in the image of f; then a point is critical if all minors of rank k of the Jacobian matrix are zero.
In the case where m = n = k, a point is critical if the Jacobian determinant is zero.
Examples
Example 1
Consider the function f : R2 → R2, with (x, y) ↦ (f1(x, y), f2(x, y)), given by

f(x, y) = (x^2 y, 5x + sin y).

Then we have

f1(x, y) = x^2 y

and

f2(x, y) = 5x + sin y.

The Jacobian matrix of f is

Jf(x, y) =
[ 2xy   x^2   ]
[ 5     cos y ]

and the Jacobian determinant is

det Jf(x, y) = 2xy cos y − 5x^2.
Example 2: polar-Cartesian transformation
The transformation from polar coordinates (r, φ) to Cartesian coordinates (x, y) is given by the function F : R+ × [0, 2π) → R2 with components

x = r cos φ
y = r sin φ.

The Jacobian determinant is equal to r. This can be used to transform integrals between the two coordinate systems:

∬ f(x, y) dx dy = ∬ f(r cos φ, r sin φ) r dr dφ.
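The polar-coordinate result can also be checked symbolically. A minimal sketch using SymPy (an illustrative tool choice, not one prescribed by this article):

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
x = r * sp.cos(phi)
y = r * sp.sin(phi)

J = sp.Matrix([x, y]).jacobian([r, phi])
print(J)                     # Matrix([[cos(phi), -r*sin(phi)], [sin(phi), r*cos(phi)]])
print(sp.simplify(J.det()))  # r
```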
Example 3: spherical-Cartesian transformation
The transformation from spherical coordinates (ρ, φ, θ) to Cartesian coordinates (x, y, z) is given by the function F : R+ × [0, π) × [0, 2π) → R3 with components

x = ρ sin φ cos θ
y = ρ sin φ sin θ
z = ρ cos φ.

The Jacobian matrix for this coordinate change is

JF(ρ, φ, θ) =
[ sin φ cos θ   ρ cos φ cos θ   −ρ sin φ sin θ ]
[ sin φ sin θ   ρ cos φ sin θ    ρ sin φ cos θ ]
[ cos φ        −ρ sin φ          0             ]

The determinant is ρ^2 sin φ. Since dV = dx dy dz is the volume for a rectangular differential volume element (because the volume of a rectangular prism is the product of its sides), we can interpret dV = ρ^2 sin φ dρ dφ dθ as the volume of the spherical differential volume element. Unlike the rectangular differential volume element's volume, this differential volume element's volume is not a constant, and varies with coordinates (ρ and φ). It can be used to transform integrals between the two coordinate systems:

∭ f(x, y, z) dx dy dz = ∭ f(ρ sin φ cos θ, ρ sin φ sin θ, ρ cos φ) ρ^2 sin φ dρ dφ dθ.
Example 4
The Jacobian matrix of the function F : R3 → R4 with components

y1 = x1
y2 = 5x3
y3 = 4x2^2 − 2x3
y4 = x3 sin x1

is

JF(x1, x2, x3) =
[ 1            0     0      ]
[ 0            0     5      ]
[ 0            8x2   −2     ]
[ x3 cos x1    0     sin x1 ]
This example shows that the Jacobian matrix need not be a square matrix.
Example 5
The Jacobian determinant of the function F : R3 → R3 with components

y1 = 5x2
y2 = 4x1^2 − 2 sin(x2x3)
y3 = x2x3

is

det JF = −40x1x2.

From this we see that F reverses orientation near those points where x1 and x2 have the same sign; the function is locally invertible everywhere except near points where x1 = 0 or x2 = 0. Intuitively, if one starts with a tiny object around a point (x1, x2, x3) and applies F to that object, one will get a resulting object with approximately 40|x1x2| times the volume of the original one, with orientation reversed when x1 and x2 have the same sign.
Other uses
Dynamical systems
Consider a dynamical system of the form x′ = F(x), where x′ is the (component-wise) derivative of x with respect to the evolution parameter t (time), and F is differentiable. If F(x0) = 0, then x0 is a stationary point (also called a steady state). By the Hartman–Grobman theorem, the behavior of the system near a stationary point is related to the eigenvalues of JF(x0), the Jacobian of F at the stationary point. Specifically, if the eigenvalues all have real parts that are negative, then the system is stable near the stationary point. If any eigenvalue has a real part that is positive, then the point is unstable. If the largest real part of the eigenvalues is zero, the Jacobian matrix does not allow for an evaluation of the stability.
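A minimal sketch of the eigenvalue test (the 2-by-2 system is an illustrative example whose Jacobian happens to be constant, standing in for JF evaluated at a stationary point):

```python
import numpy as np

# Jacobian of F at the stationary point (0, 0), here for x' = y, y' = -x - 0.5*y.
J = np.array([[0.0, 1.0],
              [-1.0, -0.5]])

eigs = np.linalg.eigvals(J)
print(eigs)  # complex pair with real part -0.25
if np.all(eigs.real < 0):
    print("stationary point is stable")
elif np.any(eigs.real > 0):
    print("stationary point is unstable")
else:
    print("inconclusive: largest real part is zero")
```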
Newton's method
A square system of coupled nonlinear equations can be solved iteratively by Newton's method. This method uses the Jacobian matrix of the system of equations.
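A minimal sketch of Newton's method for a square system, assuming the Jacobian is supplied analytically (the solver name and the example system are illustrative):

```python
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 by iterating x <- x + s, where jac(x) s = -f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = np.linalg.solve(jac(x), -f(x))
        x += s
        if np.linalg.norm(s) < tol:
            break
    return x

# Example system: x^2 + y^2 = 4 and x*y = 1.
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4, v[0] * v[1] - 1])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
print(newton(f, jac, [2.0, 0.5]))  # ~ [1.9319, 0.5176]
```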
Regression and least squares fitting
The Jacobian serves as a linearized design matrix in statistical regression and curve fitting; see non-linear least squares. The Jacobian is also used in random matrices, moments, local sensitivity and statistical diagnostics.
| Mathematics | Multivariable and vector calculus | null |
195407 | https://en.wikipedia.org/wiki/Einstein%20notation | Einstein notation | In mathematics, especially the usage of linear algebra in mathematical physics and differential geometry, Einstein notation (also known as the Einstein summation convention or Einstein summation notation) is a notational convention that implies summation over a set of indexed terms in a formula, thus achieving brevity. As part of mathematics it is a notational subset of Ricci calculus; however, it is often used in physics applications that do not distinguish between tangent and cotangent spaces. It was introduced to physics by Albert Einstein in 1916.
Introduction
Statement of convention
According to this convention, when an index variable appears twice in a single term and is not otherwise defined (see Free and bound variables), it implies summation of that term over all the values of the index. So where the indices can range over the set {1, 2, 3},

y = Σ (i=1..3) c_i x^i = c_1 x^1 + c_2 x^2 + c_3 x^3

is simplified by the convention to:

y = c_i x^i
The upper indices are not exponents but are indices of coordinates, coefficients or basis vectors. That is, in this context x^2 should be understood as the second component of x rather than the square of x (this can occasionally lead to ambiguity). The upper index position in x^i is because, typically, an index occurs once in an upper (superscript) and once in a lower (subscript) position in a term (see below). Typically, (x^1, x^2, x^3) would be equivalent to the traditional (x, y, z).
In general relativity, a common convention is that
the Greek alphabet is used for space and time components, where indices take on values 0, 1, 2, or 3 (frequently used letters are μ, ν, ...),
the Latin alphabet is used for spatial components only, where indices take on values 1, 2, or 3 (frequently used letters are i, j, k, ...),
In general, indices can range over any indexing set, including an infinite set. This should not be confused with a typographically similar convention used to distinguish between tensor index notation and the closely related but distinct basis-independent abstract index notation.
An index that is summed over is a summation index, in this case "i". It is also called a dummy index since any symbol can replace "i" without changing the meaning of the expression (provided that it does not collide with other index symbols in the same term).
An index that is not summed over is a free index and should appear only once per term. If such an index does appear, it usually also appears in every other term in an equation. An example of a free index is the "i" in the equation v_i = a_i b_j x^j, which is equivalent to the equation v_i = Σ_j (a_i b_j x^j).
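NumPy's einsum mirrors the convention directly: repeated indices in the subscript string are summed, and remaining indices are free. A minimal sketch (the arrays are illustrative):

```python
import numpy as np

c = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0])

# y = c_i x^i: the repeated index i is summed over.
print(np.einsum('i,i->', c, x))  # 32.0 = 1*4 + 2*5 + 3*6

# A free index survives: y^i = A^i_j x^j is a matrix-vector product.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
print(np.einsum('ij,j->i', A, x))  # [ 4. 10. 18.]
```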
Application
Einstein notation can be applied in slightly different ways. Typically, each index occurs once in an upper (superscript) and once in a lower (subscript) position in a term; however, the convention can be applied more generally to any repeated indices within a term. When dealing with covariant and contravariant vectors, where the position of an index indicates the type of vector, the first case usually applies; a covariant vector can only be contracted with a contravariant vector, corresponding to summation of the products of coefficients. On the other hand, when there is a fixed coordinate basis (or when not considering coordinate vectors), one may choose to use only subscripts; see below.
Vector representations
Superscripts and subscripts versus only subscripts
In terms of covariance and contravariance of vectors,
upper indices represent components of contravariant vectors (vectors),
lower indices represent components of covariant vectors (covectors).
They transform contravariantly or covariantly, respectively, with respect to change of basis.
In recognition of this fact, the following notation uses the same symbol both for a vector or covector and its components, as in:
v = v^i e_i, where v is the vector and v^i are its components (not the i-th covector v_i), and w = w_i e^i, where w is the covector and w_i are its components. The basis vector elements e_i are each column vectors, and the covector basis elements e^i are each row covectors. | Mathematics | Linear algebra | null |
195666 | https://en.wikipedia.org/wiki/Derailleur | Derailleur | A derailleur is a variable-ratio bicycle gearing system consisting of a chain, multiple sprockets of different sizes, and a mechanism to move the chain from one sprocket to another.
Modern front and rear derailleurs typically consist of a moveable chain-guide that is operated remotely by a Bowden cable attached to a shifter mounted on the down tube, handlebar stem, or handlebar. When a rider operates the lever while pedalling, the change in cable tension moves the chain-guide from side to side, "derailing" the chain onto different sprockets.
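The gearing arithmetic behind such a system is simple: each combination's ratio is the chainring tooth count divided by the rear sprocket tooth count. A minimal sketch (the tooth counts are illustrative, not taken from any particular groupset):

```python
chainring = 32  # teeth on a single front chainring
cassette = [10, 12, 14, 16, 18, 21, 24, 28, 32, 36, 42, 50]  # a hypothetical 1x12 cassette

for sprocket in cassette:
    ratio = chainring / sprocket  # higher ratio = harder, faster gear
    print(f"{chainring}t x {sprocket}t -> ratio {ratio:.2f}")
```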
Etymology
Dérailleur is a French word, derived from the derailment of a train from its tracks. Its first recorded use was in 1930.
History
Various derailleur systems were designed and built in the late 19th century. One example is the Protean two-speed derailleur available on the Whippet safety bicycle. The French bicycle tourist, writer and cycling promoter Paul de Vivie (1853–1930), who wrote under the name Vélocio, invented a two speed rear derailleur in 1905 which he used on forays into the Alps.
Some early designs used rods to move the chain onto various gears. 1928 saw the introduction of the "Super Champion Gear" (or "Osgear") from the company founded by champion cyclist Oscar Egg, as well as the Vittoria Margherita; both employed chainstay-mounted 'paddles' and single-lever chain tensioners mounted near or on the down tube. However, these systems, along with the rod-operated Campagnolo Cambio Corsa, were eventually superseded by parallelogram derailleurs.
In 1937, the derailleur system was introduced to the Tour de France, allowing riders to change gears without having to remove wheels. Previously, riders would have to dismount in order to change their wheel from downhill to uphill mode. Derailleurs did not become common road racing equipment until 1938 when Simplex introduced a cable-shifted derailleur.
In 1949 Campagnolo introduced the Gran Sport, a more refined version of the already existing, yet less commercially successful, cable-operated parallelogram rear derailleurs.
In 1964, Suntour invented the slant-parallelogram rear derailleur, which let the jockey pulley maintain a more constant distance from the different sized sprockets, resulting in easier shifting. Once the patents expired, other manufacturers adopted this design, at least for their better models, and the "slant parallelogram" remains the current rear derailleur pattern.
Before the 1990s many manufacturers made derailleurs, including Simplex, Huret, Galli, Mavic, Gipiemme, Zeus, Suntour, and Shimano. However, the successful introduction and promotion of indexed shifting by Shimano in 1985 required a compatible system of shift levers, derailleur, sprockets, chainrings, chain, shift cable, and shift housing.
The major innovations since the 1990s have been the switch from friction to indexed shifting and the gradual increase in the number of gears. With friction shifting, a lever directly controls the continuously variable position of the derailleur. To shift gears, the rider first moves the lever enough for the chain to jump to the next sprocket, and then adjusts the lever a slight amount to center the chain on that sprocket. An indexed shifter has a detent or ratchet mechanism which stops the gear lever, and hence the cable and the derailleur, after moving a specific distance with each press or pull. Indexed shifters require re-calibration when cables stretch or parts get damaged or swapped. On racing bicycles, 10-gear rear cassettes appeared in 2000, and 11-gear cassettes appeared in 2009; most current mountain bicycles have one or the other. Many modern high-end mountain bikes have begun using single-chainring drivetrains, with the industry steadily pushing up the number of rear cogs, as shown by SRAM's Eagle groupsets (1 by 12) and Rotor's recent 1 by 13 drive-train. Most road bicycles have two chainrings, and touring bicycles commonly have three.
An electronic gear-shifting system enables riders to shift with electronic switches instead of using conventional control levers. The switches are connected by wire or wirelessly to a battery pack and to a small electric motor that drives the derailleur. Although expensive, an electronic system could save a racing cyclist time when changing gears.
The three main manufacturers of derailleurs are Shimano (Japan), SRAM (USA), and Campagnolo (Italy).
Rear derailleurs
The rear derailleur has two functions: it moves the chain between rear sprockets, and it takes up the chain slack caused by moving to a smaller sprocket at the rear or to a smaller chainring by the front derailleur. To accomplish the second task, it is positioned in the path of the bottom, slack portion of chain. Rear derailleurs are sometimes re-purposed as chain tensioners on single-speed bicycles that cannot adjust chain tension by other means.
Although variations exist, most rear derailleurs have several components in common. They have a cage that holds two pulleys that guide the chain in an S-shaped pattern. The pulleys are known as the jockey pulley or guide pulley (top) and the tension pulley (bottom). The cage rotates in its plane and is spring-loaded to take up chain slack. The cage is positioned under the desired sprocket by an arm that can swing back and forth under the sprockets. The arm is usually implemented with a parallelogram mechanism to keep the cage properly aligned with the chain as it swings back and forth. The other end of the arm mounts to a pivot point attached to the bicycle frame. The arm pivots about this point to maintain the cage at a nearly constant distance from the different sized sprockets. There may be one or more adjustment screws that control the amount of lateral travel allowed and the spring tension.
The components may be constructed of aluminium alloy, steel, plastic, or carbon fibre composite. The pivot points may be bushings or ball bearings. These will require moderate lubrication.
Relaxed position
High normal or top normal rear derailleurs return the chain to the smallest sprocket on the cassette when no cable tension is applied. This is the regular pattern used on most Shimano mountain, all Shimano road, and all SRAM and Campagnolo derailleurs. In this condition, spring pressure takes care of the easier change to smaller sprockets. In road racing, the swiftest gear changes are required on the sprints to the finish line. Therefore high-normal types, which allow a quick change to a higher gear, remain the preference.
Low normal or rapid rise rear derailleurs return the chain to the largest sprocket on the cassette when no cable tension is applied. While this was once a common design for rear derailleurs, it has become relatively uncommon. In mountain biking and off-road cycling, the most critical gear changes occur on uphill sections, where riders must cope with obstacles and difficult turns while pedalling under heavy load. This derailleur type provides an advantage over high normal derailleurs because gear changes to lower gears occur in the direction of the loaded spring, making these shifts easier during high load pedalling.
Cage length
The distance between the upper and lower pulleys of a rear derailleur is known as the cage length. Cage length, combined with pulley size, determines the capacity of a derailleur to take up chain slack. The total capacity required is the size difference between the largest and smallest chainrings plus the size difference between the largest and smallest sprockets on the cogset; a larger sum requires a longer cage (see the worked example after the list below). Typical cross-country mountain bikes with three front chainrings will use a long-cage rear derailleur. A road bike with only two front chainrings and close-ratio sprockets can operate with either a short or long cage derailleur, but will work better with a short cage.
Manufacturer stated derailleur capacities are as follows:
Shimano: long = 45T, medium = 33T
SRAM: long = 43T, medium = 37T, short = 30T
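A minimal sketch of the capacity arithmetic described above; the chainring and sprocket sizes are invented for illustration, and the capacity figures to compare against are the manufacturer values just listed:

```python
def derailleur_capacity_needed(chainrings, sprockets):
    """Total capacity = (largest - smallest chainring)
                      + (largest - smallest sprocket)."""
    return (max(chainrings) - min(chainrings)) + (max(sprockets) - min(sprockets))

# A hypothetical triple-chainring mountain bike setup:
need = derailleur_capacity_needed(chainrings=[22, 32, 44], sprockets=[11, 34])
print(need)  # (44-22) + (34-11) = 45 teeth -> calls for a long-cage derailleur
```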
Benefits of a shorter cage length:
more positive gear-changing due to less flex in the parallelogram
better gear-changing with good cable leverage
better obstruction clearance
less danger of catching spokes
slight weight savings
Cage positioning
There are at least two methods employed by rear derailleurs to maintain the appropriate gap between the upper jockey wheel and the rear sprockets as the derailleur moves between the large sprockets and the small sprockets. One method, used by Shimano, is to use chain tension to pivot the cage. This has the advantage of working with most sets of sprockets, if the chain has the proper length. A disadvantage is that rapid shifts from small sprockets to large over multiple sprockets at once can cause the cage to strike the sprockets before the chain moves onto the larger sprockets and pivots the cage as necessary. Another method, used by SRAM, is to design the spacing into the parallelogram mechanism of the derailleur itself. The advantage is that no amount of rapid, multi-sprocket shifting can cause the cage to strike the sprockets. The disadvantage is that there are limited options for sprocket sizes that can be used with a particular derailleur.
Actuation and shift ratios
The actuation ratio is the ratio between the length of shifter cable pulled and the transverse derailleur travel it produces. The shift ratio is its reciprocal, and is the more convenient figure for derailleurs. Several standards are currently in use; in each, the product of the derailleur's shift ratio and the length of cable pulled must equal the pitch of the rear sprockets (a worked numerical sketch follows the list below). The following standards exist.
The Shimano-compatible family of derailleurs is said to have a shift ratio of two-to-one (2:1); since SRAM makes two families of components, the term has been widely adopted to distinguish it from SRAM's own one-to-one (1:1) family of derailleurs. These family names do not give the exact shift ratios: the 2:1 shift ratio is in fact about 1.7 (or 1.9 on the Dura-Ace series up to 7400), and the native SRAM shift ratio is about 1.1. Some writers reverse these family names by quoting actuation ratios rather than the more common shift ratios. In Shimano systems, then, a unit of cable pulled causes about twice as much movement of the derailleur.
The native SRAM convention is called one-to-one (1:1). These have actual shift ratios of 1.1. A unit of cable retracted at the shifter causes about an equal amount of movement in the derailleur. SRAM claims this standard makes its systems more robust and more resistant to the effects of contamination. Some SRAM shifters are made to be 2:1 Shimano-compatible, but these clearly will not work with SRAM's 1:1 derailleurs.
The Campagnolo convention: shift ratios are 1.5 for modern units, while older units had ratios of 1.4.
The Suntour convention.
Shifters employing one convention are generally not compatible with derailleurs employing another, although exceptions exist, and adaptors are available.
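A rough numerical sketch of the relation stated above (cable pulled multiplied by shift ratio equals sprocket pitch); the pitch value is an assumed figure for illustration, and the ratios are the approximate values quoted in the list:

```python
SPROCKET_PITCH_MM = 3.95   # assumed centre-to-centre sprocket spacing

def cable_pull_per_shift(shift_ratio):
    """Cable the shifter must pull for one gear change: pull * ratio = pitch."""
    return SPROCKET_PITCH_MM / shift_ratio

for name, ratio in [("Shimano '2:1' (actual ~1.7)", 1.7),
                    ("SRAM '1:1' (actual ~1.1)", 1.1)]:
    print(f"{name}: {cable_pull_per_shift(ratio):.2f} mm of cable per shift")

# A higher shift ratio means less cable pulled per shift, which is why
# shifters and derailleurs from different families are not interchangeable.
```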
Clutch
Some rear derailleurs, especially for mountain bikes, incorporate a clutch to keep the lower length of chain in sufficient tension to prevent the chain from striking the bottom of the chain stay: this is called chain slap and can damage the chain stay. Clutches are also helpful in preventing the chain from derailing from the chain ring on systems without a front derailleur.
Front derailleurs
The front derailleur only has to move the chain side to side between the front chainrings, but it has to do this with the top, taut portion of the chain. It also needs to accommodate large differences in chainring size: from as many as 53 teeth to as few as 20 teeth.
As with the rear derailleur, the front derailleur has a cage through which the chain passes. On a properly adjusted derailleur, the chain will only touch the cage while shifting. The cage is held in place by a movable arm which is usually implemented with a parallelogram mechanism to keep the cage properly aligned with the chain as it swings back and forth. There are usually two adjustment screws controlling the limits of lateral travel allowed. The components may be constructed of aluminium alloy, steel, plastic, or carbon fibre composite. The pivot points are usually bushings, and these will require lubrication.
Cable pull types
Bottom pull: Commonly used on road and touring bikes, this type of derailleur is actuated by a cable pulling downwards. The cable is often routed across the top or along the bottom of the bottom bracket shell on a cable guide, which redirects the cable up the lower edge of the frame's down tube. Full-suspension mountain bikes often have bottom pull routing as the rear suspension prevents routing via the top tube.
Top pull: This type is more commonly seen on mountain bikes without rear-suspension. The derailleur is actuated by a cable pulling upwards, which is usually routed along the frame's top tube, using cable stops and a short length of housing to change the cable's direction. This arrangement keeps the cable away from the underside of the bottom bracket/down tube which get pelted with dirt when off-road.
Dual pull: There are some derailleurs available that have provisions for either top pull or bottom pull, and can be used in either application.
Cage types
Double (Standard): These are intended to be used with cranksets having two chainrings. When viewed from the side of the bicycle, the inner and outer plates of the cage have roughly the same profile.
Triple (Alpine): Derailleurs designed to be used with cranksets having three chainrings, or with two chainrings that differ greatly in size. When viewed from the side of the bicycle, the inner cage plate extends further towards the bottom bracket's center of rotation than the outer cage plate does. This is to help shift the chain from the smallest ring onto the middle ring more easily.
Swing types
Bottom swing: The derailleur cage is mounted to the bottom of the four-bar linkage that carries it. This is the most common type of derailleur.
Top swing: The derailleur cage is mounted to the top of the four-bar linkage that carries it. This alternate arrangement was created as a way to get the frame clamp of the derailleur closer to the bottom bracket to be able to clear larger suspension components and allow different frame shapes. The compact construction of a top swing derailleur can cause it to be less robust than its bottom swing counterpart. Top swing derailleurs are typically only used in applications where a bottom swing derailleur will not fit. An alternative solution would be to use an E-type front derailleur, which does not clamp around the seat tube at all.
Mount types
Clamp: Until recently, most front derailleurs were mounted to the frame by a clamp around the frame's seat tube; this style is still the standard on mountain bikes and is common on road bikes. Derailleurs are available with several different clamp diameters designed to fit different types of frame tubing. Recently, there has been a trend to make derailleurs with a single clamp diameter, with several sets of shims included to adapt the clamp down to smaller sizes.
Braze-on: An alternative to the clamp is the braze-on derailleur hanger, where the derailleur is mounted by bolting a tab on the derailleur to a corresponding tab on the frame's seat tube. This avoids any clamp size issues, but requires either a frame with the appropriate braze-on, or an adapter clamp that simulates a braze-on derailleur tab. These have become common on newer road bikes, as carbon frames no longer have a round seat tube. They are rarely seen on mountain bikes.
E-type: These front derailleurs do not clamp around the frame's seat tube, but are instead attached to the frame by a plate mounted under the drive-side bottom bracket cup and a screw threaded into a boss on the seat tube. They are usually found on mountain bikes with rear suspension components that do not leave space for a normal derailleur clamp around the seat tube.
DMD: Direct-Mount-Derailleur — initiated by Specialized Bicycles, this type of derailleur is bolted directly to bosses on the chainstay of the bike. They are mostly used on dual-suspension mountain bikes, where suspension movement changes the chain angle as the chain enters the front derailleur cage. With a DMD system, the chain and derailleur move together, allowing better shifting while the suspension is active. A DMD derailleur should not be confused with Shimano's Direct Mount, which uses a different mounting system. However, SRAM's direct-mount front derailleurs are compatible with DMD, and certain Shimano E-type derailleurs can be used with DMD if the E-type plate is removed.
The chain can shift past the smallest inner chainring, particularly when that chainring is very small, even on bikes adjusted by professional race mechanics. Because such misshifts can cause problems, a small after-market of add-on products, called chain deflectors, exists to help prevent them. Some clamp around the seat tube, below the front derailleur, and at least one attaches to the front derailleur mount.
Use
Derailleurs require the chain to be in movement in order to shift from one ring or sprocket to another. This usually requires the rider to be pedalling, but some systems have been developed with the freewheel in the crankset so that the chain moves even when the rider is not pedalling. The Shimano FFS (Front Freewheel System) circa 1980 was the most widespread such system.
Chain-drive systems such as derailleur systems work best when the chain runs in line with the sprocket plane; in particular, running the biggest drive sprocket with the biggest driven sprocket (or the smallest with the smallest) should be avoided. The diagonal chain run produced by these combinations is less efficient and shortens the life of all components, with no advantage, since the resulting middle-of-range ratio is available with better chain alignment.
Derailleur gears generally have an efficiency around 95%, a few percentage points higher than other gear types.
| Technology | Human-powered transport | null |
195729 | https://en.wikipedia.org/wiki/Period%203%20element | Period 3 element | A period 3 element is one of the chemical elements in the third row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behavior of the elements as their atomic number increases: a new row is begun when chemical behavior begins to repeat, meaning that elements with similar behavior fall into the same vertical columns. The third period contains eight elements: sodium, magnesium, aluminium, silicon, phosphorus, sulfur, chlorine and argon. The first two, sodium and magnesium, are members of the s-block of the periodic table, while the others are members of the p-block. All of the period 3 elements occur in nature and have at least one stable isotope.
Atomic structure
In a quantum mechanical description of atomic structure, this period corresponds to the buildup of electrons in the third (n = 3) shell, more specifically the filling of its 3s and 3p subshells. There is a 3d subshell, but—in compliance with the Aufbau principle—it is not filled until period 4. This makes all eight elements analogs of the period 2 elements in exactly the same sequence. The octet rule generally applies to period 3 in the same way as to period 2 elements, because the 3d subshell normally does not participate in bonding.
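As a small illustration of the subshell filling just described, the following toy sketch derives the valence configurations of the eight period 3 elements; the script itself is invented for the example, while the filling order and subshell capacities are the standard ones:

```python
# Build the period 3 valence configurations: the 3s subshell fills first
# (capacity 2), then 3p (capacity 6), per the Aufbau principle.
elements = ["Na", "Mg", "Al", "Si", "P", "S", "Cl", "Ar"]

for n, symbol in enumerate(elements, start=1):  # n = valence electrons
    s = min(n, 2)          # electrons in 3s
    p = n - s              # remaining electrons go into 3p
    config = f"[Ne] 3s{s}" + (f" 3p{p}" if p else "")
    print(f"{symbol:2s}: {config}")

# Na: [Ne] 3s1 ... Ar: [Ne] 3s2 3p6 (a complete octet)
```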
Elements
Sodium
Sodium (symbol Na) is a soft, silvery-white, highly reactive metal and is a member of the alkali metals; its only stable isotope is 23Na. It is an abundant element that exists in numerous minerals such as feldspars, sodalite and rock salt. Many salts of sodium are highly soluble in water and are thus present in significant quantities in the Earth's bodies of water, most abundantly in the oceans as sodium chloride.
Many sodium compounds are useful, such as sodium hydroxide (lye) for soapmaking, and sodium chloride for use as a deicing agent and a nutrient. The same ion is also a component of many minerals, such as sodium nitrate.
The free metal, elemental sodium, does not occur in nature but must be prepared from sodium compounds. Elemental sodium was first isolated by Humphry Davy in 1807 by the electrolysis of sodium hydroxide.
Magnesium
Magnesium (symbol Mg) is an alkaline earth metal and has common oxidation number +2. It is the eighth most abundant element in the Earth's crust and the ninth in the known universe as a whole. Magnesium is the fourth most common element in the Earth as a whole (behind iron, oxygen and silicon), making up 13% of the planet's mass and a large fraction of the planet's mantle. It is relatively abundant because it is easily built up in supernova stars by sequential additions of three helium nuclei to carbon (which in turn is made from three helium nuclei). Due to the magnesium ion's high solubility in water, it is the third most abundant element dissolved in seawater.
The free element (metal) is not found naturally on Earth, as it is highly reactive (though once produced, it is coated in a thin layer of oxide [see passivation], which partly masks this reactivity). The free metal burns with a characteristic brilliant white light, making it a useful ingredient in flares. The metal is now mainly obtained by electrolysis of magnesium salts obtained from brine. Commercially, the chief use for the metal is as an alloying agent to make aluminium-magnesium alloys, sometimes called "magnalium" or "magnelium". Since magnesium is less dense than aluminium, these alloys are prized for their relative lightness and strength.
Magnesium ions are sour to the taste, and in low concentrations help to impart a natural tartness to fresh mineral waters.
Aluminium
Aluminium (symbol Al) or aluminum (American English) is a silvery white member of the boron group of chemical elements and a p-block metal classified by some chemists as a post-transition metal. It is not soluble in water under normal circumstances. Aluminium is the third most abundant element (after oxygen and silicon), and the most abundant metal, in the Earth's crust. It makes up about 8% by weight of the Earth's solid surface. Aluminium metal is too reactive chemically to occur natively. Instead, it is found combined in over 270 different minerals. The chief ore of aluminium is bauxite.
Aluminium is remarkable for the metal's low density and for its ability to resist corrosion due to the phenomenon of passivation. Structural components made from aluminium and its alloys are vital to the aerospace industry and are important in other areas of transportation and structural materials. The most useful compounds of aluminium, at least on a weight basis, are the oxides and sulfates.
Silicon
Silicon (symbol Si) is a group 14 metalloid. It is less reactive than its chemical analog carbon, the nonmetal directly above it in the periodic table, but more reactive than germanium, the metalloid directly below it in the table. Controversy about silicon's character dates from its discovery: silicon was first prepared and characterized in pure form in 1824, and given the name silicium (from Latin silex, 'flint'), with an -ium word-ending to suggest a metal. However, its final name, suggested in 1831, reflects the more chemically similar elements carbon and boron.
Silicon is the eighth most common element in the universe by mass, but very rarely occurs as the pure free element in nature. It is most widely distributed in dusts, sands, planetoids and planets as various forms of silicon dioxide (silica) or silicates. Over 90% of the Earth's crust is composed of silicate minerals, making silicon the second most abundant element in the Earth's crust (about 28% by mass) after oxygen.
Most silicon is used commercially without being separated, and indeed often with little processing of compounds from nature. These include direct industrial building use of clays, silica sand and stone. Silica is used in ceramic brick. Silicate goes into Portland cement for mortar and stucco, and combined with silica sand and gravel, to make concrete. Silicates are also in whiteware ceramics such as porcelain, and in traditional quartz-based soda–lime glass. More modern silicon compounds such as silicon carbide form abrasives and high-strength ceramics. Silicon is the basis of the ubiquitous synthetic silicon-based polymers called silicones.
Elemental silicon also has a large impact on the modern world economy. Although most free silicon is used in the steel refining, aluminum-casting, and fine chemical industries (often to make fumed silica), the relatively small portion of very highly purified silicon that is used in semiconductor electronics (< 10%) is perhaps even more critical. Because of wide use of silicon in integrated circuits, the basis of most computers, a great deal of modern technology depends on it.
Phosphorus
Phosphorus (symbol P) is a multivalent nonmetal of the nitrogen group. As a mineral, phosphorus is almost always present in its maximally oxidized (pentavalent) state, as inorganic phosphate rocks. Elemental phosphorus exists in two major forms—white phosphorus and red phosphorus—but due to its high reactivity, phosphorus is never found as a free element on Earth.
The first form of elemental phosphorus to be produced (white phosphorus, in 1669) emits a faint glow upon exposure to oxygen – hence its name given from Greek mythology, meaning "light-bearer" (Latin: Lucifer), referring to the "Morning Star", the planet Venus. Although the term "phosphorescence", meaning glow after illumination, derives from this property of phosphorus, the glow of phosphorus originates from oxidation of the white (but not red) phosphorus and should be called chemiluminescence. It is also the lightest element to easily produce stable exceptions to the octet rule.
The vast majority of phosphorus compounds are consumed as fertilizers. Other applications include the role of organophosphorus compounds in detergents, pesticides and nerve agents and matches.
Sulfur
Sulfur (symbol S) is an abundant multivalent nonmetal, one of the chalcogens. Under normal conditions, sulfur atoms form cyclic octatomic molecules with chemical formula S8. Elemental sulfur is a bright yellow crystalline solid at room temperature. Chemically, sulfur can react as either an oxidant or a reducing agent. It oxidizes most metals and several nonmetals, including carbon, which leads to its negative charge in most organosulfur compounds, but it reduces several strong oxidants, such as oxygen and fluorine.
In nature, sulfur can be found as the pure element and as sulfide and sulfate minerals. Elemental sulfur crystals are commonly sought after by mineral collectors for their brightly colored polyhedron shapes. Being abundant in native form, sulfur was known in ancient times, mentioned for its uses in ancient Greece, China and Egypt. Sulfur fumes were used as fumigants, and sulfur-containing medicinal mixtures were used as balms and antiparasitics. Sulfur is referenced in the Bible as brimstone in English, with this name still used in several nonscientific terms. Sulfur was considered important enough to receive its own alchemical symbol. It was needed to make the best quality of black gunpowder, and the bright yellow powder was hypothesized by alchemists to contain some of the properties of gold, which they sought to synthesize from it. In 1777, Antoine Lavoisier helped convince the scientific community that sulfur was a basic element, rather than a compound.
Elemental sulfur was once extracted from salt domes, where it sometimes occurs in nearly pure form, but this method has been obsolete since the late 20th century. Today, almost all elemental sulfur is produced as a byproduct of removing sulfur-containing contaminants from natural gas and petroleum. The element's commercial uses are primarily in fertilizers, because of the relatively high requirement of plants for it, and in the manufacture of sulfuric acid, a primary industrial chemical. Other well-known uses for the element are in matches, insecticides and fungicides. Many sulfur compounds are odiferous, and the smell of odorized natural gas, skunk scent, grapefruit, and garlic is due to sulfur compounds. Hydrogen sulfide produced by living organisms imparts the characteristic odor to rotting eggs and other biological processes.
Chlorine
Chlorine (symbol Cl) is the second-lightest halogen. The element forms diatomic molecules under standard conditions, called dichlorine. It has the highest electron affinity and the third-highest electronegativity of all the elements; chlorine is thus a strong oxidizing agent.
The most common compound of chlorine, sodium chloride (table salt), has been known since ancient times; however, around 1630, chlorine gas was obtained by the Belgian chemist and physician Jan Baptist van Helmont. Elemental chlorine was first synthesized and characterized in 1774 by Swedish chemist Carl Wilhelm Scheele, who called it "dephlogisticated muriatic acid air", thinking he had synthesized the oxide of an element obtained from hydrochloric acid, because acids were thought at the time to necessarily contain oxygen. A number of chemists, including Claude Berthollet, suggested that Scheele's "dephlogisticated muriatic acid air" must be a combination of oxygen and a yet undiscovered element, and Scheele named the supposed new element within this oxide muriaticum. The suggestion that this newly discovered gas was a simple element was made in 1809 by Joseph Louis Gay-Lussac and Louis-Jacques Thénard. This was confirmed in 1810 by Sir Humphry Davy, who named it chlorine, from the Greek word χλωρός (chlōros), meaning "green-yellow".
Chlorine is a component of many other compounds. It is the second most abundant halogen and the 21st most abundant element in Earth's crust. The great oxidizing power of chlorine led to its bleaching and disinfectant uses, as well as its use as an essential reagent in the chemical industry. As a common disinfectant, chlorine compounds are used in swimming pools to keep them clean and sanitary. In the upper atmosphere, chlorine-containing molecules such as chlorofluorocarbons have been implicated in ozone depletion.
Argon
Argon (symbol Ar) is the third element in group 18, the noble gases. Argon is the third most common gas in the Earth's atmosphere, at 0.93%, making it more common than carbon dioxide. Nearly all of this argon is radiogenic argon-40 derived from the decay of potassium-40 in the Earth's crust. In the universe, argon-36 is by far the most common argon isotope, being the preferred argon isotope produced by stellar nucleosynthesis.
The name "argon" is derived from the Greek neuter adjective ἀργόν, meaning "lazy" or "the inactive one", as the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990.
Argon is produced industrially by the fractional distillation of liquid air. Argon is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily non-reactive substances become reactive: for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. Argon gas also has uses in incandescent and fluorescent lighting, and other types of gas discharge tubes. Argon is also used to make a distinctive blue-green gas laser.
Biological roles
Sodium is an essential element for all animals and some plants. In animals, sodium ions are used against potassium ions to build up charges on cell membranes, allowing transmission of nerve impulses when the charge is dissipated; it is therefore classified as a dietary inorganic macromineral.
Magnesium is the eleventh most abundant element by mass in the human body; its ions are essential to all living cells, where they play a major role in manipulating important biological polyphosphate compounds like ATP, DNA, and RNA. Hundreds of enzymes thus require magnesium ions to function. Magnesium is also the metallic ion at the center of chlorophyll, and is thus a common additive to fertilizers. Magnesium compounds are used medicinally as common laxatives, antacids (e.g., milk of magnesia), and in a number of situations where stabilization of abnormal nerve excitation and blood vessel spasm is required (e.g., to treat eclampsia).
Despite its prevalence in the environment, aluminium salts are not known to be used by any form of life. In keeping with its pervasiveness, it is well tolerated by plants and animals. Because of their prevalence, potential beneficial (or otherwise) biological roles of aluminium compounds are of continuing interest.
Silicon is an essential element in biology, although animals appear to require only tiny traces of it; various sea sponges, however, need silicon to build their structures. It is much more important to the metabolism of plants, particularly many grasses, and silicic acid (a form of silica) forms the basis of the striking array of protective shells of the microscopic diatoms.
Phosphorus is essential for life. As phosphate, it is a component of DNA, RNA, ATP, and also the phospholipids that form all cell membranes. Demonstrating the link between phosphorus and life, elemental phosphorus was historically first isolated from human urine, and bone ash was an important early phosphate source. Phosphate minerals are fossils. Low phosphate levels are an important limit to growth in some aquatic systems. Today, the most important commercial use of phosphorus-based chemicals is the production of fertilizers, to replace the phosphorus that plants remove from the soil.
Sulfur is an essential element for all life, and is widely used in biochemical processes. In metabolic reactions, sulfur compounds serve as both fuels and respiratory (oxygen-replacing) materials for simple organisms. Sulfur in organic form is present in the vitamins biotin and thiamine, the latter being named for the Greek word for sulfur. Sulfur is an important part of many enzymes and in antioxidant molecules like glutathione and thioredoxin. Organically bonded sulfur is a component of all proteins, as the amino acids cysteine and methionine. Disulfide bonds are largely responsible for the mechanical strength and insolubility of the protein keratin, found in outer skin, hair, and feathers, and the element contributes to their pungent odor when burned.
Elemental chlorine is extremely dangerous and poisonous for all lifeforms, and is used as a pulmonary agent in chemical warfare; however, chlorine is necessary to most forms of life, including humans, in the form of chloride ions.
Argon has no biological role. Like any gas besides oxygen, argon is an asphyxiant.
Table of elements
| Physical sciences | Periods | Chemistry |
195734 | https://en.wikipedia.org/wiki/Pecan | Pecan | The pecan ( , , ; Carya illinoinensis) is a species of hickory native to the Southern United States and northern Mexico in the region of the Mississippi River.
The tree is cultivated for its seed primarily in the U.S. states of Georgia, New Mexico, and Texas, and in Mexico. The seed is an edible nut used as a snack and in various recipes, such as praline candy and pecan pie. The pecan is the state nut of Alabama, Arkansas, California, Texas, and Louisiana, and is also the state tree of Texas.
Name
The word pecan derives from an Algonquian word variously referring to pecans, walnuts, and hickory nuts. There are many pronunciations, some regional and others not. There is little agreement in the United States regarding the "correct" pronunciation, even regionally.
In 1927, the National Pecan Growers Association acknowledged variant pronunciations while designating one as official and correct: "pronounced as though spelled pea-con ... those in the habit of using any other pronunciation therefore be requested henceforth to adopt exclusively the pronunciation above specified and hereby adopted by the Association." (Proceedings, 27th National Convention, National Pecan Growers Association, 27–29 September 1927, Shreveport, LA, p. 153; also cited in Chattanooga Daily Times, 30 September 1927, p. 7, "Pecan Growers Vote Nut's Pronunciation".)
Description
The pecan tree is a large deciduous tree, growing to in height, rarely to . It typically has a spread of with a trunk up to diameter. A 10-year-old sapling grown in optimal conditions will stand about tall. The leaves are alternate, long, and pinnate with 9–17 leaflets, each leaflet long and broad.
A pecan, like the fruit of all other members of the hickory genus, is not truly a nut, but is technically a drupe, a fruit with a single stone or pit, surrounded by a husk. The husks are produced from the exocarp tissue of the flower, while the part known as the nut develops from the endocarp and contains the seed. The husk itself is aeneous, that is, brassy greenish-gold in color, oval to oblong in shape, long, and broad. The outer husk is thick, starts out green, and turns brown at maturity, at which time it splits off in four sections to release the thin-shelled seed.Collingwood, G. H., Brush, W. D., & Butches, D., eds. (1964). Knowing your trees. 2nd ed. American Forestry Association, Washington, DC.
Taxonomy
Carya illinoinensis is a member of the family Juglandaceae. Juglandaceae are represented worldwide by between seven and ten extant genera and more than 60 species. Most of these species are concentrated in the Northern Hemisphere of the New World, but some can be found on every continent except Antarctica.
Phylogeny
The first fossil examples of Juglandaceae appear during the Cretaceous. Differentiation between the subfamilies of Engelhardioideae and Juglandioideae occurred during the early Paleogene, about 64 million years ago. Extant examples of Engelhardioideae are generally tropical and evergreen, while those of Juglandioideae are deciduous and found in more temperate zones.
The second major step in the development of pecan was a change from wind-dispersed fruits to animal dispersion. This dispersal strategy coincides with developing a husk around the fruit and a drastic change in the relative concentrations of fatty acids. The ratio of oleic to linoleic acids is inverted between wind- and animal-dispersed seeds. Further differentiation from other species of Juglandaceae occurred about 44 million years ago during the Eocene. The fruits of the pecan genus Carya differ from those of the walnut genus Juglans only in the formation of the husk of the fruit. The husks of walnuts develop from the bracts, bracteoles and sepals, or sepals only. The husks of pecans develop from the bracts and the bracteoles only.
Cultivation
Pecans are one of the most recently domesticated of the major crops. Although wild pecans were well known among native and colonial Americans as a delicacy, the commercial growth of pecans in the United States did not begin until the 1880s. As of 2014, the United States produced an annual crop of , with 75% of the total crop produced in Georgia, New Mexico, and Texas. They can be grown from USDA hardiness zones approximately 5 to 9, and grow best where summers are long, hot and humid. The nut harvest for growers is typically around mid-October.
In 2017, outside the U.S., Mexico produced nearly half of the world's total, similar in volume to that of the U.S., together accounting for 93% of global production. As of 2024, South Africa is the third largest producer, mostly exporting to China. Pecan trees require large quantities of water during the growing season, and most orchards in the region use flood irrigation to optimize consumptive water use and production of mature pecans. Generally, two or more trees of different cultivars must be present to pollinate each other.
Choosing cultivars can be a complex practice, based on the Alternate Bearing Index (ABI) and their period of pollinating. Commercial growers are most concerned with the ABI, which describes a cultivar's likelihood to bear on alternating years (index of 1.0 signifies the highest likelihood of bearing little to nothing every other year). The period of pollination groups all cultivars into two families: those that shed pollen before they can receive pollen (protandrous) and those that shed pollen after becoming receptive to pollen (protogynous). State-level resources provide recommended varieties for specific regions.
Native pecans in Mexico are adapted from zones 9 to 11. Little or no breeding work has been done with these populations. A few selections from native stands have been made, such as Frutosa and Norteña, which are recommended for cultivation in Mexico. Improved varieties recommended for cultivation in Mexico are USDA-developed cultivars. This represents a gap in breeding development given that native pecans can be cultivated at least down to the Yucatán peninsula while the USDA cultivars have chilling hour requirements greater than those occurring in much of the region. Some regions of the U.S. such as parts of Florida and Puerto Rico are zone 10 or higher, and these regions have limited options for pecan cultivation. 'Western' is the only commonly available variety that can make a crop in low-chill zones.
Breeding and selection programs
Active breeding and selection is carried out by the USDA Agricultural Research Service with growing locations at Brownwood and College Station, Texas. University of Georgia has a breeding program at the Tifton campus working on selecting pecan varieties adapted to subtropical Southeastern U.S. growing conditions.
While selection work has been done since the late 19th century, most acreage of pecans grown today is of older cultivars, such as 'Stuart', 'Schley', 'Elliott', and 'Desirable', with known flaws, but also with known production potential. Cultivars such as 'Elliott' are increasing in popularity due to resistance to pecan scab. The long cycle time for pecan trees plus financial considerations dictate that new varieties go through an extensive vetting process before being widely planted. Numerous varieties produce well in Texas, but fail in the Southeastern U.S. due to increased disease pressure. Selection programs are ongoing at the state level, with Alabama, Arkansas, Florida, Georgia, Kansas, Missouri, New Mexico, and others having trial plantings.
Varieties adapted from the southern tier of states north through some parts of Iowa and even into southern Canada are available from nurseries. Production potential drops significantly when planted further north than Tennessee. Most breeding efforts for northern-adapted varieties have not been on a large enough scale to significantly affect production. Varieties that are available and adapted (e.g., 'Major', 'Martzahn', 'Witte', 'Greenriver', 'Mullahy', and 'Posey') in zones 6 and farther north are almost entirely selections from wild stands. 'Kanza', a northern-adapted release from the USDA breeding program, is a grafted pecan having high productivity and quality, and cold tolerance.
Diseases, pests, and disorders
Pecans are subject to various diseases, pests, and physiological disorders that can limit tree growth and fruit production. These range from scab to hickory shuckworm to shuck decline.
Pecans are prone to infection by bacteria and fungi such as pecan scab, especially in humid conditions. Scab is the most destructive disease affecting pecan trees untreated with fungicides. Recommendations for preventive spray materials and schedules are available from state-level resources.
Various insects feed on the leaves, stems, and developing nuts. These include ambrosia beetles, twig girdlers, pecan nut casebearer, hickory shuckworm, phylloxera, curculio, weevils, and several aphid species.
In the Southeastern U.S., nickel deficiency in C. illinoinensis produces a disorder called "mouse-ear" in trees fertilized with urea. Similarly, zinc deficiency causes rosetting of the leaves. Various other disorders are documented, including canker disease and shuck decline complex.
Uses
Pecan seeds are edible, with a rich, buttery flavor. They can be eaten fresh or roasted, or used in cooking, particularly in sweet desserts, such as pecan pie, a traditional Southern U.S. dish. Butter pecan is also a common flavor in cookies, cakes, and ice creams. Pecans are a significant ingredient in American praline candy. Other applications of cooking with pecans include pecan oil and pecan butter.
Pecan wood is used in making furniture and wood flooring, as well as flavoring fuel for smoking meats, giving grilled foods a sweet and nutty flavor stronger than many fruit woods.
Nutrition
A pecan nut is 4% water, 72% fat, 9% protein, and 14% carbohydrates. In a 100 g reference amount, pecans provide 690 calories and are a rich source (20% or more of the Daily Value, DV) of dietary fiber (38% DV), manganese (214% DV), magnesium (34% DV), phosphorus (40% DV), zinc (48% DV), and thiamine (57% DV). Pecans are a moderate source (10–19% DV) of iron and B vitamins. Pecan fat content consists principally of monounsaturated fatty acids, mainly oleic acid (57% of total fat), and the polyunsaturated fatty acid, linoleic acid (30% of total fat).
History
Before European settlement, pecans were widely consumed and traded by Native Americans. As a wild forage, the fruit of the previous growing season is commonly still edible when found on the ground. Native American tribes collected the fruit to make a flour used as a meat substitute and a milky fermented drink called "Pow-cohicora", while the bark and leaves were made into a tea to treat ailments such as tuberculosis.
Pecans first became known to Europeans in the 16th century. The first Europeans to come into contact with pecans were Spanish explorers in what is now Louisiana, Texas, and Mexico. These Spanish explorers called the pecan, nuez de la arruga, which roughly translates to "wrinkle nut". Because of their familiarity with the genus Juglans, these early explorers referred to the nuts as nogales and nueces, the Spanish terms for "walnut trees" and "fruit of the walnut". They noted the particularly thin shell and acorn-like shape of the fruit, indicating they were referring to pecans. The Spaniards took the pecan into Europe, Asia, and Africa in the 16th century.
In 1792, William Bartram reported in his botanical book, Travels, a nut tree, Juglans exalata, that some botanists today argue was the American pecan tree, though others argue it was a hickory, Carya ovata. Pecan trees are native to the United States, and writing about the pecan tree goes back to the nation's founders. Thomas Jefferson planted pecan trees, C. illinoinensis (Illinois nuts), in his nut orchard at his home, Monticello, in Virginia. George Washington reported in his journal that Thomas Jefferson gave him "Illinois nuts", pecans, which Washington then grew at Mount Vernon, his Virginia home.
Commercial production of pecans was slow because trees were slow to mature and bear fruit. More importantly, the trees grown from the nuts of one tree have very diverse characters. To speed nut production and retain the best tree characteristics, grafting from mature, productive trees was the apparent strategy. However, this proved technically challenging. The Centennial cultivar was the first to be successfully grafted. This was accomplished by an enslaved person called Antoine in 1846 or 1847, who was owned by Jacques Telesphore Roman of the Oak Alley Plantation near the Mississippi River. The scions were supplied by Dr. A. E. Colomb, who had unsuccessfully attempted to graft them.
Genetics
Pecan is a 32-chromosome species (1N = 16) that readily hybridizes with other 32-chromosome members of the Carya genus, such as Carya ovata, Carya laciniosa, Carya cordiformis and has been reported to hybridize with 64-chromosome species such as Carya tomentosa. Most such hybrids are unproductive. Hybrids are referred to as "hicans" to indicate their hybrid origin. Recent efforts at NMSU to complete a pecan genome showed that DNA introgressed from C. aquatica (water hickory), C. myristiciformis (nutmeg hickory), and C. cordiformis (bitternut hickory) is present in commercial pecan varieties grown today.
In culture
In 1919, the 36th Texas Legislature made the pecan tree the state tree of Texas; in 2001, the pecan was declared the state's official "health nut", and in 2013, pecan pie was made the state's official pie. The town of San Saba, Texas claims to be "The Pecan Capital of the World" and is the site of the "Mother Tree", considered to be the source of the state's production through its progeny.
Alabama named the pecan the official state nut in 1982. Arkansas adopted it as the official nut in 2009. California adopted it, along with the almond, pistachio, and walnut, as one of four state nuts in 2017. Louisiana, known for pralines, adopted the pecan as its official state nut in 2023. In 1988, Oklahoma enacted an official state meal which included pecan pie.
| Biology and health sciences | Fagales | null |
195752 | https://en.wikipedia.org/wiki/Flash%20%28photography%29 | Flash (photography) | A flash is a device used in photography that produces a brief burst of light (lasting around of a second) at a color temperature of about 5500 K to help illuminate a scene. The main purpose of a flash is to illuminate a dark scene. Other uses are capturing quickly moving objects or changing the quality of light. Flash refers either to the flash of light itself or to the electronic flash unit discharging the light. Most current flash units are electronic, having evolved from single-use flashbulbs and flammable powders. Modern cameras often activate flash units automatically.
Flash units are commonly built directly into a camera. Some cameras allow separate flash units to be mounted via a standardized accessory mount bracket (a hot shoe). In professional studio equipment, flashes may be large, standalone units, or studio strobes, powered by special battery packs or connected to mains power. They are either synchronized with the camera using a flash synchronization cable or radio signal, or are light-triggered, meaning that only one flash unit needs to be synchronized with the camera, and in turn triggers the other units, called slaves.
Types
Flash-lamp/Flash powder
Studies of magnesium by Bunsen and Roscoe in 1859 showed that burning this metal produced a light with similar qualities to daylight. The potential application to photography inspired Edward Sonstadt to investigate methods of manufacturing magnesium so that it would burn reliably for this use. He applied for patents in 1862 and by 1864 had started the Manchester Magnesium Company with Edward Mellor. With the help of engineer William Mather, who was also a director of the company, they produced flat magnesium ribbon, which was said to burn more consistently and completely, giving better illumination than round wire. It also had the benefit of being a simpler and cheaper process than making round wire. Mather was also credited with the invention of a holder for the ribbon, which formed a lamp to burn it in. A variety of magnesium ribbon holders were produced by other manufacturers, such as the Pistol Flashmeter, which incorporated an inscribed ruler that allowed the photographer to use the correct length of ribbon for the exposure they needed. The packaging also implies that the magnesium ribbon was not necessarily broken off before being ignited.
An alternative to magnesium ribbon was flash powder, a mixture of magnesium powder and potassium chlorate introduced by its German inventors Adolf Miethe and Johannes Gaedicke in 1887. A measured amount was put into a pan or trough and ignited by hand, producing a brief brilliant flash of light, along with the smoke and noise that might be expected from such an explosive event. This could be a life-threatening activity, especially if the flash powder was damp. An electrically triggered flash lamp was invented by Joshua Lionel Cowen in 1899. His patent describes a device for igniting photographers' flash powder by using dry cell batteries to heat a wire fuse. Variations and alternatives were touted from time to time and a few found a measure of success, especially for amateur use. In 1905, one French photographer was using intense non-explosive flashes produced by a special mechanized carbon arc lamp to photograph subjects in his studio, but more portable and less expensive devices prevailed. On through the 1920s, flash photography normally meant a professional photographer sprinkling powder into the trough of a T-shaped flash lamp, holding it aloft, then triggering a brief and (usually) harmless bit of pyrotechnics.
Flashbulbs
The use of flash powder in an open lamp was replaced by flashbulbs; magnesium filaments were contained in bulbs filled with oxygen gas, and electrically ignited by a contact in the camera shutter. Manufactured flashbulbs were first produced commercially in Germany in 1929. Such a bulb could only be used once, and was too hot to handle immediately after use, but the confinement of what would otherwise have amounted to a small explosion was an important advance. A later innovation was the coating of flashbulbs with a plastic film to maintain bulb integrity in the event of the glass shattering during the flash. A blue plastic film was introduced as an option to match the spectral quality of the flash to daylight-balanced colour film. Subsequently, the magnesium was replaced by zirconium, which produced a brighter flash.
There was a significant delay after ignition for a flashbulb to reach full brightness, and the bulb burned for a relatively long time, compared to shutter speeds required to stop motion and not display camera shake. Slower shutter speeds (typically from to of a second) were initially used on cameras to ensure proper synchronization and to make use of all the bulb's light output. Cameras with flash sync triggered the flashbulb a fraction of a second before opening the shutter to allow it to reach full brightness, allowing faster shutter speeds. A flashbulb widely used during the 1960s was the Press 25, the flashbulb often used by newspapermen in period movies, usually attached to a press camera or a twin-lens reflex camera. Its peak light output was around a million lumens. Other flashbulbs in common use were the M-series, M-2, M-3 etc., which had a small ("miniature") metal bayonet base fused to the glass bulb. The largest flashbulb ever produced was the GE Mazda No. 75, being over eight inches long with a girth of 4 inches, initially developed for nighttime aerial photography during World War II.
The all-glass PF1 bulb was introduced in 1954. Eliminating the metal base and the multiple manufacturing steps needed to attach it to the glass bulb cut the cost substantially compared to the larger M series bulbs. The design required a fibre ring around the base to hold the contact wires against the side of the glass base. An adapter was available allowing the bulb to fit into flash guns made for bayonet-capped bulbs. The PF1 (along with the M2) had a faster ignition time (less delay between shutter contact and peak output), so it could be used with X synch below of a second—while most bulbs require a shutter speed of on X synch to keep the shutter open long enough for the bulb to ignite and burn. A smaller version which was not as bright but did not require the fibre ring, the AG-1, was introduced in 1958; it was cheaper, and rapidly supplanted the PF1.
Flashcubes, Magicubes and Flipflash
In 1965 Eastman Kodak of Rochester, New York replaced the individual flashbulb technology used on early Instamatic cameras with the Flashcube developed by Sylvania Electric Products.
A flashcube was a module with four expendable flashbulbs, each mounted at 90° from the others in its own reflector. For use it was mounted atop the camera with an electrical connection to the shutter release and a battery inside the camera. After each flash exposure, the film advance mechanism also rotated the flashcube 90° to a fresh bulb. This arrangement allowed the user to take four images in rapid succession before inserting a new flashcube.
The later Magicube (or X-Cube) by General Electric retained the four-bulb format, but did not require electrical power. It was not interchangeable with the original Flashcube. Each bulb in a Magicube was set off by releasing one of four cocked wire springs within the cube. The spring struck a primer tube at the base of the bulb, which contained a fulminate, which in turn ignited shredded zirconium foil in the flash. A Magicube could also be fired using a key or paper clip to trip the spring manually. X-cube was an alternate name for Magicubes, indicating the appearance of the camera's socket.
Other common flashbulb-based devices were the Flashbar and Flipflash, which provided ten flashes from a single unit. The bulbs in a Flipflash were set in a vertical array, putting a distance between the bulb and the lens, eliminating red eye. The Flipflash name derived from the fact that once half the flashbulbs had been used, the unit had to be flipped over and re-inserted to use the remaining bulbs. In many Flipflash cameras, the bulbs were ignited by electrical currents produced when a piezoelectric crystal was struck mechanically by a spring-loaded striker, which was cocked each time the film was advanced.
Electronic flash
The electronic flash tube was introduced by Harold Eugene Edgerton in 1931. The electronic flash reaches full brightness almost instantaneously, and is of very short duration. Edgerton took advantage of the short duration to make several iconic photographs, such as one of a bullet bursting through an apple. The large photographic company Kodak was initially reluctant to take up the idea. Electronic flash, often called "strobe" in the US following Edgerton's use of the technique for stroboscopy, came into some use in the late 1950s, although flashbulbs remained dominant in amateur photography until the mid 1970s. Early units were expensive, and often large and heavy; the power unit was separate from the flash head and was powered by a large lead-acid battery carried with a shoulder strap. Towards the end of the 1960s electronic flashguns of similar size to conventional bulb guns became available; the price, although it had dropped, was still high. The electronic flash system eventually superseded bulb guns as prices came down. By the early 1970s, amateur electronic flashes were available for less than $100.
A typical electronic flash unit has electronic circuitry to charge a high-capacitance capacitor to several hundred volts. When the flash is triggered by the shutter's flash synchronization contact, the capacitor is discharged rapidly through a reusable flash tube, producing a flash that typically lasts less than 1/1000 of a second, shorter than the shutter speeds used, and reaches full brightness before the shutter has started to close. This allows easy synchronization of maximum shutter opening with full flash brightness, unlike flashbulbs, which were slower to reach full brightness and burned for a longer time, typically on the order of 1/30 of a second.
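As a rough, illustrative aside (not from the source): the energy available to the flash tube follows the standard capacitor energy formula E = ½CV². A minimal Python sketch, with assumed typical-order values for the capacitance and voltage:

# Energy stored in a flash capacitor: E = 1/2 * C * V^2.
# Capacitance and voltage are assumed illustrative values.
capacitance_farads = 1000e-6   # 1000 microfarads
voltage_volts = 330.0          # "several hundred volts"

energy_joules = 0.5 * capacitance_farads * voltage_volts ** 2
print(f"Stored energy: {energy_joules:.1f} J")  # about 54.5 J

# Released in under 1/1000 s, the mean power is tens of kilowatts:
print(f"Mean power: {energy_joules / (1 / 1000) / 1000:.0f} kW")

Dumping tens of joules in around a millisecond is what makes the flash both brief and intense.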
A single electronic flash unit is often mounted on a camera's accessory shoe or a bracket; many inexpensive cameras have an electronic flash unit built in. For more sophisticated and longer-range lighting several synchronised flash units at different positions may be used.
Ring flashes that fit to a camera's lens can be used for shadow free portrait and macro photography; some lenses have built-in ring-flash.
In a photographic studio, more powerful and flexible studio flash systems are used. They usually contain a modelling light, a lamp close to the flash tube; the continuous illumination of the modelling light lets the photographer visualize the effect of the flash. In new designs, LED lamps are replacing the incandescent bulbs previously used for modelling lights; because modelling lights are typically varied in proportion to flash power, this requires dimmable LEDs and suitable circuitry in the head. Multiple flashes may be synchronised for multi-source lighting.
The strength of a flash device is often indicated in terms of a guide number designed to simplify exposure setting. The energy released by larger studio flash units, such as monolights, is indicated in watt-seconds.
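A guide number condenses the inverse-square law into a single figure: guide number = subject distance × f-number at a stated ISO. A minimal sketch (the guide number and distance below are assumed example values, not figures from this article):

# Guide number arithmetic: GN = distance * f-number at a stated ISO.
guide_number_m = 32.0  # metres at ISO 100 (assumed example value)
distance_m = 4.0       # assumed subject distance

f_number = guide_number_m / distance_m
print(f"Aperture for correct exposure: f/{f_number:.0f}")  # f/8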
Canon names its electronic flash units Speedlite, and Nikon uses Speedlight; these terms are frequently used as generic terms for electronic flash units designed to be mounted on, and triggered by, a camera hot shoe.
High speed flash
An air-gap flash is a high-voltage device that discharges a flash of light with an exceptionally short duration, often much less than one microsecond. These are commonly used by scientists or engineers for examining extremely fast-moving objects or reactions, famous for producing images of bullets tearing through light bulbs and balloons (see Harold Eugene Edgerton). An example of a process by which to create a high speed flash is the exploding wire method.
Multi-flash
A camera that implements multiple flashes can be used to find depth edges or create stylized images. Such a camera has been developed by researchers at the Mitsubishi Electric Research Laboratories (MERL). Successive flashing of strategically placed flash mechanisms results in shadows along the depths of the scene. This information can be manipulated to suppress or enhance details or capture the intricate geometric features of a scene (even those hidden from the eye), to create a non-photorealistic image form. Such images could be useful in technical or medical imaging.
Flash intensity
Unlike flashbulbs, the intensity of an electronic flash can be adjusted on some units. To do this, smaller flash units typically vary the capacitor discharge time, whereas larger (e.g., higher power, studio) units typically vary the capacitor charge. Color temperature can change as a result of varying the capacitor charge, making color correction necessary. Constant-color-temperature flash can be achieved by using appropriate circuitry.
Flash intensity is typically measured in stops or in fractions (1, 1/2, 1/4, etc.). Some monolights display an "EV Number", so that a photographer can know the difference in brightness between different flash units with different watt-second ratings. EV10.0 is defined as 6400 watt-seconds, and EV9.0 is one stop lower, i.e. 3200 watt-seconds.
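On this convention each EV step is one photographic stop, i.e. a factor of two in watt-seconds, so the scale is logarithmic. A minimal sketch of the relationship as defined above:

import math

# EV Number scale: EV 10.0 = 6400 watt-seconds, one stop per EV step.
def ev_from_watt_seconds(ws: float) -> float:
    return 10.0 + math.log2(ws / 6400.0)

print(ev_from_watt_seconds(6400))  # 10.0
print(ev_from_watt_seconds(3200))  # 9.0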
Flash duration
Flash duration is commonly described by two numbers that are expressed in fractions of a second:
t0.1 is the length of time the light intensity is above 0.1 (10%) of the peak intensity
t0.5 is the length of time the light intensity is above 0.5 (50%) of the peak intensity
For a given flash event, the t0.1 time is necessarily longer than the t0.5 time, since the intensity stays above 10% of its peak for longer than it stays above 50%. These values determine the ability of a flash to "freeze" moving subjects in applications such as sports photography.
In cases where intensity is controlled by capacitor discharge time, t0.5 and t0.1 decrease with decreasing intensity. Conversely, in cases where intensity is controlled by capacitor charge, t0.5 and t0.1 increase with decreasing intensity due to the non-linearity of the capacitor's discharge curve.
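The asymmetry between t0.5 and t0.1 can be seen in an idealised model. Assuming a simple exponential decay I(t) = I0·exp(−t/τ) for a capacitor-discharge flash (a simplification, not a claim about any particular unit), the time spent above a fraction f of peak is τ·ln(1/f):

import math

# Idealised capacitor-discharge flash: I(t) = I0 * exp(-t / tau).
tau_s = 0.4e-3  # assumed time constant of 0.4 ms

t_05 = tau_s * math.log(1 / 0.5)  # time above 50% of peak (t0.5)
t_01 = tau_s * math.log(1 / 0.1)  # time above 10% of peak (t0.1)

print(f"t0.5 = {t_05 * 1e3:.2f} ms")  # about 0.28 ms
print(f"t0.1 = {t_01 * 1e3:.2f} ms")  # about 0.92 ms

Under this model t0.1 is always ln(10)/ln(2) ≈ 3.3 times t0.5, which is why both numbers are quoted.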
Flash LED used in phones
High-current flash LEDs are used as flash sources in camera phones, although they are less bright than xenon flash tubes. Unlike xenon tubes, LEDs require only a low voltage, eliminating the need of a high-voltage capacitor. They are more energy-efficient, and very small. The LED flash can also be used for illumination of video recordings or as an autofocus assist lamp in low-light photography; it can also be used as a general-purpose non-photographic light source.
Focal-plane-shutter synchronization
Electronic flash units have shutter speed limits with focal-plane shutters. Focal-plane shutters expose using two curtains that cross the sensor. The first one opens and the second curtain follows it after a delay equal to the nominal shutter speed. A typical modern focal-plane shutter on a full-frame or smaller sensor camera takes a few milliseconds to cross the sensor (the examples below use 2.4 ms and 5 ms), so at exposure times shorter than this only part of the sensor is uncovered at any one time.
The time available to fire a single flash which uniformly illuminates the image recorded on the sensor is the exposure time minus the shutter travel time. Equivalently, the minimum possible exposure time is the shutter travel time plus the flash duration (plus any delays in triggering the flash).
For example, a Nikon D850 has a shutter travel time of about 2.4 ms. A full-power flash from a modern built-in or hot shoe mounted electronic flash has a typical duration of about 1 ms, or a little less, so the minimum possible exposure time for even exposure across the sensor with a full-power flash is about 2.4 ms + 1.0 ms = 3.4 ms, corresponding to a shutter speed of about 1/300 s. However, some time is required to trigger the flash. At the maximum (standard) D850 X-sync shutter speed of 1/250 s, the exposure time is 1/250 s = 4.0 ms, so about 4.0 ms − 2.4 ms = 1.6 ms are available to trigger and fire the flash, and with a 1 ms flash duration, 1.6 ms − 1.0 ms = 0.6 ms are available to trigger the flash in this Nikon D850 example.
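The same budget can be written as a few lines of arithmetic, using the figures from the D850 example above:

# Timing budget for even full-power flash coverage (D850 example values).
shutter_travel_ms = 2.4    # curtain travel time
flash_duration_ms = 1.0    # full-power flash duration
x_sync_exposure_ms = 4.0   # 1/250 s expressed in milliseconds

min_even_exposure_ms = shutter_travel_ms + flash_duration_ms
trigger_margin_ms = x_sync_exposure_ms - shutter_travel_ms - flash_duration_ms

print(f"Minimum exposure for even coverage: {min_even_exposure_ms:.1f} ms")  # 3.4
print(f"Margin left to trigger the flash: {trigger_margin_ms:.1f} ms")       # 0.6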
Mid- to high-end Nikon DSLRs with a maximum shutter speed of 1/8000 s (roughly D7000 or D800 and above) have an unusual menu-selectable feature which increases the maximum X-sync speed to 1/320 s = 3.1 ms with some electronic flashes. At 1/320 s only 3.1 ms − 2.4 ms = 0.7 ms are available to trigger and fire the flash while achieving a uniform flash exposure, so the maximum flash duration, and therefore maximum flash output, must be, and is, reduced.
Contemporary (2018) focal-plane shutter cameras with full-frame or smaller sensors typically have maximum normal X-sync speeds of 1/200 s or 1/250 s. Some cameras are limited to 1/160 s. X-sync speeds for medium format cameras using focal-plane shutters are somewhat slower, e.g. 1/125 s, because of the greater shutter travel time required for a wider, heavier shutter that travels farther across a larger sensor.
In the past, slow-burning single-use flash bulbs allowed the use of focal-plane shutters at maximum speed because they produced continuous light for the time taken for the exposing slit to cross the film gate. Where such bulbs are still found, they cannot be used on modern cameras, because the bulb must be fired before the first shutter curtain begins to move (M-sync); the X-sync used for electronic flash normally fires only when the first shutter curtain reaches the end of its travel.
High-end flash units address this problem by offering a mode, typically called FP sync or HSS (High Speed Sync), which fires the flash tube multiple times during the time the slit traverses the sensor. Such units require communication with the camera and are thus dedicated to a particular camera make. The multiple flashes result in a significant decrease in guide number, since each is only a part of the total flash power, but it is all that illuminates any particular part of the sensor. In general, if s is the shutter speed and t is the shutter traverse time, the guide number reduces by a factor of √(t/s). For example, if the guide number is 100, the shutter traverse time is 5 ms (a shutter speed of 1/200 s), and the shutter speed is set to 1/2000 s (0.5 ms), the guide number reduces by a factor of √(5/0.5) = √10, or about 3.16, so the resultant guide number at this speed would be about 32.
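A minimal sketch of that guide number reduction, reproducing the worked example above:

import math

# FP/HSS guide number reduction: GN' = GN / sqrt(t / s), where t is the
# shutter traverse time and s the shutter speed.
def hss_guide_number(gn: float, traverse_s: float, shutter_s: float) -> float:
    return gn / math.sqrt(traverse_s / shutter_s)

print(round(hss_guide_number(100, 5e-3, 1 / 2000), 1))  # 31.6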
Current (2010) flash units frequently have much lower guide numbers in HSS mode than in normal modes, even at speeds below the shutter traverse time. For example, the Mecablitz 58 AF-1 digital flash unit has a guide number of 58 in normal operation, but only 20 in HSS mode, even at low speeds.
Technique
As well as dedicated studio use, flash may be used as the main light source where ambient light is inadequate, or as a supplementary source in more complex lighting situations. Basic flash lighting produces a hard, frontal light unless modified in some way. Several techniques are used to soften light from the flash or provide other effects.
Softboxes, diffusers that cover the flash lamp, scatter direct light and reduce its harshness. Reflectors, including umbrellas, flat-white backgrounds, drapes and reflector cards are commonly used for this purpose (even with small hand-held flash units). Bounce flash is a related technique in which flash is directed onto a reflective surface, for example a white ceiling or a flash umbrella, which then reflects light onto the subject. It can be used as fill-flash or, if used indoors, as ambient lighting for the whole scene. Bouncing creates softer, less artificial-looking illumination than direct flash, often reducing overall contrast and expanding shadow and highlight detail, and typically requires more flash power than direct lighting. Part of the bounced light can also be aimed directly at the subject by "bounce cards" attached to the flash unit, which increase the efficiency of the flash and illuminate shadows cast by light coming from the ceiling. It is also possible to use one's own palm for that purpose, resulting in warmer tones in the picture, as well as eliminating the need to carry additional accessories.
Fill flash or "fill-in flash" describes flash used to supplement ambient light in order to illuminate a subject close to the camera that would otherwise be in shade relative to the rest of the scene. The flash unit is set to expose the subject correctly at a given aperture, while shutter speed is calculated to correctly expose for the background or ambient light at that aperture setting. Secondary or slave flash units may be synchronized to the master unit to provide light from additional directions. The slave units are electrically triggered by the light from the master flash. Many small flashes and studio monolights have optical slaves built in. Wireless radio transmitters, such as PocketWizards, allow the receiver unit to be around a corner, or at a distance too far to trigger using an optical sync.
Some high-end units can be set to strobe, flashing a specified number of times at a specified frequency. This allows action to be frozen multiple times in a single exposure.
Colored gels can also be used to change the color of the flash. Correction gels are commonly used, so that the light of the flash matches that of tungsten lights (using a CTO gel) or fluorescent lights.
Open flash, free flash or manually-triggered flash refers to modes in which the photographer manually triggers the flash unit to fire independently of the shutter.
Drawbacks
Using on-camera flash gives a very harsh light, which results in a loss of shadows in the image, because the only light source is in practically the same place as the camera. Balancing the flash power and ambient lighting, or using off-camera flash, can help overcome these issues. Using an umbrella or softbox (the flash must be off-camera for this) produces softer shadows.
A typical problem with cameras using built-in flash units is the low intensity of the flash; the level of light produced will often not suffice for good pictures at distances beyond a few metres. Dark, murky pictures with excessive image noise or "grain" will result. In order to get good flash pictures with simple cameras, it is important not to exceed the recommended distance for flash pictures. Larger flashes, especially studio units and monoblocks, have sufficient power for larger distances, even through an umbrella, and can even be used against sunlight at short distances. Cameras which automatically flash in low light conditions often do not take into account the distance to the subject, causing them to fire even when the subject is several tens of metres away and unaffected by the flash. In crowds at sports matches, concerts and so on, the stands or the auditorium can be a constant sea of flashes, resulting in distraction to the performers or players and providing absolutely no benefit to the photographers.
The "red-eye effect" is another problem with on camera and ring flash units. Since the retina of the human eye reflects red light straight back in the direction it came from, pictures taken from straight in front of a face often exhibit this effect. It can be somewhat reduced by using the "red eye reduction" found on many cameras (a pre-flash that makes the subject's irises contract). However, very good results can be obtained only with a flash unit that is separated from the camera, sufficiently far from the optical axis, or by using bounce flash, where the flash head is angled to bounce light off a wall, ceiling or reflector.
On some cameras the flash exposure measuring logic fires a pre-flash very quickly before the real flash. In some camera/people combinations this will lead to shut eyes in every picture taken. The blink response time seems to be around 1/10 of a second. If the exposure flash is fired at approximately this interval after the TTL measuring flash, people will be squinting or have their eyes shut. One solution may be the FEL (flash exposure lock) offered on some more expensive cameras, which allows the photographer to fire the measuring flash at some earlier time, long (many seconds) before taking the real picture. Many camera manufacturers do not make the TTL pre-flash interval configurable.
Flash distracts people, limiting the number of pictures that can be taken without irritating them. Photographing with flash may not be permitted in some museums even after purchasing a permit for taking pictures. Flash equipment may take some time to set up, and like any grip equipment, may need to be carefully secured, especially if hanging overhead, so it does not fall on anyone. A small breeze can easily topple a flash with an umbrella on a lightstand if it is not tied down or sandbagged. Larger equipment (e.g., monoblocks) will need a supply of AC power.
Gallery
| Technology | Photography | null |
195768 | https://en.wikipedia.org/wiki/Legionella | Legionella | Legionella is a genus of gram-negative bacteria that can be seen using a silver stain or grown in a special media that contains cysteine, an amino acid. It is known to cause legionellosis (all illnesses caused by Legionella) including a pneumonia-type illness called Legionnaires' disease and a mild flu-like illness called Pontiac fever. These bacteria are common in many places, like soil and water. There are over 50 species and 70 types (serogroups) identified. Legionella does not spread from person-to-person. Most individuals who are exposed to the bacteria do not get sick. Most outbreaks result from poorly maintained cooling towers.
The cell wall of the Legionella bacteria has parts that determine its specific type. The structural arrangement and building blocks (sugars) in the cell wall help classify the bacteria.
Etymology
Legionella was named after a 1976 outbreak of a then-unknown "mystery disease" at a convention of the American Legion, an association of U.S. military veterans, in Philadelphia. This outbreak happened within days of the 200th anniversary of the signing of the Declaration of Independence, which led to it being highly publicized and caused great concern in the U.S. On January 18, 1977, the causative agent was identified as a previously unknown bacterium subsequently named Legionella.
Detection
The detection of Legionella typically requires growing them on buffered charcoal yeast extract agar. As Legionella growth requires cysteine and iron, it cannot grow on other common lab media.
To detect Legionella in water, it is first concentrated, then inoculated into charcoal yeast extract agar containing selective agents that prevent the growth of other organisms. Heat or acid treatments are sometimes used to eliminate other microbes in a sample.
After incubation for up to 10 days, the presence of Legionella can be confirmed if colonies grow on agar with cysteine but not on agar without it. Immunological techniques are then commonly used to determine the species and/or serogroups of bacteria present in the sample.
Some hospitals use the Legionella urinary antigen test when Legionella pneumonia is suspected. This test is faster and uses a urine sample instead of a sputum sample, giving results in hours compared to days. However, it only detects one type of Legionella: Legionella pneumophila serogroup 1 (LP1). Non-LP1 strains can only be detected through culturing.
Methods such as polymerase chain reaction (PCR) and rapid immunological tests can detect Legionella in water much faster.
Government health surveillance reports have shown an increase in the proportion of water-related Legionella outbreaks, particularly in healthcare settings.
Genomic analyses of Legionella have resulted in the identification of 24 conserved signature indels (CSIs) in diverse proteins, including 30S ribosomal protein S8, periplasmic serine endoprotease DegP precursor, DNA polymerase I, and an ABC transporter permease, that are specifically present in different species of Legionella. These markers can help distinguish Legionella from other types of bacteria, improving diagnosis.
Sources
Documented sources include cooling towers, swimming pools, domestic water systems and showers, ice-making machines, refrigerated cabinets, whirlpool spas, hot springs, fountains, dental equipment, soil, automobile windshield washer fluid (especially if filled with water instead of wiper fluid), industrial coolant, and waste water treatment plants.
Air-conditioning units in homes, cars, and windows (as seen in some hotels) are not sources of infection, as they do not use water to cool air.
Airborne transmission
The bacteria can spread through tiny droplets of water that get into the air. People can breathe in these droplets, which then infect cells in our airways, resulting in illness. This is the most common way Legionella spreads.
Recreational exposure
Cooling towers are well established as sources of Legionella that may have an effect on community exposure to the bacterium and Legionnaires' disease epidemics. In addition to cooling towers, use of swimming pools, spa pools, and other recreational water bodies has also been shown to increase risk of exposure to Legionella, though this differs by species of Legionella. In a review of disease caused by recreational exposure to Legionella, most exposures occurred in spas or pools used by the public (hotels or recreational centers) or in natural settings (hot springs or thermal water).
Hotels and other tourist destinations have contributed to Legionella exposure. The relative danger at commonly used facilities with heating and cooling water systems depends on several factors, such as the water source, how much Legionella is present (if there is any), if and how the water system is treated, how people are interacting with this water, and other factors that make the water systems so dynamic.
In addition to tourists and other recreators, gardeners may be at increased risk for exposure to Legionella. In some countries (like Australia), Legionella lives in soil and compost. Warmer temperatures and increased rainfall in some regions of the world due to climate change may impact Legionella in soil, gardeners' seasonal exposure to contaminated soil, and complex water systems used by the public.
Exposure related to natural disasters and climate change
Not only are Legionella spp. present in artificial water systems and infrastructure, but also these bacteria live in natural bodies of water, such as lakes and rivers. Weather patterns and other environmental factors may increase risk of Legionella outbreaks; a study in Minnesota, USA, using outbreak information from 2011 to 2018 showed precipitation as having the greatest effect of increasing risk of Legionella exposure when taking into account other environmental factors (temperature, relative humidity, land use and age of infected person). Weather patterns heavily relate to the established infrastructure and water sources, especially in urban settings. In the US, most cases of Legionella infection have occurred in the summertime, though they were likely more associated with rainfall and humidity than summer temperatures. Severe rain patterns can increase risk of water source contamination through flooding and unseasonable rains; therefore, natural disasters, especially those associated with climate change, may increase risk of exposure to Legionella.
Vaccine research
No vaccine is available for legionellosis. Vaccination studies using heat-killed or acetone-killed cells have been carried out in guinea pigs, which were then given Legionella intraperitoneally or by aerosol. Both vaccines were shown to give moderately high levels of protection. Protection was dose-dependent and correlated with antibody levels as measured by enzyme-linked immunosorbent assay (ELISA) to an outer membrane antigen and by indirect immunofluorescence to heat-killed cells.
Molecular biology
Legionella is genetically diverse, with 7-11% of genes being strain-specific. The molecular functions of some of the proven virulence factors of Legionella have been discovered.
Legionella disease manifestation
Signs and symptoms
Legionella pneumonia, often called "atypical pneumonia," is the most common form of legionellosis. The early symptoms are general, including fever, muscle pain, headache, shortness of breath, and a dry or productive cough. Patients with pneumonia who also have neurological or gastrointestinal symptoms like loss of appetite, nausea, or vomiting may be more likely to have legionellosis. A physical examination may reveal abnormal lung sounds such as rales or rhonchi, and if consolidation is present, there may be signs like egophony or dullness to percussion. Laboratory tests might show either a high or low white blood cell count, low platelets, elevated liver enzymes (ALT, AST), low sodium levels, and possibly decreased kidney function.
Another form of legionellosis is Pontiac fever, which resembles the flu and includes symptoms like fever, headache, muscle pain, chills, dizziness, nausea, vomiting, and diarrhea. This form is milder than Legionella pneumonia and typically resolves on its own.
In some cases, Legionella can cause infections outside the lungs, including skin and soft tissue infections similar to cellulitis. This is especially a concern if contaminated water comes into contact with surgical wounds. It can also lead to heart infections, such as prosthetic valve endocarditis (without positive blood cultures), myocarditis, and pericarditis. In rare cases, Legionella species have been linked to joint infections (example: septic arthritis) and sinusitis.
Pathogenesis
In nature, Legionella bacteria live inside tiny host organisms, such as amoebae (examples: Acanthamoeba spp., Naegleria spp., Vermamoeba spp.) or other protozoa such as Tetrahymena pyriformis, which are found in water and soil. The bacteria occur in low amounts in natural water sources like lakes and streams, but can grow quickly in man-made water systems under the right conditions.
Legionella is spread through inhaling contaminated water droplets, which can come from mists, sprays, or other sources that release tiny droplets into the air. In homes, the most common sources of exposure are shower heads and sinks. The incubation period, or the time it takes for symptoms to appear, is usually 2-10 days for Legionella pneumonia and 24-72 hours for Pontiac fever. In rare cases, infection can also happen if people accidentally breathe in drinking water. Person-to-person spread has not been proven, but could be possible in rare situations.
Most healthy people don't get severely sick. The risk of Legionella infection is higher in adults, especially those over 40 years old. People with certain health conditions, like kidney or liver disease, chronic lung disease, or heart disease, are at a greater risk. Those with weakened immune systems, such as cancer patients or organ transplant recipients, are at risk as well. People with chronic illnesses, like autoimmune disease treated with TNF inhibitors, also face a higher risk of infection. Men are about three times more likely than women to get infected, while children are less likely to develop severe cases. Smoking, including cannabis smoking, is strongly linked to increased risk due to damage to the airway lining.
Hospitals and nursing homes are especially concerned about water system safety because vulnerable patients are at a higher risk. For example, the Texas Department of State Health Services has guidelines for hospitals to stop the spread of Legionella.
In the United States, Legionella infects about 8,000 to 18,000 people each year. Preventing exposure to contaminated water droplets remains key to reducing spread.
Mechanism
After inhaling or accidentally swallowing small aerosol particles, Legionella bacteria attach to immune cells and are taken up by them through a process called phagocytosis. Inside the body, the bacteria can grow and multiply in lung cells, specifically alveolar macrophages and monocytes.
Legionella has several ways to evade the immune system, increasing the chance that a person develops symptoms of infection. It creates special vacuoles, or protective bubbles, inside immune cells to hide from the body's defenses. It also reduces the activity of cytokine receptors (which play a role in immune response), blocks the production of certain proteins needed by the host, and avoids being broken down by lysosomes, which are cell structures meant to digest harmful particles.
Diagnosis
Legionella is usually diagnosed using a urinary antigen test. Some patients, especially those in the ICU or those who cannot provide a sputum sample, may need an invasive procedure, such as a bronchoscopy, if the initial urinary antigen test is negative. For the most accurate diagnosis, doctors may take cultures from sputum, fluid from the lungs (called bronchoalveolar lavage), lung tissue, or other affected areas. These cultures are considered the "gold standard" for confirming Legionella infection.
Prevention and screening
Preventing Legionella infection starts with improving water systems and setting up water-monitoring processes to keep it under control. In the U.S., prevention efforts focus mainly on health care settings, especially hospitals, where water-based exposures are more likely to be fatal. Federal guidelines to reduce Legionella risks were first introduced in June 2017, requiring all medical centers to monitor water quality and have systems in place to prevent hospital-acquired Legionella pneumonia. Facilities with water features, like therapy pools, ice machines, and decorative fountains, must have cleaning and disinfection policies.
To remove Legionella from water systems, chemical disinfectants are often added. Water filtration can also be used, either at the plumbing level or at specific points of use, as a primary or combined prevention method. Using disinfectants requires regular maintenance and monitoring of chemical levels to ensure they're effective in preventing Legionella growth.
Treatment
Antibiotics are usually the first choice when treating community-acquired pneumonia, which may or may not be caused by Legionella. The first-line options when Legionella is the causative agent are macrolides and fluoroquinolones. Azithromycin, a type of macrolide, is the preferred choice. For patients with mild illness, the treatment course usually lasts about 10-14 days, although most symptoms tend to improve within the first 3-5 days of starting the antibiotics. For patients who are immunosuppressed or have severe cases of Legionella pneumonia, a longer treatment course of three weeks is recommended to ensure effective recovery.
Outcomes and prognosis
Even with the right treatment, Legionella pneumonia can lead to serious health problems and can be life-threatening. The case fatality rate for this type of pneumonia is about 10%, and patients who are admitted to the ICU or have other major health issues are more likely to die from it. If there is a delay in starting antibiotic treatment, the risk of death can be about three times higher compared to those who receive treatment promptly. Among patients who develop pneumonia in the hospital, especially cases caused by Legionella, the death rate is around 25%. For those who are immunocompromised, the mortality rate can be as high as 30-50%.
After surviving Legionella pneumonia, many patients experience long-term complications, with more than 25% facing ongoing issues such as recurrent hospitalizations, acute kidney failure, lung problems, and recurring pneumonia. On the other hand, recovery from Pontiac fever usually occurs within 3-5 days, and serious complications or death related to Pontiac fever are very rare.
Epidemiology
Legionella is responsible for more than 50% of all waterborne outbreaks and over 10% of diseases related to drinking water in the U.S. The incidence of legionellosis, or Legionella infection, is about 2 to 3 cases for every 100,000 people; however, the true number of cases is likely higher than reported. This is because many studies on community-acquired pneumonia do not routinely test for Legionella, meaning some cases may go undetected.
History
Examples of common-source outbreaks:
2001: Community (Spain); source: cooling tower; # cases: 449
2012: Hotel; source: potable water, fountain, spa; # cases: 89 (+29 suspected)
2012: Hospital; source: potable water; # cases: 22
2014: Community; source: cooling tower; # cases: 334
2014-2015: Hospital/community; source: potable water, household, cooling towers; # cases: 86
2015: Long-term care facility; source: potable water; # cases: 74
2018: Hospital; source: potable water, showers; # cases: 128
2019: Hotel; source: fountain; # cases: 13 (+66 suspected)
2019: Community; source: hot tub display; # cases: 141
Legionella control and biomonitoring
Control of Legionella growth can occur through chemical, thermal or ultraviolet treatment methods.
Heat
One option is temperature control: keeping all cold water below about 25 °C and all hot water above about 50 °C.
Temperature affects the survival of Legionella as follows:
Above 70 °C – Legionella dies almost instantly
At 60 °C – 90% die in 2 minutes (Decimal reduction time (D) = 2 minutes)
At 50 °C – 90% die in 80–124 minutes, depending on strain (D = 80–124 minutes)
48 to 50 °C – can survive but do not multiply
32 to 42 °C – ideal growth range
25 to 45 °C – growth range
Below 20 °C – can survive, but are dormant
Other temperature sensitivity:
70 to 80 °C – Disinfection range
At 66 °C – Legionella dies within 2 minutes
At 60 °C – Legionella dies within 32 minutes
At 55 °C – Legionella dies within 5 to 6 hours
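The decimal reduction time (D) quoted above lends itself to a short worked example: each D minutes at a given temperature kills 90% of the remaining bacteria, so the surviving fraction after t minutes is 10^(−t/D). A minimal sketch:

# Thermal kill via decimal reduction time (D):
# surviving fraction after t minutes = 10 ** (-t / D).
def surviving_fraction(t_minutes: float, d_minutes: float) -> float:
    return 10 ** (-t_minutes / d_minutes)

# At 60 °C, D = 2 minutes (from the table above):
print(surviving_fraction(2, 2))  # 0.1   -> 90% killed
print(surviving_fraction(6, 2))  # 0.001 -> 99.9% killed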
Water temperature can be monitored in real-time with electronic devices.
Controlling Legionella in potable water systems
Potable water refers to hot or cold water that is intended for drinking. The CDC recommends that hot water be kept hot and cold water be kept cold, outside the 25–45 °C growth range noted above, and that infrequently used fixtures be flushed regularly.
In building water systems
Chlorine
A very effective chemical treatment is chlorine. For systems with marginal issues, chlorine provides effective results at >0.5 ppm residual in the hot water system. For systems with significant Legionella problems, temporary shock chlorination—where levels are raised to higher than 2 ppm for a period of 24 hours or more and then returned to 0.5 ppm—may be effective. Hyperchlorination can also be used where the water system is taken out of service and the chlorine residual is raised to 50 ppm or higher at all distal points for 24 hours or more. The system is then flushed and returned to 0.5 ppm chlorine prior to being placed back into service. These high levels of chlorine penetrate biofilm, killing both the Legionella bacteria and the host organisms. Annual hyperchlorination can be an effective part of a comprehensive Legionella preventive action plan.
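As a back-of-envelope illustration of the dosing arithmetic (a sketch with assumed values, not a treatment protocol): 1 ppm corresponds to roughly 1 mg of chlorine per litre, so the mass required scales with system volume.

# Approximate chlorine dosing: 1 ppm ~ 1 mg chlorine per litre of water.
system_volume_litres = 10_000   # assumed system volume
target_residual_ppm = 2.0       # shock chlorination level from the text

chlorine_grams = system_volume_litres * target_residual_ppm / 1000  # mg -> g
print(f"Chlorine required: {chlorine_grams:.0f} g")  # 20 g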
Copper-silver ionization
Copper-silver ionization is recognized by the U.S. Environmental Protection Agency and WHO for Legionella control and prevention. It is a popular method used in building water systems to control Legionella bacteria, mainly because it is affordable and does not require much maintenance. In this method, the copper ions weaken the bacteria's cell wall, allowing silver ions to then disrupt the bacteria's DNA and proteins, preventing further proliferation. Copper and silver ion concentrations must be maintained at optimal levels, taking into account both water flow and overall water usage, to control Legionella.
Copper-silver ionization is an effective process to control Legionella in potable water distribution systems found in health facilities, hotels, nursing homes, and most large buildings. However, it is not intended for cooling towers, because pH levels greater than 8.6 cause ionic copper to precipitate. Furthermore, tolyltriazole, a common additive in cooling water treatment, can bind the copper, making it ineffective. Ionization became the first such hospital disinfection process to have fulfilled a proposed four-step modality evaluation; by then, it had been adopted by over 100 hospitals. Copper-silver ionization works more slowly than other disinfectants and is affected by the water's chemical makeup.
Chlorine dioxide
Chlorine dioxide has been approved by the U.S. Environmental Protection Agency as a primary disinfectant of potable water since 1945. Unlike some other chlorine sources, chlorine dioxide does not produce carcinogenic byproducts such as trihalomethanes when used to purify drinking water containing natural organic compounds such as humic and fulvic acids; with chlorine itself, trihalomethanes may be formed, and drinking water containing such molecules has been shown to increase the risk of cancer.
Since chlorine dioxide remains a dissolved gas in water, it enters microorganisms more easily, disrupting their internal functions. It works better than chlorine at penetrating biofilms and is effective across a wider pH range. Testing has demonstrated that low levels of chlorine dioxide reduced Legionella bacteria to undetectable levels in 6 days; however, its effectiveness can be reduced when amoebae are present. It is also not widely used in water systems because of concerns regarding its toxicity, unpleasant odors, and the harmful byproducts, noted above, that it can create.
Monochloramine is an alternative. It is created by mixing chlorine and ammonia and valued for its stability and ability to penetrate biofilms better than chlorine. Like chlorine and chlorine dioxide, monochloramine is approved by the U.S. Environmental Protection Agency as a primary potable water disinfectant. Environmental Protection Agency registration requires a biocide label, which lists toxicity and other data required for all registered biocides.
Monochloramine does, however, work more slowly than chlorine, and requires precise chemical management due to toxicity concerns.
Ultraviolet light
Ultraviolet light, in the range of 200 to 300 nm, can inactivate Legionella. According to a review by the US EPA, three-log (99.9%) inactivation can be achieved with a dose of less than 7 mJ/cm2.
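Since UV dose is irradiance multiplied by exposure time, the EPA figure translates directly into a required exposure. A minimal sketch (the lamp irradiance at the water is an assumed value):

# UV dose (mJ/cm^2) = irradiance (mW/cm^2) * exposure time (s).
target_dose_mj_per_cm2 = 7.0   # 3-log inactivation dose cited above
irradiance_mw_per_cm2 = 0.5    # assumed irradiance at the water

exposure_time_s = target_dose_mj_per_cm2 / irradiance_mw_per_cm2
print(f"Required exposure: {exposure_time_s:.0f} s")  # 14 s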
Biomonitoring
A Legionella-specific aptamer has been discovered and in 2022 was developed into an assay that detects Legionella down to a limit of 10^4.3 cells/mL with no processing steps.
European standards
Several European countries established the European Working Group for Legionella Infections to share knowledge and experience about monitoring potential sources of Legionella. The working group has published guidelines on the actions to be taken to limit the number of colony-forming units (that is, live bacteria that are able to multiply) of Legionella per litre of water.
Monitoring guidelines are stated in the Approved Code of Practice L8 in the UK. These are not mandatory, but are widely regarded as such: an employer or property owner must either follow the Approved Code of Practice or achieve the same result. Failure to show monitoring records to at least this standard has resulted in several high-profile prosecutions, e.g. Nalco and Bulmers, neither of which could prove that a sufficient scheme was in place while an outbreak was being investigated; both were fined about £300,000. Important case law in this area is R v Board of Trustees of the Science Museum [1993] 3 All ER 853, (1993) 1 WLR 1171.
Employers and those responsible for premises within the UK are required under the Control of Substances Hazardous to Health Regulations to undertake an assessment of the risks arising from Legionella. This risk assessment may be very simple for low-risk premises; for larger or higher-risk properties, however, it may include a narrative of the site, an asset register, simplified schematic drawings, recommendations on compliance, and a proposed monitoring scheme.
The L8 Approved Code of Practice recommends that the risk assessment be reviewed at least every two years and whenever there is reason to suspect it is no longer valid, such as when water systems have been amended or modified, when the use of the water system has changed, or when there is reason to suspect that Legionella control measures are no longer working.
Weaponization
Legionella could be used as a weapon; genetic modification of L. pneumophila has been demonstrated in which the mortality rate in infected animals was increased to nearly 100%. A former Soviet bioengineer, Sergei Popov, stated in 2000 that his team experimented with genetically enhanced bioweapons, including Legionella. Popov worked as a lead researcher at the Vector Institute from 1976 to 1986, then at Obolensk until 1992, when he defected to the West. He later divulged much of the Soviet biological weapons program and settled in the United States.
| Biology and health sciences | Gram-positive bacteria | Plants |
195781 | https://en.wikipedia.org/wiki/Australian%20magpie | Australian magpie | The Australian magpie (Gymnorhina tibicen) is a black and white passerine bird native to Australia and southern New Guinea, and introduced to New Zealand, and the Fijian island of Taveuni. Although once considered to be three separate species, it is now considered to be one, with nine recognised subspecies. A member of the Artamidae, the Australian magpie is placed in its own genus Gymnorhina and is most closely related to the black butcherbird (Melloria quoyi). It is not closely related to the Eurasian magpie, which is a corvid.
The adult Australian magpie is a fairly robust bird, ranging from 37 to 43 cm in length, with black and white plumage, gold brown eyes and a solid wedge-shaped bluish-white and black bill. The male and female are similar in appearance, but can be distinguished by differences in back markings. The male has pure white feathers on the back of the head where the female has white blending to grey feathers. With its long legs, the Australian magpie walks rather than waddles or hops and spends much time on the ground.
Described as one of Australia's most accomplished songbirds, the Australian magpie has an array of complex vocalisations. It is omnivorous, with the bulk of its varied diet made up of invertebrates. It is generally sedentary and territorial throughout its range. Common and widespread, it has adapted well to human habitation and is a familiar bird of parks, gardens and farmland in Australia and New Guinea. This species is commonly fed by households around Australia, but in spring (and occasionally in autumn) a small minority of breeding magpies (almost always males) become aggressive, swooping and attacking those who approach their nests. Research has shown that magpies can recognise at least 100 different people, and may be less likely to swoop individuals they have befriended.
Over 1,000 Australian magpies were introduced into New Zealand from 1864 to 1874, but were subsequently deemed to be displacing native birds and are now treated as a pest species. Introductions also occurred in the Solomon Islands and Fiji, where the birds are not considered an invasive species. The Australian magpie is the mascot of several Australian and New Zealand sporting teams, including the Collingwood Magpies, the Western Suburbs Magpies, Port Adelaide Magpies and, in New Zealand, the Hawke's Bay Magpies.
Taxonomy and nomenclature
The Australian magpie was first described in the scientific literature by English ornithologist John Latham in 1801 as Coracias tibicen, the type collected in the Port Jackson region. Its specific epithet derived from the Latin tibicen "flute-player" or "piper" in reference to the bird's melodious call. An early recorded vernacular name is piping poller, written on a painting by Thomas Watling, one of a group known collectively as the Port Jackson Painter, some time between 1788 and 1792. Other names used include piping crow-shrike, piping shrike, piper, maggie, flute-bird and organ-bird. The term bell-magpie was proposed to help distinguish it from corvid magpies but failed to gain wide acceptance.
Tarra-won-nang, or djarrawunang, wibung, and marriyang were names used by the local Eora and Darug inhabitants of the Sydney Basin. Booroogong and garoogong were Wiradjuri words and Victorian terms included carrak (Jardwadjali), kuruk (Western Victorian languages), kiri (Dhauwurd Wurrung language) and kurikari (Wuluwurrung). Among the Kamilaroi, it is burrugaabu, galalu, or guluu. In Western Australia it is known as warndurla among the Yindjibarndi people of the central and western Pilbara, and koorlbardi amongst the south west Noongar peoples. In South Australia, where it is the State emblem, it is the kurraka (Kaurna), murru (Narungga), urrakurli (Adnyamathanha), goora (Barngarla), konlarru (Ngarrindjeri) and tuwal (Bunganditj).
The bird was named for its similarity in colouration to the Eurasian magpie; it was a common practice for early settlers to name plants and animals after European counterparts. However, the Eurasian magpie is a member of the Corvidae, while its Australian counterpart is placed in the family Artamidae (although both are members of a broad corvid lineage). The Australian magpie's affinities with butcherbirds and currawongs were recognised early, and the three genera were placed in the family Cracticidae in 1914 by John Albert Leach after he had studied their musculature. American ornithologists Charles Sibley and Jon Ahlquist recognised the close relationship between woodswallows and the butcherbirds in 1985, and combined them into a Cracticini clade, in the Artamidae. The Australian magpie is placed in its own monotypic genus Gymnorhina, which was introduced by the English zoologist George Robert Gray in 1840. The name of the genus is from the Ancient Greek gumnos for "naked" or "bare" and rhis, rhinos "nostrils". Some authorities, such as Glen Storr in 1952 and Leslie Christidis and Walter Boles in their 2008 checklist, have placed the Australian magpie in the butcherbird genus Cracticus, arguing that its adaptation to ground-living is not enough to consider it a separate genus. A molecular genetic study published in 2013 showed that the Australian magpie is a sister taxon to the black butcherbird (Melloria quoyi), and that the two species are in turn sister to a clade that includes the other butcherbirds in the genus Cracticus. The ancestor to the two species is thought to have split from the other butcherbirds between 8.3 and 4.2 million years ago, during the late Miocene to early Pliocene, while the two species themselves diverged sometime during the Pliocene (5.8–3.0 million years ago).
The Australian magpie was subdivided into three species in the literature for much of the twentieth century: the black-backed magpie (G. tibicen), the white-backed magpie (G. hypoleuca), and the western magpie (G. dorsalis). They were later noted to hybridise readily where their territories crossed, with hybrid grey or striped-backed magpies being quite common. They were reclassified as one species by Julian Ford in 1969, with most recent authors following suit.
Subspecies
There are currently thought to be nine subspecies of the Australian magpie, although there are large zones of overlap with intermediate forms between the taxa. There is a tendency for birds to become larger with increasing latitude, the southern subspecies being larger than those further north, except the Tasmanian form which is small. The original form, known as the black-backed magpie and classified as Gymnorhina tibicen, has been split into four black-backed races:
G. tibicen tibicen, the nominate form, is a large subspecies found in southeastern Queensland, from the vicinity of Moreton Bay through eastern New South Wales to Moruya, New South Wales almost to the Victorian border. It is coastal or near-coastal and is restricted to east of the Great Dividing Range.
G. tibicen terraereginae, found from Cape York and the Gulf Country southwards across Queensland to the coast between Halifax Bay in the north and south to the Mary River, and central and western New South Wales and into northern South Australia, is a small to medium-sized subspecies. The plumage is the same as that of subspecies tibicen, although the female has a shorter black tip to the tail. The wings and tarsus are shorter and the bill proportionally longer. It was originally described by Gregory Mathews in 1912, its subspecies name a Latin translation, terra "land" reginae "queen's" of "Queensland". Hybridisation with the large white-backed subspecies tyrannica occurs in northern Victoria and southeastern New South Wales; intermediate forms have black bands of varying sizes in white-backed area. Three-way hybridisation occurs between Bega and Batemans Bay on the New South Wales south coast.
G. tibicen eylandtensis, the Top End magpie, is found from the Kimberley in northern Western Australia, across the Northern Territory through Arnhem Land and Groote Eylandt and into the Gulf Country. It is a small subspecies with a long and thinner bill, with birds of Groote Eylandt possibly even smaller than mainland birds. It has a narrow black terminal tailband, and a narrow black band; the male has a large white nape, the female pale grey. This form was initially described by H. L. White in 1922. It intergrades with subspecies terraereginae southeast of the Gulf of Carpentaria.
G. tibicen longirostris, the long-billed magpie, is found across northern Western Australia, from Shark Bay into the Pilbara. Named in 1903 by Alex Milligan, it is a medium-sized subspecies with a long thin bill. Milligan speculated the bill may have been adapted for the local conditions, slim fare meaning the birds had to pick at dangerous scorpions and spiders. There is a broad area of hybridisation with the western dorsalis in southern central Western Australia from Shark Bay south to the Murchison River and east to the Great Victoria Desert.
The white-backed magpie, originally described as Gymnorhina hypoleuca by John Gould in 1837, has also been split into races:
G. tibicen tyrannica, a very large white-backed form found from Twofold Bay on the New South Wales far south coast, across southern Victoria south of the Great Dividing Range through to the Coorong in southeastern South Australia. It was first described by Schodde and Mason in 1999. It has a broad black tail band.
G. tibicen telonocua, found from Cowell south into the Eyre and Yorke Peninsulas in southern South Australia, as well as the southwestern Gawler Ranges. Described by Schodde and Mason in 1999, its subspecific name is an anagram of leuconota "white-backed". It is very similar to tyrannica, differing in having a shorter wing and being lighter and smaller overall. The bill is relatively short compared with other magpie subspecies. Intermediate forms are found in the Mount Lofty Ranges and on Kangaroo Island.
G. tibicen hypoleuca now refers to a small white-backed subspecies with a short compact bill and short wings, found on King and Flinders Islands, as well as Tasmania.
The western magpie, G. tibicen dorsalis was originally described as a separate species by A. J. Campbell in 1895 and is found in the fertile south-west corner of Western Australia. The adult male has a white back and most closely resembles subspecies telonocua, though it is a little larger with a longer bill and the black tip of its tail plumage is narrower. The female is unusual in that it has a scalloped black or brownish-black mantle and back; the dark feathers there are edged with white. This area appears a more uniform black as the plumage ages and the edges are worn away. Both sexes have black thighs.
The New Guinean magpie, G. tibicen papuana, is a little-known subspecies found in southern New Guinea. The adult male has a mostly white back with a narrow black stripe, and the female a blackish back; the black feathers here are tipped with white similar to subspecies dorsalis. It has a long deep bill resembling that of subspecies longirostris. Genetically it is closely related to a western lineage of Australian magpies comprising subspecies dorsalis, longirostris and eylandtensis, suggesting their ancestors occupied savannah country that formed a land bridge between New Guinea and Australia and was submerged around 16,500 years ago.
Description
The adult magpie ranges from 37 to 43 cm in length with a 65–85 cm wingspan, and weighs 220–350 g. Its robust wedge-shaped bill is bluish-white bordered with black, with a small hook at the tip. The black legs are long and strong. The plumage is pure glossy black and white; both sexes of all subspecies have black heads, wings and underparts with white shoulders. The tail has a black terminal band. The nape is white in the male and light greyish-white in the female. Mature magpies have dull red eyes, in contrast to the yellow eyes of currawongs and white eyes of Australian ravens and crows. The main difference between the subspecies lies in the "saddle" markings on the back below the nape. Black-backed subspecies have a black saddle and white nape. White-backed subspecies have a wholly white nape and saddle. The male Western Australian subspecies dorsalis is also white-backed, but the equivalent area in the female is scalloped black.
Juveniles have lighter greys and browns amidst the starker blacks and whites of their plumage; two- or three-year-old birds of both sexes closely resemble and are difficult to distinguish from adult females. Immature birds have dark brownish eyes until around two years of age. Australian magpies generally live to around 25 years of age, though ages of up to 30 years have been recorded. The reported age of first breeding has varied according to area, but the average is between three and five years.
Well-known and easily recognisable, the Australian magpie is unlikely to be confused with any other species. The pied butcherbird has a similar build and plumage, but has white underparts unlike the former species' black underparts. The magpie-lark is a much smaller and more delicate bird with complex and very different banded black and white plumage. Currawong species have predominantly dark plumage and heavier bills.
Vocalisations
One of Australia's most highly regarded songbirds, the Australian magpie has a wide variety of calls, many of which are complex. Pitch may vary as much as four octaves, and the bird can mimic over 35 species of native and introduced bird species, as well as dogs and horses. Magpies have even been noted to mimic human speech when living in close proximity to humans. Its complex, musical, warbling call is one of the most familiar Australian bird sounds. In Denis Glover's poem "The Magpies", the mature magpie's call is described as quardle oodle ardle wardle doodle, one of the most famous lines in New Zealand poetry, and as waddle giggle gargle paddle poodle, in the children's book Waddle Giggle Gargle by Pamela Allen. The bird has been known to mimic environmental sounds as well, including the noises made by emergency vehicles during the New South Wales bushfire state of emergency.
When alone, a magpie may make a quiet musical warbling; these complex melodious warbles or subsongs are pitched at 2–4 kHz and do not carry for long distances. These songs have been recorded as being up to 70 minutes long, and are more frequent after the end of the breeding season. Pairs of magpies often take up a loud musical calling known as carolling to advertise or defend their territory; one bird initiates the call with the second (and sometimes more) joining in. Often preceded by warbling, carolling is pitched between 6 and 8 kHz and has 4–5 elements with slurring indistinct noise in between. Birds adopt a specific posture by tilting their heads back, expanding their chests, and moving their wings backwards. A group of magpies sing a short repetitive version of carolling just before dawn (dawn song), and at twilight after sundown (dusk song), in winter and spring.
Fledgling and juvenile magpies emit a repeated short and loud (80 dB), high-pitched (8 kHz) begging call. Magpies may indulge in beak-clapping to warn other species of birds. They employ several high pitched (8–10 kHz) alarm or rallying calls when intruders or threats are spotted. Distinct calls have been recorded for the approach of eagles and monitor lizards.
Distribution and habitat
The Australian magpie is found in the Trans-Fly region of southern New Guinea, between the Oriomo River and Muli Strait, and across most of Australia, bar the tip of Cape York, the Gibson and Great Sandy Deserts, and the southwest of Tasmania.
The Australian magpie prefers open areas such as grassland, fields and residential areas such as parks, gardens, golf courses, and streets, with scattered trees or forest nearby. Birds nest and shelter in trees but forage mainly on the ground in these open areas. It has also been recorded in mature pine plantations; birds only occupy rainforest and wet sclerophyll forest in the vicinity of cleared areas. In general, evidence suggests the range and population of the Australian magpie has increased with land-clearing, although local declines in Queensland due to a 1902 drought, and in Tasmania in the 1930s have been noted; the cause for the latter is unclear but rabbit baiting, pine tree removal, and spread of the masked lapwing (Vanellus miles) have been implicated.
New Zealand
Birds taken mainly from Tasmania and Victoria were introduced into New Zealand by local Acclimatisation Societies of Otago and Canterbury in the 1860s, with the Wellington Acclimatisation Society releasing 260 birds in 1874. White-backed forms are spread over both the North Island and the eastern South Island, while black-backed forms are found in the Hawke's Bay region. Magpies were introduced into New Zealand to control agricultural pests, and were therefore a protected species until 1951. They are thought to affect native New Zealand bird populations such as the tūī and kererū, sometimes raiding nests for eggs and nestlings, although studies by Waikato University have cast doubt on this, and much blame on the magpie as a predator in the past has been anecdotal only. Introductions also occurred in the Solomon Islands and Sri Lanka, although the species has failed to become established. It has become established in western Taveuni in Fiji, however.
Behaviour
The Australian magpie is almost exclusively diurnal, although it may call into the night, like some other members of the Artamidae. Natural predators of magpies include various species of monitor lizard and the barking owl. Birds are often killed on roads or electrocuted by powerlines, or poisoned after killing and eating house sparrows, mice, rats or rabbits that have eaten poison bait. The Australian raven may take nestlings left unattended.
On the ground, the Australian magpie moves around by walking, and is the only member of the Artamidae to do so; woodswallows, butcherbirds and currawongs all tend to hop with legs parallel. The magpie has a short femur (thigh bone), and long lower leg below the knee, suited to walking rather than running, although birds can run in short bursts when hunting prey.
The magpie is generally sedentary and territorial throughout its range, living in groups occupying a territory, or in flocks or fringe groups. A group may occupy and defend the same territory for many years. Much energy is spent defending a territory from intruders, particularly other magpies, and different behaviours are seen with different opponents. The sight of a raptor results in a rallying call by sentinel birds and subsequent coordinated mobbing of the intruder. Magpies place themselves on either side of the bird of prey so that it will be attacked from behind should it strike a defender, and harass and drive the raptor to some distance beyond the territory. A group will use carolling as a signal to advertise ownership and warn off other magpies. In the negotiating display, the one or two dominant magpies parade along the border of the defended territory while the rest of the group stand back a little and look on. The leaders may fluff their feathers or carol repeatedly. In a group strength display, employed if both the opposing and defending groups are of roughly equal numbers, all magpies will fly and form a row at the border of the territory. The defending group may also resort to an aerial display where the dominant magpies, or sometimes the whole group, swoop and dive while calling to warn an intruding magpie's group.
A wide variety of displays are seen, with aggressive behaviours outnumbering pro-social ones. Crouching low and uttering quiet begging calls are common signs of submission. The manus flutter is a submissive display where a magpie will flutter the primary feathers in its wings. A magpie, particularly a juvenile, may also fall, roll over on its back and expose its underparts. Birds may fluff up their flank feathers as an aggressive display or preceding an attack. Young birds display various forms of play behaviour, either by themselves or in groups, with older birds often initiating the proceedings with juveniles. These may involve picking up, manipulating or tugging at various objects such as sticks, rocks or bits of wire, and handing them to other birds. A bird may pick up a feather or leaf and fly off with it, with other birds pursuing and attempting to bring down the leader by latching onto its tail feathers. Birds may jump on each other and even engage in mock fighting. Play may even take place with other species such as blue-faced honeyeaters and Australasian pipits.
A 2022 study showed cooperative behaviour, along with a moderate level of problem-solving, when magpies (G. tibicen) assisted one another to remove tracking devices placed on their bodies in a specially-designed harness by researchers for conservation purposes. This was the first recorded example of birds acting in this way to remove tracking devices, a form of rescue behaviour.
Breeding
Magpies have a long breeding season which varies in different parts of the country; in northern parts of Australia they will breed between June and September, but not commence until August or September in cooler regions, and may continue until January in some alpine areas. The nest is a bowl-shaped structure made of sticks and lined with softer material such as grass and bark. Near human habitation, synthetic material may be incorporated. Nests are built exclusively by females and generally placed high up in a tree fork, often in an exposed position. The trees used are most commonly eucalypts, although a variety of other native trees as well as introduced pine, Crataegus, and elm have been recorded. Other bird species, such as the yellow-rumped thornbill (Acanthiza chrysorrhoa), willie wagtail (Rhipidura leucophrys), southern whiteface (Aphelocephala leucopsis), and (less commonly) noisy miner (Manorina melanocephala), often nest in the same tree as the magpie. The first two species may even locate their nest directly beneath a magpie nest, while the diminutive striated pardalote (Pardalotus striatus) has been known to make a burrow for breeding into the base of the magpie nest itself. These incursions are all tolerated by the magpies. The channel-billed cuckoo (Scythrops novaehollandiae) is a notable brood parasite in eastern Australia; magpies will raise cuckoo young, which eventually outcompete the magpie nestlings.
The Australian magpie produces a clutch of two to five light blue or greenish eggs, which are oval in shape and about . The chicks hatch synchronously around 20 days after incubation begins; like all passerines, the chicks are altricial—they are born pink, naked, and blind with large feet, a short broad beak and a bright red throat. Their eyes are fully open at around 10 days. Chicks develop fine downy feathers on their head, back and wings in the first week, and pinfeathers in the second week. The black and white colouration is noticeable from an early stage. Nestlings are usually fed exclusively by the female, though the male magpie will feed his partner. Individual males do feed nestlings and fledglings, to varying degrees, from sporadic to equal frequency to the female. The Australian magpie is known to engage in cooperative breeding, and helper birds will assist in feeding and raising young. This does vary from region to region, and with the size of the group—the behaviour is rare or non-existent in pairs or small groups.
Juvenile magpies begin foraging on their own three weeks after leaving the nest, and mostly feeding themselves by six months old. Some birds continue begging for food until eight or nine months of age, but are usually ignored. Birds reach adult size by their first year. The age at which young birds disperse varies across the country, and depends on the aggressiveness of the dominant adult of the corresponding sex; males are usually evicted at a younger age. Many leave at around a year old, but the age of departure may range from eight months to four years.
Feeding
The Australian magpie is omnivorous, eating various items located at or near ground level including invertebrates such as earthworms, millipedes, snails, spiders and scorpions as well as a wide variety of insects—cockroaches, ants, earwigs, beetles, cicadas, moths and caterpillars and other larvae. Insects, including large adult grasshoppers, may be seized mid-flight. Skinks, frogs, mice and other small animals as well as grain, tubers, figs and walnuts have also been noted as components of their diet.
It has even learnt to safely eat the poisonous cane toad by flipping it over and consuming the underparts. Predominantly a ground feeder, the Australian magpie paces open areas methodically searching for insects and their larvae. One study showed birds were able to find scarab beetle larvae by sound or vibration. Birds use their bills to probe into the earth or otherwise overturn debris in search of food. Smaller prey are swallowed whole, although magpies rub off the stingers of bees, stinging ants and wasps and irritating hairs of caterpillars before swallowing.
Swooping
Magpies are ubiquitous in urban areas all over Australia, and have become accustomed to people. A small percentage of birds become highly aggressive during breeding season from late August to late November – early December or occasionally late February to late April – early May, and will swoop and sometimes attack passersby. Attacks begin as the eggs hatch, increase in frequency and severity as the chicks grow, and tail off as the chicks leave the nest.
Magpie attacks occur in most parts of Australia, though Tasmanian magpies are much less aggressive than their mainland counterparts. Magpie attacks can cause injuries, typically wounds to the head. Being unexpectedly swooped while cycling can result in loss of control of the bicycle, which may cause injury or even fatal accidents.
Magpies may engage in an escalating series of behaviours to drive off intruders. Least threatening are alarm calls and distant swoops, where birds fly within several metres from behind and perch nearby. Next in intensity are close swoops, where a magpie will swoop in from behind or the side and audibly "snap" their beaks or even peck or bite at the face, neck, ears or eyes. More rarely, a bird may dive-bomb and strike the intruder's (usually a cyclist's) head with its chest. A magpie may rarely attack by landing on the ground in front of a person and lurching up and landing on the victim's chest and pecking at the face and eyes.
Targets
The percentage of magpies that swoop has been difficult to estimate but is less than 9%. Almost all attacking birds (around 99%) are male, and they are generally known to attack pedestrians at around from their nest, and cyclists at around . There appears to be some specificity in choice of attack targets, with the majority of individuals specializing on either pedestrians or cyclists.
Younger people, lone people, and people travelling quickly (i.e., runners and cyclists) appear to be targeted most often by swooping magpies. Anecdotal evidence suggests that if a magpie sees a human trying to rescue a chick that has fallen from its nest, the bird will view this help as predation, and will become more aggressive to humans from then on. Some attacks have indirectly been fatal. For example, in 2021, a Brisbane woman tripped and fell onto her infant while attempting to avoid a swooping magpie, and the infant died.
Prevention
Magpies are a protected native species in Australia, so it is illegal to kill or harm them. However, some states provide exceptions for a magpie that attacks a human, allowing a particularly aggressive bird to be killed. Such a provision is made, for example, in section 54 of the South Australian National Parks and Wildlife Act. More commonly, an aggressive bird will be caught and relocated to an unpopulated area. Magpies have to be moved a considerable distance, as almost all are able to find their way home from distances of less than . Removing the nest is of no use, as birds will breed again and possibly be more aggressive the second time around.
If it is necessary to walk near the nest, wearing a broad-brimmed or legionnaire's hat or using an umbrella will deter attacking birds, but beanies and bicycle helmets are of little value, as birds attack the sides of the head and neck. Magpies prefer to swoop at the back of the head; therefore, keeping the magpie in sight at all times can discourage the bird. A basic disguise such as sunglasses worn on the back of the head may fool the magpie as to where a person is looking. Eyes painted on hats or helmets will deter attacks on pedestrians but not cyclists. Cyclists can deter attack by attaching a long pole with a flag to a bike, and the use of cable ties on helmets has become common and appears to be effective.
Some claim that hand-feeding magpies can reduce the risk of swooping. Magpies will become accustomed to being fed by humans, and although they are wild, will return to the same place looking for handouts. The idea is that humans thereby appear less of a threat to the nesting birds. Although this has not been studied systematically, there are reports of its success.
Cultural references
The Australian magpie figures in Aboriginal folklore around Australia. The Yindjibarndi people of the Pilbara in the northwest of the country used the bird as a signal for sunrise, awakening them with its call. They were also familiar with its highly territorial nature, and it features in a song in their Burndud, or songs of customs. It was a totem bird of the people of the Illawarra region south of Sydney.
Under the name piping shrike, the white-backed magpie was declared the official emblem of the Government of South Australia in 1901 by Governor Tennyson, and has featured on the South Australian flag since 1904. The magpie is a commonly used emblem of sporting teams in Australia, and its brash, cocky attitude has been likened to the Australian psyche. Such teams tend to wear uniforms with black and white stripes. The Collingwood Football Club adopted the magpie from a visiting South Australian representative team in 1892. The Port Adelaide Magpies would likewise adopt the black and white colours and Magpie name in 1902. Other examples include Brisbane's Souths Logan Magpies and Sydney's Western Suburbs Magpies. Disputes over who has been the first club to adopt the magpie emblem have been heated at times. Another club, Glenorchy Football Club of Tasmania, was forced to change uniform design when placed in the same league as another club (Claremont Magpies) with the same emblem.
In New Zealand, the Hawke's Bay Rugby Union team, from Napier, New Zealand, is also known as the magpies. One of the best-known New Zealand poems is "The Magpies" by Denis Glover, with its refrain "Quardle oodle ardle wardle doodle", imitating the sound of the bird – and the popular New Zealand comic Footrot Flats features a magpie character by the name of Pew. Other magpies depicted in fiction include: Magpie in Colin Thiele's 1974 children's book Magpie Island, Miss Magpie in The Adventures of Blinky Bill, and Penguin the magpie in Penguin Bloom. The sculpture Big Swoop in central Canberra was installed in Garema Place on 16 March 2022.
The Australian magpie won the inaugural Australian Bird of the Year poll, conducted by Guardian Australia and BirdLife Australia in late 2017, with 19,926 votes (13.3%), narrowly ahead of the Australian white ibis. The magpie slumped to fourth place in the 2019 poll, and to ninth in the 2021 poll. The voting rules changed in all three years of the Bird of the Year poll, which may have affected the results. The magpie also won a 2023 ABC Science poll for Australia's favourite animal sound.
| Biology and health sciences | Passerida | Animals |
195795 | https://en.wikipedia.org/wiki/Metric%20tensor | Metric tensor | In the mathematical field of differential geometry, a metric tensor (or simply metric) is an additional structure on a manifold (such as a surface) that allows defining distances and angles, just as the inner product on a Euclidean space allows defining distances and angles there. More precisely, a metric tensor at a point of a manifold is a bilinear form defined on the tangent space at that point (that is, a bilinear function that maps pairs of tangent vectors to real numbers), and a metric field on the manifold consists of a metric tensor at each point that varies smoothly from point to point.
A metric tensor is positive-definite if it assigns a positive value to every nonzero vector. A manifold equipped with a positive-definite metric tensor is known as a Riemannian manifold. Such a metric tensor can be thought of as specifying infinitesimal distance on the manifold. On a Riemannian manifold, the length of a smooth curve between two points can be defined by integration, and the distance between the points can be defined as the infimum of the lengths of all such curves; this makes the manifold a metric space. Conversely, the metric tensor itself is the derivative of the distance function (taken in a suitable manner).
While the notion of a metric tensor was known in some sense to mathematicians such as Gauss from the early 19th century, it was not until the early 20th century that its properties as a tensor were understood by, in particular, Gregorio Ricci-Curbastro and Tullio Levi-Civita, who first codified the notion of a tensor. The metric tensor is an example of a tensor field.
The components of a metric tensor in a coordinate basis take on the form of a symmetric matrix whose entries transform covariantly under changes to the coordinate system. Thus a metric tensor is a covariant symmetric tensor. From the coordinate-independent point of view, a metric tensor field is defined to be a nondegenerate symmetric bilinear form on each tangent space that varies smoothly from point to point.
Introduction
Carl Friedrich Gauss in his 1827 Disquisitiones generales circa superficies curvas (General investigations of curved surfaces) considered a surface parametrically, with the Cartesian coordinates x, y, and z of points on the surface depending on two auxiliary variables u and v. Thus a parametric surface is (in today's terms) a vector-valued function
depending on an ordered pair of real variables (u, v), and defined in an open set in the uv-plane. One of the chief aims of Gauss's investigations was to deduce those features of the surface which could be described by a function which would remain unchanged if the surface underwent a transformation in space (such as bending the surface without stretching it), or a change in the particular parametric form of the same geometrical surface.
One natural such invariant quantity is the length of a curve drawn along the surface. Another is the angle between a pair of curves drawn along the surface and meeting at a common point. A third such quantity is the area of a piece of the surface. The study of these invariants of a surface led Gauss to introduce the predecessor of the modern notion of the metric tensor.
In the description below, the metric tensor is represented by the matrix of coefficients E, F, and G, which may take any values as long as the matrix is positive definite.
Arc length
If the variables u and v are taken to depend on a third variable, t, taking values in an interval [a, b], then the corresponding point will trace out a parametric curve in the parametric surface. The arc length of that curve is given by the integral
where represents the Euclidean norm. Here the chain rule has been applied, and the subscripts denote partial derivatives:
The integrand is the restriction to the curve of the square root of the (quadratic) differential
where
The quantity ds in () is called the line element, while ds² is called the first fundamental form of the surface. Intuitively, it represents the principal part of the square of the displacement undergone by the surface point when u is increased by du units, and v is increased by dv units.
Using matrix notation, the first fundamental form becomes
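In the notation already introduced (the coefficients E, F, G and the differentials du, dv), the standard matrix form, reconstructed here for reference, is

```latex
ds^2 = E\,du^2 + 2F\,du\,dv + G\,dv^2
     = \begin{pmatrix} du & dv \end{pmatrix}
       \begin{pmatrix} E & F \\ F & G \end{pmatrix}
       \begin{pmatrix} du \\ dv \end{pmatrix}
```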
Coordinate transformations
Suppose now that a different parameterization is selected, by allowing and to depend on another pair of variables and . Then the analog of () for the new variables is
The chain rule relates , , and to , , and via the matrix equation
where the superscript T denotes the matrix transpose. The matrix with the coefficients , , and arranged in this way therefore transforms by the Jacobian matrix of the coordinate change
A matrix which transforms in this way is one kind of what is called a tensor. The matrix
with the transformation law () is known as the metric tensor of the surface.
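Writing J for the Jacobian matrix of the coordinate change, the transformation law takes the standard form below (a reconstruction in the notation of this section):

```latex
\begin{pmatrix} E' & F' \\ F' & G' \end{pmatrix}
= J^{\mathsf{T}}
  \begin{pmatrix} E & F \\ F & G \end{pmatrix} J,
\qquad
J = \begin{pmatrix}
      \partial u/\partial u' & \partial u/\partial v' \\
      \partial v/\partial u' & \partial v/\partial v'
    \end{pmatrix}
```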
Invariance of arclength under coordinate transformations
It was first observed that the system of coefficients E, F, and G transforms in this way on passing from one system of coordinates to another. The upshot is that the first fundamental form () is invariant under changes in the coordinate system, and that this follows exclusively from the transformation properties of E, F, and G. Indeed, by the chain rule,
so that
Length and angle
Another interpretation of the metric tensor, also considered by Gauss, is that it provides a way in which to compute the length of tangent vectors to the surface, as well as the angle between two tangent vectors. In contemporary terms, the metric tensor allows one to compute the dot product of tangent vectors in a manner independent of the parametric description of the surface. Any tangent vector at a point of the parametric surface can be written in the form
for suitable real numbers and . If two tangent vectors are given:
then using the bilinearity of the dot product,
This is plainly a function of the four variables , , , and . It is more profitably viewed, however, as a function that takes a pair of arguments and which are vectors in the -plane. That is, put
This is a symmetric function in and , meaning that
It is also bilinear, meaning that it is linear in each variable and separately. That is,
for any vectors , , , and in the plane, and any real numbers and .
In particular, the length of a tangent vector is given by
and the angle between two vectors and is calculated by
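Denoting the bilinear form by g and the two tangent vectors by a and b (names chosen here for illustration), the standard formulas read

```latex
\|a\| = \sqrt{g(a,a)}, \qquad
\cos\theta = \frac{g(a,b)}{\sqrt{g(a,a)}\,\sqrt{g(b,b)}}
```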
Area
The surface area is another numerical quantity which should depend only on the surface itself, and not on how it is parameterized. If the surface is parameterized by the function over the domain in the -plane, then the surface area of is given by the integral
where denotes the cross product, and the absolute value denotes the length of a vector in Euclidean space. By Lagrange's identity for the cross product, the integral can be written
where is the determinant.
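By Lagrange's identity, the quantity under the square root is the determinant EG − F² of the matrix of the first fundamental form, so the area integral takes the familiar form (a reconstruction, with D denoting the parameter domain):

```latex
A = \iint_D \sqrt{EG - F^2}\; du\, dv
```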
Definition
Let M be a smooth manifold of dimension n; for instance a surface (in the case n = 2) or a hypersurface in Cartesian space. At each point p of M there is a vector space, called the tangent space, consisting of all tangent vectors to the manifold at the point p. A metric tensor at p is a function which takes as inputs a pair of tangent vectors at p, and produces as an output a real number (scalar), so that the following conditions are satisfied:
is bilinear. A function of two vector arguments is bilinear if it is linear separately in each argument. Thus if , , are three tangent vectors at and and are real numbers, then
is symmetric. A function of two vector arguments is symmetric provided that for all vectors and ,
is nondegenerate. A bilinear function is nondegenerate provided that, for every tangent vector , the function obtained by holding constant and allowing to vary is not identically zero. That is, for every there exists a such that .
A metric tensor field on assigns to each point of a metric tensor in the tangent space at in a way that varies smoothly with . More precisely, given any open subset of manifold and any (smooth) vector fields and on , the real function
is a smooth function of .
Components of the metric
The components of the metric in any basis of vector fields, or frame, are given by
The functions form the entries of an symmetric matrix, . If
are two vectors at , then the value of the metric applied to and is determined by the coefficients () by bilinearity:
Denoting the matrix by and arranging the components of the vectors and into column vectors and ,
where T and T denote the transpose of the vectors and , respectively. Under a change of basis of the form
for some invertible matrix , the matrix of components of the metric changes by as well. That is,
or, in terms of the entries of this matrix,
For this reason, the system of quantities is said to transform covariantly with respect to changes in the frame .
Metric in coordinates
A system of real-valued functions , giving a local coordinate system on an open set in , determines a basis of vector fields on
The metric has components relative to this frame given by
Relative to a new system of local coordinates, say
the metric tensor will determine a different matrix of coefficients,
This new system of functions is related to the original by means of the chain rule
so that
Or, in terms of the matrices and ,
where denotes the Jacobian matrix of the coordinate change.
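As a concrete illustration of this transformation law, the following sketch (assuming only NumPy; names are illustrative) pulls the flat Euclidean metric back from Cartesian to polar coordinates via the Jacobian. The polar form it produces reappears in the examples section later in this article.

```python
import numpy as np

# Euclidean metric in Cartesian coordinates (x, y): the identity matrix.
g_cartesian = np.eye(2)

def polar_metric(r, theta):
    """Pull the Cartesian metric back to polar coordinates via g' = J^T g J,
    where J is the Jacobian of (x, y) = (r cos(theta), r sin(theta))."""
    J = np.array([
        [np.cos(theta), -r * np.sin(theta)],
        [np.sin(theta),  r * np.cos(theta)],
    ])
    return J.T @ g_cartesian @ J

# At any point the result is diag(1, r^2), i.e. ds^2 = dr^2 + r^2 dtheta^2.
print(polar_metric(2.0, 0.7).round(12))   # [[1. 0.] [0. 4.]]
```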
Signature of a metric
Associated to any metric tensor is the quadratic form defined in each tangent space by
If the quadratic form is positive for all non-zero vectors, then the metric is positive-definite at that point. If the metric is positive-definite at every point, then it is called a Riemannian metric. More generally, if the quadratic forms have constant signature independent of the point, then the signature of the metric is this signature, and the metric is called a pseudo-Riemannian metric. If the manifold is connected, then the signature does not depend on the point.
By Sylvester's law of inertia, a basis of tangent vectors can be chosen locally so that the quadratic form diagonalizes in the following manner
for some value between 1 and the dimension n. Any two such expressions (at the same point) will have the same number of positive signs. The signature of the metric is the pair of integers (p, q), signifying that there are p positive signs and q negative signs in any such expression. Equivalently, the metric has signature (p, q) if the matrix of the metric has p positive and q negative eigenvalues.
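Because the signature can be read off from the eigenvalues of the component matrix, it is straightforward to compute numerically. The following is a minimal sketch (the function name and tolerance are illustrative choices, not standard library API):

```python
import numpy as np

def signature(g, tol=1e-12):
    """Return (p, q), the counts of positive and negative eigenvalues of the
    symmetric matrix g -- the signature of the metric it represents."""
    eigenvalues = np.linalg.eigvalsh(g)
    p = int(np.sum(eigenvalues > tol))
    q = int(np.sum(eigenvalues < -tol))
    return p, q

print(signature(np.diag([1.0, 1.0, 1.0])))         # (3, 0): Riemannian
print(signature(np.diag([-1.0, 1.0, 1.0, 1.0])))   # (3, 1): Lorentzian
```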
Certain metric signatures which arise frequently in applications are:
If has signature , then is a Riemannian metric, and is called a Riemannian manifold. Otherwise, is a pseudo-Riemannian metric, and is called a pseudo-Riemannian manifold (the term semi-Riemannian is also used).
If is four-dimensional with signature or , then the metric is called Lorentzian. More generally, a metric tensor in dimension other than 4 of signature or is sometimes also called Lorentzian.
If is -dimensional and has signature , then the metric is called ultrahyperbolic.
Inverse metric
Let be a basis of vector fields, and as above let be the matrix of coefficients
One can consider the inverse matrix , which is identified with the inverse metric (or conjugate or dual metric). The inverse metric satisfies a transformation law when the frame is changed by a matrix via
The inverse metric transforms contravariantly, or with respect to the inverse of the change of basis matrix . Whereas the metric itself provides a way to measure the length of (or angle between) vector fields, the inverse metric supplies a means of measuring the length of (or angle between) covector fields; that is, fields of linear functionals.
To see this, suppose that is a covector field. To wit, for each point , determines a function defined on tangent vectors at so that the following linearity condition holds for all tangent vectors and , and all real numbers and :
As varies, is assumed to be a smooth function in the sense that
is a smooth function of for any smooth vector field .
Any covector field has components in the basis of vector fields . These are determined by
Denote the row vector of these components by
Under a change of by a matrix , changes by the rule
That is, the row vector of components transforms as a covariant vector.
For a pair and of covector fields, define the inverse metric applied to these two covectors by
The resulting definition, although it involves the choice of basis , does not actually depend on in an essential way. Indeed, changing basis to gives
So that the right-hand side of equation () is unaffected by changing the basis to any other basis whatsoever. Consequently, the equation may be assigned a meaning independently of the choice of basis. The entries of the matrix are denoted by , where the indices and have been raised to indicate the transformation law ().
Raising and lowering indices
In a basis of vector fields , any smooth tangent vector field can be written in the form
for some uniquely determined smooth functions . Upon changing the basis by a nonsingular matrix , the coefficients change in such a way that equation () remains true. That is,
Consequently, . In other words, the components of a vector transform contravariantly (that is, inversely or in the opposite way) under a change of basis by the nonsingular matrix . The contravariance of the components of is notationally designated by placing the indices of in the upper position.
A frame also allows covectors to be expressed in terms of their components. For the basis of vector fields define the dual basis to be the linear functionals such that
That is, , the Kronecker delta. Let
Under a change of basis for a nonsingular matrix , transforms via
Any linear functional on tangent vectors can be expanded in terms of the dual basis
where denotes the row vector . The components transform when the basis is replaced by in such a way that equation () continues to hold. That is,
whence, because , it follows that . That is, the components transform covariantly (by the matrix rather than its inverse). The covariance of the components of is notationally designated by placing the indices of in the lower position.
Now, the metric tensor gives a means to identify vectors and covectors as follows. Holding fixed, the function
of tangent vector defines a linear functional on the tangent space at . This operation takes a vector at a point and produces a covector . In a basis of vector fields , if a vector field has components , then the components of the covector field in the dual basis are given by the entries of the row vector
Under a change of basis , the right-hand side of this equation transforms via
so that : transforms covariantly. The operation of associating to the (contravariant) components of a vector field T the (covariant) components of the covector field , where
is called lowering the index.
To raise the index, one applies the same construction but with the inverse metric instead of the metric. If are the components of a covector in the dual basis , then the column vector
has components which transform contravariantly:
Consequently, the quantity does not depend on the choice of basis in an essential way, and thus defines a vector field on . The operation () associating to the (covariant) components of a covector the (contravariant) components of a vector given is called raising the index. In components, () is
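In index notation the two operations read as follows (standard formulas, reconstructed here):

```latex
a_i = g_{ij} v^j \quad \text{(lowering)}, \qquad
v^i = g^{ij} a_j \quad \text{(raising)}
```

A minimal numerical check, assuming only NumPy (the sample metric is arbitrary):

```python
import numpy as np

# An arbitrary symmetric positive-definite sample metric and its inverse.
g = np.array([[2.0, 0.5],
              [0.5, 1.0]])
g_inv = np.linalg.inv(g)

v = np.array([1.0, 3.0])                 # contravariant components v^i

a = np.einsum('ij,j->i', g, v)           # lowering: a_i = g_ij v^j
v_back = np.einsum('ij,j->i', g_inv, a)  # raising: v^i = g^ij a_j

print(np.allclose(v, v_back))            # True: raising undoes lowering
```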
Induced metric
Let be an open set in , and let be a continuously differentiable function from into the Euclidean space , where . The mapping is called an immersion if its differential is injective at every point of . The image of is called an immersed submanifold. More specifically, for , which means that the ambient Euclidean space is , the induced metric tensor is called the first fundamental form.
Suppose that is an immersion onto the submanifold . The usual Euclidean dot product in is a metric which, when restricted to vectors tangent to , gives a means for taking the dot product of these tangent vectors. This is called the induced metric.
Suppose that is a tangent vector at a point of , say
where are the standard coordinate vectors in . When is applied to , the vector goes over to the vector tangent to given by
(This is called the pushforward of along .) Given two such vectors, and , the induced metric is defined by
It follows from a straightforward calculation that the matrix of the induced metric in the basis of coordinate vector fields is given by
where is the Jacobian matrix:
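For example, the induced metric can be computed directly as JᵀJ. The sketch below (assuming NumPy; the parameterization is the usual one of the unit sphere) reproduces the round metric derived later in this article:

```python
import numpy as np

def sphere_induced_metric(theta, phi):
    """Induced metric J^T J for the unit sphere parameterized by
    colatitude theta and longitude phi."""
    # Jacobian of f(theta, phi) = (sin t cos p, sin t sin p, cos t).
    J = np.array([
        [np.cos(theta) * np.cos(phi), -np.sin(theta) * np.sin(phi)],
        [np.cos(theta) * np.sin(phi),  np.sin(theta) * np.cos(phi)],
        [-np.sin(theta),               0.0],
    ])
    return J.T @ J

# Expected: diag(1, sin^2(theta)) -- the round metric of the sphere.
print(sphere_induced_metric(np.pi / 3, 1.0).round(12))
```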
Intrinsic definitions of a metric
The notion of a metric can be defined intrinsically using the language of fiber bundles and vector bundles. In these terms, a metric tensor is a function
from the fiber product of the tangent bundle of with itself to such that the restriction of to each fiber is a nondegenerate bilinear mapping
The mapping () is required to be continuous, and often continuously differentiable, smooth, or real analytic, depending on the case of interest, and whether can support such a structure.
Metric as a section of a bundle
By the universal property of the tensor product, any bilinear mapping () gives rise naturally to a section of the dual of the tensor product bundle of with itself
The section is defined on simple elements of by
and is defined on arbitrary elements of by extending linearly to linear combinations of simple elements. The original bilinear form is symmetric if and only if
where
is the braiding map.
Since is finite-dimensional, there is a natural isomorphism
so that is regarded also as a section of the bundle of the cotangent bundle with itself. Since is symmetric as a bilinear mapping, it follows that is a symmetric tensor.
Metric in a vector bundle
More generally, one may speak of a metric in a vector bundle. If is a vector bundle over a manifold , then a metric is a mapping
from the fiber product of to which is bilinear in each fiber:
Using duality as above, a metric is often identified with a section of the tensor product bundle .
Tangent–cotangent isomorphism
The metric tensor gives a natural isomorphism from the tangent bundle to the cotangent bundle, sometimes called the musical isomorphism. This isomorphism is obtained by setting, for each tangent vector ,
the linear functional on which sends a tangent vector at to . That is, in terms of the pairing between and its dual space ,
for all tangent vectors and . The mapping is a linear transformation from to . It follows from the definition of non-degeneracy that the kernel of is reduced to zero, and so by the rank–nullity theorem, is a linear isomorphism. Furthermore, is a symmetric linear transformation in the sense that
for all tangent vectors and .
Conversely, any linear isomorphism defines a non-degenerate bilinear form on by means of
This bilinear form is symmetric if and only if is symmetric. There is thus a natural one-to-one correspondence between symmetric bilinear forms on and symmetric linear isomorphisms of to the dual .
As varies over , defines a section of the bundle of vector bundle isomorphisms of the tangent bundle to the cotangent bundle. This section has the same smoothness as : it is continuous, differentiable, smooth, or real-analytic according as . The mapping , which associates to every vector field on a covector field on gives an abstract formulation of "lowering the index" on a vector field. The inverse of is a mapping which, analogously, gives an abstract formulation of "raising the index" on a covector field.
The inverse defines a linear mapping
which is nonsingular and symmetric in the sense that
for all covectors , . Such a nonsingular symmetric mapping gives rise (by the tensor-hom adjunction) to a map
or by the double dual isomorphism to a section of the tensor product
Arclength and the line element
Suppose that is a Riemannian metric on . In a local coordinate system , , the metric tensor appears as a matrix, denoted here by , whose entries are the components of the metric tensor relative to the coordinate vector fields.
Let be a piecewise-differentiable parametric curve in , for . The arclength of the curve is defined by
In connection with this geometrical application, the quadratic differential form
is called the first fundamental form associated to the metric, while is the line element. When is pulled back to the image of a curve in , it represents the square of the differential with respect to arclength.
For a pseudo-Riemannian metric, the length formula above is not always defined, because the term under the square root may become negative. We generally only define the length of a curve when the quantity under the square root is always of one sign or the other. In this case, define
While these formulas use coordinate expressions, they are in fact independent of the coordinates chosen; they depend only on the metric, and the curve along which the formula is integrated.
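A numerical version of the arclength integral makes this concrete. The following sketch (assuming NumPy; function names are illustrative) integrates the length of a quarter of the unit circle in polar coordinates and recovers π/2:

```python
import numpy as np

def arc_length(metric, curve, velocity, t0, t1, n=10_000):
    """Midpoint-rule approximation of L = integral of sqrt(g(c'(t), c'(t))) dt.
    `metric` maps a point to the matrix [g_ij] at that point."""
    dt = (t1 - t0) / n
    total = 0.0
    for k in range(n):
        t = t0 + (k + 0.5) * dt
        v = velocity(t)
        total += np.sqrt(v @ metric(curve(t)) @ v) * dt
    return total

# Quarter of the unit circle in polar coordinates: r = 1, theta = t.
polar = lambda pt: np.diag([1.0, pt[0] ** 2])   # ds^2 = dr^2 + r^2 dtheta^2
curve = lambda t: np.array([1.0, t])
vel = lambda t: np.array([0.0, 1.0])
print(arc_length(polar, curve, vel, 0.0, np.pi / 2))   # ~ 1.5708 = pi/2
```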
The energy, variational principles and geodesics
Given a segment of a curve, another frequently defined quantity is the (kinetic) energy of the curve:
This usage comes from physics, specifically, classical mechanics, where the integral can be seen to directly correspond to the kinetic energy of a point particle moving on the surface of a manifold. Thus, for example, in Jacobi's formulation of Maupertuis' principle, the metric tensor can be seen to correspond to the mass tensor of a moving particle.
In many cases, whenever a calculation calls for the length to be used, a similar calculation using the energy may be done as well. This often leads to simpler formulas by avoiding the need for the square-root. Thus, for example, the geodesic equations may be obtained by applying variational principles to either the length or the energy. In the latter case, the geodesic equations are seen to arise from the principle of least action: they describe the motion of a "free particle" (a particle feeling no forces) that is confined to move on the manifold, but otherwise moves freely, with constant momentum, within the manifold.
Canonical measure and volume form
In analogy with the case of surfaces, a metric tensor on an -dimensional paracompact manifold gives rise to a natural way to measure the -dimensional volume of subsets of the manifold. The resulting natural positive Borel measure allows one to develop a theory of integrating functions on the manifold by means of the associated Lebesgue integral.
A measure can be defined, by the Riesz representation theorem, by giving a positive linear functional on the space of compactly supported continuous functions on . More precisely, if is a manifold with a (pseudo-)Riemannian metric tensor , then there is a unique positive Borel measure such that for any coordinate chart ,
for all supported in . Here is the determinant of the matrix formed by the components of the metric tensor in the coordinate chart. That is well-defined on functions supported in coordinate neighborhoods is justified by Jacobian change of variables. It extends to a unique positive linear functional on by means of a partition of unity.
If is also oriented, then it is possible to define a natural volume form from the metric tensor. In a positively oriented coordinate system the volume form is represented as
where the are the coordinate differentials and denotes the exterior product in the algebra of differential forms. The volume form also gives a way to integrate functions on the manifold, and this geometric integral agrees with the integral obtained by the canonical Borel measure.
Examples
Euclidean metric
The most familiar example is that of elementary Euclidean geometry: the two-dimensional Euclidean metric tensor. In the usual Cartesian coordinates, we can write
The length of a curve reduces to the formula:
The Euclidean metric in some other common coordinate systems can be written as follows.
Polar coordinates :
So
by trigonometric identities.
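Explicitly, substituting x = r cos θ and y = r sin θ and expanding gives the standard computation:

```latex
ds^2 = dx^2 + dy^2
     = (\cos\theta\,dr - r\sin\theta\,d\theta)^2 + (\sin\theta\,dr + r\cos\theta\,d\theta)^2
     = dr^2 + r^2\,d\theta^2
```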
In general, in a Cartesian coordinate system on a Euclidean space, the partial derivatives are orthonormal with respect to the Euclidean metric. Thus the metric tensor is the Kronecker delta δij in this coordinate system. The metric tensor with respect to arbitrary (possibly curvilinear) coordinates is given by
The round metric on a sphere
The unit sphere in comes equipped with a natural metric induced from the ambient Euclidean metric, through the process explained in the induced metric section. In standard spherical coordinates , with the colatitude, the angle measured from the -axis, and the angle from the -axis in the -plane, the metric takes the form
This is usually written in the form
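That is, with θ the colatitude and φ the azimuthal angle, the standard form (reconstructed here) is

```latex
ds^2 = d\theta^2 + \sin^2\theta\; d\varphi^2
```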
Lorentzian metrics from relativity
In flat Minkowski space (special relativity), with coordinates
the metric is, depending on choice of metric signature,
For a curve with—for example—constant time coordinate, the length formula with this metric reduces to the usual length formula. For a timelike curve, the length formula gives the proper time along the curve.
In this case, the spacetime interval is written as
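With coordinates (t, x, y, z) and the speed of light c, the two sign conventions give the standard forms

```latex
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
\qquad \text{or} \qquad
ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2
```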
The Schwarzschild metric describes the spacetime around a spherically symmetric body, such as a planet, or a black hole. With coordinates
we can write the metric as
where (inside the matrix) is the gravitational constant and represents the total mass–energy content of the central object.
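In the coordinates (t, r, θ, φ), with G the gravitational constant and M the mass–energy of the central object, the standard form of the Schwarzschild metric (reconstructed here for reference) is

```latex
ds^2 = -\left(1 - \frac{2GM}{rc^2}\right)c^2\,dt^2
       + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2
       + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)
```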
| Mathematics | Linear algebra | null |
195802 | https://en.wikipedia.org/wiki/Eurasian%20magpie | Eurasian magpie | The Eurasian magpie or common magpie (Pica pica) is a resident breeding bird throughout the northern part of the Eurasian continent. It is one of several birds in the crow family (corvids) designated magpies, and belongs to the Holarctic radiation of "monochrome" magpies. In Europe, "magpie" is used by English speakers as a synonym for the Eurasian magpie: the only other magpie in Europe is the Iberian magpie (Cyanopica cooki), which is limited to the Iberian Peninsula. Despite having a shared name and similar colouration, it is not closely related to the Australian magpie.
The Eurasian magpie is one of the most intelligent birds. Its nidopallium is approximately the same relative size as those of chimpanzees, gorillas, orangutans and humans. It is the only non-mammalian species known to pass the mirror test.
Taxonomy and systematics
The magpie was described and illustrated by Swiss naturalist Conrad Gessner in his Historiae animalium of 1555. In 1758, Linnaeus included the species in the 10th edition of his Systema Naturae under the binomial name Corvus pica. The magpie was moved to a separate genus Pica by the French zoologist Mathurin Jacques Brisson in 1760. Pica is the Classical Latin word for this magpie.
The Eurasian magpie is almost identical in appearance to the North American black-billed magpie (Pica hudsonia) and at one time the two species were considered to be conspecific. The English name used was "black-billed magpie" and the scientific name used was Pica pica. In 2000, the American Ornithologists' Union decided to treat the black-billed magpie as a separate species based on studies of the vocalization and behaviour that indicated that the black-billed magpie was closer to the yellow-billed magpie (Pica nuttalli) than to the Eurasian magpie.
The gradual clinal variation over the large geographic range and the intergradation of the different subspecies means that the geographical limits, and acceptance of the various subspecies, vary between authorities. The International Ornithological Congress recognises six subspecies (a seventh, P. p. hemileucoptera, is included in P. p. bactriana):
P. p. fennorum – Lönnberg, 1927: northern Scandinavia and northwest Russia
P. p. pica – (Linnaeus, 1758): British Isles and southern Scandinavia east to Russia, south to Mediterranean, including most islands
P. p. melanotos – A.E. Brehm, 1857: Iberian Peninsula
P. p. bactriana – Bonaparte, 1850: Siberia east to Lake Baikal, south to Caucasus, Iraq, Iran, Central Asia and Pakistan
P. p. leucoptera – Gould, 1862: southeast Russia and northeast China
P. p. camtschatica – Stejneger, 1884: northern Sea of Okhotsk, and Kamchatka Peninsula in Russian Far East
Others now considered as distinct species:
P. mauritanica – Malherbe, 1845: North Africa (Morocco, northern Algeria and Tunisia) (now considered a separate species, the Maghreb magpie)
P. asirensis – Bates, 1936: southwest Saudi Arabia (now considered a separate species, the Asir magpie)
P. serica – Gould, 1845: east and south China, Taiwan, north Myanmar, north Laos and north Vietnam (now considered a separate species, the Oriental magpie)
P. bottanensis – Delessert, 1840: west central China (now considered a separate species, the black-rumped magpie)
A study using both mitochondrial and nuclear DNA found that magpies in eastern and northeastern China are genetically very similar to each other, but differ from those in northwestern China and Spain.
Etymology
Magpies were originally known as simply "pies". This is hypothesized to derive from a Proto-Indo-European root *(s)peyk- meaning "pointed", in reference to the beak or perhaps the tail (cf. woodpecker). The prefix "mag" dates from the 16th century and comes from the short form of the given name Margaret, which was once used to mean women in general (as Joe or Jack is used for men today); the pie's call was considered to sound like the idle chattering of a woman, and so it came to be called the "Mag pie". "Pie" as a term for the bird dates to the 13th century, and the word "pied", first recorded in 1552, became applied to other birds that resembled the magpie in having black-and-white plumage.
Description
The adult male of the nominate subspecies, P. p. pica, is in length, of which more than half is the tail. The wingspan is . The head, neck, breast and vent are glossy black with a metallic green and violet sheen; the belly and scapulars (shoulder feathers) are pure white; the wings are black glossed with green or purple, and the primaries have white inner webs, conspicuous when the wing is open. The graduated tail is black, glossed with green and reddish purple. The legs and bill are black; the iris is dark brown. The rump is black with white stripe above which varies in thickness between subspecies. The plumage of the sexes is similar but females are slightly smaller. The tail feathers of both sexes are quite long, about 12–28 cm long. Males of the nominate subspecies weigh while females weigh . The young resemble the adults, but are at first without much of the gloss on the sooty plumage. The young have the malar region pink, and somewhat clear eyes. The tail is much shorter than the adults.
The subspecies differ in their size, the amount of white on their plumage and the colour of the gloss on their black feathers. The Asian subspecies P. p. bactriana has more extensive white on the primaries and a prominent white rump.
Adults undergo an annual complete moult after breeding. Moult begins in June or July and ends in September or October. The primary flight feathers are replaced over a period of three months. Juvenile birds undergo a partial moult beginning about one month later than the adult birds in which their body feathers are replaced but not those of the wings or the tail.
Eurasian magpies have a well-known call. It is a choking chatter "chac-chac" or a repetitive "chac-chac-chac-chac". The young also emit the previous call, although they also emit an acute call similar to a "Uik Uik", which may resemble the barking of a small dog. Both adults and young can emit a kind of hiss barely noticeable from afar.
Distribution and habitat
The range of the magpie extends across temperate Eurasia from Portugal, Spain and Ireland in the west to the Kamchatka Peninsula.
The preferred habitat is open countryside with scattered trees; magpies are normally absent from treeless areas and dense forests. They sometimes breed at high densities in suburban settings such as parks and gardens. They can often be found close to the centre of cities.
Magpies are normally sedentary and spend winters close to their nesting territories but birds living near the northern limit of their range in Sweden, Finland and Russia can move south in harsh weather.
Behaviour and ecology
Breeding
Some magpies breed after their first year, while others remain in the non-breeding flocks and first breed in their second year. They are monogamous, and the pairs often remain together from one breeding season to the next. They generally occupy the same territory on successive years.
Mating takes place in spring. In the courtship display, males rapidly raise and depress their head feathers, uplift, open and close their tails like fans, and call in soft tones quite distinct from their usual chatter. The loose feathers of the flanks are brought over the primaries, and the shoulder patch is spread so the white is conspicuous, presumably to attract females. Short buoyant flights and chases follow.
Magpies prefer tall trees for their bulky nest, firmly attaching them to a central fork in the upper branches. A framework of the sticks is cemented with earth and clay, and a lining of the same is covered with fine roots. Above is a stout though loosely built dome of prickly branches with a single well-concealed entrance. These huge nests are conspicuous when the leaves fall. Where trees are scarce, though even in well-wooded country, nests are at times built in bushes and hedgerows.
In Europe, clutches are typically laid in April, and usually contain five or six eggs, but clutches with as few as three and as many as ten have been recorded. The eggs are laid in early morning, usually at daily intervals. On average, the eggs of the nominate species measure and weigh . Small for the size of the bird, they are typically pale blue-green, with close specks and spots of olive brown, but show much variation in ground and marking.
The eggs are incubated for 21–22 days by the female, who is fed on the nest by the male. The chicks are altricial, hatching nearly naked with closed eyes. They are brooded by the female for the first 5–10 days and fed by both parents. Initially the parents eat the faecal sacs of the nestlings, but as the chicks grow larger, they defecate on the edge of the nest. The nestlings open their eyes 7 to 8 days after hatching. Their body feathers start to appear after around 8 days and the primary wing feathers after 10 days. For several days before they are ready to leave the nest, the chicks clamber around the nearby branches. They fledge at around 27 days. The parents then continue to feed the chicks for several more weeks. They also protect the chicks from predators, as their ability to fly is poor, making them vulnerable. On average, only 3 or 4 chicks survive to fledge successfully. Some nests are lost to predators, but an important factor causing nestling mortality is starvation. Magpie eggs hatch asynchronously, and if the parents have difficulty finding sufficient food, the last chicks to hatch are unlikely to survive. Only a single brood is reared, unless disaster overtakes the first clutch.
A study conducted near Sheffield in Britain, using birds with coloured rings on their legs, found that only 22% of fledglings survived their first year. For subsequent years, the survival rate for the adult birds was 69%, implying that for those birds that survive the first year, the average total lifespan was 3.7 years. The maximum age recorded for a magpie is 21 years and 8 months for a bird from near Coventry in England that was ringed in 1925 and shot in 1947.
Feeding
The magpie is omnivorous, eating young birds and eggs, small mammals, insects, scraps and carrion, acorns, grain, and other vegetable substances.
Intelligence
Along with other corvids such as ravens, western jackdaws and crows, the Eurasian magpie is believed to be not only among the most intelligent of birds, but also among the most intelligent of all animals. The Eurasian magpie's nidopallium is approximately the same relative size as those in chimpanzees and humans, and significantly larger than those of the gibbons. Their total brain-to-body mass ratio is equal to most great apes and cetaceans. A 2004 review suggests that the intelligence of the corvid family to which the Eurasian magpie belongs is equivalent to that of the great apes (bonobos, gorillas and orangutans) in terms of social cognition, causal reasoning, flexibility, imagination and prospection.
Magpies have been observed engaging in elaborate social rituals, possibly including the expression of grief. Mirror self-recognition has been demonstrated in European magpies, making them one of only a few species known to possess this capability. The cognitive abilities of the Eurasian magpie are regarded as evidence that intelligence evolved independently in both corvids and primates. This is indicated by tool use, an ability to hide and store food across seasons, episodic memory, and using their own experience to predict the behavior of conspecifics. Another behaviour exhibiting intelligence is cutting their food in correctly sized proportions for the size of their young. In captivity, magpies have been observed counting up to get food, imitating human voices, and regularly using tools to clean their own cages. In the wild, they organise themselves into gangs and use complex strategies when hunting other birds and when confronted by predators.
Status
The Eurasian magpie has an extremely large range. The European population is estimated to be between 7.5 and 19 million breeding pairs. Allowing for the birds breeding in other continents, the total population is estimated to be between 46 and 228 million individuals. The population trend in Europe has been stable since 1980. There is no evidence of any serious overall decline in numbers, so the species is classified by the International Union for Conservation of Nature as being of Least Concern.
Relationship with humans
Traditions, symbolism, and reputation
Europe
In Europe, magpies have been historically demonized by humans, mainly as a result of superstition and myth. The bird has found itself in this situation mainly by association, says Steve Roud: "Large black birds, like crows and ravens, are viewed as evil in British folklore and white birds are viewed as good". In European folklore, the magpie is associated with a number of superstitions surrounding its reputation as an omen of ill fortune. In the 19th century book, A Guide to the Scientific Knowledge of Things Familiar, a proverb concerning magpies is recited: "A single magpie in spring, foul weather will bring". The book further explains that this superstition arises from the habits of pairs of magpies to forage together only when the weather is fine. In Scotland, a magpie near the window of the house is said to foretell death. An English tradition holds that a single magpie be greeted with a salutation in order to ward off the bad luck it may bring. A greeting might be something like "Good morning, Mr Magpie, how are Mrs Magpie and all the other little magpies?", and a 19th century version recorded in Shropshire is to say "Devil, Devil, I defy thee! Magpie, magpie, I go by thee!" and to spit on the ground three times.
In Britain and Ireland, a widespread traditional rhyme, "One for Sorrow", records the myth (it is not clear whether it has been seriously believed) that seeing magpies predicts the future, depending on how many are seen. There are many regional variations on the rhyme, which means that it is impossible to give a definitive version.
In Italian, British and French folklore, magpies are believed to have a penchant for picking up shiny items, particularly precious stones or metal objects. Rossini's opera La gazza ladra and The Adventures of Tintin comic The Castafiore Emerald are based on this theme. However, one recent research study has cast doubt on the veracity of this belief. In Bulgarian, Czech, German, Hungarian, Polish, Russian, Slovak and Swedish folklore the magpie is seen as a thief. In Hungary, an old saying holds that hearing a magpie sing means guests will be coming to the house, perhaps because magpies liked to sit in the trees in front of village houses and called out when a person approached.
In Sweden, it is further associated with witchcraft. In Norway, a magpie is considered cunning and thievish, but also the bird of hulder, the underground people.
Magpies have been attacked for their role as predators, which includes eating other birds' eggs and their young, mostly smaller songbirds. However, one study has disputed the view that they affect total song-bird populations, finding "no evidence of any effects of [magpie] predator species on songbird population growth rates. We therefore had no indication that predators had a general effect on songbird population growth rates". Another study has claimed that smaller songbird populations increased in places where magpie populations were high and that they do not have a negative impact on the total songbird population.
Citations
Cited sources
| Biology and health sciences | Corvoidea | null |
1096021 | https://en.wikipedia.org/wiki/Earthworks%20%28engineering%29 | Earthworks (engineering) | Earthworks are engineering works created through the processing of parts of the earth's surface involving quantities of soil or unformed rock.
Shoring structures
An incomplete list of possible temporary or permanent geotechnical shoring structures that may be designed and utilised as part of earthworks:
Mechanically stabilized earth
Earth anchor
Cliff stabilization
Grout curtain
Retaining wall
Slurry wall
Soil nailing
Tieback (geotechnical)
Trench shoring
Caisson
Dam
Gabion
Ground freezing
Gallery
Excavation
Excavation may be classified by type of material:
Topsoil excavation
Earth excavation
Rock excavation
Muck excavation – this usually contains excess water and unsuitable soil
Unclassified excavation – this is any combination of material types
Excavation may be classified by the purpose:
Stripping
Roadway excavation
Drainage or structure excavation
Bridge excavation
Channel excavation
Footing excavation
Borrow excavation
Dredge excavation
Underground excavation
Civil engineering use
Typical earthworks include road construction, railway beds, causeways, dams, levees, canals, and berms. Other common earthworks are land grading to reconfigure the topography of a site, or to stabilize slopes.
Military use
In military engineering, earthworks are, more specifically, types of fortifications constructed from soil. Although soil is not very strong, it is cheap enough that huge quantities can be used, generating formidable structures. Examples of older earthwork fortifications include moats, sod walls, motte-and-bailey castles, and hill forts. Modern examples include trenches and berms.
Equipment
Heavy construction equipment is usually used due to the amounts of material to be moved — up to millions of cubic metres. Earthwork construction was revolutionized by the development of the (Fresno) scraper and other earth-moving machines such as the loader, the dump truck, the grader, the bulldozer, the backhoe, and the dragline excavator.
Mass haul planning
Engineers need to concern themselves with issues of geotechnical engineering (such as soil density and strength) and with quantity estimation, to ensure that soil volumes in the cuts match those of the fills while minimizing the distance of movement. In the past, these calculations were done by hand using a slide rule and methods such as Simpson's rule. Earthworks cost is a function of hauled amount × hauled distance. The goal of mass haul planning is to determine these amounts, and the goal of mass haul optimization is to minimize either or both.
Today these calculations can be performed with computers and specialized software, including optimisation on haul cost rather than haul distance (as haul cost is not proportional to haul distance).
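Such an optimisation can be framed as a classic transportation problem: move material from cuts (supply) to fills (demand) at minimum total haul cost. The following is a minimal Python sketch using SciPy's linear-programming solver; the cut/fill volumes, distances, and unit cost are hypothetical, and real mass haul planning also handles borrow pits, waste sites, and non-linear haul costs.

```python
# Minimal sketch of mass haul optimization as a transportation problem,
# assuming hypothetical cut/fill volumes and a haul cost proportional to
# distance (real projects add fixed load costs, borrow/waste sites, etc.).
import numpy as np
from scipy.optimize import linprog

cut_volumes = [500.0, 300.0]          # m^3 available at each cut (supply)
fill_volumes = [400.0, 400.0]         # m^3 required at each fill (demand)
distance = np.array([[120.0, 450.0],  # haul distance in metres
                     [300.0,  80.0]]) # from cut i to fill j
unit_cost = 0.05                      # cost per m^3 per metre hauled (assumed)

n_cuts, n_fills = distance.shape
c = (unit_cost * distance).ravel()    # objective: total haul cost

A_eq, b_eq = [], []
# Supply constraints: each cut ships exactly its volume.
for i in range(n_cuts):
    row = np.zeros(n_cuts * n_fills)
    row[i * n_fills:(i + 1) * n_fills] = 1.0
    A_eq.append(row)
    b_eq.append(cut_volumes[i])
# Demand constraints: each fill receives exactly its volume.
for j in range(n_fills):
    row = np.zeros(n_cuts * n_fills)
    row[j::n_fills] = 1.0
    A_eq.append(row)
    b_eq.append(fill_volumes[j])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(n_cuts, n_fills))  # optimal m^3 from cut i to fill j
print("total haul cost:", res.fun)
```

Because the problem is balanced (total cut equals total fill here), the solver routes most material to the nearest fill, which is exactly the intuition behind a hand-drawn mass haul diagram.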
| Technology | Earthworks | null |
1096809 | https://en.wikipedia.org/wiki/Polar%20night | Polar night | Polar night is a phenomenon that occurs in the northernmost and southernmost regions of Earth when the Sun remains below the horizon for more than 24 hours. This only occurs inside the polar circles. The opposite phenomenon, polar day or midnight sun, occurs when the Sun remains above the horizon for more than 24 hours.
There are multiple ways to define twilight, the gradual transition to and from darkness when the Sun is below the horizon. "Civil" twilight occurs when the Sun is between 0 and 6 degrees below the horizon. Nearby planets like Venus and bright stars like Sirius are visible during this period. "Nautical" twilight continues until the Sun is 12 degrees below the horizon. During nautical twilight, the horizon is visible enough for navigation. "Astronomical" twilight continues until the Sun has sunk 18 degrees below the horizon. Beyond 18 degrees, refracted sunlight is no longer visible. True night is defined as the period when the Sun is 18 or more degrees below the horizon.
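As a minimal illustration of these thresholds, the Python sketch below maps the Sun's depression angle (degrees below the horizon) to the corresponding phase; the function name and cutoffs simply restate the definitions above.

```python
# Sketch: classify the twilight phase from the Sun's depression angle
# (degrees below the horizon), following the thresholds in the text.
def twilight_phase(depression_deg: float) -> str:
    if depression_deg <= 0:
        return "day"                     # Sun at or above the horizon
    if depression_deg <= 6:
        return "civil twilight"
    if depression_deg <= 12:
        return "nautical twilight"
    if depression_deg <= 18:
        return "astronomical twilight"
    return "night"

print(twilight_phase(4.0))   # civil twilight
print(twilight_phase(20.0))  # night
```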
Since the atmosphere refracts sunlight, polar day is longer than polar night, and the area that experiences polar night is slightly smaller than the area that experiences polar day. The polar circles are located at latitudes between these two areas, at approximately 66.5°. While it is day in the Arctic Circle, it is night in the Antarctic Circle, and vice versa.
Any planet or moon with a sufficient axial tilt that rotates with respect to its star significantly more frequently than it orbits the star (and with no tidal locking between the two) will experience the same phenomenon (a nighttime lasting more than one rotation period).
Description
The length of polar night varies by latitude from 24 hours just inside the polar circles to 179 days at the poles. As there are various kinds of twilight, there also exist various kinds of polar twilight that progress towards true polar night. Each kind of polar night is defined as when it is darker than the corresponding kind of twilight. The descriptions below are based on relatively clear skies, so the sky will be darker in the presence of dense clouds.
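A rough way to see where the figures of 24 hours and roughly 179 days come from is to count the days on which the Sun's noon altitude stays below the apparent horizon. The sketch below uses a simple sinusoidal approximation of solar declination and an assumed 0.83° allowance for refraction plus the solar semi-diameter; a full ephemeris would be more precise.

```python
# Sketch: approximate number of polar-night days at a given northern latitude,
# assuming a sinusoidal solar declination and ~0.83 deg for refraction plus
# the solar semi-diameter (illustrative model, not an ephemeris).
import math

def polar_night_days(lat_deg: float) -> int:
    days = 0
    for d in range(365):
        decl = -23.44 * math.cos(2 * math.pi * (d + 10) / 365)
        noon_altitude = 90.0 - lat_deg + decl  # Sun's altitude at solar noon
        if noon_altitude < -0.83:              # never rises, even refracted
            days += 1
    return days

print(polar_night_days(90.0))  # ~178 days with this model, near the ~179 quoted
print(polar_night_days(68.0))  # ~26 days, close to the 68 deg N range listed below
```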
Types of polar night
Polar twilight
As mentioned, a location experiencing polar night does not mean that the location will be in full darkness; in most cases, due to sunlight being refracted over the horizon, a location experiencing polar night will actually be in one of the various phases of polar twilight. As in locations experiencing daylight, the middle of the day will typically be the brightest time in locations experiencing polar twilight.
For example, a typical day during civil polar twilight in Vadsø, Norway will begin with night, astronomical twilight, nautical twilight, and civil twilight in that order (with each successive phase including more light than the last). Following civil twilight, the day will progress through the other phases in the opposite order (nautical twilight, then astronomical twilight, then night to end the day).
Civil polar twilight
Civil polar twilight occurs at latitudes between about 67°24' and 72°34' North or South, where the Sun will be below the horizon all day on the winter solstice, but by less than 6° at solar noon. There is then no true daylight at the solar culmination, only civil twilight. During civil polar twilight, there is still enough light for most normal outdoor activities at midday because of light scattering by the upper atmosphere and refraction. However, during dense cloud cover, places like the coast of Finnmark (about 70°) in Norway will experience a "day" that is darker than usual. Street lamps may therefore remain on even at midday, and a person looking at a window from within a brightly lit room might still be able to see their reflection, as the level of outdoor illuminance will be below that of many illuminated indoor spaces.
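The latitude limits quoted here, and those for the deeper twilight bands described later, follow directly from the axial tilt: each band begins where the noon Sun on the winter solstice reaches the corresponding depression. A small sketch, with the tilt and the refraction allowance as assumed constants:

```python
# Sketch: approximate latitude bands for each kind of polar twilight,
# assuming an axial tilt of 23.44 deg and ~0.83 deg of refraction plus the
# solar semi-diameter (the reason the civil band starts near 67.4 deg,
# not at the polar circle itself).
TILT = 23.44      # Earth's axial tilt, degrees
APPARENT = 0.83   # refraction + solar semi-diameter, degrees

def band(depression_start: float, depression_end: float) -> tuple:
    """Latitudes where the noon Sun on the winter solstice sits between
    depression_start and depression_end degrees below the horizon."""
    return 90.0 - TILT + depression_start, 90.0 - TILT + depression_end

print("civil polar twilight:        %.1f-%.1f deg" % band(APPARENT, 6))
print("nautical polar twilight:     %.1f-%.1f deg" % band(6, 12))
print("astronomical polar twilight: %.1f-%.1f deg" % band(12, 18))
print("true polar night:            above %.1f deg" % band(18, TILT)[0])
```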
Northern Hemisphere:
68° North: about December 9 to January 2
69° North: about December 1 to January 10
70° North: about November 26 to January 16
71° North: about November 21 to January 21
72° North: about November 16 to January 25
Southern Hemisphere:
68° South: about June 7 to July 3
69° South: about May 30 to July 11
70° South: about May 24 to July 18
71° South: about May 19 to July 23
72° South: about May 14 to July 27
Sufferers of seasonal affective disorder tend to seek out therapy with artificial light, as the psychological benefits of daylight require relatively high levels of ambient light (up to 10,000 lux) which are not present in any stage of twilight; thus, the midday twilights experienced anywhere inside the polar circles are still "polar nights" for this purpose.
Nautical polar twilight
Nautical polar twilight occurs at latitudes between about 72° 34' and 78° 34' North or South, which is exactly 6° to 12° inside the polar circles. There is then no civil twilight at the solar culmination, only nautical twilight. During nautical polar twilight, the human eye can distinguish general outlines of ground objects at midday, but detailed outdoor operations are not possible.
Nautical twilight happens when the Sun is between 6 and 12° below the horizon, so this phenomenon can also be referred to as nautical polar night. Nowhere on mainland Europe is this definition met. The Norwegian town of Longyearbyen, Svalbard, experiences nautical polar twilight from about 11 November until 30 January. Dikson, in Russia, experiences nautical polar twilight from about 6 December to 6 January. In the Canadian hamlet of Pond Inlet, Nunavut, nautical polar twilight lasts from about 16 December until 26 December.
Astronomical polar twilight
Astronomical polar twilight occurs at latitudes between about 78° 34' and 84° 34' North or South, which is exactly 12° to 18° inside the polar circles. There is then no nautical twilight at the solar culmination, only astronomical twilight. During astronomical polar twilight, the sky is dark enough at midday to permit astronomical observation of point sources of light such as stars, except in regions with more intense skyglow due to light pollution, moonlight, auroras, and other sources of light. Around midday, one part of the horizon is brighter than the rest because of refraction. Some critical observations, such as of faint diffuse objects like nebulae and galaxies, may require skies darker than the limit of astronomical twilight.
Astronomical twilight happens when the Sun is between 12 and 18° below the horizon, so this phenomenon can also be referred to as astronomical polar night. The Norwegian town of Ny-Ålesund, Svalbard, experiences this from about December 12 to 30. Its antipode experiences this from about June 12 to July 1. The Canadian research base of Eureka, Nunavut, experiences this from about December 2 to January 8. Its antipode experiences this from about June 1 to July 11. The Russian territory of Franz Josef Land experiences this from about November 27 to January 15. Its antipode experiences this from about May 25 to July 17. Alert, Nunavut, the northernmost settlement in Canada and the world, experiences this from about November 19 to January 22. Its antipode experiences this from about May 19 to July 25. Oodaaq, a gravel bank at the northern tip of Greenland and a disputed northernmost point of land, experiences this from about November 15 to January 27. Its antipode experiences this from about May 13 to July 31.
True polar night
A true polar night is a period of continuous night where no astronomical twilight occurs at the solar culmination. During a true polar night, stars of the sixth magnitude, which are the dimmest stars visible to the naked eye, will be visible throughout the entire 24-hour day. At solar noon, the sun will be between exactly 18° and approximately 23° 26' below the horizon. These conditions last for about 11 weeks at the poles.
True polar night is limited to latitudes above roughly 84° 34' North or South, which is exactly 18° within the polar circles, or approximately five and a half degrees from the poles. The only permanent settlement on Earth at these latitudes is the Amundsen–Scott scientific research station in Antarctica, whose winter personnel are completely isolated from mid-February to late October. The South Pole experiences this from about May 11 to August 1, while the North Pole experiences this from about November 12 to January 28.
Polar Sun cycle
If an observer at either the North Pole or the South Pole defined a "day" as the time from one maximal elevation of the Sun above the horizon to the next, then a "polar day" as experienced by such an observer would be one Earth-year long.
Effects on sleep and mental health
Numerous analyses have been conducted to examine the effects of polar night on humans. In Tromsø, Norway, a city located at 69 degrees north, the polar night lasts about two months, from mid-November to mid-January. An analysis based on 2015-16 data from a health survey of residents of the region over age 40 examined seasonal variation in sleeping patterns in Tromsø. The study found a higher prevalence of insomnia among men in the fall and winter months, but not among women. Overall, however, sleep duration varied little throughout the year despite the extreme changes in daylight; it is worth noting that a factor in this result may be the significant amount of artificial light in Tromsø.
A similar study was conducted among men who overwintered at Belgrano II, an Argentine research station in Antarctica. The station is located at 77 degrees south, resulting in a polar night four months long. The study spanned five winter campaigns in the 2010s, with a total of 82 participants. It found that participants generally slept longer in the summer months than in the winter months. Additionally, greater amounts of social jetlag were observed in the winter months.
A third study aimed to examine the mental health of 88 Korean crew members at two different research stations in Antarctica, King Sejong Station and Jang Bogo Station. No crew members had been diagnosed with a mental illness prior to the study. While in Antarctica, 7 of the 88 crew members were diagnosed with a mental illness during early winter. The mental illnesses included insomnia disorder (3 diagnosed), depressive disorder (1 diagnosed), adjustment disorder (2 diagnosed), and alcohol use disorder (1 diagnosed).
Overall, both Antarctic studies showed a lower amount of sleep beginning at the start of winter, while the study from the Korean bases also showed an onset of mental health problems at that time. While the study from Tromsø did not show a similar drop in sleep duration as the Antarctic studies (perhaps due to the high amounts of artificial light), it did show an increased amount of insomnia in men during winter; therefore, the polar night was shown to have sleep and/or mental health effects in all three studies.
| Physical sciences | Celestial mechanics | Astronomy |
1097107 | https://en.wikipedia.org/wiki/Red%20Delicious | Red Delicious | Red Delicious is a type of apple with a red exterior and sweet taste that was first recognized in Madison County, Iowa, in 1872. Today, the name Red Delicious comprises more than 50 cultivars. It was the most produced apple cultivar in the United States from 1968 until 2018, when it was surpassed by Gala.
History
The 'Red Delicious' originated at an orchard in 1872 as "a round, blushed yellow fruit of surpassing sweetness". Stark Nurseries held a competition in 1892 to find an apple to replace the 'Black Ben Davis' apple. The winner was a red and yellow striped apple sent by Jesse Hiatt, a farmer in Peru, Iowa, who called it "Hawkeye". Stark Nurseries bought the rights from Hiatt, renamed the variety "Stark Delicious", and began propagating it. Another apple tree, later named the 'Golden Delicious', was also marketed by Stark Nurseries after it was purchased from a farmer in Clay County, West Virginia, in 1914; the 'Delicious' became the 'Red Delicious' as a retronym.
Selective breeding and decline in demand
Starting in the 1950s, changes in grocery buying habits led to consumers prioritizing visual appearance. As a result, commercial growers increasingly selected for longer storage and cosmetic appeal rather than flavor and palatability. In particular, selection for redder fruit deselected flavor, because the genes that produced the yellow stripes on the original fruit lay on the same chromosomes as those for the flavor-producing compounds. Breeding for uniformity and storability favored a thicker skin. Later, as other cultivars entered supermarkets, demand for the 'Red Delicious' declined.
In the 1940s the apple was the most popular in the US. In the 1980s, 'Red Delicious' represented three-quarters of the harvest in Washington state, but selecting for beauty and long storage over taste was making the apples less popular, and demand was declining as supermarkets started carrying other varieties. By the 1990s, reliance on the now-unwanted 'Red Delicious' had helped to push Washington state's apple industry "to the edge" of collapse. In 2000, Congress approved and President Bill Clinton signed a bill to bail out the apple industry, after apple growers had lost $760 million since 1997.
Farmers began to replace their orchards with other cultivars such as Gala, Fuji, and Honeycrisp. By 2000, this cultivar made up less than one half of the Washington state output, and in 2003, the crop had shrunk to 37 percent of the state's harvest, which totaled 103 million boxes. Although Red Delicious still remained the single largest variety produced in the state in 2005, others were growing in popularity, notably the Fuji and Gala varieties. By 2014 the Washington Apple Commission was recommending growers plan to export 60% or more of production. In 2018 the Gala apple overtook US sales of the Red Delicious for the first time. Through 2020 production continued to decline. The COVID-19 pandemic was expected to further depress demand, as many cafeterias and other typical sales points for the apple were closed.
Sports (mutations)
Over the years many propagable mutations, or sports, have been identified in 'Red Delicious' apple trees.
Patented
In addition to those propagated without any patent applications (or cut out because they were seen as inferior), 42 sports have been patented in the United States:
In 1977, the application for #4159 noted the "starchy and bland taste of some of the newer varieties".
The plant patent for #4926 promoted the sport as a dwarfing interstock, a dwarfing rootstock for pears, or to produce "crab apple"-sized 'Delicious' apples.
| Biology and health sciences | Pomes | Plants |
1097186 | https://en.wikipedia.org/wiki/Frilled%20lizard | Frilled lizard | The frilled lizard (Chlamydosaurus kingii), also known commonly as the frilled agama, the frillneck lizard, the frill-necked lizard, and the frilled dragon, is a species of lizard in the family Agamidae. The species is native to northern Australia and southern New Guinea and is the only member of the genus Chlamydosaurus. Its common names refer to the large frill around its neck, which usually stays folded against the lizard's body. The frilled lizard grows to from head to tail tip and can weigh . Males are larger and more robust than females. The lizard's body is generally grey, brown, orangish-brown, or black in colour. The frills have red, orange, yellow, or white colours.
The frilled lizard is largely arboreal, spending most of its time in trees. Its diet consists mainly of insects and other invertebrates. It is more active during the wet season, when it spends more time near or on the ground, and is less observed during the dry season, during which it seeks shade in the branches of the upper canopy. It breeds in the late dry season and early wet season. The lizard uses its frill to scare off predators and display to other individuals. The species is considered to be of least concern by the International Union for Conservation of Nature.
Males and females erect their frills during social encounters, and the frill can be seen as a means of communication for the frilled lizard. The development of this feature has been linked not only to adaptation but also to allometric relationships.
Taxonomy
British zoologist John Edward Gray described the frilled lizard in 1825 as Clamydosaurus kingii. He used a specimen collected by botanist Allan Cunningham at Careening Bay, off north-western Australia, while part of an expedition conducted by Captain Phillip Parker King in . The generic name, Chlamydosaurus, is derived from the Ancient Greek chlamydo (χλαμύς), meaning "cloaked" or "mantled", and saurus, a Latinised form of the Greek sauros (σαῦρος), meaning "lizard". The specific name, kingii, is a Latinised form of King. It is the only species classified in its genus.
The frilled lizard is classified in the family Agamidae and the subfamily Amphibolurinae. It split from its closest living relatives around 10 million years ago based on genetic evidence. A 2017 mitochondrial DNA analysis of the species across its range revealed three lineages demarcated by the Ord River and the southeast corner of the Gulf of Carpentaria (Carpentarian Gap). One lineage ranged across Queensland and southern New Guinea and is sister to one that ranged from western Queensland to the Ord River. The ancestor of these two split from a lineage that populates the Kimberley. Frilled lizards entered southern New Guinea possibly around 17,000 years ago during a glacial cycle, when sea levels were lower and a land bridge connected the island to Cape York. The study upholds C. kingii as one species with the different populations being "shallow allopatric clades".
The following cladogram is based on Pyron and colleagues (2013).
Description
The frilled lizard grows to a total length of around and a head-body length of , and weighs up to . It has a particularly large and wide head, a long neck to accommodate the frill, long legs, and a tail that makes up most of its total length. The species is sexually dimorphic, males being larger than females and having proportionally bigger frills, heads and jaws. The corners of the frilled lizard's eyes are pointed, and the rounded nostrils face away from each other and angle downwards. Most of the lizard's scales are keeled, having a ridge down the centre. From the backbone to the sides, the scales alternate between small and large.
The distinctive frill is a flap of skin that extends from the head and neck and contains several folded ridges. When fully extended, the frill is disc-shaped and can reach over four times the length of the animal's torso in diameter, or around across. When not extended, the frill wraps around the body like a cape over the neck and shoulders. The frill is laterally symmetrical; the right and left sides are attached at the bottom in a V-shape, and cartilage-like connective tissue (Grey's cartilage) connects the top ends to each side of the head near the ear openings. The frill is supported by rod-like hyoid bones, and is spread out by movements of these bones, the lower jaw, and Grey's cartilage. This structure mainly functions as a threat display to predators and for communication between individuals. It can also act as camouflage when folded, but this is unlikely to have been a consequence of selection pressure. The frill may be capable of working like a directional microphone, allowing the lizard to better hear sounds directly in front of it but not around it. There is no evidence for other suggested functions, such as food storage, gliding or temperature regulation.
Frilled lizards vary between grey, brown, orangish-brown, and black dorsally, the underside being paler white or yellow. Males have a dark belly but a lighter chest. The underside and lateral sides of the species are sprinkled with dark brown markings that merge to create bands on the tail. The colours of the frills vary based on range; lizards west of the Ord River have red-coloured frills, those living between the river and the Carpentarian Gap have orange frills, and those east of the gap have yellow to white frills. New Guinean frilled lizards are yellow-frilled. The more colourful frills have white patches which may add to the display. Colouration is mainly created by carotenoids and pteridine pigments; lizards with red and orange frills have more carotenoids than those with yellow and white frills, the latter two are also lacking in pteridines. Yellow colouration has been linked to higher steroid hormones.
Distribution and habitat
The frilled lizard inhabits northern Australia and southern New Guinea. Its Australian range stretches from the Kimberley region of Western Australia east through the Top End of the Northern Territory to Queensland's Cape York Peninsula and nearby islands of Muralug, Badu, and Moa, and south to Brisbane. In New Guinea, it lives in the Trans-Fly ecosystem on both the Papua New Guinean and Indonesian sides of the island. The species mainly inhabits savannahs and sclerophyll woodlands. It prefers highly elevated areas with good soil drainage and a greater variety of tree species, mostly Eucalyptus species, and avoids lower plains with mostly Melaleuca and Pandanus trees. Frilled lizards also prefer areas with less vegetation on the ground, as they can then better spot prey from above.
Behaviour and ecology
The frilled lizard is a diurnal (daytime) and arboreal species, spending over 90% of each day up in the trees. It spends as little time on the ground as possible, mostly to feed, interact socially, or travel to a new tree. Males move around more, per day on average versus for females at Kakadu National Park. In the same area, male lizards were found to have an average home range of during the dry season and during the wet season; females used and for the wet and dry seasons, respectively. Male lizards assert their boundaries with frill displays. Frilled lizards are capable of moving bipedally and do so while hunting or escaping from predators. To keep balanced, they lean the head back far enough that it lines up behind the base of the tail.
These lizards are more active during the wet season, when they select smaller trees and are more commonly seen near the ground; during the dry season, they use larger trees and are found at greater heights. Frilled lizards do not enter torpor during the dry season, but they can greatly reduce their energy usage and metabolic rate in response to less food and water. Body temperatures can approach . The species will bask vertically on the main tree trunk in the morning and near the end of the day, though in the dry season they cease basking at a lower body temperature to better conserve energy and water. When it gets hotter during the day, they climb higher in the canopy for shade. Frilled lizards will use large trees and termite mounds as refuges during wildfires. After a forest is burnt, the lizards select trees with more continuous canopies.
Frilled lizards primarily feed on insects and other invertebrates, and very rarely take vertebrates. Prominent prey includes termites, ants and centipedes; termites are particularly important food during the dry season, and moth larvae become important during the wet season. Consumption of ants drops after early dry season fires but rises following fires later in the season. This species is a sit-and-wait predator: it watches for potential prey from a tree and, upon seeing it, climbs down and rushes towards it on two legs before descending on all four to grab and eat it. After feeding, it retreats back up a tree.
Frilled lizards face threats from birds of prey and larger lizards and snakes. When threatened, the species erects its frill to make itself look bigger. This display is accompanied by a gaping mouth, puffing, hissing, and tail lashes. The lizard may also flee and hide from its predators. Several species of nematode infest the gastrointestinal tract. There is at least one record of an individual dying of cryptosporidiosis.
Frilled lizards can breed during the late dry and early wet seasons. Competing males display with gaping mouths and spread frills. Fights can ensue, in which the lizards pounce and bite each other's heads. The female digs a shallow cavity in which to lay her eggs. Females can lay multiple clutches per season, and the number of eggs in a clutch can vary from four to over 20. The incubation period can last two to four months, with milder temperatures producing more males and more extreme temperatures producing more females. Hatchlings have proportionally smaller frills than adults. Lizards grow during the wet season when food is more abundant, and males grow faster than females. Juvenile males also disperse further from their hatching area. The species reaches sexual maturity within two years; males live up to six years compared to four years for females.
Conservation
The International Union for Conservation of Nature lists the frilled lizard as of least concern, due to its abundance and wide range, but warns that its population may be locally declining in some areas. It is a popular species in the pet trade, which may threaten some wild populations. Most pet lizards appear to come from Indonesia, as their export is banned in Australia and Papua New Guinea. Nevertheless, the Indonesian government has designated the frilled lizard a protected species under Article 20 of the Environment and Forestry Ministerial Regulation on Types of Protected Plants and Animals. Because the species is difficult to breed in captivity, many presumed captive-bred lizards are likely to have been taken from the wild. Frilled lizards may also be threatened by feral cats, though they do not appear to be significantly affected by the invasive cane toad.
Relationship with humans
The frilled lizard is considered to be among the most iconic Australian animals along with the kangaroo and koala. Archaeological evidence indicates that frilled lizards were eaten by some indigenous peoples in ancient times. In the late 19th century, William Saville-Kent brought a live lizard to England where it was observed by fellow biologists. Another specimen was kept at a reptile display in Paris, as reptiles were becoming more popular in captivity.
Because of its unique appearance and behaviour, the creature has often been used in media. In Steven Spielberg's 1993 film Jurassic Park, the dinosaur Dilophosaurus was portrayed with a similar neck frill that rose when attacking. Its image has been used in the 1994 LGBT-themed film The Adventures of Priscilla, Queen of the Desert. The species has been featured on some Australian coins.
| Biology and health sciences | Iguania | Animals |
1097662 | https://en.wikipedia.org/wiki/Flood%20basalt | Flood basalt | A flood basalt (or plateau basalt) is the result of a giant volcanic eruption or series of eruptions that covers large stretches of land or the ocean floor with basalt lava. Many flood basalts have been attributed to the onset of a hotspot reaching the surface of the Earth via a mantle plume. Flood basalt provinces such as the Deccan Traps of India are often called traps, after the Swedish word trappa (meaning "staircase"), due to the characteristic stairstep geomorphology of many associated landscapes.
Michael R. Rampino and Richard Stothers (1988) cited eleven distinct flood basalt episodes occurring in the past 250 million years, creating large igneous provinces, lava plateaus, and mountain ranges. However, more have since been recognized, such as the large Ontong Java Plateau and the Chilcotin Group, though the latter may be linked to the Columbia River Basalt Group.
Large igneous provinces have been connected to five mass extinction events, and may be associated with bolide impacts.
Description
Flood basalts are the most voluminous of all extrusive igneous rocks, forming enormous deposits of basaltic rock found throughout the geologic record. They are a highly distinctive form of intraplate volcanism, set apart from all other forms of volcanism by the huge volumes of lava erupted in geologically short time intervals. A single flood basalt province may contain hundreds of thousands of cubic kilometers of basalt erupted over less than a million years, with individual events each erupting hundreds of cubic kilometers of basalt. This highly fluid basalt lava can spread laterally for hundreds of kilometers from its source vents, covering areas of tens of thousands of square kilometers. Successive eruptions form thick accumulations of nearly horizontal flows, erupted in rapid succession over vast areas, flooding the Earth's surface with lava on a regional scale.
These vast accumulations of flood basalt constitute large igneous provinces. These are characterized by plateau landforms, so that flood basalts are also described as plateau basalts. Canyons cut into the flood basalts by erosion display stair-like slopes, with the lower parts of flows forming cliffs and the upper part of flows or interbedded layers of sediments forming slopes. These are known in Dutch as trap or in Swedish as trappa, which has come into English as trap rock, a term particularly used in the quarry industry.
The great thickness of the basalt accumulations, often in excess of , usually reflects a very large number of thin flows, varying in thickness from meters to tens of meters, or more rarely to . There are occasionally very thick individual flows. The world's thickest basalt flow may be the Greenstone flow of the Keweenaw Peninsula of Michigan, US, which is thick. This flow may have been part of a lava lake the size of Lake Superior.
Deep erosion of flood basalts exposes vast numbers of parallel dikes that fed the eruptions. Some individual dikes in the Columbia River Plateau are over long. In some cases, erosion exposes radial sets of dikes with diameters of several thousand kilometers. Sills may also be present beneath flood basalts, such as the Palisades Sill of New Jersey, US. The sheet intrusions (dikes and sills) beneath flood basalts are typically diabase that closely matches the composition of the overlying flood basalts. In some cases, the chemical signature allows individual dikes to be connected with individual flows.
Smaller-scale features
Flood basalt commonly displays columnar jointing, formed as the rock cooled and contracted after solidifying from the lava. The rock fractures into columns, typically with five to six sides, parallel to the direction of heat flow out of the rock. This is generally perpendicular to the upper and lower surfaces, but rainwater infiltrating the rock unevenly can produce "cold fingers" of distorted columns. Because heat flow out of the base of the flow is slower than from its upper surface, the columns are more regular and larger in the bottom third of the flow. The greater hydrostatic pressure, due to the weight of overlying rock, also contributes to making the lower columns larger. By analogy with Greek temple architecture, the more regular lower columns are described as the colonnade and the more irregular upper fractures as the entablature of the individual flow. Columns tend to be larger in thicker flows, with columns of the very thick Greenstone flow, mentioned earlier, being around thick.
Another common small-scale feature of flood basalts is pipe-stem vesicles. Flood basalt lava cools quite slowly, so that dissolved gases in the lava have time to come out of solution as bubbles (vesicles) that float to the top of the flow. Most of the rest of the flow is massive and free of vesicles. However, the more rapidly cooling lava close to the base of the flow forms a thin chilled margin of glassy rock, and the more rapidly crystallized rock just above the glassy margin contains vesicles trapped as the rock was rapidly crystallizing. These have a distinctive appearance likened to a clay tobacco pipe stem, particularly as the vesicle is usually subsequently filled with calcite or other light-colored minerals that contrast with the surrounding dark basalt.
Petrology
At still smaller scales, the texture of flood basalts is aphanitic, consisting of tiny interlocking crystals. These interlocking crystals give trap rock its tremendous toughness and durability. Crystals of plagioclase are embedded in or wrapped around crystals of pyroxene and are randomly oriented. This indicates rapid emplacement so that the lava is no longer flowing rapidly when it begins to crystallize. Flood basalts are almost devoid of large phenocrysts, larger crystals present in the lava prior to its being erupted to the surface, which are often present in other extrusive igneous rocks. Phenocrysts are more abundant in the dikes that fed lava to the surface.
Flood basalts are most often quartz tholeiites. Olivine tholeiite (the characteristic rock of mid-ocean ridges) occurs less commonly, and there are rare cases of alkali basalts. Regardless of composition, the flows are very homogeneous and rarely contain xenoliths, fragments of the surrounding rock (country rock) that have been entrained in the lava. Because the lavas are low in dissolved gases, pyroclastic rock is extremely rare. Except where the flows entered lakes and became pillow lava, the flows are massive (featureless). Occasionally, flood basalts are associated with very small volumes of dacite or rhyolite (much more silica-rich volcanic rock), which forms late in the development of a large igneous province and marks a shift to more centralized volcanism.
Geochemistry
Flood basalts show a considerable degree of chemical uniformity across geologic time, being mostly iron-rich tholeiitic basalts. Their major element chemistry is similar to mid-ocean ridge basalts (MORBs), while their trace element chemistry, particularly of the rare earth elements, resembles that of ocean island basalt. They typically have a silica content of around 52%. The magnesium number (the mol% of magnesium out of the total iron and magnesium content) is around 55, versus 60 for a typical MORB. The rare earth elements show abundance patterns suggesting that the original (primitive) magma formed from rock of the Earth's mantle that was nearly undepleted; that is, it was mantle rock rich in garnet and from which little magma had previously been extracted. The chemistry of plagioclase and olivine in flood basalts suggests that the magma was only slightly contaminated with melted rock of the Earth's crust, but some high-temperature minerals had already crystallized out of the rock before it reached the surface. In other words, the flood basalt is moderately evolved. However, only small amounts of plagioclase appear to have crystallized out of the melt.
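For concreteness, the magnesium number can be computed from a whole-rock analysis's MgO and FeO weight percentages. A minimal sketch, using molar masses of 40.30 and 71.84 g/mol and illustrative oxide values (assumptions; conventions also differ on whether total iron or only Fe2+ enters the calculation):

```python
# Sketch: magnesium number (Mg#) from whole-rock MgO and FeO weight percent,
# assuming molar masses MgO = 40.30 and FeO = 71.84 g/mol.
def mg_number(mgo_wt: float, feo_wt: float) -> float:
    mg_mol = mgo_wt / 40.30   # moles of MgO per 100 g of rock
    fe_mol = feo_wt / 71.84   # moles of FeO per 100 g of rock
    return 100.0 * mg_mol / (mg_mol + fe_mol)

# A flood-basalt-like composition: Mg# near 55, versus ~60 for a typical MORB.
print(round(mg_number(6.5, 9.5), 1))  # -> 55.0
```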
Though regarded as forming a chemically homogeneous group, flood basalts sometimes show significant chemical diversity even within a single province. For example, the flood basalts of the Paraná Basin can be divided into a low-phosphorus-and-titanium group (LPT) and a high-phosphorus-and-titanium group (HPT). The difference has been attributed to inhomogeneity in the upper mantle, but strontium isotope ratios suggest the difference may arise from the LPT magma being contaminated with a greater amount of melted crust.
Formation
Theories of the formation of flood basalts must explain how such vast amounts of magma could be generated and erupted as lava in such short intervals of time. They must also explain the similar compositions and tectonic settings of flood basalts erupted across geologic time and the ability of flood basalt lava to travel such great distances from the eruptive fissures before solidifying.
Generation of melt
A tremendous amount of heat is required for so much magma to be generated in so short a time. This is widely believed to have been supplied by a mantle plume impinging on the base of the Earth's lithosphere, its rigid outermost shell. The plume consists of unusually hot mantle rock of the asthenosphere, the ductile layer just below the lithosphere, that creeps upwards from deeper in the Earth's interior. The hot asthenosphere rifts the lithosphere above the plume, allowing magma produced by decompressional melting of the plume head to find pathways to the surface.
The swarms of parallel dikes exposed by deep erosion of flood basalts show that considerable crustal extension has taken place. The dike swarms of west Scotland and Iceland show extension of up to 5%. Many flood basalts are associated with rift valleys, are located on passive continental plate margins, or extend into aulacogens (failed arms of triple junctions where continental rifting begins.) Flood basalts on continents are often aligned with hotspot volcanism in ocean basins. The Paraná and Etendeka traps, located in South America and Africa on opposite sides of the Atlantic Ocean, formed around 125 million years ago as the South Atlantic opened, while a second set of smaller flood basalts formed near the Triassic-Jurassic boundary in eastern North America as the North Atlantic opened. However, the North Atlantic flood basalts are not connected with any hot spot traces, but seem to have been evenly distributed along the entire divergent boundary.
Flood basalts are often interbedded with sediments, typically red beds. The deposition of sediments begins before the first flood basalt eruptions, so subsidence and crustal thinning are precursors to flood basalt activity. The surface continues to subside as basalts erupt, so that the older beds are often found below sea level. Basalt strata at depth (dipping reflectors) have been found by reflection seismology along passive continental margins.
Ascent to the surface
The composition of flood basalts may reflect the mechanisms by which the magma reaches the surface. The original melt formed in the upper mantle (the primitive melt) cannot have the composition of quartz tholeiite, the most common and typically least evolved volcanic rock of flood basalts, because quartz tholeiites are too rich in iron relative to magnesium to have formed in equilibrium with typical mantle rock. The primitive melt may have had the composition of picrite basalt, but picrite basalt is uncommon in flood basalt provinces. One possibility is that a primitive melt stagnates when it reaches the mantle-crust boundary, where it is not buoyant enough to penetrate the lower-density crust rock. As a tholeiitic magma differentiates (changes in composition as high-temperature minerals crystallize and settle out of the magma) its density reaches a minimum at a magnesium number of about 60, similar to that of flood basalts. This restores buoyancy and permits the magma to complete its journey to the surface, and also explains why flood basalts are predominantly quartz tholeiites. Over half the original magma remains in the lower crust as cumulates in a system of dikes and sills.
As the magma rises, the drop in pressure also lowers the liquidus, the temperature at which the magma is fully liquid. This likely explains the lack of phenocrysts in erupted flood basalt. The resorption (dissolution back into the melt) of a mixture of solid olivine, augite, and plagioclase—the high-temperature minerals likely to form as phenocrysts—may also tend to drive the composition closer to quartz tholeiite and help maintain buoyancy.
Eruption
Once the magma reaches the surface, it flows rapidly across the landscape, literally flooding the local topography. This is possible in part because of the rapid rate of extrusion (over a cubic km per day per km of fissure length) and the relatively low viscosity of basaltic lava. However, the lateral extent of individual flood basalt flows is astonishing even for so fluid a lava in such quantities. It is likely that the lava spreads by a process of inflation in which the lava moves beneath a solid insulating crust, which keeps it hot and mobile. Studies of the Ginkgo flow of the Columbia River Plateau, which is thick, show that the temperature of the lava dropped by just over a distance of . This demonstrates that the lava must have been insulated by a surface crust and that the flow was laminar, reducing heat exchange with the upper crust and base of the flow. It has been estimated that the Ginkgo flow advanced 500 km in six days (a rate of advance of about 3.5 km per hour).
The lateral extent of a flood basalt flow is roughly proportional to the cube of the thickness of the flow near its source. Thus, a flow that is double in thickness at its source can travel roughly eight times as far.
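Taken at face value, this scaling is easy to apply in either direction. A tiny sketch, with the proportionality constant omitted since only ratios are claimed:

```python
# Sketch of the stated scaling: lateral extent ~ thickness**3 (ratios only;
# the proportionality constant is unknown and omitted here).
def extent_ratio(thickness_ratio: float) -> float:
    return thickness_ratio ** 3

print(extent_ratio(2.0))      # doubling thickness -> ~8x the reach
print(10.0 ** (1.0 / 3.0))    # ~2.15x thickness needed for 10x the reach
```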
Flood basalt flows are predominantly pāhoehoe flows, with ʻaʻā flows much less common.
Eruption in flood basalt provinces is episodic, and each episode has its own chemical signature. There is some tendency for lava within a single eruptive episode to become more silica-rich with time, but there is no consistent trend across episodes.
Large igneous provinces
Large Igneous Provinces (LIPs) were originally defined as voluminous outpourings, predominantly of basalt, over geologically very short durations. This definition did not specify minimum size, duration, petrogenesis, or setting. A new attempt to refine classification focuses on size and setting. LIPs characteristically cover large areas, and the great bulk of the magmatism occurs in less than 1 Ma. Principal LIPs in the ocean basins include Oceanic Volcanic Plateaus (OPs) and Volcanic Passive Continental Margins. Oceanic flood basalts are LIPs distinguished from oceanic plateaus by some investigators because they do not form morphologic plateaus, being neither flat-topped nor elevated more than 200 m above the seafloor. Examples include the Caribbean, Nauru, East Mariana, and Pigafetta provinces. Continental flood basalts (CFBs) or plateau basalts are the continental expressions of large igneous provinces.
Impact
Flood basalts contribute significantly to the growth of continental crust. They are also catastrophic events, which likely contributed to many mass extinctions in the geologic record.
Crust formation
The extrusion of flood basalts, averaged over time, is comparable with the rate of extrusion of lava at mid-ocean ridges and much higher than the rate of extrusion by hotspots. However, extrusion at mid-ocean ridges is relatively steady, while extrusion of flood basalts is highly episodic. Flood basalts create new continental crust at a rate of per year, while the eruptions that form oceanic plateaus produce of crust per year.
Much of the new crust formed during flood basalt episodes takes the form of underplating, with over half the original magma crystallizing out as cumulates in sills at the base of the crust.
Mass extinctions
The eruption of flood basalts has been linked with mass extinctions. For example, the Deccan Traps, erupted at the Cretaceous-Paleogene boundary, may have contributed to the extinction of the non-avian dinosaurs. Likewise, mass extinctions at the Permian-Triassic boundary, the Triassic-Jurassic boundary, and in the Toarcian Age of the Jurassic correspond to the ages of large igneous provinces in Siberia, the Central Atlantic Magmatic Province, and the Karoo-Ferrar flood basalt.
Some idea of the impact of flood basalts can be given by comparison with historical large eruptions. The 1783 eruption of Lakagígar was the largest in the historical record, killing 75% of the livestock and a quarter of the population of Iceland. However, the eruption produced just of lava, which is tiny compared with the Roza Member of the Columbia River Plateau, erupted in the mid-Miocene, which contained at least of lava.
During the eruption of the Siberian Traps, some of magma penetrated the crust, covering an area of , equal to 62% of the area of the contiguous states of the United States. The hot magma contained vast quantities of carbon dioxide and sulfur oxides, and released additional carbon dioxide and methane from deep petroleum reservoirs and younger coal beds in the region. The released gases created over 6400 diatreme-like pipes, each typically over in diameter. The pipes emitted up to 160 trillion tons of carbon dioxide and 46 trillion tons of methane. Coal ash from burning coal beds spread toxic chromium, arsenic, mercury, and lead across northern Canada. Evaporite beds heated by the magma released hydrochloric acid, methyl chloride, and methyl bromide, which damaged the ozone layer and reduced ultraviolet shielding by as much as 85%. Over 5 trillion tons of sulfur dioxide were also released. The carbon dioxide produced extreme greenhouse conditions, with global average sea water temperatures peaking at , the highest ever seen in the geologic record. Temperatures did not drop to for another 5.1 million years. Temperatures this high are lethal to most marine organisms, and land plants have difficulty continuing to photosynthesize at temperatures above . The Earth's equatorial zone became a dead zone.
However, not all large igneous provinces are connected with extinction events. The formation and effects of a flood basalt depend on a range of factors, such as continental configuration, latitude, volume, rate and duration of eruption, style and setting (continental vs. oceanic), the preexisting climate, and the biota's resilience to change.
List of flood basalts
Representative continental flood basalts and oceanic plateaus, arranged by chronological order, together forming a listing of large igneous provinces:
Elsewhere in the Solar System
Flood basalts are the dominant form of magmatism on the other planets and moons of the Solar System.
The maria on the Moon have been described as flood basalts composed of picritic basalt. Individual eruptive episodes were likely similar in volume to flood basalts of Earth, but were separated by much longer quiescent intervals and were likely produced by different mechanisms.
Extensive flood basalts are present on Mars.
Uses
Trap rock is the most durable construction aggregate of all rock types, because the interlocking crystals are oriented at random.
| Physical sciences | Volcanic landforms | Earth science |
1097818 | https://en.wikipedia.org/wiki/Blue-tongued%20skink | Blue-tongued skink | Blue-tongued skinks comprise the Australasian genus Tiliqua, which contains some of the largest members of the skink family (Scincidae). They are commonly called blue-tongued lizards or simply blue-tongues or blueys in Australia or panana in Indonesia. As suggested by these common names, a prominent characteristic of the genus is a large blue tongue that can be bared as bluff-warning to potential enemies. Their tongue can also deform itself and produce a thick mucus in order to catch prey. They are relatively shy in comparison with other lizards, and also significantly slower due to their shorter legs.
Systematics and distribution
Blue-tongued skinks are closely related to the genera Cyclodomorphus and Hemisphaeriodon. All species are found on mainland Australia with the exception of Tiliqua gigas, which occurs in New Guinea and various islands of Indonesia. The Tanimbar blue-tongued skink, a subspecies of Tiliqua scincoides, is also found on several small Indonesian islands between Australia and New Guinea. Tiliqua nigrolutea, the blotched blue-tongued skink, is the only species present in Tasmania.
Ecology
Most species are diurnal, ground-foraging omnivores, feeding on a wide variety of insects, gastropods, flowers, fruits, and berries. The pygmy blue-tongue is the exception, being primarily an ambush predator of terrestrial arthropods.
All are ovoviviparous, with litter sizes ranging from 1-4 in the pygmy blue-tongue and shingleback to 5-24 in the eastern and northern blue-tongues.
Species
Extinct species
Multiple extinct species have been proposed. T. frangens, the largest known species of the genus, lived during the Pliocene and Pleistocene epochs around the Wellington Caves of New South Wales in Australia. Another extinct species, T. laticephala, may represent the same taxon as T. frangens. Its nearest relative is the extant T. rugosa, which is half the size and lacks the bony plates of T. frangens.
Another extinct species, T. wilkinsonorum, also lived during the Pliocene epoch. The earliest possible species is T. pusilla from the middle Miocene, but researchers question whether this species belongs to the genus Tiliqua, as its uncertain phylogenetic position could render the genus paraphyletic.
In captivity
Some species of this genus are kept as household pets. They are on average very friendly when raised in captivity, and are often called 'the dogs of reptiles'. Captive specimens can live 20 years or longer.
| Biology and health sciences | Lizards and other Squamata | Animals |
1099196 | https://en.wikipedia.org/wiki/Missile%20defense | Missile defense | Missile defense is a system, weapon, or technology involved in the detection, tracking, interception, and also the destruction of attacking missiles. Conceived as a defense against nuclear-armed intercontinental ballistic missiles (ICBMs), its application has broadened to include shorter-ranged non-nuclear tactical and theater missiles.
China, France, India, Iran, Israel, Italy, Russia, Taiwan, the United Kingdom and the United States have all developed such air defense systems.
Missile defense categories
Missile defense can be divided into categories based on various characteristics: type/range of missile intercepted, the trajectory phase where the intercept occurs, and whether intercepted inside or outside the Earth's atmosphere:
Type/range of missile intercepted
These types/ranges include strategic, theater and tactical. Each entails unique requirements for intercept; a defensive system capable of intercepting one missile type frequently cannot intercept others. However, there is sometimes overlap in capability.
Strategic
Targets long-range ICBMs, which travel at about 7 km/s (15,700 mph). Examples of currently active systems: the Russian A-135, which defends Moscow; the US Ground-Based Midcourse Defense, which defends the United States from missiles launched from Asia; and the Israeli Arrow 3, which defends Israel from ICBMs. The geographic range of strategic defense can be regional (the Russian system) or national (the US and Israeli systems).
Theater
Targets medium-range missiles, which travel at about 3 km/s (6,700 mph) or less. In this context, the term "theater" means the entire localized region for military operations, typically a radius of several hundred kilometers; defense range of these systems is usually on this order. Examples of deployed theater missile defenses: Israeli Arrow 2 missile and David's Sling, American THAAD, and Russian S-400.
Tactical
Targets short-range tactical ballistic missiles, which usually travel at less than 1.5 km/s (3,400 mph). Tactical anti-ballistic missiles (ABMs) have short ranges, typically 20–80 km (12–50 miles). Examples of currently-deployed tactical ABMs are the Israeli Iron Dome, American MIM-104 Patriot and Russian S-300V.
Trajectory phase
Ballistic missiles can be intercepted in three regions of their trajectory: boost phase, midcourse phase, or terminal phase.
Boost phase
Intercepting the missile while its rocket motors are firing, usually over the launch territory.
Advantages:
Bright, hot rocket exhaust makes detection and targeting easier.
Decoys cannot be used during boost phase.
At this stage, the missile is full of flammable propellant, which makes it very vulnerable to explosive warheads.
Disadvantages:
Difficult to geographically position interceptors to intercept missiles in boost phase (not always possible without flying over hostile territory).
Short time for intercept (typically about 180 seconds).
Mid-course phase
Intercepting the missile in space after the rocket burns out (examples: American Ground-Based Midcourse Defense (GMD), Chinese SC-19 & DN-series missiles, Israeli Arrow 3 missile).
Advantages:
Extended decision/intercept time (the coast period through space before reentering the atmosphere can be several minutes, up to 20 minutes for an ICBM).
Very large geographic defensive coverage; potentially continental.
Disadvantages:
Requires large, heavy anti-ballistic missiles and sophisticated powerful radar which must often be augmented by space-based sensors.
Must handle potential space-based decoys.
Terminal phase
Intercepting the missile after it reenters the atmosphere (examples: American Aegis Ballistic Missile Defense System, Chinese HQ-29, American THAAD, American Sprint, Russian ABM-3 Gazelle)
Advantages:
Smaller, lighter anti-ballistic missile is sufficient.
Balloon decoys do not work during reentry.
Smaller, less sophisticated radar required.
Disadvantages:
Very short intercept time, possibly less than 30 seconds.
Less defended geographic coverage.
Possible blanketing of target area with hazardous materials in the case of detonation of nuclear warhead(s).
Intercept location relative to the atmosphere
Missile defense can take place either inside (endoatmospheric) or outside (exoatmospheric) the Earth's atmosphere. The trajectory of most ballistic missiles takes them inside and outside the Earth's atmosphere, and they can be intercepted in either place. There are advantages and disadvantages to either intercept technique.
Some missiles such as THAAD can intercept both inside and outside the Earth's atmosphere, giving two intercept opportunities.
Endoatmospheric
Endoatmospheric anti-ballistic missiles are usually shorter ranged (e.g., American MIM-104 Patriot, Indian Advanced Air Defence).
Advantages:
Physically smaller and lighter
Easier to move and deploy
Endoatmospheric intercept means balloon-type decoys won't work
Disadvantages:
Limited range and defended area
Limited decision and tracking time for the incoming warhead
Exoatmospheric
Exoatmospheric anti-ballistic missiles are usually longer-ranged (e.g., American GMD, Ground-Based Midcourse Defense).
Advantages:
More decision and tracking time
Fewer missiles required for defense of a larger area
Disadvantages:
Larger and heavier missiles required
More difficult to transport and place compared to smaller missiles
Must handle decoys
Countermeasures to missile defense
Given the immense variety of ways a defense system can operate (targeting nuclear-armed intercontinental ballistic missiles (ICBMs), tactical, and theater missiles), an attacking party can use several effective exoatmospheric (outside the Earth's atmosphere) countermeasures to deter or defeat certain types of defense systems, missile ranges, and intercept locations. Many defenses against these countermeasures have been implemented and taken into account when constructing missile defense systems; however, this does not guarantee their effectiveness or success. The US Missile Defense Agency has received scrutiny regarding its lack of foresight of these countermeasures, prompting many scientists to perform studies and data analyses of their true effectiveness.
Decoys
A common countermeasure attacking parties use to disrupt the efficacy of missile defense systems is the simultaneous launching of decoys, either from the primary launch site or from the exterior of the main attacking missile itself. These decoys are usually small, lightweight dud rockets that exploit the interceptor's sensor tracking by presenting many different targets in an instant. This is accomplished by releasing decoys during certain phases of flight. Because objects of differing weights follow the same trajectory in space, decoys released during the midcourse phase can prevent interceptor missiles from accurately identifying the warhead. This can force the defense system to attempt to destroy all incoming projectiles, which masks the true attacking missile and lets it slip by the defense system.
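The physics behind midcourse decoys can be made concrete with a short numerical sketch: in vacuum, gravitational acceleration contains no mass term, so a light decoy and a heavy warhead released with identical position and velocity stay on the same path. The starting state, step size, and step count below are illustrative only.

```python
# Sketch: in vacuum, gravitational acceleration is independent of mass, so any
# object released with the same position and velocity follows the same path.
# Simple point-gravity Euler integration; all values are illustrative.
import numpy as np

MU = 3.986e14  # Earth's gravitational parameter, m^3/s^2

def propagate(r, v, dt=1.0, steps=600):
    r, v = np.array(r, float), np.array(v, float)
    for _ in range(steps):
        a = -MU * r / np.linalg.norm(r) ** 3  # note: no mass term appears
        v += a * dt
        r += v * dt
    return r

start_r = [6_871_000.0, 0.0, 0.0]  # roughly 500 km altitude
start_v = [0.0, 7_000.0, 1_000.0]  # m/s
# "Mass" never enters the equations, so a 1 kg decoy and a 1000 kg warhead
# with this release state arrive at the same point:
print(propagate(start_r, start_v))
```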
Common types of decoys
Since this type of deception of a missile system can take many forms, different categorizations of decoys have developed, each of which operates and is designed slightly differently. Details of these types of decoys and their effectiveness were provided in a report by a variety of prominent scientists in 2000.
Replica decoys
This categorization of decoy is the most similar to the standard understanding of what a missile decoy is. These decoys attempt to mask the attacking ICBM by releasing many similar missiles. They confuse the missile defense system through sudden replication and the sheer number of targets the defense has to deal with. Since no defense system is 100% reliable, this confusion causes the system to target each decoy with equal priority, as if it were the actual warhead, drastically increasing the real warhead's chance of passing through the system and striking the target.
Decoys using signature diversity
Similar to replica decoys, these decoys also exploit the limited targeting capacity of missile defense systems. However, rather than using missiles similar in build and signature to the attacking warhead, these decoys each have a slightly different appearance from one another and from the warhead itself. This creates a different kind of confusion within the system: rather than every decoy (and the warhead itself) appearing identical and being treated exactly like the "real" warhead, the targeting system simply cannot tell what is the real threat and what is a decoy amid the mass of differing information. The result is similar to that of the replica decoy, increasing the chance that the real warhead passes through the system and strikes the target.
Decoys using antisimulation
This type of decoy is perhaps the most difficult and subversive for a missile defense system to counter. Instead of exploiting the missile defense system's targeting, this decoy is intended to fool the operation of the system itself. Rather than using sheer quantity to overrun the targeting system, an antisimulation decoy disguises the actual warhead as a decoy, and a decoy as the actual warhead. This allows the attacking warhead to, in some cases, take advantage of the "bulk filtering" used by certain missile defense systems, in which objects whose characteristics poorly match those expected of a warhead are either not observed because of sensor filters, or observed very briefly and immediately rejected without a detailed examination. The actual warhead may simply pass by undetected, or be rejected as a threat.
Cooled shrouds
Another common countermeasure used to fool missile defense systems is the use of cooled shrouds surrounding attacking missiles. This method covers the missile in a steel containment filled with liquid oxygen, liquid nitrogen, or other coolants that prevent the missile from being easily detected. Because many missile defense systems use infrared sensors to detect the heat signatures of incoming missiles, this capsule of extremely cold liquid either renders the incoming missile invisible to detection or degrades the system's ability to detect it quickly enough.
Other types of infrared stealth
Another commonly applied countermeasure to missile defense is the application of low-emissivity coatings. Similar to cooled shrouds, warheads are fully coated with infrared-reflective or infrared-resistant materials that provide resistance to infrared detection comparable to that of cooled shrouds. However, because the most effective coating discovered so far is gold, this method is often passed over in favor of cooled shrouds.
Biological and chemical weapons
This is perhaps the most extreme approach to countering missile defense systems designed to destroy ICBMs and other nuclear weaponry. Rather than using missiles equipped with nuclear warheads as the main weapon of attack, this approach releases biological or chemical submunitions or agents from the missile shortly after the boost phase of the attacking ICBM. Because missile defense systems are designed to destroy a main attacking missile or ICBM, the submunitions are too numerous for the system to defend against, while the chemical or biological agent is distributed across a large area of attack.
There is currently no proposed countermeasure to this type of attack except through diplomacy and the effective banning of biological and chemical weapons in war. However, this does not guarantee that this countermeasure to missile defense systems will not be abused by extremists or terrorists. An example of this severe threat can be seen in North Korea's testing of anthrax-tipped ICBMs in 2017.
Dynamic trajectories
Countries including Iran and North Korea may have sought missiles that can maneuver and vary their trajectories in order to evade missile defense systems.
In March 2022, when Russia used a hypersonic missile against Ukraine, Joe Biden characterized the weapon as "almost impossible to stop". Boost-glide hypersonic weapons shift trajectory to evade current missile-defense systems.
Glide Phase Interceptor (GPI) will provide defense against maneuvering hypersonic weapons.
Multiple independently targetable re-entry vehicles
Another way to counter an ABM system is to attach multiple warheads that break apart upon reentry. If the ABM is able to counter one or two of the warheads via detonation or collision, the others would slip through, either because of limitations on ABM firing speeds or because of radar blackout caused by plasma interference. The first MRV was the Polaris A-3, which carried three warheads and was launched from a submarine. Before regulations limited how many warheads could be carried on a MIRV, the Soviets attached as many as twenty to thirty to their ICBMs.
Jammers
Jammers use radar noise to saturate incoming signals to the point where the radar cannot discern meaningful data about a target's location. They can also imitate the signal of a missile to create a fake target. They are usually spread over planned missile paths into enemy territory to give the missile a clear path to its target. Because these jammers take relatively little electricity and hardware to operate, they are usually small, self-contained, and easily dispersible.
Command and control
Command and control, battle management, and communications (C2BMC)
Command and control, battle management, and communications (C2BMC) systems are hardware and software interfaces that integrate a multitude of sensory information at a centralized center for the ballistic missile defense system (BMDS). The command center allows for human management in accordance with the incorporated sensory information: BMDS status, system coverage, and ballistic missile attacks. The interface system helps build an image of the battle scenario or situation, which enables the user to select the optimal firing solutions.
The first C2BMC system became operational in 2004. Since then, many elements have been added to update the C2BMC, providing further sensory information and allowing for enhanced communications between combatant commanders. The C2BMC can even initiate a live planning system before any engagement has started.
GMD fire control and communication
The function of ground-based midcourse defense (GMD) systems is to provide combatants the ability to seek and destroy intermediate- and long-range ballistic missiles en route to the US homeland. Data transmitted from the defense satellite communication system are compiled into an image using the coordinated information. The system is able to relay real-time data once missiles have been launched. The GMD can also receive information from the C2BMC, which allows Aegis SPY-1 and TPY-2 radars to contribute to the defense system.
A problem with GMD is that its ground systems have increasingly become obsolete, as the technology was initially installed as early as the 1990s. The ground sensors were therefore replaced in 2018. The update added the capability of handling up to 44 interceptors and reduced overlapping redundancies and inefficiencies.
Link-16 is a data link that connects land, air, and sea forces to support joint operations and improve interoperability, including for joint operations of NATO and coalition forces. Link-16 is also used by the US Army and Navy for air and sea operations. An important feature of Link-16 is its ability to broadcast information simultaneously to as many users as needed. Another feature is its ability to act as a network of nodes, which allows a multitude of distributed forces to operate cohesively.
The newest generation of Link-16 is the multifunctional information distribution system low-volume terminal (MIDS LVT). It is a much smaller unit that can be fitted on air, ground, and sea units to incorporate data. The MIDS LVT terminals are installed on most bombers, aircraft, UAVs, and tankers, allowing for the incorporation of most air defense systems.
Integrated Air and Missile Defense Battle Command System
The Integrated Air and Missile Defense Battle Command System (IBCS) is a unified command and control network developed by the US Army. It is designed to integrate data relay between weapon launchers, radars, and operators, allowing air-defense units to fire interceptors with information relayed among radars. The advantage of such a system is that it can increase the area an air defense unit can defend and reduce interceptor expenditure by ensuring that no other air defense unit engages the same target. The IBCS will be able to integrate with the air defense networks of foreign militaries as well as the global C2BMC system.
IBCS engagement stations will integrate raw data from multiple sensors, process it into a single air picture, and select different weapons and launcher locations depending on the detected threat, instead of being limited to the capabilities of a particular unit.
The IBCS was intended to be operational in 2019; between 2016 and 2017, its implementation had to be put on hold due to software issues. In 2021, F-35 sensor data were linked via an airborne gateway to the ground-based IBCS to conduct a simulated Army fires exercise, in support of future Joint All-Domain Command and Control (JADC2).
History
The problem was first studied during the last year of the Second World War. The only countermeasure against the V-2 missile that could be devised was a massive barrage of anti-aircraft guns. Even if the missile's trajectory were accurately calculated, the guns would have only a small probability of destroying it before impact with the ground, and the shells fired by the guns would themselves have caused more damage than the missile when they fell back to the ground. Plans for an operational test began anyway, but the idea was rendered moot when the V-2 launching sites in the Netherlands were captured.
In the 1950s and 1960s, missile defense meant defense against strategic (usually nuclear-armed) missiles. The technology mostly centered on detecting offensive launch events and tracking inbound ballistic missiles, with limited ability to actually defend against the missile. The Soviet Union achieved the first nonnuclear intercept of a ballistic missile warhead by a missile at the Sary Shagan antiballistic missile defense test range on 4 March 1961. Nicknamed the "Griffon" missile system, it would be installed around Leningrad as a test.
Throughout the 1950s and 1960s, the United States Project Nike air defense program focused initially on targeting hostile bombers before shifting focus to targeting ballistic missiles. The first United States anti-ballistic missile system was the Nike Hercules of the 1950s, which could intercept incoming short-range ballistic missiles but not intermediate-range ballistic missiles (IRBMs) or ICBMs. It was followed by the Nike Zeus, which was capable of intercepting ICBMs by using a nuclear warhead, upgraded radar systems, faster computers, and control systems that were more effective in the upper atmosphere. However, it was feared that the missile's electronics might be vulnerable to x-rays from a nuclear detonation in space, so a program was started to devise methods of hardening weapons against radiation damage. By the early 1960s, the Nike Zeus was the first anti-ballistic missile to achieve hit-to-kill (physically colliding with the incoming warhead).
In 1963, Secretary of Defense Robert McNamara diverted funds from the Zeus missile program, and instead directed that funding to the development of the Nike-X system, which used the high-speed, short-range Sprint missile. These missiles were meant to intercept incoming warheads after they had descended from space and were only seconds from their targets. To accomplish this, Nike-X required advances in missile design to make the Sprint missile quick enough to intercept incoming warheads in time. The system also included advanced active electronically scanned array radar systems and a powerful computer complex.
During the development of Nike-X, controversy over the effectiveness of anti-ballistic missile systems became more prominent. Critiques of Nike-X included an estimate that the system could be defeated simply by the Soviets manufacturing more ICBMs, and that those additional ICBMs would cost less than what the United States would spend implementing Nike-X. Additionally, McNamara reported that a ballistic missile defense system would save American lives at a cost of approximately $700 per life, compared to a shelter system that could save lives at a lower cost of approximately $40 per life. As a result of these estimates, McNamara opposed implementation of Nike-X due to its high construction costs and perceived poor cost-effectiveness, and instead expressed support for pursuing arms limitation agreements with the Soviets. After the Chinese government detonated its first hydrogen bomb in Test No. 6 in 1967, McNamara modified the Nike-X program into a program called Sentinel, whose goal was to protect major US cities from a limited ICBM attack, especially one from China.
This would be done by building fifteen sites across the continental US, and one site in each of Alaska and Hawaii. This in turn reduced tensions with the Soviet Union, which retained the offensive capability to overwhelm any US defense. McNamara favored this approach because deploying the Sentinel program was less costly than a fully implemented Nike-X program and would reduce congressional pressure to implement an ABM system. In the months following the announcements regarding the Sentinel program, McNamara stated: "Let me emphasize—and I cannot do so too strongly—that our decision to go ahead with a limited ABM deployment in no way indicates that we feel an agreement with the Soviet Union on the limitation of strategic nuclear offensive and defensive forces is in any way less urgent or desirable."
With the conclusion of the Cuban Missile Crisis and the withdrawal of Soviet missiles from Cuba, the USSR started to think about a missile defense system of its own. A year after the crisis, in 1963, the Soviets created the SA-5. Unlike predecessors such as the SA-1 or the Griffon system, the SA-5 could fly much higher and farther and was fast enough to intercept some missiles; however, its main purpose was to intercept the XB-70 supersonic aircraft the US was planning to build. Since that aircraft never went into production, the project was abandoned and the Soviets reverted to the slower, low-altitude SA-2 and SA-3 systems. In 1964 the Soviets publicly unveiled their newest interceptor missile, the nuclear-armed "Galosh", intended for high-altitude, long-range interception. The Soviet Union began installing the A-35 anti-ballistic missile system around Moscow in 1965 using these "Galosh" missiles, and it became operational by 1971. It consisted of four complexes around Moscow, each with 16 launchers and two missile-tracking radars.
Another notable feature of the A-35 was that it used the first monopulse radar. Developed by OKB-30, a Soviet special design bureau, the effort to create a monopulse radar started in 1954, and the radar was used to conduct the first successful intercept in 1961. The design had known flaws, such as an inability to defend against MIRV and decoy-style weapons: the detonation of a nuclear interceptor missile like the "Galosh" creates a cloud of plasma that temporarily impairs radar readings around the area of the explosion, limiting such systems to a one-shot capacity. In a MIRV-style attack the interceptor would be able to take out one or two warheads, but the rest would slip through. Another issue with the 1965 model was that it relied on 11 large radar stations at six locations on the borders of Russia. These bases were visible to the US and could easily be taken out, leaving the defense system useless in a concentrated and coordinated attack. Finally, the number of missiles each base could hold was limited by the ABM Treaty to a maximum of 100 launchers, meaning that in a massive attack they would be depleted quickly. During installation, a Ministry of Defense commission concluded that the system should not be fully implemented, reducing the capabilities of the completed system.
That system was later upgraded to the A-135 anti-ballistic missile system, which is still operational. The upgrade period started in 1975 and was headed by Dr. A. G. Basistov. When it was completed in 1990, the new A-135 system had a central multifunctional control radar called the "Don" and 100 interceptor missiles. Another improvement was the layering of interceptor missiles: high-acceleration missiles were added for low-flying targets, and the "Galosh"-style missiles were improved further for high-altitude targets. All of these missiles were moved underground into silos to make them less vulnerable, addressing a flaw of the previous system.
The SALT I talks began in 1969 and led to the Anti-Ballistic Missile Treaty in 1972, which ultimately limited the US and USSR to one defensive missile site each, with no more than 100 missiles per site; this covered both ABM interceptor missiles and launchers. Originally, the agreement made by the Nixon administration and the Soviet Union allowed each nation two ABM defensive systems: one located near the nation's capital city and another near its most important or strategic ICBM field. This arrangement provided an effective form of deterrence for both sides, since if either side made an offensive move, the other would be capable of countering it. However, in 1974 both sides reworked the treaty to allow only one ABM defensive system, around either an ICBM launch area or the nation's capital, once both sides had determined that the other was not going to construct a second system. Along with limiting the number of ballistic missile defense systems each nation could have, the treaty also stipulated that any radar for incoming missile detection must be located on the outskirts of the nation's territory and must face outward, away from the country's interior. This treaty set the precedent for future missile defense programs, as any system that was not stationary and land-based was a violation of the treaty.
As a result of the treaty, technical limitations, and public opposition to nearby nuclear-armed defensive missiles, the US Sentinel program was re-designated the Safeguard Program, with the new goal of defending US ICBM sites rather than cities. The US Safeguard system was planned for various sites across the US, including Whiteman AFB in Missouri, Malmstrom AFB in Montana, and Grand Forks AFB in North Dakota. The Anti-Ballistic Missile Treaty of 1972 placed a limit of two ABM systems within the US, causing the Missouri work site to be abandoned, and the partially completed Montana site was abandoned in 1974 after an additional agreement between the US and USSR limited each country to one ABM system. As a result, the only Safeguard system deployed was one defending the LGM-30 Minuteman ICBMs near Grand Forks, North Dakota. It was deactivated in 1976, after being operational for less than four months, due to a changing political climate and concerns over limited effectiveness, low strategic value, and high operational cost.
In the early 1980s, technology had matured enough to consider space-based missile defense options. Precision hit-to-kill systems more reliable than the early Nike Zeus were thought possible. With these improvements, the Reagan administration promoted the Strategic Defense Initiative, an ambitious plan to provide a comprehensive defense against an all-out ICBM attack. In pursuit of that goal, the Strategic Defense Initiative investigated a variety of potential missile-defense systems, including ground-based and space-based missile systems as well as systems using lasers or particle beam weapons. The program faced controversy over the feasibility of the projects it pursued and over the substantial funding and time required to develop the requisite technology. The Strategic Defense Initiative earned the nickname "Star Wars" from criticism by Senator Ted Kennedy, who described it as "reckless Star Wars schemes". Reagan established the Strategic Defense Initiative Organization (SDIO) to oversee the development of the program's projects. Upon request by the SDIO, the American Physical Society (APS) reviewed the concepts being developed within SDIO and concluded that none of the concepts using directed-energy weapons were feasible for an anti-missile defense system without decades of additional research and development. Following the APS's report in 1986, the SDIO shifted focus to a concept called the Strategic Defense System, which would use space-based interceptor missiles (a concept known as "Smart Rocks", later refined as "Brilliant Pebbles") to intercept incoming ballistic missiles from orbit, supplemented by ground-based missile defense systems. In 1993, the SDIO was closed and the Ballistic Missile Defense Organization (BMDO) was created, focusing on ground-based missile defense systems using interceptor missiles. In 2002, BMDO was renamed the Missile Defense Agency (MDA). See National Missile Defense for additional details.
In the early 1990s, missile defense expanded to include tactical missile defense, as seen in the first Gulf War. Although not designed from the outset to intercept tactical missiles, upgrades gave the Patriot system a limited missile defense capability. The effectiveness of the Patriot system in disabling or destroying incoming Scuds was the subject of congressional hearings and reports in 1992.
In the time following the 1972 Anti-Ballistic Missile Treaty, it became increasingly difficult for the United States to create a new missile defense strategy without violating the treaty's terms. During the Clinton administration, the United States' initial goal was to negotiate with Russia, the Soviet Union's successor, in hopes of agreeing to a revision of the treaty signed a few decades prior. In the late 1990s the United States pursued an idea termed National Missile Defense (NMD), which would essentially allow the United States to increase the number of ballistic missile interceptors available to missile defense personnel at the Alaska location. While the initial ABM treaty was designed primarily to deter the Soviet Union and help create a period of détente, the United States now primarily feared other threats such as Iraq, North Korea, and Iran. The Russian government was not interested in any modification of the ABM treaty that would allow the development of technology explicitly banned when the treaty was agreed upon; however, Russia was interested in revising the treaty in a way that would allow a more diplomatic approach to countries potentially harboring missiles. During this period, the United States also sought assistance from Japan for its ballistic missile defense systems. Following North Korea's test of the Taepo Dong missile, the Japanese government became more concerned and more inclined to accept a partnership with the United States on a BMD system. In late 1998, Japan and the United States agreed to the Navy Theater Wide system, under which the two sides would design, construct, and test ballistic missile defense systems together. Near the end of Clinton's time in office, it was determined that the NMD program was not as effective as the United States would have liked, and the decision was made not to deploy the system while Clinton served out the rest of his term. The decision on the future of the NMD program was left to the next president, who would ultimately be George W. Bush.
In the late 1990s and early 2000s, the issue of defense against cruise missiles became more prominent under the new Bush administration. In 2002, President George W. Bush withdrew the US from the Anti-Ballistic Missile Treaty, allowing further development and testing of ABMs under the Missile Defense Agency, as well as deployment of interceptor vehicles beyond the single site allowed under the treaty. During Bush's time in office, the countries seen as potential threats to the United States included North Korea and Iran. While these countries might not have possessed the weaponry of many countries with missile defense systems, the Bush administration expected an Iranian missile test within the next ten years. To counter the potential risk of North Korean missiles, the United States Department of Defense sought to create missile defense systems along the west coast of the United States, namely in California and Alaska.
There are still technological hurdles to an effective defense against ballistic missile attack. The United States National Ballistic Missile Defense System has come under scrutiny over its technological feasibility. Intercepting midcourse (rather than launch- or reentry-stage) ballistic missiles traveling at several miles per second with a "kinetic kill vehicle" has been characterized as trying to hit a bullet with a bullet. Despite this difficulty, there have been several successful test intercepts, and the system was made operational in 2006, while tests and system upgrades continue. Moreover, the warheads or payloads of ballistic missiles can be concealed by a number of different types of decoys. Sensors aboard the kinetic kill vehicle that track and target warheads may have trouble distinguishing the "real" warhead from the decoys, but several tests that included decoys were successful. Nira Schwartz's and Theodore Postol's criticisms of the technical feasibility of these sensors have led to a continuing investigation of research misconduct and fraud at the Massachusetts Institute of Technology.
In February 2007, the US missile defense system consisted of 13 ground-based interceptors (GBIs) at Fort Greely, Alaska, plus two interceptors at Vandenberg Air Force Base, California; the US planned to have 21 interceptor missiles by the end of 2007. The system was initially called National Missile Defense (NMD), but in 2003 the ground-based component was renamed Ground-Based Midcourse Defense (GMD). As of 2014, the Missile Defense Agency had 30 operational GBIs, with a total of 44 GBIs in the missile fields by 2018. In 2021 an additional 20 GBIs, for 64 in total, were planned but not yet fielded; they are tasked with meeting more complex threats than those met by the Exoatmospheric Kill Vehicle (EKV).
Defending against cruise missiles is similar to defending against hostile, low-flying crewed aircraft. As with aircraft defense, countermeasures such as chaff, flares, and low altitude can complicate targeting and missile interception. High-flying radar aircraft such as AWACS can often identify low-flying threats by using doppler radar. Another possible method is using specialized satellites to track these targets. By coupling a target's kinetic inputs with infrared and radar signatures, it may be possible to overcome the countermeasures.
In March 2008, the US Congress convened hearings to re-examine the status of missile defense in US military strategy. Upon taking office, President Obama directed a comprehensive review of ballistic missile defense policy and programs. The review's findings related to Europe were announced on 17 September 2009. The Ballistic Missile Defense Review (BMDR) Report was published in February 2010.
NATO missile defense system
Mechanisms
The Conference of National Armaments Directors (CNAD) is the senior NATO committee which acts as the tasking authority for the theater missile defense program. The ALTBMD Program Management Organization, which comprises a steering committee and a program office hosted by the NATO C3 Agency, directs the program and reports to the CNAD.
The focal point for consultation on full-scale missile defense is the Reinforced Executive Working Group. The CNAD is responsible for conducting technical studies and reporting the outcome to the Group.
The NRC Ad hoc Working Group on TMD is the steering body for NATO-Russia cooperation on theater missile defense.
In September 2018, a consortium of 23 NATO nations met to collaborate on the Nimble Titan 18 integrated air and missile defense (IAMD) campaign of experimentation.
Missile defense
By early 2010, NATO had an initial capability to protect Alliance forces against missile threats and was examining options for protecting territory and populations. This was in response to the proliferation of weapons of mass destruction and their delivery systems, including missiles of all ranges. NATO has conducted three missile defense–related activities:
Active Layered Theater Ballistic Missile Defense System capability
The Active Layered Theater Ballistic Missile Defense System is "ALTBMD" for short.
As of early 2010, the Alliance had an interim capability to protect troops in a specific area against short-range and medium-range ballistic missiles (up to 3,000 kilometers).
The end system consists of a multi-layered system of systems comprising low- and high-altitude defenses (also called lower- and upper-layer defenses), including Battle Management Command, Control, Communications and Intelligence (BMC3I), early warning sensors, radar, and various interceptors. NATO member countries provide the sensors and weapon systems, while NATO has developed the BMC3I segment and facilitates the integration of all these elements.
Missile Defense for the protection of NATO territory
A Missile Defense Feasibility Study was launched after NATO's 2002 Prague summit. The NATO Consultation, Command and Control Agency (NC3A) and NATO's Conference of National Armaments Directors (CNAD) were also involved in negotiations. The study concluded that missile defense is technically feasible, and it provided a technical basis for ongoing political and military discussions regarding the desirability of a NATO missile defense system.
During the 2008 Bucharest summit, the alliance discussed the technical details as well as the political and military implications of the proposed elements of the US missile defense system in Europe. Allied leaders recognized that the planned deployment of European-based US missile defense assets would help protect North American Allies, and agreed that this capability should be an integral part of any future NATO-wide missile defense architecture. However, these positions were revisited after the Obama administration's 2009 decision to replace the long-range interceptor project in Poland with short- and medium-range interceptors.
Russian Foreign Minister Sergei Lavrov has stated that NATO's pattern of deployment of Patriot missiles indicates that these will be used to defend against Iranian missiles in addition to the stated goal of defending against spillover from the Syrian civil war.
Aegis-based system
In order to accelerate the deployment of a missile shield over Europe, Barack Obama sent ships with the Aegis Ballistic Missile Defense System to European waters, including the Black Sea as needed.
In 2012 the system achieved an "interim capability" that for the first time offered American forces in Europe some protection against IRBM attack. However, these interceptors may be poorly placed and of the wrong type to defend the United States, in addition to American troops and facilities in Europe.
On 16 November 2020, the SM-3 Block IIA missile, fired from an Aegis ballistic missile defense-equipped ship, demonstrated that it could shoot down an ICBM target.
ACCS Theater Missile Defense 1
According to BioPrepWatch, NATO has signed a 136 million euro contract with ThalesRaytheonSystems to upgrade its current theater missile defense program.
The project, called ACCS Theater Missile Defense 1, will bring new capabilities to NATO's Air Command and Control System, including updates for processing ballistic missile tracks, additional satellite and radar feeds, and improvements to data communication and correlation features. The upgrade to the theater missile defense command and control system will allow NATO to connect national sensors and interceptors in defense against short- and medium-range ballistic missiles. According to NATO's Assistant Secretary General for Defense Investment, Patrick Auroy, the execution of this contract would be a major technical milestone for NATO's theater missile defense. The project was expected to be complete by 2015. An integrated air and missile defense (IAMD) capability was to be delivered to the operational community by 2016, by which time NATO would have a true theater missile defense.
Defense systems and initiatives
Akash missile surface-to-air missile defense system
Arrow missile
Chū-SAM (中SAM) Japan's JGSDF Medium-Range Surface-to-Air Missile
David's Sling
HQ-9 regional air defence/anti-ballistic missile
IAMD: the SMDC is leading the United States Army's laser efforts to replace the MIM-104 Patriot
Indian Ballistic Missile Defense Program
Iron Dome
Italian-French SAMP/T missile air defense system
KS-1 regional air defence missile
L-SAM
Medium Extended Air Defense System (MEADS)
Patriot surface-to-air missile system
Hawk medium range surface-to-air missile (SAM) system
RIM-161 Standard Missile 3
Russian A-135 anti-ballistic missile system
S-400 Triumf
Skyguard chemical laser-based area defense system proposed by Northrop Grumman
Sky Bow
Strategic Defense Initiative ("Star Wars")
Terminal High Altitude Area Defense (THAAD)
Vigilant Eagle Airport surface-to-air missile protection system
Iranian Bavar-373 ("Belief-373") air defense system
Iranian Arman long-range anti-ballistic missile system
Khordad 15
| Technology | Countermeasures | null |
1099280 | https://en.wikipedia.org/wiki/Eye%20examination | Eye examination | An eye examination, commonly known as an eye test, is a series of tests performed to assess vision and ability to focus on and discern objects. It also includes other tests and examinations of the eyes. Eye examinations are primarily performed by an optometrist, ophthalmologist, or an orthoptist.
Health care professionals often recommend that all people should have periodic and thorough eye examinations as part of routine primary care, especially since many eye diseases are asymptomatic. Typically, a healthy individual who otherwise has no concerns with their eyes receives an eye exam once in their 20s and twice in their 30s.
Eye examinations may detect potentially treatable blinding eye diseases, ocular manifestations of systemic disease, or signs of tumors or other anomalies of the brain.
A full eye examination consists of a comprehensive evaluation of medical history, followed by eight steps: visual acuity, pupil function, extraocular muscle motility and alignment, intraocular pressure, confrontational visual fields, external examination, slit-lamp examination, and fundoscopic examination through a dilated pupil.
A minimal eye examination consists of tests for visual acuity, pupil function, and extraocular muscle motility, as well as direct ophthalmoscopy through an undilated pupil.
Medical History
Collecting medical history is the first and an essential step in an eye examination. Many eye conditions are associated with systemic health, and many diseases can have manifestations in the eye. Certain systemic medications carry ocular side effects and warrant routine eye exams. Personal and family history of eye diseases can help providers identify individuals at higher risk, allowing for early interventions.
Common Chief Complaints
Common chief complaints for an eye exam include vision loss (transient or persistent), blurry vision, double vision, seeing flashes of light, and seeing floaters.
Medical Conditions
Diabetes Mellitus
Diabetes mellitus, or diabetes, can lead to changes in the eye. Individuals with diabetes can develop early cataracts and, in the long term, diabetic retinopathy.
Hypertension
Longstanding hypertension can contribute to microvascular damage of the blood vessels in the retina, leading to hypertensive retinopathy.
Malignant hypertension can lead to papilledema, which is the swelling of the optic nerve. This is a medical emergency and can lead to blindness.
Autoimmune Disorders
Autoimmune disorders can affect the eyes in different ways. Most commonly, Graves' disease can lead to Graves' ophthalmopathy, or Thyroid Eye Disease (TED). Sjögren's disease manifests as dry eye.
Medication Use
Hydroxychloroquine
Hydroxychloroquine, also known as Plaquenil, is an antimalarial medication commonly used to treat lupus and rheumatoid arthritis. Individuals on long-term hydroxychloroquine for more than 5 years are recommended to have a comprehensive eye exam annually. Patients usually also receive a baseline exam before starting the medication to document their baseline eye condition.
Corticosteroids
Corticosteroids can have ocular side effects. They can increase intraocular pressure, which can lead to glaucoma.
Personal History of Eye Conditions
Collecting one's personal history of eye conditions provides valuable information for the eye examination. History of trauma to the eye, such as open globe injury, and prior surgeries, such as refractive surgeries, cataract surgeries, and minimally invasive glaucoma surgery (MIGS) procedures are usually gathered during an eye examination.
Family History of Eye Conditions
A family history of glaucoma, age-related macular degeneration, and other inherited eye diseases is often collected, as these diseases have a genetic component.
The 8-Point Eye Exam
Visual Acuity
Visual acuity is the eye's ability to detect fine details and is the quantitative measure of the eye's ability to see an in-focus image at a certain distance. The standard definition of normal visual acuity (20/20 or 6/6 vision) is the ability to resolve a spatial pattern separated by a visual angle of one minute of arc. The terms 20/20 and 6/6 are derived from standardized-size objects that can be seen by a "person of normal vision" at the specified distance. For example, if one can see at a distance of 20 ft an object that can normally be seen at 20 ft, then one has 20/20 vision. If one can see at 20 ft what a normal person can see at 40 ft, then one has 20/40 vision. Put another way, if one can see at 20 ft only what a person with normal vision can see at 200 ft, then one has 20/200 vision. The 6/6 terminology is used in countries using the metric system, where the figure represents the distance in meters.
This is often measured with a Snellen chart or LogMAR chart.
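To make the fraction arithmetic concrete: the decimal acuity is the testing distance divided by the distance at which a person with normal vision could read the same line, and the LogMAR value is the base-10 logarithm of the minimum angle of resolution, equal to minus the logarithm of the decimal acuity. The following snippet is a simple illustrative conversion, not a clinical tool.

```python
import math

def snellen_to_decimal(test_ft, normal_ft):
    """Decimal acuity: 20/20 -> 1.0, 20/40 -> 0.5, 20/200 -> 0.1."""
    return test_ft / normal_ft

def snellen_to_logmar(test_ft, normal_ft):
    """LogMAR = -log10(decimal acuity): 20/20 -> 0.0, 20/200 -> 1.0."""
    return -math.log10(snellen_to_decimal(test_ft, normal_ft))

for normal in (20, 40, 200):
    print(f"20/{normal}: decimal = {snellen_to_decimal(20, normal):.2f}, "
          f"LogMAR = {snellen_to_logmar(20, normal):.2f}")
```

The same arithmetic works in metric units, which is why 6/6 and 20/20 denote the same decimal acuity of 1.0.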
Measuring Visual Acuity
Visual acuity is usually measured with a Snellen or LogMAR chart on a lit background to give the reader the best chance of detecting the optotypes (letters or non-letter symbols). Distance visual acuity and near visual acuity are often measured separately. Usually, one eye is measured at a time, first without correction (glasses or pinhole), then with correction.
Best corrected visual acuity refers to the best visual acuity one can achieve with corrective lenses. When corrective lenses are not available, a pinhole is often used to simulate the effect of glasses. Any improvement from corrective lenses and/or pinholes is often documented to indicate the individual's refractive potential.
Visual acuity is assigned in the form of a fraction. It is recorded as "20/20" (or another fraction such as 20/40) when all optotypes (letters or symbols) on a specific line of the eye chart are correctly identified. When an individual reads the full 20/40 line and correctly identifies 2 additional letters on the next (20/30) line, the acuity is recorded as 20/40+2. Alternatively, if an individual correctly identifies all optotypes on the 20/40 line except 2, the acuity is recorded as 20/40-2.
When an individual cannot read the chart, visual acuity is assessed using alternative methods that do not involve the chart. CF is used when an individual can see and count fingers at a certain distance; for example, "CF @ 2 ft" refers to "count fingers at 2 feet". HM (hand motion) is used when an individual can only see the direction of hand movement close to the face. LP (light perception) is used when an individual can only detect light but not shapes, motion, or colors. NLP (no light perception) is assigned when an individual cannot detect any light.
Pupil Function
An examination of pupillary function includes inspecting the pupils for equal size (1 mm or less of difference may be normal), regular shape, reactivity to light, and direct and consensual accommodation. These steps can be easily remembered with the mnemonic PERRLA (D+C): Pupils Equal and Round; Reactive to Light and Accommodation (Direct and Consensual).
A swinging-flashlight test may also be desirable if neurologic damage is suspected.
The swinging-flashlight test is the most useful clinical test available to a general physician for the assessment of optic nerve anomalies.
This test detects the afferent pupil defect, also referred to as the Marcus Gunn pupil. It is conducted in a semidarkened room.
In a normal reaction to the swinging-flashlight test, both pupils constrict when one is exposed to light.
As the light is being moved from one eye to another, both eyes begin to dilate, but constrict again when light has reached the other eye.
If there is an efferent defect in the left eye, the left pupil will remain dilated regardless of where the light is shining, while the right pupil will respond normally.
If there is an afferent defect in the left eye, both pupils will dilate when the light is shining on the left eye, but both will constrict when it is shining on the right eye. This is because the left eye will not respond to external stimulus (afferent pathway), but can still receive neural signals from the brain (efferent pathway) to constrict.
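The logic of the swinging-flashlight test described above can be summarized as a small decision table. The following sketch is didactic only; the defect categories and eye labels are simplifications chosen for illustration, not a clinical algorithm.

```python
# Didactic sketch of the swinging-flashlight reasoning described above.
# The "defect" categories and the fixed left-eye examples are illustrative.

def pupil_response(defect, lit_eye):
    """Return (left_pupil, right_pupil) when `lit_eye` is illuminated.

    defect: None, ("afferent", "left"), or ("efferent", "left")
    """
    if defect is None:
        return ("constricted", "constricted")  # normal consensual response
    kind, side = defect
    if kind == "efferent" and side == "left":
        # The left pupil cannot constrict no matter which eye is lit.
        return ("dilated", "constricted")
    if kind == "afferent" and side == "left":
        # The left eye cannot sense light: both pupils dilate when the left
        # eye is lit, and both constrict when the intact right eye is lit.
        both = "dilated" if lit_eye == "left" else "constricted"
        return (both, both)
    raise ValueError("case not covered by this sketch")

for eye in ("left", "right"):
    print(eye, pupil_response(("afferent", "left"), eye))
```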
If there is a unilateral small pupil with normal reactivity to light, it is unlikely that a neuropathy is present.
However, if accompanied by ptosis of the upper eyelid, this may indicate Horner's syndrome.
If there is a small, irregular pupil that constricts poorly to light, but normally to accommodation, this is an Argyll Robertson pupil.
Extraocular Motility and Alignment
Ocular motility should always be tested, especially when patients complain of double vision or physicians suspect neurologic disease. First, the doctor should visually assess the eyes for deviations that could result from strabismus, extraocular muscle dysfunction, or palsy of the cranial nerves innervating the extraocular muscles. Saccades are assessed by having the patient move his or her eye quickly to a target at the far right, left, top, and bottom. This tests for saccadic dysfunction, in which a poor ability of the eyes to "jump" from one place to another may impair reading ability and other skills that require the eyes to fixate on and follow a desired object.
The patient is asked to follow a target with both eyes as it is moved in each of the nine cardinal directions of gaze. The examiner notes the speed, smoothness, range and symmetry of movements and observes for unsteadiness of fixation. These nine fields of gaze test the extraocular muscles: inferior, superior, lateral and medial rectus muscles, as well as the superior and inferior oblique muscles.
Intraocular Pressure
Intraocular pressure (IOP) can be measured by tonometry devices. The eye can be thought of as an enclosed compartment through which there is a constant circulation of fluid that maintains its shape and internal pressure. Tonometry is a method of measuring this pressure using various instruments. The normal range is 10-21 mmHg.
Confrontational Visual Fields
Testing the visual fields consists of confrontation field testing in which each eye is tested separately to assess the extent of the peripheral field.
To perform the test, the individual occludes one eye while fixated on the examiner's eye with the non-occluded eye. The patient is then asked to count the number of fingers that are briefly flashed in each of the four quadrants. This method is preferred to the wiggly finger test that was historically used because it represents a rapid and efficient way of answering the same question: is the peripheral visual field affected?
Common problems of the visual field include scotoma (area of reduced vision), hemianopia (half of visual field lost), homonymous hemianopsia and bitemporal hemianopia.
External Examination
External examination of the eyes consists of inspection of the eyelids, surrounding tissues, and palpebral fissure. Palpation of the orbital rim may also be performed, depending on the presenting signs and symptoms, especially when a fracture is suspected or there is a history of trauma to the head. The general contour and shape of the eyes are observed and compared between the two eyes. The position of the eyelids is checked for abnormalities such as ptosis, a drooping of the upper eyelid that produces an asymmetry between eyelid positions. Any asymmetry, discharge, pus, or changes in color and structure around the eyelid will be noted.
The white part of the eye, the conjunctiva and sclera, is examined next. The conjunctiva and sclera can be inspected by having the individual look up, and shining a light while retracting the upper or lower eyelid. Any changes in color of the conjunctiva or the shapes of the blood vessels will be observed. The conjunctiva that lines the inner side of the eyelids can be observed with gentle pulling and inversion of the eyelids.
Slit-Lamp Examination
Close inspection of the anterior eye structures and ocular adnexa is often done with a slit lamp, a table-mounted microscope with a special adjustable illumination source attached. A small beam of light, adjustable in width, height, incident angle, orientation, and color, is passed over the eye; during slit-lamp examination this beam is often narrowed into a vertical "slit". The examiner views the illuminated ocular structures through an optical system that magnifies the image of the eye. The patient is seated while being examined, with the head stabilized by an adjustable chin rest and a bar around the forehead.
The slit lamp also allows inspection of all the ocular media, from cornea to vitreous, plus a magnified view of the eyelids and other external ocular structures. Fluorescein staining of the tear film before slit-lamp examination may reveal disorders of the ocular surface, such as corneal abrasions or keratitis due to herpes simplex viral infection.
The binocular slit-lamp examination provides a stereoscopic, three-dimensional, magnified view of the eye structures in striking detail, enabling exact anatomical diagnoses to be made for a variety of eye conditions. Specifically, it allows for assessment of the elevation and indentation of structures.
Ophthalmoscopy and gonioscopy examinations can also be performed through the slit lamp when combined with special lenses. These exams help to visualize specific structures, such as the retina and optic nerve at the back of the eye, and the drainage system that controls the intraocular pressure, located in the angle formed between the cornea and the iris.
These lenses include the Goldmann 3-mirror lens and the gonioscopy single-mirror/Zeiss 4-mirror lens for anterior chamber angle structures, while the +90D, +78D, and +66D lenses are used for examination of retinal structures.
Fundoscopic Examination
Examination of the retina (fundus examination) is an important part of the general eye examination. Dilating the pupil with dilating eye drops greatly enhances the view and permits an extensive examination of the peripheral retina. A limited view can be obtained through an undilated pupil, in which case best results are obtained with the room darkened and the patient looking towards the far corner. The appearance of the optic disc and retinal vasculature is also recorded during fundus examination.
Findings that can be identified with fundoscopic examination include different types of retinal and vitreous hemorrhages, neovascularization, cotton wool spots, drusen, changes in the caliber or shape of the retinal blood vessels, changes in optic nerve color and shape, changes in the retinal pigment epithelium (RPE), uveal nevus and melanoma, and retinal holes, tears, or detachments.
Refraction
In physics, "refraction" is the mechanism that bends the path of light as it passes from one medium to another, as when it passes from the air through the parts of the eye. In an eye exam, the term refraction refers to the determination of the ideal correction of refractive error. Refractive error is an optical abnormality in which the shape of the eye fails to bring light into sharp focus on the retina, resulting in blurred or distorted vision. Examples of refractive error are myopia, hyperopia, presbyopia, and astigmatism. The errors are specified in diopters, in a format similar to an eyeglass prescription. A refraction procedure consists of two parts: objective and subjective.
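As a worked illustration of the diopter unit: a lens's power in diopters is the reciprocal of its focal length in meters, and in the simplest thin-lens picture a myopic eye whose far point lies at 0.5 m needs roughly a -2.00 D correctiveive lens. The snippet below is a textbook-style sketch under those simplifying assumptions; it ignores vertex distance and is not a prescribing method.

```python
# Illustrative sketch: diopters are reciprocal meters of focal length.
# Thin-lens simplification with the lens worn at the eye.

def lens_power_diopters(focal_length_m):
    """Power (D) of a lens with the given focal length (m)."""
    return 1.0 / focal_length_m

def myopia_correction_diopters(far_point_m):
    """A myopic eye focused no farther than its far point needs a diverging
    lens whose focal length equals minus the far-point distance."""
    return -1.0 / far_point_m

print(lens_power_diopters(0.5))          # 2.0 D for a 0.5 m focal length
print(myopia_correction_diopters(0.5))   # -2.0 D for a far point at 0.5 m
```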
Objective refraction
An objective refraction is a refraction obtained without receiving any feedback from the patient, using a retinoscope or auto-refractor.
To perform a retinoscopy, the doctor projects a streak of light into a pupil. A series of lenses are flashed in front of the eye. By looking through the retinoscope, the doctor can study the light reflex of the pupil. Based on the movement and orientation of this retinal reflection, the refractive state of the eye is measured.
An auto-refractor is a computerized instrument that shines light into an eye. The light travels through the front of the eye, to the back and then forward through the front again. The information bounced back to the instrument gives an objective measurement of refractive error without asking the patients any questions.
Subjective refraction
A subjective refraction requires responses from the patient. Typically, the patient will sit behind a phoropter or wear a trial frame and look at an eye chart. The eye care professional will change lenses and other settings while asking the patient for feedback on which set of lenses give the best vision.
Eye exams for children
The eye exam for children can be different from that for adults, especially for children at a young age who are unable to read the letters in the Snellen chart or cooperate with the more complex components of the assessment.
It is often recommended that children should have their first eye exam at six months old, or earlier if a parent suspects something is wrong with the eyes. Across the world, screening programs are important for identifying children who have a need for spectacles but either do not wear any or have the wrong prescription. Often, children who are suspected of having amblyopia are too young to be able to verbally recognize letters on the Snellen chart, making the eye examination challenging.
It is critical to identify eye conditions early in children, as early detection and intervention can save vision and lives. Retinoblastoma is a rare but life-threatening eye cancer that primarily affects children under the age of 5. Amblyopia, often called lazy eye, is a common condition in children in which the neurological connection between the eye and brain fails to fully establish, resulting in the brain's inability to process visual information from the eye despite normal structure and function of the eye. The treatment of amblyopia usually involves patching of the good eye. However, this intervention needs to happen within a critical period, usually before the age of 12 and ideally before the age of 7 or 8, in order for affected children to achieve full visual potential in adulthood. Refractive errors, congenital or early childhood cataract, and strabismus can all contribute to the development of amblyopia. Thus, it is crucial to address all of these ophthalmic conditions in childhood urgently.
Visual Acuity in Infant and Toddlers
Information about the mother's pregnancy, the child's birth, and the neonatal period is often critical. Specific details that might be collected include maternal health, gestational age at birth, and neonatal history. The examination begins as soon as the infant or toddler enters the room. Close attention is paid to the infant's visual behaviors, such as tracking and following moving items or people, head position, and abnormal facial features.
Visual acuity is often assessed qualitatively and documented based on the child's ability to fix and follow (F&F). The fixation behavior can be further characterized as central, steady, and maintained (CSM).
Infants born prematurely, with a history of oxygen use in the neonatal period, or with low birth weight are at increased risk of developing retinopathy of prematurity (ROP). Screening for ROP is often initiated promptly while the infants are still in the hospital, and they are often followed up closely in the first few weeks to months of life to monitor the normal development of blood vessels in the premature retina.
Visual Acuity in Preschool Children
For preschool children, depending on their level of literacy, different types of optotypes (e.g., LEA symbols, the tumbling E chart) can be used for the assessment of visual acuity. For children who know some letters, the HOTV chart, which uses only the four letters H, O, T, and V, can be used to reduce confusion. Sometimes a crowded visual acuity test is used to diagnose subtle amblyopia as well.
Red Reflex
Red reflex examination, also called the Brückner test, is a useful test in children to look for misalignment of the eyes and significant refractive errors. A red reflex can be seen when looking at a patient's pupil through a direct ophthalmoscope. This part of the examination is done from a distance of about 1 m, and the reflex is usually symmetrical between the two eyes. An opacity may indicate a cataract.
Visual Field
Visual field testing in young children is often done after they are able to fixate reliably (usually around 4 months). An object is presented in the far periphery and slowly moved toward the center of vision while the child maintains fixation on a central target. The point at which the peripheral object captures the child's attention and prompts a shift in gaze or fixation marks the boundary of the visual field.
Cycloplegic Refraction
Young children have the greatest ability to accommodate, but this strong accommodative ability interferes with the accurate measurement of refractive errors. Accommodation is the ability of the eyes to adjust to different distances of focus, accomplished by the ciliary muscles changing the shape of the lens of the eye. Therefore, to achieve the most accurate measure of refractive errors, cycloplegic refraction, which paralyzes the ciliary muscle and prevents accommodation, is often performed. This involves using cycloplegic eye drops such as cyclopentolate and tropicamide. The effect of these medications can last for several hours to a day.
Retinoscopy is often used in children to measure their refractive errors. This method is a type of objective refraction: the provider shines a narrow beam of light into the eye to see the red reflex of the retina while placing lenses of different powers in front of the eye to find the point at which the movement of the reflex is neutralized. The main advantage of this method is that it does not require verbal feedback from the child and needs only minimal cooperation.
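The arithmetic for converting a retinoscopy finding into a refractive error is simple: the examiner's working distance contributes a dioptric power that must be subtracted from the neutralizing lens. The sketch below illustrates this; the working distance, lens power, and function name are hypothetical, not a clinical standard.

```python
# A minimal sketch (not from the article) of net retinoscopy arithmetic:
# the working distance adds +1/d diopters, which is subtracted from the
# power of the lens that neutralized the reflex.

def net_retinoscopy(neutralizing_lens_diopters: float, working_distance_m: float) -> float:
    """Estimate the refractive error (diopters) from the neutralizing lens
    power and the examiner's working distance."""
    working_distance_correction = 1.0 / working_distance_m  # e.g. 0.67 m -> +1.49 D
    return neutralizing_lens_diopters - working_distance_correction

# Example: reflex neutralized with a +3.50 D lens at a 0.67 m working distance
# => refractive error of about +2.0 D (hyperopia).
print(round(net_retinoscopy(3.50, 0.67), 2))
```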
Children need the following basic visual skills for learning:
Near vision
Distance vision: Tumbling E chart, Landolt C chart
Eye teaming (binocularity)
Eye movement
Accommodation (focusing skills)
Peripheral vision
Eye–hand coordination
Conditions diagnosed during eye examinations
Myopia
Hyperopia
Presbyopia
Amblyopia
Diplopia
Astigmatism
Strabismus
Specialized eye examinations
Color vision
Stereopsis
Near point of convergence
Keratometry
Cycloplegic refraction
Accommodative system
Amplitude of accommodation
Negative relative accommodation
Positive relative accommodation
Vergence system
Optokinetic system
Amsler grid
Gonioscopy
Corneal topography
Corneal pachymetry
Scheimpflug ocular imaging
Retinal tomography
Ocular computed tomography
Scanning laser polarimetry
Electrooculography
Electroretinography
Ultrasound biomicroscopy
Maddox rod
Brock string
Convergence Testing
Worth 4 dot test
Pulfrich effect
| Biology and health sciences | Medical procedures | null |
1099348 | https://en.wikipedia.org/wiki/Morphology%20%28biology%29 | Morphology (biology) | Morphology in biology is the study of the form and structure of organisms and their specific structural features.
This includes aspects of the outward appearance (shape, structure, color, pattern, size), i.e. external morphology (or eidonomy), as well as the form and structure of internal parts like bones and organs, i.e. internal morphology (or anatomy). This is in contrast to physiology, which deals primarily with function. Morphology is a branch of life science dealing with the study of the gross structure of an organism or taxon and its component parts.
History
The etymology of the word "morphology" is from the Ancient Greek μορφή (morphḗ), meaning "form", and λόγος (lógos), meaning "word, study, research".
While the concept of form in biology, opposed to function, dates back to Aristotle (see Aristotle's biology), the field of morphology was developed by Johann Wolfgang von Goethe (1790) and independently by the German anatomist and physiologist Karl Friedrich Burdach (1800).
Among other important theorists of morphology are Lorenz Oken, Georges Cuvier, Étienne Geoffroy Saint-Hilaire, Richard Owen, Carl Gegenbaur and Ernst Haeckel.
In 1830, Cuvier and Saint-Hilaire engaged in a famous debate, which is said to exemplify the two major deviations in biological thinking at the time – whether animal structure was due to function or evolution.
Divisions of morphology
Comparative morphology is an analysis of the patterns of the locus of structures within the body plan of an organism, and forms the basis of taxonomical categorization.
Functional morphology is the study of the relationship between the structure and function of morphological features.
Experimental morphology is the study of the effects of external factors upon the morphology of organisms under experimental conditions, such as the effect of genetic mutation.
Anatomy is a "branch of morphology that deals with the structure of organisms".
Molecular morphology is a rarely used term, usually referring to the superstructure of polymers such as fiber formation or to larger composite assemblies. The term is commonly not applied to the spatial structure of individual molecules.
Gross morphology refers to the collective structures of an organism as a whole: a general description of the form and structure of an organism that takes into account all of its structures without specifying an individual structure.
Morphology and classification
Most taxa differ morphologically from other taxa. Typically, closely related taxa differ much less than more distantly related ones, but there are exceptions to this. Cryptic species are species which look very similar, or perhaps even outwardly identical, but are reproductively isolated. Conversely, sometimes unrelated taxa acquire a similar appearance as a result of convergent evolution or even mimicry. In addition, there can be morphological differences within a species, such as in Apoica flavissima where queens are significantly smaller than workers. A further problem with relying on morphological data is that what may appear morphologically to be two distinct species may in fact be shown by DNA analysis to be a single species. The significance of these differences can be examined through the use of allometric engineering in which one or both species are manipulated to phenocopy the other species.
A step relevant to the evaluation of morphology between traits/features within species, includes an assessment of the terms: homology and homoplasy. Homology between features indicates that those features have been derived from a common ancestor. Alternatively, homoplasy between features describes those that can resemble each other, but derive independently via parallel or convergent evolution.
3D cell morphology: classification
The invention and development of microscopy enabled the observation of 3-D cell morphology with both high spatial and temporal resolution. The dynamic processes of cell morphology, which are controlled by a complex system, play an important role in many biological processes, such as immune and invasive responses.
| Biology and health sciences | Basic anatomy | Biology |
1099396 | https://en.wikipedia.org/wiki/Drug%20interaction | Drug interaction | In pharmaceutical sciences, drug interactions occur when a drug's mechanism of action is affected by the concomitant administration of substances such as foods, beverages, or other drugs. A popular example of drug–food interaction is the effect of grapefruit on the metabolism of drugs.
Interactions may occur by simultaneous targeting of receptors, directly or indirectly. For example, both Zolpidem and alcohol affect GABAA receptors, and their simultaneous consumption results in the overstimulation of the receptor, which can lead to loss of consciousness. When two drugs affect each other, it is a drug–drug interaction (DDI). The risk of a DDI increases with the number of drugs used.
A large share of elderly people regularly use five or more medications or supplements, with a significant risk of side-effects from drug–drug interactions.
Drug interactions can be of three kinds:
additive (the result is what you expect when you add together the effect of each drug taken independently),
synergistic (combining the drugs leads to a larger effect than expected), or
antagonistic (combining the drugs leads to a smaller effect than expected).
It may be difficult to distinguish between synergistic and additive interactions, as the individual effects of each drug may vary.
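As a toy illustration of these three categories, the sketch below compares an observed combined effect against the additive expectation; the effect sizes, the simple additive model, and the tolerance are hypothetical values chosen only to make the comparison concrete.

```python
# A toy classifier (not from the article) for combined drug effects,
# using a plain sum as the additive expectation and a 10% tolerance band.

def classify_interaction(effect_a: float, effect_b: float,
                         combined: float, tol: float = 0.1) -> str:
    expected = effect_a + effect_b  # additive expectation
    if combined > expected * (1 + tol):
        return "synergistic"       # larger effect than expected
    if combined < expected * (1 - tol):
        return "antagonistic"      # smaller effect than expected
    return "additive"              # about what the sum predicts

print(classify_interaction(0.3, 0.4, 0.70))  # additive
print(classify_interaction(0.3, 0.4, 0.95))  # synergistic
print(classify_interaction(0.3, 0.4, 0.50))  # antagonistic
```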
Direct interactions between drugs are also possible and may occur when two drugs are mixed before intravenous injection. For example, mixing thiopentone and suxamethonium can lead to the precipitation of thiopentone.
Interactions based on pharmacodynamics
Pharmacodynamic interactions are the drug–drug interactions that occur at a biochemical level and depend mainly on the biological processes of organisms. These interactions occur due to action on the same targets; for example, the same receptor or signaling pathway.
Pharmacodynamic interactions can occur on protein receptors. Two drugs can be considered homodynamic if they act on the same receptor. Homodynamic effects include drugs that act as (1) pure agonists, if they bind to the main locus of the receptor, causing a similar effect to that of the main drug, (2) partial agonists, if, on binding to a secondary site, they have the same effect as the main drug but with a lower intensity, and (3) antagonists, if they bind directly to the receptor's main locus but their effect is opposite to that of the main drug. These may be competitive antagonists, if they compete with the main drug to bind with the receptor, or uncompetitive antagonists, when the antagonist binds to the receptor irreversibly. Drugs can be considered heterodynamic competitors if they act on distinct receptors with similar downstream pathways.
The interaction may also occur via signal transduction mechanisms. For example, low blood glucose leads to a release of catecholamines, triggering warning symptoms that prompt the person to take action, such as consuming sugary foods. If a patient on insulin, which reduces blood sugar, is also taking beta-blockers, these warning symptoms are blunted and the body is less able to cope with an insulin overdose.
Interactions based on pharmacokinetics
Pharmacokinetics is the field of research studying the chemical and biochemical factors that directly affect dosage and the half-life of drugs in an organism, including absorption, transport, distribution, metabolism and excretion. Compounds may affect any of these processes, ultimately interfering with the flux of drugs in the human body and increasing or reducing drug availability.
Based on absorption
Drugs that change intestinal motility may impact the level of other drugs taken. For example, prokinetic agents increase the intestinal motility, which may cause drugs to go through the digestive system too fast, reducing absorption.
The pharmacological modification of pH can affect other compounds. Drugs can be present in ionized or non-ionized forms depending on their pKa, and neutral compounds are usually better absorbed across membranes. Medications like antacids can increase pH and inhibit the absorption of other drugs such as zalcitabine, tipranavir and amprenavir. The opposite is more common, with, for example, the acid-suppressing drug cimetidine stimulating the absorption of didanosine. Some sources recommend a gap of two to four hours between taking the two drugs to avoid the interaction.
Factors such as food with high fat content may also alter the solubility of drugs and impact their absorption; this is the case for oral anticoagulants and avocado. The formation of non-absorbable complexes may also occur via chelation, in which cations make certain drugs harder to absorb, for example between tetracycline or the fluoroquinolones and dairy products, due to the presence of calcium ions. Other drugs, such as sucralfate, bind to proteins, especially if they have a high bioavailability; for this reason, sucralfate's administration is contraindicated in enteral feeding.
Some drugs also alter absorption by acting on the P-glycoprotein of the enterocytes. This appears to be one of the mechanisms by which grapefruit juice increases the bioavailability of various drugs beyond its inhibitory activity on first pass metabolism.
Based on transport and distribution
Drugs also may affect each other by competing for transport proteins in plasma, such as albumin. In these cases the drug that arrives first binds with the plasma protein, leaving the other drug dissolved in the plasma, modifying its expected concentration. The organism has mechanisms to counteract these situations (by, for example, increasing plasma clearance), and thus they are not usually clinically relevant. They may become relevant if other problems are present, such as issues with drug excretion.
Based on metabolism
Many drug interactions are due to alterations in drug metabolism. Further, human drug-metabolizing enzymes are typically activated through the engagement of nuclear receptors. One notable system involved in metabolic drug interactions is the enzyme system comprising the cytochrome P450 oxidases.
CYP450
Cytochrome P450 is a very large family of haemoproteins (hemoproteins) that are characterized by their enzymatic activity and their role in the metabolism of a large number of drugs. Of the various families present in humans, the most relevant in this respect are families 1, 2 and 3, and the most important enzymes are CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP2E1 and CYP3A4.
The majority of the enzymes are also involved in the metabolism of endogenous substances, such as steroids or sex hormones, which is also important should there be interference with these substances. The function of the enzymes can either be stimulated (enzyme induction) or inhibited (enzyme inhibition).
Through enzymatic inhibition and induction
If drug A is metabolized by a CYP450 enzyme and drug B blocks the activity of that enzyme, a pharmacokinetic alteration of drug A results: drug A remains in the bloodstream for an extended duration, and its concentration eventually increases.
In some instances, the inhibition may instead reduce the therapeutic effect, if the metabolites of the drug are responsible for the effect.
Compounds that increase the efficiency of the enzymes, on the other hand, may have the opposite effect and increase the rate of metabolism.
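A simplified sketch of why inhibition raises drug levels, using a one-compartment model with first-order elimination. The rate constants and the halving of clearance below are assumptions chosen for illustration; real interactions are quantified from clinical pharmacokinetic data.

```python
# Hypothetical one-compartment model: with first-order elimination,
# inhibiting the metabolizing enzyme lowers the elimination rate constant k,
# which lengthens the half-life and raises the concentration remaining
# at any given time. Induction would do the opposite.
import math

def concentration(c0: float, k_per_h: float, t_h: float) -> float:
    """Plasma concentration after t hours under first-order elimination."""
    return c0 * math.exp(-k_per_h * t_h)

k_normal = 0.2      # 1/h, hypothetical elimination rate constant
k_inhibited = 0.1   # enzyme inhibition halves clearance (illustrative)

for k in (k_normal, k_inhibited):
    half_life = math.log(2) / k
    print(f"k={k}/h  t1/2={half_life:.1f} h  C(12 h)={concentration(100, k, 12):.1f}")
# Halving k doubles the half-life (about 3.5 h -> 6.9 h), so more drug remains.
```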
Examples of metabolism-based interactions
The CYP1A2 enzyme provides an example: it has well-characterized substrates (drugs metabolized by this enzyme) as well as inducers and inhibitors of its activity. Some foods also act as inducers or inhibitors of enzymatic activity.
Based on excretion
Renal and biliary excretion
Drugs tightly bound to proteins (i.e. not in the free fraction) are not available for renal excretion.
Filtration depends on a number of factors including the pH of the urine. Drug interactions may affect those points.
With herbal medicines
Herb-drug interactions are drug interactions that occur between herbal medicines and conventional drugs. These types of interactions may be more common than drug-drug interactions because herbal medicines often contain multiple pharmacologically active ingredients, while conventional drugs typically contain only one. Some such interactions are clinically significant, although most herbal remedies are not associated with drug interactions causing serious consequences. Most catalogued herb-drug interactions are moderate in severity. The conventional drugs most commonly implicated in herb-drug interactions are warfarin, insulin, aspirin, digoxin, and ticlopidine, due to their narrow therapeutic indices. The herbs most commonly involved in such interactions are St. John's wort and those containing magnesium, calcium, iron, or ginkgo.
Examples
Examples of herb-drug interactions include, but are not limited to:
St. John's wort affects the clearance of numerous drugs, including cyclosporin, SSRI antidepressants, digoxin, indinavir, and phenprocoumon. It may also interact with the anti-cancer drugs irinotecan and imatinib.
Salvia miltiorrhiza may enhance anticoagulation and bleeding among people taking warfarin.
Allium sativum has been found to decrease the plasma concentration of saquinavir, and may cause hypoglycemia when taken with chlorpropamide.
Ginkgo biloba can cause bleeding when combined with warfarin or aspirin.
Concomitant Ephedra and caffeine use has been reported to, in rare cases, cause fatalities.
Mechanisms
The mechanisms underlying most herb-drug interactions are not fully understood. Interactions between herbal medicines and anticancer drugs typically involve drug-metabolizing cytochrome P450 enzymes. For example, St. John's wort has been shown to induce CYP3A4 and P-glycoprotein in vitro and in vivo.
Underlying factors
The factors or conditions that predispose to the appearance of interactions include old age, as human physiology changes with age and may affect how drugs interact. For example, liver metabolism, kidney function, nerve transmission, and the functioning of bone marrow all decline with age. In addition, in old age there is a sensory decline that increases the chances of errors being made in the administration of drugs. The elderly are also more vulnerable to polypharmacy, and the more drugs a patient takes, the higher the chance of an interaction.
Genetic factors may also affect the enzymes and receptors, thus altering the possibilities of interactions.
Patients with hepatic or renal diseases already may have difficulties metabolizing and excreting drugs, which may exacerbate the effect of interactions.
Some drugs present an intrinsic increased risk for a harmful interaction, including drugs with a narrow therapeutic index, where the difference between the effective dose and the toxic dose is small. The drug digoxin is an example of this type of drug.
Risks are also increased when the drug presents a steep dose-response curve, and small changes in the dosage produce large changes in the drug's concentration in the blood plasma.
Epidemiology
As of 2008, among adults in the United States of America older than 56, 4% were taking medication and/or supplements that put them at risk of a major drug interaction. Potential drug-drug interactions have increased over time and are more common in the less-educated elderly, even after controlling for age, sex, place of residence, and comorbidity.
| Biology and health sciences | General concepts_2 | Health |
1099413 | https://en.wikipedia.org/wiki/Orbital%20eccentricity | Orbital eccentricity | In astrodynamics, the orbital eccentricity of an astronomical object is a dimensionless parameter that determines the amount by which its orbit around another body deviates from a perfect circle. A value of 0 is a circular orbit, values between 0 and 1 form an elliptic orbit, 1 is a parabolic escape orbit (or capture orbit), and greater than 1 is a hyperbola. The term derives its name from the parameters of conic sections, as every Kepler orbit is a conic section. It is normally used for the isolated two-body problem, but extensions exist for objects following a rosette orbit through the Galaxy.
Definition
In a two-body problem with inverse-square-law force, every orbit is a Kepler orbit. The eccentricity of this Kepler orbit is a non-negative number that defines its shape.
The eccentricity may take the following values:
Circular orbit: e = 0
Elliptic orbit: 0 < e < 1
Parabolic trajectory: e = 1
Hyperbolic trajectory: e > 1
The eccentricity is given by
\[
e = \sqrt{1 + \frac{2 E L^{2}}{m_{\mathrm{red}}\,\alpha^{2}}}
\]
where \(E\) is the total orbital energy, \(L\) is the angular momentum, \(m_{\mathrm{red}}\) is the reduced mass, and \(\alpha\) the coefficient of the inverse-square law central force such as in the theory of gravity or electrostatics in classical physics:
\[
F = \frac{\alpha}{r^{2}}
\]
(\(\alpha\) is negative for an attractive force, positive for a repulsive one; related to the Kepler problem)
or in the case of a gravitational force:
\[
e = \sqrt{1 + \frac{2 \varepsilon h^{2}}{\mu^{2}}}
\]
where \(\varepsilon\) is the specific orbital energy (total energy divided by the reduced mass), \(\mu\) the standard gravitational parameter based on the total mass, and \(h\) the specific relative angular momentum (angular momentum divided by the reduced mass).
For values of \(e\) from 0 to just under 1 the orbit's shape is an increasingly elongated (or flatter) ellipse; for values of \(e\) from just over 1 to infinity the orbit is a hyperbola branch making a total turn of \(2\,\operatorname{arccsc} e\), decreasing from 180 to 0 degrees. Here, the total turn is analogous to turning number, but for open curves (an angle covered by the velocity vector). The limit case between an ellipse and a hyperbola, when \(e\) equals 1, is a parabola.
Radial trajectories are classified as elliptic, parabolic, or hyperbolic based on the energy of the orbit, not the eccentricity. Radial orbits have zero angular momentum and hence eccentricity equal to one. Keeping the energy constant and reducing the angular momentum, elliptic, parabolic, and hyperbolic orbits each tend to the corresponding type of radial trajectory while \(e\) tends to 1 (or, in the parabolic case, remains 1).
For a repulsive force only the hyperbolic trajectory, including the radial version, is applicable.
For elliptical orbits, a simple proof shows that \(\arcsin e\) gives the projection angle of a perfect circle to an ellipse of eccentricity \(e\). For example, to view the eccentricity of the planet Mercury (e = 0.2056), one must simply calculate the inverse sine to find the projection angle of 11.86 degrees. Then, tilting any circular object by that angle, the apparent ellipse of that object projected to the viewer's eye will be of the same eccentricity.
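A compact statement of that projection relation, with Mercury's value plugged in (a circle of radius \(a\) tilted by angle \(\theta\) projects to an ellipse with semi-minor axis \(b = a\cos\theta\)):
\[
e = \sqrt{1-\left(\frac{b}{a}\right)^{2}} = \sqrt{1-\cos^{2}\theta} = \sin\theta,
\qquad
\theta = \arcsin e = \arcsin(0.2056) \approx 11.86^{\circ}.
\]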
Etymology
The word "eccentricity" comes from Medieval Latin eccentricus, derived from Greek ekkentros "out of the center", from ek-, "out of" + kentron "center". "Eccentric" first appeared in English in 1551, with the definition "...a circle in which the earth, sun. etc. deviates from its center". In 1556, five years later, an adjectival form of the word had developed.
Calculation
The eccentricity of an orbit can be calculated from the orbital state vectors as the magnitude of the eccentricity vector:
\[
e = \left| \mathbf{e} \right|
\]
where:
\(\mathbf{e}\) is the eccentricity vector ("Hamilton's vector"), \(\mathbf{e} = \dfrac{\mathbf{v} \times \mathbf{h}}{\mu} - \dfrac{\mathbf{r}}{\left|\mathbf{r}\right|}\).
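A minimal numerical sketch of this formula; the two-body setup, Earth's gravitational parameter, and the sample state vector are assumptions chosen for illustration.

```python
# Compute eccentricity as |e_vec| with e_vec = (v x h)/mu - r/|r|,
# from Cartesian position and velocity state vectors.
import numpy as np

MU_EARTH = 3.986004418e14  # standard gravitational parameter, m^3/s^2 (assumed central body)

def eccentricity(r: np.ndarray, v: np.ndarray, mu: float = MU_EARTH) -> float:
    h = np.cross(r, v)                                   # specific angular momentum
    e_vec = np.cross(v, h) / mu - r / np.linalg.norm(r)  # eccentricity vector
    return float(np.linalg.norm(e_vec))

# Sanity check: at radius r_c the circular speed is sqrt(mu/r_c), so e should be ~0.
r = np.array([7.0e6, 0.0, 0.0])                          # m
v = np.array([0.0, (MU_EARTH / 7.0e6) ** 0.5, 0.0])      # m/s, tangential
print(f"{eccentricity(r, v):.2e}")                       # ~0 (circular orbit)
```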
For elliptical orbits it can also be calculated from the periapsis and apoapsis, since \(r_\text{per} = a(1-e)\) and \(r_\text{ap} = a(1+e)\), where \(a\) is the length of the semi-major axis:
\[
e = \frac{r_\text{ap} - r_\text{per}}{r_\text{ap} + r_\text{per}}
\]
where:
\(r_\text{ap}\) is the radius at apoapsis (also "apofocus", "aphelion", "apogee"), i.e., the farthest distance of the orbit to the center of mass of the system, which is a focus of the ellipse.
\(r_\text{per}\) is the radius at periapsis (or "perifocus" etc.), the closest distance.
The semi-major axis, \(a\), is also the path-averaged distance to the centre of mass, while the time-averaged distance is \(a(1 + e^{2}/2)\).
The eccentricity of an elliptical orbit can be used to obtain the ratio of the apoapsis radius to the periapsis radius:
\[
\frac{r_\text{ap}}{r_\text{per}} = \frac{a(1+e)}{a(1-e)} = \frac{1+e}{1-e}
\]
For Earth, orbital eccentricity \(e \approx 0.0167\); apoapsis is aphelion and periapsis is perihelion, relative to the Sun.
For Earth's annual orbit path, the ratio of longest radius (\(r_\text{ap}\)) to shortest radius (\(r_\text{per}\)) is
\[
\frac{r_\text{ap}}{r_\text{per}} = \frac{1+0.0167}{1-0.0167} \approx 1.034 .
\]
Examples
The table lists the values for all planets and dwarf planets, and selected asteroids, comets, and moons. Mercury has the greatest orbital eccentricity of any planet in the Solar System (e = 0.2056), followed by Mars at 0.0934. Such eccentricity is sufficient for Mercury to receive twice as much solar irradiation at perihelion compared to aphelion. Before its demotion from planet status in 2006, Pluto was considered to be the planet with the most eccentric orbit (e = 0.248). Other trans-Neptunian objects have significant eccentricity, notably the dwarf planet Eris (0.44). Even further out, Sedna has an extremely high eccentricity of about 0.85 due to its estimated aphelion of 937 AU and perihelion of about 76 AU, possibly under the influence of unknown object(s).
The eccentricity of Earth's orbit is currently about 0.0167; its orbit is nearly circular. Neptune and Venus have even lower eccentricities of 0.0086 and 0.0068 respectively, the latter being the least orbital eccentricity of any planet in the Solar System. Over hundreds of thousands of years, the eccentricity of the Earth's orbit varies from nearly 0.0034 to almost 0.058 as a result of gravitational attractions among the planets.
Luna's value is 0.0549, the most eccentric of the large moons in the Solar System. The four Galilean moons (Io, Europa, Ganymede and Callisto) have eccentricities of less than 0.01. Neptune's largest moon Triton has an eccentricity of 0.000016, the smallest eccentricity of any known moon in the Solar System; its orbit is as close to a perfect circle as can be currently measured. Smaller moons, particularly irregular moons, can have significant eccentricities, such as Neptune's third largest moon, Nereid, at 0.75.
Most of the Solar System's asteroids have orbital eccentricities between 0 and 0.35 with an average value of 0.17. Their comparatively high eccentricities are probably due to the influence of Jupiter and to past collisions.
Comets have very different values of eccentricity. Periodic comets have eccentricities mostly between 0.2 and 0.7, but some of them have highly eccentric elliptical orbits with eccentricities just below 1; for example, Halley's Comet has a value of 0.967. Non-periodic comets follow near-parabolic orbits and thus have eccentricities even closer to 1. Examples include Comet Hale–Bopp with a value of 0.995, Comet Ikeya–Seki with a value of about 0.9999 and Comet McNaught (C/2006 P1) with a value of about 1.000019. As the values of the first two are less than 1, their orbits are elliptical and they will return.
McNaught has a hyperbolic orbit but, within the influence of the inner planets, is still bound to the Sun with an orbital period of about 10⁵ years. Comet C/1980 E1 has the largest eccentricity of any known hyperbolic comet of solar origin, with an eccentricity of 1.057, and will eventually leave the Solar System.
Oumuamua is the first interstellar object found passing through the Solar System. Its orbital eccentricity of 1.20 indicates that Oumuamua has never been gravitationally bound to the Sun. It was discovered 0.2 AU (about 30 million km; 19 million mi) from Earth and is roughly 200 meters in diameter. It has an interstellar speed (velocity at infinity) of 26.33 km/s (58,900 mph).
Mean average
The mean eccentricity of an object is the average eccentricity as a result of perturbations over a given time period. Neptune currently has an instant (current epoch) eccentricity of 0.0113, but from 1800 to 2050 it has a mean eccentricity of 0.00859.
Climatic effect
Orbital mechanics require that the duration of the seasons be proportional to the area of Earth's orbit swept between the solstices and equinoxes, so when the orbital eccentricity is extreme, the seasons that occur on the far side of the orbit (aphelion) can be substantially longer in duration. Northern hemisphere autumn and winter occur at closest approach (perihelion), when Earth is moving at its maximum velocity—while the opposite occurs in the southern hemisphere. As a result, in the northern hemisphere, autumn and winter are slightly shorter than spring and summer—but in global terms this is balanced with them being longer below the equator. In 2006, the northern hemisphere summer was 4.66 days longer than winter, and spring was 2.9 days longer than autumn due to orbital eccentricity.
Apsidal precession also slowly changes the place in Earth's orbit where the solstices and equinoxes occur. This is a slow change in the orbit of Earth, not the axis of rotation, which is referred to as axial precession. The climatic effects of this change are part of the Milankovitch cycles. Over the next 10,000 years, the northern hemisphere winters will become gradually longer and summers will become shorter. Any cooling effect in one hemisphere is balanced by warming in the other, and any overall change will be counteracted by the fact that the eccentricity of Earth's orbit will be almost halved. This will reduce the mean orbital radius and raise temperatures in both hemispheres closer to the mid-interglacial peak.
Exoplanets
Of the many exoplanets discovered, most have a higher orbital eccentricity than planets in the Solar System. Exoplanets found with low orbital eccentricity (near-circular orbits) are very close to their star and are tidally-locked to the star. All eight planets in the Solar System have near-circular orbits. The exoplanets discovered show that the Solar System, with its unusually-low eccentricity, is rare and unique. One theory attributes this low eccentricity to the high number of planets in the Solar System; another suggests it arose because of its unique asteroid belts. A few other multiplanetary systems have been found, but none resemble the Solar System. The Solar System has unique planetesimal systems, which led the planets to have near-circular orbits. Solar planetesimal systems include the asteroid belt, Hilda family, Kuiper belt, Hills cloud, and the Oort cloud. The exoplanet systems discovered have either no planetesimal systems or a very large one. Low eccentricity is needed for habitability, especially advanced life. High multiplicity planet systems are much more likely to have habitable exoplanets. The grand tack hypothesis of the Solar System also helps understand its near-circular orbits and other unique features.
| Physical sciences | Celestial mechanics | Astronomy |
9107162 | https://en.wikipedia.org/wiki/ScRGB | ScRGB | scRGB is a wide color gamut RGB color space created by Microsoft and HP that uses the same color primaries and white/black points as the sRGB color space but allows coordinates below zero and greater than one. The full range is −0.5 through just less than +7.5.
Negative numbers enable scRGB to encompass most of the CIE 1931 color space while maintaining simplicity and backward compatibility with sRGB, by not changing the primary colors. However, this means approximately 80% of the scRGB color space consists of imaginary colors. Numbers greater than 1.0 allow high dynamic range images to be represented, though the dynamic range is less than that of other formats.
Encoding
Two encodings are defined for the individual primaries: a linear 16 bit per channel encoding and a nonlinear 12 bit per channel encoding.
The 16-bit scRGB(16) encoding maps the linear RGB channels to integer codes as code = round(8192 × value) + 4096. Compared to 8-bit sRGB this gives finer color resolution near 0.0 and more than 14 times the color resolution near 1.0. Storage as 16 bits clamps the linear range to −0.5 through just under +7.5.
The 12-bit scRGB-nl encoding passes the linear RGB channels through the same opto-electric conversion function as sRGB (for negative numbers the mirrored function −f(−x) is used) and then scales and offsets the result into the 12-bit code range. This gives exactly 5 times the color resolution of 8-bit sRGB, and 8-bit sRGB values can be converted directly with a fixed multiply and offset. The linear range is clamped to an interval slightly larger than that of the 16-bit encoding.
A 12-bit encoding called scYCC-nl is the conversion of the non-linear sRGB levels to JFIF-Y'CbCr, with the Y', Cb and Cr channels each scaled and offset into the 12-bit code range. This form can allow greater compression and direct conversion to/from JPEG files and video hardware.
With the addition of an alpha channel with the same number of bits, the 16-bit encoding may be seen referred to as 64-bit and the 12-bit encoding as 48-bit. Alpha is not encoded as above, however: it is instead a linear 0–1 range multiplied by 2ⁿ − 1, where n is 12 or 16.
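A small sketch of the 16-bit linear mapping described above. The scale of 8192 and offset of 4096 follow from the encoding as stated in this section; treat the code as illustrative rather than a reference implementation.

```python
# Encode/decode a single linear scRGB(16) channel value.

def scrgb16_encode(value: float) -> int:
    """Map a linear channel value (about -0.5 .. just under +7.5) to a 16-bit code."""
    code = round(8192 * value) + 4096
    return max(0, min(65535, code))      # 16-bit storage clamps the range

def scrgb16_decode(code: int) -> float:
    return (code - 4096) / 8192.0

print(scrgb16_encode(0.0), scrgb16_encode(1.0))  # 4096 12288 (sRGB black and white)
print(scrgb16_decode(65535))                     # ~7.4999, the upper end of the range
```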
Usage
The first implementation of scRGB was the GDI+ API in Windows Vista. At WinHEC 2008 Microsoft announced that Windows 7 would support 48-bit scRGB (which for HDMI can be converted and output as xvYCC). The components in Windows 7 that support 48-bit scRGB are Direct3D, the Windows Imaging Component, and the Windows Color System; they support it both in full-screen exclusive mode and in video overlays.
Origin of sc in scRGB
The origin of the sc in scRGB is shrouded in mystery. Officially it stands for nothing. According to Michael Stokes (the national and international leader of the International Electrotechnical Commission, or IEC, group working on scRGB), the name appeared when the Japanese national committee requested a name change from the earlier XsRGB (excess RGB). The two leading candidates for meaning are "specular RGB" because scRGB supports whites greater than the diffuse 1.0 values, and "standard compositing RGB" because the linearity, floating-point support, HDR (high dynamic range) support, and wide gamut support are ideally suited for compositing. This meaning also implicitly emphasizes that scRGB is not intended to be directly supported in devices or formats, since by definition scRGB encompasses values that are beyond both the human visual system and (even theoretically) realizable physical devices.
| Physical sciences | Basics | Physics |
655358 | https://en.wikipedia.org/wiki/Standard%20time | Standard time | Standard time is the synchronization of clocks within a geographical region to a single time standard, rather than a local mean time standard. Generally, standard time agrees with the local mean time at some meridian that passes through the region, often near the centre of the region. Historically, standard time was established during the 19th century to aid weather forecasting and train travel. Applied globally in the 20th century, the geographical regions became time zones. The standard time in each time zone has come to be defined as an offset from Universal Time. A further offset is applied for part of the year in regions with daylight saving time.
The adoption of standard time, because of the inseparable correspondence between longitude and time, solidified the concept of halving the globe into the Eastern Hemisphere and the Western Hemisphere, with one Prime Meridian replacing the various prime meridians that had previously been used.
History of standard time
During the 19th century, scheduled steamships and trains required time standardisation in the industrialized world.
Great Britain
A standardised time system was first used by British railways on 1 December 1847, when they switched from local mean time, which varied from place to place, to Greenwich Mean Time (GMT). It was also given the name railway time, reflecting the important role the railway companies played in bringing it about. The vast majority of Great Britain's public clocks were standardised to GMT by 1855.
North America
Until 1883, each United States railroad chose its own time standards. The Pennsylvania Railroad used the "Allegheny Time" system, an astronomical timekeeping service which had been developed by Samuel Pierpont Langley at the University of Pittsburgh's Allegheny Observatory (then known as the Western University of Pennsylvania, located in Pittsburgh, Pennsylvania). Instituted in 1869, the Allegheny Observatory's service is believed to have been the first regular and systematic system of time distribution to railroads and cities as well as the origin of the modern standard time system. By 1870 the Allegheny Time service extended over 2,500 miles with 300 telegraph offices receiving time signals.
However, almost all railroads out of New York ran on New York time, and railroads west from Chicago mostly used Chicago time, but between Chicago and Pittsburgh/Buffalo the norm was Columbus time, even on railroads such as the PFtW&C and LS&MS, which did not run through Columbus. The Santa Fe Railroad used Jefferson City (Missouri) time all the way to its west end at Deming, New Mexico, as did the east–west lines across Texas; Central Pacific and Southern Pacific Railroads used San Francisco time all the way to El Paso. The Northern Pacific Railroad had seven time zones between St. Paul and the 1883 west end of the railroad at Wallula Jct; the Union Pacific Railway was at the other extreme, with only two time zones between Omaha and Ogden.
In 1870, Charles F. Dowd proposed four time zones based on the meridian through Washington, DC, for North American railroads. In 1872 he revised his proposal to base it on the Greenwich meridian. Sandford Fleming, a Scottish-born Canadian engineer, proposed worldwide Standard Time at a meeting of the Royal Canadian Institute on February 8, 1879. Cleveland Abbe advocated standard time to better coordinate international weather observations and resultant weather forecasts, which had been coordinated using local solar time. In 1879 he recommended four time zones across the contiguous United States, based upon Greenwich Mean Time. The General Time Convention (renamed the American Railway Association in 1891), an organization of US railroads charged with coordinating schedules and operating standards, became increasingly concerned that if the US government adopted a standard time scheme it would be disadvantageous to its member railroads. William F. Allen, the Convention secretary, argued that North American railroads should adopt a five-zone standard, similar to the one in use today, to avoid government action. On October 11, 1883, the heads of the major railroads met in Chicago at the Grand Pacific Hotel and agreed to adopt Allen's proposed system.
The members agreed that on Sunday, November 18, 1883, all United States and Canadian railroads would readjust their clocks and watches to reflect the new five-zone system on a telegraph signal from the Allegheny Observatory in Pittsburgh at exactly noon on the 90th meridian. Although most railroads adopted the new system as scheduled, some did so early on October 7 and others late on December 2. The Intercolonial Railway serving the Canadian maritime provinces of New Brunswick and Nova Scotia just east of Maine decided not to adopt Intercolonial Time based on the 60th meridian west of Greenwich, instead adopting Eastern Time, so only four time zones were actually adopted by American and Canadian railroads in 1883. Major American observatories, including the Allegheny Observatory, the United States Naval Observatory, the Harvard College Observatory, and the Yale University Observatory, agreed to provide telegraphic time signals at noon Eastern Time.
Standard time was not enacted into US law until the 1918 Standard Time Act established standard time in time zones; the law also instituted daylight saving time (DST). The daylight saving time portion of the law was repealed in 1919 over a presidential veto, but was re-established nationally during World War II. In 2007 the US enacted a federal law formalising the use of Coordinated Universal Time as the basis of standard time, and the role of the Secretary of Commerce (effectively, the National Institute of Standards and Technology) and the Secretary of the Navy (effectively, the US Naval Observatory) in interpreting standard time.
In 1999, standard time was inducted into the North America Railway Hall of Fame in the category "National: Technical Innovations."
The Dominion of Newfoundland, whose capital St. John's falls almost exactly midway between the meridians anchoring the Atlantic Time Zone and the Greenland Time Zone, voted in 1935 to create a half-hour offset time zone known as the Newfoundland Time Zone, at three and a half hours behind Greenwich time.
The Netherlands
In the Netherlands, introduction of the railways made it desirable to create a standard time. On 1 May 1909, Amsterdam Time or Dutch Time was introduced. Before that, time was measured in different cities; in the east of the country, this was a few minutes earlier than in the west. After that, all parts of the country had the same local time—that of the Wester Tower in Amsterdam (Westertoren/4°53'01.95" E). This time was indicated as GMT +0h 19m 32.13s until 17 March 1937, after which it was simplified to GMT+0h20m. This time zone was also known as the Loenen time or Gorinchem time, as this was the exact time in both Loenen and Gorinchem. At noon in Amsterdam, it was 11:40 in London and 12:40 in Berlin.
The shift to the current Central European Time zone took place on 16 May 1940. The German occupiers ordered the clock to be moved an hour and forty minutes forward. This time was kept in summer and winter throughout 1941 and 1942. It was only in November 1942 that a different Winter time was introduced, and the time was adjusted one hour backwards. This lasted for only three years; after the liberation of the Netherlands in 1945, Summer time was abolished for over thirty years, so during those years, standard time was 40 minutes ahead of the original Amsterdam Time. As of 2017, the Netherlands is in line with Central European Time (GMT+1 in the winter, GMT+2 in the summer, which is significantly different from Amsterdam Time).
New Zealand
In 1868, New Zealand was the first country in the world to establish a nationwide standard time.
A telegraph cable between New Zealand's two main islands became the instigating factor for the establishment of "New Zealand time". In 1868, the Telegraph Department adopted "Wellington time" as the standard time across all their offices so that opening and closing times could be synchronised. The Post Office, which usually shared the same building, followed suit. However, protests that time was being dictated by one government department led to a resolution in parliament to establish a standard time for the whole country.
The director of the Geological Survey, James Hector, selected New Zealand time to be at the meridian 172°30′E. This was very close to the country's mean longitude and exactly 11.5 hours in advance of Greenwich Mean Time. It came into effect on 2 November 1868.
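The arithmetic behind a meridian-based standard time is simple: the Earth turns 360 degrees in 24 hours, so each 15 degrees of longitude corresponds to one hour. The helper below is illustrative, not a historical algorithm.

```python
# Convert a standard meridian (degrees, east positive) to the implied UTC offset.

def offset_hours(longitude_deg: float) -> float:
    """Hours ahead of (positive) or behind (negative) Greenwich."""
    return longitude_deg / 15.0

print(offset_hours(172.5))   # 11.5 -> New Zealand's 1868 standard, GMT+11:30
print(offset_hours(-90.0))   # -6.0 -> the 90th meridian west used for the 1883 noon signal
```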
For over fifty years, the Colonial Time Service Observatory in Wellington determined the correct time each morning. At 9 a.m. each day, it was transmitted by Morse code to post offices and railway stations around the country. In 1920, radio time signals began broadcasting, greatly increasing the accuracy of the time nationwide.
| Technology | Timekeeping | null |
656713 | https://en.wikipedia.org/wiki/Beta%20Pictoris | Beta Pictoris | Beta Pictoris (abbreviated β Pictoris or β Pic) is the second brightest star in the constellation Pictor. It is located 63.4 light-years from the Solar System, and is 1.75 times as massive and 8.7 times as luminous as the Sun. The Beta Pictoris system is very young, only 20 to 26 million years old, although it is already in the main sequence stage of its evolution. Beta Pictoris is the title member of the Beta Pictoris moving group, an association of young stars which share the same motion through space and have the same age.
The European Southern Observatory (ESO) has confirmed the presence of two planets, Beta Pictoris b and Beta Pictoris c, through the use of direct imagery. Both planets are orbiting in the plane of the debris disk surrounding the star. Beta Pictoris c is currently the closest extrasolar planet to its star ever photographed: the observed separation is roughly the same as the distance between the asteroid belt and the Sun.
Beta Pictoris shows an excess of infrared emission compared to normal stars of its type, which is caused by large quantities of dust and gas (including carbon monoxide) near the star. Detailed observations reveal a large disk of dust and gas orbiting the star, which was the first debris disk to be imaged around another star. In addition to the presence of several planetesimal belts and cometary activity, there are indications that planets have formed within this disk and that the processes of planet formation may be ongoing. Material from the Beta Pictoris debris disk is thought to be the dominant source of interstellar meteoroids in the Solar System.
Location and visibility
Beta Pictoris is a star in the southern constellation of Pictor, the Easel, and is located to the west of the bright star Canopus. It traditionally marked the sounding line of the ship Argo Navis, before the constellation was split. The star has an apparent visual magnitude of 3.861, so is visible to the naked eye under good conditions, though light pollution may result in stars dimmer than magnitude 3 being too dim to see. It is the second brightest in its constellation, exceeded only by Alpha Pictoris, which has an apparent magnitude of 3.30.
The distance to Beta Pictoris and many other stars was measured by the Hipparcos satellite. This was done by measuring its trigonometric parallax: the slight displacement in its position observed as the Earth moves around the Sun. Beta Pictoris was found to exhibit a parallax of 51.87 milliarcseconds, a value which was later revised to 51.44 milliarcseconds when the data was reanalyzed taking systematic errors more carefully into account. The distance to Beta Pictoris is therefore 63.4 light years, with an uncertainty of 0.1 light years.
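For reference, the conversion from the revised parallax to the quoted distance is direct (1 pc ≈ 3.2616 ly):
\[
d = \frac{1}{p} = \frac{1}{0.05144''} \approx 19.44\ \text{pc} \approx 63.4\ \text{ly}.
\]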
The Hipparcos satellite also measured the proper motion of Beta Pictoris: it is traveling eastwards at a rate of 4.65 milliarcseconds per year, and northwards at a rate of 83.10 milliarcseconds per year. Measurements of the Doppler shift of the star's spectrum reveal it is moving away from Earth at a rate of 20 km/s. Several other stars share the same motion through space as Beta Pictoris and likely formed from the same gas cloud at roughly the same time: these comprise the Beta Pictoris moving group.
Physical properties
Spectrum, luminosity and variability
According to measurements made as part of the Nearby Stars Project, Beta Pictoris has a spectral type of A6V and an effective temperature of 8,052 K, which is hotter than the Sun's 5,778 K. Analysis of the spectrum reveals that the star contains a slightly higher ratio of heavy elements, which are termed metals in astronomy, to hydrogen than the Sun. This value is expressed as the quantity [M/H], the base-10 logarithm of the ratio of the star's metal fraction to that of the Sun. In the case of Beta Pictoris, the value of [M/H] is 0.05, which means that the star's metal fraction is 12% greater than that of the Sun.
Analysis of the spectrum can also reveal the surface gravity of the star. This is usually expressed as log g, the base-10 logarithm of the gravitational acceleration given in CGS units, in this case, cm/s². Beta Pictoris has log g=4.15, implying a surface gravity of 140 m/s², which is about half of the gravitational acceleration at the surface of the Sun (274 m/s²).
As an A-type main sequence star, Beta Pictoris is more luminous than the Sun: combining the apparent magnitude of 3.861 with the distance of 19.44 parsecs gives an absolute magnitude of 2.4, as compared to the Sun, which has an absolute magnitude of 4.83. This corresponds to a visual luminosity 9.2 times greater than that of the Sun. When the entire spectrum of radiation from Beta Pictoris and the Sun is taken into account, Beta Pictoris is found to be 8.7 times more luminous than the Sun.
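A short check of these figures using the distance modulus (all values taken from this paragraph):
\[
M = m - 5\log_{10}\!\frac{d}{10\ \text{pc}} = 3.861 - 5\log_{10}(1.944) \approx 2.42,
\qquad
\frac{L_V}{L_{V,\odot}} = 10^{(4.83-2.42)/2.5} \approx 9.2 .
\]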
Many main sequence stars of spectral type A fall into a region of the Hertzsprung–Russell diagram called the instability strip, which is occupied by pulsating variable stars. In 2003, photometric monitoring of the star revealed variations in brightness of around 1–2 millimagnitudes at periods between about 30 and 40 minutes. Radial velocity studies of Beta Pictoris also reveal variability: there are pulsations at two periods, one of 30.4 minutes and one of 36.9 minutes. As a result, the star is classified as a Delta Scuti variable.
Mass, radius and rotation
The mass of Beta Pictoris has been determined by using models of stellar evolution and fitting them to the star's observed properties. This method yields a stellar mass between 1.7 and 1.8 solar masses. The star's angular diameter has been measured using interferometry with the Very Large Telescope and was found to be 0.84 milliarcseconds, giving it an actual size 1.7 times that of the Sun.
The rotational velocity of Beta Pictoris has been measured to be at least 130 km/s. Since this value is derived by measuring radial velocities, this is a lower limit on the true rotational velocity: the quantity measured is actually v sin(i), where i represents the inclination of the star's axis of rotation to the line-of-sight. If it is assumed that Beta Pictoris is viewed from Earth in its equatorial plane, a reasonable assumption since the circumstellar disk is seen edge-on, the rotation period can be calculated as approximately 16 hours, which is significantly shorter than that of the Sun (609.12 hours).
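The quoted period follows from the measured rotational velocity and the stellar radius (taking R ≈ 1.7 R☉ and an equator-on view, as assumed in this paragraph):
\[
P \approx \frac{2\pi R}{v \sin i} = \frac{2\pi \times 1.7 \times 6.96\times 10^{5}\ \text{km}}{130\ \text{km/s}} \approx 5.7\times 10^{4}\ \text{s} \approx 16\ \text{h}.
\]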
Age and formation
The presence of significant amounts of dust around the star implies a young age of the system and led to debate about whether it had joined the main sequence or was still a pre–main sequence star. However, when the star's distance was measured by Hipparcos, it was revealed that Beta Pictoris was located further away than previously thought and hence was more luminous than originally believed. Once the Hipparcos results were taken into account, it was found that Beta Pictoris was located close to the zero age main sequence and was not a pre–main sequence star after all. Analysis of Beta Pictoris and other stars within the Beta Pictoris moving group suggested that they are around 12 million years old, but more recent studies indicate that the age is roughly double this, at 20 to 26 million years.
Beta Pictoris may have formed near the Scorpius–Centaurus association. The collapse of the gas cloud which resulted in the formation of Beta Pictoris may have been triggered by the shock wave from a supernova explosion: the star which went supernova may have been a former companion of HD 83058 (HIP 46950), which is now a runaway star. Tracing the path of HIP 46950 backwards suggests that it would have been in the vicinity of the Scorpius–Centaurus association about 13 million years ago. However, HD 83058 has been found to be a spectroscopic binary and is unlikely to have been ejected by the supernova explosion of a close companion, so this simple explanation for the origin of the Beta Pictoris cluster is in doubt.
Circumstellar environment
Debris disks
Excess infrared radiation from Beta Pictoris was detected by the IRAS spacecraft in 1983. Along with Vega, Fomalhaut and Epsilon Eridani, it was one of the first four stars from which such an excess was detected: these stars are called "Vega-like" after the first such star discovered. Since A-type stars like Beta Pictoris tend to radiate most of their energy at the blue end of the spectrum, this implied the presence of cool matter in orbit around the star, which would radiate at infrared wavelengths and produce the excess. This hypothesis was verified in 1984 when Beta Pictoris became the first star to have its circumstellar disk imaged optically. The IRAS data are (at the micron wavelengths): [12]=2.68, [25]=0.05, [60]=−2.74 and [100]=−3.41. The colour excesses are: E12=0.69, E25=3.35, E60=6.17 and E100=6.90.
The debris disk around Beta Pictoris is seen edge-on by observers on Earth and is orientated in a northeast–southwest direction. The disk is asymmetric: in the northeast direction it has been observed out to 1835 astronomical units from the star, while in the southwest direction the extent is 1450 AU. The disk is rotating: the part to the northeast of the star is moving away from Earth, while the part to the southwest of the disc is moving towards Earth.
Several elliptical rings of material have been observed in the outer regions of the debris disk between 500 and 800 AU: these may have formed as a result of the system being disrupted by a passing star. Astrometric data from the Hipparcos mission reveal that the red giant star Beta Columbae passed within 2 light years of Beta Pictoris about 110,000 years ago, but a larger perturbation would have been caused by Zeta Doradus, which passed at a distance of 3 light years about 350,000 years ago. However computer simulations favor a lower encounter velocity than either of these two candidates, which suggest that the star responsible for the rings may have been a companion star of Beta Pictoris on an unstable orbit. The simulations suggest a perturbing star with a mass of 0.5 solar masses is likely to blame for the structures. Such a star would be a red dwarf of spectral type M0V.
In 2006, imaging of the system with the Hubble Space Telescope's Advanced Camera for Surveys revealed the presence of a secondary dust disk inclined at an angle of about 5° to the main disk and extending at least 130 AU from the star. The secondary disk is asymmetrical: the southwest extension is more curved and less inclined than the northeast. The imaging was not good enough to distinguish between the main and secondary disks within 80 AU of Beta Pictoris, however the northeast extension of the dust disk is predicted to intersect with the main disk at about 30 AU from the star. The secondary disk may be produced by a massive planet in an inclined orbit removing matter from the primary disk and causing it to move in an orbit aligned with the planet.
Studies made with the NASA Far Ultraviolet Spectroscopic Explorer have discovered that the disk around Beta Pictoris contains an extreme overabundance of carbon-rich gas. This helps stabilize the disk against radiation pressure which would otherwise blow the material away into interstellar space. Currently, there are two suggested explanations for the origin of the carbon overabundance. Beta Pictoris might be in the process of forming exotic carbon-rich planets, in contrast to the terrestrial planets in the Solar System, which are rich in oxygen instead of carbon. Alternatively it may be passing through an unknown phase that might also have occurred early in the development of the Solar System: in the Solar System there are carbon-rich meteorites known as enstatite chondrites, which may have formed in a carbon-rich environment. It has also been proposed that Jupiter may have formed around a carbon-rich core.
In 2011 the disk around Beta Pictoris became the first other planetary system to be photographed by an amateur astronomer. Rolf Olsen of New Zealand captured the disk with a 10-inch Newtonian reflector and a modified webcam.
Planetesimal belts
In 2003, imaging of the inner region of the Beta Pictoris system with the Keck II telescope revealed the presence of several features which are interpreted as being belts or rings of material. Belts at approximately 14, 28, 52 and 82 astronomical units from the star were detected, which alternate in inclination with respect to the main disk.
Observations in 2004 revealed the presence of an inner belt containing silicate material at a distance of 6.4 AU from the star. Silicate material was also detected at 16 and 30 AU from the star, with a lack of dust between 6.4 and 16 AU providing evidence that a massive planet may be orbiting in this region. Magnesium-rich olivine has also been detected, strikingly similar to that found in Solar System comets and different from the olivine found in Solar System asteroids. Olivine crystals can only form closer than 10 AU from the star; therefore they have been transported to the belt after formation, probably by radial mixing.
Modeling of the dust disk at 100 AU from the star suggests the dust in this region may have been produced by a series of collisions initiated by the destruction of planetesimals with radii of about 180 kilometers. After the initial collision, the debris undergoes further collisions in a process called a collisional cascade. Similar processes have been inferred in the debris disks around Fomalhaut and AU Microscopii.
Two giant collisions are thought to have taken place in the past around Beta Pictoris. The first suspected collision occurred around 150 years ago and involved a mass between 10¹⁹ and 10²¹ kg, which translates to a body with a size between 100 and 500 km. This collision occurred around 85 AU from the host star and could explain the so-called cat's tail seen only in JWST images of the debris disk. Comparisons between Spitzer observations (2004–2005) and JWST observations (2023) showed that the 600-kelvin hot dust continuum, as well as a forsterite signature, had disappeared. This was interpreted as a second collision that occurred a few years before 2004, with the dust produced in that collision blown out by radiation pressure from the star in the years between 2005 and 2023.
Falling evaporating bodies
The spectrum of Beta Pictoris shows strong short-term variability that was first noticed in the red-shifted part of various absorption lines, which was interpreted as being caused by material falling onto the star. The source of this material was suggested to be small comet-like objects on orbits which take them close to the star where they begin to evaporate, termed the "falling evaporating bodies" model. Transient blue-shifted absorption events were also detected, though less frequently: these may represent a second group of objects on a different set of orbits. Detailed modeling indicates the falling evaporating bodies are unlikely to be mainly icy like comets, but instead are probably composed of a mixed dust and ice core with a crust of refractory material. These objects may have been perturbed onto their star-grazing orbits by the gravitational influence of a planet in a mildly eccentric orbit around Beta Pictoris at a distance of roughly 10 AU from the star. Falling evaporating bodies may also be responsible for the presence of gas located high above the plane of the main debris disk. A study from 2019 reported transiting exocomets with TESS. The dips are asymmetric in nature and are consistent with models of evaporating comets crossing the disc of the star. The comets are in a highly eccentric orbit and are non-periodic.
Planetary system
On November 21, 2008, it was announced that infrared observations made in 2003 with the Very Large Telescope had revealed a candidate planetary companion to the star. In the autumn of 2009 the planet was successfully observed on the other side of the parent star, confirming the existence of the planet itself and the earlier observations. It was believed that within 15 years it would be possible to record the whole orbit of the planet.
The European Southern Observatory confirmed the presence of Beta Pictoris c, on 6 October 2020, through the use of direct imagery. Beta Pictoris c is orbiting in the plane of the debris disk surrounding the star. Beta Pictoris c is currently the closest extrasolar planet to its star ever photographed: the observed separation is roughly the same as the distance between the asteroid belt and the Sun.
The radial velocity method is not well suited to studying A-type stars like Beta Pictoris, and the star's very young age makes the noise even worse. Current limits derived from this method are enough to rule out hot Jupiter-type planets more massive than 2 Jupiter masses orbiting close to the star; at wider separations, planets with less than 9 Jupiter masses would have evaded detection. Therefore, to find planets in the Beta Pictoris system, astronomers look for the effects that a planet has on the circumstellar environment.
Multiple lines of evidence suggested the existence of a massive planet orbiting in the region around from the star: the dust-free gap between the planetesimal belts at and suggests this region is being cleared out; a planet at this distance would explain the origin of the falling evaporating bodies; and the warps and inclined rings in the inner disk suggest that a massive planet on an inclined orbit is disrupting the disk.
The observed planet by itself cannot explain the structure of the planetesimal belts at and from the star. These belts might be associated with smaller planets at , with masses of around , respectively. Such a system of planets, if it exists, would be close to a 1:3:7 orbital resonance. It may also be that the rings in the outer disc at are indirectly caused by the influence of these planets.
The object was observed at an angular distance of from Beta Pictoris, which corresponds to a distance in the plane of the sky of . For comparison, the orbital radii of the planets Jupiter and Saturn are and respectively. The separation in the radial direction is unknown, so this is a lower limit on the true separation. Estimates of its mass depend on theoretical models of planetary evolution, and predict the object has about 8 Jupiter masses and is still cooling, with a temperature ranging from . These figures come with the caveat that the models have not yet been tested against real data in the likely ranges of mass and age for the planet.
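The conversion from an angular separation to a projected separation in the plane of the sky follows the small-angle relation: a separation of θ arcseconds at a distance of d parsecs corresponds to θ × d astronomical units. A minimal sketch with illustrative values (an assumed distance of ~19.4 pc to Beta Pictoris and a hypothetical 0.4-arcsecond separation; neither number is preserved in the text above):

```python
# Small-angle relation: projected separation [AU] = angle [arcsec] * distance [pc].
# This follows directly from the definition of the parsec.
distance_pc = 19.4    # assumed distance to Beta Pictoris
theta_arcsec = 0.4    # hypothetical angular separation, for illustration

projected_sep_au = theta_arcsec * distance_pc
print(f"projected separation ~{projected_sep_au:.1f} AU")  # ~7.8 AU
# Since the line-of-sight component is unknown, this is a lower
# limit on the true separation, as the text notes.
```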
The semimajor axis is and its orbital period is . A "transit-like event" was observed in November 1981; this is consistent with those estimates. If this is confirmed as a true transit, the inferred radius of the transiting object is , which is larger than predicted by theoretical models. This may indicate that it is surrounded by a large ring system or a moon-forming disc.
Confirmation of a second planet in the Beta Pictoris system was announced on 6 October 2020. The planet has a temperature of , a dynamical mass of , and an age of . It has an orbital period of about and a semimajor axis of , about 3.5 times closer to its parent star than Beta Pictoris b. The orbit of Beta Pictoris c is moderately eccentric, with an eccentricity of 0.24.
This planet presents data that conflict with current (as of 2020) models of planet formation. β Pic c is at an age where planet formation is predicted to occur via disk instability, yet it orbits at a distance of , which is predicted to be too close to the star for disk instability to occur. Its low apparent magnitude suggests that it instead formed via core accretion.
The existence of an additional, smaller planet on a wider orbit, close to the inner edge of the disk, has been proposed to explain the observed inner debris-disk edge at , which does not match the dynamical simulation results for the two-planet model. A planet with a mass in the range of on a low-eccentricity orbit between would remain below the limit for direct observation, but could reproduce the observed disk profile in simulations.
Dust stream
In 2000, observations made with the Advanced Meteor Orbit Radar facility in New Zealand revealed the presence of a stream of particles coming from the direction of Beta Pictoris, which may be a dominant source of interstellar meteoroids in the Solar System. The particles in the Beta Pictoris dust stream are relatively large, with radii exceeding 20 micrometers, and their velocities suggest that they must have left the Beta Pictoris system at roughly 25 km/s. These particles may have been ejected from the Beta Pictoris debris disk as a result of the migration of gas giant planets within the disk and may be an indication that the Beta Pictoris system is forming an Oort cloud. Numerical modeling of dust ejection indicates radiation pressure may also be responsible and suggests that planets further than about 1 AU from the star cannot directly cause the dust stream.
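The quoted ejection speed makes it easy to estimate how long such particles need to reach the Solar System. A back-of-the-envelope sketch, assuming a distance to Beta Pictoris of about 19.4 pc (the distance is an assumption, not stated in this passage) and ignoring any acceleration along the way:

```python
PC_IN_KM = 3.086e13   # kilometres per parsec
YEAR_IN_S = 3.156e7   # seconds per year

distance_pc = 19.4    # assumed distance to Beta Pictoris
speed_km_s = 25.0     # ejection speed quoted in the text

travel_time_yr = distance_pc * PC_IN_KM / speed_km_s / YEAR_IN_S
print(f"travel time ~{travel_time_yr:.1e} years")  # ~7.6e+05 years
```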
| Physical sciences | Notable stars | Astronomy |
656951 | https://en.wikipedia.org/wiki/Women%27s%20health | Women's health | Women's health differs from men's health in many ways. Women's health is an example of population health, where health is defined by the World Health Organization (WHO) as "a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity". Although often treated as simply women's reproductive health, many groups argue for a broader definition pertaining to the overall health of women, better expressed as "The health of women". These differences are further exacerbated in developing countries, where women are further disadvantaged in both their health risks and their experiences.
While the rates of the leading causes of death, cardiovascular disease, cancer and lung disease, are similar in women and men, women have different experiences. Lung cancer has overtaken all other types of cancer as the leading cause of cancer-related death in women, followed by breast cancer, colorectal, ovarian, uterine and cervical cancers. While smoking is the major cause of lung cancer, amongst nonsmoking women the risk of developing lung cancer is three times greater than among nonsmoking men. Despite this, breast cancer remains the most common cancer in women in developed countries, and is one of the major chronic diseases of women, while cervical cancer remains one of the most common cancers in developing countries, associated with human papilloma virus (HPV), a sexually transmitted infection. HPV vaccine together with screening offers the promise of controlling these diseases. Other important health issues for women include cardiovascular disease, depression, dementia, osteoporosis and anemia.
In 176 out of 178 countries for which records are available, there is a gender gap in favor of women in life expectancy. In Western Europe, this has been the case at least as far back as 1750. Gender remains an important social determinant of health, since women's health is influenced not just by their biology but also by conditions such as poverty, employment, and family responsibilities. Women have long been disadvantaged in many respects such as social and economic power which restricts their access to the necessities of life including health care, and the greater the level of disadvantage, such as in developing countries, the greater adverse impact on health.
Women's reproductive and sexual health differs markedly from men's. Even in developed countries, pregnancy and childbirth are associated with substantial risks to women, with maternal mortality accounting for more than a quarter of a million deaths per year and large gaps between the developing and developed countries. Comorbidity from other non-reproductive diseases such as cardiovascular disease contribute to both the mortality and morbidity of pregnancy, including preeclampsia. Sexually transmitted infections have serious consequences for women and infants, with mother-to-child transmission leading to outcomes such as stillbirths and neonatal deaths, and pelvic inflammatory disease leading to infertility. In addition, infertility from many other causes, birth control, unplanned pregnancy, rape and the struggle for access to abortion create other burdens for women.
Definitions and scope
Women's experience of health and disease differs from men's, due to unique biological, social and behavioral conditions. Biological differences range from the phenotype down to cellular biology, and manifest unique risks for the development of ill health. WHO defines health as "a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity". Women's health is an example of population health, the health of a specific defined population.
Women's health has been described as "a patchwork quilt with gaps". Although many of the issues around women's health relate to their reproductive health, including maternal and child health, genital health and breast health, and endocrine (hormonal) health, including menstruation, birth control and menopause, a broader understanding of women's health to include all aspects of the health of women has been urged, replacing "Women's Health" with "The Health of Women". WHO considers that an undue emphasis on reproductive health has been a major barrier to ensuring access to good quality health care for all women. Conditions that affect both men and women, such as cardiovascular disease and osteoporosis, also manifest differently in women. Women's health issues also include medical situations in which women face problems not directly related to their biology, such as gender-differentiated access to medical treatment and other socioeconomic factors. Women's health is of particular concern due to widespread discrimination against women in the world, leaving them disadvantaged.
A number of health and medical research advocates, such as the Society for Women's Health Research in the United States, support this broader definition, rather than merely issues specific to human female anatomy, to include areas where biological sex differences between women and men exist. Women also need health care more, and access the health care system more, than men do. While part of this is due to their reproductive and sexual health needs, they also have more chronic non-reproductive health issues such as cardiovascular disease, cancer, mental illness, diabetes and osteoporosis. Another important perspective is realising that events across the entire life cycle (or life-course), from in utero to aging, affect the growth, development and health of women. The life course perspective is one of the key strategies of the World Health Organization.
Global perspective
Gender differences in susceptibility and symptoms of disease and response to treatment in many areas of health are particularly true when viewed from a global perspective. Much of the available information comes from developed countries, yet there are marked differences between developed and developing countries in terms of women's roles and health. The global viewpoint is defined as the "area for study, research and practice that places a priority on improving health and achieving health equity for all people worldwide". In 2015 the World Health Organization identified the top ten issues in women's health as being cancer, reproductive health, maternal health, human immunodeficiency virus (HIV), sexually transmitted infections, violence, mental health, non-communicable diseases, youth and aging.
Life expectancy
Women's life expectancy is greater than that of men, and they have lower death rates throughout life, regardless of race and geographic region. Historically though, women had higher rates of mortality, primarily from maternal deaths (death in childbirth). In industrialised countries, particularly the most advanced, the gender gap narrowed and was reversed following the Industrial Revolution.
Despite these differences, in many areas of health, women experience earlier and more severe disease, and experience poorer outcomes.
At the same time, the leading causes of death in the United States are remarkably similar for men and women, headed by heart disease, which accounts for a quarter of all deaths, followed by cancer, lung disease and stroke. While women have a lower incidence of death from unintentional injury and suicide, they have a higher incidence of dementia.
The major differences in life expectancy for women between developed and developing countries lie in the childbearing years. If a woman survives this period, the differences between the two regions become less marked, since in later life non-communicable diseases (NCDs) become the major causes of death in women throughout the world, with cardiovascular deaths accounting for 45% of deaths in older women, followed by cancer (15%) and lung disease (10%). These create additional burdens on the resources of developing countries. Changing lifestyles, including diet, physical activity and cultural factors that favour larger body size in women, are contributing to an increasing problem with obesity and diabetes amongst women in these countries and increasing the risks of cardiovascular disease and other NCDs.
Women who are socially marginalised are more likely to die at younger ages than women who are not. Women who have substance abuse disorders, who are homeless, who are sex workers, and/or who are imprisoned have significantly shorter lives than other women. At any given age, women in these overlapping, stigmatised groups are approximately 10 to 13 times more likely to die than typical women of the same age.
Social and cultural factors
Women's health is positioned within a wider body of knowledge cited by, amongst others, the World Health Organization, which places importance on gender as a social determinant of health. While women's health is affected by their biology, it is also affected by their social conditions, such as poverty, employment, and family responsibilities, and these aspects should not be overshadowed.
Women have traditionally been disadvantaged in terms of economic and social status and power, which in turn reduces their access to the necessities of life including health care. Despite recent improvements in Western nations, women remain disadvantaged with respect to men. The gender gap in health is even more acute in developing countries where women are relatively more disadvantaged. In addition to gender inequity, there remain specific disease processes uniquely associated with being a woman which create specific challenges in both prevention and health care.
In low- and middle-income countries, women's diets, often rich in staple foods, are frequently poor in vitamins and minerals, with consequences for health. Low dietary diversity jeopardizes nutrient adequacy.
Deeply ingrained cultural, religious, and patriarchal systems within the MENA region perpetuate gender-based power dynamics within communities and lead to discrepancies in healthcare access. In a speech, UNFPA executive director Thoraya Ahmed Obaid outlined these difficulties and emphasized the need to change cultural and societal norms in order to improve the health of women in the area.
Even after succeeding in accessing health care, women have been discriminated against, a process that Iris Young has called "internal exclusion", as opposed to "external exclusion", the barriers to access. This invisibility effectively masks the grievances of groups already disadvantaged by power inequity, further entrenching injustice.
Behavioral differences also play a role: women display lower risk-taking, including consuming less tobacco, alcohol and drugs, which reduces their risk of mortality from associated diseases, including lung cancer, tuberculosis and cirrhosis. Other risk factors that are lower for women include motor vehicle accidents. Occupational differences have exposed women to fewer industrial injuries, although this is likely to change, as is the risk of injury or death in war. Overall, such injuries contributed to 3.5% of deaths in women compared to 6.2% in men in the United States in 2009. Suicide rates are also lower in women.
The social view of health, combined with the acknowledgement that gender is a social determinant of health, informs women's health service delivery in countries around the world. Leichhardt Women's Community Health Centre, established in 1974 as the first women's health centre in Australia, is an example of this approach to service delivery.
Women's health is an issue which has been taken up by many feminists, especially where reproductive health is concerned, and the international women's movement was responsible for much of the adoption of agendas to improve women's health.
Biological factors
Factors that specifically affect the health of women compared to men are most evident in those related to reproduction, but sex differences have been identified from the molecular to the behavioral scale. Some of these differences are subtle and difficult to explain, partly due to the fact that it is difficult to separate the health effects of inherent biological factors from the effects of the surrounding environment they exist in. Women's XX sex chromosome complement, hormonal environment, as well as sex-specific lifestyles, metabolism, immune system function, and sensitivity to environmental factors are believed to contribute to sex differences in health at the levels of physiology, perception, and cognition. Women can have distinct responses to drugs and thresholds for diagnostic parameters. All of these necessitate caution in extrapolating information derived from biomarkers from one sex to the other. Young women and adolescents are at risk from STIs, pregnancy and unsafe abortion, while older women often have few resources and are disadvantaged with respect to men, and also are at risk of dementia and abuse, and generally poor health.
Reproductive and sexual health
Women experience many unique health issues related to reproduction and sexuality and these are responsible for a third of all health problems experienced by women during their reproductive years (aged 15–44), of which unsafe sex is a major risk factor, especially in developing countries. Reproductive health includes a wide range of issues including the health and function of structures and systems involved in reproduction, pregnancy, childbirth and child rearing, including antenatal and perinatal care. Global women's health has a much larger focus on reproductive health than that of developed countries alone, but also on infectious diseases such as malaria in pregnancy and non-communicable diseases (NCD). Many of the issues that face women and girls in resource-poor regions, such as female genital cutting, are relatively unknown in developed countries; these regions also lack access to appropriate diagnostic and clinical resources.
Maternal health
Pregnancy presents substantial health risks, even in developed countries, and despite advances in obstetrical science and practice. Maternal mortality remains a major problem in global health and is considered a sentinel event in judging the quality of health care systems. Adolescent pregnancy represents a particular problem, whether intended or unintended, and whether within marriage or a union or not. Pregnancy results in major changes in the life of a young female: physically, emotionally, socially and economically, and jeopardizes their transition into adulthood. Adolescent pregnancy, more often than not, stems from a girl's lack of choices or from abuse. Child marriage (see below) is a major contributor worldwide, since 90% of births to females aged 15–19 occur within marriage.
Maternal death
In 2013 about 289,000 women (800 per day) in the world died due to pregnancy-related causes, with large differences between developed and developing countries. In developed nations maternal mortality has been steadily falling and averages 16 deaths per 100,000 live births, as measured by the maternal mortality ratio (MMR). By contrast rates as high as 1,000 deaths per 100,000 live births are reported in the rest of the world, with the highest rates in Sub-Saharan Africa and South Asia, which account for 86% of such deaths. These deaths are rarely investigated, yet the World Health Organization considers that 99% of these deaths, the majority of which occur within 24 hours of childbirth, are preventable if the appropriate infrastructure, training, and facilities were in place. In these resource-poor countries, maternal health is further eroded by poverty and adverse economic factors which impact the roads, health care facilities, equipment and supplies in addition to limited skilled personnel. Other problems include cultural attitudes towards sexuality, contraception, child marriage, home birth and the ability to recognise medical emergencies. The direct causes of these maternal deaths are hemorrhage, eclampsia, obstructed labor, sepsis and unskilled abortion. In addition malaria and AIDS can also endanger pregnancy. In the period 2003–2009 hemorrhage was the leading cause of death, accounting for 27% of deaths in developing countries and 16% in developed countries.
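The headline figures above are internally consistent, and the maternal mortality ratio used here is simply maternal deaths per 100,000 live births. A quick check, using numbers from the text plus one hypothetical example:

```python
# 289,000 pregnancy-related deaths in 2013 works out to roughly 800 per day:
deaths_2013 = 289_000
print(f"deaths per day ~{deaths_2013 / 365:.0f}")  # ~792, i.e. ~800

# MMR definition: maternal deaths per 100,000 live births.
def mmr(maternal_deaths, live_births):
    return maternal_deaths / live_births * 100_000

# Hypothetical example: 160 maternal deaths in 1,000,000 live births
# yields the 16-per-100,000 developed-nation average quoted above.
print(mmr(160, 1_000_000))  # 16.0
```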
Non-reproductive health remains an important predictor of maternal health. In the United States, the leading causes of maternal death are cardiovascular disease (15% of deaths), endocrine, respiratory and gastrointestinal disorders, infection, hemorrhage and hypertensive disorders of pregnancy (Gronowski and Schindler, Table II).
In 2000, the United Nations created Millennium Development Goal (MDG) 5 to improve maternal health. Target 5A sought to reduce maternal mortality by three quarters from 1990 to 2015, using two indicators: 5.1, the MMR, and 5.2, the proportion of deliveries attended by skilled health personnel (physician, nurse or midwife). Early reports indicated MDG 5 had made the least progress of all MDGs. By the target date of 2015 the MMR had declined by only 45%, from 380 to 210, most of which occurred after 2000. This improvement occurred across all regions, but the highest MMRs were still in Africa and Asia, although South Asia witnessed the largest fall, from 530 to 190 (64%). The smallest decline was seen in the developed countries, from 26 to 16 (37%). In terms of assisted births, the proportion had risen globally from 59% to 71%. Although the numbers were similar for both developed and developing regions, there were wide variations in the latter, from 52% in South Asia to 100% in East Asia. The risk of dying in pregnancy in developing countries remains fourteen times higher than in developed countries, but in Sub-Saharan Africa, where the MMR is highest, the risk is 175 times higher. In setting the MDG targets, skilled assisted birth was considered a key strategy, but also an indicator of access to care, since it closely reflects mortality rates. There are also marked differences within regions, with a 31% lower rate in rural areas of developing countries (56% vs. 87%); there is no difference in East Asia, but a 52% difference in Central Africa (32% vs. 84%). With the completion of the MDG campaign in 2015, new targets are being set for 2030 under the Sustainable Development Goals campaign. Maternal health is placed under Goal 3, Health, with the target being to reduce the global maternal mortality ratio to less than 70. Amongst tools being developed to meet these targets is the WHO Safe Childbirth Checklist.
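The percentage declines quoted for the MDG 5 evaluation follow directly from the start and end MMR values given, as the short check below shows (the developed-country figure rounds to 38% rather than the quoted 37%, a minor discrepancy in the source).

```python
def pct_decline(start, end):
    """Percentage fall from a starting to an ending value."""
    return (start - end) / start * 100

print(f"global:     {pct_decline(380, 210):.0f}%")  # 45%
print(f"South Asia: {pct_decline(530, 190):.0f}%")  # 64%
print(f"developed:  {pct_decline(26, 16):.0f}%")    # 38% (quoted as 37%)
```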
Improvements in maternal health, in addition to professional assistance at delivery, will require routine antenatal care, basic emergency obstetric care, including the availability of antibiotics, oxytocics, anticonvulsants, the ability to manually remove a retained placenta, perform instrumented deliveries, and postpartum care that is financially accessible, such as through insurance. Research has shown the most effective programmes are those focussing on patient and community education, prenatal care, emergency obstetrics (including access to cesarean sections) and transportation. As with women's health in general, solutions to maternal health require a broad view encompassing many of the other MDG goals, such as poverty and status, and given that most deaths occur in the immediate intrapartum period, it has been recommended that intrapartum care (delivery) be a core strategy. New guidelines on antenatal care were issued by WHO in November 2016.
Complications of pregnancy
In addition to death occurring in pregnancy and childbirth, pregnancy can result in many non-fatal health problems including obstetrical fistulae, ectopic pregnancy, preterm labor, gestational diabetes, hyperemesis gravidarum, hypertensive states including preeclampsia, and anemia. Globally, complications of pregnancy vastly outnumber maternal deaths, with an estimated 9.5 million cases of pregnancy-related illness and 1.4 million near-misses (survival from severe life-threatening complications). Complications of pregnancy may be physical, mental, economic and social. It is estimated that 10–20 million women will develop physical or mental disability every year, resulting from complications of pregnancy or inadequate care. Consequently, international agencies have developed standards for obstetric care.
Many people do not understand how past pregnancy complications affect the options for future births. It is a common misconception that one cesarean section makes cesarean delivery medically necessary for all future births; a single cesarean does not rule out a safe vaginal delivery in subsequent pregnancies.
Obstetrical fistula
Of near-miss events, obstetrical fistulae (OF), including vesicovaginal and rectovaginal fistulae, remain one of the most serious and tragic. Although corrective surgery is possible it is often not available and OF is considered completely preventable. If repaired, subsequent pregnancies will require cesarean section. While unusual in developed countries, it is estimated that up to 100,000 cases occur every year in the world, and that about 2 million women are currently living with this condition, with the highest incidence occurring in Africa and parts of Asia. OF results from prolonged obstructed labor without intervention, when continued pressure from the fetus in the birth canal restricts blood supply to the surrounding tissues, with eventual fetal death, necrosis and expulsion. The damaged pelvic organs then develop a connection (fistula) allowing urine or feces, or both, to be discharged through the vagina with associated urinary and fecal incontinence, vaginal stenosis, nerve damage and infertility. Severe social and mental consequences are also likely to follow, with affected women often being shunned. Apart from lack of access to care, causes include young age and malnourishment. The UNFPA has made prevention of OF a priority and is the lead agency in the Campaign to End Fistula, which issues annual reports, and the United Nations observes May 23 as the International Day to End Obstetric Fistula every year. Prevention includes discouraging teenage pregnancy and child marriage, adequate nutrition, and access to skilled care, including caesarean section.
Sexual health
Contraception
The ability to determine if and when to become pregnant, is vital to a woman's autonomy and well-being, and contraception can protect girls and young women from the risks of early pregnancy and older women from the increased risks of unintended pregnancy. Adequate access to contraception can limit multiple pregnancies, reduce the need for potentially unsafe abortion and reduce maternal and infant mortality and morbidity. Some barrier forms of contraception such as condoms, also reduce the risk of STIs and HIV infection. Access to contraception allows women to make informed choices about their reproductive and sexual health, increases empowerment, and enhances choices in education, careers and participation in public life. At the societal level, access to contraception is a key factor in controlling population growth, with resultant impact on the economy, the environment and regional development. Consequently, the United Nations considers access to contraception a human right that is central to gender equality and women's empowerment that saves lives and reduces poverty, and birth control has been considered amongst the 10 great public health achievements of the 20th century.
To optimise women's control over pregnancy, it is essential that culturally appropriate contraceptive advice and means are widely, easily, and affordably available to anyone who is sexually active, including adolescents. In many parts of the world access to contraception and family planning services is very difficult or non-existent, and even in developed countries cultural and religious traditions can create barriers to access. Reported usage of adequate contraception by women has risen only slightly between 1990 and 2014, with considerable regional variability. Although global usage is around 55%, it may be as low as 25% in Africa. Research shows that women in the Middle East and North Africa use contraception at low rates. Only 14% of women who completed a survey in Jordan said they used condoms with their spouses. Worldwide 222 million women have no or limited access to contraception. Some caution is needed in interpreting available data, since contraceptive prevalence is often defined as "the percentage of women currently using any method of contraception among all women of reproductive age (i.e., those aged 15 to 49 years, unless otherwise stated) who are married or in a union. The "in-union" group includes women living with their partner in the same household and who are not married according to the marriage laws or customs of a country." This definition is more suited to the more restrictive concept of family planning, but omits the contraceptive needs of all other women and girls who are or are likely to be sexually active, are at risk of pregnancy and are not married or "in-union".
Three related targets of MDG5 were adolescent birth rate, contraceptive prevalence and unmet need for family planning (where prevalence + unmet need = total need), which were monitored by the Population Division of the UN Department of Economic and Social Affairs. Contraceptive use was part of Goal 5B (universal access to reproductive health), as Indicator 5.3. The evaluation of MDG5 in 2015 showed that amongst couples usage had increased worldwide from 55% to 64%, with one of the largest increases in Sub-Saharan Africa (13% to 28%). The corollary, unmet need, declined slightly worldwide (15% to 12%). In 2015 these targets became part of SDG5 (gender equality and empowerment) under Target 5.6: Ensure universal access to sexual and reproductive health and reproductive rights, where Indicator 5.6.1 is the proportion of women aged 15–49 years who make their own informed decisions regarding sexual relations, contraceptive use and reproductive health care (p. 31).
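The identity stated above, prevalence + unmet need = total need, can be made concrete with the 2015 evaluation figures from this paragraph:

```python
def total_need(prevalence_pct, unmet_need_pct):
    """Total demand for family planning: the share of women using
    contraception plus the share with an unmet need for it."""
    return prevalence_pct + unmet_need_pct

# 2015 MDG5 evaluation figures quoted above (worldwide, amongst couples):
print(total_need(64, 12))  # 76 -> a total need of 76%
```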
There remain significant barriers to accessing contraception for many women in both developing and developed regions. These include legislative, administrative, cultural, religious and economic barriers in addition to those dealing with access to and quality of health services. Much of the attention has been focused on preventing adolescent pregnancy. The Overseas Development Institute (ODI) has identified a number of key barriers, on both the supply and demand side, including internalising socio-cultural values, pressure from family members, and cognitive barriers (lack of knowledge), which need addressing. Even in developed regions many women, particularly those who are disadvantaged, may face substantial difficulties in access that may be financial and geographic but may also face religious and political discrimination. Women have also mounted campaigns against potentially dangerous forms of contraception, such as defective intrauterine devices (IUDs), particularly the Dalkon Shield.
Abortion
Abortion is the intentional termination of pregnancy, as compared to spontaneous termination (miscarriage). Abortion is closely allied to contraception in terms of women's control and regulation of their reproduction, and is often subject to similar cultural, religious, legislative and economic constraints. Where access to contraception is limited, women turn to abortion. Consequently, abortion rates may be used to estimate unmet needs for contraception. However the available procedures have carried great risk for women throughout most of history, and still do in the developing world, or where legal restrictions force women to seek clandestine facilities. Access to safe legal abortion places undue burdens on lower socioeconomic groups and in jurisdictions that create significant barriers. These issues have frequently been the subject of political and feminist campaigns where differing viewpoints pit health against moral values.
Globally, there were 87 million unwanted pregnancies in 2005; of those, 46 million resorted to abortion, of which 18 million were considered unsafe, resulting in 68,000 deaths. The majority of these deaths occurred in the developing world. The United Nations considers these avoidable with access to safe abortion and post-abortion care. Abortion rates have fallen in developed countries, but not in developing countries. Between 2010 and 2014 there were 35 abortions per 1,000 women aged 15–44, a total of 56 million abortions per year. The United Nations has prepared recommendations for health care workers to provide more accessible and safe abortion and post-abortion care. An inherent part of post-abortion care involves provision of adequate contraception.
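The two 2010–2014 figures quoted above are mutually consistent: dividing the annual total by the rate recovers the size of the population at risk, as this check shows.

```python
rate_per_1000 = 35       # abortions per 1,000 women aged 15-44
total_abortions = 56e6   # abortions per year, as quoted

implied_women = total_abortions / rate_per_1000 * 1000
print(f"implied women aged 15-44: ~{implied_women:.1e}")  # ~1.6e+09
```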
Sexually transmitted infections
Important sexual health issues for women include sexually transmitted infections (STIs) and female genital cutting (FGC). STIs are a global health priority because they have serious consequences for women and infants. Mother-to-child transmission of STIs can lead to stillbirths, neonatal death, low-birth-weight and prematurity, sepsis, pneumonia, neonatal conjunctivitis, and congenital deformities. Syphilis in pregnancy results in over 300,000 fetal and neonatal deaths per year, and 215,000 infants with an increased risk of death from prematurity, low-birth-weight or congenital disease.
Diseases such as chlamydia and gonorrhoea are also important causes of pelvic inflammatory disease (PID) and subsequent infertility in women. Another important consequence of some STIs, such as genital herpes and syphilis, is a three-fold increase in the risk of acquiring HIV; these infections can also influence its transmission and progression. Worldwide, women and girls are at greater risk of HIV/AIDS. STIs are in turn associated with unsafe sexual activity that is often unconsensual. In the Middle East and North Africa (MENA), a large number of HIV-positive women contracted the virus from their spouses or partners. In comparison to men, taboos and discrimination against women living with HIV are more pervasive throughout the MENA region. Women in the MENA region are more vulnerable to HIV because of gender inequity, gender-based violence, and restricted access to comprehensive healthcare systems.
Female genital mutilation
Female genital mutilation (also referred to as female genital cutting) is defined by the World Health Organization (WHO) as "all procedures that involve partial or total removal of the external female genitalia, or other injury to the female genital organs for non-medical reasons". It has sometimes been referred to as female circumcision, although this term is misleading because it implies it is analogous to the circumcision of the foreskin from the male penis. Consequently, the term mutilation was adopted to emphasise the gravity of the act and its place as a violation of human rights. Subsequently, the term cutting was advanced to avoid offending cultural sensibility that would interfere with dialogue for change. To recognise these points of view some agencies use the composite female genital mutilation/cutting (FGM/C).
It has affected more than 200 million women and girls who are alive today. The practice is concentrated in some 30 countries in Africa, the Middle East and Asia. Female genital mutilation is still common, impacting around 50 million women and girls in the five countries of Yemen, Egypt, Sudan, Djibouti, and Iraq in the Middle East and North Africa (MENA) region, where adolescent women frequently experience a lack of bodily autonomy. According to data, the frequency of FGM among women between the ages of 15 and 49 is high: 94% in Djibouti, 87% in Egypt and Sudan, 19% in Yemen, and 7% in Iraq. FGC affects many religious faiths, nationalities, and socioeconomic classes and is highly controversial. The main arguments advanced to justify FGC are hygiene, fertility, the preservation of chastity, an important rite of passage, marriageability and enhanced sexual pleasure of male partners. The amount of tissue removed varies considerably, leading the WHO and other bodies to classify FGC into four types. These range from the partial or total removal of the clitoris with or without the prepuce (clitoridectomy) in Type I, to the additional removal of the labia minora, with or without excision of the labia majora (Type II), to narrowing of the vaginal orifice (introitus) with the creation of a covering seal by suturing the remaining labial tissue over the urethra and introitus, with or without excision of the clitoris (Type III, infibulation); in this type a small opening is created to allow urine and menstrual blood to be discharged. Type IV involves all other procedures, usually relatively minor alterations such as piercing.
While defended by those cultures in which it constitutes a tradition, FGC is opposed by many medical and cultural organizations on the grounds that it is unnecessary and harmful. Short-term health effects may include hemorrhage, infection, sepsis, and even result in death, while long term effects include dyspareunia, dysmenorrhea, vaginitis and cystitis. In addition FGC leads to complications with pregnancy, labor and delivery. Reversal (defibulation) by skilled personnel may be required to open the scarred tissue. Amongst those opposing the practice are local grassroots groups, and national and international organisations including WHO, UNICEF, UNFPA and Amnesty International. Legislative efforts to ban FGC have rarely been successful and the preferred approach is education and empowerment and the provision of information about the adverse health effects as well the human rights aspects.
Progress has been made, but girls 14 and younger represent 44 million of those who have been cut, and in some regions 50% of all girls aged 11 and younger have been cut. Ending FGC has been considered one of the necessary goals in achieving the targets of the Millennium Development Goals, while the United Nations has declared ending FGC a target of the Sustainable Development Goals and has designated February 6 the International Day of Zero Tolerance for Female Genital Mutilation, concentrating on 17 African countries and the 5 million girls between the ages of 15 and 19 who would otherwise be cut by 2030.
Infertility
In the United States, infertility affects 1.5 million couples. The rates of infertility in the Middle East and North Africa (MENA) are difficult to measure due to varying definitions of the condition. When infertility is defined as failure to have a successful birth, the MENA region has a very high rate at 33%. Morocco has the highest percentage of infertility among the MENA countries with an infertility rate of 56.8%. Rates of infertility, defined as failure to conceive (clinical infertility), are probably lower in the region but there is a lack of data on the exact numbers. There is a dearth of research on clinical infertility in the MENA region, with the exception of Iran, which is attributed to a societal reluctance to discuss infertility openly.
Many couples seek assisted reproductive technology (ART) for infertility. In the United States in 2010, 147,260 in vitro fertilization (IVF) procedures were carried out, with 47,090 live births resulting. In 2013 these numbers had increased to 160,521 and 53,252. However, about half of IVF pregnancies result in multiple-birth deliveries, which in turn are associated with an increase in both morbidity and mortality of the mother and the infant. Causes for this include increased maternal blood pressure, premature birth and low birth weight. In addition, more women are waiting longer to conceive and seeking ART.
Child marriage
Child marriage (including union or cohabitation) is defined as marriage under the age of eighteen and is an ancient custom. In 2010 it was estimated that 67 million women then in their twenties had been married before they turned eighteen, and that 150 million would be married in the next decade, equivalent to 15 million per year. This number had increased to 70 million by 2012. In developing countries one third of girls are married under age, and one in nine before the age of 15. The practice is commonest in South Asia (48% of women), Africa (42%) and Latin America and the Caribbean (29%). The highest prevalence is in Western and Sub-Saharan Africa. The percentage of girls married before the age of eighteen is as high as 75% in countries such as Niger. Approximately one in five young women in the Middle East and North Africa were married before turning eighteen, and one in twenty-five before turning fifteen. In Egypt, 17% of women in the 20–24 age group, 13% in Morocco, 28% in Iraq, 8% in Jordan, 6% in Lebanon, and 3% in Algeria were married or engaged before turning 18. Most child marriage involves girls. For instance, in Mali the ratio of girls to boys married under age is 72:1, while in countries such as the United States the ratio is 8:1. Marriage may occur as early as birth, with the girl being sent to her husband's home as early as age seven.
There are a number of cultural factors that reinforce this practice. These include the child's financial future, her dowry, social ties and social status, prevention of premarital sex, extramarital pregnancy and STIs. The arguments against it include interruption of education and loss of employment prospects, and hence economic status, as well as loss of normal childhood and its emotional maturation and social isolation. Child marriage places the girl in a relationship where she is in a major imbalance of power and perpetuates the gender inequality that contributed to the practice in the first place. Also in the case of minors there are issues of human rights, non-consensual sexual activity and forced marriage; a 2016 joint report of the WHO and Inter-Parliamentary Union places the two concepts together as Child, Early and Forced Marriage (CEFM), as did the 2014 Girl Summit (see below). In addition, the likely pregnancies at a young age are associated with higher medical risks for both mother and child, multiple pregnancies and less access to care, with pregnancy being amongst the leading causes of death amongst girls aged 15–19. Girls married under age are also more likely to be the victims of domestic violence.
There has been an international effort to reduce this practice, and in many countries eighteen is the legal age of marriage. Organizations with campaigns to end child marriage include the United Nations and its agencies, such as the Office of the High Commissioner for Human Rights, UNFPA, UNICEF and WHO. Like many global issues affecting women's health, poverty and gender inequality are root causes, and any campaign to change cultural attitudes has to address these. Child marriage is the subject of international conventions and agreements such as The Convention on the Elimination of All Forms of Discrimination against Women (CEDAW, 1979) (article 16) and the Universal Declaration of Human Rights, and in 2014 a summit conference (Girl Summit) co-hosted by UNICEF and the UK was held in London (see illustration) to address this issue together with FGM/C. Later that same year the General Assembly of the United Nations passed a resolution which, inter alia, called upon states to end the practice of child, early and forced marriage.
Amongst non-governmental organizations (NGOs) working to end child marriage are Girls not Brides, Young Women's Christian Association (YWCA), the International Center for Research on Women (ICRW) and Human Rights Watch (HRW). Although not explicitly included in the original Millennium Development Goals, considerable pressure was applied to include ending child marriage in the successor Sustainable Development Goals adopted in September 2015, where ending this practice by 2030 is a target of SDG 5 Gender Equality (see above). While some progress is being made in reducing child marriage, particularly for girls under fifteen, the prospects are daunting. The indicator for this will be the percentage of women aged 20–24 who were married or in a union before the age of eighteen. Efforts to end child marriage include legislation and ensuring enforcement together with empowering women and girls. To raise awareness, the inaugural UN International Day of the Girl Child in 2012 was dedicated to ending child marriage.
Menstrual cycle
Women's menstrual cycles, the approximately monthly cycle of changes in the reproductive system, can pose significant challenges for women in their reproductive years (the early teens to about 50 years of age). These include the physiological changes that can affect physical and mental health, symptoms of ovulation and the regular shedding of the inner lining of the uterus (endometrium) accompanied by vaginal bleeding (menses or menstruation). The onset of menstruation (menarche) may be alarming to unprepared girls and mistaken for illness. Menstruation can place undue burdens on women in terms of their ability to participate in activities and their access to menstrual aids such as tampons and sanitary pads. This is particularly acute amongst poorer socioeconomic groups, where they may represent a financial burden, and in developing countries, where menstruation can be an impediment to a girl's education. In the Middle East and North Africa, period poverty and stigma have an influence on girls' education and general well-being. Misinformation and a lack of fundamental knowledge cause girls to miss school during their menstrual cycle and contribute to the prevailing stigma around menstruation.
Equally challenging for women are the physiological and emotional changes associated with the cessation of menses (menopause or climacteric). While typically occurring gradually towards the end of the fifth decade of life, marked by irregular bleeding, the cessation of ovulation and menstruation is accompanied by marked changes in hormonal activity, both by the ovary itself (oestrogen and progesterone) and the pituitary gland (follicle stimulating hormone or FSH and luteinizing hormone or LH). These hormonal changes may be associated with both systemic sensations such as hot flashes and local changes to the reproductive tract such as reduced vaginal secretions and lubrication. While menopause may bring relief from symptoms of menstruation and fear of pregnancy, it may also be accompanied by emotional and psychological changes associated with the symbolism of the loss of fertility and a reminder of aging and possible loss of desirability. While menopause generally occurs naturally as a physiological process, it may occur earlier (premature menopause) as a result of disease or from medical or surgical intervention. When menopause occurs prematurely the adverse consequences may be more severe.
Other issues
Other reproductive and sexual health issues include sex education, puberty, sexuality and sexual function. Women also experience a number of issues related to the health of their breasts and genital tract, which fall into the scope of gynaecology.
Non-reproductive health
Women and men have different experiences of the same illnesses, especially cardiovascular disease, cancer, depression and dementia. Women are also more prone to urinary tract infections than men.
Cardiovascular disease
Cardiovascular disease is the leading cause of death (35%) amongst women globally. The onset occurs at a later age in women than in men. For instance the incidence of stroke in women under the age of 80 is less than that in men, but higher in those aged over 80. Overall the lifetime risk of stroke in women exceeds that in men. The risk of cardiovascular disease amongst those with diabetes and amongst smokers is also higher in women than in men. Many aspects of cardiovascular disease vary between women and men, including risk factors, prevalence, physiology, symptoms, response to intervention and outcome. Among women in the Middle East, cardiovascular disease-related morbidity and death are increasing. At the same time, awareness and education on the disease, as well as research, are lacking in the region.
Cancer
Women and men have approximately equal risk of dying from cancer, which accounts for about a quarter of all deaths, and is the second leading cause of death. However the relative incidence of different cancers varies between women and men. Globally the three most common types of cancer of women in 2020 were breast, lung and colorectal cancers. These three account for 44.5% of all cancer cases in women. Other types of cancers specifically affecting women include ovarian, uterine (endometrial and cervical) cancers.
While cancer death rates rose rapidly during the twentieth century, the increase was less and happened later in women due to differences in smoking rates. More recently cancer death rates have started to decline as the use of tobacco becomes less common. Between 1991 and 2012, the death rate in women declined by 19% (less than in men). In the early twentieth century death from uterine (uterine body and cervix) cancers was the leading cause of cancer death in women, who had a higher cancer mortality than men. From the 1930s onwards, uterine cancer deaths declined, primarily due to lower death rates from cervical cancer following the availability of the Papanicolaou (Pap) screening test. This resulted in an overall reduction of cancer deaths in women between the 1940s and 1970s, when rising rates of lung cancer led to an overall increase. By the 1950s the decline in uterine cancer left breast cancer as the leading cause of cancer death until it was overtaken by lung cancer in the 1980s. All three cancers (lung, breast, uterus) are now declining in cancer death rates, but more women die from lung cancer every year than from breast, ovarian, and uterine cancers combined. Overall about 20% of people found to have lung cancer are never smokers, yet amongst nonsmoking women the risk of developing lung cancer is three times greater than amongst men who never smoked.
In addition to mortality, cancer is a cause of considerable morbidity in women. Women have a lower lifetime probability of being diagnosed with cancer (38% vs 45% for men), but are more likely to be diagnosed with cancer at an earlier age.
Breast cancer
Breast cancer is the most common type of cancer among women. Globally, it accounts for 25% of all cancers. It is also among the ten most common chronic diseases of women, and a substantial contributor to loss of quality of life. In 2016, breast cancer was the most common cancer diagnosed among women in both developed and developing countries, accounting for nearly 30% of all cases, and worldwide accounts for one and a half million cases and over half a million deaths, being the fifth most common cause of cancer death overall and the second in developed regions. In the Middle East and North Africa, there were 95,000 cases of breast cancer in 2019. The countries with the highest age-standardized prevalence rates per 100,000 females in the region were Bahrain, Qatar, and Lebanon. Geographic variation in incidence is the opposite of that of cervical cancer, being highest in Northern America and lowest in Eastern and Middle Africa, but mortality rates are relatively constant, resulting in a wide variance in case mortality, ranging from 25% in developed regions to 37% in developing regions, and with 62% of deaths occurring in developing countries.
Cervical cancer
Globally, cervical cancer is the fourth most common cancer amongst women. It is particularly common in women with lower socioeconomic status living in low- and middle-income countries, who have reduced access to health care. Customs and cultural practices that involve child and forced marriage, higher rates of parity, polygamy and exposure to STIs from multiple sexual contacts of male partners further increase the chances of cervical cancer. In developing countries, cervical cancer accounts for 12% of cancer cases amongst women and is the second leading cause of death, where about 85% of the global burden of over 500,000 cases and 250,000 deaths from this disease occurred in 2012. The highest incidence occurs in Eastern Africa, where, together with Middle Africa, cervical cancer is the most common cancer in women. The case fatality rate of 52% is also higher in developing countries than in developed countries (43%), and the mortality rate varies by 18-fold between regions of the world.
Cervical cancer is associated with human papillomavirus (HPV), which has also been implicated in cancers of the vulva, vagina, anus, and oropharynx. Almost 300 million women worldwide have been infected with HPV, one of the commoner sexually transmitted infections, and 5% of the 13 million new cases of cancer in the world have been attributed to HPV. In developed countries, screening for cervical cancer using the Pap test has identified pre-cancerous changes in the cervix, at least in those women with access to health care. Also an HPV vaccine programme is available in 45 countries. Screening and prevention programmes have limited availability in developing countries although inexpensive low technology programmes are being developed, but access to treatment is also limited. If applied globally, HPV vaccination at 70% coverage could save the lives of 4 million women from cervical cancer, since most cases occur in developing countries.
Ovarian cancer
Ovarian cancer is the eighth most common cancer globally. It is predominantly a disease of women in industrialized countries and death from ovarian cancer is more common in North America and Europe than in Africa and Asia. Because it is largely asymptomatic in its earliest stages and lacks an effective screening programme, more than 50% of women have stage III or higher cancer (spread beyond the ovaries) by the time they are diagnosed, with a consequent poor prognosis.
Mental health
Almost 25% of women will experience mental health issues over their lifetime. Women are at higher risk than men of anxiety, depression, and psychosomatic complaints. Globally, depression is the leading cause of disease burden in women. In the United States, women have depression twice as often as men. The economic costs of depression in American women are estimated to be $20 billion every year. The risks of depression in women have been linked to the changing hormonal environment that women experience, including puberty, menstruation, pregnancy, childbirth and the menopause. Women also metabolise drugs used to treat depression differently to men. Suicide rates are lower in women than men (<1% vs. 2.4%), but suicide is a leading cause of death for women under the age of 60. In the United Kingdom, the Women's Mental Health Taskforce was formed aiming to address differences in mental health experiences and needs between women and men.
Dementia
The prevalence of Alzheimer's disease in the United States is estimated at 5.1 million, and of these two thirds are women. Furthermore, women are far more likely to be the primary caregivers of adult family members with dementia, so that they bear both the risks and burdens of this disease. The lifetime risk for a woman of developing Alzheimer's disease is twice that of men. Part of this difference may be due to life expectancy, but changing hormonal status over their lifetime may also play a part, as may differences in gene expression. Deaths due to dementia are higher in women than men (4.5% of deaths vs. 2.0%).
Bone health
Osteoporosis ranks sixth amongst chronic diseases of women in the United States, with an overall prevalence of 18%, and a much higher rate involving the femur neck or lumbar spine amongst women (16%) than men (4%) over the age of 50. Osteoporosis is a risk factor for bone fracture, and about 20% of senior citizens who sustain a hip fracture die within a year. The gender gap is largely the result of the reduction of estrogen levels in women following the menopause. Hormone replacement therapy (HRT) has been shown to reduce this risk by 25–30%, and was a common reason for prescribing it during the 1980s and 1990s. However the Women's Health Initiative (WHI) study, which demonstrated that the risks of HRT outweighed the benefits, has since led to a decline in HRT usage.
Anaemia
Anaemia is a major global health problem for women. Women are affected more than men: up to 30% of women, and 42% of pregnant women, are found to be anaemic. Anaemia is linked to a number of adverse health outcomes, including poor pregnancy outcomes and impaired cognitive function (decreased concentration and attention). The main cause of anaemia is iron deficiency. In the United States, iron-deficiency anaemia (IDA) affects 37% of pregnant women, but globally the prevalence is as high as 80%. Anaemia affects over one-third of the population in the Middle East and North Africa, caused by iron deficiency or a combination of other factors, with women making up the bulk of those affected. In Saudi Arabia, 40% of women in the 15–49 age range suffer from anaemia. IDA starts in adolescence, from excess menstrual blood loss, compounded by the increased demand for iron in growth and suboptimal dietary intake. In the adult woman, pregnancy leads to further iron depletion.
Violence
Women experience structural and personal violence differently than men. The United Nations has defined violence against women as "any act of gender-based violence that results in, or is likely to result in, physical, sexual or psychological harm or suffering to women, including threats of such acts, coercion or arbitrary deprivation of liberty, whether occurring in public or in private life".
Violence against women may take many forms, including physical, sexual, emotional and psychological, and may occur throughout the life-course. Structural violence may be embedded in legislation or policy, or take the form of systematic misogyny by organisations against groups of women. Perpetrators of personal violence include state actors, strangers, acquaintances, relatives and intimate partners, and such violence manifests itself across a spectrum from discrimination, through harassment, sexual assault and rape, and physical harm, to murder (femicide). It may also include cultural practices such as female genital cutting.
Non-fatal violence against women has severe implications for women's physical, mental and reproductive health. It is seen not as a series of isolated events but rather as a systematic pattern of behaviour that both violates women's rights and limits their role in society, and it therefore requires a systematic response.
The World Health Organization (WHO) estimates that 35% of women in the world have experienced physical or sexual violence over their lifetime, and that the most common situation is intimate partner violence: 30% of women in relationships report such experience, and 38% of murders of women are committed by intimate partners. These figures may be as high as 70% in some regions. Risk factors include low educational achievement, a parental experience of violence, childhood abuse, gender inequality and cultural attitudes that allow violence to be considered more acceptable.
The COVID-19 pandemic made gender-based violence more common in Arab countries and worsened already-existing health disparities between the sexes. Yet millions of women in the Middle East and North Africa received too little attention in the provision of enhanced protection from gender-based violence.
Violence was declared a global health priority by the WHO at its assembly in 1996, drawing on both the United Nations Declaration on the Elimination of Violence Against Women (1993) and the recommendations of the International Conference on Population and Development (Cairo, 1994) and the Fourth World Conference on Women (Beijing, 1995). This was followed by its 2002 World Report on Violence and Health, which focusses on intimate partner and sexual violence. Meanwhile, the UN embedded these in an action plan when its General Assembly passed the Millennium Declaration in September 2000, which resolved inter alia "to combat all forms of violence against women and to implement the Convention on the Elimination of All Forms of Discrimination against Women". One of the Millennium Development Goals (MDG 3) was the promotion of gender equality and the empowerment of women, which sought to eliminate all forms of violence against women as well as implementing CEDAW. This recognised that eliminating violence, including discrimination, was a prerequisite to achieving all the other goals of improving women's health. However, MDG 3 was later criticised for not including violence as an explicit target (the "missing target"), and in its evaluation violence remained a major barrier to achieving the goals. In the successor Sustainable Development Goals, which also explicitly list the related issues of discrimination, child marriage and genital cutting, one target is to "Eliminate all forms of violence against all women and girls in the public and private spheres" by 2030.
UN Women believe that violence against women "is rooted in gender-based discrimination and social norms and gender stereotypes that perpetuate such violence", and advocate moving from supporting victims to prevention, through addressing root and structural causes. They recommend programmes that start early in life and are directed towards both genders to promote respect and equality, an area often overlooked in public policy. This strategy, which involves broad educational and cultural change, also involves implementing the recommendations of the 57th session of the UN Commission on the Status of Women (2013). To that end the 2014 UN International Day of the Girl Child was dedicated to ending the cycle of violence. In 2016, the World Health Assembly also adopted a plan of action to combat violence against women, globally.
Women in health research
Changes in the way research ethics was conceived in the wake of the Nuremberg Trials (1946) led to an atmosphere of protectionism towards groups deemed to be vulnerable, which was often legislated or regulated. This resulted in the relative underrepresentation of women in clinical trials. The position of women in research was further compromised in 1977, when, in response to the tragedies resulting from thalidomide and diethylstilbestrol (DES), the United States Food and Drug Administration (FDA) prohibited women of child-bearing years from participating in early-stage clinical trials. In practice this ban was often applied very widely, excluding all women. Women, at least those in the child-bearing years, and female animals were also deemed unsuitable research subjects because of their fluctuating hormone levels during the menstrual and other reproductive cycles. However, research has demonstrated significant biological differences between the sexes in rates of susceptibility, symptoms and response to treatment in many major areas of health, including heart disease and some cancers. These exclusions pose a threat to the application of evidence-based medicine to women, and compromise the care offered to both women and men.
The increasing focus on women's rights in the United States during the 1980s drew attention to the fact that many drugs being prescribed for women had never actually been tested in women of child-bearing potential, and that there was a relative paucity of basic research into women's health. In response, the National Institutes of Health (NIH) created the Office of Research on Women's Health (ORWH) in 1990 to address these inequities. In 1993 the National Institutes of Health Revitalisation Act officially reversed US policy by requiring NIH-funded phase III clinical trials to include women. This resulted in an increase in the number of women recruited into research studies. The next phase was the specific funding of large-scale epidemiology studies and clinical trials focussing on women's health, such as the Women's Health Initiative (1991), the largest disease prevention study conducted in the US, whose role was to study the major causes of death, disability and frailty in older women. Despite this apparent progress, women remain underrepresented: a 2006 review found that women accounted for less than 25% of participants in clinical trials published in 2004, and a follow-up study by the same authors five years later found little evidence of improvement. Another study found that women made up between 10% and 47% of participants in heart disease clinical trials, despite the prevalence of heart disease in women. Lung cancer is the leading cause of cancer death amongst women, but while the number of women enrolled in lung cancer studies is increasing, they are still far less likely to be enrolled than men.
One of the challenges in assessing progress in this area is the number of clinical studies that either do not report the gender of the subjects or lack the statistical power to detect gender differences. These were still issues in 2014, further compounded by the fact that the majority of animal studies also exclude females or fail to account for differences in sex and gender. For instance, despite the higher incidence of depression amongst women, less than half of animal studies of depression use female animals. Consequently, a number of funding agencies and scientific journals are asking researchers to explicitly address issues of sex and gender in their research. Some countries address the underrepresentation of women in research studies by establishing centers of excellence in women's health research and running large-scale clinical trials such as the Women's Health Initiative.
A related issue is the inclusion of pregnant women in clinical studies. Since other illnesses can exist concurrently with pregnancy, information is needed on the response to and efficacy of interventions during pregnancy, but ethical issues relating to the fetus make this more complex. This gender bias is partly offset by the initiation of large-scale epidemiology studies of women, such as the Nurses' Health Study (1976), the Women's Health Initiative and the Black Women's Health Study.
Women have also been the subject of neglect in health care research, as in the situation revealed by the Cartwright Inquiry in New Zealand (1988), in which two feminist journalists revealed that, as part of an experiment, women with cervical abnormalities were not receiving treatment. The women were not told of the abnormalities and several later died.
The women's health care market is today a major segment of the pharmaceutical industry, projected to double in size in the five years from 2019 to 2024 and reach USD 17.8 billion. By far the most highly valued company worldwide whose leading products are in women's health is Bayer (Germany), whose focus area is contraception.
National and international initiatives
In addition to addressing gender inequity in research, a number of countries have made women's health the subject of national initiatives. For instance, in 1991 the United States Department of Health and Human Services established an Office on Women's Health (OWH) with the goal of improving the health of women in America by coordinating the women's health agenda throughout the department and other agencies. In the twenty-first century the Office has focussed on underserved women. In 1994 the Centers for Disease Control and Prevention (CDC) established its own Office of Women's Health (OWH), which was formally authorised by the 2010 Affordable Care Act (ACA).
Internationally, many United Nations agencies, such as the World Health Organization (WHO), the United Nations Population Fund (UNFPA) and UNICEF, maintain specific programs on women's health or on maternal, sexual and reproductive health. In addition, the United Nations global goals address many issues related to women's health, both directly and indirectly. These include the 2000 Millennium Development Goals (MDGs) and their successor, the Sustainable Development Goals, adopted in September 2015 following the report on progress towards the MDGs (The Millennium Development Goals Report 2015). For instance, the eight MDGs (eradicating extreme poverty and hunger, achieving universal primary education, promoting gender equality and empowering women, reducing child mortality rates, improving maternal health, combating HIV/AIDS, malaria and other diseases, ensuring environmental sustainability, and developing a global partnership for development) all impact on women's health, as do all seventeen SDGs, in addition to the specific SDG 5: achieve gender equality and empower all women and girls.
Goals and challenges
Research is a priority in terms of improving women's health. Research needs include diseases unique to women, diseases more serious in women, and diseases that differ in risk factors between women and men. The balance of sex and/or gender in research studies needs to be set appropriately to allow analyses that can detect interactions between sex and/or gender and other factors. Gronowski and Schindler suggest that scientific journals make documentation of sex a requirement when reporting the results of animal studies, and that funding agencies require justification from investigators for any sex inequity in their grant proposals, giving preference to those that are inclusive. They also suggest that it is the role of health organisations to encourage women to enroll in clinical research. However, there has been progress in terms of large-scale studies such as the WHI, and in 2006 the Society for Women's Health Research founded the Organization for the Study of Sex Differences and the journal Biology of Sex Differences to further the study of sex differences.
Research findings can take some time before becoming routinely implemented into clinical practice. Clinical medicine needs to incorporate the information already available from research studies as to the different ways in which diseases affect women and men. Many "normal" laboratory values have not been properly established for the female population separately, nor have the "normal" criteria for growth and development. Drug dosing needs to take sex differences in drug metabolism into account.
Globally, women's access to health care remains a challenge, both in developing and developed countries. In the United States, before the Affordable Health Care Act came into effect, 25% of women of child-bearing age lacked health insurance. In the absence of adequate insurance, women are likely to avoid important steps to self care such as routine physical examination, screening and prevention testing, and prenatal care. The situation is aggravated by the fact that women living below the poverty line are at greater risk of unplanned pregnancy, unplanned delivery and elective abortion. Added to the financial burden in this group are poor educational achievement, lack of transportation, inflexible work schedules and difficulty obtaining child care, all of which function to create barriers to accessing health care. These problems are much worse in developing countries. Under 50% of childbirths in these countries are assisted by healthcare providers (e.g. midwives, nurses, doctors) which accounts for higher rates of maternal death, up to 1:1,000 live births. This is despite the WHO setting standards, such as a minimum of four antenatal visits. A lack of healthcare providers, facilities, and resources such as formularies all contribute to high levels of morbidity amongst women from avoidable conditions such as obstetrical fistulae, sexually transmitted infections and cervical cancer.
These challenges are included in the goals of the Office of Research on Women's Health in the United States, as is the goal of facilitating women's access to careers in biomedicine. The ORWH believes that one of the best ways to advance research in women's health is to increase the proportion of women involved in healthcare and health research, and in leadership roles in government, centres of higher learning, and the private sector. This goal acknowledges the glass ceiling that women face in scientific careers and in obtaining resources, from grant funding to salaries and laboratory space. The National Science Foundation in the United States reports that women earn only half of the doctorates awarded in science and engineering, fill only 21% of full-time professor positions in science and 5% of those in engineering, and earn only 82% of the remuneration of their male colleagues. These figures are even lower in Europe.
| Biology and health sciences | Health and fitness: General | Health |
656965 | https://en.wikipedia.org/wiki/Polymer%20chemistry | Polymer chemistry | Polymer chemistry is a sub-discipline of chemistry that focuses on the structures, chemical synthesis, and chemical and physical properties of polymers and macromolecules. The principles and methods used within polymer chemistry are also applicable to a wide range of other chemistry sub-disciplines like organic chemistry, analytical chemistry, and physical chemistry. Many materials have polymeric structures, from fully inorganic metals and ceramics to DNA and other biological molecules. However, polymer chemistry is typically related to synthetic and organic compositions. Synthetic polymers are ubiquitous in commercial materials and products in everyday use, such as plastics and rubbers, and are major components of composite materials. Polymer chemistry can also be included in the broader fields of polymer science or even nanotechnology, both of which can be described as encompassing polymer physics and polymer engineering.
History
The work of Henri Braconnot in 1832 and that of Christian Schönbein in 1846 led to the discovery of nitrocellulose which, when treated with camphor, produced celluloid. Dissolved in ether or acetone, it becomes collodion, which has been used as a wound dressing since the U.S. Civil War. Cellulose acetate was first prepared in 1865. In the years 1834–1844 the properties of rubber (polyisoprene) were found to be greatly improved by heating with sulfur, founding the process of vulcanization.
In 1884 Hilaire de Chardonnet started the first artificial fiber plant based on regenerated cellulose, or viscose rayon, as a substitute for silk, but it was very flammable. In 1907 Leo Baekeland invented the first polymer made independently of the products of organisms, a thermosetting phenol-formaldehyde resin called Bakelite. Around the same time, Hermann Leuchs reported the synthesis of amino acid N-carboxyanhydrides and their high molecular weight products upon reaction with nucleophiles, but stopped short of referring to these as polymers, possibly because of the strong views espoused by Emil Fischer, his direct supervisor, denying the possibility of any covalent molecule exceeding 6,000 daltons. Cellophane was invented in 1908 by Jacques Brandenberger, who treated sheets of viscose rayon with acid.
The chemist Hermann Staudinger first proposed that polymers consisted of long chains of atoms held together by covalent bonds, which he called macromolecules. His work expanded the chemical understanding of polymers and was followed by an expansion of the field of polymer chemistry during which such polymeric materials as neoprene, nylon and polyester were invented. Before Staudinger, polymers were thought to be clusters of small molecules (colloids), without definite molecular weights, held together by an unknown force. Staudinger received the Nobel Prize in Chemistry in 1953. Wallace Carothers invented the first synthetic rubber, called neoprene, in 1931, then the first polyester, and went on to invent nylon, a true silk replacement, in 1935. Paul Flory was awarded the Nobel Prize in Chemistry in 1974 for his work on polymer random coil configurations in solution in the 1950s. Stephanie Kwolek developed an aramid, or aromatic nylon, named Kevlar, patented in 1966. Karl Ziegler and Giulio Natta received a Nobel Prize in 1963 for their discovery of catalysts for the polymerization of alkenes. Alan J. Heeger, Alan MacDiarmid, and Hideki Shirakawa were awarded the 2000 Nobel Prize in Chemistry for the development of polyacetylene and related conductive polymers. Polyacetylene itself did not find practical applications, but organic light-emitting diodes (OLEDs) emerged as one application of conducting polymers.
Teaching and research programs in polymer chemistry were introduced in the 1940s. An Institute for Macromolecular Chemistry was founded in 1940 in Freiburg, Germany under the direction of Staudinger. In America, a Polymer Research Institute (PRI) was established in 1941 by Herman Mark at the Polytechnic Institute of Brooklyn (now Polytechnic Institute of NYU).
Polymers and their properties
Polymers are high molecular mass compounds formed by polymerization of monomers; the simple reactive molecule from which the repeating structural units of a polymer are derived is called a monomer. A polymer's properties can be modified with additives or comonomers, which change its mechanical properties, processability, durability and so on. A polymer can be described in many ways: by its degree of polymerisation, molar mass distribution, tacticity, copolymer distribution, degree of branching, end-groups, crosslinks, crystallinity, and thermal properties such as its glass transition temperature and melting temperature. Polymers in solution have special characteristics with respect to solubility, viscosity, and gelation. Illustrative of the quantitative aspects of polymer chemistry, particular attention is paid to the number-average and weight-average molecular weights, M_n and M_w respectively.
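These two averages have simple closed forms, M_n = Σ N_i M_i / Σ N_i and M_w = Σ N_i M_i² / Σ N_i M_i, where N_i is the number of chains of molar mass M_i. A minimal sketch of the computation in Python, using an invented three-component distribution (the data are illustrative, not from any measured sample):

    # Minimal sketch: number- and weight-average molecular weights from
    # a discrete distribution. (N_i, M_i) pairs are invented example data:
    # N_i chains of molar mass M_i in g/mol.
    distribution = [(1000, 5.0e3), (500, 1.0e4), (200, 5.0e4)]

    n_total = sum(n for n, m in distribution)
    mass_total = sum(n * m for n, m in distribution)

    M_n = mass_total / n_total                                  # number average
    M_w = sum(n * m * m for n, m in distribution) / mass_total  # weight average

    print(f"M_n = {M_n:.0f} g/mol, M_w = {M_w:.0f} g/mol")
    print(f"dispersity M_w/M_n = {M_w / M_n:.2f}")  # 1.0 for a uniform polymer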
The formation and properties of polymers have been rationalized by many theories including Scheutjens–Fleer theory, Flory–Huggins solution theory, Cossee–Arlman mechanism, Polymer field theory, Hoffman Nucleation Theory, Flory–Stockmayer theory, and many others.
The study of polymer thermodynamics helps improve the material properties of various polymer-based materials such as polystyrene (styrofoam) and polycarbonate. Common improvements include toughening, improving impact resistance, improving biodegradability, and altering a material's solubility.
Viscosity
As polymers grow longer and their molecular weight increases, their viscosity tends to increase. The measured viscosity of a polymer can therefore provide valuable information about the average length of its chains, the progress of polymerization reactions, and the way in which the polymer branches.
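One widely used way to make this relationship quantitative is the Mark–Houwink equation, [η] = K·M^a, which relates intrinsic viscosity to molar mass; K and a are tabulated per polymer–solvent–temperature system. A minimal sketch that inverts it to estimate molar mass (the K and a values here are illustrative placeholders, not data for any specific system):

    # Minimal sketch: Mark-Houwink relation [eta] = K * M**a.
    # K and a are tabulated per polymer/solvent/temperature; the
    # values below are hypothetical order-of-magnitude numbers.
    K = 1.0e-4   # dL/g, hypothetical
    a = 0.70     # dimensionless; typically 0.5-0.8 for flexible chains

    def molar_mass_from_viscosity(intrinsic_viscosity_dl_per_g: float) -> float:
        """Invert [eta] = K * M**a to estimate the viscosity-average molar mass."""
        return (intrinsic_viscosity_dl_per_g / K) ** (1.0 / a)

    print(f"M_v ~ {molar_mass_from_viscosity(0.85):.3g} g/mol")  # ~4e5 here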
Classification
Polymers can be classified in many ways. Polymers, strictly speaking, comprise most solid matter: minerals (i.e. most of the Earth's crust) are largely polymers, metals are three-dimensional polymers, and organisms, living and dead, are composed largely of polymers and water. Often polymers are classified according to their origin:
biopolymers
synthetic polymers
inorganic polymers
Biopolymers are the structural and functional materials that comprise most of the organic matter in organisms. One major class of biopolymers are proteins, which are derived from amino acids. Polysaccharides, such as cellulose, chitin, and starch, are biopolymers derived from sugars. The polynucleic acids DNA and RNA are derived from phosphorylated sugars with pendant nucleotides that carry genetic information.
Synthetic polymers are the structural materials manifested in plastics, synthetic fibers, paints, building materials, furniture, mechanical parts, and adhesives. Synthetic polymers may be divided into thermoplastic polymers and thermoset plastics. Thermoplastic polymers include polyethylene, teflon, polystyrene, polypropylene, polyester, polyurethane, Poly(methyl methacrylate), polyvinyl chloride, nylons, and rayon. Thermoset plastics include vulcanized rubber, bakelite, Kevlar, and polyepoxide. Almost all synthetic polymers are derived from petrochemicals.
| Physical sciences | Chemistry: General | null |
656979 | https://en.wikipedia.org/wiki/Environmental%20chemistry | Environmental chemistry | Environmental chemistry is the scientific study of the chemical and biochemical phenomena that occur in natural places. It should not be confused with green chemistry, which seeks to reduce potential pollution at its source. It can be defined as the study of the sources, reactions, transport, effects, and fates of chemical species in the air, soil, and water environments; and the effect of human activity and biological activity on these. Environmental chemistry is an interdisciplinary science that includes atmospheric, aquatic and soil chemistry, as well as heavily relying on analytical chemistry and being related to environmental and other areas of science.
Environmental chemistry involves first understanding how the uncontaminated environment works, which chemicals in what concentrations are present naturally, and with what effects. Without this it would be impossible to accurately study the effects humans have on the environment through the release of chemicals.
Environmental chemists draw on a range of concepts from chemistry and various environmental sciences to assist in their study of what is happening to a chemical species in the environment. Important general concepts from chemistry include understanding chemical reactions and equations, solutions, units, sampling, and analytical techniques.
Contaminant
A contaminant is a substance present in nature at a level higher than fixed background levels, or one that would not otherwise be there. This may be due to human activity or bioactivity. The term contaminant is often used interchangeably with pollutant, which is a substance that detrimentally impacts the surrounding environment. While a contaminant may be present in the environment as a result of human activity without having harmful effects, it is sometimes the case that toxic or harmful effects from contamination only become apparent at a later date.
The "medium" such as soil or organism such as fish affected by the pollutant or contaminant is called a receptor, whilst a sink is a chemical medium or species that retains and interacts with the pollutant such as carbon sink and its effects by microbes.
Environmental indicators
Chemical measures of water quality include dissolved oxygen (DO), chemical oxygen demand (COD), biochemical oxygen demand (BOD), total dissolved solids (TDS), pH, nutrients (nitrates and phosphorus), heavy metals, soil chemicals (including copper, zinc, cadmium, lead and mercury), and pesticides.
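As a toy illustration of how these indicators might be combined in a screening step, the sketch below flags a water sample against threshold values; every threshold here is an invented placeholder, not a regulatory limit:

    # Minimal sketch: screen a water sample against indicator thresholds.
    # All threshold values are hypothetical, for illustration only.
    sample = {"DO": 6.5, "BOD": 4.0, "TDS": 800.0, "pH": 7.2}  # mg/L except pH

    def flags(s):
        issues = []
        if s["DO"] < 5.0:                issues.append("low dissolved oxygen")
        if s["BOD"] > 5.0:               issues.append("high biochemical oxygen demand")
        if s["TDS"] > 1000.0:            issues.append("high total dissolved solids")
        if not 6.5 <= s["pH"] <= 8.5:    issues.append("pH outside range")
        return issues or ["no flags raised"]

    print(flags(sample))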
Applications
Environmental chemistry is used by the Environment Agency in England, Natural Resources Wales, the United States Environmental Protection Agency, the Association of Public Analysts, and other environmental agencies and research bodies around the world to detect and identify the nature and source of pollutants. These can include:
Heavy metal contamination of land by industry. These can then be transported into water bodies and be taken up by living organisms such as animals and plants.
PAHs (Polycyclic Aromatic Hydrocarbon) in large bodies of water contaminated by oil spills or leaks. Many of the PAHs are carcinogens and are extremely toxic. They are regulated by concentration (ppb) using environmental chemistry and chromatography laboratory testing.
Nutrients leaching from agricultural land into water courses, which can lead to algal blooms and eutrophication.
Urban runoff of pollutants washing off impervious surfaces (roads, parking lots, and rooftops) during rain storms. Typical pollutants include gasoline, motor oil and other hydrocarbon compounds, metals, nutrients and sediment (soil).
Organometallic compounds.
Methods
Quantitative chemical analysis is a key part of environmental chemistry, since it provides the data that frame most environmental studies.
Common analytical techniques used for quantitative determinations in environmental chemistry include classical wet chemistry, such as gravimetric, titrimetric and electrochemical methods. More sophisticated approaches are used in the determination of trace metals and organic compounds. Metals are commonly measured by atomic spectroscopy and mass spectrometry: atomic absorption spectrophotometry (AAS) and inductively coupled plasma atomic emission (ICP-AES) or inductively coupled plasma mass spectrometry (ICP-MS) techniques. Organic compounds, including PAHs, are also commonly measured using mass spectrometric methods, such as gas chromatography-mass spectrometry (GC/MS) and liquid chromatography-mass spectrometry (LC/MS). Tandem mass spectrometry (MS/MS) and high-resolution/accurate-mass spectrometry (HR/AM) offer sub-part-per-trillion detection. Non-MS methods using GCs and LCs with universal or specific detectors remain staples in the arsenal of available analytical tools.
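Whatever the instrument, quantitation commonly rests on an external calibration curve: standards of known concentration are measured, a line is fitted to response versus concentration, and the unknown is read off the fit. A minimal sketch of that generic workflow (all numbers invented):

    # Minimal sketch: external calibration curve for quantitation.
    # Standard concentrations (e.g. ug/L) and instrument responses
    # (e.g. peak areas) are invented example data.
    import numpy as np

    conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])        # calibration standards
    signal = np.array([0.02, 1.05, 1.98, 5.10, 9.95])  # measured responses

    slope, intercept = np.polyfit(conc, signal, 1)     # least-squares line

    unknown_signal = 3.4
    unknown_conc = (unknown_signal - intercept) / slope
    print(f"estimated concentration: {unknown_conc:.2f} ug/L")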
Other parameters often measured in environmental chemistry are radiochemicals. These are pollutants that emit ionizing radiation, such as alpha and beta particles, posing danger to human health and the environment. Particle counters and scintillation counters are most commonly used for these measurements. Bioassays and immunoassays are utilized for toxicity evaluations of chemical effects on various organisms. The polymerase chain reaction (PCR) can identify species of bacteria and other organisms through specific DNA and RNA gene isolation and amplification, and is showing promise as a valuable technique for identifying environmental microbial contamination.
Published analytical methods
Peer-reviewed test methods have been published by government agencies and private research organizations. Approved published methods must be used when testing to demonstrate compliance with regulatory requirements.
Notable environmental chemists
Joan Berkowitz
Paul Crutzen (Nobel Prize in Chemistry, 1995)
Philip Gschwend
Alice Hamilton
John M. Hayes
Charles David Keeling
Ralph Keeling
Mario Molina (Nobel Prize in Chemistry, 1995)
James J. Morgan
Clair Patterson
Roger Revelle
F. Sherwood Rowland (Nobel Prize in Chemistry, 1995)
Robert Angus Smith
Susan Solomon
Werner Stumm
Ellen Swallow Richards
Hans Suess
John Tyndall
| Physical sciences | Chemistry: General | null |
657224 | https://en.wikipedia.org/wiki/Bedrock | Bedrock | In geology, bedrock is solid rock that lies under loose material (regolith) within the crust of Earth or another terrestrial planet.
Definition
Bedrock is the solid rock that underlies looser surface material. An exposed portion of bedrock is often called an outcrop. The various kinds of broken and weathered rock material, such as soil and subsoil, that may overlie the bedrock are known as regolith.
Engineering geology
The surface of the bedrock beneath the soil cover (regolith) is also known as rockhead in engineering geology, and its identification by digging, drilling or geophysical methods is an important task in most civil engineering projects. Superficial deposits can be very thick, such that the bedrock lies hundreds of meters below the surface.
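Of the geophysical methods, a common choice is a two-layer seismic refraction survey: first-arrival travel times give the seismic velocities of the overburden (v1) and the bedrock (v2), and the crossover distance x_c on the travel-time plot yields the depth to rockhead. A minimal sketch using the standard two-layer formula, with invented survey values:

    # Minimal sketch: depth to rockhead from a two-layer seismic
    # refraction survey. Survey values are invented for illustration.
    import math

    v1 = 600.0    # m/s, velocity in the overburden (regolith)
    v2 = 2400.0   # m/s, velocity in the bedrock
    x_c = 40.0    # m, crossover distance read off the travel-time plot

    # Standard two-layer result: z = (x_c / 2) * sqrt((v2 - v1) / (v2 + v1))
    z = (x_c / 2.0) * math.sqrt((v2 - v1) / (v2 + v1))
    print(f"estimated depth to rockhead: {z:.1f} m")   # ~15.5 m here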
Weathering of bedrock
Exposed bedrock experiences weathering, which may be physical or chemical, and which alters the structure of the rock to leave it susceptible to erosion. Bedrock may also experience subsurface weathering at its upper boundary, forming saprolite.
Geologic map
A geologic map of an area will usually show the distribution of differing bedrock types, rock that would be exposed at the surface if all soil or other superficial deposits were removed. Where superficial deposits are so thick that the underlying bedrock cannot be reliably mapped, the superficial deposits will be mapped instead (for example, as alluvium).
| Physical sciences | Petrology | Earth science |
657938 | https://en.wikipedia.org/wiki/Viper | Viper | Vipers are snakes in the family Viperidae, found in most parts of the world except Antarctica, Australia, Hawaii, Madagascar, New Zealand, Ireland, and various other isolated islands. They are venomous and have long (relative to non-vipers), hinged fangs that permit deep penetration and injection of their venom. Three subfamilies are currently recognized. They are also known as viperids. The name "viper" is derived from the Latin word vipera, -ae, also meaning viper, possibly from vivus ("living") and parere ("to beget"), referring to viviparity (giving live birth), a trait common in vipers, as it is in most species of Boidae.
Description
All viperids have a pair of relatively long solenoglyphous (hollow) fangs that are used to inject venom from glands located towards the rear of the upper jaws, just behind the eyes. Each of the two fangs is at the front of the mouth on a short maxillary bone that can rotate back and forth. When not in use, the fangs fold back against the roof of the mouth and are enclosed in a membranous sheath. This rotating mechanism allows for very long fangs to be contained in a relatively small mouth. The left and right fangs can be rotated together or independently. During a strike, the mouth can open nearly 180° and the maxilla rotates forward, erecting the fangs as late as possible so that the fangs do not become damaged, as they are brittle. The jaws close upon impact and the muscular sheaths encapsulating the venom glands contract, injecting the venom as the fangs penetrate the target. This action is very fast; in defensive strikes, it will be more a stab than a bite. Viperids use this mechanism primarily for immobilization and digestion of prey. Pre-digestion occurs as the venom contains proteases, which degrade tissues. Secondarily, it is used for self defense, though in cases with nonprey, such as humans, they may give a dry bite (not inject any venom). A dry bite allows the snake to conserve its precious reserve of venom, because once it has been depleted, time is needed to replenish it, leaving the snake vulnerable. In addition to being able to deliver dry bites, vipers can inject larger quantities of venom into larger prey targets, and smaller amounts into small prey. This causes the ideal amount of predigestion for the lowest amount of venom.
Almost all vipers have keeled scales, a stocky build with a short tail, and a triangle-shaped head distinct from the neck, owing to the location of the venom glands. The great majority have vertically elliptical, or slit-shaped, pupils that can open wide to cover most of the eye or close almost completely, which helps them to see in a wide range of light levels. Typically, vipers are nocturnal and ambush their prey.
Compared to many other snakes, vipers often appear rather sluggish. Most are ovoviviparous: the eggs are retained inside the mother's body, and the young emerge living. However, a few lay eggs in nests. Typically, the number of young in a clutch remains constant, but as the weight of the mother increases, larger eggs are produced, yielding larger young.
Geographic range
Viperid snakes are found in the Americas, Africa, Eurasia, and South Asia. In the Americas, they are native to areas south of 48°N. In the Old World, viperids are found everywhere except Siberia, Ireland, and north of the Arctic Circle in Norway and Sweden. Wild viperids are not found in Australia. The common adder, a viperid, is the only venomous snake found in Great Britain.
Venom
Viperid venoms typically contain an abundance of protein-degrading enzymes, called proteases, that produce symptoms such as pain, strong local swelling and necrosis, blood loss from cardiovascular damage complicated by coagulopathy, and disruption of the blood-clotting system. Also being vasculotoxic in nature, viperine venom causes vascular endothelial damage and hemolysis. Death is usually caused by collapse in blood pressure. This is in contrast to elapid venoms, which generally contain neurotoxins that disable muscle contraction and cause paralysis. Death from elapid bites usually results from asphyxiation because the diaphragm can no longer contract, but this rule does not always apply; some elapid bites include proteolytic symptoms typical of viperid bites, while some viperid bites produce neurotoxic symptoms.
Proteolytic venom is also dual-purpose: first, it is used for defense and to immobilize prey, as with neurotoxic venoms; second, many of the venom's enzymes have a digestive function, breaking down molecules such as lipids, nucleic acids, and proteins. This is an important adaptation, as many vipers have inefficient digestive systems.
Due to the nature of proteolytic venom, a viperid bite is often a very painful experience and should always be taken seriously, though it may not necessarily prove fatal. Even with prompt and proper treatment, a bite can still result in a permanent scar, and in the worst cases, the affected limb may even have to be amputated. A victim's fate is impossible to predict, as this depends on many factors, including the species and size of the snake involved, how much venom was injected (if any), and the size and condition of the patient before being bitten. Viper bite victims may also be allergic to the venom or the antivenom.
Behavior
These snakes can decide how much venom to inject depending on the circumstances. The most important determinant of venom expenditure is generally the size of the snake; larger specimens can deliver much more venom. The species is also important, since some are likely to inject more venom than others, may have more venom available, strike more accurately, or deliver a number of bites in a short time. In predatory bites, factors that influence the amount of venom injected include the size of the prey, the species of prey, and whether the prey item is held or released. The need to label prey for chemosensory relocation after a bite and release may also play a role. In defensive bites, the amount of venom injected may be determined by the size or species of the predator (or antagonist), as well as the assessed level of threat, although larger assailants and higher threat levels may not necessarily lead to larger amounts of venom being injected.
Prey tracking
Hemotoxic venom takes more time than neurotoxic venom to immobilize prey, so viperid snakes need to track down prey animals after they have been bitten, in a process known as "prey relocalization". Vipers are able to do this via certain proteins contained in their venom. This important adaptation allowed rattlesnakes to evolve the strike-and-release bite mechanism, which provided a huge benefit to snakes by minimizing contact with potentially dangerous prey animals. This adaptation, then, requires the snake to track down the bitten animal to eat it, in an environment full of other animals of the same species. Western diamondback rattlesnakes respond more actively to mouse carcasses that have been injected with crude rattlesnake venom. When the various components of the venom were separated out, the snakes responded to mice injected with two kinds of disintegrins, which are responsible for allowing the snakes to track down their prey.
Subfamilies
The type genus is Vipera Laurenti, 1768.
Sensory organs
Heat-sensing pits
Pit vipers have specialized sensory organs near the nostrils called heat-sensing pits. The location of this organ is unique to pit vipers. These pits have the ability to detect thermal radiation emitted by warm-blooded animals, helping them better understand their environment. Internally the organ forms a small pit lined with membranes, external and internal, attached to the trigeminal nerve. Infrared light signals the internal membranes, which in turn signal the trigeminal nerve and send the infrared signals to the brain, where they are overlaid onto the visual image created by the eyes.
Taxonomy
Whether family Viperidae is attributed to Oppel (1811), as opposed to Laurenti (1768) or Gray (1825), is subject to some interpretation. The consensus among leading experts, though, is that Laurenti used viperae as the plural of vipera (Latin for "viper", "adder", or "snake") and did not intend for it to indicate a family group taxon. Rather, it is attributed to Oppel, based on his Viperini as a distinct family group name, despite the fact that Gray was the first to use the form Viperinae.
| Biology and health sciences | Snakes | Animals |
1672587 | https://en.wikipedia.org/wiki/Caisson%20%28engineering%29 | Caisson (engineering) | In geotechnical engineering, a caisson (borrowed from the French caisson, an augmentative of caisse, "box") is a watertight retaining structure used, for example, to work on the foundations of a bridge pier, for the construction of a concrete dam, or for the repair of ships.
Caissons are constructed in such a way that the water can be pumped out, keeping the work environment dry. When piers are being built using an open caisson, and it is not practical to reach suitable soil, friction pilings may be driven to form a suitable sub-foundation. These piles are connected by a foundation pad upon which the column pier is erected.
Caisson engineering has been used since at least the 19th century, with three prominent examples being the Royal Albert Bridge (completed in 1859), the Eads Bridge (completed in 1874), and the Brooklyn Bridge (completed in 1883).
Types
To install a caisson in place, it is brought down through soft mud until a suitable foundation material is encountered. While bedrock is preferred, a stable, hard mud is sometimes used when bedrock is too deep. The four main types of caisson are box caisson, open caisson, pneumatic caisson and monolithic caisson.
Box
A box caisson is a prefabricated concrete box (with sides and a bottom); it is set down on prepared bases. Once in place, it is filled with concrete to become part of the permanent works, such as the foundation for a bridge pier. Hollow concrete structures are usually less dense than water so a box caisson must be ballasted or anchored to keep it from floating until it can be filled with concrete. Sometimes elaborate anchoring systems may be required, such as in tidal zones. Adjustable anchoring systems combined with a GPS survey enable engineers to position a box caisson with pinpoint accuracy.
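A rough buoyancy balance shows why: the box floats whenever its mass is less than the mass of water it displaces, so the ballast must make up the difference. A minimal sketch with invented dimensions and densities:

    # Minimal sketch: ballast needed to hold a box caisson down.
    # Dimensions, mass and density are invented for illustration.
    RHO_WATER = 1025.0                     # kg/m^3, seawater

    volume_displaced = 20.0 * 10.0 * 5.0   # m^3, submerged box envelope
    caisson_mass = 6.0e5                   # kg, empty concrete box

    buoyant_mass = RHO_WATER * volume_displaced   # mass of water displaced
    ballast_needed = buoyant_mass - caisson_mass  # kg; positive => it floats

    if ballast_needed > 0:
        print(f"add at least {ballast_needed / 1000:.0f} t of ballast or anchor it")
    else:
        print("box sinks under its own weight")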
Open
An open caisson is similar to a box caisson, except that it does not have a bottom face. It is suitable for use in soft clays (e.g. in some river-beds), but not where there may be large obstructions in the ground. In soft grounds or where the water table is high and open trench excavations are impractical, open caissons can also be used to install deep manholes, pump stations and reception/launch pits for microtunnelling, pipe jacking and other operations.
A caisson is sunk by self-weight, by concrete or water ballast placed on top, or by hydraulic jacks. The leading edge (or cutting shoe) of the caisson is sloped out at a sharp angle to aid sinking in a vertical manner; it is usually made of steel. The shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. An open caisson may fill with water during sinking. The material is excavated by a clamshell bucket on a crane.
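Whether a caisson continues to sink at a given stage can be estimated with a simple vertical force balance: self-weight plus ballast, less buoyancy, must exceed the skin friction on the shaft walls. A minimal sketch (all values invented; real unit skin friction varies widely with soil type):

    # Minimal sketch: vertical force balance for sinking an open caisson.
    # All values are invented for illustration.
    weight = 8.0e6                # N, caisson self-weight plus ballast
    buoyancy = 2.5e6              # N, uplift from displaced water
    perimeter = 60.0              # m, outer perimeter of the shaft walls
    depth = 12.0                  # m, current embedment below ground
    unit_skin_friction = 15.0e3   # N/m^2, soil-wall friction (varies widely)

    skin_friction = unit_skin_friction * perimeter * depth  # total wall drag
    net_driving_force = weight - buoyancy - skin_friction

    if net_driving_force > 0:
        print("caisson continues to sink")
    else:
        print("stuck: add ballast, jack down, or lubricate with bentonite")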
The formation level subsoil may still not be suitable for excavation or bearing capacity. The water in the caisson (due to a high water table) balances the upthrust forces of the soft soils underneath. If dewatered, the base may "pipe" or "boil", causing the caisson to sink. To combat this problem, piles may be driven from the surface to act as:
Load-bearing walls, in that they transmit loads to deeper soils.
Anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven.
H-beam sections (typical column sections, owing to their resistance to bending about all axes) may be driven at angles ("raked") to rock or other firmer soils; the H-beams are left extended above the base. A reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. When the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil.
Monolithic
A monolithic caisson (or simply a monolith) is larger than the other types of caisson, but similar to open caissons. Such caissons are often found in quay walls, where resistance to impact from ships is required.
Pneumatic
Shallow caissons may be open to the air, whereas pneumatic caissons (sometimes called pressurized caissons), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. An airlock allows access to the chamber. Workers, called sandhogs in American English, move mud and rock debris (called muck) from the edge of the workspace to a water-filled pit, connected by a tube (called the muck tube) to the surface. A crane at the surface removes the soil with a clamshell bucket. The water pressure in the tube balances the air pressure, with excess air escaping up the muck tube. The pressurized air flow must be constant to ensure regular air changes for the workers and prevent excessive inflow of mud or water at the base of the caisson. When the caisson hits bedrock, the sandhogs exit through the airlock and fill the box with concrete, forming a solid foundation pier.
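The required working pressure follows directly from hydrostatics: to hold water and mud out at the cutting edge, the gauge pressure in the chamber must be at least ρgh for a depth h below the water table. A minimal sketch with an invented depth:

    # Minimal sketch: working pressure in a pneumatic caisson chamber.
    RHO_WATER = 1000.0   # kg/m^3, fresh water
    G = 9.81             # m/s^2, gravitational acceleration

    depth = 20.0         # m below the water table (invented value)

    gauge_pressure = RHO_WATER * G * depth   # Pa, balances the water head
    print(f"required gauge pressure: {gauge_pressure / 1000:.0f} kPa "
          f"(~{gauge_pressure / 101325:.1f} atm above atmospheric)")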
A pneumatic (compressed-air) caisson has the advantage of providing dry working conditions, which is better for placing concrete. It is also well suited for foundations for which other methods might cause settlement of adjacent structures.
Construction workers who leave the pressurized environment of the caisson must decompress at a rate that allows symptom-free release of inert gases dissolved in the body tissues if they are to avoid decompression sickness, a condition first identified in caisson workers, and originally named "caisson disease" in recognition of the occupational hazard. Construction of the Brooklyn Bridge, which was built with the help of pressurised caissons, resulted in numerous workers being either killed or permanently injured by caisson disease during its construction. Barotrauma of the ears, sinus cavities and lungs and dysbaric osteonecrosis are other risks.
Other uses
Caissons have also been used in the installation of hydraulic elevators where a single-stage ram is installed below the ground level.
Caissons, codenamed Phoenix, were an integral part of the Mulberry harbours used during the World War II Allied invasion of Normandy.
Other meanings
Boat lift caissons: The word caisson is also used as a synonym for the moving trough part of caisson locks, canal lifts and inclines in which boats and ships rest while being lifted from one canal elevation to another; the water is retained on the inside of the caisson, or excluded from the caisson, according to the respective operating principle.
Structural caissons: Caisson is also sometimes used as a colloquial term for a reinforced concrete structure formed by pouring into a hollow cylindrical form, typically by placing a caisson form below grade in an open excavation and pouring once backfill is complete, or by drilling at grade, although this can be problematic with deep caissons, as unsupported excavations can collapse before the caisson form can be inserted. In this manner, the earth placed around the empty caisson form provides stability and strength, allowing concrete to be poured with fewer complications and with less risk of a form blowout. While, technically, only the form itself is actually a caisson, it is not uncommon for any below-grade cast concrete pillar to be referred to as, simply, a caisson.
Ventilation filtration systems: The word caisson is also used as a name for an airtight housing for ventilation filters in facilities that handle hazardous materials. The housing usually has an upstream compartment for a pre-filter element and a downstream compartment for a high-efficiency filter element. It may have multiple sets of compartments. The housing has gasketed access doors to allow for the change out of the filter elements. The housing is usually equipped with connection points used to test the efficiency of the filters and monitor changes in the differential pressure across the filter media.
| Technology | Hydraulic infrastructure | null |
1672590 | https://en.wikipedia.org/wiki/Limbers%20and%20caissons | Limbers and caissons | A limber is a two-wheeled cart designed to support the trail of an artillery piece, or the stock of a field carriage such as a caisson or traveling forge, allowing it to be towed. The trail is the hinder end of the stock of a gun-carriage, which rests or slides on the ground when the carriage is unlimbered.
A caisson () is a two-wheeled cart designed to carry artillery ammunition; the British term is "ammunition wagon". Caissons are also used to bear the casket of the deceased in some state and military funerals in certain Western cultures, including the United States.
Before the 19th century
As artillery pieces developed trunnions and were placed on carriages featuring two wheels and a trail, a limber was devised. This was a simple cart with a pintle. When the piece was to be towed, it was raised over the limber and then lowered, with the pintle fitting into a hole in the trail. Horses or other draft animals were harnessed in single file to haul the limber. There was no provision for carrying ammunition on the limber, but an ammunition chest was often carried between the two pieces of the trail.
Nineteenth century
The British developed a new system of carriages, which was adopted by the French, then copied from the French by the Americans.
During the American Civil War, U.S. Army equipment was identical to Confederate Army equipment, essentially identical to French equipment, and similar to that of other nations. The field artillery limber assumed its archetypal form – two wheels, an ammunition chest, a pintle hook at the rear, and a central pole with horses harnessed on either side. The artillery piece had an iron ring (lunette) at the end of the trail. To move the piece, the lunette was dropped over the pintle hook (which resembles a modern trailer hitch). The connection was secured by inserting a pintle hook key into the pintle.
The quantity of ammunition in the chest, which could be detached from the limber, depended on the size of the piece. An ammunition chest for the M1857 light 12-pounder gun ("Napoleon") carried 28 rounds. The cover of the ammunition chest was made of sheet copper to prevent stray embers from setting the chest on fire.
Six horses were the preferred team for a field piece, with four being considered the minimum team. Horses were harnessed in pairs on either side of the limber pole. A driver rode on each left-hand ("near") horse and held reins for both the horse he rode and the horse to his right (the "off horse").
In addition to hauling the artillery piece, the limber also hauled the caisson, a two-wheeled cart that carried two extra ammunition chests, a spare wheel and extra limber pole slung beneath. There was one caisson for each artillery piece in a battery. The cannoneers could ride the ammunition chests on the limbers and the caisson when speed was required, but to do so for any length of time was too tiring for the horses, so cannoneers generally walked. The exception to this rule would be in horse-artillery batteries, where the cannoneers rode saddle horses.
When the artillery piece was in action, the piece's limber would have been six yards behind the piece, depending on the terrain, with the caisson and its limber farther to the rear of the firing line, preferably behind some natural cover such as a ridge. While firing the piece, if possible, the crew kept the two ammunition chests on the caisson full, preferably supplying the gun from the third ammunition chest on the caisson's limber. When the ammunition from the ammunition chest on the piece's limber was exhausted, the piece's limber and the caisson's limber exchanged places. The empty ammunition chest was removed, and then the middle chest on the caisson was moved forward onto the limber. A fully loaded ammunition chest for a "Napoleon" 12-pounder weighed 650 pounds, so the chest was dragged and pushed, rather than lifted, into place. With a full ammunition chest in place, the limber was ready to move forward and supply the piece.
Although the limber's primary purpose was to haul the artillery piece and the caisson, it also hauled the battery wagon and a traveling forge. The battery wagon carried spare parts, paint, etc., while the traveling forge was for use by a blacksmith in keeping the battery's hardware in repair. The ammunition chest on the limber hauling the battery wagon contained carpenters' and saddle-makers' tools, and the ammunition chest on the limber hauling the traveling forge contained blacksmiths' tools.
Siege artillery limbers, unlike field artillery limbers, did not have an ammunition chest. Siege artillery limbers resembled their predecessors: they were two-wheeled carts with a pintle, now somewhat behind the axle. When the piece was to be hauled, the trail was raised above the limber, then lowered, with the pintle fitting into a hole in the trail. Unlike the situation with its predecessors, horses were harnessed to the 19th-century limber in pairs, with six to ten horses needed to haul a siege gun or howitzer.
20th century
With the general passing of the horse as a mover of artillery, the need for limbers and caissons also largely passed. Trucks or artillery tractors could tow artillery pieces but did not completely take over until after the end of the Second World War. Many armies retained limbers seemingly from sheer inertia.
As a field artillery piece, the British 25-pounder was designed to be towed only in conjunction with a trailer, which provided the vital over-run braking system for the gun. Both the unsatisfactory, and consequently short-lived, artillery trailer No. 24 and the far more usual No. 27 had the same type of wheel hubs as the gun. The No. 27 also carried 32 rounds of ammunition and had a useful stores tray on the front and brackets on the top for a gun traversing platform and a spare hub.
Some simple limbers were kept for heavier pieces such as the eight-inch Howitzer M1 to achieve better weight distribution.
Caissons in American and British culture
The song "The Caissons Go Rolling Along" refers to these; the version adopted as the U.S. Army's official song has, among other changes, replaced the word caissons with Army.
Caissons are used for burials at Arlington National Cemetery and for state funerals for United States government dignitaries including the President of the United States for the remains to be carried by members of The Old Guard's Caisson Platoon.
When the equipage is used in this way for a state funeral in Britain, the coffin is usually placed on a platform mounted on top of the gun and referred to as being carried on a gun carriage. For the funerals of British monarchs, there is a tradition that the horses be replaced by a detail from the Royal Navy.
| Technology | Artillery | null |
1673288 | https://en.wikipedia.org/wiki/Electron%20magnetic%20moment | Electron magnetic moment | In atomic physics, the electron magnetic moment, or more specifically the electron magnetic dipole moment, is the magnetic moment of an electron resulting from its intrinsic properties of spin and electric charge. The value of the electron magnetic moment (symbol μe) is approximately −9.2847647×10⁻²⁴ J⋅T⁻¹. In units of the Bohr magneton (μB), it is approximately −1.00115965218, a value that has been measured with a relative accuracy on the order of 10⁻¹³.
Magnetic moment of an electron
The electron is a charged particle with charge −e, where e is the elementary charge. Its angular momentum comes from two types of rotation: spin and orbital motion. From classical electrodynamics, a rotating distribution of electric charge produces a magnetic dipole, so that it behaves like a tiny bar magnet. One consequence is that an external magnetic field exerts a torque on the electron magnetic moment that depends on the orientation of this dipole with respect to the field.
If the electron is visualized as a classical rigid body in which the mass and charge have identical distribution and motion that is rotating about an axis with angular momentum L, its magnetic dipole moment μ is given by

    \boldsymbol{\mu} = -\frac{e}{2 m_\mathrm{e}}\, \mathbf{L},

where m_e is the electron rest mass. The angular momentum L in this equation may be the spin angular momentum, the orbital angular momentum, or the total angular momentum. The ratio between the true spin magnetic moment and that predicted by this model is a dimensionless factor g_e, known as the electron g-factor:

    \boldsymbol{\mu} = g_\mathrm{e} \left( -\frac{e}{2 m_\mathrm{e}} \right) \mathbf{L}.

It is usual to express the magnetic moment in terms of the reduced Planck constant ħ and the Bohr magneton μ_B = eħ/(2m_e):

    \boldsymbol{\mu} = -g_\mathrm{e}\, \mu_\mathrm{B}\, \frac{\mathbf{L}}{\hbar}.
Since the magnetic moment is quantized in units of μ_B, correspondingly the angular momentum is quantized in units of ħ.
Formal definition
Classical notions such as the center of charge and mass are, however, hard to make precise for a quantum elementary particle. In practice the definition used by experimentalists comes from the form factors F_i(q²) appearing in the matrix element

    \langle f | j^\mu(0) | i \rangle = \bar{u}(p_f)\, \Gamma^\mu(p_f, p_i)\, u(p_i)

of the electromagnetic current operator between two on-shell states, where the vertex function can be decomposed as

    \Gamma^\mu = F_1(q^2)\, \gamma^\mu + \frac{i \sigma^{\mu\nu} q_\nu}{2 m_\mathrm{e}}\, F_2(q^2) + \frac{\sigma^{\mu\nu} q_\nu}{2 m_\mathrm{e}}\, \gamma_5\, F_3(q^2) + \left( q^2 \gamma^\mu - q^\mu \gamma^\nu q_\nu \right) \gamma_5\, F_A(q^2).

Here u(p_i) and \bar{u}(p_f) are 4-spinor solutions of the Dirac equation normalized so that \bar{u} u = 2 m_\mathrm{e}, and q = p_f − p_i is the momentum transfer from the current to the electron. The form factor F_1(0) is the electron's charge, F_1(0) + F_2(0) (in units of eħ/2m_e) is its static magnetic dipole moment, and F_3(0) provides the formal definition of the electron's electric dipole moment. The remaining form factor, F_A(q²), would, if nonzero, be the anapole moment.
Spin magnetic dipole moment
The spin magnetic moment is intrinsic for an electron. It is

    \boldsymbol{\mu}_\mathrm{S} = -g_\mathrm{S}\, \mu_\mathrm{B}\, \frac{\mathbf{S}}{\hbar}.

Here S is the electron spin angular momentum. The spin g-factor is approximately two: g_S ≈ 2. The factor of two indicates that the electron appears to be twice as effective in producing a magnetic moment as a charged body for which the mass and charge distributions are identical.

The spin magnetic dipole moment is approximately one μ_B because g_S ≈ 2 and the electron is a spin-1/2 particle (S = ħ/2):

    \mu_\mathrm{S} \approx 2 \cdot \mu_\mathrm{B} \cdot \frac{\hbar/2}{\hbar} = \mu_\mathrm{B}.

The z component of the electron magnetic moment is

    (\boldsymbol{\mu}_\mathrm{S})_z = -g_\mathrm{S}\, \mu_\mathrm{B}\, m_\mathrm{s},

where m_s is the spin quantum number. Note that this is a negative constant multiplied by the spin, so the magnetic moment is antiparallel to the spin angular momentum.
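A quick numerical check of these relations (a minimal sketch; the constants are the exact SI defining values and the CODATA 2018 electron mass, truncated to the digits shown):

    # Minimal sketch: Bohr magneton and electron spin magnetic moment.
    E = 1.602176634e-19       # C, elementary charge (exact in SI)
    HBAR = 1.054571817e-34    # J*s, reduced Planck constant
    M_E = 9.1093837015e-31    # kg, electron mass (CODATA 2018)

    mu_B = E * HBAR / (2 * M_E)      # Bohr magneton, ~9.274e-24 J/T
    g_s = 2.00231930436              # spin g-factor, truncated
    mu_z = -(g_s / 2) * mu_B         # z component for m_s = +1/2

    print(f"mu_B = {mu_B:.4e} J/T")  # -> 9.2740e-24
    print(f"mu_z = {mu_z:.4e} J/T")  # -> -9.2848e-24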
The spin g-factor comes from the Dirac equation, a fundamental equation connecting the electron's spin with its electromagnetic properties. Reduction of the Dirac equation for an electron in a magnetic field to its non-relativistic limit yields the Schrödinger equation with a correction term, which takes account of the interaction of the electron's intrinsic magnetic moment with the magnetic field giving the correct energy.
For the electron spin, the most accurate value for the spin $g$-factor has been experimentally determined to have the value
$$|g_\text{S}| = 2.00231930436\ldots$$
Note that this differs only marginally from the value of 2 predicted by the Dirac equation. The small correction is known as the anomalous magnetic dipole moment of the electron; it arises from the electron's interaction with virtual photons in quantum electrodynamics. A triumph of the quantum electrodynamics theory is the accurate prediction of the electron g-factor. The CODATA value for the electron magnetic moment is
$$\mu_\text{e} \approx -9.2847647\times10^{-24}\ \text{J}\cdot\text{T}^{-1}.$$
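The leading QED contribution to this anomaly is Schwinger's one-loop term α/2π. A minimal Python sketch, assuming only the fine-structure constant from scipy.constants and a measured anomaly quoted to illustrative precision, shows how close this single term already comes:

```python
# Minimal sketch: one-loop QED estimate of the electron anomaly a_e = (g - 2)/2.
from math import pi
from scipy.constants import alpha  # fine-structure constant

a_one_loop = alpha / (2 * pi)   # Schwinger term
a_measured = 0.00115965218      # illustrative experimental value (see text)

print(f"alpha/(2 pi) = {a_one_loop:.10f}")  # ~ 0.0011614097
print(f"measured a_e = {a_measured:.10f}")
print(f"residual     = {a_measured - a_one_loop:.2e}")  # left for higher-order terms
```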
Orbital magnetic dipole moment
The revolution of an electron around an axis through another object, such as the nucleus, gives rise to the orbital magnetic dipole moment. Suppose that the angular momentum for the orbital motion is $\mathbf{L}$. Then the orbital magnetic dipole moment is
$$\boldsymbol{\mu}_\text{L} = -g_\text{L}\,\mu_\text{B}\,\frac{\mathbf{L}}{\hbar}.$$
Here $g_\text{L}$ is the electron orbital $g$-factor and $\mu_\text{B}$ is the Bohr magneton. The value of $g_\text{L}$ is exactly equal to one, by a quantum-mechanical argument analogous to the derivation of the classical gyromagnetic ratio.
Total magnetic dipole moment
The total magnetic dipole moment resulting from both spin and orbital angular momenta of an electron is related to the total angular momentum $\mathbf{J} = \mathbf{L} + \mathbf{S}$ by a similar equation:
$$\boldsymbol{\mu}_\text{J} = -g_\text{J}\,\mu_\text{B}\,\frac{\mathbf{J}}{\hbar}.$$
The $g$-factor $g_\text{J}$ is known as the Landé g-factor, which can be related to $g_\text{L}$ and $g_\text{S}$ by quantum mechanics. See Landé g-factor for details.
Example: hydrogen atom
For a hydrogen atom, an electron occupying the atomic orbital $\psi_{n,\ell,m}$, the magnetic dipole moment is given by
$$\mu_\text{L} = -g_\text{L}\,\mu_\text{B}\,\frac{L}{\hbar} = -\mu_\text{B}\sqrt{\ell(\ell+1)}.$$
Here $L = \hbar\sqrt{\ell(\ell+1)}$ is the magnitude of the orbital angular momentum, and $n$, $\ell$, and $m$ are the principal, azimuthal, and magnetic quantum numbers respectively.
The $z$ component of the orbital magnetic dipole moment for an electron with a magnetic quantum number $m_\ell$ is given by
$$\left(\boldsymbol{\mu}_\text{L}\right)_z = -\mu_\text{B}\,m_\ell.$$
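As a worked example, the sketch below evaluates both expressions for a hydrogen 3d electron (ℓ = 2); the function names are ours, and moments are reported in units of the Bohr magneton:

```python
from math import sqrt

def orbital_moment_magnitude(l: int) -> float:
    """|mu_L| in Bohr magnetons: sqrt(l*(l+1))."""
    return sqrt(l * (l + 1))

def orbital_moment_z(m_l: int) -> float:
    """z component of mu_L in Bohr magnetons: -m_l."""
    return -float(m_l)

# Example: a 3d electron has l = 2 and m_l in {-2, ..., +2}.
print(orbital_moment_magnitude(2))                   # ~ 2.449 (mu_B)
print([orbital_moment_z(m) for m in range(-2, 3)])   # [2.0, 1.0, 0.0, -1.0, -2.0] (mu_B)
```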
History
The electron magnetic moment is intrinsically connected to electron spin and was first hypothesized during the early models of the atom in the early twentieth century. The first to introduce the idea of electron spin was Arthur Compton in his 1921 paper on investigations of ferromagnetic substances with X-rays. In Compton's article, he wrote: "Perhaps the most natural, and certainly the most generally accepted view of the nature of the elementary magnet, is that the revolution of electrons in orbits within the atom give to the atom as a whole the properties of a tiny permanent magnet."
That same year Otto Stern proposed an experiment, carried out later as the Stern–Gerlach experiment, in which silver atoms passing through a magnetic field were deflected into two opposite distributions. This pre-1925 period marked the old quantum theory built upon the Bohr–Sommerfeld model of the atom with its classical elliptical electron orbits. During the period between 1916 and 1925, much progress was being made concerning the arrangement of electrons in the periodic table. In order to explain the Zeeman effect in the Bohr atom, Sommerfeld proposed that electrons would be based on three 'quantum numbers', n, k, and m, that described the size of the orbit, the shape of the orbit, and the direction in which the orbit was pointing. Irving Langmuir had explained in his 1919 paper regarding electrons in their shells, "Rydberg has pointed out that these numbers are obtained from the series $2(1 + 2^2 + 2^2 + 3^2 + 3^2 + 4^2 + \cdots)$. The factor two suggests a fundamental two-fold symmetry for all stable atoms." This configuration was adopted by Edmund Stoner in October 1924 in his paper 'The Distribution of Electrons Among Atomic Levels', published in the Philosophical Magazine. Wolfgang Pauli hypothesized that this required a fourth quantum number with a two-valuedness.
Electron spin in the Pauli and Dirac theories
Starting from here the charge of the electron is $-e$, with $e > 0$. The necessity of introducing half-integral spin goes back experimentally to the results of the Stern–Gerlach experiment. A beam of atoms is run through a strong non-uniform magnetic field, which then splits into parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms, the beam was split in two—the ground state therefore could not be integral, because even if the intrinsic angular momentum of the atoms were as small as possible, 1, the beam would be split into 3 parts, corresponding to atoms with $L_z = -1, 0,$ and $+1$. The conclusion is that silver atoms have net intrinsic angular momentum of $\tfrac{1}{2}$. Pauli set up a theory which explained this splitting by introducing a two-component wave function and a corresponding correction term in the Hamiltonian, representing a semi-classical coupling of this wave function to an applied magnetic field, as so:
$$H = \frac{1}{2m}\left[\boldsymbol{\sigma}\cdot\left(\mathbf{p} + e\mathbf{A}\right)\right]^2 - e\phi.$$
Here $\mathbf{A}$ is the magnetic vector potential and $\phi$ the electric potential, both representing the electromagnetic field, and $\boldsymbol{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$ are the Pauli matrices. On squaring out the first term, a residual interaction with the magnetic field is found, along with the usual classical Hamiltonian of a charged particle interacting with an applied field:
$$H = \frac{1}{2m}\left(\mathbf{p} + e\mathbf{A}\right)^2 + \frac{e\hbar}{2m}\,\boldsymbol{\sigma}\cdot\mathbf{B} - e\phi.$$
This Hamiltonian is now a 2 × 2 matrix, so the Schrödinger equation based on it must use a two-component wave function. Pauli had introduced the 2 × 2 sigma matrices as pure phenomenology — Dirac now had a theoretical argument that implied that spin was somehow the consequence of incorporating relativity into quantum mechanics. On introducing the external electromagnetic 4-potential into the Dirac equation in a similar way, known as minimal coupling, it takes the form (in natural units $\hbar = c = 1$)
$$\left[\gamma^\mu\left(i\partial_\mu + eA_\mu\right) - m\right]\psi = 0,$$
where $\gamma^\mu$ are the gamma matrices (known as Dirac matrices) and $i$ is the imaginary unit. A second application of the Dirac operator will now reproduce the Pauli term exactly as before, because the spatial Dirac matrices multiplied by $i$ have the same squaring and commutation properties as the Pauli matrices. What is more, the value of the gyromagnetic ratio of the electron, standing in front of Pauli's new term, is explained from first principles. This was a major achievement of the Dirac equation and gave physicists great faith in its overall correctness. The Pauli theory may be seen as the low energy limit of the Dirac theory in the following manner. First the equation is written in the form of coupled equations for 2-spinors with the units restored:
$$\begin{pmatrix} mc^2 - E - e\phi & c\,\boldsymbol{\sigma}\cdot\left(\mathbf{p} + e\mathbf{A}\right) \\ -c\,\boldsymbol{\sigma}\cdot\left(\mathbf{p} + e\mathbf{A}\right) & mc^2 + E + e\phi \end{pmatrix} \begin{pmatrix} \psi_+ \\ \psi_- \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
so
$$\begin{aligned} (E + e\phi)\,\psi_+ - c\,\boldsymbol{\sigma}\cdot\left(\mathbf{p} + e\mathbf{A}\right)\psi_- &= mc^2\,\psi_+ \\ -(E + e\phi)\,\psi_- + c\,\boldsymbol{\sigma}\cdot\left(\mathbf{p} + e\mathbf{A}\right)\psi_+ &= mc^2\,\psi_- \end{aligned}$$
Assuming the field is weak and the motion of the electron non-relativistic, we have the total energy of the electron approximately equal to its rest energy, and the momentum reducing to the classical value,
$$E + e\phi \approx mc^2, \qquad \mathbf{p} \approx m\mathbf{v},$$
and so the second equation may be written
$$\psi_- \approx \frac{1}{2mc}\,\boldsymbol{\sigma}\cdot\left(\mathbf{p} + e\mathbf{A}\right)\psi_+$$
which is of order $v/c$; thus at typical energies and velocities, the bottom components of the Dirac spinor in the standard representation are much suppressed in comparison to the top components. Substituting this expression into the first equation gives after some rearrangement
$$\left(E - mc^2\right)\psi_+ = \left[\frac{1}{2m}\left[\boldsymbol{\sigma}\cdot\left(\mathbf{p} + e\mathbf{A}\right)\right]^2 - e\phi\right]\psi_+$$
The operator on the left represents the particle energy reduced by its rest energy, which is just the classical energy, so we recover Pauli's theory if we identify his 2-spinor with the top components of the Dirac spinor in the non-relativistic approximation. A further approximation gives the Schrödinger equation as the limit of the Pauli theory. Thus the Schrödinger equation may be seen as the far non-relativistic approximation of the Dirac equation when one may neglect spin and work only at low energies and velocities. This also was a great triumph for the new equation, as it traced the mysterious $i$ that appears in it, and the necessity of a complex wave function, back to the geometry of space-time through the Dirac algebra. It also highlights why the Schrödinger equation, although superficially in the form of a diffusion equation, actually represents the propagation of waves.
It should be strongly emphasized that this separation of the Dirac spinor into large and small components depends explicitly on a low-energy approximation. The entire Dirac spinor represents an irreducible whole, and the components we have just neglected to arrive at the Pauli theory will bring in new phenomena in the relativistic regime – antimatter and the idea of creation and annihilation of particles.
In a general case (if a certain linear function of electromagnetic field does not vanish identically), three out of four components of the spinor function in the Dirac equation can be algebraically eliminated, yielding an equivalent fourth-order partial differential equation for just one component. Furthermore, this remaining component can be made real by a gauge transform.
Measurement
The existence of the anomalous magnetic moment of the electron has been detected experimentally by the magnetic resonance method. This allows the determination of the hyperfine splitting of electron shell energy levels in atoms of protium and deuterium using the measured resonance frequency for several transitions.
The magnetic moment of the electron has been measured using a one-electron quantum cyclotron and quantum nondemolition spectroscopy. The spin frequency of the electron is determined by the $g$-factor.
| Physical sciences | Physical constants | Physics |
1674141 | https://en.wikipedia.org/wiki/Vacancy%20defect | Vacancy defect | In crystallography, a vacancy is a type of point defect in a crystal where an atom is missing from one of the lattice sites. Crystals inherently possess imperfections, sometimes referred to as crystallographic defects.
Vacancies occur naturally in all crystalline materials. At any given temperature, up to the melting point of the material, there is an equilibrium concentration (ratio of vacant lattice sites to those containing atoms). At the melting point of some metals the ratio can be approximately 1:1000. This temperature dependence can be modelled by
$$N_v = N \exp\!\left(\frac{-Q_v}{k_B T}\right)$$
where $N_v$ is the vacancy concentration, $Q_v$ is the energy required for vacancy formation, $k_B$ is the Boltzmann constant, $T$ is the absolute temperature, and $N$ is the concentration of atomic sites, i.e.
$$N = \frac{\rho N_A}{M}$$
where $\rho$ is the density, $N_A$ the Avogadro constant, and $M$ the molar mass.
It is the simplest point defect. In this system, an atom is missing from its regular atomic site. Vacancies are formed during solidification due to vibration of atoms, local rearrangement of atoms, plastic deformation, and ion bombardment.
The creation of a vacancy can be simply modeled by considering the energy required to break the bonds between an atom inside the crystal and its nearest neighbor atoms. Once that atom is removed from the lattice site, it is put back on the surface of the crystal and some energy is retrieved because new bonds are established with other atoms on the surface. However, there is a net input of energy because there are fewer bonds between surface atoms than between atoms in the interior of the crystal.
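As a rough worked example, the Python sketch below evaluates the equilibrium expression above for copper near its melting point; the vacancy formation energy of about 0.9 eV is a typical textbook-scale figure assumed here for illustration:

```python
from math import exp

K_B = 8.617e-5   # Boltzmann constant, eV/K
N_A = 6.022e23   # Avogadro constant, 1/mol

# Assumed illustrative values for copper
rho, M = 8.96, 63.55   # density (g/cm^3) and molar mass (g/mol)
Q_v, T = 0.9, 1350.0   # formation energy (eV, assumed) and temperature (K)

N = rho * N_A / M                  # concentration of atomic sites, 1/cm^3
N_v = N * exp(-Q_v / (K_B * T))    # equilibrium vacancy concentration, 1/cm^3

print(f"N     = {N:.2e} sites/cm^3")        # ~ 8.5e22
print(f"N_v   = {N_v:.2e} vacancies/cm^3")  # ~ 3.7e19
print(f"N_v/N = {N_v / N:.1e}")             # ~ 4e-4, roughly the 1:1000 scale noted above
```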
Material physics
In most applications vacancy defects are irrelevant to the intended purpose of a material, as they are either too few or distributed in such a way that force or charge can move around the vacancy. In more constrained structures like carbon nanotubes, however, vacancies and other crystalline defects can significantly weaken the material.
| Physical sciences | Crystallography | Physics |
1674308 | https://en.wikipedia.org/wiki/Hydrogen%20halide | Hydrogen halide | In chemistry, hydrogen halides (hydrohalic acids when in the aqueous phase) are diatomic, inorganic compounds that function as Arrhenius acids. The formula is HX where X is one of the halogens: fluorine, chlorine, bromine, iodine, astatine, or tennessine. All known hydrogen halides are gases at standard temperature and pressure.
Comparison to hydrohalic acids
The hydrogen halides are diatomic molecules with no tendency to ionize in the gas phase (although liquefied hydrogen fluoride is a polar solvent somewhat similar to water). Thus, chemists distinguish hydrogen chloride from hydrochloric acid. The former is a gas at room temperature that reacts with water to give the acid. Once the acid has formed, the diatomic molecule can be regenerated only with difficulty, and not by normal distillation. Commonly the names of the acid and the molecule are not clearly distinguished, such that in lab jargon, "HCl" often means hydrochloric acid, not the gaseous hydrogen chloride.
Occurrence
Hydrogen chloride, in the form of hydrochloric acid, is a major component of gastric acid.
Hydrogen fluoride, chloride and bromide are also volcanic gases.
Synthesis
The direct reaction of hydrogen with fluorine and chlorine gives hydrogen fluoride and hydrogen chloride, respectively. Industrially these gases are, however, produced by treatment of halide salts with sulfuric acid. Hydrogen bromide arises when hydrogen and bromine are combined at high temperatures in the presence of a platinum catalyst. The least stable hydrogen halide, HI, is produced less directly, by the reaction of iodine with hydrogen sulfide or with hydrazine.
Physical properties
The hydrogen halides are colourless gases at standard conditions for temperature and pressure (STP) except for hydrogen fluoride, which boils at 19.5 °C. Alone of the hydrogen halides, hydrogen fluoride exhibits hydrogen bonding between molecules, and therefore has the highest melting and boiling points of the HX series. From HCl to HI the boiling point rises. This trend is attributed to the increasing strength of intermolecular van der Waals forces, which correlates with the number of electrons in the molecules. Concentrated hydrohalic acid solutions produce visible white fumes. This mist arises from the formation of tiny droplets of concentrated aqueous hydrohalic acid.
Reactions
Upon dissolution in water, which is highly exothermic, the hydrogen halides give the corresponding acids. With the exception of hydrofluoric acid, these are strong acids, reflecting their tendency to ionize in aqueous solution to yield hydronium ions (H3O+), with acid strength increasing down the group. Hydrofluoric acid is complicated because its strength depends on the concentration, owing to the effects of homoconjugation. As solutions in non-aqueous solvents, such as acetonitrile, however, the hydrogen halides are only modestly acidic.
Similarly, the hydrogen halides react with ammonia (and other bases), forming ammonium halides:
HX + NH3 → NH4X
In organic chemistry, the hydrohalogenation reaction is used to prepare halocarbons. For example, chloroethane is produced by hydrochlorination of ethylene:
C2H4 + HCl → CH3CH2Cl
| Physical sciences | Hydrogen compounds | Chemistry |
72903 | https://en.wikipedia.org/wiki/Canvas | Canvas | Canvas is an extremely durable plain-woven fabric used for making sails, tents, marquees, backpacks, shelters, as a support for oil painting and for other items for which sturdiness is required, as well as in such fashion objects as handbags, electronic device cases, and shoes. It is popularly used by artists as a painting surface, typically stretched across a wooden frame.
Modern canvas is usually made of cotton or linen, or sometimes polyvinyl chloride (PVC), although historically it was made from hemp. It differs from other heavy cotton fabrics, such as denim, in being plain weave rather than twill weave. Canvas comes in two basic types: plain and duck. The threads in duck canvas are more tightly woven. The term duck comes from the Dutch word for cloth, doek. In the United States, canvas is classified in two ways: by weight (ounces per square yard) and by a graded number system. The numbers run in reverse of the weight so a number 10 canvas is lighter than number 4.
The word "canvas" is derived from the 13th century Anglo-French canevaz and the Old French canevas. Both may be derivatives of the Vulgar Latin cannapaceus for "made of hemp", originating from the Greek (cannabis).
For painting
Canvas has become the most common support medium for oil painting, replacing wooden panels. It was used from the 14th century in Italy, but only rarely. One of the earliest surviving oils on canvas is a French Madonna with angels from around 1410 in the Gemäldegalerie, Berlin. Its use in Saint George and the Dragon by Paolo Uccello in about 1470, and in Sandro Botticelli's Birth of Venus in the 1480s, was still unusual for the period. Large paintings for country houses were apparently more likely to be on canvas, and are perhaps less likely to have survived. It was a good deal cheaper than a panel painting, and may sometimes indicate a painting regarded as less important. In the Uccello, the armour does not use silver leaf, as others of his paintings do (and the colour therefore remains undegraded). Another common category of paintings on lighter cloth such as linen was in distemper or glue, often used for banners to be carried in procession. This is a less durable medium, and surviving examples such as Dirk Bouts' Entombment, in distemper on linen (1450s, National Gallery), are rare, and often rather faded in appearance.
Panel painting remained more common until the 16th century in Italy and the 17th century in Northern Europe. Mantegna and Venetian artists were among those leading the change; Venetian sail canvas was readily available and regarded as the best quality.
Canvas is usually stretched across a wooden frame called a stretcher and may be coated with gesso prior to being used to prevent oil paint from coming into direct contact with the canvas fibres which would eventually cause the canvas to decay. A traditional and flexible chalk gesso is composed of lead carbonate and linseed oil, applied over a rabbit skin glue ground; a variation using titanium white pigment and calcium carbonate is rather brittle and susceptible to cracking. As lead-based paint is poisonous, care has to be taken in using it. Various alternative and more flexible canvas primers are commercially available, the most popular being a synthetic latex paint composed of titanium dioxide and calcium carbonate, bound with a thermo-plastic emulsion.
Many artists have painted onto unprimed canvas, such as Jackson Pollock, Kenneth Noland, Francis Bacon, Helen Frankenthaler, Dan Christensen, Larry Zox, Ronnie Landfield, Color Field painters, Lyrical Abstractionists and others. Staining acrylic paint into the fabric of cotton duck canvas was more benign and less damaging to the fabric of the canvas than the use of oil paint.
In 1970, artist Helen Frankenthaler commented about her use of staining:
When I first started doing the stain paintings, I left large areas of canvas unpainted, I think, because the canvas itself acted as forcefully and as positively as paint or line or color. In other words, the very ground was part of the medium, so that instead of thinking of it as background or negative space or an empty spot, that area did not need paint because it had paint next to it. The thing was to decide where to leave it and where to fill it and where to say this doesn't need another line or another pail of colors. It's saying it in space.
Early canvas was made of linen, a sturdy brownish fabric of considerable strength. Linen is particularly suitable for the use of oil paint. In the early 20th century, cotton canvas, often referred to as "cotton duck", came into use. Linen is composed of higher quality material, and remains popular with many professional artists, especially those who work with oil paint. Cotton duck, which stretches more fully and has an even, mechanical weave, offers a more economical alternative. The advent of acrylic paint has greatly increased the popularity and use of cotton duck canvas. Linen and cotton derive from two entirely different plants, the flax plant and the cotton plant, respectively.
Gessoed canvases on stretchers are also available. They are available in a variety of weights: light-weight is about 4 or 5 oz per square yard (140–170 g/m²); medium-weight is about 7 or 8 oz (240–270 g/m²); heavy-weight is about 10 or 12 oz (340–410 g/m²). They are prepared with two or three coats of gesso and are ready for use straight away. Artists desiring greater control of their painting surface may add a coat or two of their preferred gesso. Professional artists who wish to work on canvas may prepare their own canvas in the traditional manner.
One of the most outstanding differences between modern painting techniques and those of the Flemish and Dutch Masters is in the preparation of the canvas. "Modern" techniques take advantage of both the canvas texture as well as those of the paint itself. Renaissance masters took extreme measures to ensure that none of the texture of the canvas came through. This required a painstaking, months-long process of layering the raw canvas with (usually) lead-white paint, then polishing the surface, and then repeating. The final product had little resemblance to fabric, but instead had a glossy, enamel-like finish.
With a properly prepared canvas, the painter will find that each subsequent layer of color glides on in a "buttery" manner, and that with the proper consistency of application (fat over lean technique), a painting entirely devoid of brushstrokes can be achieved. A warm iron is applied over a piece of wet cotton to flatten the wrinkles.
Canvas can also be printed on using offset or specialist digital printers to create canvas prints. This process of digital inkjet printing is popularly referred to as Giclée. After printing, the canvas can be wrapped around a stretcher and displayed.
For embroidery
Canvas is a popular base fabric for embroidery such as cross-stitch and Berlin wool work. Some specific types of embroidery canvases are Aida cloth (also called Java canvas), Penelope canvas, Chess canvas, and Binca canvas. Plastic canvas is a stiffer form of Binca canvas.
As a compound agent
From the 13th century onwards, canvas was used as a covering layer on pavise shields. The canvas was applied to the wooden surface of the pavise, covered with multiple layers of gesso and often richly painted in tempera technique. Finally, the surface was sealed with a transparent varnish. While the gessoed canvas was a perfect painting surface, the primary purpose of the canvas application may have been the strengthening of the wooden shield corpus in a manner similar to modern glass-reinforced plastic.
Splined canvas, stretched canvas and canvas boards
Splined canvases differ from traditional side-stapled canvas in that canvas is attached with a spline at the rear of the frame. This allows the artist to incorporate painted edges into the artwork itself without staples at the sides, and the artwork can be displayed without a frame. Splined canvas can be restretched by adjusting the spline.
Stapled canvases stay stretched tighter over a longer period of time, but are more difficult to re-stretch when the need arises.
Canvas boards are made of canvas stretched over and glued to a cardboard backing, and sealed on the backside. The canvas is typically linen primed for a certain type of paint. They are primarily used by artists for quick studies.
Types
Dyed canvas
Fire-proof canvas
Printed canvas
Stripe canvas
Water-resistant canvas
Waterproof canvas
Waxed canvas
Rolled canvas
Mechanical properties in canvas conservation
Understanding the mechanical properties of art canvases is necessary for art conservation, especially when deciding on transporting paintings, conservation treatments and environmental specifications inside museums. Canvases are layered structures made from weaving fibers together, where each layer responds differently to changes in humidity, resulting in localized stresses that cause deformation, cracking, and delamination. There are two directions to the canvas: the warp direction (threads run vertically) and the weft direction (threads run horizontally). Researchers performed tensile testing to determine the effects of humidity on the strength of canvases and observed that increasing humidity decreased the effective elastic modulus (the combined modulus of the weft and warp directions). For example, the effective modulus at 30% relative humidity is 180 MPa, which drops to 13 MPa at 90% relative humidity, suggesting that the canvas becomes more flexible and susceptible to deformation. There is an inherent anisotropy to the elastic modulus measured in the weft and warp directions, as evidenced in the strain vs. load behavior of the canvas: the canvas exhibits a strain of 0.1 in the weft direction and 0.2 in the warp direction before failing (threads ripping apart). Though tensile testing provides an explicit measure of material strength, conservators are unable to tear a piece off a painting to create the samples (required length of 250 mm); therefore the traditional methods of assessing mechanical properties have been visual cues and pH values.
Art conservators have recently adopted new methods, zero-span strength analysis, nanoindentation, and numerical modelling, to quantitatively evaluate the mechanical properties of painting canvases. Zero-span strength analysis measures the tensile strength of materials, such as paper and yarns, by reducing the clamping distance to 0.1 mm and applying load to a particular point on the yarn. This minimizes effects from material geometry and accurately assesses intrinsic fiber strength. This also reduces the amount of material needed for samples to 60 mm. Using zero-span strength analysis, conservators measured the tensile strength of flax, a canvas material commonly used in historical paintings, and correlated tensile strength to the degree of depolymerization of cellulose, a component of flax. Another method for assessing canvas quality is nanoindentation, utilizing a millimeter-sized cantilever with a microsphere at its end and measuring local viscoelastic properties. However, with the nanoindentation method, conservators can probe only the composite behavior of the layers of paint on top of the canvas, not the actual strength of the canvas itself. Lastly, conservators are using finite element modelling (FEM) and extended FEM (XFEM) on canvases undergoing desiccation (removal of moisture) to visualize the global and local stresses.
Products
Wood-and-canvas canoes (see photo of canvas being stretched on a canoe)
Bags, including coated canvas (e.g. Goyard)
Non-disintegrating ammunition belts, which have evenly spaced pockets to allow the belt to be mechanically fed into the machine gun.
Covers and tarpaulins
Shoes (e.g. Converse, Vans, Keds)
Tents
Swags
Martial arts uniforms (e.g. Tokaido, Shureido, Judogi)
Canvas Prints
Wrestling canvas, used in WWE and other Sports Entertainment promotions.
Vests, often fishing, hunting, tactical/military
Coats
Jackets
Vehicle seat covers
| Technology | Fabrics and fibers | null |
72907 | https://en.wikipedia.org/wiki/Sundial | Sundial | A sundial is a horological device that tells the time of day (referred to as civil time in modern usage) by the apparent position of the Sun in the sky when direct sunlight shines. In the narrowest sense of the word, it consists of a flat plate (the dial) and a gnomon, which casts a shadow onto the dial. As the Sun appears to move through the sky, the shadow aligns with different hour-lines, which are marked on the dial to indicate the time of day. The style is the time-telling edge of the gnomon, though a single point or nodus may be used. The gnomon casts a broad shadow; the shadow of the style shows the time. The gnomon may be a rod, wire, or elaborately decorated metal casting. The style must be parallel to the axis of the Earth's rotation for the sundial to be accurate throughout the year. The style's angle from horizontal is equal to the sundial's geographical latitude.
The term sundial can refer to any device that uses the Sun's altitude or azimuth (or both) to show the time. Sundials are valued as decorative objects, metaphors, and objects of intrigue and mathematical study.
The passing of time can be observed by placing a stick in the sand or a nail in a board and placing markers at the edge of a shadow or outlining a shadow at intervals. It is common for inexpensive, mass-produced decorative sundials to have incorrectly aligned gnomons, shadow lengths, and hour-lines, which cannot be adjusted to tell correct time.
Introduction
There are several different types of sundials. Some sundials use a shadow or the edge of a shadow while others use a line or spot of light to indicate the time.
The shadow-casting object, known as a gnomon, may be a long thin rod or other object with a sharp tip or a straight edge. Sundials employ many types of gnomon. The gnomon may be fixed or moved according to the season. It may be oriented vertically, horizontally, aligned with the Earth's axis, or oriented in an altogether different direction determined by mathematics.
Given that sundials use light to indicate time, a line of light may be formed by allowing the Sun's rays through a thin slit or focusing them through a cylindrical lens. A spot of light may be formed by allowing the Sun's rays to pass through a small hole, window, oculus, or by reflecting them from a small circular mirror. A spot of light can be as small as a pinhole in a solargraph or as large as the oculus in the Pantheon.
Sundials also may use many types of surfaces to receive the light or shadow. Planes are the most common surface, but partial spheres, cylinders, cones and other shapes have been used for greater accuracy or beauty.
Sundials differ in their portability and their need for orientation. The installation of many dials requires knowing the local latitude, the precise vertical direction (e.g., by a level or plumb-bob), and the direction to true north. Portable dials are self-aligning: for example, a dial may have two sub-dials that operate on different principles, such as a horizontal and an analemmatic dial, mounted together on one plate. In these designs, their times agree only when the plate is aligned properly.
Sundials may indicate the local solar time only. To obtain the national clock time, three corrections are required:
The orbit of the Earth is not perfectly circular and its rotational axis is not perpendicular to its orbit. The sundial's indicated solar time thus varies from clock time by small amounts that change throughout the year. This correction—which may be as great as 16 minutes, 33 seconds—is described by the equation of time. A sophisticated sundial, with a curved style or hour lines, may incorporate this correction. The more usual simpler sundials sometimes have a small plaque that gives the offsets at various times of the year.
The solar time must be corrected for the longitude of the sundial relative to the longitude of the official time zone. For example, an uncorrected sundial located west of Greenwich, England, but within the same time zone, shows an earlier time than the official time. It may show "11:45" at official noon, and will show "noon" after the official noon. This correction can easily be made by rotating the hour-lines by a constant angle equal to the difference in longitudes, so it is commonly built into the design.
To adjust for daylight saving time, if applicable, the solar time must additionally be shifted for the official difference (usually one hour). This is also a correction that can be done on the dial, e.g. by numbering the hour-lines with two sets of numbers, or even by swapping the numbering in some designs. More often this is simply ignored, or mentioned on the plaque with the other corrections, if there is one. (All three corrections are combined in the sketch below.)
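A minimal Python sketch of the three corrections, under assumed sign conventions (longitudes positive eastward; equation of time taken as apparent minus mean solar time); the function name and arguments are illustrative only:

```python
def sundial_to_clock(solar_minutes: float, lon_deg: float,
                     zone_meridian_deg: float, eot_minutes: float,
                     dst: bool = False) -> float:
    """Convert a sundial reading (apparent solar time, minutes since midnight)
    to official clock time, in minutes since midnight.

    Assumed conventions: longitudes positive east; eot_minutes is the
    equation of time, apparent solar time minus mean solar time.
    """
    mean_local = solar_minutes - eot_minutes                      # equation-of-time correction
    zone_time = mean_local - 4.0 * (lon_deg - zone_meridian_deg)  # 4 minutes per degree
    if dst:
        zone_time += 60.0                                         # daylight saving shift
    return zone_time

# Example: a dial 5 degrees west of its zone meridian, EoT = +10 min, no DST.
# Sundial reads 12:00 (720 min); the clock should read 12:10 (730 min).
print(sundial_to_clock(720.0, lon_deg=-5.0, zone_meridian_deg=0.0,
                       eot_minutes=10.0))
```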
Apparent motion of the Sun
The principles of sundials are understood most easily from the Sun's apparent motion. The Earth rotates on its axis, and revolves in an elliptical orbit around the Sun. An excellent approximation assumes that the Sun revolves around a stationary Earth on the celestial sphere, which rotates every 24 hours about its celestial axis. The celestial axis is the line connecting the celestial poles. Since the celestial axis is aligned with the axis about which the Earth rotates, the angle of the axis with the local horizontal is the local geographical latitude.
Unlike the fixed stars, the Sun changes its position on the celestial sphere, being (in the northern hemisphere) at a positive declination in spring and summer, and at a negative declination in autumn and winter, and having exactly zero declination (i.e., being on the celestial equator) at the equinoxes. The Sun's celestial longitude also varies, changing by one complete revolution per year. The path of the Sun on the celestial sphere is called the ecliptic. The ecliptic passes through the twelve constellations of the zodiac in the course of a year.
This model of the Sun's motion helps to understand sundials. If the shadow-casting gnomon is aligned with the celestial poles, its shadow will revolve at a constant rate, and this rotation will not change with the seasons. This is the most common design. In such cases, the same hour lines may be used throughout the year. The hour-lines will be spaced uniformly if the surface receiving the shadow is either perpendicular (as in the equatorial sundial) or circular about the gnomon (as in the armillary sphere).
In other cases, the hour-lines are not spaced evenly, even though the shadow rotates uniformly. If the gnomon is not aligned with the celestial poles, even its shadow will not rotate uniformly, and the hour lines must be corrected accordingly. The rays of light that graze the tip of a gnomon, or which pass through a small hole, or reflect from a small mirror, trace out a cone aligned with the celestial poles. The corresponding light-spot or shadow-tip, if it falls onto a flat surface, will trace out a conic section, such as a hyperbola, ellipse or (at the North or South Poles) a circle.
This conic section is the intersection of the cone of light rays with the flat surface. This cone and its conic section change with the seasons, as the Sun's declination changes; hence, sundials that follow the motion of such light-spots or shadow-tips often have different hour-lines for different times of the year. This is seen in shepherd's dials, sundial rings, and vertical gnomons such as obelisks. Alternatively, sundials may change the angle or position (or both) of the gnomon relative to the hour lines, as in the analemmatic dial or the Lambert dial.
History
The earliest sundials known from the archaeological record are shadow clocks (c. 1500 BC) from ancient Egyptian astronomy and Babylonian astronomy. Presumably, humans were telling time from shadow-lengths at an even earlier date, but this is hard to verify. In roughly 700 BC, the Old Testament describes a sundial, the "dial of Ahaz", in Isaiah 38:8 and 2 Kings 20:11. By 240 BC, Eratosthenes had estimated the circumference of the world using an obelisk and a water well, and a few centuries later, Ptolemy had charted the latitude of cities using the angle of the sun. The people of Kush created sundials through geometry. The Roman writer Vitruvius lists dials and shadow clocks known at that time in his De architectura. The Tower of the Winds in Athens included both a sundial and a water clock for telling time. A canonical sundial is one that indicates the canonical hours of liturgical acts, and these were used from the 7th to the 14th centuries by religious orders. The Italian astronomer Giovanni Padovani published a treatise on the sundial in 1570, in which he included instructions for the manufacture and laying out of mural (vertical) and horizontal sundials. Giuseppe Biancani's Constructio instrumenti ad horologia solaria (c. 1620) discusses how to make a perfect sundial. Sundials have been in common use since the 16th century.
Functioning
In general, sundials indicate the time by casting a shadow or throwing light onto a surface known as a dial face or dial plate. Although usually a flat plane, the dial face may also be the inner or outer surface of a sphere, cylinder, cone, helix, and various other shapes.
The time is indicated where a shadow or light falls on the dial face, which is usually inscribed with hour lines. Although usually straight, these hour lines may also be curved, depending on the design of the sundial (see below). In some designs, it is possible to determine the date of the year, or it may be required to know the date to find the correct time. In such cases, there may be multiple sets of hour lines for different months, or there may be mechanisms for setting/calculating the month. In addition to the hour lines, the dial face may offer other data—such as the horizon, the equator and the tropics—which are referred to collectively as the dial furniture.
The entire object that casts a shadow or light onto the dial face is known as the sundial's gnomon. However, it is usually only an edge of the gnomon (or another linear feature) that casts the shadow used to determine the time; this linear feature is known as the sundial's style. The style is usually aligned parallel to the axis of the celestial sphere, and therefore is aligned with the local geographical meridian. In some sundial designs, only a point-like feature, such as the tip of the style, is used to determine the time and date; this point-like feature is known as the sundial's nodus.
Some sundials use both a style and a nodus to determine the time and date.
The gnomon is usually fixed relative to the dial face, but not always; in some designs such as the analemmatic sundial, the style is moved according to the month. If the style is fixed, the line on the dial plate perpendicularly beneath the style is called the substyle, meaning "below the style". The angle the style makes with the plane of the dial plate is called the substyle height, an unusual use of the word height to mean an angle. On many wall dials, the substyle is not the same as the noon line (see below). The angle on the dial plate between the noon line and the substyle is called the substyle distance, an unusual use of the word distance to mean an angle.
By tradition, many sundials have a motto. The motto is usually in the form of an epigram: sometimes sombre reflections on the passing of time and the brevity of life, but equally often humorous witticisms of the dial maker. One such quip is, I am a sundial, and I make a botch, Of what is done much better by a watch.
A dial is said to be equiangular if its hour-lines are straight and spaced equally. Most equiangular sundials have a fixed gnomon style aligned with the Earth's rotational axis, as well as a shadow-receiving surface that is symmetrical about that axis; examples include the equatorial dial, the equatorial bow, the armillary sphere, the cylindrical dial and the conical dial. However, other designs are equiangular, such as the Lambert dial, a version of the analemmatic sundial with a moveable style.
In the Southern Hemisphere
A sundial at a particular latitude in one hemisphere must be reversed for use at the opposite latitude in the other hemisphere. A vertical direct south sundial in the Northern Hemisphere becomes a vertical direct north sundial in the Southern Hemisphere. To position a horizontal sundial correctly, one has to find true north or south. The same process can be used to do both. The gnomon, set to the correct latitude, has to point to the true south in the Southern Hemisphere as in the Northern Hemisphere it has to point to the true north. The hour numbers also run in opposite directions, so on a horizontal dial they run anticlockwise (US: counterclockwise) rather than clockwise.
Sundials which are designed to be used with their plates horizontal in one hemisphere can be used with their plates vertical at the complementary latitude in the other hemisphere. For example, the illustrated sundial in Perth, Australia, which is at latitude 32° South, would function properly if it were mounted on a south-facing vertical wall at latitude 58° (i.e. 90° − 32°) North, which is slightly further north than Perth, Scotland. The surface of the wall in Scotland would be parallel with the horizontal ground in Australia (ignoring the difference of longitude), so the sundial would work identically on both surfaces. Correspondingly, the hour marks, which run counterclockwise on a horizontal sundial in the southern hemisphere, also do so on a vertical sundial in the northern hemisphere. On horizontal northern-hemisphere sundials, and on vertical southern-hemisphere ones, the hour marks run clockwise.
Adjustments to calculate clock time from a sundial reading
The most common reason for a sundial to differ greatly from clock time is that the sundial has not been oriented correctly or its hour lines have not been drawn correctly. For example, most commercial sundials are designed as horizontal sundials as described above. To be accurate, such a sundial must have been designed for the local geographical latitude and its style must be parallel to the Earth's rotational axis; the style must be aligned with true north and its height (its angle with the horizontal) must equal the local latitude. To adjust the style height, the sundial can often be tilted slightly "up" or "down" while maintaining the style's north-south alignment.
Summer (daylight saving) time correction
Some areas of the world practice daylight saving time, which changes the official time, usually by one hour. This shift must be added to the sundial's time to make it agree with the official time.
Time-zone (longitude) correction
A standard time zone covers roughly 15° of longitude, so any point within that zone which is not on the reference longitude (generally a multiple of 15°) will experience a difference from standard time that is equal to 4 minutes of time per degree. For illustration, sunsets and sunrises are at a much later "official" time at the western edge of a time-zone, compared to sunrise and sunset times at the eastern edge. If a sundial is located at, say, a longitude 5° west of the reference longitude, then its time will read 20 minutes slow, since the Sun appears to revolve around the Earth at 15° per hour. This is a constant correction throughout the year. For equiangular dials such as equatorial, spherical or Lambert dials, this correction can be made by rotating the dial surface by an angle equaling the difference in longitude, without changing the gnomon position or orientation. However, this method does not work for other dials, such as a horizontal dial; the correction must be applied by the viewer.
However, for political and practical reasons, time-zone boundaries have been skewed. At their most extreme, time zones can cause official noon, including daylight saving, to occur up to three hours early (in which case the Sun is actually on the meridian at an official clock time of 3 p.m.). This occurs in the far west of Alaska, China, and Spain. For more details and examples, see time zones.
Equation of time correction
Although the Sun appears to rotate uniformly about the Earth, in reality this motion is not perfectly uniform. This is due to the eccentricity of the Earth's orbit (the fact that the Earth's orbit about the Sun is not perfectly circular, but slightly elliptical) and the tilt (obliquity) of the Earth's rotational axis relative to the plane of its orbit. Therefore, sundial time varies from standard clock time. On four days of the year, the correction is effectively zero. However, on others, it can be as much as a quarter-hour early or late. The amount of correction is described by the equation of time. This correction is equal worldwide: it does not depend on the local latitude or longitude of the observer's position. It does, however, change over long periods of time (centuries or more) because of slow variations in the Earth's orbital and rotational motions. Therefore, tables and graphs of the equation of time that were made centuries ago are now significantly incorrect. The reading of an old sundial should be corrected by applying the present-day equation of time, not one from the period when the dial was made.
In some sundials, the equation of time correction is provided as an informational plaque affixed to the sundial, for the observer to calculate. In more sophisticated sundials the equation can be incorporated automatically. For example, some equatorial bow sundials are supplied with a small wheel that sets the time of year; this wheel in turn rotates the equatorial bow, offsetting its time measurement. In other cases, the hour lines may be curved, or the equatorial bow may be shaped like a vase, which exploits the changing altitude of the sun over the year to effect the proper offset in time.
A heliochronometer is a precision sundial, first devised in about 1763 by Philipp Hahn and improved by Abbé Guyoux in about 1827. It corrects apparent solar time to mean solar time or another standard time. Heliochronometers usually indicate time to within 1 minute of Universal Time.
The Sunquest sundial, designed by Richard L. Schmoyer in the 1950s, uses an analemmic-inspired gnomon to cast a shaft of light onto an equatorial time-scale crescent. Sunquest is adjustable for latitude and longitude, automatically correcting for the equation of time, rendering it "as accurate as most pocket watches".
Similarly, in place of the shadow of a gnomon the sundial at Miguel Hernández University uses the solar projection of a graph of the equation of time intersecting a time scale to display clock time directly.
An analemma may be added to many types of sundials to correct apparent solar time to mean solar time or another standard time. These usually have hour lines shaped like "figure eights" (analemmas) according to the equation of time. This compensates for the slight eccentricity in the Earth's orbit and the tilt of the Earth's axis that causes up to a 15 minute variation from mean solar time. This is a type of dial furniture seen on more complicated horizontal and vertical dials.
Prior to the invention of accurate clocks, in the mid 17th century, sundials were the only timepieces in common use, and were considered to tell the "right" time. The equation of time was not used. After the invention of good clocks, sundials were still considered to be correct, and clocks usually incorrect. The equation of time was used in the opposite direction from today, to apply a correction to the time shown by a clock to make it agree with sundial time. Some elaborate "equation clocks", such as one made by Joseph Williamson in 1720, incorporated mechanisms to do this correction automatically. (Williamson's clock may have been the first-ever device to use a differential gear.) Only after about 1800 was uncorrected clock time considered to be "right", and sundial time usually "wrong", so the equation of time became used as it is today.
With fixed axial gnomon
The most commonly observed sundials are those in which the shadow-casting style is fixed in position and aligned with the Earth's rotational axis, being oriented with true north and south, and making an angle with the horizontal equal to the geographical latitude. This axis is aligned with the celestial poles, which is closely, but not perfectly, aligned with the pole star Polaris. For illustration, the celestial axis points vertically at the true North Pole, whereas it points horizontally on the equator. The world's largest axial gnomon sundial is the mast of the Sundial Bridge at Turtle Bay in Redding, California. A formerly world's-largest gnomon is at Jaipur, raised 26°55′ above horizontal, reflecting the local latitude.
On any given day, the Sun appears to rotate uniformly about this axis, at about 15° per hour, making a full circuit (360°) in 24 hours. A linear gnomon aligned with this axis will cast a sheet of shadow (a half-plane) that, falling opposite to the Sun, likewise rotates about the celestial axis at 15° per hour. The shadow is seen by falling on a receiving surface that is usually flat, but which may be spherical, cylindrical, conical or of other shapes. If the shadow falls on a surface that is symmetrical about the celestial axis (as in an armillary sphere, or an equatorial dial), the surface-shadow likewise moves uniformly; the hour-lines on the sundial are equally spaced. However, if the receiving surface is not symmetrical (as in most horizontal sundials), the surface shadow generally moves non-uniformly and the hour-lines are not equally spaced; one exception is the Lambert dial described below.
Some types of sundials are designed with a fixed gnomon that is not aligned with the celestial poles, such as a vertical obelisk. Such sundials are covered below under the section "Nodus-based sundials".
Empirical hour-line marking
The formulas shown in the paragraphs below allow the positions of the hour-lines to be calculated for various types of sundial. In some cases, the calculations are simple; in others they are extremely complicated. There is an alternative, simple method of finding the positions of the hour-lines which can be used for many types of sundial, and saves a lot of work in cases where the calculations are complex. This is an empirical procedure in which the position of the shadow of the gnomon of a real sundial is marked at hourly intervals. The equation of time must be taken into account to ensure that the positions of the hour-lines are independent of the time of year when they are marked. An easy way to do this is to set a clock or watch so it shows "sundial time", which is standard time plus the equation of time on the day in question.
The hour-lines on the sundial are marked to show the positions of the shadow of the style when this clock shows whole numbers of hours, and are labelled with these numbers of hours. For example, when the clock reads 5:00, the shadow of the style is marked, and labelled "5" (or "V" in Roman numerals). If the hour-lines are not all marked in a single day, the clock must be adjusted every day or two to take account of the variation of the equation of time.
Equatorial sundials
The distinguishing characteristic of the equatorial dial (also called the equinoctial dial) is the planar surface that receives the shadow, which is exactly perpendicular to the gnomon's style. This plane is called equatorial, because it is parallel to the equator of the Earth and of the celestial sphere. If the gnomon is fixed and aligned with the Earth's rotational axis, the sun's apparent rotation about the Earth casts a uniformly rotating sheet of shadow from the gnomon; this produces a uniformly rotating line of shadow on the equatorial plane. Since the Earth rotates 360° in 24 hours, the hour-lines on an equatorial dial are all spaced 15° apart (360/24).
The uniformity of their spacing makes this type of sundial easy to construct. If the dial plate material is opaque, both sides of the equatorial dial must be marked, since the shadow will be cast from below in winter and from above in summer. With translucent dial plates (e.g. glass) the hour angles need only be marked on the sun-facing side, although the hour numberings (if used) need be made on both sides of the dial, owing to the differing hour schema on the sun-facing and sun-backing sides.
Another major advantage of this dial is that equation of time (EoT) and daylight saving time (DST) corrections can be made by simply rotating the dial plate by the appropriate angle each day. This is because the hour angles are equally spaced around the dial. For this reason, an equatorial dial is often a useful choice when the dial is for public display and it is desirable to have it show the true local time to reasonable accuracy. The EoT correction is made via the relation
$$\text{Correction}^\circ = \frac{\text{EoT (minutes)} + 60 \times \text{DST (hours)}}{4}.$$
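A minimal sketch of this daily adjustment (the function name is illustrative); the Sun moves 15° per hour, i.e. 1° per 4 minutes, so the combined offset in minutes converts to degrees by dividing by 4:

```python
def dial_rotation_degrees(eot_minutes: float, dst_hours: float = 0.0) -> float:
    """Angle (degrees) to rotate an equatorial dial plate so it reads clock time:
    (EoT in minutes + 60 * DST in hours) / 4."""
    return (eot_minutes + 60.0 * dst_hours) / 4.0

# Example: EoT = -14 min (roughly mid-February), with a one-hour DST shift.
print(dial_rotation_degrees(-14.0, dst_hours=1.0))  # 11.5 degrees
```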
Near the equinoxes in spring and autumn, the sun moves on a circle that is nearly the same as the equatorial plane; hence, no clear shadow is produced on the equatorial dial at those times of year, a drawback of the design.
A nodus is sometimes added to equatorial sundials, which allows the sundial to tell the time of year. On any given day, the shadow of the nodus moves on a circle on the equatorial plane, and the radius of the circle measures the declination of the sun. The ends of the gnomon bar may be used as the nodus, or some feature along its length. An ancient variant of the equatorial sundial has only a nodus (no style) and the concentric circular hour-lines are arranged to resemble a spider-web.
Horizontal sundials
In the horizontal sundial (also called a garden sundial), the plane that receives the shadow is aligned horizontally, rather than being perpendicular to the style as in the equatorial dial. Hence, the line of shadow does not rotate uniformly on the dial face; rather, the hour lines are spaced according to the rule
$$\tan\theta = \sin\lambda \cdot \tan(15^\circ \times t)$$
or, in other terms:
$$\theta = \tan^{-1}\!\left[\sin\lambda \cdot \tan(15^\circ \times t)\right]$$
where λ is the sundial's geographical latitude (and the angle the gnomon makes with the dial plate), θ is the angle between a given hour-line and the noon hour-line (which always points towards true north) on the plane, and t is the number of hours before or after noon. For example, the angle θ of the 3 p.m. hour-line would equal the arctangent of sin λ, since tan 45° = 1. When λ = 90° (at the North Pole), the horizontal sundial becomes an equatorial sundial; the style points straight up (vertically), and the horizontal plane is aligned with the equatorial plane; the hour-line formula becomes θ = 15° × t, as for an equatorial dial. A horizontal sundial at the Earth's equator, where λ = 0°, would require a (raised) horizontal style and would be an example of a polar sundial (see below).
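The following short Python sketch tabulates these hour-line angles; the same function covers the vertical dial of a later section if sin λ is replaced by cos λ:

```python
from math import atan, degrees, radians, sin, tan

def horizontal_hour_line_angle(latitude_deg: float, hours_from_noon: float) -> float:
    """Angle (degrees) between an hour-line and the noon line on a horizontal
    dial: theta = arctan(sin(latitude) * tan(15 deg * t))."""
    hour_angle = radians(15.0 * hours_from_noon)
    return degrees(atan(sin(radians(latitude_deg)) * tan(hour_angle)))

# Example: hour-line angles for latitude 52 N, noon through 5 p.m.
for t in range(0, 6):
    print(f"{t} h after noon: {horizontal_hour_line_angle(52.0, t):6.2f} deg")
```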
The chief advantages of the horizontal sundial are that it is easy to read, and the sunlight lights the face throughout the year. All the hour-lines intersect at the point where the gnomon's style crosses the horizontal plane. Since the style is aligned with the Earth's rotational axis, the style points true north and its angle with the horizontal equals the sundial's geographical latitude . A sundial designed for one latitude can be adjusted for use at another latitude by tilting its base upwards or downwards by an angle equal to the difference in latitude. For example, a sundial designed for a latitude of 40° can be used at a latitude of 45°, if the sundial plane is tilted upwards by 5°, thus aligning the style with the Earth's rotational axis.
Many ornamental sundials are designed to be used at 45 degrees north. Some mass-produced garden sundials fail to correctly calculate the hourlines and so can never be corrected. A local standard time zone is nominally 15 degrees wide, but may be modified to follow geographic or political boundaries. A sundial can be rotated around its style (which must remain pointed at the celestial pole) to adjust to the local time zone. In most cases, a rotation in the range of 7.5° east to 23° west suffices. This will introduce error in sundials that do not have equal hour angles. To correct for daylight saving time, a face needs two sets of numerals or a correction table. An informal standard is to have numerals in hot colors for summer, and in cool colors for winter. Since the hour angles are not evenly spaced, the equation of time corrections cannot be made via rotating the dial plate about the gnomon axis. These types of dials usually have an equation of time correction tabulation engraved on their pedestals or close by. Horizontal dials are commonly seen in gardens, churchyards and in public areas.
Vertical sundials
In the common vertical dial, the shadow-receiving plane is aligned vertically; as usual, the gnomon's style is aligned with the Earth's axis of rotation. As in the horizontal dial, the line of shadow does not move uniformly on the face; the sundial is not equiangular. If the face of the vertical dial points directly south, the angle of the hour-lines is instead described by the formula:
$$\tan\theta = \cos\lambda \cdot \tan(15^\circ \times t)$$
where λ is the sundial's geographical latitude, θ is the angle between a given hour-line and the noon hour-line (which always points due north) on the plane, and t is the number of hours before or after noon. For example, the angle θ of the 3 p.m. hour-line would equal the arctangent of cos λ, since tan 45° = 1. The shadow moves counter-clockwise on a south-facing vertical dial, whereas it runs clockwise on horizontal and equatorial north-facing dials.
Dials with faces perpendicular to the ground and which face directly south, north, east, or west are called vertical direct dials. It is widely believed, and stated in respectable publications, that a vertical dial cannot receive more than twelve hours of sunlight a day, no matter how many hours of daylight there are. However, there is an exception. Vertical sundials in the tropics which face the nearer pole (e.g. north facing in the zone between the Equator and the Tropic of Cancer) can actually receive sunlight for more than 12 hours from sunrise to sunset for a short period around the time of the summer solstice. For example, at latitude 20° North, on June 21, the sun shines on a north-facing vertical wall for 13 hours, 21 minutes. Vertical sundials which do not face directly south (in the northern hemisphere) may receive significantly less than twelve hours of sunlight per day, depending on the direction they do face, and on the time of year. For example, a vertical dial that faces due East can tell time only in the morning hours; in the afternoon, the sun does not shine on its face. Vertical dials that face due East or West are polar dials, which will be described below. Vertical dials that face north are uncommon, because they tell time only during the spring and summer, and do not show the midday hours except in tropical latitudes (and even there, only around midsummer). For non-direct vertical dials – those that face in non-cardinal directions – the mathematics of arranging the style and the hour-lines becomes more complicated; it may be easier to mark the hour lines by observation, but the placement of the style, at least, must be calculated first; such dials are said to be declining dials.
Vertical dials are commonly mounted on the walls of buildings, such as town-halls, cupolas and church-towers, where they are easy to see from far away. In some cases, vertical dials are placed on all four sides of a rectangular tower, providing the time throughout the day. The face may be painted on the wall, or displayed in inlaid stone; the gnomon is often a single metal bar, or a tripod of metal bars for rigidity. If the wall of the building faces toward the south, but does not face due south, the gnomon will not lie along the noon line, and the hour lines must be corrected. Since the gnomon's style must be parallel to the Earth's axis, it always "points" true north and its angle with the horizontal will equal the sundial's geographical latitude; on a direct south dial, its angle with the vertical face of the dial will equal the colatitude, or 90° minus the latitude.
Polar dials
In polar dials, the shadow-receiving plane is aligned parallel to the gnomon-style.
Thus, the shadow slides sideways over the surface, moving perpendicularly to itself as the Sun rotates about the style. As with the gnomon, the hour-lines are all aligned with the Earth's rotational axis. When the Sun's rays are nearly parallel to the plane, the shadow moves very quickly and the hour lines are spaced far apart. The direct east- and west-facing dials are examples of polar dials. However, the face of a polar dial need not be vertical; it need only be parallel to the gnomon. Thus, a plane inclined at the angle of latitude (relative to horizontal) under the similarly inclined gnomon will be a polar dial. The perpendicular spacing X of the hour-lines in the plane is described by the formula

X = H × tan(15° × t)

where H is the height of the style above the plane, and t is the time (in hours) before or after the center-time for the polar dial. The center time is the time when the style's shadow falls directly down on the plane; for an east-facing dial, the center time will be 6 a.m., for a west-facing dial, this will be 6 p.m., and for the inclined dial described above, it will be noon. When t approaches ±6 hours away from the center time, the spacing X diverges to +∞; this occurs when the Sun's rays become parallel to the plane.
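A minimal sketch of this spacing rule (illustrative names):

```python
import math

def polar_line_offset(style_height, hours_from_center):
    """Perpendicular distance X = H * tan(15 t) of an hour-line from the
    center line on a polar dial; diverges as t approaches +/- 6 hours."""
    return style_height * math.tan(math.radians(15.0 * hours_from_center))

# Offsets (in the same units as the style height) for a style 10 cm
# above the plane; the lines crowd near the center and spread rapidly.
for t in range(0, 6):
    print(t, round(polar_line_offset(10.0, t), 1))
```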
Vertical declining dials
A declining dial is any non-horizontal, planar dial that does not face in a cardinal direction, such as (true) north, south, east or west. As usual, the gnomon's style is aligned with the Earth's rotational axis, but the hour-lines are not symmetrical about the noon hour-line. For a vertical dial, the angle θ between the noon hour-line and another hour-line is given by the formula below. Note that θ is defined positive in the clockwise sense with respect to the upper vertical hour angle, and that its conversion to the equivalent solar hour requires careful consideration of which quadrant of the sundial it belongs in.

tan θ = cos λ / (cos η × cot(15° × t) − s_o × sin η × sin λ)

where λ is the sundial's geographical latitude; t is the time before or after noon; η is the angle of declination from true south, defined as positive when east of south; and s_o is a switch integer for the dial orientation. A partly south-facing dial has an s_o value of +1; those partly north-facing, a value of −1. When such a dial faces south (η = 0°), this formula reduces to the formula given above for vertical south-facing dials, i.e. tan θ = cos λ × tan(15° × t).
When a sundial is not aligned with a cardinal direction, the substyle of its gnomon is not aligned with the noon hour-line. The angle β between the substyle and the noon hour-line is given by the formula

tan β = sin η / tan λ
If a vertical sundial faces true south or north (η = 0° or η = 180° respectively), the angle β = 0° and the substyle is aligned with the noon hour-line.
The height of the gnomon, that is the angle B the style makes to the plate, is given by

sin B = cos λ × cos η
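The three relations above can be collected into one sketch (standard dialling formulas as reconstructed here; names are illustrative, and t must be non-zero since the noon line itself is the reference):

```python
import math

def vertical_declining_dial(lat_deg, decl_deg, hours_from_noon, south_facing=True):
    """Hour-line angle theta, substyle angle beta and style height B (all in
    degrees) for a vertical declining dial, using
        tan(theta) = cos(lat) / (cos(decl) * cot(15 t) - s * sin(decl) * sin(lat))
        tan(beta)  = sin(decl) / tan(lat)
        sin(B)     = cos(lat) * cos(decl)
    """
    lat, decl = math.radians(lat_deg), math.radians(decl_deg)
    s = 1 if south_facing else -1
    ha = math.radians(15.0 * hours_from_noon)  # t = 0 would divide by zero below
    denom = math.cos(decl) / math.tan(ha) - s * math.sin(decl) * math.sin(lat)
    theta = math.atan2(math.cos(lat), denom)   # atan2 keeps the correct quadrant
    beta = math.atan(math.sin(decl) / math.tan(lat))
    height = math.asin(math.cos(lat) * math.cos(decl))
    return tuple(round(math.degrees(x), 1) for x in (theta, beta, height))

# Wall at 52 degrees N declining 10 degrees east of south; 2 p.m. hour-line:
print(vertical_declining_dial(52.0, 10.0, 2))
```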
Reclining dials
The sundials described above have gnomons that are aligned with the Earth's rotational axis and cast their shadow onto a plane. If the plane is neither vertical nor horizontal nor equatorial, the sundial is said to be reclining or inclining. Such a sundial might be located on a south-facing roof, for example. The hour-lines for such a sundial can be calculated by slightly correcting the horizontal formula above:

tan θ = cos(λ + χ) × tan(15° × t)

where χ is the desired angle of reclining relative to the local vertical, λ is the sundial's geographical latitude, θ is the angle between a given hour-line and the noon hour-line on the plane, and t is the number of hours before or after noon. For example, the angle of the 3 p.m. hour-line would equal the arctangent of cos(λ + χ), since tan 45° = 1. When χ = 0° (in other words, a south-facing vertical dial), we obtain the vertical dial formula above.
Some authors use a more specific nomenclature to describe the orientation of the shadow-receiving plane. If the plane's face points downwards towards the ground, it is said to be proclining or inclining, whereas a dial is said to be reclining when the dial face is pointing away from the ground. Many authors also often refer to reclined, proclined and inclined sundials in general as inclined sundials. It is also common in the latter case to measure the angle of inclination relative to the horizontal plane on the sun side of the dial.
In such texts, since ι = 90° + χ, the hour angle formula will often be seen written as

tan θ = sin(λ + ι) × tan(15° × t)
The angle B between the gnomon style and the dial plate in this type of sundial is

B = 90° − (λ + χ)

or, equivalently,

B = 180° − (λ + ι)
Declining-reclining dials/ Declining-inclining dials
Some sundials both decline and recline, in that their shadow-receiving plane is not oriented with a cardinal direction (such as true north or true south) and is neither horizontal nor vertical nor equatorial. For example, such a sundial might be found on a roof that was not oriented in a cardinal direction.
The formulae describing the spacing of the hour-lines on such dials are rather more complicated than those for simpler dials.
There are various solution approaches, including some using the methods of rotation matrices, and some making a 3D model of the reclined-declined plane and its vertical declined counterpart plane, extracting the geometrical relationships between the hour angle components on both these planes and then reducing the trigonometric algebra.
One system of formulas for reclining-declining sundials, as stated by Fennewick, follows.
The angle θ between the noon hour-line and another hour-line is given by the formula below. Note that θ advances counterclockwise with respect to the zero hour angle for those dials that are partly south-facing and clockwise for those that are north-facing.
within the parameter ranges : and
Or, if one prefers to use the inclination angle ι rather than the reclination χ, where ι = 90° + χ:
within the parameter ranges : and
Here λ is the sundial's geographical latitude; s_o is the orientation switch integer; t is the time in hours before or after noon; and χ and η are the angles of reclination and declination, respectively.
Note that χ is measured with reference to the vertical. It is positive when the dial leans back towards the horizon behind the dial and negative when the dial leans forward to the horizon on the Sun's side. The declination angle η is defined as positive when moving east of true south.
Dials facing fully or partly south have s_o = +1, while those partly or fully north-facing have s_o = −1.
Since the above expression gives the hour angle as an arctangent function, due consideration must be given to which quadrant of the sundial each hour belongs to before assigning the correct hour angle.
Unlike the simpler vertical declining sundial, this type of dial does not always show hour angles on its sunside face for all declinations between east and west. When a northern hemisphere partly south-facing dial reclines back (i.e. away from the Sun) from the vertical, the gnomon will become co-planar with the dial plate at declinations less than due east or due west. Likewise for southern hemisphere dials that are partly north-facing.
Were these dials reclining forward, the range of declination would actually exceed due east and due west.
In a similar way, northern hemisphere dials that are partly north-facing and southern hemisphere dials that are south-facing, and which lean forward toward their upward pointing gnomons, will have a similar restriction on the range of declination that is possible for a given reclination value.
The critical declination is a geometrical constraint which depends on the value of both the dial's reclination and its latitude :
As with the vertical declined dial, the gnomon's substyle is not aligned with the noon hour-line. The general formula for the angle β between the substyle and the noon-line is given by:
The angle B between the style and the plate is given by:
Note that for B = 0°, i.e. when the gnomon is coplanar with the dial plate, the declination η equals the critical declination value defined above.
Empirical method
Because of the complexity of the above calculations, using them for the practical purpose of designing a dial of this type is difficult and prone to error. It has been suggested that it is better to locate the hour lines empirically, marking the positions of the shadow of a style on a real sundial at hourly intervals as shown by a clock and adding/deducting that day's equation of time adjustment. See Empirical hour-line marking, above.
Spherical sundials
The surface receiving the shadow need not be a plane, but can have any shape, provided that the sundial maker is willing to mark the hour-lines. If the style is aligned with the Earth's rotational axis, a spherical shape is convenient since the hour-lines are equally spaced, as they are on the equatorial dial shown here; the sundial is equiangular. This is the principle behind the armillary sphere and the equatorial bow sundial. However, some equiangular sundials – such as the Lambert dial described below – are based on other principles.
In the equatorial bow sundial, the gnomon is a bar, slot or stretched wire parallel to the celestial axis. The face is a semicircle, corresponding to the equator of the sphere, with markings on the inner surface. This pattern, built a couple of meters wide out of temperature-invariant steel invar, was used to keep the trains running on time in France before World War I.
Among the most precise sundials ever made are two equatorial bows constructed of marble found in Yantra mandir. This collection of sundials and other astronomical instruments was built by Maharaja Jai Singh II at his then-new capital of Jaipur, India between 1727 and 1733. The larger equatorial bow is called the Samrat Yantra (The Supreme Instrument); standing at 27 meters, its shadow moves visibly at 1 mm per second, or roughly a hand's breadth (6 cm) every minute.
Cylindrical, conical, and other non-planar sundials
Other non-planar surfaces may be used to receive the shadow of the gnomon.
As an elegant alternative, the style (which could be created by a hole or slit in the circumference) may be located on the circumference of a cylinder or sphere, rather than at its central axis of symmetry.
In that case, the hour lines are again spaced equally, but at twice the usual angle, due to the geometrical inscribed angle theorem. This is the basis of some modern sundials, but it was also used in ancient times;
In another variation of the polar-axis-aligned cylindrical, a cylindrical dial could be rendered as a helical ribbon-like surface, with a thin gnomon located either along its center or at its periphery.
Movable-gnomon sundials
Sundials can be designed with a gnomon that is placed in a different position each day throughout the year. In other words, the position of the gnomon relative to the centre of the hour lines varies. The gnomon need not be aligned with the celestial poles and may even be perfectly vertical (the analemmatic dial). These dials, when combined with fixed-gnomon sundials, allow the user to determine true north with no other aid; the two sundials are correctly aligned if and only if they both show the same time.
Universal equinoctial ring dial
A universal equinoctial ring dial (sometimes called a ring dial for brevity, although the term is ambiguous), is a portable version of an armillary sundial, or was inspired by the mariner's astrolabe. It was likely invented by William Oughtred around 1600 and became common throughout Europe.
In its simplest form, the style is a thin slit that allows the Sun's rays to fall on the hour-lines of an equatorial ring. As usual, the style is aligned with the Earth's axis; to do this, the user may orient the dial towards true north and suspend the ring dial vertically from the appropriate point on the meridian ring. Such dials may be made self-aligning with the addition of a more complicated central bar, instead of a simple slit-style. These bars are sometimes an addition to a set of Gemma's rings. This bar could pivot about its end points and held a perforated slider that was positioned to the month and day according to a scale scribed on the bar. The time was determined by rotating the bar towards the Sun so that the light shining through the hole fell on the equatorial ring. This forced the user to rotate the instrument, which had the effect of aligning the instrument's vertical ring with the meridian.
When not in use, the equatorial and meridian rings can be folded together into a small disk.
In 1610, Edward Wright created the sea ring, which mounted a universal ring dial over a magnetic compass. This permitted mariners to determine the time and magnetic variation in a single step.
Analemmatic sundials
Analemmatic sundials are a type of horizontal sundial that has a vertical gnomon and hour markers positioned in an elliptical pattern. There are no hour lines on the dial and the time of day is read on the ellipse. The gnomon is not fixed and must change position daily to accurately indicate time of day.
Analemmatic sundials are sometimes designed with a human as the gnomon. Human gnomon analemmatic sundials are not practical at lower latitudes where a human shadow is quite short during the summer months. A 66 inch tall person casts a 4 inch shadow at 27° latitude on the summer solstice.
Foster-Lambert dials
The Foster-Lambert dial is another movable-gnomon sundial. In contrast to the elliptical analemmatic dial, the Lambert dial is circular with evenly spaced hour lines, making it an equiangular sundial, similar to the equatorial, spherical, cylindrical and conical dials described above. The gnomon of a Foster-Lambert dial is neither vertical nor aligned with the Earth's rotational axis; rather, it is tilted northwards by an angle α = 45° − (Φ/2), where Φ is the geographical latitude. Thus, a Foster-Lambert dial located at latitude 40° would have a gnomon tilted away from vertical by 25° in a northerly direction. To read the correct time, the gnomon must also be moved northwards by a distance

Y = R × tan α × tan δ

where R is the radius of the Foster-Lambert dial and δ again indicates the Sun's declination for that time of year.
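A minimal sketch of these two adjustments (the offset formula Y = R tan α tan δ is reconstructed here and should be treated as an assumption; names are illustrative):

```python
import math

def foster_lambert_gnomon(latitude_deg, sun_declination_deg, radius):
    """Gnomon tilt alpha = 45 - lat/2 (degrees from vertical, toward north)
    and the daily northward offset Y = R * tan(alpha) * tan(declination)."""
    alpha = 45.0 - latitude_deg / 2.0
    y = radius * math.tan(math.radians(alpha)) * math.tan(math.radians(sun_declination_deg))
    return alpha, y

# Latitude 40 N near the June solstice (declination about +23.4 degrees),
# dial radius 1 m: tilt 25 degrees, offset roughly 0.20 m northwards.
alpha, y = foster_lambert_gnomon(40.0, 23.4, 1.0)
print(round(alpha, 1), round(y, 2))
```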
Altitude-based sundials
Altitude dials measure the height of the Sun in the sky, rather than directly measuring its hour-angle about the Earth's axis. They are not oriented towards true north, but rather towards the Sun and generally held vertically. The Sun's elevation is indicated by the position of a nodus, either the shadow-tip of a gnomon, or a spot of light.
In altitude dials, the time is read from where the nodus falls on a set of hour-curves that vary with the time of year. The construction of many altitude dials is calculation-intensive, as is also the case with many azimuth dials. But the capuchin dials (described below) are constructed and used graphically.
Altitude dials' disadvantages:
Since the Sun's altitude is the same at times equally spaced about noon (e.g., 9am and 3pm), the user had to know whether it was morning or afternoon. At, say, 3:00 pm, that is not a problem. But when the dial indicates a time 15 minutes from noon, the user likely will not have a way of distinguishing 11:45 from 12:15.
Additionally, altitude dials are less accurate near noon, because the sun's altitude is not changing rapidly then.
Many of these dials are portable and simple to use. As is often the case with other sundials, many altitude dials are designed for only one latitude. But the capuchin dial (described below) has a version that's adjustable for latitude.
Human shadows
The length of a human shadow (or of any vertical object) can be used to measure the sun's elevation and, thence, the time. The Venerable Bede gave a table for estimating the time from the length of one's shadow in feet, on the assumption that a monk's height is six times the length of his foot. Such shadow lengths will vary with the geographical latitude and with the time of year. For example, the shadow length at noon is short in summer months, and long in winter months.
Chaucer evokes this method a few times in his Canterbury Tales, as in his Parson's Tale.
An equivalent type of sundial using a vertical rod of fixed length is known as a backstaff dial.
Shepherd's dial – timesticks
A shepherd's dial – also known as a shepherd's column dial, pillar dial, cylinder dial or chilindre – is a portable cylindrical sundial with a knife-like gnomon that juts out perpendicularly. It is normally dangled from a rope or string so the cylinder is vertical. The gnomon can be twisted to be above a month or day indication on the face of the cylinder. This corrects the sundial for the equation of time. The entire sundial is then twisted on its string so that the gnomon aims toward the Sun, while the cylinder remains vertical. The tip of the shadow indicates the time on the cylinder. The hour curves inscribed on the cylinder permit one to read the time. Shepherd's dials are sometimes hollow, so that the gnomon can fold within when not in use.
The shepherd's dial is evoked in Henry VI, Part 3,
among other works of literature.
The cylindrical shepherd's dial can be unrolled into a flat plate. In one simple version, the front and back of the plate each have three columns, corresponding to pairs of months with roughly the same solar declination (June:July, May:August, April:September, March:October, February:November, and January:December). The top of each column has a hole for inserting the shadow-casting gnomon, a peg. Often only two times are marked on the column below, one for noon and the other for mid-morning / mid-afternoon.
Timesticks, also known as clock spears or shepherds' time sticks, are based on the same principles as dials. The time stick is carved with eight vertical time scales for different periods of the year, each bearing a time scale calculated according to the relative amount of daylight during the different months of the year. Any reading depends not only on the time of day but also on the latitude and time of year.
A peg gnomon is inserted at the top in the appropriate hole or face for the season of the year, and turned to the Sun so that the shadow falls directly down the scale. Its end displays the time.
Ring dials
In a ring dial (also known as an Aquitaine or a perforated ring dial), the ring is hung vertically and oriented sideways towards the sun. A beam of light passes through a small hole in the ring and falls on hour-curves that are inscribed on the inside of the ring. To adjust for the equation of time, the hole is usually on a loose ring within the ring so that the hole can be adjusted to reflect the current month.
Card dials (Capuchin dials)
Card dials are another form of altitude dial. A card is aligned edge-on with the sun and tilted so that a ray of light passes through an aperture onto a specified spot, thus determining the sun's altitude. A weighted string hangs vertically downwards from a hole in the card, and carries a bead or knot. The position of the bead on the hour-lines of the card gives the time. In more sophisticated versions such as the Capuchin dial, there is only one set of hour-lines, i.e., the hour lines do not vary with the seasons. Instead, the position of the hole from which the weighted string hangs is varied according to the season.
The Capuchin sundials are constructed and used graphically, as opposed to the direct hour-angle measurements of horizontal or equatorial dials, or the calculated hour angle lines of some altitude and azimuth dials.
In addition to the ordinary Capuchin dial, there is a universal Capuchin dial, adjustable for latitude.
Navicula
A navicula de Venetiis or "little ship of Venice" was an altitude dial used to tell time and which was shaped like a little ship. The cursor (with a plumb line attached) was slid up / down the mast to the correct latitude. The user then sighted the Sun through the pair of sighting holes at either end of the "ship's deck". The plumb line then marked what hour of the day it was.
Nodus-based sundials
Another type of sundial follows the motion of a single point of light or shadow, which may be called the nodus. For example, the sundial may follow the sharp tip of a gnomon's shadow, e.g., the shadow-tip of a vertical obelisk (e.g., the Solarium Augusti) or the tip of the horizontal marker in a shepherd's dial. Alternatively, sunlight may be allowed to pass through a small hole or reflected from a small (e.g., coin-sized) circular mirror, forming a small spot of light whose position may be followed. In such cases, the rays of light trace out a cone over the course of a day; when the rays fall on a surface, the path followed is the intersection of the cone with that surface. Most commonly, the receiving surface is a geometrical plane, so that the path of the shadow-tip or light-spot (called a declination line) traces out a conic section such as a hyperbola or an ellipse. The collection of hyperbolae was called a pelekonon (axe) by the Greeks, because it resembles a double-bladed axe, narrow in the center (near the noon line) and flaring out at the ends (early morning and late evening hours).
There is a simple verification of hyperbolic declination lines on a sundial: the distance from the origin to the equinox line should be equal to harmonic mean of distances from the origin to summer and winter solstice lines.
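This check is straightforward to express numerically; a small sketch (illustrative names):

```python
def expected_equinox_distance(d_summer, d_winter):
    """Harmonic mean of the distances from the dial origin to the summer-
    and winter-solstice lines; on a plane nodus dial this should match the
    measured distance from the origin to the equinox line."""
    return 2.0 * d_summer * d_winter / (d_summer + d_winter)

# If the solstice lines lie 30 and 60 units from the origin, the equinox
# line should lie at 2 * 30 * 60 / (30 + 60) = 40 units.
print(expected_equinox_distance(30.0, 60.0))  # 40.0
```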
Nodus-based sundials may use a small hole or mirror to isolate a single ray of light; the former are sometimes called aperture dials. The oldest example is perhaps the antiborean sundial (antiboreum), a spherical nodus-based sundial that faces true north; a ray of sunlight enters from the south through a small hole located at the sphere's pole and falls on the hour and date lines inscribed within the sphere, which resemble lines of longitude and latitude, respectively, on a globe.
Reflection sundials
Isaac Newton developed a convenient and inexpensive sundial, in which a small mirror is placed on the sill of a south-facing window. The mirror acts like a nodus, casting a single spot of light on the ceiling. Depending on the geographical latitude and time of year, the light-spot follows a conic section, such as the hyperbolae of the pelekonon. If the mirror is parallel to the Earth's equator, and the ceiling is horizontal, then the resulting angles are those of a conventional horizontal sundial. Using the ceiling as a sundial surface exploits unused space, and the dial may be large enough to be very accurate.
Multiple dials
Sundials are sometimes combined into multiple dials. If two or more dials that operate on different principles — such as an analemmatic dial and a horizontal or vertical dial — are combined, the resulting multiple dial becomes self-aligning, provided that both dials output both the time and the solar declination. In other words, the direction of true north need not be determined; the dials are oriented correctly when they read the same time and declination. However, the most common forms combine dials that are based on the same principle, and the analemmatic dial does not normally output the declination of the sun, so such combinations are not self-aligning.
Diptych (tablet) sundial
The diptych consisted of two small flat faces, joined by a hinge. Diptychs usually folded into little flat boxes suitable for a pocket. The gnomon was a string between the two faces. When the string was tight, the two faces formed both a vertical and horizontal sundial. These were made of white ivory, inlaid with black lacquer markings. The gnomons were black braided silk, linen or hemp string. With a knot or bead on the string as a nodus, and the correct markings, a diptych (really any sundial large enough) can keep a calendar well enough to plant crops. A common error describes the diptych dial as self-aligning. This is not correct for diptych dials consisting of a horizontal and vertical dial using a string gnomon between faces, no matter the orientation of the dial faces. Since the string gnomon is continuous, the shadows must meet at the hinge; hence, any orientation of the dial will show the same time on both dials.
Multiface dials
A common type of multiple dial has sundials on every face of a Platonic solid (regular polyhedron), usually a cube.
Extremely ornate sundials can be composed in this way, by applying a sundial to every surface of a solid object.
In some cases, the sundials are formed as hollows in a solid object, e.g., a cylindrical hollow aligned with the Earth's rotational axis (in which the edges play the role of styles) or a spherical hollow in the ancient tradition of the hemisphaerium or the antiboreum. (See the History section above.) In some cases, these multiface dials are small enough to sit on a desk, whereas in others, they are large stone monuments.
A polyhedral dial's faces can be designed to give the time for different time zones simultaneously. Examples include the Scottish sundials of the 17th and 18th centuries, which often took extremely complex polyhedral shapes, some even with convex faces.
Prismatic dials
Prismatic dials are a special case of polar dials, in which the sharp edges of a prism of a concave polygon serve as the styles and the sides of the prism receive the shadow. Examples include a three-dimensional cross or star of David on gravestones.
Unusual sundials
Benoy dial
The Benoy dial was invented by Walter Gordon Benoy of Collingham, Nottinghamshire, England. Whereas a gnomon casts a sheet of shadow, his invention creates an equivalent sheet of light by allowing the Sun's rays through a thin slit, reflecting them from a long, slim mirror (usually half-cylindrical), or focusing them through a cylindrical lens. Examples of Benoy dials can be found in the United Kingdom at:
Carnfunnock Country Park, Antrim Northern Ireland
Upton Hall, British Horological Institute, Newark-on-Trent, Nottinghamshire
Within the collections of St Edmundsbury Heritage Service, Bury St Edmunds
Longleat, Wiltshire
Jodrell Bank Science Centre
Birmingham Botanical Gardens
Science Museum, London (inventory number 1975-318)
Bifilar sundial
Invented by the German mathematician Hugo Michnik in 1922, the bifilar sundial has two non-intersecting threads parallel to the dial. Usually the second thread is orthogonal to the first.
The intersection of the two threads' shadows gives the local solar time.
Digital sundial
A digital sundial indicates the current time with numerals formed by the sunlight striking it. Sundials of this type are installed in the Deutsches Museum in Munich and in the Sundial Park in Genk (Belgium), and a small version is available commercially. There is a patent for this type of sundial.
Globe dial
The globe dial is a sphere aligned with the Earth's rotational axis, and equipped with a spherical vane. Similar to sundials with a fixed axial style, a globe dial determines the time from the Sun's azimuthal angle in its apparent rotation about the earth. This angle can be determined by rotating the vane to give the smallest shadow.
Noon marks
The simplest sundials do not give the hours, but rather note the exact moment of 12:00 noon. In centuries past, such dials were used to set mechanical clocks, which were sometimes so inaccurate as to lose or gain significant time in a single day. The simplest noon-marks have a shadow that passes a mark. Then, an almanac can translate from local solar time and date to civil time. The civil time is used to set the clock. Some noon-marks include a figure-eight that embodies the equation of time, so that no almanac is needed.
In some U.S. colonial-era houses, a noon-mark might be carved into a floor or windowsill. Such marks indicate local noon, and provide a simple and accurate time reference for households to set their clocks. Some Asian countries had post offices set their clocks from a precision noon-mark. These in turn provided the times for the rest of the society. The typical noon-mark sundial was a lens set above an analemmatic plate. The plate has an engraved figure-eight shape, which corresponds to the equation of time (described above) versus the solar declination. When the edge of the Sun's image touches the part of the shape for the current month, this indicates that it is 12:00 noon.
Sundial cannon
A sundial cannon, sometimes called a 'meridian cannon', is a specialized sundial that is designed to create an 'audible noonmark', by automatically igniting a quantity of gunpowder at noon. These were novelties rather than precision sundials, sometimes installed in parks in Europe mainly in the late 18th or early 19th centuries. They typically consist of a horizontal sundial, which has in addition to a gnomon a suitably mounted lens, set to focus the rays of the sun at exactly noon on the firing pan of a miniature cannon loaded with gunpowder (but no ball). To function properly the position and angle of the lens must be adjusted seasonally.
Meridian lines
A horizontal line aligned on a meridian with a gnomon facing the noon-sun is termed a meridian line and does not indicate the time, but instead the day of the year. Historically they were used to accurately determine the length of the solar year. Examples are the Bianchini meridian line in Santa Maria degli Angeli e dei Martiri in Rome, and the Cassini line in San Petronio Basilica at Bologna.
Sundial mottoes
The association of sundials with time has inspired their designers over the centuries to display mottoes as part of the design. Often these cast the device in the role of memento mori, inviting the observer to reflect on the transience of the world and the inevitability of death. "Do not kill time, for it will surely kill thee." Other mottoes are more whimsical: "I count only the sunny hours," and "I am a sundial and I make a botch / of what is done far better by a watch." Collections of sundial mottoes have often been published through the centuries.
Use as a compass
If a horizontal-plate sundial is made for the latitude in which it is being used, and if it is mounted with its plate horizontal and its gnomon pointing to the celestial pole that is above the horizon, then it shows the correct time in apparent solar time. Conversely, if the directions of the cardinal points are initially unknown, but the sundial is aligned so it shows the correct apparent solar time as calculated from the reading of a clock, its gnomon shows the direction of true north or south, allowing the sundial to be used as a compass. The sundial can be placed on a horizontal surface, and rotated about a vertical axis until it shows the correct time. The gnomon will then be pointing to the north, in the northern hemisphere, or to the south in the southern hemisphere. This method is much more accurate than using a watch as a compass and can be used in places where the magnetic declination is large, making a magnetic compass unreliable. An alternative method uses two sundials of different designs. (See Multiple dials, above.) The dials are attached to and aligned with each other, and are oriented so they show the same time. This allows the directions of the cardinal points and the apparent solar time to be determined simultaneously, without requiring a clock.
| Technology | Timekeeping | null |
72949 | https://en.wikipedia.org/wiki/Millimetre | Millimetre | The millimetre (international spelling; SI unit symbol mm) or millimeter (American spelling) is a unit of length in the International System of Units (SI), equal to one thousandth of a metre, which is the SI base unit of length. Therefore, there are one thousand millimetres in a metre. There are ten millimetres in a centimetre.
One millimetre is equal to 1,000 micrometres or 1,000,000 nanometres.
Since an inch is officially defined as exactly 25.4 millimetres, a millimetre is equal to exactly 5⁄127 (≈ 0.03937) of an inch.
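A minimal sketch of the exact conversion (illustrative names):

```python
MM_PER_INCH = 25.4  # exact, by the international definition of the inch

def mm_to_inches(mm):
    """Convert millimetres to inches using the exact definition."""
    return mm / MM_PER_INCH

print(mm_to_inches(1.0))   # 0.03937007874015748, i.e. 5/127 of an inch
print(mm_to_inches(25.4))  # 1.0
```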
Definition
Since 1983, the metre has been defined as "the length of the path travelled by light in vacuum during a time interval of 1⁄299,792,458 of a second". A millimetre, 1⁄1,000 of a metre, is therefore the distance travelled by light in 1⁄299,792,458,000 of a second.
Informal terminology
A common shortening of millimetre in spoken English is "mil". This can cause confusion in the United States, where "mil" traditionally means a thousandth of an inch.
Unicode symbols
For the purposes of compatibility with Chinese, Japanese and Korean (CJK) characters, Unicode has symbols for:
millimetre - ㎜ (U+339C)
square millimetre - ㎟ (U+339F)
cubic millimetre - ㎣ (U+33A3)
In Japanese typography, these square symbols are used for laying out unit symbols without distorting the grid layout of text characters.
Measurement
On a metric ruler, the smallest measurements are normally millimetres. High-quality engineering rulers may be graduated in increments of 0.5 mm. Digital callipers are commonly capable of reading increments as small as 0.01 mm.
Microwaves with a frequency of 300 GHz have a wavelength of 1 mm. Using frequencies between 30 GHz and 300 GHz for data transmission, in contrast to the 300 MHz to 3 GHz normally used in mobile devices, has the potential to allow data transfer rates of 10 gigabits per second.
The smallest dimension the human eye can resolve is around 0.02 to 0.04 mm, approximately the width of a thin human hair. A sheet of paper is typically between 0.07 mm and 0.18 mm thick, with ordinary printer paper or copy paper approximately 0.1 mm thick.
| Physical sciences | Metric | Basics and measurement |
72967 | https://en.wikipedia.org/wiki/Cubic%20centimetre | Cubic centimetre | A cubic centimetre (or cubic centimeter in US English) (SI unit symbol: cm3; non-SI abbreviations: cc and ccm) is a commonly used unit of volume that corresponds to the volume of a cube that measures 1 cm × 1 cm × 1 cm. One cubic centimetre corresponds to a volume of one millilitre. The mass of one cubic centimetre of water at 3.98 °C (the temperature at which it attains its maximum density) is almost equal to one gram.
In internal combustion engines, "cc" refers to the total volume of its engine displacement in cubic centimetres. The displacement can be calculated using the formula

V_d = (π⁄4) × b² × s × N

where V_d is engine displacement, b is the bore of the cylinders, s is the length of the stroke and N is the number of cylinders.
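A minimal sketch of this calculation (illustrative names):

```python
import math

def engine_displacement_cc(bore_cm, stroke_cm, cylinders):
    """Engine displacement V_d = (pi/4) * b**2 * s * N, in cubic
    centimetres when bore b and stroke s are given in centimetres."""
    return math.pi / 4.0 * bore_cm ** 2 * stroke_cm * cylinders

# A four-cylinder engine with an 8.1 cm bore and a 7.76 cm stroke
# displaces about 1599 cc, i.e. a nominal "1.6 litre" engine.
print(round(engine_displacement_cc(8.1, 7.76, 4)))
```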
Conversions
1 millilitre = 1 cm3
1 litre = 1000 cm3
1 cubic inch = 16.387064 cm3 (exactly, since 1 inch = 2.54 cm)
Unicode character
The "cubic centimetre" symbol is encoded by Unicode at code point .
| Physical sciences | Volume | Basics and measurement |
73102 | https://en.wikipedia.org/wiki/Residue%20%28complex%20analysis%29 | Residue (complex analysis) | In mathematics, more specifically complex analysis, the residue is a complex number proportional to the contour integral of a meromorphic function along a path enclosing one of its singularities. (More generally, residues can be calculated for any function that is holomorphic except at the discrete points {ak}k, even if some of them are essential singularities.) Residues can be computed quite easily and, once known, allow the determination of general contour integrals via the residue theorem.
Definition
The residue of a meromorphic function f at an isolated singularity a, often denoted Res(f, a), Res_a(f), or res_a f, is the unique value R such that f(z) − R/(z − a) has an analytic antiderivative in a punctured disk 0 < |z − a| < δ.
Alternatively, residues can be calculated by finding Laurent series expansions, and one can define the residue as the coefficient a−1 of a Laurent series.
The concept can be used to provide contour integration values of certain contour integral problems considered in the residue theorem. According to the residue theorem, for a meromorphic function f, the residue at a point a_k is given as:

Res(f, a_k) = (1/(2πi)) ∮_γ f(z) dz

where γ is a positively oriented simple closed curve around a_k, not including any other singularities on or inside the curve.
The definition of a residue can be generalized to arbitrary Riemann surfaces. Suppose ω is a 1-form on a Riemann surface. Let ω be meromorphic at some point x, so that we may write ω in local coordinates as f(z) dz. Then, the residue of ω at x is defined to be the residue of f(z) at the point corresponding to x.
Contour integration
Contour integral of a monomial
Computing the residue of a monomial

∮_C z^k dz

makes most residue computations easy to do. Since path integral computations are homotopy invariant, we will let C be the circle with radius 1 going counter-clockwise. Then, using the change of coordinates z = e^(iθ) we find that

dz = i e^(iθ) dθ

hence our integral now reads as

∮_C z^k dz = ∫₀^(2π) i e^(iθ(k+1)) dθ = 2πi if k = −1, and 0 otherwise.

Thus, the residue of z^k is 1 if the integer k = −1, and 0 otherwise.
Generalization to Laurent series
If a function is expressed as a Laurent series expansion around c as follows:

f(z) = Σ_(n=−∞)^∞ a_n (z − c)^n

then the residue at the point c is calculated as:

Res(f, c) = (1/(2πi)) ∮_γ f(z) dz = a_(−1)

using the results from the contour integral of a monomial for a counter-clockwise contour integral around the point c. Hence, if a Laurent series representation of a function exists around c, then its residue around c is known by the coefficient of the (z − c)^(−1) term.
Application in residue theorem
For a meromorphic function f, with a finite set of singularities within a positively oriented simple closed curve C which does not pass through any singularity, the value of the contour integral is given according to the residue theorem as:

∮_C f(z) dz = 2πi Σ_k I(C, a_k) Res(f, a_k)

where I(C, a_k), the winding number, is 1 if a_k is in the interior of C and 0 if not, simplifying to:

∮_C f(z) dz = 2πi Σ Res(f, a_k)

where the a_k are all the isolated singularities within the contour C.
Calculation of residues
Suppose a punctured disk D = {z : 0 < |z − c| < R} in the complex plane is given and f is a holomorphic function defined (at least) on D. The residue Res(f, c) of f at c is the coefficient a_(−1) of (z − c)^(−1) in the Laurent series expansion of f around c. Various methods exist for calculating this value, and the choice of which method to use depends on the function in question, and on the nature of the singularity.
According to the residue theorem, we have:

Res(f, c) = (1/(2πi)) ∮_γ f(z) dz
where γ traces out a circle around c in a counterclockwise manner and does not pass through or contain other singularities within it. We may choose the path γ to be a circle of radius ε around c. Since ε can be as small as we desire it can be made to contain only the singularity of c due to nature of isolated singularities. This may be used for calculation in cases where the integral can be calculated directly, but it is usually the case that residues are used to simplify calculation of integrals, and not the other way around.
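This small-circle characterization can be checked numerically. Below is a minimal sketch (assuming numpy; function names are mine) that approximates Res(f, c) with the trapezoidal rule, which is very accurate for smooth periodic integrands:

```python
import numpy as np

def numerical_residue(f, center, radius=1e-2, n=4096):
    """Approximate Res(f, center) = (1/(2*pi*i)) * integral of f(z) dz over
    a small counterclockwise circle, sampled at n equally spaced points."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz_dtheta = 1j * radius * np.exp(1j * theta)
    # (1/(2*pi*i)) * integral f(z(theta)) z'(theta) dtheta  ~  mean(f*z') / i
    return np.mean(f(z) * dz_dtheta) / 1j

# Residue of e^z / z^5 at 0 is 1/4! (cf. Example 1 below):
print(numerical_residue(lambda z: np.exp(z) / z**5, 0.0))  # ~ 0.0416666...
```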
Removable singularities
If the function f can be continued to a holomorphic function on the whole disk , then Res(f, c) = 0. The converse is not generally true.
Simple poles
If c is a simple pole of f, the residue of f is given by:

Res(f, c) = lim_(z→c) (z − c) f(z)
If that limit does not exist, then f instead has an essential singularity at c. If the limit is 0, then f is either analytic at c or has a removable singularity there. If the limit is equal to infinity, then the order of the pole is higher than 1.
It may be that the function f can be expressed as a quotient of two functions, f = g/h, where g and h are holomorphic functions in a neighbourhood of c, with h(c) = 0 and h′(c) ≠ 0. In such a case, L'Hôpital's rule can be used to simplify the above formula to:

Res(f, c) = lim_(z→c) (z − c) g(z)/h(z) = g(c)/h′(c)
Limit formula for higher-order poles
More generally, if c is a pole of order p, then the residue of f around z = c can be found by the formula:

Res(f, c) = (1/(p − 1)!) lim_(z→c) d^(p−1)/dz^(p−1) [(z − c)^p f(z)]
This formula can be very useful in determining the residues for low-order poles. For higher-order poles, the calculations can become unmanageable, and series expansion is usually easier. For essential singularities, no such simple formula exists, and residues must usually be taken directly from series expansions.
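These limit formulas are mechanical enough to delegate to a computer algebra system; a short sketch using sympy, whose residue function extracts the coefficient from a series expansion:

```python
import sympy as sp

z = sp.symbols('z')

# Simple pole via Res(g/h, c) = g(c)/h'(c): sin(z)/(z**2 - z) at z = 1.
print(sp.residue(sp.sin(z) / (z**2 - z), z, 1))  # sin(1)

# Pole of order 5: exp(z)/z**5 at z = 0; residue 1/4! = 1/24.
print(sp.residue(sp.exp(z) / z**5, z, 0))        # 1/24
```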
Residue at infinity
In general, the residue at infinity is defined as:

Res(f(z), ∞) = −Res((1/z²) f(1/z), 0)

If the following condition is met:

lim_(|z|→∞) f(z) = 0

then the residue at infinity can be computed using the following formula:

Res(f, ∞) = −lim_(|z|→∞) z · f(z)
If instead

lim_(|z|→∞) f(z) = c ≠ 0

then the residue at infinity is

Res(f, ∞) = lim_(|z|→∞) z² · f′(z)
For functions meromorphic on the entire complex plane with finitely many singularities, the sum of the residues at the (necessarily) isolated singularities plus the residue at infinity is zero, which gives:

Res(f(z), ∞) = −Σ_k Res(f(z), a_k)
Series methods
If parts or all of a function can be expanded into a Taylor series or Laurent series, which may be possible if the parts or the whole of the function has a standard series expansion, then calculating the residue is significantly simpler than by other methods. The residue of the function is simply given by the coefficient of (z − c)^(−1) in the Laurent series expansion of the function.
Examples
Residue from series expansion
Example 1
As an example, consider the contour integral

∮_C (e^z / z^5) dz
where C is some simple closed curve about 0.
Let us evaluate this integral using a standard convergence result about integration by series. We can substitute the Taylor series for e^z into the integrand. The integral then becomes

∮_C (1/z^5) Σ_(n=0)^∞ (z^n / n!) dz

Let us bring the 1/z^5 factor into the series. The contour integral of the series then writes

∮_C Σ_(n=0)^∞ (z^(n−5) / n!) dz
Since the series converges uniformly on the support of the integration path, we are allowed to exchange integration and summation.
The series of the path integrals then collapses to a much simpler form because of the previous computation. So now the integral around C of every term not in the form c z^(−1) is zero, and the integral is reduced to

∮_C (1/(4! z)) dz = (1/4!) ∮_C (dz/z) = (1/4!)(2πi) = πi/12

The value 1/4! is the residue of e^z/z^5 at z = 0, and is denoted

Res_(z=0) (e^z / z^5) = 1/4! = 1/24
Example 2
As a second example, consider calculating the residues at the singularities of the function

f(z) = sin z / (z² − z)

which may be used to calculate certain contour integrals. This function appears to have a singularity at z = 0, but if one factorizes the denominator and thus writes the function as

f(z) = sin z / (z(z − 1))

it is apparent that the singularity at z = 0 is a removable singularity, and the residue at z = 0 is therefore 0.
The only other singularity is at z = 1. Recall the expression for the Taylor series for a function g(z) about z = a:

g(z) = g(a) + g′(a)(z − a) + g″(a)(z − a)²/2! + g‴(a)(z − a)³/3! + ⋯

So, for g(z) = sin z and a = 1 we have

sin z = sin 1 + (cos 1)(z − 1) − (sin 1)(z − 1)²/2! − (cos 1)(z − 1)³/3! + ⋯

and for g(z) = 1/z and a = 1 we have

1/z = 1 − (z − 1) + (z − 1)² − (z − 1)³ + ⋯

Multiplying those two series and introducing 1/(z − 1) gives us

f(z) = (sin 1)/(z − 1) + (cos 1 − sin 1) + ⋯

So the residue of f(z) at z = 1 is sin 1.
Example 3
The next example shows that, computing a residue by series expansion, a major role is played by the Lagrange inversion theorem. Letbe an entire function, and letwith positive radius of convergence, and with . So has a local inverse at 0, and is meromorphic at 0. Then we have:Indeed,because the first series converges uniformly on any small circle around 0. Using the Lagrange inversion theoremand we get the above expression. For example, if and also , thenandThe first term contributes 1 to the residue, and the second term contributes 2 since it is asymptotic to .
Note that, with the corresponding stronger symmetric assumptions on and , it also followswhere is a local inverse of at 0.
| Mathematics | Complex analysis | null |
73218 | https://en.wikipedia.org/wiki/Surface%20weather%20analysis | Surface weather analysis | Surface weather analysis is a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations.
Weather maps are created by plotting or tracing the values of relevant quantities such as sea level pressure, temperature, and cloud cover onto a geographical map to help find synoptic scale features such as weather fronts.
The first weather maps in the 19th century were drawn well after the fact to help devise a theory on storm systems. After the advent of the telegraph, simultaneous surface weather observations became possible for the first time, and beginning in the late 1840s, the Smithsonian Institution became the first organization to draw real-time surface analyses. Use of surface analyses began first in the United States, spreading worldwide during the 1870s. Use of the Norwegian cyclone model for frontal analysis began in the late 1910s across Europe, with its use finally spreading to the United States during World War II.
Surface weather analyses have special symbols that show frontal systems, cloud cover, precipitation, or other important information. For example, an H may represent high pressure, implying clear skies and relatively warm weather. An L, on the other hand, may represent low pressure, which frequently accompanies precipitation. Various symbols are used not just for frontal zones and other surface boundaries on weather maps, but also to depict the present weather at various locations on the weather map. Areas of precipitation help determine the frontal type and location.
History of surface analysis
The use of weather charts in a modern sense began in the middle portion of the 19th century in order to devise a theory on storm systems. The development of a telegraph network by 1845 made it possible to gather weather information from multiple distant locations quickly enough to preserve its value for real-time applications. The Smithsonian Institution developed its network of observers over much of the central and eastern United States between the 1840s and 1860s. The U.S. Army Signal Corps inherited this network between 1870 and 1874 by an act of Congress, and expanded it to the west coast soon afterwards.
The weather data was at first less useful as a result of the different times at which weather observations were made. The first attempts at time standardization took hold in Great Britain by 1855. The entire United States did not finally come under the influence of time zones until 1905, when Detroit finally established standard time. Other countries followed the lead of the United States in taking simultaneous weather observations, starting in 1873. Other countries then began preparing surface analyses. The use of frontal zones on weather maps did not appear until the introduction of the Norwegian cyclone model in the late 1910s, despite Loomis' earlier attempt at a similar notion in 1841. Since the leading edge of air mass changes bore resemblance to the military fronts of World War I, the term "front" came into use to represent these lines.
Despite the introduction of the Norwegian cyclone model just after World War I, the United States did not formally analyze fronts on surface analyses until late 1942, when the WBAN Analysis Center opened in downtown Washington, D.C. The effort to automate map plotting began in the United States in 1969, with the process complete in the 1970s. Hong Kong completed their process of automated surface plotting by 1987. By 1999, computer systems and software had finally become sophisticated enough to allow for the ability to underlay on the same workstation satellite imagery, radar imagery, and model-derived fields such as atmospheric thickness and frontogenesis in combination with surface observations to make for the best possible surface analysis. In the United States, this development was achieved when Intergraph workstations were replaced by n-AWIPS workstations. By 2001, the various surface analyses done within the National Weather Service were combined into the Unified Surface Analysis, which is issued every six hours and combines the analyses of four different centers. Recent advances in both the fields of meteorology and geographic information systems have made it possible to devise finely tailored weather maps. Weather information can quickly be matched to relevant geographical detail. For instance, icing conditions can be mapped onto the road network. This will likely continue to lead to changes in the way surface analyses are created and displayed over the next several years.
Station model used on weather maps
When analyzing a weather map, a station model is plotted at each point of observation. Within the station model, the temperature, dewpoint, wind speed and direction, atmospheric pressure, pressure tendency, and ongoing weather are plotted. The circle in the middle represents cloud cover; the fraction it is filled in represents the degree of overcast. Outside the United States, temperature and dewpoint are plotted in degrees Celsius. The wind barb points in the direction from which the wind is coming. Each full flag on the wind barb represents 10 knots (19 km/h) of wind, and each half flag represents 5 knots (9 km/h). When winds reach 50 knots (93 km/h), a filled-in triangle is used for each 50 knots (93 km/h) of wind. In the United States, rainfall amounts plotted in the corner of the station model are in inches; the international standard rainfall measurement unit is the millimeter. Once a map has a field of station models plotted, isobars (lines of equal pressure), isallobars (lines of equal pressure change), isotherms (lines of equal temperature), and isotachs (lines of equal wind speed) are drawn. The abstract weather symbols were devised to take up the least room possible on weather maps.
Synoptic scale features
A synoptic scale feature is one whose dimensions are large in scale, more than several hundred kilometers in length. Migratory pressure systems and frontal zones exist on this scale.
Pressure centers
Centers of surface high- and low-pressure areas that are found within closed isobars on a surface weather analysis are the absolute maxima and minima in the pressure field, and can tell a user at a glance what the general weather is in their vicinity. Weather maps in English-speaking countries will depict their highs as Hs and lows as Ls, while Spanish-speaking countries will depict their highs as As and lows as Bs.
Low pressure
Low-pressure systems, also known as cyclones, are located in minima in the pressure field. Rotation is inward at the surface and counterclockwise in the northern hemisphere as opposed to inward and clockwise in the southern hemisphere due to the Coriolis force. Weather is normally unsettled in the vicinity of a cyclone, with increased cloudiness, increased winds, increased temperatures, and upward motion in the atmosphere, which leads to an increased chance of precipitation. Polar lows can form over relatively mild ocean waters when cold air sweeps in from the ice cap. The relatively warmer water leads to upward convection, causing a low to form, and precipitation usually in the form of snow. Tropical cyclones and winter storms are intense varieties of low pressure. Over land, thermal lows are indicative of hot weather during the summer.
High pressure
High-pressure systems, also known as anticyclones, rotate outward at the surface and clockwise in the northern hemisphere as opposed to outward and counterclockwise in the southern hemisphere. Under surface highs, sinking of the atmosphere slightly warms the air by compression, leading to clearer skies, winds that are lighter, and a reduced chance of precipitation. The descending air is dry, hence less energy is required to raise its temperature. If high pressure persists, air pollution will build up due to pollutants trapped near the surface caused by the subsiding motion associated with the high.
Fronts
Fronts in meteorology are boundaries between air masses that have different density, air temperature, and humidity. Strictly speaking, the front is marked at the warmer edge of a frontal zone where the gradient is very large. When a front passes over a point, it is marked by changes in temperature, moisture, wind speed and direction, a minimum of atmospheric pressure, and a change in the cloud pattern, sometimes with precipitation. Cold fronts develop where the cold air mass is advancing, warm fronts where the warm air is advancing, and a stationary front is not moving. Fronts classically wrap around low pressure centers as indicated in the image here depicted for the northern hemisphere. On a larger scale, the Earth's polar front is a sharpening of the general equator-to-pole temperature gradient, underlying a high-altitude jet stream for reasons of thermal wind balance. Fronts usually travel from west to east, although they can move in a north-south direction or even east to west (a "backdoor" front) as airflow wraps around a low pressure center. Frontal zones can be distorted by such geographic features as mountains and large bodies of water.
Cold front
A cold front is located at the leading edge of a sharp temperature gradient on an isotherm analysis, often marked by a sharp surface pressure trough. Cold fronts can move up to twice as quickly as warm fronts and produce sharper changes in weather since cold air is denser than warm air and rapidly lifts as well as pushes the warmer air. Cold fronts are typically accompanied by a narrow band of clouds, showers and thunderstorms. On a weather map, the surface position of the cold front is marked with a blue line of triangles (pips) pointing in the direction of travel, at the leading edge of the cooler air mass.
Warm front
Warm fronts mark the position on the Earth's surface where a relatively warm body of air is advancing into colder air. The front is marked on the warm edge of the gradient in isotherms, and lies within a low pressure trough that tends to be broader and weaker than that of a cold front. Warm fronts move more slowly than cold fronts because cold air is denser, and is only pushed along (not lifted from) the Earth's surface. The warm air mass overrides the cold air mass, so temperature and cloud changes occur at higher altitudes before those at the surface. Clouds ahead of the warm front are mostly stratiform with precipitation that increases gradually as the front approaches. Ahead of a warm front, descending cloud bases will often begin with cirrus and cirrostratus (high-level), then altostratus (mid-level) clouds, and eventually lower in the atmosphere as the front passes through. Fog can precede a warm front when precipitation falls into areas of colder air, but increasing surface temperatures and wind tend to dissipate it after a warm front passes through. Cases with environmental instability can be conducive to thunderstorm development. On weather maps, the surface location of a warm front is marked with a red line of half circles pointing in the direction of travel.
Occluded front
The classical view of an occluded front is that they are formed when a cold front overtakes a warm front. A more modern view suggests that they form directly during the wrap-up of the baroclinic zone during cyclogenesis, and lengthen due to flow deformation and rotation around the cyclone.
Occluded fronts are indicated on a weather map by a purple line with alternating half-circles and triangles pointing in direction of travel: that is, with a mixture of warm and cold frontal colors and symbols. Occlusions can be divided into warm vs. cold types. In a cold occlusion, the air mass overtaking the warm front is cooler than the cool air ahead of the warm front, and plows under both air masses. In a warm occlusion, the air mass overtaking the warm front is not as cool as the cold air ahead of the warm front, and rides over the colder air mass while lifting the warm air.
Occluded fronts usually form around low pressure systems in the mature or late stages of their life cycle, but some continue to deepen after occlusion, and some do not form occluded fronts at all. The weather associated with an occluded front includes a variety of cloud and precipitation patterns, including dry slots and banded precipitation. Cold, warm and occluded fronts often meet at the point of occlusion or triple point.
Stationary fronts and shearlines
A stationary front is a non-moving boundary between two different air masses. They tend to remain in the same area for long periods of time, sometimes undulating in waves. Often a less-steep temperature gradient continues behind (on the cool side of) the sharp frontal zone with more widely spaced isotherms. A wide variety of weather can be found along a stationary front, characterized more by its prolonged presence than by a specific type. Stationary fronts may dissipate after several days, but can change into a cold or warm front if conditions aloft change, driving one air mass toward the other. Stationary fronts are marked on weather maps with alternating red half-circles and blue spikes pointing in opposite directions, indicating no significant movement.
As airmass temperatures equalize, stationary fronts may become smaller in scale, degenerating to a narrow zone where wind direction changes over a short distance, known as a shear line, depicted as a blue line of single alternating dots and dashes.
Mesoscale features
Mesoscale features are smaller than synoptic scale systems like fronts, but larger than storm-scale systems like thunderstorms. Horizontal dimensions generally range from over ten kilometres to several hundred kilometres.
Dry line
The dry line is the boundary between dry and moist air masses east of mountain ranges with similar orientation to the Rockies, depicted at the leading edge of the dew point, or moisture, gradient. Near the surface, warm moist air is denser than warmer, drier air, and so wedges under the drier air in a manner similar to a cold front wedging under warmer air. When the warm moist air wedged under the drier mass heats up, it becomes less dense, rises, and sometimes forms thunderstorms. At higher altitudes, the warm moist air is less dense than the cooler, drier air and the boundary slope reverses. In the vicinity of the reversal aloft, severe weather is possible, especially when a triple point is formed with a cold front.
During daylight hours, drier air from aloft drifts down to the surface, causing an apparent movement of the dryline eastward. At night, the boundary reverts to the west as there is no longer any solar heating to help mix the lower atmosphere. If enough moisture converges upon the dryline, it can be the focus of afternoon and evening thunderstorms. A dry line is depicted on United States surface analyses as a brown line with scallops, or bumps, facing into the moist sector. Dry lines are one of the few surface fronts where the special shapes along the drawn boundary do not necessarily reflect the boundary's direction of motion.
Outflow boundaries and squall lines
Organized areas of thunderstorm activity not only reinforce pre-existing frontal zones, but they can outrun cold fronts. This outrunning occurs in a pattern where the upper-level jet splits into two streams. The resultant mesoscale convective system (MCS) forms at the point of the upper-level split in the wind pattern, in the area of best low-level inflow. The convection then moves east and equatorward into the warm sector, parallel to low-level thickness lines. When the convection is strong and linear or curved, the MCS is called a squall line, with the feature placed at the leading edge where the significant wind shifts and pressure rises. Even weaker and less organized areas of thunderstorms lead to locally cooler air and higher pressures, and outflow boundaries exist ahead of this type of activity. On surface analyses, squall lines are labeled "SQLN" or "SQUALL LINE", while outflow boundaries are depicted as troughs with a label of "OUTFLOW BOUNDARY" or "OUTFLOW BNDRY".
Sea and land breeze fronts
Sea breeze fronts occur on sunny days when the landmass warms the air above it to a temperature above that of the adjacent water. Similar boundaries form downwind of lakes and rivers during the day, and offshore of landmasses at night. Because the specific heat of water is so high, there is little diurnal change in water temperature, even on the sunniest days. By contrast, the land, with its lower specific heat, can warm or cool by several degrees in a matter of hours.
During the afternoon, air pressure decreases over the land as the warmer air rises. The relatively cooler air over the sea rushes in to replace it. The result is a relatively cool onshore wind. This process usually reverses at night, when the water temperature is higher relative to the landmass, leading to an offshore land breeze. However, if water temperatures are colder than the land at night, the sea breeze may continue, only somewhat abated. This is typically the case along the California coast, for example.
If enough moisture exists, thunderstorms can form along sea breeze fronts that then can send out outflow boundaries. This causes chaotic wind/pressure regimes if the steering flow is weak. Like all other surface features, sea breeze fronts lie inside troughs of low pressure.
Microscale features
Descending reflectivity core
A descending reflectivity core (DRC) is a meteorological phenomenon observed in supercell thunderstorms, characterized by a localized, small-scale area of enhanced radar reflectivity that descends from the echo overhang into the lower levels of the storm.
| Physical sciences | Meteorology: General | Earth science |
73222 | https://en.wikipedia.org/wiki/Parakeet | Parakeet | A parakeet is any one of many small- to medium-sized species of parrot, in multiple genera, that generally has long tail feathers.
Etymology and naming
The name parakeet is derived from the French word perroquet, which is reflected in some older spellings that are still sometimes encountered, including paroquet or paraquet. However, in modern French, perruche is used to refer to parakeets and similar-sized parrots.
In American English, the word parakeet usually refers to the budgerigar, which is one species of parakeet.
Summary
Parakeets comprise about 115 species of birds that are seed-eating parrots of small size, slender build, and long, tapering tails. The Australian budgerigar, also known as "budgie", Melopsittacus undulatus, is probably the most common parakeet. It was first described by zoologists in 1805. It is the most popular species of parakeet kept as a pet in North America and Europe.
The term "grass parakeet" (or grasskeet) refers to many small Australian parakeets native to grasslands such as the genus Neophema and the princess parrot. The Australian rosellas are also parakeets. Many of the smaller, long-tailed species of lories may be referred to as "lorikeets". The vernacular name ring-necked parakeet (not to be confused with the Australian ringneck) refers to a species of the genus Psittacula native to Africa and Asia that is popular as a pet and has become feral in many cities outside its natural range.
In aviculture, the term "conure" is used for small to medium-sized parakeets of the genera Aratinga, Pyrrhura, and a few other genera of the tribe Arini, which are mainly endemic to South America. As they are not all from one genus, taxonomists tend to avoid the term. Other South American species commonly called parakeets include the Brotogeris parakeets, the monk parakeet, and the lineolated parakeets, although lineolateds have short tails.
A larger species may be referred to as "parrot" or "parakeet" interchangeably. For example, "Alexandrine parrot" and "Alexandrine parakeet" are two common names for the same species, Psittacula eupatria, which is one of the largest species normally referred to as a parakeet.
Many different species of parakeets are bred and sold commercially as pets, the budgerigar being the third most popular pet in the world, after cats and dogs.
Breeding
Parakeets often breed more readily in groups, since the presence of other parakeets encourages a pair to breed; however, conflicts can arise between breeding pairs and individuals, especially if space is limited. For this reason, many breeders choose to breed in pairs, both to avoid conflicts and to know the offspring's parentage with certainty. Budgerigars lay an average of 4–6 eggs per clutch.
American population
There is a growing population of monk parakeets in Brooklyn and Queens, although the species has been reported in all five boroughs of New York City.
European population
Belgian population
An estimated 10,000 parakeets lived in Brussels, the capital of Belgium. The total made them one of the most populous birds in the city, behind only pigeons and sparrows.
Spain's parakeet control measures
According to a 2018 report, Spanish authorities drew up plans to curb the ever-growing population of parakeets, which reached 30,000 in locations such as Malaga.
United Kingdom
In December 2019, Steven Le Comber of Queen Mary University of London published an analysis in the Journal of Zoology based on geographic profiling methods. It concluded that the thriving rose-ringed parakeet population in the United Kingdom had grown from numerous small-scale accidental and intentional pet releases. Previous theories had included a pair released by Jimi Hendrix on Carnaby Street and an arrival in 1951, when Humphrey Bogart and Katharine Hepburn visited London with various animals to film The African Queen, set in the equatorial swamps of east Africa.
| Biology and health sciences | Psittaciformes | Animals |
73231 | https://en.wikipedia.org/wiki/Weather%20forecasting | Weather forecasting | Weather forecasting is the application of science and technology to predict the conditions of the atmosphere for a given location and time. People have attempted to predict the weather informally for millennia and formally since the 19th century.
Weather forecasts are made by collecting quantitative data about the current state of the atmosphere, land, and ocean and using meteorology to project how the atmosphere will change at a given place. Once calculated manually based mainly upon changes in barometric pressure, current weather conditions, and sky conditions or cloud cover, weather forecasting now relies on computer-based models that take many atmospheric factors into account. Human input is still required to pick the best possible model to base the forecast upon, which involves pattern recognition skills, teleconnections, knowledge of model performance, and knowledge of model biases.
The inaccuracy of forecasting is due to the chaotic nature of the atmosphere; the massive computational power required to solve the equations that describe the atmosphere, the land, and the ocean; the error involved in measuring the initial conditions; and an incomplete understanding of atmospheric and related processes. Hence, forecasts become less accurate as the difference between the current time and the time for which the forecast is being made (the range of the forecast) increases. The use of ensembles and model consensus helps narrow the error and provide confidence in the forecast.
There is a vast variety of end uses for weather forecasts. Weather warnings are important because they are used to protect lives and property. Forecasts based on temperature and precipitation are important to agriculture, and therefore to traders within commodity markets. Temperature forecasts are used by utility companies to estimate demand over coming days. On an everyday basis, many people use weather forecasts to determine what to wear on a given day. Since outdoor activities are severely curtailed by heavy rain, snow and wind chill, forecasts can be used to plan activities around these events, and to plan ahead and survive them.
Weather forecasting is a part of the economy. For example, in 2009, the US spent approximately $5.8 billion on it, producing benefits estimated at six times as much.
History
Ancient forecasting
In 650 BC, the Babylonians predicted the weather from cloud patterns as well as astrology. In about 350 BC, Aristotle described weather patterns in Meteorologica. Later, Theophrastus compiled a book on weather forecasting, called the Book of Signs. Chinese weather prediction lore extends at least as far back as 300 BC, which was also around the same time ancient Indian astronomers developed weather-prediction methods. In the New Testament, Jesus is quoted as referring to deciphering and understanding local weather patterns, by saying, "When evening comes, you say, 'It will be fair weather, for the sky is red', and in the morning, 'Today it will be stormy, for the sky is red and overcast.' You know how to interpret the appearance of the sky, but you cannot interpret the signs of the times."
In 904 AD, Ibn Wahshiyya's Nabatean Agriculture, translated into Arabic from an earlier Aramaic work, discussed the weather forecasting of atmospheric changes and signs from the planetary astral alterations; signs of rain based on observation of the lunar phases; and weather forecasts based on the movement of winds.
Ancient weather forecasting methods usually relied on observed patterns of events, also termed pattern recognition. For example, it was observed that if the sunset was particularly red, the following day often brought fair weather. This experience accumulated over the generations to produce weather lore. However, not all of these predictions prove reliable, and many of them have since been found not to stand up to rigorous statistical testing.
Modern methods
It was not until the invention of the electric telegraph in 1835 that the modern age of weather forecasting began. Before that, the fastest that distant weather reports could travel was around 160 kilometres per day (100 mi/d), but was more typically 60–120 kilometres per day (40–75 mi/day) (whether by land or by sea). By the late 1840s, the telegraph allowed reports of weather conditions from a wide area to be received almost instantaneously, allowing forecasts to be made from knowledge of weather conditions further upwind.
The two men credited with the birth of forecasting as a science were an officer of the Royal Navy Francis Beaufort and his protégé Robert FitzRoy. Both were influential men in British naval and governmental circles, and though ridiculed in the press at the time, their work gained scientific credence, was accepted by the Royal Navy, and formed the basis for all of today's weather forecasting knowledge.
Beaufort developed the Wind Force Scale and Weather Notation coding, which he was to use in his journals for the remainder of his life. He also promoted the development of reliable tide tables around British shores, and with his friend William Whewell, expanded weather record-keeping at 200 British coast guard stations.
Robert FitzRoy was appointed in 1854 as chief of a new department within the Board of Trade to deal with the collection of weather data at sea as a service to mariners. This was the forerunner of the modern Meteorological Office. All ship captains were tasked with collating data on the weather and computing it, with the use of tested instruments that were loaned for this purpose.
A storm in October 1859 that caused the loss of the Royal Charter inspired FitzRoy to develop charts to allow predictions to be made, which he called "forecasting the weather", thus coining the term "weather forecast". Fifteen land stations were established to use the telegraph to transmit to him daily reports of weather at set times leading to the first gale warning service. His warning service for shipping was initiated in February 1861, with the use of telegraph communications. The first daily weather forecasts were published in The Times in 1861. In the following year a system was introduced of hoisting storm warning cones at the principal ports when a gale was expected. The "Weather Book" which FitzRoy published in 1863 was far in advance of the scientific opinion of the time.
As the electric telegraph network expanded, allowing for the more rapid dissemination of warnings, a national observational network was developed, which could then be used to provide synoptic analyses. To shorten detailed weather reports into more affordable telegrams, senders encoded weather information in telegraphic code, such as the one developed by the U.S. Army Signal Corps. Instruments to continuously record variations in meteorological parameters using photography were supplied to the observing stations from Kew Observatory – these cameras had been invented by Francis Ronalds in 1845 and his barograph had earlier been used by FitzRoy.
To convey accurate information, it soon became necessary to have a standard vocabulary describing clouds; this was achieved by means of a series of classifications first devised by Luke Howard in 1802 and standardized in the International Cloud Atlas of 1896.
Numerical prediction
It was not until the 20th century that advances in the understanding of atmospheric physics led to the foundation of modern numerical weather prediction. In 1922, English scientist Lewis Fry Richardson published "Weather Prediction By Numerical Process", after finding notes and derivations he worked on as an ambulance driver in World War I. He described therein how small terms in the prognostic fluid dynamics equations governing atmospheric flow could be neglected, and a finite differencing scheme in time and space could be devised, to allow numerical prediction solutions to be found.
Richardson envisioned a large auditorium of thousands of people performing the calculations and passing them to others. However, the sheer number of calculations required was too large to be completed without the use of computers, and the size of the grid and time steps led to unrealistic results in deepening systems. It was later found, through numerical analysis, that this was due to numerical instability. The first computerised weather forecast was performed by a team composed of American meteorologists Jule Charney, Philip Duncan Thompson, Larry Gates, and Norwegian meteorologist Ragnar Fjørtoft, applied mathematician John von Neumann, and ENIAC programmer Klara Dan von Neumann. Practical use of numerical weather prediction began in 1955, spurred by the development of programmable electronic computers.
Broadcasts
The first ever daily weather forecasts were published in The Times on August 1, 1861, and the first weather maps were produced later in the same year. In 1911, the Met Office began issuing the first marine weather forecasts via radio transmission. These included gale and storm warnings for areas around Great Britain. In the United States, the first public radio forecasts were made in 1925 by Edward B. "E.B." Rideout, on WEEI, the Edison Electric Illuminating station in Boston. Rideout came from the U.S. Weather Bureau, as did WBZ weather forecaster G. Harold Noyes in 1931.
The world's first televised weather forecasts, including the use of weather maps, were experimentally broadcast by the BBC in November 1936. This was brought into practice in 1949, after World War II. George Cowling gave the first weather forecast while being televised in front of the map in 1954. In America, experimental television forecasts were made by James C. Fidler in Cincinnati in either 1940 or 1947 on the DuMont Television Network. In the late 1970s and early 1980s, John Coleman, the first weatherman for the American Broadcasting Company (ABC)'s Good Morning America, pioneered the use of on-screen weather satellite data and computer graphics for television forecasts. In 1982, Coleman partnered with Landmark Communications CEO Frank Batten to launch The Weather Channel (TWC), a 24-hour cable network devoted to national and local weather reports. Some weather channels have started broadcasting on live streaming platforms such as YouTube and Periscope to reach more viewers.
Numerical weather prediction
The basic idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The main inputs from country-based weather services are surface observations from automated weather stations at ground level over land and from weather buoys at sea. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports, or every six hours in SYNOP reports. Sites launch radiosondes, which rise through the depth of the troposphere and well into the stratosphere. Data from weather satellites are used in areas where traditional data sources are not available. Compared with similar data from radiosondes, the satellite data has the advantage of global coverage, but at a lower accuracy and resolution. Meteorological radar provides information on precipitation location and intensity, which can be used to estimate precipitation accumulations over time. Additionally, if a pulse Doppler weather radar is used, then wind speed and direction can be determined. These methods, however, leave an in-situ observational gap in the lower atmosphere (from 100 m to 6 km above ground level). To reduce this gap, in the late 1990s weather drones started to be considered for obtaining data from those altitudes. Research has been growing significantly since the 2010s, and weather-drone data may in future be added to numerical weather models.
Commerce provides pilot reports along aircraft routes, and ship reports along shipping routes. Research flights using reconnaissance aircraft fly in and around weather systems of interest such as tropical cyclones. Reconnaissance aircraft are also flown over the open oceans during the cold season into systems that cause significant uncertainty in forecast guidance, or are expected to be of high impact three–seven days into the future over the downstream continent.
Models are initialized using this observed data. The irregularly spaced observations are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms (usually an evenly spaced grid). The data are then used in the model as the starting point for a forecast. Commonly, the set of equations used to predict the physics and dynamics of the atmosphere are called primitive equations. These are initialized from the analysis data and rates of change are determined. The rates of change predict the state of the atmosphere a short time into the future. The equations are then applied to this new atmospheric state to find new rates of change, which predict the atmosphere at a yet further time into the future. This time stepping procedure is continually repeated until the solution reaches the desired forecast time.
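As an illustration only, the following Python sketch shows the shape of that time-stepping loop, assuming a single made-up state variable and an invented tendency function; real models evaluate the primitive equations over a three-dimensional grid of millions of points.

    # Minimal sketch of the numerical time-stepping loop described above.
    # The "atmosphere" here is one number and the tendency function is
    # invented for illustration; operational models compute rates of
    # change from the primitive equations on a 3-D grid.

    def tendency(state):
        # Hypothetical rate of change of the state, per hour.
        return -0.1 * state

    def integrate(initial_state, step_hours, forecast_hours):
        state = initial_state
        for _ in range(int(forecast_hours / step_hours)):
            # Rates of change are computed from the current state, then
            # used to advance the state a short time into the future.
            state = state + step_hours * tendency(state)
        return state

    print(integrate(10.0, step_hours=0.5, forecast_hours=24))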
The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability. Time steps for global models are on the order of tens of minutes, while time steps for regional models are between one and four minutes. The global models are run at varying times into the future. The Met Office's Unified Model is run six days into the future, the European Centre for Medium-Range Weather Forecasts model is run out to 10 days into the future, while the Global Forecast System model run by the Environmental Modeling Center is run 16 days into the future. The visual output produced by a model solution is known as a prognostic chart, or prog. The raw output is often modified before being presented as the forecast. This can be in the form of statistical techniques to remove known biases in the model, or of adjustment to take into account consensus among other numerical weather forecasts. MOS, or model output statistics, is a technique used to interpret numerical model output and produce site-specific guidance. This guidance is presented in coded numerical form, and can be obtained for nearly all National Weather Service reporting stations in the United States. As proposed by Edward Lorenz in 1963, long-range forecasts (those made at a range of two weeks or more) cannot definitively predict the state of the atmosphere, owing to the chaotic nature of the fluid dynamics equations involved. In numerical models, extremely small errors in initial values double roughly every five days for variables such as temperature and wind velocity.
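The quoted doubling rate implies exponential error growth with lead time. A rough sketch, assuming an illustrative initial error of 0.1 degrees:

    # Illustration of the error growth quoted above: initial-condition
    # errors that double every five days grow exponentially with lead time.
    initial_error = 0.1          # e.g. degrees Celsius; illustrative value
    doubling_days = 5.0
    for lead_days in (5, 10, 15):
        error = initial_error * 2 ** (lead_days / doubling_days)
        print(lead_days, "days:", round(error, 2))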
Essentially, a model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations—along with the ideal gas law—are used to evolve the density, pressure, and potential temperature scalar fields and the velocity vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some primitive-equation mesoscale models as well. The equations used are nonlinear partial differential equations, which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models use spectral methods for the horizontal dimensions and finite difference methods for the vertical dimension, while regional and other global models usually use finite-difference methods in all three dimensions.
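A minimal sketch of the finite-difference idea, assuming a toy one-dimensional advection problem with an invented initial field; this illustrates the numerical method in general, not any operational model.

    # Finite-difference sketch: one-dimensional advection of a
    # temperature-like field at constant wind speed, using a first-order
    # upwind scheme. Grid size, time step, and wind are illustrative.
    n, dx, dt, u = 50, 1.0, 0.5, 1.0   # grid points, spacing, time step, wind
    field = [0.0] * n
    field[10] = 1.0                     # an initial "blob" to be advected

    for _ in range(20):                 # march forward in time
        new = field[:]
        for i in range(1, n):
            # Upwind difference: information comes from the upstream side.
            new[i] = field[i] - u * dt / dx * (field[i] - field[i - 1])
        field = new

    # The blob's peak has moved downstream from index 10 toward index 20.
    print(max(range(n), key=lambda i: field[i]))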
Techniques
Persistence
The simplest method of forecasting the weather, persistence, relies upon today's conditions to forecast tomorrow's. This can be valid when the weather achieves a steady state, such as during the summer season in the tropics. The method depends strongly on the presence of a stagnant weather pattern, so it becomes inaccurate when conditions are fluctuating. It can be useful in both short-range and long-range forecasts.
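Persistence is simple enough to state in code. A minimal sketch, with illustrative field names:

    # Persistence forecasting in its entirety: tomorrow is forecast to be
    # the same as today. The dictionary keys are illustrative.
    def persistence_forecast(today):
        return dict(today)  # copy today's conditions forward unchanged

    print(persistence_forecast({"temp_c": 31, "sky": "scattered clouds"}))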
Barometer
Measurements of barometric pressure and the pressure tendency (the change of pressure over time) have been used in forecasting since the late 19th century. The larger the change in pressure, the larger the change in weather that can be expected. If the pressure drop is rapid, a low pressure system is approaching, and there is a greater chance of rain. Rapid pressure rises are associated with improving weather conditions, such as clearing skies.
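These rules of thumb translate naturally into a small rule-based sketch. The 2 hPa threshold below is an illustrative assumption, not a figure from the source:

    # Rule-of-thumb sketch of barometric forecasting as described above.
    # The 2 hPa (over ~3 hours) threshold is an assumed example value.
    def barometer_outlook(pressure_change_hpa):
        if pressure_change_hpa <= -2.0:
            return "rapid fall: low pressure approaching, greater chance of rain"
        if pressure_change_hpa >= 2.0:
            return "rapid rise: improving conditions, clearing skies"
        return "little change: weather likely to persist"

    print(barometer_outlook(-3.1))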
Observation
Along with pressure tendency, the condition of the sky is one of the more important parameters used to forecast weather in mountainous areas. Thickening of cloud cover or the invasion of a higher cloud deck is indicative of rain in the near future. High thin cirrostratus clouds can create halos around the sun or moon, which indicates an approach of a warm front and its associated rain. Morning fog portends fair conditions, as rainy conditions are preceded by wind or clouds that prevent fog formation. The approach of a line of thunderstorms could indicate the approach of a cold front. Cloud-free skies are indicative of fair weather for the near future. A bar can indicate a coming tropical cyclone. The use of sky cover in weather prediction has led to various weather lore over the centuries.
Nowcasting
The forecasting of the weather for the following six hours is often referred to as nowcasting. In this time range it is possible to forecast smaller features such as individual showers and thunderstorms with reasonable accuracy, as well as other features too small to be resolved by a computer model. A human given the latest radar, satellite and observational data will be able to make a better analysis of the small-scale features present and so will be able to make a more accurate forecast for the following few hours. However, there are now expert systems that use those data and mesoscale numerical models to make better extrapolations, including the evolution of those features in time. AccuWeather is known for its MinuteCast, a minute-by-minute precipitation forecast for the next two hours.
Atmospheric model
In the past, human forecasters were responsible for generating the weather forecast based upon available observations. Today, human input is generally confined to choosing a model based on various parameters, such as model biases and performance. Using a consensus of forecast models, as well as ensemble members of the various models, can help reduce forecast error. However, regardless of how small the average error becomes with any individual system, large errors within any particular piece of guidance are still possible on any given model run. Humans are required to interpret the model data into weather forecasts that are understandable to the end user. Humans can use knowledge of local effects that may be too small in size to be resolved by the model to add information to the forecast. While the increasing accuracy of forecasting models implies that humans may no longer be needed in the forecasting process at some point in the future, there is currently still a need for human intervention.
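A minimal sketch of the consensus idea, assuming three hypothetical models forecasting the same quantity; averaging tends to cancel their individual errors.

    # Sketch of a consensus forecast: the model names and values are
    # invented for illustration.
    model_forecasts = {"model_a": 21.5, "model_b": 19.0, "model_c": 20.3}
    consensus = sum(model_forecasts.values()) / len(model_forecasts)
    print(f"consensus high temperature: {consensus:.1f} C")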
Analog
The analog technique is a complex way of making a forecast, requiring the forecaster to remember a previous weather event that is expected to be mimicked by an upcoming event. What makes it a difficult technique to use is that there is rarely a perfect analog for an event in the future. Some call this type of forecasting pattern recognition. It remains a useful method of observing rainfall over data voids such as oceans, as well as for forecasting precipitation amounts and distribution in the future. A similar technique, known as teleconnections, is used in medium-range forecasting: systems in other locations are used to help pin down the location of another system within the surrounding regime. One example of teleconnections is the use of El Niño–Southern Oscillation (ENSO)-related phenomena.
Artificial intelligence
Initial attempts to use artificial intelligence for forecasting began in the 2010s. Huawei's Pangu-Weather model, Google's GraphCast, WindBorne's WeatherMesh model, Nvidia's FourCastNet, and the European Centre for Medium-Range Weather Forecasts' Artificial Intelligence/Integrated Forecasting System (AIFS) all appeared in 2022–2023. In 2024, AIFS started to publish real-time forecasts, showing particular skill at predicting hurricane tracks but performing worse than physics-based models on the intensity changes of such storms.
Such models use no physics-based atmosphere modeling or large language models. Instead, they learn purely from data such as the ECMWF re-analysis ERA5. These models typically require far less compute than physics-based models.
Microsoft's Aurora system offers global 10-day weather and 5-day air pollution forecasts (including NO and particulates) with claimed accuracy similar to physics-based models, but at orders-of-magnitude lower cost. Aurora was trained on more than a million hours of data from six weather/climate models.
In 2024, a group of researchers at Google's DeepMind AI research laboratories published a paper in Nature describing their machine-learning model, GenCast, which they reported produces more accurate forecasts than the best traditional weather forecasting systems.
Communicating forecasts to the public
Most end users of forecasts are members of the general public. Thunderstorms can create strong winds and dangerous lightning strikes that can lead to deaths, power outages, and widespread hail damage. Heavy snow or rain can bring transportation and commerce to a stand-still, as well as cause flooding in low-lying areas. Excessive heat or cold waves can sicken or kill those with inadequate utilities, and droughts can impact water usage and destroy vegetation.
Several countries employ government agencies to provide forecasts and watches/warnings/advisories to the public to protect life and property and maintain commercial interests. Knowledge of what the end user needs from a weather forecast must be taken into account to present the information in a useful and understandable way. Examples include the National Oceanic and Atmospheric Administration's National Weather Service (NWS) and Environment Canada's Meteorological Service (MSC). Traditionally, newspaper, television, and radio have been the primary outlets for presenting weather forecast information to the public. In addition, some cities had weather beacons. Increasingly, the internet is being used due to the vast amount of specific information that can be found. In all cases, these outlets update their forecasts on a regular basis.
Severe weather alerts and advisories
A major part of modern weather forecasting is the severe weather alerts and advisories that national weather services issue when severe or hazardous weather is expected. This is done to protect life and property. Some of the most commonly known severe weather advisories are the severe thunderstorm and tornado warnings, as well as the severe thunderstorm and tornado watches. Other forms of these advisories include winter weather, high wind, flood, tropical cyclone, and fog. Severe weather advisories and alerts are broadcast through the media, including radio, using emergency systems such as the Emergency Alert System, which break into regular programming.
Low temperature forecast
The low temperature forecast for the current day is calculated using the lowest temperature expected between 7 p.m. that evening and 7 a.m. the following morning. In other words, today's forecast low will most often occur early the next morning.
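A minimal sketch of that convention, assuming illustrative hourly readings from 7 p.m. through 7 a.m.:

    # "Today's low" is the minimum over the 7 p.m. to 7 a.m. window.
    # Thirteen hourly readings (degrees F) covering that window; values
    # are invented for illustration.
    hourly_temps_f = [54, 51, 49, 47, 45, 44, 43, 42, 41, 41, 40, 41, 43]
    forecast_low = min(hourly_temps_f)
    print("forecast low:", forecast_low)  # occurs early the next morning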
Specialist forecasting
There are a number of sectors with their own specific needs for weather forecasts and specialist services are provided to these users as given below:
Air traffic
Because the aviation industry is especially sensitive to the weather, accurate weather forecasting is essential. Fog or exceptionally low ceilings can prevent many aircraft from landing and taking off. Turbulence and icing are also significant in-flight hazards. Thunderstorms are a problem for all aircraft because of severe turbulence due to their updrafts and outflow boundaries, icing due to the heavy precipitation, as well as large hail, strong winds, and lightning, all of which can cause severe damage to an aircraft in flight. Volcanic ash is also a significant problem for aviation, as aircraft can lose engine power within ash clouds. On a day-to-day basis airliners are routed to take advantage of the jet stream tailwind to improve fuel efficiency. Aircrews are briefed prior to takeoff on the conditions to expect en route and at their destination. Additionally, airports often change which runway is being used to take advantage of a headwind. This reduces the distance required for takeoff, and eliminates potential crosswinds.
Marine
Commercial and recreational use of waterways can be limited significantly by wind direction and speed, wave periodicity and heights, tides, and precipitation. These factors can each influence the safety of marine transit. Consequently, a variety of codes have been established to efficiently transmit detailed marine weather forecasts to vessel pilots via radio, for example the MAFOR (marine forecast). Typical weather forecasts can be received at sea through the use of RTTY, Navtex and Radiofax.
Agriculture
Farmers rely on weather forecasts to decide what work to do on any particular day. For example, drying hay is only feasible in dry weather. Prolonged periods of dryness can ruin cotton, wheat, and corn crops. While corn crops can be ruined by drought, their dried remains can be used as a cattle feed substitute in the form of silage. Frosts and freezes play havoc with crops both during the spring and fall. For example, peach trees in full bloom can have their potential peach crop decimated by a spring freeze. Orange groves can suffer significant damage during frosts and freezes, regardless of their timing.
Forestry
Forecasting of wind, precipitation and humidity is essential for preventing and controlling wildfires. Indices such as the Forest Fire Weather Index and the Haines Index have been developed to predict the areas more at risk of fire from natural or human causes. Conditions for the development of harmful insects can also be predicted by forecasting the weather.
Utility companies
Electricity and gas companies rely on weather forecasts to anticipate demand, which can be strongly affected by the weather. They use the quantity termed the degree day to determine how strong a demand there will be for heating (heating degree day) or cooling (cooling degree day). These quantities are based on the daily average temperature relative to a base of 65 °F (18 °C): each degree Fahrenheit the daily mean falls below the base counts as one heating degree day, and each degree above it as one cooling degree day. In winter, severe cold weather can cause a surge in demand as people turn up their heating. Similarly, in summer a surge in demand can be linked with the increased use of air conditioning systems in hot weather. By anticipating a surge in demand, utility companies can purchase additional supplies of power or natural gas before the price increases, or in some circumstances, supplies are restricted through the use of brownouts and blackouts.
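A minimal sketch of the degree-day calculation described above, using the 65 °F base:

    # Heating and cooling degree days for one day, as described above.
    BASE_F = 65.0

    def degree_days(daily_mean_f):
        hdd = max(0.0, BASE_F - daily_mean_f)  # heating demand
        cdd = max(0.0, daily_mean_f - BASE_F)  # cooling demand
        return hdd, cdd

    print(degree_days(50.0))   # (15.0, 0.0): a heating day
    print(degree_days(80.0))   # (0.0, 15.0): a cooling day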
Other commercial companies
Increasingly, private companies pay for weather forecasts tailored to their needs so that they can increase their profits or avoid large losses. For example, supermarket chains may change the stocks on their shelves in anticipation of different consumer spending habits in different weather conditions. Weather forecasts can be used to invest in the commodity market, such as futures in oranges, corn, soybeans, and oil.
Military applications
United Kingdom
The British Royal Navy, working with the Met Office, has its own specialist branch of weather observers and forecasters, as part of the Hydrographic and Meteorological (HM) specialisation, who monitor and forecast operational conditions across the globe, to provide accurate and timely weather and oceanographic information to submarines, ships and Fleet Air Arm aircraft.
A mobile unit in the Royal Air Force, working with the Met Office, forecasts the weather for regions in which British and allied armed forces are deployed. A group based at Camp Bastion used to provide forecasts for the British armed forces in Afghanistan.
United States
Similar to the private sector, military weather forecasters present weather conditions to the war fighter community. Military weather forecasters provide pre-flight and in-flight weather briefs to pilots and provide real time resource protection services for military installations.
Naval forecasters cover the world's oceans and provide weather forecasts for ships. The United States Navy provides a special service for itself and the rest of the federal government by issuing forecasts for tropical cyclones across the Pacific and Indian Oceans through its Joint Typhoon Warning Center.
Within the United States, the 557th Weather Wing provides weather forecasting for the Air Force and the Army. Air Force forecasters cover air operations in both wartime and peacetime and provide Army support; United States Coast Guard marine science technicians provide ship forecasts for ice breakers and various other operations within their realm; and Marine forecasters provide support for ground- and air-based United States Marine Corps operations. All four of the mentioned military branches have their initial enlisted meteorology technical training at Keesler Air Force Base. Military and civilian forecasters actively cooperate in analyzing, creating and critiquing weather forecast products.
| Physical sciences | Meteorology: General | null |
73316 | https://en.wikipedia.org/wiki/Tacoma%20Narrows%20Bridge | Tacoma Narrows Bridge | The Tacoma Narrows Bridge is a pair of twin suspension bridges that span the Tacoma Narrows strait of Puget Sound in Pierce County, Washington. The bridges connect the city of Tacoma with the Kitsap Peninsula and carry State Route 16 (known as Primary State Highway 14 until 1964) over the strait. Historically, the name "Tacoma Narrows Bridge" has applied to the original bridge, nicknamed "Galloping Gertie", which opened in July 1940 but collapsed possibly because of aeroelastic flutter four months later, as well as to the successor of that bridge, which opened in 1950 and still stands today as the westbound lanes of the present-day two-bridge complex.
The original Tacoma Narrows Bridge opened on July 1, 1940. It received its nickname "Galloping Gertie" because of the vertical movement of the deck that construction workers observed during windy conditions. Although engineers, including engineering professor F. B. Farquharson, were hired to find ways to stop the odd movements, months of experiments were unsuccessful. The bridge became known for its pitching deck, and it collapsed into Puget Sound on the morning of November 7, 1940, under high wind conditions. Engineering issues, as well as the United States' involvement in World War II, postponed plans to replace the bridge for several years; the replacement bridge opened on October 14, 1950.
By 1990, population growth and development on the Kitsap Peninsula had caused traffic on the bridge to exceed its design capacity; as a result, in 1998 Washington voters approved a measure to support building a parallel bridge. After a series of protests and court battles, construction began in 2002 and the new bridge opened to carry eastbound traffic on July 16, 2007, while the 1950 bridge was reconfigured to carry westbound traffic.
At the time of their construction, both the 1940 and 1950 bridges were the third-longest suspension bridges in the world in terms of main span length, behind the Golden Gate Bridge and George Washington Bridge. The 1950 and 2007 bridges are as of 2017 the fifth-longest suspension bridge spans in the United States and the 43rd-longest in the world.
Tolls were charged on the bridge for the entire four-month service life of the original span, as well as the first 15 years of the 1950 bridge. In 1965, the bridge's construction bonds plus interest were paid off, and the state ceased toll collection on the bridge. Over 40 years later, tolls were reinstated as part of the financing of the twin span, and are at present collected only from vehicles traveling eastbound.
Original bridge (1940)
The desire for the construction of a bridge in this location dates back to 1889 with a Northern Pacific Railway proposal for a trestle bridge; however, it was only in the late 1920s that interest in this project was revived. In 1937, the Washington State legislature created the Washington State Toll Bridge Authority and appropriated $5,000 to study the request by Tacoma and Pierce County for a bridge over the Narrows. The bridge was designed by Leon Moisseiff and cost $6.4 million.
The first Tacoma Narrows Bridge opened to traffic on July 1, 1940. Its main span collapsed into the Tacoma Narrows four months later, on November 7, 1940, at 11:00 a.m. (Pacific time), possibly as a result of aeroelastic flutter caused by a wind of about 40 mph (64 km/h). The bridge collapse had lasting effects on science and engineering. In many undergraduate physics texts, the event is presented as an example of elementary forced resonance, with the wind providing an external periodic frequency that matched the natural structural frequency; the cause is still debated by engineers today. A contributing factor was its solid sides, which did not allow wind to pass through the bridge's deck. Its design thus allowed the bridge to catch the wind and sway, which ultimately took it down. It was the first suspension bridge to use solid I-beams to support the road deck; earlier bridges had incorporated trusses in their designs to let the wind pass through. Its failure also boosted research in the fields of bridge aerodynamics and aeroelasticity, which have influenced the designs of all the world's great long-span bridges built since 1940.
There were no human deaths in the collapse of the bridge. The only fatality was a Cocker Spaniel named Tubby, who perished after he was abandoned in a car on the bridge by his owner, Leonard Coatsworth. Professor Frederick Burt Farquharson, an engineer from the University of Washington who had been involved in the design of the bridge, tried to rescue Tubby but was bitten by the terrified dog when he attempted to remove him. The collapse of the bridge was recorded on Kodachrome 16 mm film by Barney Elliott and Harbine Monroe, owners of The Camera Shop in Tacoma, and shows Farquharson leaving the bridge after trying to rescue Tubby and making observations in the middle of the bridge. The film was subsequently sold to Paramount Studios, who then duplicated the footage for newsreels in black-and-white and distributed the film worldwide to movie theaters. Castle Films also received distribution rights for 8 mm home video.
Elliott and Monroe's original films of the construction and collapse of the bridge were shot on 16 mm Kodachrome color film, but most copies in circulation are in black and white because newsreels of the day copied the film onto 35 mm black-and-white stock. There were also film speed discrepancies between Monroe's and Elliott's footage: Monroe filmed at 24 frames per second, while Elliott filmed at 16 frames per second. As a result, most copies in circulation also show the bridge oscillating approximately 50% faster than real time, due to an assumption during conversion that the film was shot at 24 frames per second rather than the actual 16. In 1998, The Tacoma Narrows Bridge Collapse was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant". This footage is commonly shown to engineering, architecture, and physics students as a means to teach about engineering disaster.
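The playback-speed discrepancy is simple arithmetic, as the following sketch shows:

    # Film shot at 16 frames per second but converted as if shot at 24
    # appears 24 / 16 = 1.5 times real speed, i.e. roughly 50% too fast.
    shot_fps, playback_fps = 16, 24
    print(f"apparent speed: {playback_fps / shot_fps:.0%} of real time")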
The dismantling of the towers and side spans — having survived the collapse of the main span, but being damaged beyond repair — began shortly after the collapse and continued into May 1943. The United States' participation in World War II, as well as engineering and finance issues, delayed plans to replace the bridge.
Westbound bridge (1950)
After the infamous collapse of the original bridge, Professor Farquharson was commissioned again to test new designs for the bridge at the University of Washington, where the scale models were kept. Tests ensured the new design would behave differently in high wind conditions than the first, and construction began on April 12, 1948, following the completion of a financing and insurance plan. A major earthquake struck the construction site on April 13, 1949, but the only damage was to a cable that fell into the water and was recovered; a fire two months later on the west tower damaged equipment and tools but did not cause permanent damage. The towers were completed in July 1949 and the cables for the new bridge were finished on January 16, 1950. The current westbound bridge was designed and rebuilt with open trusses, stiffening struts and openings in the roadway to let wind through. It opened on October 14, 1950, and is 5,979 feet (1,822 m) long, 40 feet (12 m) longer than the first bridge. The new bridge cost $18 million to construct. Local residents nicknamed it Sturdy Gertie, as the oscillations that plagued the previous design had been eliminated. This bridge and its new parallel eastbound bridge are currently the fifth-longest suspension bridges in the United States.
When built, the westbound bridge was the third longest suspension bridge span in the world. Like other modern suspension bridges, the westbound bridge was built with steel plates that feature sharp entry edges rather than the flat plate sides used in the original Tacoma Narrows Bridge (see the suspension bridge article for an example).
The bridge was designed to handle 60,000 vehicles a day. It carried both westbound and eastbound traffic until the eastbound bridge opened on July 15, 2007. During the Hanukkah Eve windstorm of 2006, the bridge was closed for the first time in its operating history due to heavy winds, but reopened approximately six hours later.
Eastbound bridge (2007)
In 1998, voters in several Washington counties approved an advisory measure to create a second Narrows span. Construction of the new span, which carries eastbound traffic parallel to the existing bridge, began on October 4, 2002, and was completed in July 2007. The Washington State Department of Transportation (WSDOT) signed a design-and-construction agreement with Bechtel and Kiewit Pacific Co., which formed a joint venture to build the eastbound span. WSDOT estimated that the project would cost $849 million to complete, but it ultimately finished under budget at $786 million because funds allocated to emergency scenarios went unused.
On July 15, 2007, the eastbound section opened with a ceremonial 5K run across the newly constructed bridge. About 10,000 people participated in the event. After the run finished, a ceremonial ribbon-cutting took place on the eastbound span. WSDOT estimated 40,000 people would attend the opening, but 60,000 ultimately attended. A select few Washington State government officials took part in the ribbon cutting, including Washington State Treasurer Michael Murphy, State Representative Pat Lantz, Chief of the Washington State Patrol John Batiste, and State Speaker of the House Frank Chopp. The bridge was dedicated in honor of State Representative Ruth Fisher and State Senator Robert "Bob" Oke, a South Kitsap resident and one of the main proponents of building the second span across Puget Sound between the Kitsap Peninsula and Tacoma.
The eastbound bridge's main span makes it the fifth-largest suspension bridge in the United States. In comparison, the Golden Gate Bridge in San Francisco has a total length of about 2,737 m (1.7 miles).
WSDOT collects a toll before entering the eastbound span. Tolls currently are $4.50 for "Good to Go" account holders with in-vehicle transponders, $5.50 for cash/credit card customers, and $6.50 for those who choose Pay-By-Mail. The existing span had been free of tolls since 1965. The new bridge was the first facility to use the new Good To Go electronic toll collection system. Tolls on the bridge are expected to pay off the loans and deferred sales tax by 2033.
| Technology | Bridges | null |
73321 | https://en.wikipedia.org/wiki/Avalanche | Avalanche | An avalanche is a rapid flow of snow down a slope, such as a hill or mountain. Avalanches can be triggered spontaneously, by factors such as increased precipitation or snowpack weakening, or by external means such as humans, other animals, and earthquakes. Primarily composed of flowing snow and air, large avalanches have the capability to capture and move ice, rocks, and trees.
Avalanches occur in two general forms, or combinations thereof: slab avalanches made of tightly packed snow, triggered by a collapse of an underlying weak snow layer, and loose snow avalanches made of looser snow. After being set off, avalanches usually accelerate rapidly and grow in mass and volume as they capture more snow. If an avalanche moves fast enough, some of the snow may mix with the air, forming a powder snow avalanche.
Though they appear to share similarities, avalanches are distinct from slush flows, mudslides, rock slides, and serac collapses. They are also different from large scale movements of ice. Avalanches can happen in any mountain range that has an enduring snowpack. They are most frequent in winter or spring, but may occur at any time of the year. In mountainous areas, avalanches are among the most serious natural hazards to life and property, so great efforts are made in avalanche control. There are many classification systems for the different forms of avalanches. Avalanches can be described by their size, destructive potential, initiation mechanism, composition, and dynamics.
Formation
Most avalanches occur spontaneously during storms under increased load due to snowfall and/or erosion. Metamorphic changes in the snowpack, such as melting due to solar radiation, are the second-largest cause of natural avalanches. Other natural causes include rain, earthquakes, rockfall, and icefall. Artificial triggers of avalanches include skiers, snowmobiles, and controlled explosive work. Contrary to popular belief, avalanches are not triggered by loud sound; the pressure from sound is orders of magnitude too small to trigger an avalanche.
Avalanche initiation can start at a point with only a small amount of snow moving initially; this is typical of wet snow avalanches or avalanches in dry unconsolidated snow. However, if the snow has sintered into a stiff slab overlying a weak layer, then fractures can propagate very rapidly, so that a large volume of snow, possibly thousands of cubic metres, can start moving almost simultaneously.
A snowpack will fail when the load exceeds the strength. The load is straightforward: it is the weight of the snow. However, the strength of the snowpack is much more difficult to determine and is extremely heterogeneous. It varies in detail with the properties of the snow grains (size, density, morphology, temperature, water content) and the properties of the bonds between the grains. These properties may all metamorphose in time according to the local humidity, water vapour flux, temperature and heat flux. The top of the snowpack is also extensively influenced by incoming radiation and the local air flow. One of the aims of avalanche research is to develop and validate computer models that can describe the evolution of the seasonal snowpack over time. A complicating factor is the complex interaction of terrain and weather, which causes significant spatial and temporal variability of the depths, crystal forms, and layering of the seasonal snowpack.
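The load-versus-strength idea can be sketched as a crude factor-of-safety calculation. The uniform-slab formula and all the numbers below are illustrative assumptions, since, as noted above, real snowpacks are highly heterogeneous:

    import math

    # Crude sketch: a slab fails when shear stress on a buried weak layer
    # exceeds that layer's shear strength. Uniform density and
    # slope-normal thickness are simplifying assumptions.
    def slab_shear_stress(density_kg_m3, thickness_m, slope_deg):
        g = 9.81  # gravitational acceleration, m/s^2
        return density_kg_m3 * g * thickness_m * math.sin(math.radians(slope_deg))

    stress = slab_shear_stress(200.0, 0.8, 38.0)   # Pa; illustrative values
    strength = 1200.0                              # Pa; illustrative value
    print("factor of safety:", round(strength / stress, 2))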
Slab avalanches
Slab avalanches are formed frequently in snow that has been deposited, or redeposited by wind. They have the characteristic appearance of a block (slab) of snow cut out from its surroundings by fractures. Elements of slab avalanches include a crown fracture at the top of the start zone, flank fractures on the sides of the start zones, and a fracture at the bottom called the stauchwall. The crown and flank fractures are vertical walls in the snow delineating the snow that was entrained in the avalanche from the snow that remained on the slope. Slabs can vary in thickness from a few centimetres to three metres. Slab avalanches account for around 90% of avalanche-related fatalities.
Powder snow avalanches
The largest avalanches form turbulent suspension currents known as powder snow avalanches or mixed avalanches, a kind of gravity current. These consist of a powder cloud that overlies a dense avalanche. They can form from any type of snow or initiation mechanism, but usually occur with fresh dry powder. They can exceed speeds of 300 km/h (190 mph) and masses of 1,000,000 tons; their flows can travel long distances along flat valley bottoms and even uphill for short distances.
Wet snow avalanches
In contrast to powder snow avalanches, wet snow avalanches are a low velocity suspension of snow and water, with the flow confined to the track surface (McClung, 1999, p. 108). The low speed of travel is due to the friction between the sliding surface of the track and the water saturated flow. Despite the low speed of travel (≈10–40 km/h), wet snow avalanches are capable of generating powerful destructive forces, due to the large mass and density. The body of the flow of a wet snow avalanche can plough through soft snow, and can scour boulders, earth, trees, and other vegetation; leaving exposed and often scored ground in the avalanche track. Wet snow avalanches can be initiated from either loose snow releases, or slab releases, and only occur in snowpacks that are water saturated and isothermally equilibrated to the melting point of water. The isothermal characteristic of wet snow avalanches has led to the secondary term of isothermal slides found in the literature (for example in Daffern, 1999, p. 93). At temperate latitudes wet snow avalanches are frequently associated with climatic avalanche cycles at the end of the winter season, when there is significant daytime warming.
Ice avalanche
An ice avalanche occurs when a large piece of ice, such as from a serac or calving glacier, falls onto ice (such as the Khumbu Icefall), triggering a movement of broken ice chunks. The resulting movement is more analogous to a rockfall or a landslide than a snow avalanche. They are typically very difficult to predict and almost impossible to mitigate.
Avalanche pathway
As an avalanche moves down a slope it follows a certain pathway that depends on the slope's steepness and the volume of snow and ice involved in the mass movement. The origin of an avalanche is called the starting point, which typically lies on a 30–45 degree slope. The body of the pathway is called the track, which usually runs over a 20–30 degree slope. When the avalanche loses its momentum and eventually stops, it has reached the runout zone, which usually occurs where the slope steepness falls below about 20 degrees. These angles are not hard thresholds, because each avalanche is unique, depending on the stability of the snowpack from which it was derived as well as the environmental or human influences that triggered the mass movement.
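A minimal sketch of those slope-angle ranges, treating the figures from the text as soft cutoffs rather than physical laws:

    # Classify a slope angle into the parts of an avalanche path using
    # the approximate ranges given above; real paths vary.
    def path_zone(slope_deg):
        if 30 <= slope_deg <= 45:
            return "starting point"
        if 20 <= slope_deg < 30:
            return "track"
        if slope_deg < 20:
            return "runout zone"
        return "above the typical starting-zone range"

    print(path_zone(35), path_zone(25), path_zone(12))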
Injuries and deaths
People caught in avalanches can die from suffocation, trauma, or hypothermia. From the winter of 1950–1951 through the winter of 2020–2021, 1,169 people died in avalanches in the United States. For the 11-year period ending April 2006, 445 people died in avalanches throughout North America. On average, 28 people die in avalanches every winter in the United States.
In 2001 it was reported that globally an average of 150 people die each year from avalanches. From 2014 to 2024, the majority of those killed in avalanches in the United States were skiing (91), followed by snowmobiling (71), snowshoeing/climbing/hiking (38), and snowboarding (20). Three of the deadliest recorded avalanches have killed over a thousand people each.
Terrain, snowpack, weather
Doug Fesler and Jill Fredston developed a conceptual model of the three primary elements of avalanches: terrain, weather, and snowpack. Terrain describes the places where avalanches occur, weather describes the meteorological conditions that create the snowpack, and snowpack describes the structural characteristics of snow that make avalanche formation possible.
Terrain
Avalanche formation requires a slope shallow enough for snow to accumulate but steep enough for the snow to accelerate once set in motion by the combination of mechanical failure (of the snowpack) and gravity. The angle of the slope that can hold snow, called the angle of repose, depends on a variety of factors, such as crystal form and moisture content. Some forms of drier and colder snow will only stick to shallower slopes, while wet and warm snow can bond to very steep surfaces. In coastal mountains, such as the Cordillera del Paine region of Patagonia, deep snowpacks collect on vertical and even overhanging rock faces. The slope angle that can allow moving snow to accelerate depends on a variety of factors such as the snow's shear strength (which is itself dependent upon crystal form) and the configuration of layers and inter-layer interfaces.
The snowpack on slopes with sunny exposures is strongly influenced by sunshine. Diurnal cycles of thawing and refreezing can stabilize the snowpack by promoting settlement. Strong freeze-thaw cycles result in the formation of surface crusts during the night and of unstable surface snow during the day. Slopes in the lee of a ridge or of another wind obstacle accumulate more snow and are more likely to include pockets of deep snow, wind slabs, and cornices, all of which, when disturbed, may result in avalanche formation. Conversely, the snowpack on a windward slope is often much shallower than on a lee slope.
Avalanches and avalanche paths share common elements: a start zone where the avalanche originates, a track along which the avalanche flows, and a runout zone where the avalanche comes to rest. The debris deposit is the accumulated mass of the avalanched snow once it has come to rest in the run-out zone. Many small avalanches form in a given avalanche path every year, but most of these avalanches do not run the full vertical or horizontal length of the path. The frequency with which avalanches form in a given area is known as the return period.
The start zone of an avalanche must be steep enough to allow snow to accelerate once set in motion; additionally, convex slopes are less stable than concave slopes, because of the disparity between the tensile strength of snow layers and their compressive strength. The composition and structure of the ground surface beneath the snowpack influence the stability of the snowpack, acting as either a source of strength or of weakness. Avalanches are unlikely to form in very thick forests, but boulders and sparsely distributed vegetation can create weak areas deep within the snowpack through the formation of strong temperature gradients. Full-depth avalanches (avalanches that sweep a slope virtually clean of snow cover) are more common on slopes with smooth ground, such as grass or rock slabs.
Generally speaking, avalanches follow drainages down-slope, frequently sharing drainage features with summertime watersheds. At and below tree line, avalanche paths through drainages are well defined by vegetation boundaries called trim lines, which occur where avalanches have removed trees and prevented regrowth of large vegetation. Engineered drainages, such as the avalanche dam on Mount Stephen in Kicking Horse Pass, have been constructed to protect people and property by redirecting the flow of avalanches. Deep debris deposits from avalanches will collect in catchments at the terminus of a run out, such as gullies and river beds.
Slopes flatter than 25 degrees or steeper than 60 degrees typically have a lower incidence of avalanches. Human-triggered avalanches have the greatest incidence when the snow's angle of repose is between 35 and 45 degrees; the critical angle, the angle at which human-triggered avalanches are most frequent, is 38 degrees. When the incidence of human triggered avalanches is normalized by the rates of recreational use, however, hazard increases uniformly with slope angle, and no significant difference in hazard for a given exposure direction can be found. The rule of thumb is: A slope that is flat enough to hold snow but steep enough to ski has the potential to generate an avalanche, regardless of the angle.
Snowpack structure and characteristics
The snowpack is composed of ground-parallel layers that accumulate over the winter. Each layer contains ice grains that are representative of the distinct meteorological conditions during which the snow formed and was deposited. Once deposited, a snow layer continues to evolve under the influence of the meteorological conditions that prevail after deposition.
For an avalanche to occur, it is necessary that a snowpack have a weak layer (or instability) below a slab of cohesive snow. In practice the formal mechanical and structural factors related to snowpack instability are not directly observable outside of laboratories, thus the more easily observed properties of the snow layers (e.g. penetration resistance, grain size, grain type, temperature) are used as index measurements of the mechanical properties of the snow (e.g. tensile strength, friction coefficients, shear strength, and ductile strength). This results in two principal sources of uncertainty in determining snowpack stability based on snow structure: First, both the factors influencing snow stability and the specific characteristics of the snowpack vary widely within small areas and time scales, resulting in significant difficulty extrapolating point observations of snow layers across different scales of space and time. Second, the relationship between readily observable snowpack characteristics and the snowpack's critical mechanical properties has not been completely developed.
While the deterministic relationship between snowpack characteristics and snowpack stability is still a matter of ongoing scientific study, there is a growing empirical understanding of the snow composition and deposition characteristics that influence the likelihood of an avalanche. Observation and experience have shown that newly fallen snow requires time to bond with the snow layers beneath it, especially if the new snow falls during very cold and dry conditions. If ambient air temperatures are cold enough, shallow snow above or around boulders, plants, and other discontinuities in the slope weakens from rapid crystal growth that occurs in the presence of a critical temperature gradient. Large, angular snow crystals are indicators of weak snow, because such crystals have fewer bonds per unit volume than small, rounded crystals that pack tightly together. Consolidated snow is less likely to slough than loose powdery layers or wet isothermal snow; however, consolidated snow is a necessary condition for the occurrence of slab avalanches, and persistent instabilities within the snowpack can hide below well-consolidated surface layers. Uncertainty associated with the empirical understanding of the factors influencing snow stability leads most professional avalanche workers to recommend conservative use of avalanche terrain relative to current snowpack instability.
Weather
Avalanches only occur in a standing snowpack. Typically winter seasons at high latitudes, high altitudes, or both have weather that is sufficiently unsettled and cold enough for precipitated snow to accumulate into a seasonal snowpack. Continentality, through its potentiating influence on the meteorological extremes experienced by snowpacks, is an important factor in the evolution of instabilities and the consequent occurrence of avalanches; maritime climates, by moderating those extremes, favor faster stabilization of the snowpack after storm cycles. The evolution of the snowpack is critically sensitive to small variations within the narrow range of meteorological conditions that allow for the accumulation of snow into a snowpack. Among the critical factors controlling snowpack evolution are: heating by the sun, radiational cooling, vertical temperature gradients in standing snow, snowfall amounts, and snow types. Generally, mild winter weather will promote the settlement and stabilization of the snowpack; conversely, very cold, windy, or hot weather will weaken the snowpack.
At temperatures close to the freezing point of water, or during times of moderate solar radiation, a gentle freeze-thaw cycle will take place. The melting and refreezing of water in the snow strengthens the snowpack during the freezing phase and weakens it during the thawing phase. A rapid rise in temperature, to a point significantly above the freezing point of water, may cause avalanche formation at any time of year.
Persistent cold temperatures can either prevent new snow from stabilizing or destabilize the existing snowpack. Cold air temperatures on the snow surface produce a temperature gradient in the snow, because the ground temperature at the base of the snowpack is usually around 0 °C, and the ambient air temperature can be much colder. When a temperature gradient greater than 10 °C per vertical meter of snow is sustained for more than a day, angular crystals called depth hoar or facets begin forming in the snowpack because of rapid moisture transport along the temperature gradient. These angular crystals, which bond poorly to one another and to the surrounding snow, often become a persistent weakness in the snowpack. When a slab lying on top of a persistent weakness is loaded by a force greater than the strength of the slab and persistent weak layer, the persistent weak layer can fail and generate an avalanche.
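The gradient criterion above is easy to state quantitatively. The sketch below is a minimal illustration, not a forecasting tool; the threshold constant follows the rule of thumb in the text, and the example temperatures are illustrative.

```python
def temperature_gradient(t_surface_c: float, t_ground_c: float,
                         snow_depth_m: float) -> float:
    """Mean temperature gradient across the snowpack, in degrees C per metre."""
    return abs(t_surface_c - t_ground_c) / snow_depth_m

# Rule-of-thumb threshold from the text: sustained gradients above
# ~10 degrees C per vertical metre favour depth-hoar (facet) growth.
FACETING_THRESHOLD = 10.0

# Illustrative case: snow surface near an air temperature of -15 C,
# base of the snowpack near 0 C.
for depth in (0.5, 1.0, 2.0):
    g = temperature_gradient(-15.0, 0.0, depth)
    status = "faceting likely" if g > FACETING_THRESHOLD else "subcritical"
    print(f"depth {depth:.1f} m: {g:4.1f} C/m -> {status}")
```

Note that a shallow snowpack reaches the critical gradient much sooner than a deep one under the same temperatures, which is why thin snow around boulders and plants is singled out as a source of weakness.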
Any wind stronger than a light breeze can contribute to a rapid accumulation of snow on sheltered slopes downwind. Wind slabs form quickly and, if present, weaker snow below the slab may not have time to adjust to the new load. Even on a clear day, wind can quickly load a slope with snow by blowing snow from one place to another. Top-loading occurs when wind deposits snow from the top of a slope; cross-loading occurs when wind deposits snow parallel to the slope. When a wind blows over the top of a mountain, the leeward, or downwind, side of the mountain experiences top-loading, from the top to the bottom of that lee slope. When the wind blows across a ridge that leads up the mountain, the leeward side of the ridge is subject to cross-loading. Cross-loaded wind-slabs are usually difficult to identify visually.
Snowstorms and rainstorms are important contributors to avalanche danger. Heavy snowfall will cause instability in the existing snowpack, both because of the additional weight and because the new snow has insufficient time to bond to underlying snow layers. Rain has a similar effect. In the short term, rain causes instability because, like a heavy snowfall, it imposes an additional load on the snowpack; and once rainwater seeps down through the snow, it acts as a lubricant, reducing the natural friction between snow layers that holds the snowpack together. Most avalanches happen during or soon after a storm.
Daytime exposure to sunlight will rapidly destabilize the upper layers of the snowpack if the sunlight is strong enough to melt the snow, thereby reducing its hardness. During clear nights, the snowpack can re-freeze when ambient air temperatures fall below freezing, through long-wave radiative cooling, or both. Radiative heat loss occurs when the night air is significantly cooler than the snowpack, and the heat stored in the snow is re-radiated into the atmosphere.
Dynamics
When a slab avalanche forms, the slab disintegrates into increasingly smaller fragments as the snow travels downhill. If the fragments become small enough the outer layer of the avalanche, called a saltation layer, takes on the characteristics of a fluid. When sufficiently fine particles are present they can become airborne and, given a sufficient quantity of airborne snow, this portion of the avalanche can become separated from the bulk of the avalanche and travel a greater distance as a powder snow avalanche. Scientific studies using radar, following the 1999 Galtür avalanche disaster, confirmed the hypothesis that a saltation layer forms between the surface and the airborne components of an avalanche, which can also separate from the bulk of the avalanche.
The driving force on an avalanche is the component of its weight parallel to the slope; as the avalanche progresses, any unstable snow in its path will tend to become incorporated, thereby increasing the overall weight. This force increases as the steepness of the slope increases, and diminishes as the slope flattens. Resisting this are a number of components that are thought to interact with each other: the friction between the avalanche and the surface beneath; friction between the air and snow within the fluid; fluid-dynamic drag at the leading edge of the avalanche; shear resistance between the avalanche and the air through which it is passing; and shear resistance between the fragments within the avalanche itself. An avalanche will continue to accelerate until the resistance exceeds the forward force.
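In the simplest idealization of this balance (a sketch only, with basal Coulomb friction standing in for the several resisting components named above), the net downslope force on a sliding block of mass $m$ on a slope of angle $\theta$ is

$$F_{\text{net}} = \underbrace{m g \sin\theta}_{\text{driving}} \;-\; \underbrace{\mu\, m g \cos\theta}_{\text{basal friction}} \;-\; F_{\text{drag}}(v),$$

so the block accelerates while $F_{\text{net}} > 0$ and approaches a terminal speed once the velocity-dependent drag $F_{\text{drag}}$ grows to balance the driving term.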
Modeling
Attempts to model avalanche behaviour date from the early 20th century, notably the work of Professor Lagotala in preparation for the 1924 Winter Olympics in Chamonix. His method was developed by A. Voellmy and popularised following the publication in 1955 of his Über die Zerstörungskraft von Lawinen (On the Destructive Force of Avalanches).
Voellmy used a simple empirical formula, treating an avalanche as a sliding block of snow moving with a drag force that was proportional to the square of the speed of its flow. In a commonly cited form of this relation, the terminal velocity $v$ of the block satisfies

$$v^2 = \xi\, h\, (\sin\theta - \mu \cos\theta),$$

where $h$ is the mean flow depth, $\theta$ the slope angle, $\mu$ a dry (Coulomb) friction coefficient, and $\xi$ a turbulent friction coefficient.
He and others subsequently derived other formulae that take other factors into account, with the Voellmy-Salm-Gubler and the Perla-Cheng-McClung models becoming most widely used as simple tools to model flowing (as opposed to powder snow) avalanches.
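As a numerical illustration of the Voellmy-type relation above, the sketch below evaluates the terminal velocity for plausible parameter values. The defaults chosen for mu and xi are illustrative orders of magnitude, not calibrated model coefficients.

```python
import math

def voellmy_terminal_velocity(slope_deg: float, flow_depth_m: float,
                              mu: float = 0.2, xi: float = 1000.0) -> float:
    """Terminal velocity (m/s) from the Voellmy-type relation
    v^2 = xi * h * (sin(theta) - mu * cos(theta)).
    Returns 0 when friction exceeds the driving term (no steady flow)."""
    theta = math.radians(slope_deg)
    v_squared = xi * flow_depth_m * (math.sin(theta) - mu * math.cos(theta))
    return math.sqrt(v_squared) if v_squared > 0 else 0.0

# Illustrative case: a 1.5 m deep flow on a 30-degree track.
v = voellmy_terminal_velocity(30.0, 1.5)
print(f"terminal velocity ~ {v:.0f} m/s ({v * 3.6:.0f} km/h)")
```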
Since the 1990s many more sophisticated models have been developed. In Europe much of the recent work was carried out as part of the SATSIE (Avalanche Studies and Model Validation in Europe) research project supported by the European Commission, which produced the leading-edge MN2L model, now in use with the Service Restauration des Terrains en Montagne (the French service for restoration of mountain terrain), and D2FRAM (Dynamical Two-Flow-Regime Avalanche Model), which was still undergoing validation as of 2007. Other known models are the SAMOS-AT avalanche simulation software and the RAMMS software.
Human involvement
How to prevent avalanches
Preventative measures are employed in areas where avalanches pose a significant threat to people, such as ski resorts, mountain towns, roads, and railways. There are several ways to prevent avalanches and lessen their power: active measures reduce the likelihood and size of avalanches by disrupting the structure of the snowpack, while passive measures reinforce and stabilize the snowpack in situ. The simplest active measure is repeatedly traveling on a snowpack as snow accumulates; this can be done by boot-packing, ski-cutting, or machine grooming. Explosives are used extensively to prevent avalanches, by triggering smaller avalanches that break down instabilities in the snowpack and by removing overburden that could feed larger avalanches. Explosive charges are delivered by a number of methods, including hand-tossed charges, helicopter-dropped bombs, Gazex concussion lines, and ballistic projectiles launched by air cannons and artillery. Passive preventive systems such as snow fences and light walls can be used to direct the placement of snow. Snow builds up around a fence, especially on the side facing the prevailing winds, while downwind of the fence snow build-up is lessened, because the wind has already dropped part of its load at the fence and picks up snow that is already deposited downwind. When there is a sufficient density of trees, they can greatly reduce the strength of avalanches: they hold snow in place, and when an avalanche does release, the impact of the snow against the trees slows it down. Trees can either be planted or conserved, such as in the building of a ski resort, to reduce the strength of avalanches.
In turn, socio-environmental changes can influence the occurrence of damaging avalanches. Studies linking changes in land-use and land-cover patterns to the evolution of snow avalanche damage in mid-latitude mountains show the important role played by vegetation cover: damage increases when protective forest is cleared (because of demographic growth, intensive grazing, or industrial and legal causes), and decreases when a traditional land-management system based on overexploitation is transformed into one based on land marginalization and reforestation, something that has happened mainly since the mid-20th century in mountain environments of developed countries.
Mitigation
In many areas, regular avalanche tracks can be identified and precautions can be taken to minimize damage, such as the prevention of development in these areas. To mitigate the effect of avalanches, the construction of artificial barriers can be very effective in reducing avalanche damage. There are several types. One kind of barrier (snow net) uses a net strung between poles that are anchored by guy wires in addition to their foundations. These barriers are similar to those used for rockslides. Another type of barrier is a rigid fence-like structure (snow fence), which may be constructed of steel, wood, or pre-stressed concrete. They usually have gaps between the beams and are built perpendicular to the slope, with reinforcing beams on the downhill side. Rigid barriers are often considered unsightly, especially when many rows must be built. They are also expensive and vulnerable to damage from falling rocks in the warmer months. In addition to industrially manufactured barriers, landscaped barriers, called avalanche dams, stop or deflect avalanches with their weight and strength. These barriers are made out of concrete, rocks, or earth. They are usually placed right above the structure, road, or railway that they are trying to protect, although they can also be used to channel avalanches into other barriers. Occasionally, earth mounds are placed in the avalanche's path to slow it down. Finally, along transportation corridors, large shelters, called snow sheds, can be built directly in the slide path of an avalanche to protect traffic from avalanches.
Early warning systems
Warning systems can detect avalanches which develop slowly, such as ice avalanches caused by icefalls from glaciers. Interferometric radars, high-resolution cameras, or motion sensors can monitor unstable areas over the long term, from days to years. Experts interpret the recorded data and are able to recognize upcoming ruptures in order to initiate appropriate measures. Such systems (e.g. the monitoring of the Weissmies glacier in Switzerland) can recognize events several days in advance.
Alarm systems
Modern radar technology enables the monitoring of large areas and the localization of avalanches in any weather conditions, by day and by night. Complex alarm systems are able to detect avalanches within a short time in order to close endangered areas (e.g. roads and railways) or to evacuate them (e.g. construction sites). An example of such a system is installed on the only access road of Zermatt in Switzerland. Two radars monitor the slope of a mountain above the road. The system automatically closes the road by activating several barriers and traffic lights within seconds, so that no people are harmed.
Survival, rescue, and recovery
Avalanche accidents are broadly differentiated into two categories: accidents in recreational settings, and accidents in residential, industrial, and transportation settings. This distinction is motivated by the observed difference in the causes of avalanche accidents in the two settings. In the recreational setting most accidents are caused by the people involved in the avalanche. In a 1996 study, Jamieson et al. (pages 7–20) found that 83% of all avalanches in the recreational setting were caused by those who were involved in the accident. In contrast, all the accidents in the residential, industrial, and transportation settings were due to spontaneous natural avalanches. Because of the difference in the causes of avalanche accidents, and the activities pursued in the two settings, avalanche and disaster management professionals have developed two related preparedness, rescue, and recovery strategies for each of the settings.
Notable avalanches
Two avalanches occurred in March 1910 in the Cascade and Selkirk Mountain ranges; on 1 March the Wellington avalanche killed 96 in Washington state, United States. Three days later 62 railroad workers were killed in the Rogers Pass avalanche in British Columbia, Canada.
During World War I, an estimated 40,000 to 80,000 soldiers died as a result of avalanches during the mountain campaign in the Alps at the Austrian-Italian front, many of which were caused by artillery fire. Some 10,000 men, from both sides, died in avalanches in December 1916.
In the northern hemisphere winter of 1950–1951 approximately 649 avalanches were recorded in a three-month period throughout the Alps in Austria, France, Switzerland, Italy and Germany. This series of avalanches killed around 265 people and was termed the Winter of Terror.
The avalanche in Biały Jar occurred on 20 March 1968, sweeping away 24 people who were walking along the bottom of the Biały Jar ravine in the Giant Mountains. Five of them, who were thrown aside by the avalanche, managed to survive. The remaining 19 people – including 13 Russians, four citizens of East Germany, and two Polish citizens – lost their lives. A total of 1,100 people took part in the rescue operation.
A mountain climbing camp on Lenin Peak, in what is now Kyrgyzstan, was wiped out in 1990 when an earthquake triggered a large avalanche that overran the camp. Forty-three climbers were killed.
In 1993, the Bayburt Üzengili avalanche killed 60 individuals in Üzengili in the province of Bayburt, Turkey.
In a large avalanche in Montroc, France, in 1999, 300,000 cubic metres of snow slid on a 30° slope, achieving a speed in the region of 100 km/h. It killed 12 people in their chalets under 100,000 tons of snow, 5 metres deep. The mayor of Chamonix was convicted of second-degree murder for not evacuating the area, but received a suspended sentence.
The small Austrian village of Galtür was hit by the Galtür avalanche in 1999. The village was thought to be in a safe zone but the avalanche was exceptionally large and flowed into the village. Thirty-one people died.
On 1 December 2000, the Glory Bowl Avalanche formed on Mt. Glory which is located within the Teton Mountain Range in Wyoming, United States. Joel Roof was snowboarding recreationally in this backcountry, bowl-shaped run and triggered the avalanche. He was carried nearly 2,000 feet to the base of the mountain and was not successfully rescued.
On 28 January 2003, the Tatra Mountains avalanche swept away nine out of a thirteen-member group heading to the summit of Rysy in the Tatra Mountains. The participants of the trip were students from the I Leon Kruczkowski High School in Tychy and individuals associated with the school's sports club.
On 3 July 2022 a serac collapsed on the Marmolada Glacier, Italy, causing an avalanche that killed 11 alpinists and injured eight.
Classification of avalanches
European avalanche risk
In Europe, the avalanche risk is widely rated on a five-level scale (1 Low, 2 Moderate, 3 Considerable, 4 High, 5 Very High), which was adopted in April 1993 to replace the earlier non-standard national schemes. Descriptions were last updated in May 2003 to enhance uniformity.
In France, most avalanche deaths occur at risk levels 3 and 4. In Switzerland most occur at levels 2 and 3. It is thought that this may be due to national differences of interpretation when assessing the risks.
[1] Stability: generally described in more detail in the avalanche bulletin (regarding altitude, aspect, type of terrain, etc.)
[2] Additional load:
heavy: two or more skiers or boarders without spacing between them, a single hiker or climber, a grooming machine, avalanche blasting
light: a single skier or snowboarder smoothly linking turns and without falling, a group of skiers or snowboarders with a minimum 10 m gap between each person, a single person on snowshoes
Gradient:
gentle slopes: with an incline below about 30°
steep slopes: with an incline over 30°
very steep slopes: with an incline over 35°
extremely steep slopes: extreme in terms of the incline (over 40°), the terrain profile, proximity of the ridge, smoothness of underlying ground
European avalanche size table
North American Avalanche Danger Scale
In the United States and Canada, a five-level avalanche danger scale is used: Low, Moderate, Considerable, High, and Extreme. Descriptors vary depending on country.
Avalanche problems
There are nine different types of avalanche problems:
Storm slab
Wind slab
Wet slab
Persistent slab
Deep persistent slab
Loose dry
Loose wet
Glide avalanche
Cornice fall
Canadian classification for avalanche size
The Canadian classification for avalanche size is based upon the consequences of the avalanche. Half sizes are commonly used.
United States classification for avalanche size
The size of an avalanche is classified using two scales: size relative to destructive force (the D-scale) and size relative to the avalanche path (the R-scale). Both scales range from 1 to 5; on the D-scale, half sizes can be used.
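As an illustration of how the destructive (D) scale is commonly summarized, the sketch below pairs each size class with a shorthand description of its destructive potential. The descriptors are paraphrased from common North American usage rather than quoted from a standard, and the helper itself is hypothetical.

```python
# Paraphrased shorthand for D-scale destructive potential (D1-D5).
D_SCALE = {
    1: "relatively harmless to people",
    2: "could bury, injure, or kill a person",
    3: "could bury or destroy a car, or damage a small building",
    4: "could destroy a railway car, large truck, or several buildings",
    5: "largest known; could destroy a village or a large forest area",
}

def describe_d_size(size: float) -> str:
    """Half sizes (e.g. D2.5) are permitted; this helper reports the
    two neighbouring whole-size descriptions for them."""
    if size == int(size):
        return f"D{int(size)}: {D_SCALE[int(size)]}"
    lo, hi = int(size), int(size) + 1
    return f"D{size}: between '{D_SCALE[lo]}' and '{D_SCALE[hi]}'"

print(describe_d_size(2))    # whole size
print(describe_d_size(2.5))  # half sizes are commonly used
```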
Rutschblock Test
Slab avalanche hazard analysis can be done using the Rutschblock test: a block of snow about 2 m wide is isolated from the rest of the slope and progressively loaded, for example by a skier stepping onto and then jumping on the block. The result is a rating of slope stability on a seven-step scale. (Rutsch means slide in German.)
Avalanches and climate change
Avalanche formation and frequency are highly affected by weather patterns and the local climate. Snowpack layers will form differently depending on whether snow is falling in very cold or very warm conditions, and very dry or very humid conditions. Thus, climate change may affect when, where, and how often avalanches occur, and may also change the type of avalanches that are occurring.
Impacts on avalanche type and frequency
Overall, a rising seasonal snow line and a decrease in the number of days with snow cover are predicted. Climate change-driven temperature increases and changes in precipitation patterns will likely differ between mountain regions, and the impacts of these changes on avalanches will vary with elevation. In the long term, avalanche frequency at lower elevations is expected to decline in step with decreasing snow cover and depth, while a short-term increase in the number of wet avalanches is predicted.
Precipitation is expected to increase, meaning more snow or rain depending on the elevation. Higher elevations predicted to remain above the seasonal snow line will likely see an increase in avalanche activity due to the increases in precipitation during the winter season. Storm precipitation intensity is also expected to increase, which is likely to lead to more days with enough snowfall to make the snowpack unstable. Moderate and high elevations may see an increase in volatile swings from one weather extreme to the other. Predictions also show an increase in the number of rain-on-snow events, and wet avalanche cycles occurring earlier in the spring during the remainder of this century.
Impacts on burial survival rate
The warm, wet snowpacks that are likely to become more frequent with climate change may also make avalanche burials more deadly. Warm snow has a higher moisture content and is therefore denser than colder snow. Denser avalanche debris reduces a buried person's ability to breathe and the time they have before running out of oxygen, increasing the likelihood of death by asphyxia in the event of a burial. Additionally, the predicted thinner snowpacks may increase the frequency of injuries due to trauma, such as a buried skier striking a rock or tree.
Avalanches of dust on Mars
| Physical sciences | Natural disasters | null |
73348 | https://en.wikipedia.org/wiki/Calendula | Calendula | Calendula is a genus of about 15–20 species of annual and perennial herbaceous plants in the daisy family, Asteraceae, that are often known as marigolds. They are native to Europe, North Africa, Macaronesia and West Asia, and have their center of diversity in the Mediterranean Region. Other plants known as marigolds include corn marigold, desert marigold, marsh marigold, and plants of the genus Tagetes.
The genus name Calendula is a modern Latin diminutive of calendae, meaning "little calendar", "little clock" or possibly "little weather-glass". The common name "marigold", a contraction of "Mary's gold", refers to the Virgin Mary. The most commonly cultivated and used member of the genus is Calendula officinalis, the pot marigold. Popular herbal and cosmetic products named "Calendula" invariably derive from C. officinalis.
Uses
History
Calendula was not a major medicinal herb but it was used in historic times for headaches, red eye, fever and toothaches. As late as the 17th century Nicholas Culpeper claimed Calendula benefited the heart, but it was not considered an especially efficacious medicine.
In historic times Calendula was more often used for magical purposes than medicinal ones. One 16th-century potion containing Calendula claimed to reveal fairies. An unmarried woman with two suitors would take a blend of powdered Calendula, marjoram, wormwood and thyme, simmered in honey and white wine and applied as an ointment, in a ritual to reveal her true match.
Ancient Romans and Greeks used the golden Calendula in many rituals and ceremonies, sometimes wearing crowns or garlands made from the flowers. One of its nicknames is "Mary's Gold", referring to the flowers' use in early Christian events in some countries. Calendula flowers are sacred flowers in India and have been used to decorate the statues of Hindu deities since early times.
The most common use in historic times was culinary, however, and the plant was used for both its color and its flavor. The flowers were used in dumplings, wine, oatmeal and puddings. In English cuisine Calendula were often cooked in the same pot with spinach, or used to flavor stewed birds. According to the 16th-century Englishman John Gerard, every proper soup of Dutch cuisine in his era would include Calendula petals.
Culinary
Also known as "poor man's saffron," the petals are edible and can be used fresh in salads or dried and used to color cheese or as a substitute for saffron. Calendula petals have a mildly sweet taste that is slightly bitter, and as they dry these flavors become more intense. They can be used to add color to soups, stews, poultry dishes, custards and liquors.
The common name for Calendula officinalis in Britain is 'pot-marigold,' named so because of its use in broths and soups.
Dyes
Dye extracted from the flowers produces shades of honey, gold, orange, light brown, and vibrant yellow.
Chemistry
The flowers of C. officinalis contain flavonol glycosides, triterpene oligoglycosides, oleanane-type triterpene glycosides, saponins, and a sesquiterpene glucoside.
Pharmacological effects
Calendula officinalis oil is still used medicinally as an anti-inflammatory and a remedy for healing wounds. Calendula ointments are skin products available for use on minor cuts, burns, and skin irritation; though evidence of their effectiveness is weak.
Plant pharmacological studies have suggested that Calendula extracts have antiviral, antigenotoxic, and anti-inflammatory properties in vitro. In herbalism, Calendula in suspension or in tincture is used topically for treating acne, reducing inflammation, controlling bleeding, and soothing irritated tissue.
Limited evidence indicates Calendula cream or ointment is effective in treating radiation dermatitis. Topical application of C. officinalis ointment has helped to prevent dermatitis and pain; thus reducing the incidence rate of skipped radiation treatments in randomized trials.
Calendula has been used traditionally for abdominal cramps and constipation. In experiments with rabbit jejunum, the aqueous-ethanol extract of C. officinalis flowers was shown to have both spasmolytic and spasmogenic effects, thus providing a scientific rationale for this traditional use. An aqueous extract of C. officinalis obtained by a novel extraction method has demonstrated antitumor (cytotoxic) activity and immunomodulatory properties (lymphocyte activation) in vitro, as well as antitumor activity in mice.
Calendula plants are known to cause allergic reactions in susceptible individuals, and should be avoided during pregnancy.
Diversity
Species include:
Calendula arvensis (Vaill.) L. – field marigold, wild marigold
Calendula denticulata Schousb. ex Willd.
Calendula eckerleinii Ohle
Calendula incana Willd.
Calendula incana subsp. algarbiensis (Boiss.) Ohle
Calendula incana subsp. maderensis (DC.) Ohle – Madeiran marigold
Calendula incana subsp. maritima (Guss.) Ohle – sea marigold
Calendula incana subsp. microphylla (Lange) Ohle
Calendula lanzae Maire
Calendula maritima Guss. – sea marigold
Calendula maroccana (Ball) Ball
Calendula maroccana subsp. maroccana
Calendula maroccana subsp. murbeckii (Lanza) Ohle
Calendula meuselii Ohle
Calendula officinalis L. – pot marigold, garden marigold, ruddles, Scottish marigold
Calendula palaestina Boiss.
Calendula stellata Cav.
Calendula suffruticosa Vahl
Calendula suffruticosa subsp. balansae (Boiss. & Reut.) Ohle
Calendula suffruticosa subsp. boissieri Lanza
Calendula suffruticosa subsp. fulgida (Raf.) Guadagno
Calendula suffruticosa subsp. lusitanica (Boiss.) Ohle
Calendula suffruticosa subsp. maritima (Guss.) Meikle
Calendula suffruticosa subsp. monardii (Boiss. & Reut.) Ohle
Calendula suffruticosa subsp. tomentosa Murb.
Calendula tripterocarpa Rupr.
| Biology and health sciences | Asterales | Plants |
73390 | https://en.wikipedia.org/wiki/Residue%20theorem | Residue theorem | In complex analysis, the residue theorem, sometimes called Cauchy's residue theorem, is a powerful tool to evaluate line integrals of analytic functions over closed curves; it can often be used to compute real integrals and infinite series as well. It generalizes the Cauchy integral theorem and Cauchy's integral formula. The residue theorem should not be confused with special cases of the generalized Stokes' theorem; however, the latter can be used as an ingredient of its proof.
Statement of Cauchy's residue theorem
The statement is as follows:
Residue theorem: Let $U$ be a simply connected open subset of the complex plane containing a finite list of points $a_1, \ldots, a_n$, and $f$ a function holomorphic on $U \setminus \{a_1, \ldots, a_n\}$. Letting $\gamma$ be a closed rectifiable curve in $U \setminus \{a_1, \ldots, a_n\}$, and denoting the residue of $f$ at each point $a_k$ by $\operatorname{Res}(f, a_k)$ and the winding number of $\gamma$ around $a_k$ by $\operatorname{I}(\gamma, a_k)$, the line integral of $f$ around $\gamma$ is equal to $2\pi i$ times the sum of residues, each counted as many times as $\gamma$ winds around the respective point:

$$\oint_\gamma f(z)\,dz = 2\pi i \sum_{k=1}^{n} \operatorname{I}(\gamma, a_k)\, \operatorname{Res}(f, a_k).$$

If $\gamma$ is a positively oriented simple closed curve, $\operatorname{I}(\gamma, a_k)$ is $1$ if $a_k$ is in the interior of $\gamma$ and $0$ if not, therefore

$$\oint_\gamma f(z)\,dz = 2\pi i \sum \operatorname{Res}(f, a_k),$$

with the sum over those $a_k$ inside $\gamma$.
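The simple form of the theorem lends itself to a quick symbolic check. The sketch below uses SymPy (assumed available) to compare $2\pi i$ times the residue of $1/(z^2+1)$ at its upper-half-plane pole with the corresponding real integral, anticipating the worked example later in this section.

```python
import sympy as sp

z, x = sp.symbols("z x")
f = 1 / (z**2 + 1)

# Residue at the pole z = i, the only pole in the upper half-plane.
res = sp.residue(f, z, sp.I)                   # equals 1/(2*I)
contour_value = 2 * sp.pi * sp.I * res         # residue theorem gives pi

# Closing the real axis with a large upper semicircle (whose contribution
# vanishes) equates the contour integral with the real integral:
real_value = sp.integrate(1 / (x**2 + 1), (x, -sp.oo, sp.oo))

assert sp.simplify(contour_value - real_value) == 0
print(contour_value, real_value)               # pi pi
```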
The relationship of the residue theorem to Stokes' theorem is given by the Jordan curve theorem. The general plane curve $\gamma$ must first be reduced to a set of simple closed curves $\{\gamma_i\}$ whose total is equivalent to $\gamma$ for integration purposes; this reduces the problem to finding the integral of $f\,dz$ along a Jordan curve $\gamma_i$ with interior $V$. The requirement that $f$ be holomorphic on $U_0 = U \setminus \{a_k\}$ is equivalent to the statement that the exterior derivative $d(f\,dz) = 0$ on $U_0$. Thus if two planar regions $V$ and $W$ of $U$ enclose the same subset $\{a_j\}$ of $\{a_k\}$, the regions $V \setminus W$ and $W \setminus V$ lie entirely in $U_0$; hence

$$\int_{V \setminus W} d(f\,dz) - \int_{W \setminus V} d(f\,dz)$$

is well-defined and equal to zero. Consequently, the contour integral of $f\,dz$ along $\gamma_j = \partial V$ is equal to the sum of a set of integrals along paths $\lambda_j$, each enclosing an arbitrarily small region around a single $a_j$; these integrals give the residues of $f$ (up to the conventional factor $2\pi i$) at $\{a_j\}$. Summing over $\{\gamma_j\}$, we recover the final expression of the contour integral in terms of the winding numbers $\{\operatorname{I}(\gamma, a_k)\}$.
In order to evaluate real integrals, the residue theorem is used in the following manner: the integrand is extended to the complex plane and its residues are computed (which is usually easy), and a part of the real axis is extended to a closed curve by attaching a half-circle in the upper or lower half-plane. The integral over this curve can then be computed using the residue theorem. Often, the half-circle part of the integral will tend towards zero as the radius of the half-circle grows, leaving only the real-axis part of the integral, the one we were originally interested in.
Calculation of residues
Examples
An integral along the real axis
The integral

$$\int_{-\infty}^{\infty} \frac{e^{itx}}{x^2+1}\,dx$$

arises in probability theory when calculating the characteristic function of the Cauchy distribution. It resists the techniques of elementary calculus but can be evaluated by expressing it as a limit of contour integrals.
Suppose $t > 0$ and define the contour $C$ that goes along the real line from $-a$ to $a$ and then counterclockwise along a semicircle centered at $0$ from $a$ to $-a$. Take $a$ to be greater than $1$, so that the imaginary unit $i$ is enclosed within the curve. Now consider the contour integral

$$\int_C f(z)\,dz = \int_C \frac{e^{itz}}{z^2+1}\,dz.$$

Since $e^{itz}$ is an entire function (having no singularities at any point in the complex plane), this function has singularities only where the denominator $z^2+1$ is zero. Since $z^2+1 = (z+i)(z-i)$, that happens only where $z = i$ or $z = -i$. Only one of those points is in the region bounded by this contour. Because $f(z)$ is

$$\frac{e^{itz}}{z^2+1} = \frac{e^{itz}}{2i}\left(\frac{1}{z-i} - \frac{1}{z+i}\right),$$

the residue of $f(z)$ at $z = i$ is

$$\operatorname{Res}_{z=i} f(z) = \frac{e^{-t}}{2i}.$$

According to the residue theorem, then, we have

$$\int_C f(z)\,dz = 2\pi i \cdot \operatorname{Res}_{z=i} f(z) = 2\pi i \cdot \frac{e^{-t}}{2i} = \pi e^{-t}.$$

The contour $C$ may be split into a straight part and a curved arc, so that

$$\int_{\text{straight}} f(z)\,dz + \int_{\text{arc}} f(z)\,dz = \pi e^{-t},$$

and thus

$$\int_{-a}^{a} \frac{e^{itx}}{x^2+1}\,dx = \pi e^{-t} - \int_{\text{arc}} \frac{e^{itz}}{z^2+1}\,dz.$$

Using some estimations, we have

$$\left|\int_{\text{arc}} \frac{e^{itz}}{z^2+1}\,dz\right| \le \pi a \cdot \sup_{\text{arc}} \left|\frac{e^{itz}}{z^2+1}\right| \le \pi a \cdot \sup_{\text{arc}} \frac{1}{|z^2+1|} \le \frac{\pi a}{a^2-1},$$

and

$$\lim_{a \to \infty} \frac{\pi a}{a^2-1} = 0.$$

The estimate on the numerator follows since $t > 0$, and for complex numbers $z$ along the arc (which lies in the upper half-plane), the argument $\varphi$ of $z$ lies between $0$ and $\pi$. So,

$$\left|e^{itz}\right| = \left|e^{it|z|(\cos\varphi + i\sin\varphi)}\right| = e^{-t|z|\sin\varphi} \le 1.$$

Therefore,

$$\int_{-\infty}^{\infty} \frac{e^{itx}}{x^2+1}\,dx = \pi e^{-t}.$$

If $t < 0$ then a similar argument with an arc $C'$ that winds around $-i$ rather than $i$ shows that

$$\int_{-\infty}^{\infty} \frac{e^{itx}}{x^2+1}\,dx = \pi e^{t},$$

and finally we have

$$\int_{-\infty}^{\infty} \frac{e^{itx}}{x^2+1}\,dx = \pi e^{-|t|}.$$

(If $t = 0$ then the integral yields immediately to elementary calculus methods and its value is $\pi$.)
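The closed form can also be checked numerically. The sketch below integrates the real part of the integrand over a wide symmetric window with the midpoint rule (the imaginary part is odd in $x$ and contributes nothing); the window half-width and step count are arbitrary choices, and the truncated tails account for a residual gap of order $2/W$ at $t = 0$.

```python
import math

def integral_estimate(t: float, w: float = 1000.0, n: int = 1_000_000) -> float:
    """Midpoint-rule estimate of the integral of cos(t*x)/(1+x^2)
    over [-w, w]; the tails beyond decay like 1/x^2."""
    h = 2 * w / n
    total = 0.0
    for k in range(n):
        x = -w + (k + 0.5) * h
        total += math.cos(t * x) / (1.0 + x * x)
    return total * h

for t in (0.0, 0.5, 2.0):
    print(f"t = {t}: numeric ~ {integral_estimate(t):.4f}, "
          f"pi*exp(-|t|) = {math.pi * math.exp(-abs(t)):.4f}")
```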
Evaluating zeta functions
The fact that $\pi\cot(\pi z)$ has simple poles with residue $1$ at each integer can be used to compute the sum

$$\sum_{n=-\infty}^{\infty} f(n).$$

Consider, for example, $f(z) = z^{-2}$. Let $\Gamma_n$ be the rectangle that is the boundary of $\left[-n-\tfrac12,\, n+\tfrac12\right]^2$ with positive orientation, with $n$ an integer. By the residue formula,

$$\frac{1}{2\pi i} \int_{\Gamma_n} f(z)\,\pi\cot(\pi z)\,dz = \operatorname{Res}_{z=0}\bigl(f(z)\,\pi\cot(\pi z)\bigr) + \sum_{\substack{k=-n \\ k \neq 0}}^{n} \frac{1}{k^2}.$$

The left-hand side goes to zero as $n \to \infty$, since $|\cot(\pi z)|$ is uniformly bounded on the contour, thanks to using $x = \pm\left(\tfrac12 + n\right)$ on the left and right sides of the contour, and so the integrand has order $O(n^{-2})$ over the entire contour. On the other hand,

$$\frac{z}{2}\cot\left(\frac{z}{2}\right) = 1 - B_2\,\frac{z^2}{2!} - \cdots,$$

where the Bernoulli number $B_2 = \tfrac16$. (In fact, $\tfrac{z}{2}\cot\left(\tfrac{z}{2}\right) = \tfrac{iz}{2} + \tfrac{iz}{e^{iz}-1}$.) Thus, the residue $\operatorname{Res}_{z=0}$ is $-\tfrac{\pi^2}{3}$. We conclude:

$$\sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6},$$

which is a proof of the Basel problem.

The same argument works for all $f(x) = x^{-2n}$ where $n$ is a positive integer, giving us

$$\zeta(2n) = \frac{(-1)^{n+1} B_{2n} (2\pi)^{2n}}{2\,(2n)!}.$$

The trick does not work when $f(x) = x^{-2n-1}$, since in this case the residue at zero vanishes, and we obtain the useless identity $0 + \zeta(2n+1) - \zeta(2n+1) = 0$.
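A quick numerical sanity check of these closed forms: partial sums of the series converge (slowly for $s = 2$) to the values produced by the residue computation.

```python
import math

def zeta_partial(s: int, terms: int = 200_000) -> float:
    """Partial sum of the Riemann zeta series at an even integer s."""
    return sum(1.0 / k**s for k in range(1, terms + 1))

print(zeta_partial(2), math.pi**2 / 6)    # both ~ 1.64493 (Basel problem)
print(zeta_partial(4), math.pi**4 / 90)   # both ~ 1.08232
```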
Evaluating Eisenstein series
The same trick can be used to establish the sum of the Eisenstein series:

$$\pi\cot(\pi z) = \lim_{N \to \infty} \sum_{n=-N}^{N} \frac{1}{z-n}.$$
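Numerically, truncated symmetric partial sums of this series approach $\pi\cot(\pi z)$ at any non-integer $z$; the test point below is arbitrary.

```python
import math

z = 0.25  # arbitrary non-integer test point
target = math.pi / math.tan(math.pi * z)  # pi*cot(pi*z) = pi at z = 1/4

for N in (10, 100, 1000):
    s = sum(1.0 / (z - n) for n in range(-N, N + 1))
    print(f"N = {N:4d}: partial sum = {s:.5f}, pi*cot(pi*z) = {target:.5f}")
```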
| Mathematics | Complex analysis | null |