32407
https://en.wikipedia.org/wiki/Virgo%20%28constellation%29
Virgo (constellation)
Virgo is one of the constellations of the zodiac. Its name is Latin for maiden, and its old astronomical symbol is ♍. Between Leo to the west and Libra to the east, it is the second-largest constellation in the sky (after Hydra) and the largest constellation in the zodiac. The ecliptic intersects the celestial equator within this constellation and Pisces. Underlying these two technical definitions, the Sun passes directly overhead of the equator, within this constellation, at the September equinox. Virgo can be easily found through its brightest star, Spica. Location Virgo is prominent in the spring sky in the Northern Hemisphere, visible all night in March and April. Because Virgo is the largest zodiac constellation, the Sun takes 44 days to pass through it, longer than through any other. From 1990 until 2062, this will take place from September 16 to October 30. It is located in the third quadrant of the Southern Hemisphere (SQ3) and can be seen at latitudes between +80° and -80°. The bright star Spica makes it easy to locate Virgo, as it can be found by following the curve of the Big Dipper/Plough to Arcturus in Boötes and continuing from there in the same curve ("follow the arc to Arcturus and speed on to Spica"). Due to the effects of precession, the autumn equinox point lies within the boundaries of Virgo very close to β Virginis. This is one of the two points in the sky where the celestial equator crosses the ecliptic (the other being the vernal equinox point in the constellation of Pisces). From the 18th century to the 4th century BC, the Sun was in Libra on the autumnal equinox, shifting into Virgo thereafter. This point will pass into the neighboring constellation of Leo around the year 2440. Features Stars Besides Spica, other bright stars in Virgo include β Virginis (Zavijava), γ Virginis (Porrima), δ Virginis (Auva) and ε Virginis (Vindemiatrix). Other fainter stars that were also given names are ζ Virginis (Heze), η Virginis (Zaniah), ι Virginis (Syrma), κ Virginis (Kang), λ Virginis (Khambalia) and φ Virginis (Elgafar). The seven main stars of Virgo form two distinct star patterns: Beta, Gamma, Delta, Epsilon and Eta Virginis form an asterism known as "the Bowl of Virgo", and together with Spica and Theta Virginis they form a Y shape. The star 70 Virginis has one of the first known extrasolar planetary systems, with one confirmed planet 7.5 times the mass of Jupiter. The star Chi Virginis has one of the most massive planets ever detected, with a mass of 11.1 times that of Jupiter. The sun-like star 61 Virginis has three known planets: one is a super-Earth and two are Neptune-mass planets. SS Virginis is a variable star with a noticeable red color. It varies in magnitude from a minimum of 9.6 to a maximum of 6.0 over approximately one year. Exoplanets There are 35 verified exoplanets orbiting 29 stars in Virgo, including PSR B1257+12 (three planets), 70 Virginis (one planet), Chi Virginis (one planet), 61 Virginis (three planets), NY Virginis (two planets), and 59 Virginis (one planet). Deep-sky objects Because of the presence of a galaxy cluster (consequently called the Virgo Cluster) within its borders 5° to 12° west of ε Vir (Vindemiatrix), this constellation is especially rich in galaxies. Some examples are Messier 49 (elliptical), Messier 58 (spiral), Messier 59 (elliptical), Messier 60 (elliptical), Messier 61 (spiral), Messier 84 (lenticular), Messier 86 (lenticular), Messier 87 (elliptical and a famous radio source), Messier 89 (elliptical) and Messier 90 (spiral). 
A noted galaxy that is not part of the cluster is the Sombrero Galaxy (M104), an unusual spiral galaxy. It is located about 10° due west of Spica. NGC 4639 is a face-on barred spiral galaxy at a redshift of 0.0034 (a rough conversion from the redshifts quoted in this section to distances is sketched at the end of this entry). Its outer arms have a high number of Cepheid variables, which are used as standard candles to determine astronomical distances. Because of this, astronomers used several Cepheid variables in NGC 4639 to calibrate type Ia supernovae as standard candles for more distant galaxies. Virgo possesses several galaxy clusters, one of which is HCG 62. A Hickson Compact Group, HCG 62 lies at a redshift of 0.0137 and possesses a large central elliptical galaxy. It has a heterogeneous halo of extremely hot gas, posited to be due to the active galactic nucleus at the core of the central elliptical galaxy. M87 is the largest galaxy in the Virgo cluster and lies at a redshift of 0.0035. It is a major radio source, partially due to its jet of electrons being flung out of the galaxy by its central supermassive black hole. Because this jet is visible at several different wavelengths, it is of interest to astronomers who wish to observe black holes in a unique galaxy. On April 10, 2019, astronomers from the Event Horizon Telescope project released an image of its central black hole, the first direct image of one. With a mass of at least 7.2 billion times that of the Sun, it is the most massive black hole within the immediate vicinity of the Milky Way. M84 is another elliptical radio galaxy in the constellation of Virgo; it also lies at a redshift of 0.0035. Astronomers have surmised that the speed of the gas clouds orbiting the core indicates the presence of an object with a mass 300 million times that of the Sun, which is most likely a black hole. The Sombrero Galaxy, M104, is an edge-on spiral galaxy located 28 million light-years from Earth (redshift 0.0034). It has a bulge at its center made up of older stars that are larger than normal. It is surrounded by large, bright globular clusters and has a very prominent dust lane made up of polycyclic aromatic hydrocarbons. NGC 4438 is a peculiar galaxy with an active galactic nucleus, at a redshift of 0.0035. Its supermassive black hole is ejecting jets of matter, creating large bubbles. NGC 4261 also has a black hole at its center, with a mass of 1.2 billion solar masses. It lies at a redshift of 0.0075 and has an unusually dusty disk. Along with M84 and M87, NGC 4261 has strong emissions in the radio spectrum. Virgo is also home to the quasar 3C 273, which was the first quasar ever to be identified. With a magnitude of ~12.9, it is also the optically brightest quasar in the sky. Mythology In the Babylonian MUL.APIN (c. 10th century BC), part of this constellation was known as "The Furrow", representing the goddess Shala and her ear of grain. One star in this constellation, Spica, retains this tradition as it is Latin for "ear of grain", one of the major products of the Mesopotamian furrow. For this reason the constellation became associated with fertility. The constellation of Virgo in Hipparchus corresponds to two Babylonian constellations: the "Furrow" in the eastern sector of Virgo and the "Frond of Erua" in the western sector. The Frond of Erua was depicted as a goddess holding a palm-frond – a motif that still occasionally appears in much later depictions of Virgo. 
Early Greek astronomy associated the Babylonian constellation with Demeter, the Greek goddess of wheat, agriculture and autumn. The Romans associated it with their goddess Ceres. Alternatively, the constellation was sometimes identified as the virgin goddess Iustitia or Astraea, holding the scales of justice in her hand (which are now separated as the constellation Libra). A later Greek myth, from Classical times, identifies Virgo as Erigone, the daughter of Icarius of Athens. Icarius, who had been favored by Dionysus, was killed by his shepherds while they were intoxicated, after which Erigone hanged herself in grief; in versions of this myth, Dionysus is said to have placed the father and daughter in the stars as Boötes and Virgo respectively. Another figure associated with the constellation Virgo is the spring goddess Persephone, the daughter of Zeus and Demeter, who had married Hades and resided in the Underworld. In the Poeticon Astronomicon by Hyginus (1st century BC), Parthenos is the daughter of Apollo and Chrysothemis, who died a maiden and was placed among the stars as the constellation. Diodorus Siculus has an alternative account, according to which Parthenos was the daughter of Staphylus and Chrysothemis, sister of Rhoeo and Molpadia (Hemithea). After a suicide attempt she and Hemithea were carried by Apollo to Chersonesus, where she became a local goddess. Strabo also mentions a goddess named Parthenos worshipped throughout Chersonesus. During the Middle Ages, Virgo was sometimes associated with the Blessed Virgin Mary. In Greek mythology, the constellation is also associated with Dike, the daughter of Zeus and goddess of justice, who is represented holding the scales of justice. Gallery
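The distances missing from the deep-sky section above can be roughly recovered from the quoted redshifts using Hubble's law. This is a minimal sketch, not material from the article: the Hubble constant H0 ≈ 70 km/s/Mpc is an assumed round value, and the estimate ignores the peculiar velocities that matter for a nearby cluster such as Virgo. For a redshift z = 0.0035, d ≈ cz / H0 = (3×10^5 km/s × 0.0035) / (70 km/s/Mpc) ≈ 15 Mpc ≈ 5×10^7 light-years, which is of the same order as the distances usually quoted for the Virgo Cluster galaxies.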
Physical sciences
Zodiac
Astronomy
32410
https://en.wikipedia.org/wiki/Vehicle
Vehicle
A vehicle is a machine designed for self-propulsion, usually to transport people, cargo, or both. The term "vehicle" typically refers to land vehicles such as human-powered vehicles (e.g. bicycles, tricycles, velomobiles), animal-powered transports (e.g. horse-drawn carriages/wagons, ox carts, dog sleds), motor vehicles (e.g. motorcycles, cars, trucks, buses, mobility scooters) and railed vehicles (trains, trams and monorails), but more broadly also includes cable transport (cable cars and elevators), watercraft (ships, boats and underwater vehicles), amphibious vehicles (e.g. screw-propelled vehicles, hovercraft, seaplanes), aircraft (airplanes, helicopters, gliders and aerostats) and space vehicles (spacecraft, spaceplanes and launch vehicles). This article primarily concerns the more ubiquitous land vehicles, which can be broadly classified by the type of contact interface with the ground: wheels, tracks, rails or skis, as well as non-contact technologies such as maglev. ISO 3833-1977 is the international standard for road vehicle types, terms and definitions. History It is estimated by historians that boats have been used since prehistory; rock paintings depicting boats, dated from around 50,000 to 15,000 BC, were found in Australia. The oldest boats found by archaeological excavation are logboats; the oldest logboat found, the Pesse canoe, discovered in a bog in the Netherlands, has been carbon dated to 8040–7510 BC, making it 9,500–10,000 years old. A 7,000-year-old seagoing boat made from reeds and tar has been found in Kuwait. Boats were used between 4000 and 3000 BC in Sumer, ancient Egypt and the Indian Ocean. There is evidence of camel-pulled wheeled vehicles from about 4000–3000 BC. The earliest evidence of a wagonway, a predecessor of the railway, found so far is the Diolkos wagonway, which transported boats across the Isthmus of Corinth in Greece from around 600 BC. Wheeled vehicles pulled by men and animals ran in grooves in limestone, which provided the track element, preventing the wagons from leaving the intended route. In 200 CE, Ma Jun built a south-pointing chariot, a vehicle with an early form of guidance system. The stagecoach, a four-wheeled vehicle drawn by horses, originated in 13th-century England. Railways began reappearing in Europe after the Dark Ages. The earliest known record of a railway in Europe from this period is a stained-glass window in the Minster of Freiburg im Breisgau dating from around 1350. In 1515, Cardinal Matthäus Lang wrote a description of the Reisszug, a funicular railway at the Hohensalzburg Fortress in Austria. The line originally used wooden rails and a hemp haulage rope and was operated by human or animal power, through a treadwheel. Nicolas-Joseph Cugnot is often credited with building the first self-propelled mechanical vehicle, or automobile, in 1769. In Russia, in the 1780s, Ivan Kulibin developed a human-pedalled, three-wheeled carriage with modern features such as a flywheel, brake, gearbox and bearings; however, it was not developed further. In 1783, the Montgolfier brothers developed the first balloon vehicle. In 1801, Richard Trevithick built and demonstrated his Puffing Devil road locomotive, which many believe was the first demonstration of a steam-powered road vehicle, though it could not maintain sufficient steam pressure for long periods and was of little practical use. 
In 1817, the Laufmaschine ("running machine"), invented by the German Baron Karl von Drais, became the first human means of transport to make use of the two-wheeler principle. It is regarded as the forerunner of the modern bicycle (and motorcycle). In 1885, Karl Benz built (and subsequently patented) the Benz Patent-Motorwagen, the first automobile, powered by his own four-stroke cycle gasoline engine. In 1891, Otto Lilienthal began experimental gliding and achieved the first sustained, controlled, reproducible flights. In 1903, the Wright brothers flew the Wright Flyer, the first controlled, powered aircraft, at Kitty Hawk, North Carolina. In 1907, Gyroplane No.I became the first tethered rotorcraft to fly. The same year, the Cornu helicopter became the first rotorcraft to achieve free flight. In 1928, Opel initiated the Opel-RAK program, the first large-scale rocket program. The Opel RAK.1 became the first rocket car; the following year, it also became the first rocket-powered aircraft. In 1961, the Soviet space program's Vostok 1 carried Yuri Gagarin into space. In 1969, NASA's Apollo 11 achieved the first Moon landing. In 2010, the number of motor vehicles in operation worldwide surpassed 1 billion, roughly one for every seven people. Types of vehicles There are over 1 billion bicycles in use worldwide. In 2002 there were an estimated 590 million cars and 205 million motorcycles in service in the world. At least 500 million Chinese Flying Pigeon bicycles have been made, more than any other single model of vehicle. The most-produced model of motor vehicle is the Honda Super Cub motorcycle, which had sold 60 million units by 2008. The most-produced car model is the Toyota Corolla, with at least 35 million made by 2010. The most common fixed-wing airplane is the Cessna 172, with about 44,000 having been made as of 2017. The Soviet Mil Mi-8, at 17,000, is the most-produced helicopter. The top commercial jet airliner is the Boeing 737, with about 10,000 built as of 2018. The most-produced trams are the KTM-5 and the Tatra T3, at around 14,000 each. The most common trolleybus is the ZiU-9. Locomotion Locomotion consists of a means that allows displacement with little opposition, a power source to provide the required kinetic energy and a means to control the motion, such as a brake and steering system. By far, most vehicles use wheels, which employ the principle of rolling to enable displacement with very little rolling friction. Energy source It is essential that a vehicle have a source of energy to drive it. Energy can be extracted from external sources, as in the cases of a sailboat, a solar-powered car, or an electric streetcar that uses overhead lines. Energy can also be stored, provided it can be converted on demand and the storing medium's energy density and power density are sufficient to meet the vehicle's needs (a rough worked example is sketched below). Human power is a simple source of energy that requires nothing more than humans. Although sustained human power output is modest, the unpaced land speed record for human-powered vehicles was set in 2009 on a recumbent bicycle. The most common energy source used to power vehicles is fuel. External combustion engines can use almost anything that burns as fuel, whilst internal combustion engines and rocket engines are designed to burn a specific fuel, typically gasoline, diesel or ethanol. Food is the fuel used to power non-motor vehicles such as cycles, rickshaws and other pedestrian-controlled vehicles. 
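As a rough illustration of the "required kinetic energy" and the energy-density considerations mentioned above, here is a minimal worked example with assumed, purely illustrative figures (a 1,500 kg car accelerating to 100 km/h, i.e. about 27.8 m/s; neither number comes from the text): the kinetic energy is E = ½ m v^2 = ½ × 1500 kg × (27.8 m/s)^2 ≈ 5.8×10^5 J ≈ 0.16 kWh. The storage medium must hold many such increments to give useful range (energy density) and must be able to release each one within seconds (power density).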
Another common medium for storing energy is batteries, which have the advantages of being responsive, useful in a wide range of power levels, environmentally friendly, efficient, simple to install, and easy to maintain. Batteries also facilitate the use of electric motors, which have their own advantages. On the other hand, batteries have low energy densities, short service life, poor performance at extreme temperatures, long charging times, and difficulties with disposal (although they can usually be recycled). Like fuel, batteries store chemical energy and can cause burns and poisoning in event of an accident. Batteries also lose effectiveness with time. The issue of charge time can be resolved by swapping discharged batteries with charged ones; however, this incurs additional hardware costs and may be impractical for larger batteries. Moreover, there must be standard batteries for battery swapping to work at a gas station. Fuel cells are similar to batteries in that they convert from chemical to electrical energy, but have their own advantages and disadvantages. Electrified rails and overhead cables are a common source of electrical energy on subways, railways, trams, and trolleybuses. Solar energy is a more modern development, and several solar vehicles have been successfully built and tested, including Helios, a solar-powered aircraft. Nuclear power is a more exclusive form of energy storage, currently limited to large ships and submarines, mostly military. Nuclear energy can be released by a nuclear reactor, nuclear battery, or repeatedly detonating nuclear bombs. There have been two experiments with nuclear-powered aircraft, the Tupolev Tu-119 and the Convair X-6. Mechanical strain is another method of storing energy, whereby an elastic band or metal spring is deformed and releases energy as it is allowed to return to its ground state. Systems employing elastic materials suffer from hysteresis, and metal springs are too dense to be useful in many cases. Flywheels store energy in a spinning mass. Because a light and fast rotor is energetically favorable, flywheels can pose a significant safety hazard. Moreover, flywheels leak energy fairly quickly and affect a vehicle's steering through the gyroscopic effect. They have been used experimentally in gyrobuses. Wind energy is used by sailboats and land yachts as the primary source of energy. It is very cheap and fairly easy to use, the main issues being dependence on weather and upwind performance. Balloons also rely on the wind to move horizontally. Aircraft flying in the jet stream may get a boost from high altitude winds. Compressed gas is currently an experimental method of storing energy. In this case, compressed gas is simply stored in a tank and released when necessary. Like elastics, they have hysteresis losses when gas heats up during compression. Gravitational potential energy is a form of energy used in gliders, skis, bobsleds and numerous other vehicles that go down hill. Regenerative braking is an example of capturing kinetic energy where the brakes of a vehicle are augmented with a generator or other means of extracting energy. Motors and engines When needed, the energy is taken from the source and consumed by one or more motors or engines. Sometimes there is an intermediate medium, such as the batteries of a diesel submarine. Most motor vehicles have internal combustion engines. They are fairly cheap, easy to maintain, reliable, safe and small. 
Since these engines burn fuel, they have long ranges but pollute the environment. A related engine is the external combustion engine. An example of this is the steam engine. Aside from fuel, steam engines also need water, making them impractical for some purposes. Steam engines also need time to warm up, whereas IC engines can usually run right after being started, although this may not be recommended in cold conditions. Steam engines burning coal release sulfur into the air, causing harmful acid rain. While intermittent internal combustion engines were once the primary means of aircraft propulsion, they have been largely superseded by continuous internal combustion engines, such as gas turbines. Turbine engines are light and, particularly when used on aircraft, efficient. On the other hand, they cost more and require careful maintenance. They can also be damaged by ingesting foreign objects, and they produce a hot exhaust. Trains using turbines are called gas turbine-electric locomotives. Examples of surface vehicles using turbines are M1 Abrams, MTT Turbine SUPERBIKE and the Millennium. Pulse jet engines are similar in many ways to turbojets but have almost no moving parts. For this reason, they were very appealing to vehicle designers in the past; however, their noise, heat, and inefficiency have led to their abandonment. A historical example of the use of a pulse jet was the V-1 flying bomb. Pulse jets are still occasionally used in amateur experiments. With the advent of modern technology, the pulse detonation engine has become practical and was successfully tested on a Rutan VariEze. While the pulse detonation engine is much more efficient than the pulse jet and even turbine engines, it still suffers from extreme noise and vibration levels. Ramjets also have few moving parts, but they only work at high speed, so their use is restricted to tip jet helicopters and high speed aircraft such as the Lockheed SR-71 Blackbird. Rocket engines are primarily used on rockets, rocket sleds and experimental aircraft. Rocket engines are extremely powerful. The heaviest vehicle ever to leave the ground, the Saturn V rocket, was powered by five F-1 rocket engines generating a combined 180 million horsepower (134.2 gigawatt). Rocket engines also have no need to "push off" anything, a fact that the New York Times denied in error. Rocket engines can be particularly simple, sometimes consisting of nothing more than a catalyst, as in the case of a hydrogen peroxide rocket. This makes them an attractive option for vehicles such as jet packs. Despite their simplicity, rocket engines are often dangerous and susceptible to explosions. The fuel they run off may be flammable, poisonous, corrosive or cryogenic. They also suffer from poor efficiency. For these reasons, rocket engines are only used when absolutely necessary. Electric motors are used in electric vehicles such as electric bicycles, electric scooters, small boats, subways, trains, trolleybuses, trams and experimental aircraft. Electric motors can be very efficient: over 90% efficiency is common. Electric motors can also be built to be powerful, reliable, low-maintenance and of any size. Electric motors can deliver a range of speeds and torques without necessarily using a gearbox (although it may be more economical to use one). Electric motors are limited in their use chiefly by the difficulty of supplying electricity. Compressed gas motors have been used on some vehicles experimentally. 
They are simple, efficient, safe, cheap, reliable and operate in a variety of conditions. One of the difficulties met when using gas motors is the cooling effect of expanding gas. These engines are limited by how quickly they absorb heat from their surroundings. The cooling effect can, however, double as air conditioning. Compressed gas motors also lose effectiveness with falling gas pressure. Ion thrusters are used on some satellites and spacecraft. They are only effective in a vacuum, which limits their use to spaceborne vehicles. Ion thrusters run primarily off electricity, but they also need a propellant such as caesium, or, more recently xenon. Ion thrusters can achieve extremely high speeds and use little propellant; however, they are power-hungry. Converting energy to work The mechanical energy that motors and engines produce must be converted to work by wheels, propellers, nozzles, or similar means. Aside from converting mechanical energy into motion, wheels allow a vehicle to roll along a surface and, with the exception of railed vehicles, to be steered. Wheels are ancient technology, with specimens being discovered from over 5000 years ago. Wheels are used in a plethora of vehicles, including motor vehicles, armoured personnel carriers, amphibious vehicles, airplanes, trains, skateboards and wheelbarrows. Nozzles are used in conjunction with almost all reaction engines. Vehicles using nozzles include jet aircraft, rockets, and personal watercraft. While most nozzles take the shape of a cone or bell, some unorthodox designs have been created such as the aerospike. Some nozzles are intangible, such as the electromagnetic field nozzle of a vectored ion thruster. Continuous track is sometimes used instead of wheels to power land vehicles. Continuous track has the advantages of a larger contact area, easy repairs on small damage, and high maneuverability. Examples of vehicles using continuous tracks are tanks, snowmobiles and excavators. Two continuous tracks used together allow for steering. The largest land vehicle in the world, the Bagger 293, is propelled by continuous tracks. Propellers (as well as screws, fans and rotors) are used to move through a fluid. Propellers have been used as toys since ancient times; however, it was Leonardo da Vinci who devised what was one of the earliest propeller driven vehicles, the "aerial-screw". In 1661, Toogood & Hays adopted the screw for use as a ship propeller. Since then, the propeller has been tested on many terrestrial vehicles, including the Schienenzeppelin train and numerous cars. In modern times, propellers are most prevalent on watercraft and aircraft, as well as some amphibious vehicles such as hovercraft and ground-effect vehicles. Intuitively, propellers cannot work in space as there is no working fluid; however, some sources have suggested that since space is never empty, a propeller could be made to work in space. Similarly to propeller vehicles, some vehicles use wings for propulsion. Sailboats and sailplanes are propelled by the forward component of lift generated by their sails/wings. Ornithopters also produce thrust aerodynamically. Ornithopters with large rounded leading edges produce lift by leading-edge suction forces. Research at the University of Toronto Institute for Aerospace Studies lead to a flight with an actual ornithopter on July 31, 2010. Paddle wheels are used on some older watercraft and their reconstructions. These ships were known as paddle steamers. 
Because paddle wheels simply push against the water, their design and construction are very simple. The oldest such ship in scheduled service is the Skibladner. Many pedalo boats also use paddle wheels for propulsion. Screw-propelled vehicles are propelled by auger-like cylinders fitted with helical flanges. Because they can produce thrust on both land and water, they are commonly used on all-terrain vehicles. The ZiL-2906 was a Soviet-designed screw-propelled vehicle designed to retrieve cosmonauts from the Siberian wilderness. Friction All or almost all of the useful energy produced by the engine is usually dissipated as friction, so minimizing frictional losses is very important in many vehicles. The main sources of friction are rolling friction and fluid drag (air drag or water drag). Wheels have low bearing friction, and pneumatic tires give low rolling friction; steel wheels on steel rails have lower rolling friction still. Aerodynamic drag can be reduced by streamlined design features. Friction is desirable and important in supplying traction to facilitate motion on land. Most land vehicles rely on friction for accelerating, decelerating and changing direction. Sudden reductions in traction can cause loss of control and accidents. Control Steering Most vehicles, with the notable exception of railed vehicles, have at least one steering mechanism. Wheeled vehicles steer by angling their front or rear wheels. The B-52 Stratofortress has a special arrangement in which all four main wheels can be angled. Skids can also be used to steer by angling them, as in the case of a snowmobile. Ships, boats, submarines, dirigibles and aeroplanes usually have a rudder for steering. On an airplane, ailerons are used to bank the airplane for directional control, sometimes assisted by the rudder. Stopping With no power applied, most vehicles come to a stop due to friction. But it is often required to stop a vehicle faster than by friction alone, so almost all vehicles are equipped with a braking system. Wheeled vehicles are typically equipped with friction brakes, which use the friction between brake pads (stators) and brake rotors to slow the vehicle. Many airplanes have high-performance versions of the same system in their landing gear for use on the ground. A Boeing 757 brake, for example, has 3 stators and 4 rotors. The Space Shuttle also uses frictional brakes on its wheels. As well as frictional brakes, hybrid and electric cars, trolleybuses and electric bicycles can also use regenerative brakes to recycle some of the vehicle's kinetic energy. High-speed trains sometimes use frictionless eddy current brakes; however, widespread application of the technology has been limited by overheating and interference issues. Aside from landing gear brakes, most large aircraft have other ways of decelerating. In aircraft, air brakes are aerodynamic surfaces that provide braking force by increasing the frontal cross section, thus increasing the aerodynamic drag of the aircraft (the underlying drag relation is sketched below). These are usually implemented as flaps that oppose air flow when extended and are flush with the aircraft when retracted. Reverse thrust is also used in many aeroplane engines. Propeller aircraft achieve reverse thrust by reversing the pitch of the propellers, while jet aircraft do so by redirecting their engine exhausts forward. On aircraft carriers, arresting gears are used to stop an aircraft. Pilots may even apply full forward throttle on touchdown, in case the arresting gear does not catch and a go-around is needed. 
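The role drag plays in both streamlining and air braking, as described above, can be summarized with the standard drag equation; this is a textbook sketch rather than a formula taken from the article: F = ½ ρ v^2 Cd A, and the power absorbed by drag is P = F v, where ρ is the fluid density, v the speed, Cd the drag coefficient and A the frontal area. Streamlining lowers Cd, air brakes deliberately raise the product Cd·A, and because drag power grows with the cube of speed, both effects matter most at high speed.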
Parachutes are used to slow down vehicles travelling very fast. Parachutes have been used in land, air and space vehicles such as the ThrustSSC, Eurofighter Typhoon and Apollo Command Module. Some older Soviet passenger jets had braking parachutes for emergency landings. Boats use similar devices called sea anchors to maintain stability in rough seas. To further increase the rate of deceleration or where the brakes have failed, several mechanisms can be used to stop a vehicle. Cars and rolling stock usually have hand brakes that, while designed to secure an already parked vehicle, can provide limited braking should the primary brakes fail. A secondary procedure called forward-slip is sometimes used to slow airplanes by flying at an angle, causing more drag. Legislation Motor vehicle and trailer categories are defined according to the following international classification: Category M: passenger vehicles. Category N: motor vehicles for the carriage of goods. Category O: trailers and semi-trailers. European Union In the European Union, the classifications for vehicle types are defined by Commission Directive 2001/116/EC of 20 December 2001 (adapting to technical progress Council Directive 70/156/EEC on the approximation of the laws of the Member States relating to the type-approval of motor vehicles and their trailers) and Directive 2002/24/EC of the European Parliament and of the Council of 18 March 2002 (relating to the type-approval of two- or three-wheeled motor vehicles and repealing Council Directive 92/61/EEC). Vehicle type approval in the European Community is based on the Community's WVTA (whole vehicle type-approval) system. Under this system, manufacturers can obtain certification for a vehicle type in one Member State if it meets the EC technical requirements and then market it EU-wide with no need for further tests. Total technical harmonization already has been achieved in three vehicle categories (passenger cars, motorcycles, and tractors) and soon will extend to other vehicle categories (coaches and utility vehicles). It is essential that European car manufacturers be ensured access to as large a market as possible. While the Community type-approval system allows manufacturers to benefit fully from internal market opportunities, worldwide technical harmonization in the context of the United Nations Economic Commission for Europe (UNECE) offers a market beyond European borders. 
The Toronto Police Service, for example, offers free and optional bicycle registration online. On motor vehicles, registration often takes the form of a vehicle registration plate, which makes it easy to identify a vehicle. In Russia, trucks and buses have their licence plate numbers repeated in large black letters on the back. On aircraft, a similar system is used, where a tail number is painted on various surfaces. Like motor vehicles and aircraft, watercraft also have registration numbers in most jurisdictions; however, the vessel name is still the primary means of identification, as has been the case since ancient times. For this reason, duplicate registration names are generally rejected. In Canada, boats with an engine power of 10 hp (7.5 kW) or greater require registration, leading to the ubiquitous "9.9 hp" engine. Registration may be conditional on the vehicle being approved for use on public highways, as in the case of the UK and Ontario. Many U.S. states also have requirements for vehicles operating on public highways. Aircraft have more stringent requirements, as they pose a high risk of damage to people and property in the event of an accident. In the U.S., the FAA requires aircraft to have an airworthiness certificate. Because U.S. aircraft must be flown for some time before they are certified, there is a provision for an experimental airworthiness certificate. FAA experimental aircraft are restricted in operation: for example, they may not fly over populated areas or in busy airspace, or carry non-essential passengers. Materials and parts used in FAA-certified aircraft must meet the criteria set forth by the technical standard orders. Mandatory safety equipment In many jurisdictions, the operator of a vehicle is legally obligated to carry safety equipment with or on them. Common examples include seat belts in cars, helmets on motorcycles and bicycles, fire extinguishers on boats, buses and airplanes, and life jackets on boats and commercial aircraft. Passenger aircraft carry a great deal of safety equipment, including inflatable slides, rafts, oxygen masks, oxygen tanks, life jackets, satellite beacons and first aid kits. Some equipment, such as life jackets, has led to debate regarding its usefulness. In the case of Ethiopian Airlines Flight 961, the life jackets saved many people but also led to many deaths when passengers inflated their vests prematurely. Right-of-way There are specific real-estate arrangements made to allow vehicles to travel from one place to another. The most common arrangements are public highways, where appropriately licensed vehicles can navigate without hindrance. These highways are on public land and are maintained by the government. Similarly, toll routes are open to the public after paying a toll. These routes and the land they rest on may be government-owned, privately owned or a combination of both. Some routes are privately owned but grant access to the public. These routes often have a warning sign stating that the government does not maintain them. Examples of this are byways in England and Wales. In Scotland, land is open to unmotorized vehicles if it meets certain criteria. Public land is sometimes open to use by off-road vehicles. On U.S. public land, the Bureau of Land Management (BLM) decides where vehicles may be used. Railways often pass over land not owned by the railway company. The right to this land is granted to the railway company through mechanisms such as easement. 
Watercraft are generally allowed to navigate public waters without restriction as long as they do not cause a disturbance. Passing through a lock, however, may require paying a toll. Despite the common law tradition Cuius est solum, eius est usque ad coelum et ad inferos of owning all the air above one's property, the U.S. Supreme Court ruled that aircraft in the U.S. have the right to use air above someone else's property without their consent. While the same rule generally applies in all jurisdictions, some countries, such as Cuba and Russia, have taken advantage of air rights on a national level to earn money. There are some areas that aircraft are barred from overflying. This is called prohibited airspace. Prohibited airspace is usually strictly enforced due to potential damage from espionage or attack. In the case of Korean Air Lines Flight 007, the airliner entered prohibited airspace over Soviet territory and was shot down as it was leaving. Safety Several different metrics are used to compare and evaluate the safety of different vehicles. The main three are deaths per billion passenger-journeys, deaths per billion passenger-hours and deaths per billion passenger-kilometers.
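These three metrics can rank the same modes of transport differently, because they are linked by trip length and speed. As a minimal sketch with purely hypothetical numbers (none taken from the text): a per-hour rate follows from a per-kilometre rate multiplied by the average speed, i.e. deaths per passenger-hour = deaths per passenger-km × average speed in km/h. A hypothetical mode A with 0.1 deaths per billion passenger-km travelling at 800 km/h works out to 80 deaths per billion passenger-hours, while a hypothetical mode B with 2 deaths per billion passenger-km at 30 km/h works out to 60 per billion passenger-hours: A looks safer per kilometre, B per hour, which is why the choice of metric matters.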
Technology
Basics_11
null
32416
https://en.wikipedia.org/wiki/Valley
Valley
A valley is an elongated low area often running between hills or mountains and typically containing a river or stream running from one end to the other. Most valleys are formed by erosion of the land surface by rivers or streams over a very long period. Some valleys are formed through erosion by glacial ice. These glaciers may remain present in valleys in high mountains or polar areas. At lower latitudes and altitudes, these glacially formed valleys may have been created or enlarged during ice ages but now are ice-free and occupied by streams or rivers. In desert areas, valleys may be entirely dry or carry a watercourse only rarely. In areas of limestone bedrock, dry valleys may also result from drainage now taking place underground rather than at the surface. Rift valleys arise principally from earth movements, rather than erosion. Many different types of valleys are described by geographers, using terms that may be global in use or else applied only locally. Formation of valleys Valleys may arise through several different processes. Most commonly, they arise from erosion over long periods by moving water and are known as river valleys. Typically small valleys containing streams feed into larger valleys which in turn feed into larger valleys again, eventually reaching the ocean or perhaps an internal drainage basin. In polar areas and at high altitudes, valleys may be eroded by glaciers; these typically have a U-shaped profile in cross-section, in contrast to river valleys, which tend to have a V-shaped profile. Other valleys may arise principally through tectonic processes such as rifting. All three processes can contribute to the development of a valley over geological time. The flat (or relatively flat) portion of a valley between its sides is referred to as the valley floor. The valley floor is typically formed by river sediments and may have fluvial terraces. River valleys The development of a river valley is affected by the character of the bedrock over which the river or stream flows, the elevational difference between its top and bottom, and indeed the climate. Typically the flow will increase downstream and the gradient will decrease. In the upper valley, the stream will most effectively erode its bed through corrasion to produce a steep-sided V-shaped valley. The presence of more resistant rock bands, of geological faults, fractures, and folds may determine the course of the stream and result in a twisting course with interlocking spurs. In the middle valley, as numerous streams have coalesced, the valley is typically wider, the flow slower and both erosion and deposition may take place. More lateral erosion takes place in the middle section of a river's course, as strong currents on the outside of its curve erode the bank. Conversely, deposition may take place on the inside of curves where the current is much slacker, the process leading to the river assuming a meandering character. In the lower valley, gradients are lowest, meanders may be much broader and a broader floodplain may result. Deposition dominates over erosion. A typical river basin or drainage basin will incorporate each of these different types of valleys. Some sections of a stream or river valleys may have vertically incised their course to such an extent that the valley they occupy is best described as a gorge, ravine, or canyon. 
Rapid down-cutting may result from localized uplift of the land surface or rejuvenation of the watercourse as a result, for example, of a reduction in the base level to which the river is eroded, e.g. lowered global sea level during an ice age. Such rejuvenation may also result in the production of river terraces. Glacial valleys There are various forms of valleys associated with glaciation. True glacial valleys are those that have been cut by a glacier, which may or may not still occupy the valley at the present day. Such valleys may also be known as glacial troughs. They typically have a U-shaped cross-section and are characteristic landforms of mountain areas where glaciation has occurred or continues to take place. The uppermost part of a glacial valley frequently consists of one or more 'armchair-shaped' hollows, or 'cirques', excavated by the rotational movement downslope of a cirque glacier. During glacial periods, for example the Pleistocene ice ages, it is in these locations that glaciers initially form and then, as the ice age proceeds, extend downhill through valleys that have previously been shaped by water rather than ice. Abrasion by rock material embedded within the moving glacial ice causes the widening and deepening of the valley to produce the characteristic U or trough shape with relatively steep, even vertical sides and a relatively flat bottom. Interlocking spurs associated with the development of river valleys are preferentially eroded to produce truncated spurs, typical of glaciated mountain landscapes. The upper end of the trough below the ice-contributing cirques may be a trough-end. Valley steps (or 'rock steps') can result from differing erosion rates due to both the nature of the bedrock (hardness and jointing, for example) and the power of the moving ice. In places, a rock basin may be excavated which may later be filled with water to form a ribbon lake, or else by sediments. Such features are found in coastal areas as fjords. The shape of the valley which results from all of these influences may only become visible upon the recession of the glacier that forms it. A river or stream may remain in the valley; if it is smaller than one would expect given the size of its valley, it can be considered an example of a misfit stream. Other interesting glacially carved valleys include: Yosemite Valley (United States); the side valleys of the Austrian river Salzach, for their parallel directions and hanging mouths; and that of the St. Mary River in Glacier National Park in Montana, United States. Tunnel A tunnel valley is a large, long, U-shaped valley originally cut under the glacial ice near the margin of continental ice sheets such as that now covering Antarctica and formerly covering portions of all continents during past glacial ages. Such valleys can be extremely long, wide, and deep, and their depth may vary along their length. Tunnel valleys were formed by subglacial water erosion. They once served as subglacial drainage pathways carrying large volumes of meltwater. Their cross-sections exhibit steep-sided flanks similar to fjord walls, and their flat bottoms are typical of subglacial glacial erosion. Meltwater In northern Central Europe, the Scandinavian ice sheet during the various ice ages advanced slightly uphill against the lie of the land. As a result, its meltwaters flowed parallel to the ice margin to reach the North Sea basin, forming huge, flat valleys known as Urstromtäler. Unlike the other forms of glacial valleys, these were formed by glacial meltwaters. 
Transition forms and shoulders Depending on the topography, the rock types, and the climate, a variety of transitional forms between V-, U- and plain valleys can form. The floor or bottom of these valleys can be broad or narrow, but all valleys have a shoulder. The broader a mountain valley, the lower its shoulders are located in most cases. An important exception is canyons, where the shoulder is almost at the top of the valley's slope. In the Alps – e.g. the Tyrolean Inn valley – the shoulders are quite low (100–200 meters above the bottom). Many villages are located here (especially on the sunny side) because the climate is very mild: even in winter, when the valley's floor is filled with fog, these villages are in sunshine. In some stress-tectonic regions of the Rocky Mountains or the Alps (e.g. Salzburg), the side valleys are parallel to each other, and are hanging. Smaller streams flow into rivers as deep canyons or waterfalls. Hanging tributary A hanging valley is a tributary valley that is higher than the main valley. They are most commonly associated with U-shaped valleys, where a tributary glacier flows into a glacier of larger volume. The main glacier erodes a deep U-shaped valley with nearly vertical sides, while the tributary glacier, with a smaller volume of ice, makes a shallower U-shaped valley. Since the surfaces of the glaciers were originally at the same elevation, the shallower valley appears to be 'hanging' above the main valley. Often, waterfalls form at or near the outlet of the upper valley. Hanging valleys also occur in fjord systems underwater. The branches of Sognefjord are much shallower than the main fjord: the mouth of Fjærlandsfjord is far shallower than the main fjord nearby, and the mouth of Ikjefjord is likewise much shallower than the main fjord at the same point. Glaciated terrain is not the only site of hanging streams and valleys. Hanging valleys are also simply the product of varying rates of erosion of the main valley and the tributary valleys. The varying rates of erosion are associated with the composition of the adjacent rocks in the different valley locations. The tributary valleys are eroded and deepened by glaciers or erosion at a slower rate than that of the main valley floor; thus the difference in the two valleys' depth increases over time. The tributary valley, composed of more resistant rock, then hangs over the main valley. Trough-shaped Trough-shaped valleys also form in regions of heavy topographic denudation. By contrast with glacial U-shaped valleys, there is less downward and sideways erosion. The severe downslope denudation results in gently sloping valley sides; their transition to the actual valley bottom is unclear. Trough-shaped valleys occur mainly in periglacial regions and in tropical regions of variable wetness. Both climates are dominated by heavy denudation. Box Box valleys have wide, relatively level floors and steep sides. They are common in periglacial areas and occur in mid-latitudes, but also occur in tropical and arid regions. Rift Rift valleys, such as the Albertine Rift and the Gregory Rift, are formed by the expansion of the Earth's crust due to tectonic activity beneath the Earth's surface. Terms for valleys There are many terms used for different sorts of valleys. They include: Cove: A small valley, closed at one or both ends, in the central or southern Appalachian Mountains, which sometimes results from the erosion of a geologic window. Dell: A small, secluded, and often wooded valley. 
Dry valley: A valley not created by sustained surface water flow. Erosional valley: A valley formed by erosion. Hollow: A term used regionally for a small valley surrounded by mountains or ridges. In Ireland, New England, Appalachia, and the Ozarks of Arkansas and Missouri, a hollow is a small valley or dry stream bed; often called a holler. Longitudinal valley: An elongated valley found between two nearly-parallel mountain chains. Steephead valley: A deep, narrow, flat-bottomed valley with an abrupt ending. Strike valley: A valley typically developed parallel to a cuesta from more readily eroded strata. Structural valley: A valley formed by geologic events such as drop faults or the rise of highlands. Similar geographical features such as gullies, chines, and kloofs, are not usually referred to as valleys. British regional terms for valleys The terms corrie, glen, and strath are all Anglicisations of Gaelic terms and are commonly encountered in place-names in Scotland and other areas where Gaelic was once widespread. Strath signifies a wide valley between hills, the floor of which is either level or slopes gently. A glen is a river valley which is steeper and narrower than a strath. A corrie is a basin-shaped hollow in a mountain. Each of these terms also occurs in parts of the world formerly colonized by Britain. Corrie is used more widely by geographers as a synonym for (glacial) cirque, as is the word cwm borrowed from Welsh. The word dale occurs widely in place names in the north of England and, to a lesser extent, in southern Scotland. As a generic name for a type of valley, the term typically refers to a wide valley, though there are many much smaller stream valleys within the Yorkshire Dales which are named "(specific name) Dale". Clough is a word in common use in northern England for a narrow valley with steep sides. Gill is used to describe a ravine containing a mountain stream in Cumbria and the Pennines. The term combe (also encountered as coombe) is widespread in southern England and describes a short valley set into a hillside. Other terms for small valleys such as hope, dean, slade, slack and bottom are commonly encountered in place-names in various parts of England but are no longer in general use as synonyms for valley. The term vale is used in England and Wales to describe a wide river valley, usually with a particularly wide flood plain or flat valley bottom. In Southern England, vales commonly occur between the outcrops of different relatively erosion-resistant rock formations, where less resistant rock, often claystone has been eroded. An example is the Vale of White Horse in Oxfordshire. Human settlement Some of the first human complex societies originated in river valleys, such as that of the Nile, Tigris-Euphrates, Indus, Ganges, Yangtze, Yellow River, Mississippi, and arguably the Amazon. In prehistory, the rivers were used as a source of fresh water and food (fish and game), as well as a place to wash and a sewer. The proximity of water moderated temperature extremes and provided a source for irrigation, stimulating the development of agriculture. Most of the first civilizations developed from these river valley communities. Siting of settlements within valleys is influenced by many factors, including the need to avoid flooding and the location of river crossing points. 
Notable examples Africa Albertine Rift East African Rift Ethiopian Rift Valley Great Rift Valley Nile Valley (Egypt/Sudan/Ethiopia/Uganda) Nugaal Valley (Somalia) Umba Valley (Tanzania) Valley of the Kings (Egypt) Asia List of valleys in India List of valleys in Pakistan Beqaa Valley (Lebanon) Dang Valley (Western Nepal) Emin Valley (Kazakhstan) Ihlara, Turkey Jordan Rift Valley (Jordan - Israel) Jordan Valley Kathmandu (Nepal) Klang Valley (Malaysia) Mahaweli (Sri Lanka) Panjshir Valley (Afghanistan) Valleys of China Baligou Valley Emin Valley Heizhu Valley Insukati Valley Jiuzhaigou Valley Mutou Valley Oceania Barossa Valley (Australia) Bulolo Valley (Papua New Guinea) Cagayan Valley (Philippines) Capertee Valley (Australia) Hunter Valley (Australia) Hutt Valley (New Zealand) Kangaroo Valley (Australia) Markham Valley (Papua New Guinea) Strath Taieri (New Zealand) Swan Valley (Australia) Europe Bergensdalen (Vestland, Norway) Dalen, Telemark (Telemark, Norway) Danube Valley (Eastern Europe) Evrotas Valley, Sparta (Greece) Glen Coe (Scotland, United Kingdom) Great Glen (Scotland, United Kingdom) Gudbrandsdalen (Oppland, Norway) Hallingdalen (Buskerud, Norway) Heddal (Telemark, Norway) Iron Gate (Romania/Serbia) Lauterbrunnen Valley (Bern, Switzerland) Loire Valley with its famous castles (France) Midt-Telemark (Telemark, Norway) Nant Ffrancon (Wales, United Kingdom) Numedalen (Buskerud, Norway) Østerdalen (Hedmark, Norway) Po Valley, (Italy) Rhone Valley from the Matterhorn to Grenoble and Lyon (France) Romsdalen (Møre Og Romsdal, Norway) Setesdal (Agder, Norway) South Wales Valleys (Wales, United Kingdom) Upper Rhine Valley or Upper Rhine Plain, an old graben system. (France and Germany) Vestfjorddalen (Norway) North America Central Valley (California) Coachella Valley (California) Cumberland Valley (Maryland/Pennsylvania) Cuyahoga Valley (Ohio) Death Valley (California) Fraser Canyon (British Columbia) Fraser Valley (British Columbia) Grand Canyon (Arizona, United States) Hell's Gate (British Columbia) Hudson Valley (New York) Imperial Valley (California) Las Vegas Valley (Nevada) Missouri River Valley (Missouri) Monument Valley (Arizona, Utah) Napa Valley (California) Okanagan Valley (British Columbia) Ottawa Valley (Ontario/Quebec) Palo Duro Canyon (Texas) Valley of the Sun (Arizona) Rio Grande Valley (Texas) Rocky Mountain Trench (British Columbia/Montana) Saint Lawrence Valley (Ontario/Quebec/New York) Salt Lake Valley (Utah) San Fernando Valley (California) Shenandoah Valley (Virginia/West Virginia) Sonoma Valley (California) Toluca Valley (Mexico) Valley of the Gods (Utah) Valley of Mexico (Mexico) Willamette Valley (Oregon) Yosemite Valley (California) South America Aburra Valley (Colombia) Calchaquí Valleys (Argentina) Cauca Valley (Colombia) Ischigualasto Valley of the Moon (Argentina) Paraíba Valley (Brazil) Antarctica West Antarctic Rift System Extraterrestrial valleys Numerous elongate depressions have been identified on the surface of Mars, Venus, the Moon, and other planets and their satellites and are known as valles (singular: 'vallis'). Deeper valleys with steeper sides (akin to canyons) on certain of these bodies are known as chasmata (singular: 'chasma'). Long narrow depressions are referred to as fossae (singular: 'fossa'). These are the Latin terms for 'valley, 'gorge' and 'ditch' respectively. The German term 'rille' or Latin term 'rima' (signifying 'cleft') is used for certain other elongate depressions on the Moon.
Physical sciences
Landforms
null
32431
https://en.wikipedia.org/wiki/Vanadium
Vanadium
Vanadium is a chemical element; it has symbol V and atomic number 23. It is a hard, silvery-grey, malleable transition metal. The elemental metal is rarely found in nature, but once isolated artificially, the formation of an oxide layer (passivation) somewhat stabilizes the free metal against further oxidation. Spanish-Mexican scientist Andrés Manuel del Río discovered compounds of vanadium in 1801 by analyzing a new lead-bearing mineral he called "brown lead". Though he initially presumed its qualities were due to the presence of a new element, he was later erroneously convinced by French chemist Hippolyte Victor Collet-Descotils that the element was just chromium. Then in 1830, Nils Gabriel Sefström generated chlorides of vanadium, thus proving there was a new element, and named it "vanadium" after the Scandinavian goddess of beauty and fertility, Vanadís (Freyja). The name was based on the wide range of colors found in vanadium compounds. Del Río's lead mineral was ultimately named vanadinite for its vanadium content. In 1867, Henry Enfield Roscoe obtained the pure element. Vanadium occurs naturally in about 65 minerals and fossil fuel deposits. It is produced in China and Russia from steel smelter slag. Other countries produce it either from magnetite directly, flue dust of heavy oil, or as a byproduct of uranium mining. It is mainly used to produce specialty steel alloys such as high-speed tool steels, and some aluminium alloys. The most important industrial vanadium compound, vanadium pentoxide, is used as a catalyst for the production of sulfuric acid. The vanadium redox battery for energy storage may be an important application in the future. Large amounts of vanadium ions are found in a few organisms, possibly as a toxin. The oxide and some other salts of vanadium have moderate toxicity. Particularly in the ocean, vanadium is used by some life forms as an active center of enzymes, such as the vanadium bromoperoxidase of some ocean algae. History Vanadium was discovered in Mexico in 1801 by the Spanish mineralogist Andrés Manuel del Río. Del Río extracted the element from a sample of Mexican "brown lead" ore, later named vanadinite. He found that its salts exhibit a wide variety of colors, and as a result, he named the element panchromium (Greek: παγχρώμιο "all colors"). Later, del Río renamed the element erythronium (Greek: ερυθρός "red") because most of the salts turned red upon heating. In 1805, French chemist Hippolyte Victor Collet-Descotils, backed by del Río's friend Baron Alexander von Humboldt, incorrectly declared that del Río's new element was an impure sample of chromium. Del Río accepted Collet-Descotils' statement and retracted his claim. In 1831 Swedish chemist Nils Gabriel Sefström rediscovered the element in a new oxide he found while working with iron ores. Later that year, Friedrich Wöhler confirmed that this element was identical to that found by del Río and hence confirmed del Río's earlier work. Sefström chose a name beginning with V, which had not yet been assigned to any element. He called the element vanadium after Old Norse Vanadís (another name for the Norse Vanir goddess Freyja, whose attributes include beauty and fertility), because of the many beautifully colored chemical compounds it produces. On learning of Wöhler's findings, del Río began to passionately argue that his old claim be recognized, but the element kept the name vanadium. 
In 1831, the geologist George William Featherstonhaugh suggested that vanadium should be renamed "rionium" after del Río, but this suggestion was not followed. As vanadium is usually found combined with other elements, the isolation of vanadium metal was difficult. In 1831, Berzelius reported the production of the metal, but Henry Enfield Roscoe showed that Berzelius had produced the nitride, vanadium nitride (VN). Roscoe eventually produced the metal in 1867 by reduction of vanadium(II) chloride, VCl2, with hydrogen. In 1927, pure vanadium was produced by reducing vanadium pentoxide with calcium. The first large-scale industrial use of vanadium was in the steel alloy chassis of the Ford Model T, inspired by French race cars. Vanadium steel allowed reduced weight while increasing tensile strength. For the first decade of the 20th century, most vanadium ore was mined by the American Vanadium Company from the Minas Ragra in Peru. Later, the demand for uranium rose, leading to increased mining of that metal's ores. One major uranium ore was carnotite, which also contains vanadium. Thus, vanadium became available as a by-product of uranium production. Eventually, uranium mining began to supply a large share of the demand for vanadium. In 1911, German chemist Martin Henze discovered vanadium in the hemovanadin proteins found in blood cells (or coelomic cells) of Ascidiacea (sea squirts). Characteristics Vanadium is an average-hard, ductile, steel-blue metal. Vanadium is usually described as "soft", because it is ductile, malleable, and not brittle. Vanadium is harder than most metals and steels (see Hardnesses of the elements (data page) and iron). It has good resistance to corrosion and it is stable against alkalis and sulfuric and hydrochloric acids. It is oxidized in air at about 933 K (660 °C, 1220 °F), although an oxide passivation layer forms even at room temperature. It also reacts with hydrogen peroxide. Isotopes Naturally occurring vanadium is composed of one stable isotope, 51V, and one radioactive isotope, 50V. The latter has a half-life of 2.71×10^17 years and a natural abundance of 0.25%. 51V has a nuclear spin of 7/2, which is useful for NMR spectroscopy. Twenty-four artificial radioisotopes have been characterized, ranging in mass number from 40 to 65. The most stable of these isotopes are 49V with a half-life of 330 days, and 48V with a half-life of 16.0 days. The remaining radioactive isotopes have half-lives shorter than an hour, most below 10 seconds. At least four isotopes have metastable excited states. Electron capture is the main decay mode for isotopes lighter than 51V. For the heavier ones, the most common mode is beta decay. The electron capture reactions lead to the formation of element 22 (titanium) isotopes, while beta decay leads to element 24 (chromium) isotopes. Compounds The chemistry of vanadium is noteworthy for the accessibility of the four adjacent oxidation states +2 through +5. In an aqueous solution, vanadium forms metal aquo complexes of which the colors are lilac [V(H2O)6]2+, green [V(H2O)6]3+, blue [VO(H2O)5]2+, and yellow-orange oxides such as [VO2(H2O)4]+, the formula for which depends on pH. Vanadium(II) compounds are reducing agents, and vanadium(V) compounds are oxidizing agents. Vanadium(IV) compounds often exist as vanadyl derivatives, which contain the VO2+ center. Ammonium vanadate(V) (NH4VO3) can be successively reduced with elemental zinc to obtain the different colors of vanadium in these four oxidation states.
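The stepwise reduction just mentioned can be written out explicitly. The half-reactions below are standard textbook forms given for illustration rather than quoted from this article; the colour labels follow the aqua-complex colours listed above.

```latex
% Stepwise reduction of vanadium(V) to vanadium(II) by zinc in acidic solution
% (standard textbook half-reactions; zinc supplies the electrons)
\begin{align*}
\mathrm{VO_2^{+} + 2\,H^{+} + e^{-}} &\rightarrow \mathrm{VO^{2+} + H_2O}
    && \text{V(V), yellow} \rightarrow \text{V(IV), blue}\\
\mathrm{VO^{2+} + 2\,H^{+} + e^{-}} &\rightarrow \mathrm{V^{3+} + H_2O}
    && \text{V(IV), blue} \rightarrow \text{V(III), green}\\
\mathrm{V^{3+} + e^{-}} &\rightarrow \mathrm{V^{2+}}
    && \text{V(III), green} \rightarrow \text{V(II), violet}\\
\mathrm{Zn} &\rightarrow \mathrm{Zn^{2+} + 2\,e^{-}}
    && \text{oxidation of the zinc reductant}
\end{align*}
```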
Lower oxidation states occur in compounds such as V(CO)6 and substituted derivatives. Vanadium pentoxide is a commercially important catalyst for the production of sulfuric acid, a reaction that exploits the ability of vanadium oxides to undergo redox reactions. The vanadium redox battery utilizes all four oxidation states: one electrode uses the +5/+4 couple and the other uses the +3/+2 couple. Conversion of these oxidation states is illustrated by the reduction of a strongly acidic solution of a vanadium(V) compound with zinc dust or amalgam. The initial yellow color characteristic of the pervanadyl ion [VO2(H2O)4]+ is replaced by the blue color of [VO(H2O)5]2+, followed by the green color of [V(H2O)6]3+ and then the violet color of [V(H2O)6]2+. Another potential vanadium battery, based on VB2, uses multiple oxidation states to allow 11 electrons to be released per VB2 unit, giving it a far higher energy capacity per unit volume than Li-ion batteries or gasoline. VB2 batteries can be further enhanced as air batteries, allowing for even higher energy density and lower weight than lithium batteries or gasoline, even though recharging remains a challenge. Oxyanions In an aqueous solution, vanadium(V) forms an extensive family of oxyanions as established by 51V NMR spectroscopy. The interrelationships in this family are described by the predominance diagram, which shows at least 11 species, depending on pH and concentration. The tetrahedral orthovanadate ion, [VO4]3−, is the principal species present at pH 12–14. Similar in size and charge to phosphorus(V), vanadium(V) also parallels its chemistry and crystallography. Orthovanadate is used in protein crystallography to study the biochemistry of phosphate. Besides that, this anion also has been shown to interact with the activity of some specific enzymes. The tetrathiovanadate [VS4]3− is analogous to the orthovanadate ion. At lower pH values, the monomer [HVO4]2− and dimer [V2O7]4− are formed, with the monomer predominant at a vanadium concentration of less than c. 10^−2 M (pV > 2, where pV is the negative logarithm of the total vanadium concentration in mol/L). The formation of the divanadate ion is analogous to the formation of the dichromate ion. As the pH is reduced, further protonation and condensation to polyvanadates occur: at pH 4–6 [H2VO4]− is predominant at pV greater than ca. 4, while at higher concentrations trimers and tetramers are formed. Between pH 2 and 4, decavanadate predominates; its formation from orthovanadate is represented by this condensation reaction: 10 [VO4]3− + 24 H+ → [V10O28]6− + 12 H2O In decavanadate, each V(V) center is surrounded by six oxide ligands. Vanadic acid, H3VO4, exists only at very low concentrations because protonation of the tetrahedral species [H2VO4]− results in the preferential formation of the octahedral [VO2(H2O)4]+ species. In strongly acidic solutions, pH < 2, [VO2(H2O)4]+ is the predominant species, while the oxide V2O5 precipitates from solution at high concentrations. The oxide is formally the acid anhydride of vanadic acid. The structures of many vanadate compounds have been determined by X-ray crystallography. Vanadium(V) forms various peroxo complexes, most notably in the active site of the vanadium-containing bromoperoxidase enzymes. The species [VO(O2)(H2O)4]+ is stable in acidic solutions.
In alkaline solutions, species with 2, 3 and 4 peroxide groups are known; the last forms violet salts with the formula M3V(O2)4·nH2O (M = Li, Na, etc.), in which the vanadium has an 8-coordinate dodecahedral structure. Halide derivatives Twelve binary halides, compounds with the formula VXn (n = 2–5), are known. VI4, VCl5, VBr5, and VI5 do not exist or are extremely unstable. In combination with other reagents, VCl4 is used as a catalyst for the polymerization of dienes. Like all binary halides, those of vanadium are Lewis acidic, especially those of V(IV) and V(V). Many of the halides form octahedral complexes with the formula VXnL6−n (X = halide; L = other ligand). Many vanadium oxyhalides (formula VOmXn) are known. The oxytrichloride and oxytrifluoride (VOCl3 and VOF3) are the most widely studied. Akin to POCl3, they are volatile, adopt tetrahedral structures in the gas phase, and are Lewis acidic. Coordination compounds Complexes of vanadium(II) and (III) are reducing, while those of V(IV) and V(V) are oxidants. The vanadium ion is rather large and some complexes achieve coordination numbers greater than 6, as is the case in [V(CN)7]4−. Oxovanadium(V) also forms 7-coordinate complexes with tetradentate ligands and peroxides, and these complexes are used for oxidative brominations and thioether oxidations. The coordination chemistry of V4+ is dominated by the vanadyl center, VO2+, which binds four other ligands strongly and one weakly (the one trans to the vanadyl center). An example is vanadyl acetylacetonate (V(O)(O2C5H7)2). In this complex, the vanadium is 5-coordinate, distorted square pyramidal, meaning that a sixth ligand, such as pyridine, may be attached, though the association constant of this process is small. Many 5-coordinate vanadyl complexes have a trigonal bipyramidal geometry, such as VOCl2(NMe3)2. The coordination chemistry of V5+ is dominated by the relatively stable dioxovanadium coordination complexes, which are often formed by aerial oxidation of vanadium(IV) precursors, indicating the stability of the +5 oxidation state and the ease of interconversion between the +4 and +5 states. Organometallic compounds The organometallic chemistry of vanadium is well developed. Vanadocene dichloride is a versatile starting reagent and has applications in organic chemistry. Vanadium carbonyl, V(CO)6, is a rare example of a paramagnetic metal carbonyl. Reduction yields the anion [V(CO)6]− (isoelectronic with Cr(CO)6), which may be further reduced with sodium in liquid ammonia to yield [V(CO)5]3− (isoelectronic with Fe(CO)5). Occurrence Metallic vanadium is rare in nature (known as native vanadium), having been found among fumaroles of the Colima Volcano, but vanadium compounds occur naturally in about 65 different minerals. Vanadium began to be used in the manufacture of special steels in 1896. At that time, very few deposits of vanadium ores were known. Between 1899 and 1906, the main deposits exploited were the mines of Santa Marta de los Barros (Badajoz), Spain. Vanadinite was extracted from these mines. At the beginning of the 20th century, a large deposit of vanadium ore was discovered near Junín, Cerro de Pasco, Peru (now the Minas Ragra vanadium mine). For several years this patrónite (VS4) deposit was an economically significant source for vanadium ore. In 1920 roughly two-thirds of the worldwide production was supplied by the mine in Peru. With the production of uranium in the 1910s and 1920s from carnotite (K2(UO2)2(VO4)2·3H2O), vanadium became available as a side product of uranium production.
Vanadinite (Pb5(VO4)3Cl) and other vanadium-bearing minerals are mined only in exceptional cases. With the rising demand, much of the world's vanadium production is now sourced from vanadium-bearing magnetite found in ultramafic gabbro bodies. If this titanomagnetite is used to produce iron, most of the vanadium goes to the slag and is extracted from it. Vanadium is mined mostly in China, South Africa and eastern Russia. In 2022 these three countries mined more than 96% of the 100,000 tons of vanadium produced, with China providing 70%. The fumaroles of Colima are known for being vanadium-rich, depositing vanadium minerals that include shcherbinaite (V2O5) and colimaite (K3VS4). Vanadium is also present in bauxite and deposits of crude oil, coal, oil shale, and tar sands. In crude oil, concentrations up to 1200 ppm have been reported. When such oil products are burned, traces of vanadium may cause corrosion in engines and boilers. An estimated 110,000 tons of vanadium per year are released into the atmosphere by burning fossil fuels. Black shales are also a potential source of vanadium. During World War II, some vanadium was extracted from alum shales in the south of Sweden. In the universe, the cosmic abundance of vanadium is 0.0001%, making the element nearly as common as copper or zinc. Vanadium is the 19th most abundant element in the crust. It is detected spectroscopically in light from the Sun and sometimes in the light from other stars. The vanadyl ion is also abundant in seawater, having an average concentration of 30 nM (1.5 mg/m3). Some mineral water springs also contain the ion in high concentrations. For example, springs near Mount Fuji contain as much as 54 μg per liter. Production Vanadium metal is obtained by a multistep process that begins with roasting crushed ore with NaCl or Na2CO3 at about 850 °C to give sodium metavanadate (NaVO3). An aqueous extract of this solid is acidified to produce "red cake", a polyvanadate salt, which is reduced with calcium metal. As an alternative for small-scale production, vanadium pentoxide is reduced with hydrogen or magnesium. Many other methods are also used, in all of which vanadium is produced as a byproduct of other processes. Purification of vanadium is possible by the crystal bar process developed by Anton Eduard van Arkel and Jan Hendrik de Boer in 1925. It involves the formation of the metal iodide, in this example vanadium(III) iodide, and the subsequent decomposition to yield pure metal: 2 V + 3 I2 ⇌ 2 VI3 Most vanadium is used as a steel alloy called ferrovanadium. Ferrovanadium is produced directly by reducing a mixture of vanadium oxide, iron oxides and iron in an electric furnace. The vanadium ends up in pig iron produced from vanadium-bearing magnetite. Depending on the ore used, the slag contains up to 25% of vanadium. Applications Alloys Approximately 85% of the vanadium produced is used as ferrovanadium or as a steel additive. The considerable increase of strength in steel containing small amounts of vanadium was discovered in the early 20th century. Vanadium forms stable nitrides and carbides, resulting in a significant increase in the strength of steel. From that time on, vanadium steel was used for applications in axles, bicycle frames, crankshafts, gears, and other critical components. There are two groups of vanadium steel alloys. Vanadium high-carbon steel alloys contain 0.15–0.25% vanadium, and high-speed tool steels (HSS) have a vanadium content of 1–5%. For high-speed tool steels, a hardness above HRC 60 can be achieved.
HSS steel is used in surgical instruments and tools. Powder-metallurgic alloys contain up to 18% vanadium. The high content of vanadium carbides in those alloys increases wear resistance significantly. One application for those alloys is tools and knives. Vanadium stabilizes the beta form of titanium and increases the strength and temperature stability of titanium. Mixed with aluminium in titanium alloys, it is used in jet engines, high-speed airframes and dental implants. The most common alloy for seamless tubing is Titanium 3/2.5 containing 2.5% vanadium, the titanium alloy of choice in the aerospace, defense, and bicycle industries. Another common alloy, primarily produced in sheets, is Titanium 6Al-4V, a titanium alloy with 6% aluminium and 4% vanadium. Several vanadium alloys show superconducting behavior. The first A15 phase superconductor was a vanadium compound, V3Si, which was discovered in 1952. Vanadium-gallium tape is used in superconducting magnets (17.5 teslas or 175,000 gauss). The structure of the superconducting A15 phase of V3Ga is similar to that of the more common Nb3Sn and Nb3Ti. It has been found that a small amount, 40 to 270 ppm, of vanadium in Wootz steel significantly improved the strength of the product, and gave it the distinctive patterning. The source of the vanadium in the original Wootz steel ingots remains unknown. Vanadium can be used as a substitute for molybdenum in armor steel, though the alloy produced is far more brittle and prone to spalling on non-penetrating impacts. The Third Reich was one of the most prominent users of such alloys, in armored vehicles like the Tiger II or Jagdtiger. Catalysts Vanadium compounds are used extensively as catalysts. Vanadium pentoxide, V2O5, is used as a catalyst in manufacturing sulfuric acid by the contact process. In this process, sulfur dioxide (SO2) is oxidized to the trioxide (SO3): 2 SO2 + O2 → 2 SO3 In this redox reaction, sulfur is oxidized from +4 to +6, and vanadium is reduced from +5 to +4: V2O5 + SO2 → 2 VO2 + SO3 The catalyst is regenerated by oxidation with air: 4 VO2 + O2 → 2 V2O5 Similar oxidations are used in the production of maleic anhydride: C4H10 + 3.5 O2 → C4H2O3 + 4 H2O Phthalic anhydride and several other bulk organic compounds are produced similarly. These green chemistry processes convert inexpensive feedstocks to highly functionalized, versatile intermediates. Vanadium is an important component of mixed metal oxide catalysts used in the oxidation of propane and propylene to acrolein or acrylic acid, and in the ammoxidation of propylene to acrylonitrile. Other uses The vanadium redox battery, a type of flow battery, is an electrochemical cell consisting of aqueous vanadium ions in different oxidation states. Batteries of this type were first proposed in the 1930s and developed commercially from the 1980s onwards. Cells use vanadium ions in the +5 and +2 formal oxidation states. Vanadium redox batteries are used commercially for grid energy storage. Vanadate can be used for protecting steel against rust and corrosion by conversion coating. Vanadium foil is used in cladding titanium to steel because it is compatible with both iron and titanium. The moderate thermal neutron-capture cross-section and the short half-life of the isotopes produced by neutron capture make vanadium a suitable material for the inner structure of a fusion reactor. Vanadium can be added in small quantities (below 5%) to LFP battery cathodes to increase ionic conductivity.
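As a rough illustration of the redox-flow chemistry described above (the +5/+4 couple at one electrode and the +3/+2 couple at the other), the open-circuit voltage of a vanadium redox cell can be estimated from standard electrode potentials. The potentials used below are approximate textbook values assumed for illustration, not figures taken from this article.

```latex
% Approximate standard potentials (textbook values, assumed for illustration):
%   positive electrode:  VO2^+ (dioxovanadium) + 2 H+ + e-  ->  VO^2+ (vanadyl) + H2O,   E° ≈ +1.00 V
%   negative electrode:  V^3+ + e-  ->  V^2+,                                            E° ≈ -0.26 V
\begin{align*}
E^{\circ}_{\text{cell}} &= E^{\circ}_{\text{positive}} - E^{\circ}_{\text{negative}}\\
&\approx (+1.00\ \mathrm{V}) - (-0.26\ \mathrm{V}) \approx 1.26\ \mathrm{V}
\end{align*}
```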
Proposed Lithium vanadium oxide has been proposed for use as a high energy density anode for lithium-ion batteries, at 745 Wh/L when paired with a lithium cobalt oxide cathode. Vanadium phosphates have been proposed as the cathode in the lithium vanadium phosphate battery, another type of lithium-ion battery. Biological role Vanadium has a more significant role in marine environments than terrestrial ones. Vanadoenzymes Several species of marine algae produce vanadium bromoperoxidase as well as the closely related chloroperoxidase (which may use a heme or vanadium cofactor) and iodoperoxidases. The bromoperoxidase produces an estimated 1–2 million tons of bromoform and 56,000 tons of bromomethane annually. Most naturally occurring organobromine compounds are produced by this enzyme, catalyzing the following reaction (R-H is a hydrocarbon substrate): R-H + Br− + H2O2 → R-Br + H2O + OH− A vanadium nitrogenase is used by some nitrogen-fixing micro-organisms, such as Azotobacter. In this role, vanadium serves in place of the more common molybdenum or iron, and gives the nitrogenase slightly different properties. Vanadium accumulation in tunicates Vanadium is essential to tunicates, where it is stored in the highly acidified vacuoles of certain blood cell types, designated vanadocytes. Vanabins (vanadium-binding proteins) have been identified in the cytoplasm of such cells. The concentration of vanadium in the blood of ascidian tunicates is as much as ten million times higher than in the surrounding seawater, which normally contains 1 to 2 μg/L. The function of this vanadium concentration system and these vanadium-bearing proteins is still unknown, but the vanadocytes are later deposited just under the outer surface of the tunic, where they may deter predation. Fungi Amanita muscaria and related species of macrofungi accumulate vanadium (up to 500 mg/kg in dry weight). Vanadium is present in the coordination complex amavadin in fungal fruit-bodies. The biological importance of the accumulation is unknown. Toxic or peroxidase enzyme functions have been suggested. Mammals Deficiencies in vanadium result in reduced growth in rats. The U.S. Institute of Medicine has not confirmed that vanadium is an essential nutrient for humans, so neither a Recommended Dietary Intake nor an Adequate Intake has been established. Dietary intake is estimated at 6 to 18 μg/day, with less than 5% absorbed. The Tolerable Upper Intake Level (UL) of dietary vanadium, beyond which adverse effects may occur, is set at 1.8 mg/day. Research Vanadyl sulfate as a dietary supplement has been researched as a means of increasing insulin sensitivity or otherwise improving glycemic control in people who are diabetic. Some of the trials had significant treatment effects but were deemed to be of poor study quality. The amounts of vanadium used in these trials (30 to 150 mg) far exceeded the safe upper limit. The conclusion of the systematic review was "There is no rigorous evidence that oral vanadium supplementation improves glycaemic control in type 2 diabetes. The routine use of vanadium for this purpose cannot be recommended." In astrobiology, it has been suggested that discrete vanadium accumulations on Mars could be a potential microbial biosignature when used in conjunction with Raman spectroscopy and morphology. Safety All vanadium compounds should be considered toxic. Tetravalent VOSO4 has been reported to be at least 5 times more toxic than trivalent V2O3.
The US Occupational Safety and Health Administration (OSHA) has set an exposure limit of 0.05 mg/m3 for vanadium pentoxide dust and 0.1 mg/m3 for vanadium pentoxide fumes in workplace air for an 8-hour workday, 40-hour work week. The US National Institute for Occupational Safety and Health (NIOSH) has recommended that 35 mg/m3 of vanadium be considered immediately dangerous to life and health, that is, likely to cause permanent health problems or death. Vanadium compounds are poorly absorbed through the gastrointestinal system. Inhalation of vanadium and vanadium compounds results primarily in adverse effects on the respiratory system. Quantitative data are, however, insufficient to derive a subchronic or chronic inhalation reference dose. Other effects on blood parameters, the liver, neurological development, and other organs have been reported in rats after oral or inhalation exposure. There is little evidence that vanadium or vanadium compounds are reproductive toxins or teratogens. Vanadium pentoxide was reported to be carcinogenic in male rats and in male and female mice by inhalation in an NTP study, although the interpretation of the results was disputed in the years after the report. The carcinogenicity of vanadium has not been determined by the United States Environmental Protection Agency. Vanadium traces in diesel fuels are the main fuel-related cause of high-temperature corrosion. During combustion, vanadium oxidizes and reacts with sodium and sulfur, yielding low-melting vanadate compounds, which attack the passivation layer on steel and render it susceptible to corrosion. The solid vanadium compounds also abrade engine components.
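The OSHA figures quoted above are expressed relative to an 8-hour workday. As a minimal sketch of how a measured exposure can be compared with such a limit, the code below computes an 8-hour time-weighted average from hypothetical air samples; the function name, the sample values, and the treatment of the limit as a simple 8-hour average are illustrative assumptions, not OSHA methodology.

```python
# Minimal sketch: compare an 8-hour time-weighted average (TWA) exposure with a
# permissible limit. All sample values are hypothetical.

def time_weighted_average(samples, shift_hours=8.0):
    """samples: list of (concentration in mg/m3, duration in hours) pairs."""
    total_exposure = sum(conc * hours for conc, hours in samples)  # mg/m3 * h
    return total_exposure / shift_hours                            # averaged over the shift

# Hypothetical workday: 3 h at 0.02 mg/m3, 2 h at 0.08 mg/m3, 3 h at 0.01 mg/m3
samples = [(0.02, 3.0), (0.08, 2.0), (0.01, 3.0)]
twa = time_weighted_average(samples)

LIMIT_V2O5_DUST = 0.05  # mg/m3, the vanadium pentoxide dust limit cited above
status = "exceeds" if twa > LIMIT_V2O5_DUST else "is within"
print(f"8-hour TWA: {twa:.3f} mg/m3 ({status} the 0.05 mg/m3 limit)")
```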
Physical sciences
Chemical elements_2
null
32435
https://en.wikipedia.org/wiki/Vellum
Vellum
Vellum is prepared animal skin or membrane, typically used as writing material. It is often distinguished from parchment, either by being made from calfskin (rather than the skin of other animals), or simply by being of a higher quality. Vellum is prepared for writing and printing on single pages, scrolls, and codices (books). Modern scholars and experts often prefer to use the broader term "membrane", which avoids the need to draw a distinction between vellum and parchment. It may be very hard to determine the animal species involved (let alone its age) without detailed scientific analysis. Vellum is generally smooth and durable, but there are great variations in its texture which are affected by the way it is made and the quality of the skin. The making involves the cleaning, bleaching, stretching on a frame (a "herse"), and scraping of the skin with a crescent-shaped knife (a "lunarium" or "lunellum"). To create tension, the process goes back and forth between scraping, wetting and drying. Scratching the surface with pumice, and treating with lime or chalk to make it suitable for writing or printing ink can create a final look. Modern "paper vellum" is made of plant cellulose fibers and gets its name from its similar usage to actual vellum, as well as its high quality. It is used for a variety of purposes including tracing, technical drawings, plans and blueprints. Tracing paper is essentially the same thing, however the quality level differs, sometimes greatly. Terminology Though Christopher de Hamel, an expert on medieval manuscripts, writes that "for most purposes the words parchment and vellum are interchangeable", a number of distinctions have been made in the past and present. The word "vellum" is borrowed from Old French vélin 'calfskin', derived in turn from the Latin word vitulinum 'made from calf'. However, in Europe, from Roman times, the word was used for the best quality of prepared skin, regardless of the animal from which the hide was obtained. Calf, sheep, and goat were all commonly used, and other animals, including pig, deer, donkey, horse, or camel were used on occasion. The best quality, "uterine vellum", was said to be made from the skins of stillborn or unborn animals, although the term was also applied to fine quality skins made from young animals. However, there has long been much blurring of the boundaries between these terms. In 1519, William Horman could write in his Vulgaria: "That stouffe that we wrytte upon, and is made of beestis skynnes, is somtyme called parchement, somtyme velem, somtyme abortyve, somtyme membraan." Writing in 1936, Lee Ustick explained that: French sources, closer to the original etymology, tend to define velin as from calf only, while the British Standards Institution defines parchment as made from the split skin of several species, and vellum from the unsplit skin. In the usage of modern practitioners of the artistic crafts of writing, illuminating, lettering, and bookbinding, "vellum" is normally reserved for calfskin, while any other skin is called "parchment". Manufacture Vellum allows some light to pass through it. It is made from the skin of a young animal. The skin is washed with water and lime (calcium hydroxide), and then soaked in lime for several days to soften and remove the hair. Once clear, the two sides of the skin are distinct: the body side and the hairy side. The "inside body side" of the skin is usually the lighter and more refined of the two. 
The hair follicles may be visible on the outer side, together with any scars from when the animal was alive. The membrane can also show the pattern of the animal's vein network called the "veining" of the sheet. The makers remove any remaining hair ("scudding") and dry the skin by attaching it to a frame (a "herse"). They attach the skin at points around the edge with cords and wrap the part next to these points around a pebble (a "pippin"). They then use a crescent shaped knife, (a "lunarium" or "lunellum"), to clean off any remaining hairs. The makers thoroughly clean the skin and process it into sheets once it is completely dry. They can extract many sheets from the piece of skin. The number of sheets depends on the size of the skin and the required length and breadth of each individual sheet. For example, the average calfskin could provide roughly three and a half medium sheets of writing material. The makers can double it when they fold the skin into two conjoined leaves, also known as a bifolium. Historians have found evidence of manuscripts where the scribe wrote down the medieval instructions now followed by modern membrane makers. The makers rubbed them with a round, flat object ("pouncing") to ensure that the ink would adhere to the surface. Even so, ink would gradually flake off of the membrane, especially if it was used in a scroll that was frequently rolled and unrolled. Manuscripts Preparing manuscripts Once the vellum is prepared, traditionally a quire is formed of a group of several sheets. Raymond Clemens and Timothy Graham point out, in their Introduction to Manuscript Studies, that "the quire was the scribe's basic writing unit throughout the Middle Ages". Guidelines are then made on the membrane. They note pricking' is the process of making holes in a sheet of parchment (or membrane) in preparation of its ruling. The lines were then made by ruling between the prick marks ...The process of entering ruled lines on the page to serve as a guide for entering text. Most manuscripts were ruled with horizontal lines that served as the baselines on which the text was entered and with vertical bounding lines that marked the boundaries of the columns". Usage Most of the finer sort of medieval manuscripts, whether illuminated or not, were written on vellum. Some Gandhāran Buddhist texts were written on vellum, and all Sifrei Torah (Hebrew: ספר תורה Sefer Torah; plural: ספרי תורה, Sifrei Torah) are written on kosher klaf or vellum. A quarter of the 180-copy edition of Johannes Gutenberg's first Bible printed in 1455 with movable type was also printed on vellum, presumably because his market expected this for a high-quality book. Paper was used for most book-printing, as it was cheaper and easier to process through a printing press and to bind. The twelfth-century Winchester Bible was also written on approximately 250 calfskins. In art, vellum was used for paintings, especially if they needed to be sent long distances, before canvas became widely used in about 1500, and continued to be used for drawings, and watercolours. Old master prints were sometimes printed on vellum, especially for presentation copies, until at least the seventeenth century. Limp vellum or limp-parchment bindings were used frequently in the 16th and 17th centuries, and were sometimes gilt but were also often not embellished. In later centuries vellum has been more commonly used like leather, that is, as the covering for stiff board bindings. 
Vellum can be stained virtually any color but seldom is, as a great part of its beauty and appeal rests in its faint grain and hair markings, as well as its warmth and simplicity. Lasting in excess of 1,000 years—for example, Pastoral Care (Troyes, Bibliothèque Municipale, MS 504) dates from about 600 and is in excellent condition—animal vellum can be far more durable than paper. For this reason, many important documents are written on animal vellum, such as diplomas. Referring to a diploma as a "sheepskin" alludes to the time when diplomas were written on vellum made from animal hides. Modern usage British Acts of Parliament are still printed on vellum for archival purposes, as are those of the Republic of Ireland. In February 2016, the UK House of Lords announced that legislation would be printed on archival paper instead of the traditional vellum from April 2016. However, Cabinet Office Minister Matthew Hancock intervened by agreeing to fund the continued use of vellum from the Cabinet Office budget. In 2017, the House of Commons Commission agreed that it would provide front and back vellum covers for record copies of Acts. Today, because of low demand and a complicated manufacturing process, animal vellum is expensive and hard to find. The only UK company still producing traditional parchment and vellum is William Cowley (established 1870), which is based in Newport Pagnell, Buckinghamshire. A modern imitation is made of cotton. Known as paper vellum, this material is considerably cheaper than animal vellum and can be found in most art and drafting supply stores. Some brands of writing paper and other sorts of paper use the term "vellum" to suggest quality. Vellum is still used for Jewish scrolls, of the Torah in particular, for luxury bookbinding, memorial books, and for various documents in calligraphy. It is also used on instruments such as the banjo and the bodhran, although synthetic skins are available for these instruments and have become more commonly used. The Catholic Church still issues its decrees and diplomas for its officials on vellum. Paper vellum Modern imitation vellum is made from plasticized rag cotton or fibers from interior tree bark. Terms include: paper vellum, Japanese vellum, and vegetable vellum. Paper vellum is usually translucent and its various sizes are often used in applications where tracing is required, such as architectural plans. Its dimensions are more stable than those of a linen or paper sheet, which is frequently critical in the development of large-scale drawings such as blueprints. Paper vellum has also become extremely important in hand or chemical reproduction technology for dissemination of plan copies. Like a high-quality traditional vellum, paper vellum could be produced thin enough to be virtually transparent to strong light, enabling a source drawing to be used directly in the reproduction of field-used drawings. Preservation Vellum is ideally stored in a stable environment with constant temperature and 30% (± 5%) relative humidity. If vellum is stored in an environment with less than 11% relative humidity, it becomes fragile and vulnerable to mechanical stresses. However, if it is stored in an environment with greater than 40% relative humidity, it becomes vulnerable to gelation and to mould or fungus growth. The optimal relative humidity for proper storage of vellum does not overlap that of paper, which poses a challenge for libraries. Vellum is also best kept at a cool, stable temperature.
Technology
Materials
null
32441
https://en.wikipedia.org/wiki/Video
Video
Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media. Video was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) systems, which, in turn, were replaced by flat-panel displays of several types. Video systems vary in display resolution, aspect ratio, refresh rate, color capabilities, and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcasts, magnetic tape, optical discs, computer files, and network streaming. Etymology The word video comes from the Latin verb video (I see). History Analog video Video developed from facsimile systems developed in the mid-19th century. Early mechanical video scanners, such as the Nipkow disk, were patented as early as 1884; however, it took several decades before practical video systems could be developed, many decades after film. Film records using a sequence of miniature photographic images visible to the eye when the film is physically examined. Video, by contrast, encodes images electronically, turning the images into analog or digital electronic signals for transmission or recording. Video technology was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) television systems. Video was originally exclusively a live technology. Live video cameras used an electron beam, which would scan a photoconductive plate with the desired image and produce a voltage signal proportional to the brightness in each part of the image. The signal could then be sent to televisions, where another beam would receive and display the image. Charles Ginsburg led an Ampex research team to develop one of the first practical video tape recorders (VTR). In 1951, the first VTR captured live images from television cameras by writing the camera's electrical signal onto magnetic videotape. Video recorders were sold for $50,000 in 1956, and videotapes cost US$300 per one-hour reel. However, prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) decks and tapes into the consumer market. Digital video Digital video is capable of higher quality and, eventually, a much lower cost than earlier analog technology. After the commercial introduction of the DVD in 1997 and later the Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted. Advances in computer technology allow even inexpensive personal computers and smartphones to capture, store, edit, and transmit digital video, further reducing the cost of video production and allowing programmers and broadcasters to move to tapeless production. The advent of digital broadcasting and the subsequent digital television transition are in the process of relegating analog video to the status of a legacy technology in most parts of the world. The development of high-resolution video cameras with improved dynamic range and color gamuts, along with the introduction of high-dynamic-range digital intermediate data formats with improved color depth, has caused digital video technology to converge with film technology. The use of digital cameras in Hollywood has surpassed the use of film cameras. Characteristics of video streams Number of frames per second Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second (frame/s) for old mechanical cameras to 120 or more frames per second for new professional cameras.
PAL standards (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa, etc.) specify 25 frame/s, while NTSC standards (United States, Canada, Japan, etc.) specify 29.97 frame/s. Film is shot at a slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a moving image is about sixteen frames per second. Interlaced vs. progressive Video can be interlaced or progressive. In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is the optimum spatial resolution of both the stationary and moving parts of the image. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second. Interlacing retains detail while requiring lower bandwidth compared to progressive scanning. In interlaced video, the horizontal scan lines of each complete frame are treated as if numbered consecutively and captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines. Analog display devices reproduce each frame, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of rapidly moving parts of the image when viewed on an interlaced CRT display. NTSC, PAL, and SECAM are interlaced formats. Abbreviated video resolution specifications often include an i to indicate interlacing. For example, PAL video format is often described as 576i50, where 576 indicates the total number of horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second. When displaying a natively interlaced signal on a progressive scan device, the overall spatial resolution is degraded by simple line doubling—artifacts, such as flickering or "comb" effects in moving parts of the image that appear unless special signal processing eliminates them. A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD, or satellite source on a progressive scan device such as an LCD television, digital video projector, or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material. Aspect ratio Aspect ratio describes the proportional relationship between the width and height of video screens and video picture elements. All popular video formats are rectangular, and this can be described by a ratio between width and height. The ratio of width to height for a traditional television screen is 4:3, or about 1.33:1. High-definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1. Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard and the corresponding anamorphic widescreen formats. 
The 720 by 480 pixel raster uses thin pixels on a 4:3 aspect ratio display and fat pixels on a 16:9 display. The popularity of viewing video on mobile phones has led to the growth of vertical video. Mary Meeker, a partner at Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers, highlighted the growth of vertical video viewing in her 2015 Internet Trends Report: vertical viewing grew from 5% of video viewing in 2010 to 29% in 2015. Vertical video ads like Snapchat's are watched in their entirety nine times more frequently than landscape video ads. Color model and depth The color model describes the video color representation and maps encoded color values to the visible colors reproduced by the system. There are several such representations in common use: typically, YIQ is used in NTSC television, YUV is used in PAL television, YDbDr is used by SECAM television, and YCbCr is used for digital video. The number of distinct colors a pixel can represent depends on the color depth, expressed in the number of bits per pixel. A common way to reduce the amount of data required in digital video is by chroma subsampling (e.g., 4:2:2 or 4:2:0). Because the human eye is less sensitive to details in color than brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block, and the same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed, but it reduces the number of distinct points at which the color changes. (A short worked example of the resulting raw data rates is given below, after the Transport medium paragraph.) Video quality Video quality can be measured with formal metrics like peak signal-to-noise ratio (PSNR) or through subjective video quality assessment using expert observation. Many subjective video quality methods are described in the ITU-R recommendation BT.500. One of the standardized methods is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video, followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying." Video compression method (digital only) Uncompressed video delivers maximum quality, but at a very high data rate. A variety of methods are used to compress video streams, with the most effective ones using a group of pictures (GOP) to reduce spatial and temporal redundancy. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as intraframe compression and is closely related to image compression. Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as interframe compression, including motion compensation and other techniques. The most common modern compression standards are MPEG-2, used for DVD, Blu-ray, and satellite television, and MPEG-4, used for AVCHD, mobile phones (3GP), and the Internet. Stereoscopic Stereoscopic video for 3D film and other applications can be displayed using several different methods: Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed simultaneously by using light-polarizing filters 90 degrees off-axis from each other on two video projectors. These separately polarized channels are viewed wearing eyeglasses with matching polarization filters.
Anaglyph 3D, where one channel is overlaid with two color-coded layers. This left and right layer technique is occasionally used for network broadcasts or recent anaglyph releases of 3D movies on DVD. Simple red/cyan plastic glasses provide the means to view the images discretely to form a stereoscopic view of the content. One channel with alternating left and right frames for the corresponding eye, using LCD shutter glasses that synchronize to the video to alternately block the image for each eye, so the appropriate eye sees the correct frame. This method is most common in computer virtual reality applications, such as in a Cave Automatic Virtual Environment, but reduces effective video framerate by a factor of two. Formats Different layers of video transmission and storage each provide their own set of formats to choose from. For transmission, there is a physical connector and signal protocol (see List of video connectors). A given physical link can carry certain display standards that specify a particular refresh rate, display resolution, and color space. Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file system as files, which have their own formats. In addition to the physical format used by the data storage device or transmission medium, the stream of ones and zeros that is sent must be in a particular digital video coding format, for which a number is available. Analog video Analog video is a video signal represented by one or more analog signals. Analog color video signals include luminance (Y) and chrominance (C). When combined into one channel, as is the case among others with NTSC, PAL, and SECAM, it is called composite video. Analog video may be carried in separate channels, as in two-channel S-Video (YC) and multi-channel component video formats. Analog video is used in both consumer and professional television production applications. Digital video Digital video signal formats have been adopted, including serial digital interface (SDI), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI) and DisplayPort Interface. Transport medium Video can be transmitted or transported in a variety of ways including wireless terrestrial television as an analog or digital signal, coaxial cable in a closed-circuit system as an analog signal. Broadcast or studio cameras use a single or dual coaxial cable system using serial digital interface (SDI). See List of video connectors for information about physical connectors and related signal standards. Video may be transported over networks and other shared digital communications links using, for instance, MPEG transport stream, SMPTE 2022 and SMPTE 2110. 
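The frame rates, resolutions, and chroma-subsampling schemes described earlier determine the raw bandwidth that such transport links would otherwise have to carry. The following sketch uses illustrative parameter values rather than figures from any particular standard, and shows why uncompressed video is rarely transported:

```python
# Minimal sketch: raw (uncompressed) data rates for common chroma-subsampling
# schemes, illustrating why video transport relies on compression.
# The resolution, frame rate, and bit depth below are illustrative assumptions.

def raw_bitrate_mbps(width, height, fps, bits_per_sample=8, chroma="4:2:0"):
    # Average number of chroma samples per luma sample for each scheme
    chroma_per_luma = {"4:4:4": 2.0, "4:2:2": 1.0, "4:2:0": 0.5}[chroma]
    samples_per_pixel = 1.0 + chroma_per_luma          # luma plus averaged chroma
    bits_per_frame = width * height * samples_per_pixel * bits_per_sample
    return bits_per_frame * fps / 1e6                  # megabits per second

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    rate = raw_bitrate_mbps(1920, 1080, 25, chroma=scheme)
    print(f"1920x1080 at 25 frame/s, {scheme}: about {rate:.0f} Mbit/s uncompressed")
```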
Display standards Digital television Digital television broadcasts use the MPEG-2 and other video coding formats and include: ATSC – United States, Canada, Mexico, Korea Digital Video Broadcasting (DVB) – Europe ISDB – Japan ISDB-Tb – uses the MPEG-4 video coding format – Brazil, Argentina Digital multimedia broadcasting (DMB) – Korea Analog television Analog television broadcast standards include: Field-sequential color system (FCS) – US, Russia; obsolete Multiplexed Analogue Components (MAC) – Europe; obsolete Multiple sub-Nyquist sampling encoding (MUSE) – Japan NTSC – United States, Canada, Japan EDTV-II "Clear-Vision" - NTSC extension, Japan PAL – Europe, Asia, Oceania PAL-M – PAL variation, Brazil PAL-N – PAL variation, Argentina, Paraguay and Uruguay PALplus – PAL extension, Europe RS-343 (military) SECAM – France, former Soviet Union, Central Africa CCIR System A CCIR System B CCIR System G CCIR System H CCIR System I CCIR System M An analog video format consists of more information than the visible content of the frame. Preceding and following the image are lines and pixels containing metadata and synchronization information. This surrounding margin is known as a blanking interval or blanking region; the horizontal and vertical front porch and back porch are the building blocks of the blanking interval. Computer displays Computer display standards specify a combination of aspect ratio, display size, display resolution, color depth, and refresh rate. A list of common resolutions is available. Recording Early television was almost exclusively a live medium, with some programs recorded to film for historical purposes using Kinescope. The analog video tape recorder was commercially introduced in 1951. The following list is in rough chronological order. All formats listed were sold to and used by broadcasters, video producers, or consumers; or were important historically. VERA (BBC experimental format ca. 1952) 2" Quadruplex videotape (Ampex 1956) 1" Type A videotape (Ampex) 1/2" EIAJ (1969) U-matic 3/4" (Sony) 1/2" Cartrivision (Avco) VCR, VCR-LP, SVR 1" Type B videotape (Robert Bosch GmbH) 1" Type C videotape (Ampex, Marconi and Sony) 2" Helical Scan Videotape (IVC) (1975) Betamax (Sony) (1975) VHS (JVC) (1976) Video 2000 (Philips) (1979) 1/4" CVC (Funai) (1980) Betacam (Sony) (1982) VHS-C (JVC) (1982) HDVS (Sony) (1984) Video8 (Sony) (1986) Betacam SP (Sony) (1987) S-VHS (JVC) (1987) Pixelvision (Fisher-Price) (1987) UniHi 1/2" HD (1988) Hi8 (Sony) (mid-1990s) W-VHS (JVC) (1994) Digital video tape recorders offered improved quality compared to analog recorders. Betacam IMX (Sony) D-VHS (JVC) D-Theater D1 (Sony) D2 (Sony) D3 D5 HD D6 (Philips) Digital-S D9 (JVC) Digital Betacam (Sony) Digital8 (Sony) DV (including DVC-Pro) HDCAM (Sony) HDV ProHD (JVC) MicroMV MiniDV Optical storage mediums offered an alternative, especially in consumer applications, to bulky tape formats. Blu-ray Disc (Sony) China Blue High-definition Disc (CBHD) DVD (was Super Density Disc, DVD Forum) Professional Disc Universal Media Disc (UMD) (Sony) Enhanced Versatile Disc (EVD, Chinese government-sponsored) HD DVD (NEC and Toshiba) HD-VMD Capacitance Electronic Disc Laserdisc (MCA and Philips) Television Electronic Disc (Teldec and Telefunken) VHD (JVC) Video CD Digital encoding formats A video codec is software or hardware that compresses and decompresses digital video. 
In the context of video compression, codec is a portmanteau of encoder and decoder, while a device that only compresses is typically called an encoder, and one that only decompresses is a decoder. The compressed data format usually conforms to a standard video coding format. The compression is typically lossy, meaning that the compressed video lacks some information present in the original video. A consequence of this is that decompressed video has lower quality than the original, uncompressed video because there is insufficient information to accurately reconstruct the original video. CCIR 601 (ITU-T) H.261 (ITU-T) H.263 (ITU-T) H.264/MPEG-4 AVC (ITU-T + ISO) H.265 M-JPEG (ISO) MPEG-1 (ISO) MPEG-2 (ITU-T + ISO) MPEG-4 (ISO) Ogg-Theora VP8-WebM VC-1 (SMPTE)
Technology
Media and communication
null
32473
https://en.wikipedia.org/wiki/Vaccination
Vaccination
Vaccination is the administration of a vaccine to help the immune system develop immunity from a disease. Vaccines contain a microorganism or virus in a weakened, live or killed state, or proteins or toxins from the organism. In stimulating the body's adaptive immunity, they help prevent sickness from an infectious disease. When a sufficiently large percentage of a population has been vaccinated, herd immunity results. Herd immunity protects those who may be immunocompromised and cannot get a vaccine because even a weakened version would harm them. The effectiveness of vaccination has been widely studied and verified. Vaccination is the most effective method of preventing infectious diseases; widespread immunity due to vaccination is largely responsible for the worldwide eradication of smallpox and the elimination of diseases such as polio and tetanus from much of the world. According to the World Health Organization (WHO), vaccination prevents 3.5–5 million deaths per year. A WHO-funded study published in The Lancet estimates that, during the 50-year period starting in 1974, vaccination prevented 154 million deaths, including 146 million among children under age 5. However, some diseases have seen rising cases due to relatively low vaccination rates attributable partly to vaccine hesitancy. The first disease people tried to prevent by inoculation was most likely smallpox, with the first recorded use of variolation occurring in the 16th century in China. It was also the first disease for which a vaccine was produced. Although at least six people had used the same principles years earlier, the smallpox vaccine was invented in 1796 by English physician Edward Jenner. He was the first to publish evidence that it was effective and to provide advice on its production. Louis Pasteur furthered the concept through his work in microbiology. The immunization was called vaccination because it was derived from a virus affecting cows (Latin vacca, 'cow'). Smallpox was a contagious and deadly disease, causing the deaths of 20–60% of infected adults and over 80% of infected children. When smallpox was finally eradicated in 1979, it had already killed an estimated 300–500 million people in the 20th century. Vaccination and immunization have a similar meaning in everyday language. This is distinct from inoculation, which uses unweakened live pathogens. Vaccination efforts have been met with some reluctance on scientific, ethical, political, medical safety, and religious grounds, although no major religions oppose vaccination, and some consider it an obligation due to the potential to save lives. In the United States, people may receive compensation for alleged injuries under the National Vaccine Injury Compensation Program. Early success brought widespread acceptance, and mass vaccination campaigns have greatly reduced the incidence of many diseases in numerous geographic regions. The Centers for Disease Control and Prevention lists vaccination as one of the ten great public health achievements of the 20th century in the U.S. Mechanism of function Vaccines are a way of artificially activating the immune system to protect against infectious disease. The activation occurs through priming the immune system with an immunogen. Stimulating immune responses with an infectious agent is known as immunization. Vaccination includes various ways of administering immunogens. Most vaccines are administered before a patient has contracted a disease to help increase future protection.
However, some vaccines are administered after the patient already has contracted a disease. Vaccines given after exposure to smallpox are reported to offer some protection from disease or may reduce the severity of disease. The first rabies immunization was given by Louis Pasteur to a child after he was bitten by a rabid dog. Since its discovery, the rabies vaccine has been proven effective in preventing rabies in humans when administered several times over 14 days along with rabies immune globulin and wound care. Other examples include experimental AIDS, cancer and Alzheimer's disease vaccines. Such immunizations aim to trigger an immune response more rapidly and with less harm than natural infection. Most vaccines are given by injection as they are not absorbed reliably through the intestines. Live attenuated polio, rotavirus, some typhoid, and some cholera vaccines are given orally to produce immunity in the bowel. While vaccination provides a lasting effect, it usually takes several weeks to develop. This differs from passive immunity (the transfer of antibodies, such as in breastfeeding), which has immediate effect. A vaccine failure is when an organism contracts a disease in spite of being vaccinated against it. Primary vaccine failure occurs when an organism's immune system does not produce antibodies when first vaccinated. Vaccines can fail when several series are given and fail to produce an immune response. The term "vaccine failure" does not necessarily imply that the vaccine is defective. Most vaccine failures are simply due to individual variations in immune response. Vaccination versus inoculation The term "inoculation" is often used interchangeably with "vaccination." However, while related, the terms are not synonymous. Vaccination is treatment of an individual with an attenuated (i.e. less virulent) pathogen or other immunogen, whereas inoculation, also called variolation in the context of smallpox prophylaxis, is the introduction of unattenuated variola virus, taken from a pustule or scab of a smallpox patient, into the superficial layers of the skin, commonly the upper arm. Variolation was often done 'arm-to-arm' or, less effectively, 'scab-to-arm', and often caused the patient to become infected with smallpox, which in some cases resulted in severe disease. Vaccinations began in the late 18th century with the work of Edward Jenner and the smallpox vaccine. Preventing disease versus preventing infection Some vaccines, like the smallpox vaccine, prevent infection. Their use results in sterilizing immunity and can help eradicate a disease if there is no animal reservoir. Other vaccines help to (temporarily) lower the chance of severe disease for individuals, without necessarily reducing the probability of becoming infected. Safety Vaccine development and approval Just like any medication or procedure, no vaccine can be 100% safe or effective for everyone because each person's body can react differently. While minor side effects, such as soreness or low-grade fever, are relatively common, serious side effects are very rare, occurring in about 1 out of every 100,000 vaccinations, and typically involve allergic reactions that can cause hives or difficulty breathing. However, vaccines are the safest they have ever been, and each vaccine undergoes rigorous clinical trials to ensure its safety and efficacy before approval by authorities such as the US Food and Drug Administration (FDA).
Prior to human testing, vaccines are tested on cell cultures and the results modelled to assess how they will interact with the immune system. During the next round of testing, researchers study vaccines in animals, including mice, rabbits, guinea pigs, and monkeys. Vaccines that pass each of these stages of testing are then approved by the public health safety authority (FDA in the United States) to start a three-phase series of human testing, advancing to higher phases only if they are deemed safe and effective at the previous phase. The people in these trials participate voluntarily and are required to prove they understand the purpose of the study and the potential risks. During phase I trials, a vaccine is tested in a group of about 20 people with the primary goal of assessing the vaccine's safety. Phase II trials expand the testing to include 50 to several hundred people. During this stage, the vaccine's safety continues to be evaluated and researchers also gather data on the effectiveness and the ideal dose of the vaccine. Vaccines determined to be safe and efficacious then advance to phase III trials, which focus on the efficacy of the vaccine in hundreds to thousands of volunteers. This phase can take several years to complete, and researchers use this opportunity to compare the vaccinated volunteers to those who have not been vaccinated to highlight any true reactions to the vaccine that occur. If a vaccine passes all of the phases of testing, the manufacturer can then apply for licensure of the vaccine through the relevant regulatory authorities, such as the FDA in the US. Before regulatory authorities approve use in the general public, they extensively review the results of the clinical trials, safety tests, purity tests, and manufacturing methods and establish that the manufacturer itself is up to government standards in many other areas. After regulatory approval, the regulators continue to monitor the manufacturing protocols, batch purity, and the manufacturing facility itself. Additionally, vaccines also undergo phase IV trials, which monitor the safety and efficacy of vaccines in tens of thousands of people, or more, across many years. Side effects The Centers for Disease Control and Prevention (CDC) has compiled a list of vaccines and their possible side effects. The risk of side effects varies between vaccines. Notable vaccine investigations In 1976 in the United States, a mass swine flu vaccination programme was discontinued after 362 cases of Guillain–Barré syndrome were reported among 45 million vaccinated people. William Foege of the CDC estimated that the incidence of Guillain–Barré was four times higher in vaccinated people than in those not receiving the swine flu vaccine. Dengvaxia, the only approved vaccine for dengue fever, was found to increase the risk of hospitalization for dengue fever by 1.58 times in children aged 9 years or younger, resulting in the suspension of a mass vaccination program in the Philippines in 2017. Pandemrix, a vaccine for the 2009 H1N1 pandemic given to around 31 million people, was found to have a higher rate of adverse events than alternative vaccines, resulting in legal action. In response to the narcolepsy reports following immunization with Pandemrix, the CDC carried out a population-based study and found the FDA-approved 2009 H1N1 flu shots were not associated with an increased risk for the neurological disorder. Ingredients The ingredients of vaccines can vary greatly from one to the next and no two vaccines are the same. 
The CDC has compiled a list of vaccines and their ingredients that is readily accessible on their website. Aluminium Aluminium is an adjuvant ingredient in some vaccines. An adjuvant is a type of ingredient that is used to help the body's immune system create a stronger immune response after receiving the vaccination. Aluminium is used in salt form (the ionic form of the element) in the following compounds: aluminium hydroxide, aluminium phosphate, and aluminium potassium sulfate. For a given element, the ion form has different properties from the elemental form. Although it is possible to have aluminium toxicity, aluminium salts have been used effectively and safely since the 1930s when they were first used with the diphtheria and tetanus vaccines. Although there is a small increase in the chance of having a local reaction to a vaccine with an aluminium salt (redness, soreness, and swelling), there is no increased risk of any serious reactions. Mercury Certain vaccines once contained a compound called thiomersal or thimerosal, which is an organic compound containing mercury. Organomercury is commonly found in two forms. The methylmercury cation (with one carbon atom) is found in mercury-contaminated fish and is the form that people might ingest in mercury-polluted areas (Minamata disease), whereas the ethylmercury cation (with two carbon atoms) is present in thimerosal, linked to thiosalicylate. Although both are organomercury compounds, they do not have the same chemical properties and interact with the human body differently. Ethylmercury is cleared from the body faster than methylmercury and is less likely to cause toxic effects. Thimerosal was used as a preservative to prevent the growth of bacteria and fungi in vials that contain more than one dose of a vaccine. This helps reduce the risk of potential infections or serious illness that could occur from contamination of a vaccine vial. Although there was a small increase in risk of injection site redness and swelling with vaccines containing thimerosal, there was no increased risk of serious harm or autism. Even though evidence supports the safety and efficacy of thimerosal in vaccines, thimerosal was removed from childhood vaccines in the United States in 2001 as a precaution. Monitoring In the United States, vaccine safety is monitored through CDC Immunization Safety Office initiatives, namely the Vaccine Adverse Event Reporting System (VAERS), the Vaccine Safety Datalink (VSD), and the Clinical Immunization Safety Assessment (CISA) Project, alongside other bodies such as the Food and Drug Administration (FDA) Center for Biologics Evaluation and Research (CBER), the Immunization Action Coalition (IAC), the Health Resources and Services Administration (HRSA), the Institute for Safe Medication Practices (ISMP), the National Institutes of Health (NIH), and the National Vaccine Program Office (NVPO). The administration protocols, efficacy, and adverse events of vaccines are monitored by organizations of the US federal government, including the CDC and FDA, and independent agencies are constantly re-evaluating vaccine practices. As with all medications, vaccine use is determined by public health research, surveillance, and reporting to governments and the public. Usage The World Health Organization (WHO) has estimated that vaccination prevents 3.5–5 million deaths per year, and up to 1.5 million children die each year due to diseases that could have been prevented by vaccination. They estimate that 29% of deaths of children under five years old in 2013 were vaccine-preventable. 
In other developing parts of the world, a decreased availability of resources and vaccines poses a further challenge. Countries such as those in Sub-Saharan Africa cannot afford to provide the full range of childhood vaccinations. In 2024, a WHO/UNICEF report found “the number of children who received three doses of the vaccine against diphtheria, tetanus and pertussis (DTP) in 2023 – a key marker for global immunization coverage – stalled at 84% (108 million). However, the number of children who did not receive a single dose of the vaccine increased from 13.9 million in 2022 to 14.5 million in 2023. More than half of unvaccinated children live in the 31 countries with fragile, conflict-affected and vulnerable settings.” United States Vaccines have led to major decreases in the prevalence of infectious diseases in the United States. In 2007, studies of the effect of vaccines on mortality and morbidity rates of those exposed to various diseases found almost 100% decreases in death rates and about a 90% decrease in exposure rates. Vaccination adoption is reduced among some populations, such as those with low incomes, people with limited access to health care, and members of certain racial and ethnic minorities. Distrust of health-care providers, language barriers, and misleading or false information also contribute to lower adoption, as does anti-vaccine activism. Most government and private health insurance plans cover recommended vaccines at no charge when received from providers in their networks. The federal Vaccines for Children Program and the Social Security Act are among the major sources of financial support for vaccination of those in lower-income groups. The Centers for Disease Control and Prevention (CDC) publishes uniform national vaccine recommendations and immunization schedules, although state and local governments, as well as nongovernmental organizations, may have their own policies. History Before the first vaccinations, in the sense of using cowpox to inoculate people against smallpox, people in China and elsewhere had been inoculated with smallpox itself, a practice called variolation, which was later copied in the West. The earliest hints of the practice of variolation for smallpox in China come from the 10th century. The oldest documented use of variolation also comes from China, in Wan Quan's (1499–1582) Douzhen Xinfa (痘疹心法) of 1549. The Chinese implemented a method of "nasal insufflation" administered by blowing powdered smallpox material, usually scabs, up the nostrils. Various insufflation techniques have been recorded throughout the sixteenth and seventeenth centuries within China. Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700: one from Martin Lister, who had received a report from an employee of the East India Company stationed in China, and another from Clopton Havers. In France, Voltaire reports that the Chinese have practiced variolation "these hundred years". In 1796, Edward Jenner, a doctor in Berkeley in Gloucestershire, England, tested a common theory that a person who had contracted cowpox would be immune from smallpox. To test the theory, he took material from the cowpox vesicles of a milkmaid named Sarah Nelmes and with it infected an eight-year-old boy named James Phipps; two months later he inoculated the boy with smallpox, and the disease did not develop. In 1798, Jenner published An Inquiry Into the Causes and Effects of the Variolæ Vaccinæ, which created widespread interest. 
He distinguished 'true' and 'spurious' cowpox (which did not give the desired effect) and developed an "arm-to-arm" method of propagating the vaccine from the vaccinated individual's pustule. Early attempts at confirmation were confounded by contamination with smallpox, but despite controversy within the medical profession and religious opposition to the use of animal material, by 1801 his report had been translated into six languages and over 100,000 people had been vaccinated. The term vaccination was coined in 1800 by the surgeon Richard Dunning in his text Some observations on vaccination. In 1802, the Scottish physician Helenus Scott vaccinated dozens of children in Bombay against smallpox using Jenner's cowpox vaccine. In the same year Scott penned a letter to the editor of the Bombay Courier, declaring that "We have it now in our power to communicate the benefits of this important discovery to every part of India, perhaps to China and the whole eastern world". Subsequently, vaccination became firmly established in British India. A vaccination campaign was started in the new British colony of Ceylon in 1803. By 1807 the British had vaccinated more than a million Indians and Sri Lankans against smallpox. Also in 1803, the Spanish Balmis Expedition launched the first transcontinental effort to vaccinate people against smallpox. Following a smallpox epidemic in 1816, the Kingdom of Nepal ordered smallpox vaccine and requested the English veterinarian William Moorcroft to help in launching a vaccination campaign. In the same year, a law was passed in Sweden to require the vaccination of children against smallpox by the age of two. Prussia briefly introduced compulsory vaccination in 1810 and again in the 1820s, but decided against a compulsory vaccination law in 1829. A law on compulsory smallpox vaccination was introduced in the Province of Hanover in the 1820s. In 1826, in Kragujevac, future prince Mihailo of Serbia was the first person to be vaccinated against smallpox in the Principality of Serbia. Following a smallpox epidemic in 1837 that caused 40,000 deaths, the British government initiated a concentrated vaccination policy, starting with the Vaccination Act of 1840, which provided for universal vaccination and prohibited variolation. The Vaccination Act 1853 introduced compulsory smallpox vaccination in England and Wales. The law followed a severe outbreak of smallpox in 1851 and 1852. It provided that the poor law authorities would continue to dispense vaccination to all free of charge, but that records were to be kept on vaccinated children by the network of birth registrars. It was accepted at the time that voluntary vaccination had not reduced smallpox mortality, but the Vaccination Act 1853 was so badly implemented that it had little impact on the number of children vaccinated in England and Wales. The U.S. Supreme Court upheld compulsory vaccination laws in the 1905 landmark case Jacobson v. Massachusetts, ruling that laws could require vaccination to protect the public from dangerous communicable diseases. However, in practice the U.S. had the lowest rate of vaccination among industrialized nations in the early 20th century. Compulsory vaccination laws began to be enforced in the U.S. after World War II. In 1959, the WHO called for the eradication of smallpox worldwide, as smallpox was still endemic in 33 countries. In the 1960s, six to eight children died each year in the U.S. from vaccination-related complications. 
According to the WHO, in 1966 there were about 100 million cases of smallpox worldwide, causing an estimated two million deaths. In the 1970s there was such a small risk of contracting smallpox that the U.S. Public Health Service recommended that routine smallpox vaccination be ended. By 1974 the WHO smallpox vaccination program had confined smallpox to parts of Pakistan, India, Bangladesh, Ethiopia and Somalia. In 1977, in Somalia, the WHO recorded the last case of smallpox infection acquired outside a laboratory. In 1980 the WHO officially declared the world free of smallpox. In 1974 the WHO adopted the goal of universal vaccination by 1990 to protect children against six preventable infectious diseases: measles, poliomyelitis, diphtheria, whooping cough, tetanus, and tuberculosis. In the 1980s only 20 to 40% of children in developing countries were vaccinated against these six diseases. In wealthy nations the number of measles cases had dropped dramatically after the introduction of the measles vaccine in 1963. WHO figures demonstrate that in many countries a decline in measles vaccination leads to a resurgence in measles cases. Measles is so contagious that public health experts believe a vaccination rate of 100% is needed to control the disease. Despite decades of mass vaccination, polio remains a threat in India, Nigeria, Somalia, Niger, Afghanistan, Bangladesh and Indonesia. By 2006 global health experts had concluded that the eradication of polio was only possible if the supply of drinking water and sanitation facilities were improved in slums. The deployment of a combined DPT vaccine against diphtheria, pertussis (whooping cough), and tetanus in the 1950s was considered a major advancement for public health. But in the course of vaccination campaigns that spanned decades, DPT vaccines became associated with a large number of reported side effects. Despite improved DPT vaccines coming onto the market in the 1990s, DPT vaccines became the focus of anti-vaccination campaigns in wealthy nations. As immunization rates fell, outbreaks of pertussis increased in many countries. In 2000, the Global Alliance for Vaccines and Immunization was established to strengthen routine vaccinations and introduce new and underused vaccines in countries with a per capita GDP of under US$1000. UNICEF has reported on the extent to which children missed out on vaccinations from 2020 onwards due to the COVID-19 pandemic. By summer 2023, the organisation described vaccination programs as getting "back on track". Vaccination policy To eliminate the risk of outbreaks of some diseases, at various times governments and other institutions have employed policies requiring vaccination for all people. For example, an 1853 law required universal vaccination against smallpox in England and Wales, with fines levied on people who did not comply. Common contemporary U.S. vaccination policies require that children receive recommended vaccinations before entering public school. Beginning with early vaccination in the nineteenth century, these policies were resisted by a variety of groups, collectively called antivaccinationists, who object on scientific, ethical, political, medical safety, religious, and other grounds. Common objections are that vaccinations do not work, that compulsory vaccination constitutes excessive government intervention in personal matters, or that the proposed vaccinations are not sufficiently safe. 
Many modern vaccination policies allow exemptions for people who have compromised immune systems, allergies to the components used in vaccinations, or strongly held objections. In countries with limited financial resources, limited vaccination coverage results in greater morbidity and mortality due to infectious disease. More affluent countries are able to subsidize vaccinations for at-risk groups, resulting in more comprehensive and effective coverage. In Australia, for example, the Government subsidizes vaccinations for seniors and indigenous Australians. Public Health Law Research, an independent US-based organization, reported in 2009 that there is insufficient evidence to assess the effectiveness of requiring vaccinations as a condition for specified jobs as a means of reducing incidence of specific diseases among particularly vulnerable populations; that there is sufficient evidence supporting the effectiveness of requiring vaccinations as a condition for attending child care facilities and schools; and that there is strong evidence supporting the effectiveness of standing orders, which allow healthcare workers without prescription authority to administer vaccine as a public health intervention. Fractional dose vaccination Fractional dose vaccination reduces the dose of a vaccine to allow more individuals to be vaccinated with a given vaccine stock, trading some individual protection for societal benefit. Because the dose-response of many vaccines is nonlinear, it is effective against diseases of poverty and promises benefits during pandemic waves, e.g. COVID-19, when vaccine supply is limited. Litigation Allegations of vaccine injuries in recent decades have appeared in litigation in the U.S. Some families have won substantial awards from sympathetic juries, even though most public health officials have said that the claims of injuries were unfounded. In response, several vaccine makers stopped production, which the US government believed could be a threat to public health, so laws were passed to shield manufacturers from liabilities stemming from vaccine injury claims. The safety and side effects of multiple vaccines have been tested to uphold the viability of vaccines as a barrier against disease. The influenza vaccine was tested in controlled trials and shown to have negligible side effects, comparable to those of a placebo. Some families' concerns may have arisen from social beliefs and norms that lead them to mistrust or refuse vaccinations, contributing to unfounded reports of side effects. Opposition Opposition to vaccination, from a wide array of vaccine critics, has existed since the earliest vaccination campaigns. It is widely accepted that the benefits of preventing serious illness and death from infectious diseases greatly outweigh the risks of rare serious adverse effects following immunization. Some studies have claimed to show that current vaccine schedules increase infant mortality and hospitalization rates; those studies, however, are correlational in nature and therefore cannot demonstrate causal effects, and the studies have also been criticized for cherry-picking the comparisons they report, for ignoring historical trends that support an opposing conclusion, and for counting vaccines in a manner that is "completely arbitrary and riddled with mistakes". Various disputes have arisen over the morality, ethics, effectiveness, and safety of vaccination. 
Some vaccination critics say that vaccines are ineffective against disease or that vaccine safety studies are inadequate. Some religious groups do not allow vaccination, and some political groups oppose mandatory vaccination on the grounds of individual liberty. In response, concern has been raised that spreading unfounded information about the medical risks of vaccines increases rates of life-threatening infections, not only in the children whose parents refused vaccinations, but also in those who cannot be vaccinated due to age or immunodeficiency, who could contract infections from unvaccinated carriers (see herd immunity). Some parents believe vaccinations cause autism, although there is no scientific evidence to support this idea. In 2011, Andrew Wakefield, a leading proponent of the theory that the MMR vaccine causes autism, was found to have been financially motivated to falsify research data and was subsequently stripped of his medical license. In the United States, people who refuse vaccines for non-medical reasons have made up a large percentage of the cases of measles, and subsequent cases of permanent hearing loss and death caused by the disease. Many parents do not vaccinate their children because they feel that diseases are no longer present due to vaccination. This is a false assumption, since diseases held in check by immunization programs can and do still return if immunization is dropped. These pathogens could possibly infect vaccinated people as well, owing to the pathogen's ability to mutate when it is able to live in unvaccinated hosts. Vaccination and autism The notion of a connection between vaccines and autism originated in a 1998 paper published in The Lancet whose lead author was the physician Andrew Wakefield. His study concluded that eight of the 12 patients, aged 3 to 10 years, developed behavioral symptoms consistent with autism following the MMR vaccine (an immunization against measles, mumps, and rubella). The article was widely criticized for lack of scientific rigor and it was proven that Wakefield falsified data in the article. In 2004, 10 of the original 12 co-authors (not including Wakefield) published a retraction of the article and stated the following: "We wish to make it clear that in this paper no causal link was established between MMR vaccine and autism as the data were insufficient." In 2010, The Lancet officially retracted the article, stating that several elements of the article were incorrect, including falsified data and protocols. The article sparked a much larger anti-vaccination movement, particularly in the United States, and even though the article was shown to be fraudulent and was retracted, one in four parents still believes that vaccines can cause autism. To date, all validated and definitive studies have shown that there is no correlation between vaccines and autism. One study, published in 2015, confirms there is no link between autism and the MMR vaccine. Infants enrolled in a health plan that included the MMR vaccine were studied continuously until they reached five years old. No link between the vaccine and autism was found, whether the children had a normally developing sibling or a sibling with autism (which places them at higher risk of developing autism themselves). It can be difficult to correct human memory when wrong information is received before correct information. 
Even though there is much evidence against the Wakefield study, and retractions were published by most of the co-authors, many people continue to believe it and base decisions on it, as it still lingers in their memory. Studies and research are being conducted to determine effective ways to correct misinformation in the public memory. Routes of administration A vaccine administration may be oral, by injection (intramuscular, intradermal, subcutaneous), by puncture, transdermal or intranasal. Several recent clinical trials have aimed to deliver the vaccines via mucosal surfaces to be taken up by the common mucosal immune system, thus avoiding the need for injections. Economics of vaccination Health is often used as one of the metrics for determining the economic prosperity of a country. This is because healthier individuals are generally better suited to contributing to the economic development of a country than the sick. There are many reasons for this. For instance, a person who is vaccinated for influenza not only protects themselves from the risk of influenza, but simultaneously also prevents themselves from infecting those around them. This leads to a healthier society, which allows individuals to be more economically productive. Children are consequently able to attend school more often and have been shown to do better academically. Similarly, adults are able to work more often, more efficiently, and more effectively. Costs and benefits On the whole, vaccinations induce a net benefit to society. Vaccines are often noted for their high return on investment (ROI) values, especially when considering the long-term effects. Some vaccines have much higher ROI values than others. Studies have shown that the ratios of vaccination benefits to costs can differ substantially: from 27:1 for diphtheria/pertussis, to 13.5:1 for measles, 4.76:1 for varicella, and 0.68–1.1:1 for pneumococcal conjugate. Some governments choose to subsidize the costs of vaccines, due to some of the high ROI values attributed to vaccinations. The United States subsidizes over half of all vaccines for children, which cost between $400 and $600 each. Although most children do get vaccinated, the adult population of the US is still below the recommended immunization levels. Many factors can be attributed to this issue. Many adults who have other health conditions are unable to be safely immunized, whereas others opt not to be immunized for financial reasons. Many Americans are underinsured, and, as such, are required to pay for vaccines out-of-pocket. Others are responsible for paying high deductibles and co-pays. Although vaccinations usually induce long-term economic benefits, many governments struggle to pay the high short-term costs associated with labor and production. Consequently, many countries neglect to provide such services. According to a 2021 paper, vaccinations against Haemophilus influenzae type b, hepatitis B, human papillomavirus, Japanese encephalitis, measles, Neisseria meningitidis serogroup A, rotavirus, rubella, Streptococcus pneumoniae, and yellow fever have prevented an estimated 50 million deaths from 2000 to 2019. The paper "represents the largest assessment of vaccine impact before COVID-19-related disruptions". According to a June 2022 study, COVID-19 vaccinations prevented an additional 14.4 to 19.8 million deaths in 185 countries and territories from 8 December 2020 to 8 December 2021. 
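As a rough illustration of how such benefit-cost ratios translate into net benefit, the sketch below applies the ratios quoted above to a hypothetical program budget; the $10 million spend is an assumption for illustration only, not a figure from the cited studies.

```python
# Illustrative sketch: converting a benefit-cost ratio into an implied net benefit.
# The ratios are those quoted above; the $10 million program budget is a
# hypothetical assumption, not a figure from the cited studies.

def net_benefit(program_cost: float, benefit_cost_ratio: float) -> tuple[float, float]:
    """Return (total benefit, net benefit) implied by a benefit-cost ratio."""
    total_benefit = program_cost * benefit_cost_ratio
    return total_benefit, total_benefit - program_cost

ratios = {
    "diphtheria/pertussis": 27.0,
    "measles": 13.5,
    "varicella": 4.76,
    "pneumococcal conjugate": 1.1,  # upper end of the 0.68-1.1 range quoted above
}

for vaccine, ratio in ratios.items():
    benefit, net = net_benefit(10_000_000, ratio)
    print(f"{vaccine}: total benefit ${benefit:,.0f}, net benefit ${net:,.0f}")
```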
It has been estimated that developing at least one vaccine for each of a group of priority epidemic infectious diseases would cost between $2.8 billion and $3.7 billion. This should be set against the potential cost of an outbreak. The 2003 SARS outbreak in East Asia cost $54 billion. Game theory uses utility functions to model costs and benefits, which may include both financial and non-financial factors. In recent years, it has been argued that game theory can effectively be used to model vaccine uptake in societies. Researchers have used game theory for this purpose to analyse vaccination uptake in the context of diseases such as influenza and measles.
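A minimal sketch of the kind of game-theoretic reasoning described above: each individual compares the perceived cost of vaccinating with the expected cost of infection at the current level of population coverage. The payoff values and the linear risk model are illustrative assumptions, not parameters from the cited research.

```python
# Minimal vaccination-game sketch: an individual vaccinates only if doing so is
# individually cheaper than risking infection at the current coverage level.
# All numeric values and the linear risk model are illustrative assumptions.

COST_VACCINATION = 1.0   # perceived cost (or risk) of getting vaccinated
COST_INFECTION = 10.0    # cost of becoming infected
BASELINE_RISK = 0.5      # infection probability when nobody is vaccinated

def infection_risk(coverage: float) -> float:
    """Assumed infection risk for an unvaccinated person at a given coverage."""
    return BASELINE_RISK * (1.0 - coverage)

def chooses_to_vaccinate(coverage: float) -> bool:
    """Self-interested best response at the given population coverage."""
    return COST_VACCINATION < COST_INFECTION * infection_risk(coverage)

# As coverage rises, free-riding becomes attractive: the individually rational
# equilibrium coverage typically sits below the socially optimal level.
for tenths in range(11):
    coverage = tenths / 10
    print(f"coverage {coverage:.1f}: vaccinate? {chooses_to_vaccinate(coverage)}")
```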
Vagina
In mammals and other animals, the vagina (plural: vaginas or vaginae) is the elastic, muscular reproductive organ of the female genital tract. In humans, it extends from the vulval vestibule to the cervix (neck of the uterus). The vaginal introitus is normally partly covered by a thin layer of mucosal tissue called the hymen. The vagina allows for copulation and birth. It also channels menstrual flow, which occurs in humans and closely related primates as part of the menstrual cycle. To accommodate smoother penetration of the vagina during sexual intercourse or other sexual activity, vaginal moisture increases during sexual arousal in human females and other female mammals. This increase in moisture provides vaginal lubrication, which reduces friction. The texture of the vaginal walls creates friction for the penis during sexual intercourse and stimulates it toward ejaculation, enabling fertilization. Along with pleasure and bonding, women's sexual behavior with other people can result in sexually transmitted infections (STIs), the risk of which can be reduced by recommended safe sex practices. Other health issues may also affect the human vagina. The vagina has evoked strong reactions in societies throughout history, including negative perceptions and language, cultural taboos, and its use as a symbol for female sexuality, spirituality, or regeneration of life. In common speech, the word "vagina" is often used incorrectly to refer to the vulva or to the female genitals in general. Etymology and definition The term vagina is from Latin vāgīna, meaning "sheath" or "scabbard". The vagina may also be referred to as the birth canal in the context of pregnancy and childbirth. Although by its dictionary and anatomical definitions, the term vagina refers exclusively to the specific internal structure, it is colloquially used to refer to the vulva or to both the vagina and vulva. Using the term vagina to mean "vulva" can cause medical or legal confusion; for example, a person's interpretation of its location might not match another's interpretation of the location. Medically, one description of the vagina is that it is the canal between the hymen (or remnants of the hymen) and the cervix, while a legal description is that it begins at the vulva (between the labia). It may be that the incorrect use of the term vagina is due to less thought having gone into the anatomy of the female genitals than into that of the male genitals, and that this has contributed to an absence of correct vocabulary for the external female genitalia among both the general public and health professionals. Because a better understanding of female genitalia can help combat sexual and psychological harm with regard to female development, researchers endorse correct terminology for the vulva. Structure Gross anatomy The human vagina is an elastic, muscular canal that extends from the vulva to the cervix. The opening of the vagina lies in the urogenital triangle. The urogenital triangle is the front triangle of the perineum and also consists of the urethral opening and associated parts of the external genitalia. The vaginal canal travels upwards and backwards, between the urethra at the front, and the rectum at the back. Near the upper vagina, the cervix protrudes into the vagina on its front surface at approximately a 90-degree angle. The vaginal and urethral openings are protected by the labia. When not sexually aroused, the vagina is a collapsed tube, with the front and back walls placed together. 
The lateral walls, especially their middle area, are relatively more rigid. Because of this, the collapsed vagina has an H-shaped cross section. Behind, the upper vagina is separated from the rectum by the recto-uterine pouch, the middle vagina by loose connective tissue, and the lower vagina by the perineal body. Where the vaginal lumen surrounds the cervix of the uterus, it is divided into four continuous regions (vaginal fornices); these are the anterior, posterior, right lateral, and left lateral fornices. The posterior fornix is deeper than the anterior fornix. The vagina is supported by muscles and ligaments at its upper, middle, and lower thirds. The upper third is supported by the levator ani muscles and the transcervical, pubocervical, and sacrocervical ligaments, as well as by the upper portions of the cardinal ligaments and the parametrium. The middle third of the vagina involves the urogenital diaphragm. It is supported by the levator ani muscles and the lower portion of the cardinal ligaments. The lower third is supported by the perineal body, or the urogenital and pelvic diaphragms. The lower third may also be described as being supported by the perineal body and the pubovaginal part of the levator ani muscle. Vaginal opening and hymen The vaginal opening (also known as the vaginal introitus and, in Latin, the ostium vaginae) is at the posterior end of the vulval vestibule, behind the urethral opening. The term introitus is more technically correct than "opening", since the vagina is usually collapsed, with the opening closed. The opening to the vagina is normally obscured by the labia minora (inner lips), but may be exposed after vaginal delivery. The hymen is a thin layer of mucosal tissue that surrounds or partially covers the vaginal opening. The effects of intercourse and childbirth on the hymen vary. Where it is broken, it may completely disappear or remnants known as carunculae myrtiformes may persist. Otherwise, being very elastic, it may return to its normal position. Additionally, the hymen may be lacerated by disease, injury, medical examination, masturbation or physical exercise. For these reasons, virginity cannot be definitively determined by examining the hymen. Variations and size The length of the vagina varies among women of child-bearing age. Because of the presence of the cervix in the front wall of the vagina, there is a difference in length between the front wall, approximately 7.5 cm (2.5 to 3 in) long, and the back wall, approximately 9 cm (3.5 in) long. During sexual arousal, the vagina expands both in length and width. If a woman stands upright, the vaginal canal points in an upward-backward direction and forms an angle of approximately 45 degrees with the uterus. The vaginal opening and hymen also vary in size; in children, although the hymen commonly appears crescent-shaped, many shapes are possible. Development The vaginal plate is the precursor to the vagina. During development, the vaginal plate begins to grow where the fused ends of the paramesonephric ducts (Müllerian ducts) enter the back wall of the urogenital sinus as the sinus tubercle. As the plate grows, it significantly separates the cervix and the urogenital sinus; eventually, the central cells of the plate break down to form the vaginal lumen. This usually occurs by the twentieth to twenty-fourth week of development. If the lumen does not form, or is incomplete, membranes known as vaginal septa can form across or around the tract, causing obstruction of the outflow tract later in life. 
There are conflicting views on the embryologic origin of the vagina. The majority view is Koff's 1933 description, which posits that the upper two-thirds of the vagina originate from the caudal part of the Müllerian duct, while the lower part of the vagina develops from the urogenital sinus. Other views are Bulmer's 1957 description that the vaginal epithelium derives solely from the urogenital sinus epithelium, and Witschi's 1970 research, which reexamined Koff's description and concluded that the sinovaginal bulbs are the same as the lower portions of the Wolffian ducts. Witschi's view is supported by research by Acién et al., Bok and Drews. Robboy et al. reviewed Koff and Bulmer's theories, and support Bulmer's description in light of their own research. The debates stem from the complexity of the interrelated tissues and the absence of an animal model that matches human vaginal development. Because of this, study of human vaginal development is ongoing and may help resolve the conflicting data. Microanatomy The vaginal wall from the lumen outwards consists firstly of a mucosa of stratified squamous epithelium that is not keratinized, with a lamina propria (a thin layer of connective tissue) underneath it. Secondly, there is a layer of smooth muscle with bundles of circular fibers internal to longitudinal fibers (those that run lengthwise). Lastly, there is an outer layer of connective tissue called the adventitia. Some texts list four layers by counting the two sublayers of the mucosa (epithelium and lamina propria) separately. The smooth muscular layer within the vagina has a weak contractive force that can create some pressure in the lumen of the vagina. Much stronger contractive force, such as during childbirth, comes from muscles in the pelvic floor that are attached to the adventitia around the vagina. The lamina propria is rich in blood vessels and lymphatic channels. The muscular layer is composed of smooth muscle fibers, with an outer layer of longitudinal muscle, an inner layer of circular muscle, and oblique muscle fibers between. The outer layer, the adventitia, is a thin dense layer of connective tissue and it blends with loose connective tissue containing blood vessels, lymphatic vessels and nerve fibers that are between pelvic organs. The vaginal mucosa contains no glands. It forms folds (transverse ridges or rugae), which are more prominent in the outer third of the vagina; their function is to provide the vagina with increased surface area for extension and stretching. The epithelium of the ectocervix (the portion of the uterine cervix extending into the vagina) is an extension of, and shares a border with, the vaginal epithelium. The vaginal epithelium is made up of layers of cells, including the basal cells, the parabasal cells, the superficial squamous flat cells, and the intermediate cells. The basal layer of the epithelium is the most mitotically active and reproduces new cells. The superficial cells shed continuously and basal cells replace them. Estrogen induces the intermediate and superficial cells to fill with glycogen. Cells from the lower basal layer transition from active metabolic activity to death (apoptosis). In these mid-layers of the epithelia, the cells begin to lose their mitochondria and other organelles. The cells retain an unusually high level of glycogen compared to other epithelial tissue in the body. 
Under the influence of maternal estrogen, the vagina of a newborn is lined by thick stratified squamous epithelium (or mucosa) for two to four weeks after birth. Between then and puberty, the epithelium remains thin, with only a few layers of cuboidal cells without glycogen. The epithelium also has few rugae and is red in color before puberty. When puberty begins, the mucosa thickens and again becomes stratified squamous epithelium with glycogen-containing cells, under the influence of the girl's rising estrogen levels. Finally, the epithelium thins out from menopause onward and eventually ceases to contain glycogen, because of the lack of estrogen. Flattened squamous cells are more resistant to both abrasion and infection. The permeability of the epithelium allows for an effective response from the immune system since antibodies and other immune components can easily reach the surface. The vaginal epithelium differs from the similar tissue of the skin. The epidermis of the skin is relatively resistant to water because it contains high levels of lipids. The vaginal epithelium contains lower levels of lipids. This allows the passage of water and water-soluble substances through the tissue. Keratinization happens when the epithelium is exposed to the dry external atmosphere. In abnormal circumstances, such as in pelvic organ prolapse, the mucosa may be exposed to air, becoming dry and keratinized. Blood and nerve supply Blood is supplied to the vagina mainly via the vaginal artery, which emerges from a branch of the internal iliac artery or the uterine artery. The vaginal arteries anastomose (are joined) along the side of the vagina with the cervical branch of the uterine artery; this forms the azygos artery, which lies on the midline of the anterior and posterior vagina. Other arteries which supply the vagina include the middle rectal artery and the internal pudendal artery, all branches of the internal iliac artery. Three groups of lymphatic vessels accompany these arteries: the upper group accompanies the vaginal branches of the uterine artery; a middle group accompanies the vaginal arteries; and the lower group, draining lymph from the area outside the hymen, drains to the inguinal lymph nodes. Ninety-five percent of the lymphatic channels of the vagina are within 3 mm of the surface of the vagina. Two main veins drain blood from the vagina, one on the left and one on the right. These form a network of smaller veins, the vaginal venous plexus, on the sides of the vagina, connecting with similar venous plexuses of the uterus, bladder, and rectum. These ultimately drain into the internal iliac veins. The nerve supply of the upper vagina is provided by the sympathetic and parasympathetic areas of the pelvic plexus. The lower vagina is supplied by the pudendal nerve. Function Secretions Vaginal secretions are primarily from the uterus, cervix, and vaginal epithelium in addition to minuscule vaginal lubrication from the Bartholin's glands upon sexual arousal. It takes little vaginal secretion to make the vagina moist; secretions may increase during sexual arousal, the middle of or a little prior to menstruation, or during pregnancy. Menstruation (also known as a "period" or "monthly") is the regular discharge of blood and mucosal tissue (known as menses) from the inner lining of the uterus through the vagina. 
The vaginal mucous membrane varies in thickness and composition during the menstrual cycle, which is the regular, natural change that occurs in the female reproductive system (specifically the uterus and ovaries) that makes pregnancy possible. Different hygiene products such as tampons, menstrual cups, and sanitary napkins are available to absorb or capture menstrual blood. The Bartholin's glands, located near the vaginal opening, were originally considered the primary source for vaginal lubrication, but further examination showed that they provide only a few drops of mucus. Vaginal lubrication is mostly provided by plasma seepage known as transudate from the vaginal walls. This initially forms as sweat-like droplets, and is caused by increased fluid pressure in the tissue of the vagina (vasocongestion), resulting in the release of plasma as transudate from the capillaries through the vaginal epithelium. Before and during ovulation, the mucous glands within the cervix secrete different variations of mucus, which provides an alkaline, fertile environment in the vaginal canal that is favorable to the survival of sperm. Following menopause, vaginal lubrication naturally decreases. Sexual stimulation Nerve endings in the vagina can provide pleasurable sensations when the vagina is stimulated during sexual activity. Women may derive pleasure from one part of the vagina, or from a feeling of closeness and fullness during vaginal penetration. Because the vagina is not rich in nerve endings, women often do not receive sufficient sexual stimulation, or orgasm, solely from vaginal penetration. Although the literature commonly cites a greater concentration of nerve endings and therefore greater sensitivity near the vaginal entrance (the outer one-third or lower third), some scientific examinations of vaginal wall innervation indicate no single area with a greater density of nerve endings. Other research indicates that only some women have a greater density of nerve endings in the anterior vaginal wall. Because of the fewer nerve endings in the vagina, childbirth pain is significantly more tolerable. Pleasure can be derived from the vagina in a variety of ways. In addition to penile penetration, pleasure can come from masturbation, fingering, or specific sex positions (such as the missionary position or the spoons sex position). Heterosexual couples may engage in fingering as a form of foreplay to incite sexual arousal or as an accompanying act, or as a type of birth control, or to preserve virginity. Less commonly, they may use non penile-vaginal sexual acts as a primary means of sexual pleasure. In contrast, lesbians and other women who have sex with women commonly engage in fingering as a main form of sexual activity. Some women and couples use sex toys, such as a vibrator or dildo, for vaginal pleasure. Most women require direct stimulation of the clitoris to orgasm. The clitoris plays a part in vaginal stimulation. It is a sex organ of multiplanar structure containing an abundance of nerve endings, with a broad attachment to the pubic arch and extensive supporting tissue to the labia. Research indicates that it forms a tissue cluster with the vagina. This tissue is perhaps more extensive in some women than in others, which may contribute to orgasms experienced vaginally. During sexual arousal, and particularly the stimulation of the clitoris, the walls of the vagina lubricate. This begins after ten to thirty seconds of sexual arousal, and increases in amount the longer the woman is aroused. 
It reduces friction or injury that can be caused by insertion of the penis into the vagina or other penetration of the vagina during sexual activity. The vagina lengthens during the arousal, and can continue to lengthen in response to pressure; as the woman becomes fully aroused, the vagina expands in length and width, while the cervix retracts. With the upper two-thirds of the vagina expanding and lengthening, the uterus rises into the greater pelvis, and the cervix is elevated above the vaginal floor, resulting in tenting of the mid-vaginal plane. This is known as the tenting or ballooning effect. As the elastic walls of the vagina stretch or contract, with support from the pelvic muscles, to wrap around the inserted penis (or other object), this creates friction for the penis and helps to cause a man to experience orgasm and ejaculation, which in turn enables fertilization. An area in the vagina that may be an erogenous zone is the G-spot. It is typically defined as being located at the anterior wall of the vagina, a couple or few inches in from the entrance, and some women experience intense pleasure, and sometimes an orgasm, if this area is stimulated during sexual activity. A G-spot orgasm may be responsible for female ejaculation, leading some doctors and researchers to believe that G-spot pleasure comes from the Skene's glands, a female homologue of the prostate, rather than any particular spot on the vaginal wall; other researchers consider the connection between the Skene's glands and the G-spot area to be weak. The G-spot's existence (and existence as a distinct structure) is still under dispute because reports of its location can vary from woman to woman, it appears to be nonexistent in some women, and it is hypothesized to be an extension of the clitoris and therefore the reason for orgasms experienced vaginally. Childbirth The vagina is the birth canal for the delivery of a baby. When labor nears, several signs may occur, including vaginal discharge and the rupture of membranes (water breaking). The latter results in a gush or small stream of amniotic fluid from the vagina. Water breaking most commonly happens at the beginning of labor. It happens before labor if there is a premature rupture of membranes, which occurs in 10% of cases. Among women giving birth for the first time, Braxton Hicks contractions are mistaken for actual contractions, but they are instead a way for the body to prepare for true labor. They do not signal the beginning of labor, but they are usually very strong in the days leading up to labor. As the body prepares for childbirth, the cervix softens, thins, moves forward to face the front, and begins to open. This allows the fetus to settle into the pelvis, a process known as lightening. As the fetus settles into the pelvis, pain from the sciatic nerves, increased vaginal discharge, and increased urinary frequency can occur. While lightening is likelier to happen after labor has begun for women who have given birth before, it may happen ten to fourteen days before labor in women experiencing labor for the first time. The fetus begins to lose the support of the cervix when contractions begin. With cervical dilation reaching 10 cm to accommodate the head of the fetus, the head moves from the uterus to the vagina. The elasticity of the vagina allows it to stretch to many times its normal diameter in order to deliver the child. Vaginal births are more common, but if there is a risk of complications a caesarean section (C-section) may be performed. 
Shortly after birth, the vaginal mucosa is edematous (has an abnormal accumulation of fluid) and thin, with few rugae. The mucosa thickens and rugae return in approximately three weeks once the ovaries regain usual function and estrogen flow is restored. The vaginal opening gapes and is relaxed, until it returns to its approximate pre-pregnant state six to eight weeks after delivery, known as the postpartum period; however, the vagina will continue to be larger in size than it was previously. After giving birth, there is a phase of vaginal discharge called lochia that can vary significantly in the amount of loss and its duration but can go on for up to six weeks. Vaginal microbiota The vaginal flora is a complex ecosystem that changes throughout life, from birth to menopause. The vaginal microbiota resides in and on the outermost layer of the vaginal epithelium. This microbiome consists of species and genera which typically do not cause symptoms or infections in women with normal immunity. The vaginal microbiome is dominated by Lactobacillus species. These species metabolize glycogen, breaking it down into sugar. Lactobacilli metabolize the sugar into glucose and lactic acid. Under the influence of hormones, such as estrogen, progesterone and follicle-stimulating hormone (FSH), the vaginal ecosystem undergoes cyclic or periodic changes. Clinical significance Pelvic examinations Vaginal health can be assessed during a pelvic examination, along with the health of most of the organs of the female reproductive system. Such exams may include the Pap test (or cervical smear). In the United States, Pap test screening is recommended starting around 21 years of age until the age of 65. However, other countries do not recommend Pap testing in non-sexually active women. Guidelines on frequency vary from every three to five years. Routine pelvic examination on women who are not pregnant and lack symptoms may be more harmful than beneficial. A normal finding during the pelvic exam of a pregnant woman is a bluish tinge to the vaginal wall. Pelvic exams are most often performed when there are unexplained symptoms of discharge, pain, unexpected bleeding or urinary problems. During a pelvic exam, the vaginal opening is assessed for position, symmetry, presence of the hymen, and shape. The vagina is assessed internally by the examiner with gloved fingers, before the speculum is inserted, to note the presence of any weakness, lumps or nodules. Inflammation and discharge are noted if present. During this time, the Skene's and Bartholin's glands are palpated to identify abnormalities in these structures. After the digital examination of the vagina is complete, the speculum, an instrument to visualize internal structures, is carefully inserted to make the cervix visible. Examination of the vagina may also be done during a cavity search. Lacerations or other injuries to the vagina can occur during sexual assault or other sexual abuse. These can be tears, bruises, inflammation and abrasions. Sexual assault with objects can damage the vagina and X-ray examination may reveal the presence of foreign objects. If consent is given, a pelvic examination is part of the assessment of sexual assault. Pelvic exams are also performed during pregnancy, and women with high risk pregnancies have exams more often. Medications Intravaginal administration is a route of administration in which the medication is inserted into the vagina as a cream or tablet. 
Pharmacologically, this has the potential advantage of promoting therapeutic effects primarily in the vagina or nearby structures (such as the vaginal portion of cervix) with limited systemic adverse effects compared to other routes of administration. Medications used to ripen the cervix and induce labor are commonly administered via this route, as are estrogens, contraceptive agents, propranolol, and antifungals. Vaginal rings can also be used to deliver medication, including birth control in contraceptive vaginal rings. These are inserted into the vagina and provide continuous, low dose and consistent drug levels in the vagina and throughout the body. Before the baby emerges from the womb, an injection for pain control during childbirth may be administered through the vaginal wall and near the pudendal nerve. Because the pudendal nerve carries motor and sensory fibers that innervate the pelvic muscles, a pudendal nerve block relieves birth pain. The medicine does not harm the child, and is without significant complications. Infections, diseases, and safe sex Vaginal infections or diseases include yeast infection, vaginitis, sexually transmitted infections (STIs) and cancer. Lactobacillus gasseri and other Lactobacillus species in the vaginal flora provide some protection from infections by their secretion of bacteriocins and hydrogen peroxide. The healthy vagina of a woman of child-bearing age is acidic, with a pH normally ranging between 3.8 and 4.5. The low pH prohibits growth of many strains of pathogenic microbes. The acidic balance of the vagina may also be affected by semen, pregnancy, menstruation, diabetes or other illness, birth control pills, certain antibiotics, poor diet, and stress. Any of these changes to the acidic balance of the vagina may contribute to yeast infection. An elevated pH (greater than 4.5) of the vaginal fluid can be caused by an overgrowth of bacteria as in bacterial vaginosis, or in the parasitic infection trichomoniasis, both of which have vaginitis as a symptom. Vaginal flora populated by a number of different bacteria characteristic of bacterial vaginosis increases the risk of adverse pregnancy outcomes. During a pelvic exam, samples of vaginal fluids may be taken to screen for sexually transmitted infections or other infections. Because the vagina is self-cleansing, it usually does not need special hygiene. Clinicians generally discourage the practice of douching for maintaining vulvovaginal health. Since the vaginal flora gives protection against disease, a disturbance of this balance may lead to infection and abnormal discharge. Vaginal discharge may indicate a vaginal infection by color and odor, or the resulting symptoms of discharge, such as irritation or burning. Abnormal vaginal discharge may be caused by STIs, diabetes, douches, fragranced soaps, bubble baths, birth control pills, yeast infection (commonly as a result of antibiotic use) or another form of vaginitis. While vaginitis is an inflammation of the vagina, and is attributed to infection, hormonal issues, or irritants, vaginismus is an involuntary tightening of the vagina muscles during vaginal penetration that is caused by a conditioned reflex or disease. Vaginal discharge due to yeast infection is usually thick, creamy in color and odorless, while discharge due to bacterial vaginosis is gray-white in color, and discharge due to trichomoniasis is usually a gray color, thin in consistency, and has a fishy odor. Discharge in 25% of the trichomoniasis cases is yellow-green. 
HIV/AIDS, human papillomavirus (HPV), genital herpes and trichomoniasis are some STIs that may affect the vagina, and health sources recommend safe sex (or barrier method) practices to prevent the transmission of these and other STIs. Safe sex commonly involves the use of condoms, and sometimes female condoms (which give women more control). Both types can help avert pregnancy by preventing semen from coming in contact with the vagina. There is, however, little research on whether female condoms are as effective as male condoms at preventing STIs, and they are slightly less effective than male condoms at preventing pregnancy, which may be because the female condom fits less tightly than the male condom or because it can slip into the vagina and spill semen. The vaginal lymph nodes often trap cancerous cells that originate in the vagina. These nodes can be assessed for the presence of disease. Selective surgical removal (rather than total and more invasive removal) of vaginal lymph nodes reduces the risk of complications that can accompany more radical surgeries. These selective nodes act as sentinel lymph nodes. Instead of surgery, the lymph nodes of concern are sometimes treated with radiation therapy administered to the patient's pelvic, inguinal lymph nodes, or both. Vaginal cancer and vulvar cancer are very rare, and primarily affect older women. Cervical cancer (which is relatively common) increases the risk of vaginal cancer, which is why there is a significant chance for vaginal cancer to occur at the same time as, or after, cervical cancer. It may be that their causes are the same. Cervical cancer may be prevented by pap smear screening and HPV vaccines, but HPV vaccines only cover HPV types 16 and 18, the cause of 70% of cervical cancers. Some symptoms of cervical and vaginal cancer are dyspareunia, and abnormal vaginal bleeding or vaginal discharge, especially after sexual intercourse or menopause. However, most cervical cancers are asymptomatic (present no symptoms). Vaginal intracavity brachytherapy (VBT) is used to treat endometrial, vaginal and cervical cancer. An applicator is inserted into the vagina to allow the administration of radiation as close to the site of the cancer as possible. Survival rates increase with VBT when compared to external beam radiation therapy. By using the vagina to place the emitter as close to the cancerous growth as possible, the systemic effects of radiation therapy are reduced and cure rates for vaginal cancer are higher. Research is unclear on whether treating cervical cancer with radiation therapy increases the risk of vaginal cancer. Effects of aging and childbirth Age and hormone levels significantly correlate with the pH of the vagina. Estrogen, glycogen and lactobacilli impact these levels. At birth, the vagina is acidic with a pH of approximately 4.5, and ceases to be acidic by three to six weeks of age, becoming alkaline. Average vaginal pH is 7.0 in pre-pubertal girls. Although there is a high degree of variability in timing, girls who are approximately seven to twelve years of age will continue to have labial development as the hymen thickens and the vagina elongates to approximately 8 cm. The vaginal mucosa thickens and the vaginal pH becomes acidic again. Girls may also experience a thin, white vaginal discharge called leukorrhea. 
The vaginal microbiota of adolescent girls aged 13 to 18 years is similar to women of reproductive age, who have an average vaginal pH of 3.8–4.5, but research is not as clear on whether this is the same for premenarcheal or perimenarcheal girls. The vaginal pH during menopause is 6.5–7.0 (without hormone replacement therapy), or 4.5–5.0 with hormone replacement therapy. After menopause, the body produces less estrogen. This causes atrophic vaginitis (thinning and inflammation of the vaginal walls), which can lead to vaginal itching, burning, bleeding, soreness, or vaginal dryness (a decrease in lubrication). Vaginal dryness can cause discomfort on its own or discomfort or pain during sexual intercourse. Hot flashes are also characteristic of menopause. Menopause also affects the composition of vaginal support structures. The vascular structures become fewer with advancing age. Specific collagens become altered in composition and ratios. It is thought that the weakening of the support structures of the vagina is due to the physiological changes in this connective tissue. Menopausal symptoms can be eased by estrogen-containing vaginal creams, non-prescription, non-hormonal medications, vaginal estrogen rings such as the Femring, or other hormone replacement therapies, but there are risks (including adverse effects) associated with hormone replacement therapy. Vaginal creams and vaginal estrogen rings may not have the same risks as other hormone replacement treatments. Hormone replacement therapy can treat vaginal dryness, but a personal lubricant may be used to temporarily remedy vaginal dryness specifically for sexual intercourse. Some women have an increase in sexual desire following menopause. It may be that menopausal women who continue to engage in sexual activity regularly experience vaginal lubrication similar to levels in women who have not entered menopause, and can enjoy sexual intercourse fully. They may have less vaginal atrophy and fewer problems concerning sexual intercourse. Vaginal changes that happen with aging and childbirth include mucosal redundancy, rounding of the posterior aspect of the vagina with shortening of the distance from the distal end of the anal canal to the vaginal opening, diastasis or disruption of the pubococcygeus muscles caused by poor repair of an episiotomy, and blebs that may protrude beyond the area of the vaginal opening. Other vaginal changes related to aging and childbirth are stress urinary incontinence, rectocele, and cystocele. Physical changes resulting from pregnancy, childbirth, and menopause often contribute to stress urinary incontinence. If a woman has weak pelvic floor muscle support and tissue damage from childbirth or pelvic surgery, a lack of estrogen can further weaken the pelvic muscles and contribute to stress urinary incontinence. Pelvic organ prolapse, such as a rectocele or cystocele, is characterized by the descent of pelvic organs from their normal positions to impinge upon the vagina. A reduction in estrogen does not cause rectocele, cystocele or uterine prolapse, but childbirth and weakness in pelvic support structures can. Prolapse may also occur when the pelvic floor becomes injured during a hysterectomy, gynecological cancer treatment, or heavy lifting. Pelvic floor exercises such as Kegel exercises can be used to strengthen the pelvic floor muscles, preventing or arresting the progression of prolapse. 
There is no evidence that doing Kegel exercises isotonically or with some form of weight is superior; there are greater risks with using weights since a foreign object is introduced into the vagina. During the second stage of labor, while the infant is being born, the vagina undergoes significant changes. A gush of blood from the vagina may be seen right before the baby is born. Lacerations to the vagina that can occur during birth vary in depth, severity and the amount of adjacent tissue involvement. The laceration can be so extensive as to involve the rectum and anus. This event can be especially distressing to a new mother. When this occurs, fecal incontinence develops and stool can leave through the vagina. Close to 85% of spontaneous vaginal births develop some form of tearing. Out of these, 60–70% require suturing. Lacerations from labor do not always occur. Surgery The vagina, including the vaginal opening, may be altered as a result of surgeries such as an episiotomy, vaginectomy, vaginoplasty or labiaplasty. Those who undergo vaginoplasty are usually older and have given birth. A thorough examination of the vagina before a vaginoplasty is standard, as well as a referral to a urogynecologist to diagnose possible vaginal disorders. With regard to labiaplasty, reduction of the labia minora is a quick procedure; complications are minor and rare, and can be corrected. Any scarring from the procedure is minimal, and long-term problems have not been identified. During an episiotomy, a surgical incision is made during the second stage of labor to enlarge the vaginal opening for the baby to pass through. Although its routine use is no longer recommended, and not having an episiotomy is found to have better results than an episiotomy, it is one of the most common medical procedures performed on women. The incision is made through the skin, vaginal epithelium, subcutaneous fat, perineal body and superficial transverse perineal muscle and extends from the vagina to the anus. Episiotomies can be painful after delivery. Women often report pain during sexual intercourse up to three months after laceration repair or an episiotomy. Some surgical techniques result in less pain than others. The two types of episiotomies performed are the median incision and the medio-lateral incision. The median incision is a perpendicular cut between the vagina and the anus and is the most common. The medio-lateral incision is made at an angle from the vagina and is not as likely to tear through to the anus. The medio-lateral cut takes more time to heal than the median cut. Vaginectomy is surgery to remove all or part of the vagina, and is usually used to treat malignancy. Removal of some or all of the sexual organs can result in damage to the nerves and leave behind scarring or adhesions. Sexual function may also be impaired as a result, as in the case of some cervical cancer surgeries. These surgeries can impact pain, elasticity, vaginal lubrication and sexual arousal. This often resolves after one year but may take longer. Women, especially those who are older and have had multiple births, may choose to surgically correct vaginal laxity. This surgery has been described as vaginal tightening or rejuvenation. 
While a woman may experience an improvement in self-image and sexual pleasure by undergoing vaginal tightening or rejuvenation, there are risks associated with the procedures, including infection, narrowing of the vaginal opening, insufficient tightening, decreased sexual function (such as pain during sexual intercourse), and rectovaginal fistula. Women who undergo this procedure may unknowingly have a medical issue, such as a prolapse, and an attempt to correct this is also made during the surgery. Surgery on the vagina can be elective or cosmetic. Women who seek cosmetic surgery can have congenital conditions, physical discomfort or wish to alter the appearance of their genitals. Data on average genital appearance or measurements are largely unavailable, which makes defining a successful outcome for such surgery difficult. A number of sex reassignment surgeries are available to transgender people. Although not all intersex conditions require surgical treatment, some people choose genital surgery to correct atypical anatomical conditions. Anomalies and other health issues Vaginal anomalies are defects that result in an abnormal or absent vagina. The most common obstructive vaginal anomaly is an imperforate hymen, a condition in which the hymen obstructs menstrual flow or other vaginal secretions. Another vaginal anomaly is a transverse vaginal septum, which partially or completely blocks the vaginal canal. The precise cause of an obstruction must be determined before it is repaired, since corrective surgery differs depending on the cause. In some cases, such as isolated vaginal agenesis, the external genitalia may appear normal. Abnormal openings known as fistulas can cause urine or feces to enter the vagina, resulting in incontinence. The vagina is susceptible to fistula formation because of its proximity to the urinary and gastrointestinal tracts. Specific causes are manifold and include obstructed labor, hysterectomy, malignancy, radiation, episiotomy, and bowel disorders. A small number of vaginal fistulas are congenital. Various surgical methods are employed to repair fistulas. Untreated, fistulas can result in significant disability and have a profound impact on quality of life. Vaginal evisceration is a serious complication of a vaginal hysterectomy and occurs when the vaginal cuff ruptures, allowing the small intestine to protrude from the vagina. Cysts may also affect the vagina. Various types of vaginal cysts can develop on the surface of the vaginal epithelium or in deeper layers of the vagina and can grow to be as large as 7 cm. Often, they are an incidental finding during a routine pelvic examination. Vaginal cysts can mimic other structures that protrude from the vagina such as a rectocele and cystocele. Cysts that can be present include Müllerian cysts, Gartner's duct cysts, and epidermoid cysts. A vaginal cyst is most likely to develop in women between the ages of 30 and 40. It is estimated that 1 out of 200 women has a vaginal cyst. The Bartholin's cyst is of vulvar rather than vaginal origin, but it presents as a lump at the vaginal opening. It is more common in younger women and is usually without symptoms, but it can cause pain if an abscess forms, block the entrance to the vulval vestibule if large, and impede walking or cause painful sexual intercourse. 
Society and culture Perceptions, symbolism and vulgarity Various perceptions of the vagina have existed throughout history, including the belief that it is the center of sexual desire, a metaphor for life via birth, inferior to the penis, unappealing to sight or smell, or vulgar. These views can largely be attributed to sex differences, and how they are interpreted. David Buss, an evolutionary psychologist, stated that because a penis is significantly larger than a clitoris and is readily visible while the vagina is not, and males urinate through the penis, boys are taught from childhood to touch their penises while girls are often taught that they should not touch their own genitalia, which implies that there is harm in doing so. Buss considered this to be the reason many women are not as familiar with their genitalia, and that researchers assume these sex differences explain why boys learn to masturbate before girls and do so more often. The word vagina is commonly avoided in conversation, and many people are confused about the vagina's anatomy and may be unaware that it is not used for urination. This is exacerbated by phrases such as "boys have a penis, girls have a vagina", which cause children to think that girls have one orifice in the pelvic area. Author Hilda Hutcherson stated, "Because many [women] have been conditioned since childhood through verbal and nonverbal cues to think of [their] genitals as ugly, smelly and unclean, [they] aren't able to fully enjoy intimate encounters" because of fear that their partner will dislike the sight, smell, or taste of their genitals. She argued that women, unlike men, did not have locker room experiences in school where they compared each other's genitals, which is one reason so many women wonder if their genitals are normal. One scholar stated that having a vagina meant she would typically be treated less well than her vagina-less counterparts and subject to inequalities (such as job inequality), which she categorized as being treated like a second-class citizen. Negative views of the vagina are simultaneously contrasted by views that it is a powerful symbol of female sexuality, spirituality, or life. Author Denise Linn stated that the vagina "is a powerful symbol of womanliness, openness, acceptance, and receptivity. It is the inner valley spirit". Sigmund Freud placed significant value on the vagina, postulating the concept that vaginal orgasm is separate from clitoral orgasm, and that, upon reaching puberty, the proper response of mature women is a changeover to vaginal orgasms (meaning orgasms without any clitoral stimulation). This theory made many women feel inadequate, as the majority of women cannot achieve orgasm via vaginal intercourse alone. Regarding religion, the womb represents a powerful symbol as the yoni in Hinduism, which represents "the feminine potency", and this may indicate the value that Hindu society has given female sexuality and the vagina's ability to deliver life; however, "womb" is not the yoni's primary denotation. While, in ancient times, the vagina was often considered equivalent (homologous) to the penis, with anatomists Galen (129 AD – 200 AD) and Vesalius (1514–1564) regarding the organs as structurally the same except for the vagina being inverted, anatomical studies over later centuries showed the clitoris to be the penile equivalent. 
Another perception of the vagina was that the release of vaginal fluids would cure or remedy a number of ailments; various methods were used over the centuries to release "female seed" (via vaginal lubrication or female ejaculation) as a treatment for suffocation of the womb ('suffocation from retained seed'), green sickness, and possibly for female hysteria. Reported methods for treatment included a midwife rubbing the walls of the vagina or insertion of the penis or penis-shaped objects into the vagina. Symptoms of the female hysteria diagnosis – a concept that is no longer recognized by medical authorities as a medical disorder – included faintness, nervousness, insomnia, fluid retention, heaviness in abdomen, muscle spasm, shortness of breath, irritability, loss of appetite for food or sex, and a propensity for causing trouble. It may be that women who were considered to be suffering from female hysteria would sometimes undergo "pelvic massage" – stimulation of the genitals by the doctor until the woman experienced "hysterical paroxysm" (i.e., orgasm). In this case, paroxysm was regarded as a medical treatment, and not a sexual release. The vagina has been given many vulgar names, three of which are pussy, twat, and cunt. Cunt is also used as a derogatory epithet referring to people of either sex. This usage is relatively recent, dating from the late nineteenth century. Reflecting different national usages, cunt is described as "an unpleasant or stupid person" in the Compact Oxford English Dictionary, whereas Merriam-Webster lists a usage of the term as "usually disparaging and obscene: woman", noting that it is used in the United States as "an offensive way to refer to a woman". Random House defines it as "a despicable, contemptible or foolish man". Some feminists of the 1970s sought to eliminate disparaging terms such as cunt. Twat is widely used as a derogatory epithet, especially in British English, referring to a person considered obnoxious or stupid. Pussy can indicate "cowardice or weakness", and "the human vulva or vagina" or by extension "sexual intercourse with a woman". In English, the use of the word pussy to refer to women is considered derogatory or demeaning, treating people as sexual objects. In literature and art The vagina loquens, or "talking vagina", is a significant tradition in literature and art, dating back to the ancient folklore motifs of the "talking cunt". These tales usually involve vaginas talking by the effect of magic or charms, and often admitting to their lack of chastity. Other folk tales relate the vagina as having teeth – vagina dentata (Latin for "toothed vagina"). These carry the implication that sexual intercourse might result in injury, emasculation, or castration for the man involved. These stories were frequently told as cautionary tales warning of the dangers of unknown women and to discourage rape. In 1966, the French artist Niki de Saint Phalle collaborated with Dadaist artist Jean Tinguely and Per Olof Ultvedt on a large sculpture installation entitled Hon – en katedral (Swedish for "she – a cathedral") for Moderna Museet, in Stockholm, Sweden. The outer form is a giant, reclining sculpture of a woman which visitors can enter through a door-sized vaginal opening between her spread legs. The Vagina Monologues, a 1996 episodic play by Eve Ensler, has contributed to making female sexuality a topic of public discourse. It is made up of a varying number of monologues read by a number of women. 
Initially, Ensler performed every monologue herself, with subsequent performances featuring three actresses; later versions feature a different actress for every role. Each of the monologues deals with an aspect of the feminine experience, touching on matters such as sexual activity, love, rape, menstruation, female genital mutilation, masturbation, birth, orgasm, the various common names for the vagina, or simply as a physical aspect of the body. A recurring theme throughout the pieces is the vagina as a tool of female empowerment, and the ultimate embodiment of individuality. Influence on modification Societal views, influenced by tradition, a lack of knowledge on anatomy, or sexism, can significantly impact a person's decision to alter their own or another person's genitalia. Women may want to alter their genitalia (vagina or vulva) because they believe that its appearance, such as the length of the labia minora covering the vaginal opening, is not normal, or because they desire a smaller vaginal opening or tighter vagina. Women may want to remain youthful in appearance and sexual function. These views are often influenced by the media, including pornography, and women can have low self-esteem as a result. They may be embarrassed to be naked in front of a sexual partner and may insist on having sex with the lights off. When modification surgery is performed purely for cosmetic reasons, it is often viewed poorly, and some doctors have compared such surgeries to female genital mutilation (FGM). Female genital mutilation, also known as female circumcision or female genital cutting, is genital modification with no health benefits. The most severe form is Type III FGM, known as infibulation, which involves removing all or part of the labia and closing up the vagina. A small hole is left for the passage of urine and menstrual blood, and the vagina is opened up for sexual intercourse and childbirth. Significant controversy surrounds female genital mutilation, with the World Health Organization (WHO) and other health organizations campaigning against the procedures on behalf of human rights, stating that it is "a violation of the human rights of girls and women" and "reflects deep-rooted inequality between the sexes". Female genital mutilation has existed at one point or another in almost all human civilizations, most commonly to exert control over the sexual behavior, including masturbation, of girls and women. It is carried out in several countries, especially in Africa, and to a lesser extent in other parts of the Middle East and Southeast Asia, on girls from a few days old to mid-adolescence, often to reduce sexual desire in an effort to preserve vaginal virginity. Comfort Momoh stated it may be that female genital mutilation was "practiced in ancient Egypt as a sign of distinction among the aristocracy"; there are reports that traces of infibulation are on Egyptian mummies. Custom and tradition are the most frequently cited reasons for the practice of female genital mutilation. Some cultures believe that female genital mutilation is part of a girl's initiation into adulthood and that not performing it can disrupt social and political cohesion. In these societies, a girl is often not considered an adult unless she has undergone the procedure. Other animals The vagina is a structure of animals in which the female is internally fertilized, rather than by traumatic insemination used by some invertebrates. 
Although research on the vagina is especially lacking for different animals, its location, structure and size are documented as varying among species. In therian mammals (placentals and marsupials), the vagina leads from the uterus to the exterior of the female body. Female placentals have two openings in the vulva; these are the urethral opening for the urinary tract and the vaginal opening for the genital tract. Depending on the species, these openings may be within the internal urogenital sinus or on the external vestibule. Female marsupials have two lateral vaginas, which lead to separate uteri, but both open externally through the same orifice; a third canal, known as the median vagina, which can be transitory or permanent, is used for birth. The female spotted hyena does not have an external vaginal opening. Instead, the vagina exits through the clitoris, allowing the females to urinate, copulate and give birth through the clitoris. In female canids, the vagina contracts during copulation, forming a copulatory tie. Female cetaceans have vaginal folds that are not found in other mammals. Monotremes, birds, reptiles and amphibians have a cloaca, which is the single external opening for the gastrointestinal, urinary, and reproductive tracts. Some of these vertebrates have a part of the oviduct that leads to the cloaca. Chickens have a vaginal aperture that opens from the vertical apex of the cloaca. The vagina extends upward from the aperture and becomes the egg gland. In some jawless fish, there is neither oviduct nor vagina and instead the egg travels directly through the body cavity (and is fertilised externally as in most fish and amphibians). In insects and other invertebrates, the vagina can be a part of the oviduct (see insect reproductive system). Birds have a cloaca into which the urinary, reproductive tract (vagina) and gastrointestinal tract empty. Females of some waterfowl species have developed vaginal structures called dead end sacs and clockwise coils to protect themselves from sexual coercion. A lack of research on the vagina and other female genitalia, especially for different animals, has stifled knowledge on female sexual anatomy. One explanation for why male genitalia are studied more is that penises are significantly simpler to analyze than female genital cavities, because male genitals usually protrude and are therefore easier to assess and measure. By contrast, female genitals are more often concealed, and require more dissection, which in turn requires more time. Another explanation is that a main function of the penis is to impregnate, while female genitals may alter shape upon interaction with male organs, especially so as to benefit or hinder reproductive success. Non-human primates are optimal models for human biomedical research because humans and non-human primates share physiological characteristics as a result of evolution. While menstruation is heavily associated with human females, and they have the most pronounced menstruation, it is also typical of ape relatives and monkeys. Female macaques menstruate, with a cycle length over the course of a lifetime that is comparable to that of female humans. Estrogens and progestogens in the menstrual cycles and during premenarche and postmenopause are also similar in female humans and macaques; however, only in macaques does keratinization of the epithelium occur during the follicular phase. 
The vaginal pH of macaques also differs, with near-neutral to slightly alkaline median values that are widely variable, which may be due to the lack of lactobacilli in the vaginal flora. This is one reason why, although macaques are used for studying HIV transmission and testing microbicides, animal models are not often used in the study of sexually transmitted infections, such as trichomoniasis. Another is that such conditions' causes are inextricably bound to humans' genetic makeup, making results from other species difficult to apply to humans.
Biology and health sciences
Reproductive system
null
32478
https://en.wikipedia.org/wiki/Vim%20%28text%20editor%29
Vim (text editor)
Vim (vi improved) is a free and open-source, screen-based text editor program. It is an improved clone of Bill Joy's vi. Vim's author, Bram Moolenaar, derived Vim from a port of the Stevie editor for Amiga and released a version to the public in 1991. Vim is designed for use both from a command-line interface and as a standalone application in a graphical user interface. Since its release for the Amiga, cross-platform development has made it available on many other systems. In 2018, it was voted the most popular editor amongst Linux Journal readers; in 2015 the Stack Overflow developer survey found it to be the third most popular text editor, and in 2019 the fifth most popular development environment. History Vim's forerunner, Stevie (ST Editor for VI Enthusiasts), was created by Tim Thompson for the Atari ST in 1987 and further developed by Tony Andrews and G.R. (Fred) Walter. It was one of the first popularized clones of Vi, and did not use Vi's source code. The source code for Vi was based on the Ed text editor developed under AT&T, and therefore Vi could only be used by those with an AT&T source license. Basing Vim on the source code for Stevie meant the program could be distributed without requiring the AT&T source license. Basing his work on Stevie, Bram Moolenaar began working on Vim for the Amiga computer in 1988, with the first public release (Vim v1.14) in 1991. At the time of its first release, the name "Vim" was an acronym for "Vi IMitation", but this changed to "Vi IMproved" late in 1993. Release history License Vim is released under the Vim license, which includes some charityware clauses that encourage users who enjoy the software to consider donating to children in Uganda. The Vim license is compatible with the GNU General Public License through a special clause allowing distribution of modified copies under the GNU GPL version 2.0 or later. Interface Like vi, Vim's interface is not based on menus or icons but on commands given in a text user interface; its GUI mode, gVim, adds menus and toolbars for commonly used commands but the full functionality is still expressed through its command line mode. Vi (and by extension Vim) tends to allow a typist to keep their fingers on the home row, which can be an advantage for a touch typist. Vim has a built-in tutorial for beginners called vimtutor, which is usually installed along with Vim, but is a separate executable and can be run with a shell command. The Vim Users' Manual details Vim's features and can be read from within Vim, or found online. Vim also has a built-in help facility (using the :help command) which allows users to query and navigate through commands and features. Registers Vim features various special memory entries called registers (not to be confused with hardware or processor registers). When cutting, deleting, copying, or pasting text, the user can choose to store the manipulated text in a register. There are 36 general-purpose registers associated with letters and numbers ([a-z0-9]) and a range of special ones that either contain special values (current filename, last command, etc.) or serve a special purpose. Modes Like vi, Vim supports multiple editing modes. Depending on the mode, typed characters are interpreted either as sequences of commands or are inserted as text. In Vim there are 14 editing modes, 7 basic modes and 7 variants: Normal mode – used for editor commands. This is generally the default mode and by default hitting the Esc key returns the editor to this mode. 
Insert mode – used for typing text in a way similar to most modern editors. In this mode, opened text in buffers can be modified with the text entered from the keyboard. Visual mode – used to select areas of text. Commands can be run on the selected area – moving, editing, filtering via built-in or external command, etc. Visual linewise, a subtype of visual mode which selects one or more whole lines. Visual blockwise, another subtype which selects a rectangular block of text across one or more lines. Select mode – similar to visual, but the commands are not interpreted; instead, highlighted text is directly replaced by input from the keyboard; similar to the selection mode used in editors on Microsoft Windows platforms Command-line or Cmdline mode – provides a single line input at the bottom of the Vim window. Commands (beginning with :) and some other keys for specific actions (including pattern search and the filter command) activate this mode. On completion of the command, Vim returns to the previous mode. Ex mode – accepts a sequence of commands. Terminal-Job mode – interacting with a job in a terminal window. Customization Vim is highly customizable and extensible, making it an attractive tool for users who demand a large amount of control and flexibility over their text editing environment. Text input is facilitated by a variety of features designed to increase keyboard efficiency. Users can execute complex commands with "key bindings," which can be customized and extended. The "recording" feature allows for the creation of macros to automate sequences of keystrokes and call internal or user-defined functions and mappings. Abbreviations, similar to macros and key mappings, facilitate the expansion of short strings of text into longer ones and can also be used to correct mistakes. Vim also features an "easy" mode for users looking for a simpler text editing solution. There are many plugins available that extend or add new functionality to Vim. These plugins are usually written in Vim's internal scripting language, vimscript (also known as VimL), but can be written in other languages as well. There are projects bundling together complex scripts and customizations and aimed at turning Vim into a tool for a specific task or adding a major flavour to its behaviour. Examples include Cream, which makes Vim behave like a click-and-type editor, or VimOutliner, which provides a comfortable outliner for users of Unix-like systems. Features and improvements over vi Vim has a vi compatibility mode, but when that mode is not used, Vim has many enhancements over vi. However, even in compatibility mode, Vim is not entirely compatible with vi as defined in the Single Unix Specification and POSIX (e.g., Vim does not support vi's open mode, only visual mode). Vim's developers state that it is "very much compatible with Vi". Some of Vim's enhancements include completion functions, comparison and merging of files (known as vimdiff), a comprehensive integrated help system, extended regular expressions, scripting languages (both native and through alternative scripting interpreters such as Perl, Python, Ruby, Tcl, etc.) 
including support for plugins, a graphical user interface (gvim), limited integrated development environment-like features, mouse interaction (both with and without the GUI), folding, editing of compressed or archived files in gzip, bzip2, zip, and tar format and files over network protocols such as SSH, FTP, and HTTP, session state preservation, spell checking, split (horizontal and vertical) and tabbed windows, Unicode and other multi-language support, syntax highlighting, trans-session command, search and cursor position histories, multiple level and branching undo/redo history which can persist across editing sessions, and visual mode. While running, Vim saves the user's changes in a swap file with the ".swp" extension. This file can be used to recover after a crash. If a user tries to open a file and a swap file already exists, Vim will warn the user, and if the user proceeds, Vim will use a swap file with the extension ".swo" (or, if there is already more than one swap file, ".swn", ".swm", etc.). The feature can be disabled. Vim script Vim script (also called Vimscript or VimL) is the scripting language built into Vim. Based on the ex editor language of the original vi editor, early versions of Vim added commands for control flow and function definitions. Since version 7, Vim script also supports more advanced data types such as lists and dictionaries and a simple form of object-oriented programming. Built-in functions such as map() and filter() allow a basic form of functional programming, and Vim script has supported lambdas since version 8.0. Vim script is mostly written in an imperative programming style. Vim macros can contain a sequence of normal-mode commands, but can also invoke ex commands or functions written in Vim script for more complex tasks. Almost all extensions (called plugins or more commonly scripts) of the core Vim functionality are written in Vim script, but plugins can also utilize other languages like Perl, Python, Lua, Ruby, Tcl, or Racket. These plugins can be installed manually, or through a plugin manager such as Vundle, Pathogen, or Vim-Plug. Vim script files are stored as plain text, similarly to other code, and the filename extension is usually .vim. One notable exception to that is Vim's config file, .vimrc. Examples
" This is the Hello World program in Vim script.
echo "Hello, world!"

" This is a simple while loop in Vim script.
let i = 1
while i < 5
  echo "count is" i
  let i += 1
endwhile
unlet i
Availability While vi was originally available only on Unix operating systems, Vim has been ported to many operating systems including AmigaOS (the initial target platform), Atari MiNT, BeOS, DOS, Windows starting from Windows NT 3.1, OS/2, OS/390, MorphOS, OpenVMS, QNX, RISC OS, Linux, BSD, and Classic Mac OS. Also, Vim is shipped with Apple macOS. Independent ports of Vim are available for Android and iOS. Neovim Neovim is a fork of Vim that strives to improve the extensibility and maintainability of Vim. Some features of the fork include built-in Language Server Protocol (LSP) support, support for asynchronous I/O, and support for Lua scripting using the LuaJIT interpreter. The project is free software and its source code is available on GitHub. Neovim has the same configuration syntax as Vim prior to vim9script; thus the same configuration file can be used with both editors, although there are minor differences in details of options. If the added features of Neovim are not used, Neovim is compatible with almost all of Vim's features. 
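As a rough sketch of how the options, mappings, and Vim script functions described above fit together, a short configuration fragment of the following kind could be placed in a .vimrc (and, because it avoids Neovim-specific features, would also be read by Neovim's init.vim). The option names are standard Vim options; the function and the key mapping are purely illustrative and are not part of either editor's defaults.
" Illustrative configuration fragment in Vim script.
" Show line numbers and enable syntax highlighting.
set number
syntax on
" Turn off the swap-file mechanism described above.
set noswapfile

" A user-defined function that removes trailing whitespace from every line,
" restoring the cursor and window view afterwards.
function! StripTrailingWhitespace()
  let l:view = winsaveview()
  %s/\s\+$//e
  call winrestview(l:view)
endfunction

" A custom key binding: in normal mode, <F5> calls the function above.
nnoremap <F5> :call StripTrailingWhitespace()<CR>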
The Neovim project was started in 2014, after a patch to Vim supporting multi-threading was rejected. Neovim ran a successful fundraising campaign in March 2014, supporting at least one full-time developer. Several frontends are under development which make use of Neovim's capabilities. With the 0.5 release of Neovim on 2 July 2021, it gained built-in support for the LSP, Tree-sitter, and more complete Lua support – including support for configuration scripts written in Lua instead of VimL. 
Technology
Office and data management
null
32496
https://en.wikipedia.org/wiki/Vacuum%20tube
Vacuum tube
A vacuum tube, electron tube, valve (British usage), or tube (North America) is a device that controls electric current flow in a high vacuum between electrodes to which an electric potential difference has been applied. The type known as a thermionic tube or thermionic valve utilizes thermionic emission of electrons from a hot cathode for fundamental electronic functions such as signal amplification and current rectification. Non-thermionic types such as a vacuum phototube, however, achieve electron emission through the photoelectric effect, and are used for such purposes as the detection of light intensities. In both types, the electrons are accelerated from the cathode to the anode by the electric field in the tube. The simplest vacuum tube, the diode (i.e. Fleming valve), was invented in 1904 by John Ambrose Fleming. It contains only a heated electron-emitting cathode and an anode. Electrons can flow in only one direction through the device, from the cathode to the anode. Adding one or more control grids within the tube allows the current between the cathode and anode to be controlled by the voltage on the grids. These devices became a key component of electronic circuits for the first half of the twentieth century. They were crucial to the development of radio, television, radar, sound recording and reproduction, long-distance telephone networks, and analog and early digital computers. Although some applications had used earlier technologies such as the spark gap transmitter for radio or mechanical computers for computing, it was the invention of the thermionic vacuum tube that made these technologies widespread and practical, and created the discipline of electronics. In the 1940s, the invention of semiconductor devices made it possible to produce solid-state devices, which are smaller, safer, cooler, and more efficient, reliable, durable, and economical than thermionic tubes. Beginning in the mid-1960s, thermionic tubes were being replaced by the transistor. However, the cathode-ray tube (CRT) remained the basis for television monitors and oscilloscopes until the early 21st century. Thermionic tubes are still employed in some applications, such as the magnetron used in microwave ovens, certain high-frequency amplifiers, and high-end audio amplifiers, which many audio enthusiasts prefer for their "warmer" tube sound, and amplifiers for electric musical instruments such as guitars (for desired effects, such as "overdriving" them to achieve a certain sound or tone). Not all electronic circuit valves or electron tubes are vacuum tubes. Gas-filled tubes are similar devices, but containing a gas, typically at low pressure, which exploit phenomena related to electric discharge in gases, usually without a heater. Classifications One classification of thermionic vacuum tubes is by the number of active electrodes. A device with two active elements is a diode, usually used for rectification. Devices with three elements are triodes used for amplification and switching. Additional electrodes create tetrodes, pentodes, and so forth, which have multiple additional functions made possible by the additional controllable electrodes. 
Other classifications are: by frequency range (audio, radio, VHF, UHF, microwave); by power rating (small-signal, audio power, high-power radio transmitting); by cathode/filament type (indirectly heated, directly heated) and warm-up time (including "bright-emitter" or "dull-emitter"); by characteristic curve design (e.g., sharp- versus remote-cutoff in some pentodes); by application (receiving, transmitting, amplifying or switching, rectification, mixing); specialized parameters (long life, very low microphonic sensitivity and low-noise audio amplification, rugged or military versions); specialized functions (light or radiation detectors, video imaging tubes); tubes used to display information ("magic eye" tubes, vacuum fluorescent displays, CRTs). Vacuum tubes may have other components and functions than those described above, and are described elsewhere. These include cathode-ray tubes, which create a beam of electrons for display purposes (such as the television picture tube, in electron microscopy, and in electron beam lithography); X-ray tubes; phototubes and photomultipliers (which rely on electron flow through a vacuum where electron emission from the cathode depends on energy from photons rather than thermionic emission). Description A vacuum tube consists of two or more electrodes in a vacuum inside an airtight envelope. Most tubes have glass envelopes with a glass-to-metal seal based on kovar sealable borosilicate glasses, although ceramic and metal envelopes (atop insulating bases) have been used. The electrodes are attached to leads which pass through the envelope via an airtight seal. Most vacuum tubes have a limited lifetime, due to the filament or heater burning out or other failure modes, so they are made as replaceable units; the electrode leads connect to pins on the tube's base which plug into a tube socket. Tubes were a frequent cause of failure in electronic equipment, and consumers were expected to be able to replace tubes themselves. In addition to the base terminals, some tubes had an electrode terminating at a top cap. The principal reason for doing this was to avoid leakage resistance through the tube base, particularly for the high impedance grid input. The bases were commonly made with phenolic insulation which performs poorly as an insulator in humid conditions. Other reasons for using a top cap include improving stability by reducing grid-to-anode capacitance, improved high-frequency performance, keeping a very high plate voltage away from lower voltages, and accommodating one more electrode than allowed by the base. There was even an occasional design that had two top cap connections. The earliest vacuum tubes evolved from incandescent light bulbs, containing a filament sealed in an evacuated glass envelope. When hot, the filament in a vacuum tube (a cathode) releases electrons into the vacuum, a process called thermionic emission. This can produce a controllable unidirectional current through the vacuum known as the Edison effect. A second electrode, the anode or plate, will attract those electrons if it is at a more positive voltage. The result is a net flow of electrons from the filament to plate. However, electrons cannot flow in the reverse direction because the plate is not heated and does not emit electrons. The filament has a dual function: it emits electrons when heated; and, together with the plate, it creates an electric field due to the potential difference between them. 
Such a tube with only two electrodes is termed a diode, and is used for rectification. Since current can only pass in one direction, such a diode (or rectifier) will convert alternating current (AC) to pulsating DC. Diodes can therefore be used in a DC power supply, as a demodulator of amplitude modulated (AM) radio signals and for similar functions. Early tubes used the filament as the cathode; this is called a "directly heated" tube. Most modern tubes are "indirectly heated" by a "heater" element inside a metal tube that is the cathode. The heater is electrically isolated from the surrounding cathode and simply serves to heat the cathode sufficiently for thermionic emission of electrons. The electrical isolation allows all the tubes' heaters to be supplied from a common circuit (which can be AC without inducing hum) while allowing the cathodes in different tubes to operate at different voltages. H. J. Round invented the indirectly heated tube around 1913. The filaments require constant and often considerable power, even when amplifying signals at the microwatt level. Power is also dissipated when the electrons from the cathode slam into the anode (plate) and heat it; this can occur even in an idle amplifier due to the quiescent current necessary to ensure linearity and low distortion. In a power amplifier, this heating can be considerable and can destroy the tube if driven beyond its safe limits. Since the tube contains a vacuum, the anodes in most small and medium power tubes are cooled by radiation through the glass envelope. In some special high power applications, the anode forms part of the vacuum envelope to conduct heat to an external heat sink, usually cooled by a blower, or water-jacket. Klystrons and magnetrons often operate their anodes (called collectors in klystrons) at ground potential to facilitate cooling, particularly with water, without high-voltage insulation. These tubes instead operate with high negative voltages on the filament and cathode. Except for diodes, additional electrodes are positioned between the cathode and the plate (anode). These electrodes are referred to as grids as they are not solid electrodes but sparse elements through which electrons can pass on their way to the plate. The vacuum tube is then known as a triode, tetrode, pentode, etc., depending on the number of grids. A triode has three electrodes: the anode, cathode, and one grid, and so on. The first grid, known as the control grid, (and sometimes other grids) transforms the diode into a voltage-controlled device: the voltage applied to the control grid affects the current between the cathode and the plate. When held negative with respect to the cathode, the control grid creates an electric field that repels electrons emitted by the cathode, thus reducing or even stopping the current between cathode and anode. As long as the control grid is negative relative to the cathode, essentially no current flows into it, yet a change of several volts on the control grid is sufficient to make a large difference in the plate current, possibly changing the output by hundreds of volts (depending on the circuit). The solid-state device which operates most like the pentode tube is the junction field-effect transistor (JFET), although vacuum tubes typically operate at over a hundred volts, unlike most semiconductors in most applications. History and development The 19th century saw increasing research with evacuated tubes, such as the Geissler and Crookes tubes. 
The many scientists and inventors who experimented with such tubes include Thomas Edison, Eugen Goldstein, Nikola Tesla, and Johann Wilhelm Hittorf. With the exception of early light bulbs, such tubes were only used in scientific research or as novelties. The groundwork laid by these scientists and inventors, however, was critical to the development of subsequent vacuum tube technology. Although thermionic emission was originally reported in 1873 by Frederick Guthrie, it was Thomas Edison's apparently independent discovery of the phenomenon in 1883, referred to as the Edison effect, that became well known. Although Edison was aware of the unidirectional property of current flow between the filament and the anode, his interest (and patent) concentrated on the sensitivity of the anode current to the current through the filament (and thus filament temperature). It was years later that John Ambrose Fleming applied the rectifying property of the Edison effect to detection of radio signals, as an improvement over the magnetic detector. Amplification by vacuum tube became practical only with Lee de Forest's 1907 invention of the three-terminal "audion" tube, a crude form of what was to become the triode. Being essentially the first electronic amplifier, such tubes were instrumental in long-distance telephony (such as the first coast-to-coast telephone line in the US) and public address systems, and introduced a far superior and versatile technology for use in radio transmitters and receivers. Diodes At the end of the 19th century, radio or wireless technology was in an early stage of development and the Marconi Company was engaged in development and construction of radio communication systems. Guglielmo Marconi appointed English physicist John Ambrose Fleming as scientific advisor in 1899. Fleming had been engaged as scientific advisor to Edison Telephone (1879), as scientific advisor at Edison Electric Light (1882), and was also technical consultant to Edison-Swan. One of Marconi's needs was for improvement of the detector, a device that extracts information from a modulated radio frequency. Marconi had developed a magnetic detector, which was less responsive to natural sources of radio frequency interference than the coherer, but the magnetic detector only provided an audio frequency signal to a telephone receiver. A reliable detector that could drive a printing instrument was needed. As a result of experiments conducted on Edison effect bulbs, Fleming developed a vacuum tube that he termed the oscillation valve because it passed current in only one direction. The cathode was a carbon lamp filament, heated by passing current through it, that produced thermionic emission of electrons. Electrons that had been emitted from the cathode were attracted to the plate (anode) when the plate was at a positive voltage with respect to the cathode. Electrons could not pass in the reverse direction because the plate was not heated and not capable of thermionic emission of electrons. Fleming filed a patent for these tubes, assigned to the Marconi company, in the UK in November 1904 and this patent was issued in September 1905. Later known as the Fleming valve, the oscillation valve was developed for the purpose of rectifying radio frequency current as the detector component of radio receiver circuits. 
While offering no advantage over the electrical sensitivity of crystal detectors, the Fleming valve offered advantage, particularly in shipboard use, over the difficulty of adjustment of the crystal detector and the susceptibility of the crystal detector to being dislodged from adjustment by vibration or bumping. Triodes In the 19th century, telegraph and telephone engineers had recognized the need to extend the distance that signals could be transmitted. In 1906, Robert von Lieben filed for a patent for a cathode-ray tube which used an external magnetic deflection coil and was intended for use as an amplifier in telephony equipment. This von Lieben magnetic deflection tube was not a successful amplifier, however, because of the power used by the deflection coil. Von Lieben would later make refinements to triode vacuum tubes. Lee de Forest is credited with inventing the triode tube in 1907 while experimenting to improve his original (diode) Audion. By placing an additional electrode between the filament (cathode) and plate (anode), he discovered the ability of the resulting device to amplify signals. As the voltage applied to the control grid (or simply "grid") was lowered from the cathode's voltage to somewhat more negative voltages, the amount of current from the filament to the plate would be reduced. The negative electrostatic field created by the grid in the vicinity of the cathode would inhibit the passage of emitted electrons and reduce the current to the plate. With the voltage of the grid less than that of the cathode, no direct current could pass from the cathode to the grid. Thus a change of voltage applied to the grid, requiring very little power input to the grid, could make a change in the plate current and could lead to a much larger voltage change at the plate; the result was voltage and power amplification. In 1908, de Forest was granted a patent () for such a three-electrode version of his original Audion for use as an electronic amplifier in radio communications. This eventually became known as the triode. De Forest's original device was made with conventional vacuum technology. The vacuum was not a "hard vacuum" but rather left a very small amount of residual gas. The physics behind the device's operation was also not settled. The residual gas would cause a blue glow (visible ionization) when the plate voltage was high (above about 60 volts). In 1912, de Forest and John Stone Stone brought the Audion for demonstration to AT&T's engineering department. Dr. Harold D. Arnold of AT&T recognized that the blue glow was caused by ionized gas. Arnold recommended that AT&T purchase the patent, and AT&T followed his recommendation. Arnold developed high-vacuum tubes which were tested in the summer of 1913 on AT&T's long-distance network. The high-vacuum tubes could operate at high plate voltages without a blue glow. Finnish inventor Eric Tigerstedt significantly improved on the original triode design in 1914, while working on his sound-on-film process in Berlin, Germany. Tigerstedt's innovation was to make the electrodes concentric cylinders with the cathode at the centre, thus greatly increasing the collection of emitted electrons at the anode. Irving Langmuir at the General Electric research laboratory (Schenectady, New York) had improved Wolfgang Gaede's high-vacuum diffusion pump and used it to settle the question of thermionic emission and conduction in a vacuum. Consequently, General Electric started producing hard vacuum triodes (which were branded Pliotrons) in 1915. 
Langmuir patented the hard vacuum triode, but de Forest and AT&T successfully asserted priority and invalidated the patent. Pliotrons were closely followed by the French type 'TM' and later the English type 'R' which were in widespread use by the allied military by 1916. Historically, vacuum levels in production vacuum tubes typically ranged from 10 μPa down to 10 nPa. The triode and its derivatives (tetrodes and pentodes) are transconductance devices, in which the controlling signal applied to the grid is a voltage, and the resulting amplified signal appearing at the anode is a current. Compare this to the behavior of the bipolar junction transistor, in which the controlling signal is a current and the output is also a current. For vacuum tubes, transconductance or mutual conductance (gm) is defined as the change in the plate (anode) to cathode current divided by the corresponding change in the grid to cathode voltage, with a constant plate (anode) to cathode voltage. Typical values of gm for a small-signal vacuum tube are 1 to 10 millisiemens. It is one of the three 'constants' of a vacuum tube, the other two being its gain μ and plate resistance rp or ra. The Van der Bijl equation defines their relationship as μ = gm × rp. The non-linear operating characteristic of the triode caused early tube audio amplifiers to exhibit harmonic distortion at low volumes. Plotting plate current as a function of applied grid voltage, it was seen that there was a range of grid voltages for which the transfer characteristics were approximately linear. To use this range, a negative bias voltage had to be applied to the grid to position the DC operating point in the linear region. This was called the idle condition, and the plate current at this point the "idle current". The controlling voltage was superimposed onto the bias voltage, resulting in a linear variation of plate current in response to positive and negative variation of the input voltage around that point. This concept is called grid bias. Many early radio sets had a third battery called the "C battery" (unrelated to the present-day C cell, for which the letter denotes its size and shape). The C battery's positive terminal was connected to the cathode of the tubes (or "ground" in most circuits), and its negative terminal supplied this bias voltage to the grids of the tubes. Later circuits, after tubes were made with heaters isolated from their cathodes, used cathode biasing, avoiding the need for a separate negative power supply. For cathode biasing, a relatively low-value resistor is connected between the cathode and ground. This makes the cathode positive with respect to the grid, which is at ground potential for DC. However, C batteries continued to be included in some equipment even when the "A" and "B" batteries had been replaced by power from the AC mains. That was possible because there was essentially no current draw on these batteries; they could thus last for many years (often longer than all the tubes) without requiring replacement. When triodes were first used in radio transmitters and receivers, it was found that tuned amplification stages had a tendency to oscillate unless their gain was very limited. This was due to the parasitic capacitance between the plate (the amplifier's output) and the control grid (the amplifier's input), known as the Miller capacitance. Eventually the technique of neutralization was developed whereby the RF transformer connected to the plate (anode) would include an additional winding in the opposite phase. 
This winding would be connected back to the grid through a small capacitor, and when properly adjusted would cancel the Miller capacitance. This technique was employed and led to the success of the Neutrodyne radio during the 1920s. However, neutralization required careful adjustment and proved unsatisfactory when used over a wide range of frequencies. Tetrodes and pentodes To combat the stability problems of the triode as a radio frequency amplifier due to grid-to-plate capacitance, the physicist Walter H. Schottky invented the tetrode or screen grid tube in 1919. He showed that the addition of an electrostatic shield between the control grid and the plate could solve the problem. This design was refined by Hull and Williams. The added grid became known as the screen grid or shield grid. The screen grid is operated at a positive voltage significantly less than the plate voltage and it is bypassed to ground with a capacitor of low impedance at the frequencies to be amplified. This arrangement substantially decouples the plate and the control grid, eliminating the need for neutralizing circuitry at medium wave broadcast frequencies. The screen grid also largely reduces the influence of the plate voltage on the space charge near the cathode, permitting the tetrode to produce greater voltage gain than the triode in amplifier circuits. While the amplification factors of typical triodes commonly range from below ten to around 100, tetrode amplification factors of 500 are common. Consequently, higher voltage gains from a single tube amplification stage became possible, reducing the number of tubes required. Screen grid tubes were marketed by late 1927. However, the useful region of operation of the screen grid tube as an amplifier was limited to plate voltages greater than the screen grid voltage, due to secondary emission from the plate. In any tube, electrons strike the plate with sufficient energy to cause the emission of electrons from its surface. In a triode this secondary emission of electrons is not important since they are simply re-captured by the plate. But in a tetrode they can be captured by the screen grid since it is also at a positive voltage, robbing them from the plate current and reducing the amplification of the tube. Since secondary electrons can outnumber the primary electrons over a certain range of plate voltages, the plate current can decrease with increasing plate voltage. This is the dynatron region or tetrode kink and is an example of negative resistance which can itself cause instability. Another undesirable consequence of secondary emission is that screen current is increased, which may cause the screen to exceed its power rating. The otherwise undesirable negative resistance region of the plate characteristic was exploited with the dynatron oscillator circuit to produce a simple oscillator only requiring connection of the plate to a resonant LC circuit to oscillate. The dynatron oscillator operated on the same principle of negative resistance as the tunnel diode oscillator many years later. The dynatron region of the screen grid tube was eliminated by adding a grid between the screen grid and the plate to create the pentode. The suppressor grid of the pentode was usually connected to the cathode and its negative voltage relative to the anode repelled secondary electrons so that they would be collected by the anode instead of the screen grid. The term pentode means the tube has five electrodes. The pentode was invented in 1926 by Bernard D. H. 
Tellegen and became generally favored over the simple tetrode. Pentodes are made in two classes: those with the suppressor grid wired internally to the cathode (e.g. EL84/6BQ5) and those with the suppressor grid wired to a separate pin for user access (e.g. 803, 837). An alternative solution for power applications is the beam tetrode or beam power tube, discussed below. Multifunction and multisection tubes Superheterodyne receivers require a local oscillator and mixer, combined in the function of a single pentagrid converter tube. Various alternatives such as using a combination of a triode with a hexode and even an octode have been used for this purpose. The additional grids include control grids (at a low potential) and screen grids (at a high voltage). Many designs use such a screen grid as an additional anode to provide feedback for the oscillator function, whose current adds to that of the incoming radio frequency signal. The pentagrid converter thus became widely used in AM receivers, including the miniature tube version of the "All American Five". Octodes, such as the 7A8, were rarely used in the United States, but much more common in Europe, particularly in battery operated radios where the lower power consumption was an advantage. To further reduce the cost and complexity of radio equipment, two separate structures (triode and pentode for instance) can be combined in the bulb of a single multisection tube. An early example is the Loewe 3NF. This 1920s device has three triodes in a single glass envelope together with all the fixed capacitors and resistors required to make a complete radio receiver. As the Loewe set had only one tube socket, it was able to substantially undercut the competition, since, in Germany, state tax was levied by the number of sockets. However, reliability was compromised, and production costs for the tube were much greater. In a sense, these were akin to integrated circuits. In the United States, Cleartron briefly produced the "Multivalve" triple triode for use in the Emerson Baby Grand receiver. This Emerson set also has a single tube socket, but because it uses a four-pin base, the additional element connections are made on a "mezzanine" platform at the top of the tube base. By 1940 multisection tubes had become commonplace. There were constraints, however, due to patents and other licensing considerations (see British Valve Association). Constraints due to the number of external pins (leads) often forced the functions to share some of those external connections such as their cathode connections (in addition to the heater connection). The RCA Type 55 is a double diode triode used as a detector, automatic gain control rectifier and audio preamplifier in early AC powered radios. These sets often include the 53 Dual Triode Audio Output. Another early type of multi-section tube, the 6SN7, is a "dual triode" which performs the functions of two triode tubes while taking up half as much space and costing less. The 12AX7 is a dual "high mu" (high voltage gain) triode in a miniature enclosure, and became widely used in audio signal amplifiers, instruments, and guitar amplifiers. The introduction of the miniature tube base (see below) which can have 9 pins, more than previously available, allowed other multi-section tubes to be introduced, such as the 6GH8/ECF82 triode-pentode, quite popular in television receivers. The desire to include even more functions in one envelope resulted in the General Electric Compactron which has 12 pins. 
A typical example, the 6AG11, contains two triodes and two diodes. Some otherwise conventional tubes do not fall into standard categories; the 6AR8, 6JH8 and 6ME8 have several common grids, followed by a pair of beam deflection electrodes which deflected the current towards either of two anodes. They were sometimes known as the 'sheet beam' tubes and used in some color TV sets for color demodulation. The similar 7360 was popular as a balanced SSB (de)modulator. Beam power tubes A beam tetrode (or "beam power tube") forms the electron stream from the cathode into multiple partially collimated beams to produce a low potential space charge region between the anode and screen grid to return anode secondary emission electrons to the anode when the anode potential is less than that of the screen grid. Formation of beams also reduces screen grid current. In some cylindrically symmetrical beam power tubes, the cathode is formed of narrow strips of emitting material that are aligned with the apertures of the control grid, reducing control grid current. This design helps to overcome some of the practical barriers to designing high-power, high-efficiency power tubes. Manufacturer's data sheets often use the terms beam pentode or beam power pentode instead of beam power tube, and use a pentode graphic symbol instead of a graphic symbol showing beam forming plates. Beam power tubes offer the advantages of a longer load line, less screen current, higher transconductance and lower third harmonic distortion than comparable power pentodes. Beam power tubes can be connected as triodes for improved audio tonal quality but in triode mode deliver significantly reduced power output. Gas-filled tubes Gas-filled tubes such as discharge tubes and cold cathode tubes are not hard vacuum tubes, though are always filled with gas at less than sea-level atmospheric pressure. Types such as the voltage-regulator tube and thyratron resemble hard vacuum tubes and fit in sockets designed for vacuum tubes. Their distinctive orange, red, or purple glow during operation indicates the presence of gas; electrons flowing in a vacuum do not produce light within that region. These types may still be referred to as "electron tubes" as they do perform electronic functions. High-power rectifiers use mercury vapor to achieve a lower forward voltage drop than high-vacuum tubes. Miniature tubes Early tubes used a metal or glass envelope atop an insulating bakelite base. In 1938 a technique was developed to use an all-glass construction with the pins fused in the glass base of the envelope. This allowed the design of a much smaller tube profile, known as the miniature tube, having seven or nine pins. Making tubes smaller reduced the voltage where they could safely operate, and also reduced the power dissipation of the filament. Miniature tubes became predominant in consumer applications such as radio receivers and hi-fi amplifiers. However, the larger older styles continued to be used especially as higher-power rectifiers, in higher-power audio output stages and as transmitting tubes. Sub-miniature tubes Sub-miniature tubes with a size roughly that of half a cigarette were used in consumer applications as hearing-aid amplifiers. These tubes did not have pins plugging into a socket but were soldered in place. The "acorn tube" (named due to its shape) was also very small, as was the metal-cased RCA nuvistor from 1959, about the size of a thimble. 
The nuvistor was developed to compete with the early transistors and operated at higher frequencies than those early transistors could. The small size supported especially high-frequency operation; nuvistors were used in aircraft radio transceivers, UHF television tuners, and some HiFi FM radio tuners (Sansui 500A) until replaced by high-frequency capable transistors. Improvements in construction and performance The earliest vacuum tubes strongly resembled incandescent light bulbs and were made by lamp manufacturers, who had the equipment needed to manufacture glass envelopes and the vacuum pumps required to evacuate the enclosures. De Forest used Heinrich Geissler's mercury displacement pump, which left behind a partial vacuum. The development of the diffusion pump in 1915 and its improvement by Irving Langmuir led to the development of high-vacuum tubes. After World War I, specialized manufacturers using more economical construction methods were set up to fill the growing demand for broadcast receivers. Bare tungsten filaments operated at a temperature of around 2200 °C. The development of oxide-coated filaments in the mid-1920s reduced filament operating temperature to a dull red heat (around 700 °C), which in turn reduced thermal distortion of the tube structure and allowed closer spacing of tube elements. This in turn improved tube gain, since the gain of a triode is inversely proportional to the spacing between grid and cathode. Bare tungsten filaments remain in use in small transmitting tubes but are brittle and tend to fracture if handled roughly, for example in the postal services. These tubes are best suited to stationary equipment where impact and vibration are not present. Indirectly heated cathodes The desire to power electronic equipment using AC mains power faced a difficulty with respect to the powering of the tubes' filaments, as these were also the cathode of each tube. Powering the filaments directly from a power transformer introduced mains-frequency (50 or 60 Hz) hum into audio stages. The invention of the "equipotential cathode" reduced this problem, with the filaments being powered by a balanced AC power transformer winding having a grounded center tap. A superior solution, and one which allowed each cathode to "float" at a different voltage, was that of the indirectly heated cathode: a cylinder of oxide-coated nickel acted as an electron-emitting cathode and was electrically isolated from the filament inside it. Indirectly heated cathodes enable the cathode circuit to be separated from the heater circuit. The filament, no longer electrically connected to the tube's electrodes, became simply known as a "heater", and could just as well be powered by AC without any introduction of hum. In the 1930s, indirectly heated cathode tubes became widespread in equipment using AC power. Directly heated cathode tubes continued to be widely used in battery-powered equipment as their filaments required considerably less power than the heaters required with indirectly heated cathodes. Tubes designed for high-gain audio applications may have twisted heater wires to cancel out stray electric fields, fields that could induce objectionable hum into the program material. Heaters may be energized with either alternating current (AC) or direct current (DC). DC is often used where low hum is required. Use in electronic computers Vacuum tubes used as switches made electronic computing possible for the first time, but the cost and relatively short mean time to failure of tubes were limiting factors.
"The common wisdom was that valveswhich, like light bulbs, contained a hot glowing filamentcould never be used satisfactorily in large numbers, for they were unreliable, and in a large installation too many would fail in too short a time". Tommy Flowers, who later designed Colossus, "discovered that, so long as valves were switched on and left on, they could operate reliably for very long periods, especially if their 'heaters' were run on a reduced current". In 1934 Flowers built a successful experimental installation using over 3,000 tubes in small independent modules; when a tube failed, it was possible to switch off one module and keep the others going, thereby reducing the risk of another tube failure being caused; this installation was accepted by the Post Office (who operated telephone exchanges). Flowers was also a pioneer of using tubes as very fast (compared to electromechanical devices) electronic switches. Later work confirmed that tube unreliability was not as serious an issue as generally believed; the 1946 ENIAC, with over 17,000 tubes, had a tube failure (which took 15 minutes to locate) on average every two days. The quality of the tubes was a factor, and the diversion of skilled people during the Second World War lowered the general quality of tubes. During the war Colossus was instrumental in breaking German codes. After the war, development continued with tube-based computers including, military computers ENIAC and Whirlwind, the Ferranti Mark 1 (one of the first commercially available electronic computers), and UNIVAC I, also available commercially. Advances using subminiature tubes included the Jaincomp series of machines produced by the Jacobs Instrument Company of Bethesda, Maryland. Models such as its Jaincomp-B employed just 300 such tubes in a desktop-sized unit that offered performance to rival many of the then room-sized machines. Colossus Colossus I and its successor Colossus II (Mk2) were designed by Tommy Flowers and built by the General Post Office for Bletchley Park (BP) during World War II to substantially speed up the task of breaking the German high level Lorenz encryption. Colossus replaced an earlier machine based on relay and switch logic (the Heath Robinson). Colossus was able to break in a matter of hours messages that had previously taken several weeks; it was also much more reliable. Colossus was the first use of vacuum tubes working in concert on such a large scale for a single machine. Tommy Flowers (who conceived Colossus) wrote that most radio equipment was "carted round, dumped around, switched on and off and generally mishandled. But I'd introduced valves into telephone equipment in large numbers before the war and I knew that if you never moved them and never switched them on and off they would go on forever". Colossus was "that reliable, extremely reliable". On its first day at BP a problem with a known answer was set. To the amazement of BP (Station X), after running for four hours with each run taking half an hour the answer was the same every time (the Robinson did not always give the same answer). Colossus I used about 1600 valves, and Colossus II about 2400 valves (some sources say 1500 (Mk I) and 2500 (Mk II); the Robinson used about a hundred valves; some sources say fewer). Whirlwind and "special-quality" tubes To meet the reliability requirements of the 1951 US digital computer Whirlwind, "special-quality" tubes with extended life, and a long-lasting cathode in particular, were produced. 
The problem of short lifetime was traced largely to evaporation of silicon, used in the tungsten alloy to make the heater wire easier to draw. The silicon forms barium orthosilicate at the interface between the nickel sleeve and the cathode's barium oxide coating. This "cathode interface" is a high-resistance layer (with some parallel capacitance) which greatly reduces the cathode current when the tube is switched into conduction mode. Elimination of silicon from the heater wire alloy (and more frequent replacement of the wire drawing dies) allowed the production of tubes that were reliable enough for the Whirlwind project. High-purity nickel tubing and cathode coatings free of materials such as silicates and aluminum that can reduce emissivity also contribute to long cathode life. The first such "computer tube" was Sylvania's 7AK7 pentode of 1948 (these replaced the 7AD7, which was supposed to be better quality than the standard 6AG7 but proved too unreliable). Computers were the first tube devices to run tubes at cutoff (enough negative grid voltage to make them cease conduction) for quite extended periods of time. Running in cutoff with the heater on accelerates cathode poisoning, and the output current of the tube will be greatly reduced when it is switched into conduction mode. The 7AK7 tubes improved the cathode poisoning problem, but that alone was insufficient to achieve the required reliability. Further measures included switching off the heater voltage when the tubes were not required to conduct for extended periods, turning the heater voltage on and off with a slow ramp to avoid thermal shock on the heater element, and stress testing the tubes during offline maintenance periods to bring on early failure of weak units. Another commonly used computer tube was the 5965, also labeled as E180CC. This tube, according to a memorandum from MIT for Project Whirlwind, was developed for IBM by General Electric, primarily for use in the IBM 701 calculators, and was designated as a general-purpose triode tube. The tubes developed for Whirlwind were later used in the giant SAGE air-defense computer system. By the late 1950s, it was routine for special-quality small-signal tubes to last for hundreds of thousands of hours if operated conservatively. This increased reliability also made mid-cable amplifiers in submarine cables possible. Heat generation and cooling A considerable amount of heat is produced when tubes operate, from both the filament (heater) and the stream of electrons bombarding the plate. In power amplifiers, this electron bombardment is a greater source of heat than cathode heating. A few types of tube permit operation with the anodes at a dull red heat; in other types, red heat indicates severe overload. The requirements for heat removal can significantly change the appearance of high-power vacuum tubes. High-power audio amplifiers and rectifiers required larger envelopes to dissipate heat. Transmitting tubes could be much larger still. Heat escapes the device by black-body radiation from the anode (plate) as infrared radiation, and by convection of air over the tube envelope. Convection is not possible inside most tubes since the anode is surrounded by vacuum. Tubes which generate relatively little heat, such as the 1.4-volt filament directly heated tubes designed for use in battery-powered equipment, often have shiny metal anodes. 1T4, 1R5 and 1A7 are examples.
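To give a rough sense of the radiative cooling described above, the sketch below estimates the power a plate can shed by black-body radiation using the Stefan–Boltzmann law. The emissivity, plate area and temperatures are illustrative assumptions, not figures for any particular tube.

```python
# Rough estimate of radiative heat loss from a tube anode (assumed example values).
SIGMA = 5.670e-8  # Stefan–Boltzmann constant, W/(m^2 K^4)

def radiated_power(area_m2, emissivity, t_plate_k, t_ambient_k):
    """Net black-body power radiated by the plate toward cooler surroundings."""
    return emissivity * SIGMA * area_m2 * (t_plate_k**4 - t_ambient_k**4)

# Assumed example: a 6 cm^2 dark-coated plate at 500 K inside a 300 K envelope.
print(radiated_power(6e-4, 0.9, 500.0, 300.0))  # ≈ 1.7 W
```

A back-of-the-envelope figure like this suggests why small receiving tubes can rely on radiation and envelope convection alone, while kilowatt-class anodes need external forced-air or water cooling.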
Gas-filled tubes such as thyratrons may also use a shiny metal anode since the gas present inside the tube allows for heat convection from the anode to the glass enclosure. The anode is often treated to make its surface emit more infrared energy. High-power amplifier tubes are designed with external anodes that can be cooled by convection, forced air or circulating water. The water-cooled 80 kg, 1.25 MW 8974 is among the largest commercial tubes available today. In a water-cooled tube, the anode voltage appears directly on the cooling water surface, thus requiring the water to be an electrical insulator to prevent high voltage leakage through the cooling water to the radiator system. Water as usually supplied has ions that conduct electricity; deionized water, a good insulator, is required. Such systems usually have a built-in water-conductance monitor which will shut down the high-tension supply if the conductance becomes too high. The screen grid may also generate considerable heat. Limits to screen grid dissipation, in addition to plate dissipation, are listed for power devices. If these are exceeded then tube failure is likely. Tube packages Most modern tubes have glass envelopes, but metal, fused quartz (silica) and ceramic have also been used. A first version of the 6L6 used a metal envelope sealed with glass beads, while a glass disk fused to the metal was used in later versions. Metal and ceramic are used almost exclusively for power tubes above 2 kW dissipation. The nuvistor was a modern receiving tube using a very small metal and ceramic package. The internal elements of tubes have always been connected to external circuitry via pins at their base which plug into a socket. Subminiature tubes were produced using wire leads rather than sockets, however, these were restricted to rather specialized applications. In addition to the connections at the base of the tube, many early triodes connected the grid using a metal cap at the top of the tube; this reduces stray capacitance between the grid and the plate leads. Tube caps were also used for the plate (anode) connection, particularly in transmitting tubes and tubes using a very high plate voltage. High-power tubes such as transmitting tubes have packages designed more to enhance heat transfer. In some tubes, the metal envelope is also the anode. The 4CX1000A is an external anode tube of this sort. Air is blown through an array of fins attached to the anode, thus cooling it. Power tubes using this cooling scheme are available up to 150 kW dissipation. Above that level, water or water-vapor cooling are used. The highest-power tube currently available is the Eimac , a forced water-cooled power tetrode capable of dissipating 2.5 megawatts. By comparison, the largest power transistor can only dissipate about 1 kilowatt. Names The generic name "[thermionic] valve" used in the UK derives from the unidirectional current flow allowed by the earliest device, the thermionic diode emitting electrons from a heated filament, by analogy with a non-return valve in a water pipe. The US names "vacuum tube", "electron tube", and "thermionic tube" all simply describe a tubular envelope which has been evacuated ("vacuum"), has a heater and controls electron flow. In many cases, manufacturers and the military gave tubes designations that said nothing about their purpose (e.g., 1614). In the early days some manufacturers used proprietary names which might convey some information, but only about their products; the KT66 and KT88 were "kinkless tetrodes". 
Later, consumer tubes were given names that conveyed some information, with the same name often used generically by several manufacturers. In the US, Radio Electronics Television Manufacturers' Association (RETMA) designations comprise a number, followed by one or two letters, and a number. The first number is the (rounded) heater voltage; the letters designate a particular tube but say nothing about its structure; and the final number is the total number of electrodes (without distinguishing between, say, a tube with many electrodes and a tube with two sets of electrodes in a single envelope, such as a double triode). For example, the 12AX7 is a double triode (two sets of three electrodes plus heater) with a 12.6V heater (which, as it happens, can also be connected to run from 6.3V). The "AX" designates this tube's characteristics. Similar, but not identical, tubes are the 12AD7, 12AE7...12AT7, 12AU7, 12AV7, 12AW7 (rare), 12AY7, and the 12AZ7. A system widely used in Europe, known as the Mullard–Philips tube designation and also extended to transistors, uses a letter, followed by one or more further letters, and a number. The type designator specifies the heater voltage or current (one letter), the functions of all sections of the tube (one letter per section), the socket type (first digit), and the particular tube (remaining digits). For example, the ECC83 (equivalent to the 12AX7) is a 6.3V (E) double triode (CC) with a miniature base (8). In this system, special-quality tubes (e.g., for long-life computer use) are indicated by moving the number to immediately after the first letter: the E83CC is a special-quality equivalent of the ECC83, and the E55L is a power pentode with no consumer equivalent. Special-purpose tubes Some special-purpose tubes are constructed with particular gases in the envelope. For instance, voltage-regulator tubes contain various inert gases such as argon, helium or neon, which will ionize at predictable voltages. The thyratron is a special-purpose tube filled with low-pressure gas or mercury vapor. Like vacuum tubes, it contains a hot cathode and an anode, but also a control electrode which behaves somewhat like the grid of a triode. When the control electrode starts conduction, the gas ionizes, after which the control electrode can no longer stop the current; the tube "latches" into conduction. Removing anode (plate) voltage lets the gas de-ionize, restoring its non-conductive state. Some thyratrons can carry large currents for their physical size. One example is the miniature type 2D21, often seen in 1950s jukeboxes as a control switch for relays. A cold-cathode version of the thyratron, which uses a pool of mercury for its cathode, is called an ignitron; some can switch thousands of amperes. Thyratrons containing hydrogen have a very consistent time delay between their turn-on pulse and full conduction; they behave much like modern silicon-controlled rectifiers, also called thyristors due to their functional similarity to thyratrons. Hydrogen thyratrons have long been used in radar transmitters. A specialized tube is the krytron, which is used for rapid high-voltage switching. Krytrons are used to initiate the detonations that set off a nuclear weapon; krytrons are heavily controlled at an international level. X-ray tubes are used in medical imaging among other uses. X-ray tubes used for continuous-duty operation in fluoroscopy and CT imaging equipment may use a focused cathode and a rotating anode to dissipate the large amounts of heat thereby generated.
These are housed in an oil-filled aluminum housing to provide cooling. The photomultiplier tube is an extremely sensitive detector of light, which uses the photoelectric effect and secondary emission, rather than thermionic emission, to generate and amplify electrical signals. Nuclear medicine imaging equipment and liquid scintillation counters use photomultiplier tube arrays to detect low-intensity scintillation due to ionizing radiation. The ignitron tube was used in resistance welding equipment in the early 1970s. The ignitron had a cathode, an anode and an igniter. The tube base was filled with mercury and the tube was used as a very high current switch. A large potential was placed between the anode and cathode of the tube, but the tube was only permitted to conduct when the igniter in contact with the mercury carried enough current to vaporize the mercury and complete the circuit. Because this was used in resistance welding, there were two ignitrons for the two half-cycles of the AC supply. Because of the mercury at the bottom of the tube, they were extremely difficult to ship. These tubes were eventually replaced by SCRs (Silicon Controlled Rectifiers). Powering the tube Batteries Batteries provided the voltages required by tubes in early radio sets. Three different voltages were generally required, using three different batteries designated as the A, B, and C battery. The "A" battery or LT (low-tension) battery provided the filament voltage. Tube heaters were designed for single, double or triple-cell lead-acid batteries, giving nominal heater voltages of 2 V, 4 V or 6 V. In portable radios, dry batteries were sometimes used with 1.5 or 1 V heaters. Reducing filament consumption improved the life span of batteries. By 1955, towards the end of the tube era, tubes using only 50 mA and down to as little as 10 mA for the heaters had been developed. The high voltage applied to the anode (plate) was provided by the "B" battery or the HT (high-tension) supply or battery. These were generally of dry cell construction and typically came in 22.5-, 45-, 67.5-, 90-, 120- or 135-volt versions. After the use of B-batteries was phased out and rectified line power was employed to produce the high voltage needed by tubes' plates, the term "B+" persisted in the US when referring to the high voltage source. Most of the rest of the English-speaking world refers to this supply as just HT (high tension). Early sets used a grid bias battery or "C" battery which was connected to provide a negative voltage. Since no current flows through a tube's grid connection, these batteries had no current drain and lasted the longest, usually limited by their own shelf life. The supply from the grid bias battery was rarely, if ever, disconnected when the radio was otherwise switched off. Even after AC power supplies became commonplace, some radio sets continued to be built with C batteries, as they would almost never need replacing. However, more modern circuits were designed using cathode biasing, eliminating the need for a third power supply voltage; this became practical with tubes using indirect heating of the cathode, along with the development of resistor/capacitor coupling which replaced earlier interstage transformers. The "C battery" for bias is a designation having no relation to the "C cell" battery size. AC power Battery replacement was a major operating cost for early radio receiver users.
The development of the battery eliminator, and, in 1925, batteryless receivers operated by household power, reduced operating costs and contributed to the growing popularity of radio. A power supply using a transformer with several windings, one or more rectifiers (which may themselves be vacuum tubes), and large filter capacitors provided the required direct current voltages from the alternating current source. As a cost reduction measure, especially in high-volume consumer receivers, all the tube heaters could be connected in series across the AC supply using heaters requiring the same current and with a similar warm-up time. In one such design, a tap on the tube heater string supplied the 6 volts needed for the dial light. By deriving the high voltage from a half-wave rectifier directly connected to the AC mains, the heavy and costly power transformer was eliminated. This also allowed such receivers to operate on direct current, a so-called AC/DC receiver design. Many different US consumer AM radio manufacturers of the era used a virtually identical circuit, given the nickname All American Five. Where the mains voltage was in the 100–120 V range, this limited voltage proved suitable only for low-power receivers. Television receivers either required a transformer or could use a voltage doubling circuit. Where 230 V nominal mains voltage was used, television receivers as well could dispense with a power transformer. Transformer-less power supplies required safety precautions in their design to limit the shock hazard to users, such as electrically insulated cabinets and an interlock tying the power cord to the cabinet back, so the line cord was necessarily disconnected if the user or service person opened the cabinet. A cheater cord was a power cord ending in the special socket used by the safety interlock; servicers could then power the device with the hazardous voltages exposed. To avoid the warm-up delay, "instant on" television receivers passed a small heating current through their tubes even when the set was nominally off. At switch on, full heating current was provided and the set would play almost immediately. Reliability One reliability problem of tubes with oxide cathodes is the possibility that the cathode may slowly become "poisoned" by gas molecules from other elements in the tube, which reduce its ability to emit electrons. Trapped gases or slow gas leaks can also damage the cathode or cause plate (anode) current runaway due to ionization of free gas molecules. Vacuum hardness and proper selection of construction materials are the major influences on tube lifetime. Depending on the material, temperature and construction, the surface material of the cathode may also diffuse onto other elements. The resistive heaters that heat the cathodes may break in a manner similar to incandescent lamp filaments, but rarely do, since they operate at much lower temperatures than lamps. The heater's failure mode is typically a stress-related fracture of the tungsten wire or at a weld point and generally occurs after accruing many thermal (power on-off) cycles. Tungsten wire has a very low resistance when at room temperature. A negative temperature coefficient device, such as a thermistor, may be incorporated in the equipment's heater supply or a ramp-up circuit may be employed to allow the heater or filaments to reach operating temperature more gradually than if powered-up in a step-function. 
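As a rough illustration of why inrush limiting matters, the sketch below compares a heater's cold and hot resistance using an approximate temperature coefficient for tungsten and a simple linear model. The heater rating, coefficient and operating temperature are assumed example values, not data for any specific tube.

```python
# Illustrative estimate of heater inrush current (assumed example values, crude linear model).
V_HEATER = 6.3          # rated heater voltage, volts
I_HOT = 0.3             # rated heater current when warmed up, amps
ALPHA_W = 4.5e-3        # approximate temperature coefficient of tungsten, per kelvin
T_HOT = 1300.0          # assumed heater operating temperature, kelvin
T_COLD = 293.0          # room temperature, kelvin

r_hot = V_HEATER / I_HOT                            # ≈ 21 ohms when hot
r_cold = r_hot / (1 + ALPHA_W * (T_HOT - T_COLD))   # cold resistance from the linear model
i_inrush = V_HEATER / r_cold                        # current at the instant of switch-on

print(round(r_cold, 1), round(i_inrush, 2))         # ≈ 3.8 ohms cold, ≈ 1.66 A (vs 0.3 A hot)
```

Even with this simplified model, the switch-on current comes out several times the rated heater current, which is the surge that the NTC thermistor or ramped supply mentioned above is meant to tame.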
Low-cost radios had tubes with heaters connected in series, with a total voltage equal to that of the line (mains). Some receivers made before World War II had series-string heaters with total voltage less than that of the mains. Some had a resistance wire running the length of the power cord to drop the voltage to the tubes. Others had series resistors made like regular tubes; they were called ballast tubes. Following World War II, tubes intended to be used in series heater strings were redesigned to all have the same ("controlled") warm-up time. Earlier designs had quite-different thermal time constants. The audio output stage, for instance, had a larger cathode and warmed up more slowly than lower-powered tubes. The result was that heaters that warmed up faster also temporarily had higher resistance, because of their positive temperature coefficient. This disproportionate resistance caused them to temporarily operate with heater voltages well above their ratings, and shortened their life. Another important reliability problem is caused by air leakage into the tube. Usually oxygen in the air reacts chemically with the hot filament or cathode, quickly ruining it. Designers developed tube designs that sealed reliably. This was why most tubes were constructed of glass. Metal alloys (such as Cunife and Fernico) and glasses had been developed for light bulbs that expanded and contracted in similar amounts, as temperature changed. These made it easy to construct an insulating envelope of glass, while passing connection wires through the glass to the electrodes. When a vacuum tube is overloaded or operated past its design dissipation, its anode (plate) may glow red. In consumer equipment, a glowing plate is universally a sign of an overloaded tube. However, some large transmitting tubes are designed to operate with their anodes at red, orange, or in rare cases, white heat. "Special quality" versions of standard tubes were often made, designed for improved performance in some respect, such as a longer life cathode, low noise construction, mechanical ruggedness via ruggedized filaments, low microphony, for applications where the tube will spend much of its time cut off, etc. The only way to know the particular features of a special quality part is by reading the datasheet. Names may reflect the standard name (12AU7==>12AU7A, its equivalent ECC82==>E82CC, etc.), or be absolutely anything (standard and special-quality equivalents of the same tube include 12AU7, ECC82, B329, CV491, E2163, E812CC, M8136, CV4003, 6067, VX7058, 5814A and 12AU7A). The longest recorded valve life was earned by a Mazda AC/P pentode valve (serial No. 4418) in operation at the BBC's main Northern Ireland transmitter at Lisnagarvey. The valve was in service from 1935 until 1961 and had a recorded life of 232,592 hours. The BBC maintained meticulous records of their valves' lives with periodic returns to their central valve stores. Vacuum A vacuum tube needs an extremely high vacuum (or hard vacuum, from X-ray terminology) to avoid the consequences of generating positive ions within the tube. Residual gas atoms ionize when struck by an electron and can adversely affect the cathode, reducing emission. Larger amounts of residual gas can create a visible glow discharge between the tube electrodes and cause overheating of the electrodes, producing more gas, damaging the tube and possibly other components due to excess current. 
To avoid these effects, the residual pressure within the tube must be low enough that the mean free path of an electron is much longer than the size of the tube (so an electron is unlikely to strike a residual atom and very few ionized atoms will be present). Commercial vacuum tubes are evacuated at manufacture to a correspondingly low residual pressure. To prevent gases from compromising the tube's vacuum, modern tubes are constructed with getters, which are usually metals that oxidize quickly, barium being the most common. For glass tubes, while the tube envelope is being evacuated, the internal parts except the getter are heated by RF induction heating to evolve any remaining gas from the metal parts. The tube is then sealed and the getter trough or pan, for flash getters, is heated to a high temperature, again by radio frequency induction heating, which causes the getter material to vaporize and react with any residual gas. The vapor is deposited on the inside of the glass envelope, leaving a silver-colored metallic patch that continues to absorb small amounts of gas that may leak into the tube during its working life. Great care is taken with the valve design to ensure this material is not deposited on any of the working electrodes. If a tube develops a serious leak in the envelope, this deposit turns a white color as it reacts with atmospheric oxygen. Large transmitting and specialized tubes often use more exotic getter materials, such as zirconium. Early gettered tubes used phosphorus-based getters, and these tubes are easily identifiable, as the phosphorus leaves a characteristic orange or rainbow deposit on the glass. The use of phosphorus was short-lived, and it was quickly replaced by the superior barium getters. Unlike the barium getters, the phosphorus did not absorb any further gases once it had fired. Getters act by chemically combining with residual or infiltrating gases, but are unable to counteract (non-reactive) inert gases. A known problem, mostly affecting valves with large envelopes such as cathode-ray tubes and camera tubes such as iconoscopes, orthicons, and image orthicons, comes from helium infiltration. The effect appears as impaired or absent functioning, and as a diffuse glow along the electron stream inside the tube. This effect cannot be rectified (short of re-evacuation and resealing), and is responsible for working examples of such tubes becoming rarer and rarer. Unused ("New Old Stock") tubes can also exhibit inert gas infiltration, so there is no long-term guarantee of these tube types surviving into the future. Transmitting tubes Large transmitting tubes have carbonized tungsten filaments containing a small trace (1% to 2%) of thorium. An extremely thin (molecular) layer of thorium atoms forms on the outside of the wire's carbonized layer and, when heated, serves as an efficient source of electrons. The thorium slowly evaporates from the wire surface, while new thorium atoms diffuse to the surface to replace them. Such thoriated tungsten cathodes usually deliver lifetimes in the tens of thousands of hours. The end-of-life scenario for a thoriated-tungsten filament is when the carbonized layer has mostly been converted back into another form of tungsten carbide and emission begins to drop off rapidly; a complete loss of thorium has never been found to be a factor in the end-of-life of a tube with this type of emitter.
WAAY-TV in Huntsville, Alabama achieved 163,000 hours (18.6 years) of service from an Eimac external cavity klystron in the visual circuit of its transmitter; this is the highest documented service life for this type of tube. It has been said that transmitters with vacuum tubes are better able to survive lightning strikes than transistor transmitters do. While it was commonly believed that vacuum tubes were more efficient than solid-state circuits at RF power levels above approximately 20 kilowatts, this is no longer the case, especially in medium wave (AM broadcast) service where solid-state transmitters at nearly all power levels have measurably higher efficiency. FM broadcast transmitters with solid-state power amplifiers up to approximately 15 kW also show better overall power efficiency than tube-based power amplifiers. Receiving tubes Cathodes in small "receiving" tubes are coated with a mixture of barium oxide and strontium oxide, sometimes with addition of calcium oxide or aluminium oxide. An electric heater is inserted into the cathode sleeve and insulated from it electrically by a coating of aluminum oxide. This complex construction causes barium and strontium atoms to diffuse to the surface of the cathode and emit electrons when heated to about 780 degrees Celsius. Failure modes Catastrophic failures A catastrophic failure is one that suddenly makes the vacuum tube unusable. A crack in the glass envelope will allow air into the tube and destroy it. Cracks may result from stress in the glass, bent pins or impacts; tube sockets must allow for thermal expansion, to prevent stress in the glass at the pins. Stress may accumulate if a metal shield or other object presses on the tube envelope and causes differential heating of the glass. Glass may also be damaged by high-voltage arcing. Tube heaters may also fail without warning, especially if exposed to over voltage or as a result of manufacturing defects. Tube heaters do not normally fail by evaporation like lamp filaments since they operate at much lower temperature. The surge of inrush current when the heater is first energized causes stress in the heater and can be avoided by slowly warming the heaters, gradually increasing current with a NTC thermistor included in the circuit. Tubes intended for series-string operation of the heaters across the supply have a specified controlled warm-up time to avoid excess voltage on some heaters as others warm up. Directly heated filament-type cathodes as used in battery-operated tubes or some rectifiers may fail if the filament sags, causing internal arcing. Excess heater-to-cathode voltage in indirectly heated cathodes can break down the insulation between elements and destroy the heater. Arcing between tube elements can destroy the tube. An arc can be caused by applying voltage to the anode (plate) before the cathode has come up to operating temperature, or by drawing excess current through a rectifier, which damages the emission coating. Arcs can also be initiated by any loose material inside the tube, or by excess screen voltage. An arc inside the tube allows gas to evolve from the tube materials, and may deposit conductive material on internal insulating spacers. Tube rectifiers have limited current capability and exceeding ratings will eventually destroy a tube. Degenerative failures Degenerative failures are those caused by the slow deterioration of performance over time. 
Overheating of internal parts, such as control grids or mica spacer insulators, can result in trapped gas escaping into the tube; this can reduce performance. A getter is used to absorb gases evolved during tube operation but has only a limited ability to combine with gas. Control of the envelope temperature prevents some types of gassing. A tube with an unusually high level of internal gas may exhibit a visible blue glow when plate voltage is applied. The getter (being a highly reactive metal) is effective against many atmospheric gases but has no (or very limited) chemical reactivity to inert gases such as helium. One progressive type of failure, especially with physically large envelopes such as those used by camera tubes and cathode-ray tubes, comes from helium infiltration. The exact mechanism is not clear: the metal-to-glass lead-in seals are one possible infiltration site. Gas and ions within the tube contribute to grid current which can disturb operation of a vacuum-tube circuit. Another effect of overheating is the slow deposit of metallic vapors on internal spacers, resulting in inter-element leakage. Tubes on standby for long periods, with heater voltage applied, may develop high cathode interface resistance and display poor emission characteristics. This effect occurred especially in pulse and digital circuits, where tubes had no plate current flowing for extended times. Tubes designed specifically for this mode of operation were made. Cathode depletion is the loss of emission after thousands of hours of normal use. Sometimes emission can be restored for a time by raising heater voltage, either for a short time or a permanent increase of a few percent. Cathode depletion was uncommon in signal tubes but was a frequent cause of failure of monochrome television cathode-ray tubes. Usable life of this expensive component was sometimes extended by fitting a boost transformer to increase heater voltage. Other failures Vacuum tubes may develop defects in operation that make an individual tube unsuitable in a given device, although it may perform satisfactorily in another application. Microphonics refers to internal vibrations of tube elements which modulate the tube's signal in an undesirable way; sound or vibration pick-up may affect the signals, or even cause uncontrolled howling if a feedback path (with greater than unity gain) develops between a microphonic tube and, for example, a loudspeaker. Leakage current between AC heaters and the cathode may couple into the circuit, or electrons emitted directly from the ends of the heater may also inject hum into the signal. Leakage current due to internal contamination may also inject noise. Some of these effects make tubes unsuitable for small-signal audio use, although unobjectionable for other purposes. Selecting the best of a batch of nominally identical tubes for critical applications can produce better results. Tube pins can develop non-conducting or high resistance surface films due to heat or dirt. Pins can be cleaned to restore conductance. Testing Vacuum tubes can be tested outside of their circuitry using a vacuum tube tester. Other vacuum tube devices Most small signal vacuum tube devices have been superseded by semiconductors, but some vacuum tube electronic devices are still in common use. The magnetron is the type of tube used in all microwave ovens. In spite of the advancing state of the art in power semiconductor technology, the vacuum tube still has reliability and cost advantages for high-frequency RF power generation. 
Some tubes, such as magnetrons, traveling-wave tubes, Carcinotrons, and klystrons, combine magnetic and electrostatic effects. These are efficient (usually narrow-band) RF generators and still find use in radar, microwave ovens and industrial heating. Traveling-wave tubes (TWTs) are very good amplifiers and are even used in some communications satellites. High-powered klystron amplifier tubes can provide hundreds of kilowatts in the UHF range. Cathode-ray tubes The cathode-ray tube (CRT) is a vacuum tube used particularly for display purposes. Although there are still many televisions and computer monitors using cathode-ray tubes, they are rapidly being replaced by flat panel displays whose quality has greatly improved even as their prices drop. This is also true of digital oscilloscopes (based on internal computers and analog-to-digital converters), although traditional analog scopes (dependent upon CRTs) continue to be produced, are economical, and preferred by many technicians. At one time many radios used "magic eye tubes", a specialized sort of CRT used in place of a meter movement to indicate signal strength or input level in a tape recorder. A modern indicator device, the vacuum fluorescent display (VFD) is also a sort of cathode-ray tube. The X-ray tube is a type of cathode-ray tube that generates X-rays when high voltage electrons hit the anode. Gyrotrons or vacuum masers, used to generate high-power millimeter band waves, are magnetic vacuum tubes in which a small relativistic effect, due to the high voltage, is used for bunching the electrons. Gyrotrons can generate very high powers (hundreds of kilowatts)., Free-electron lasers, used to generate high-power coherent light and even X-rays, are highly relativistic vacuum tubes driven by high-energy particle accelerators. Thus, these are sorts of cathode-ray tubes. Electron multipliers A photomultiplier is a phototube whose sensitivity is greatly increased through the use of electron multiplication. This works on the principle of secondary emission, whereby a single electron emitted by the photocathode strikes a special sort of anode known as a dynode causing more electrons to be released from that dynode. Those electrons are accelerated toward another dynode at a higher voltage, releasing more secondary electrons; as many as 15 such stages provide a huge amplification. Despite great advances in solid-state photodetectors (e.g. Single-photon avalanche diode), the single-photon detection capability of photomultiplier tubes makes this vacuum tube device excel in certain applications. Such a tube can also be used for detection of ionizing radiation as an alternative to the Geiger–Müller tube (itself not an actual vacuum tube). Historically, the image orthicon TV camera tube widely used in television studios prior to the development of modern CCD arrays also used multistage electron multiplication. For decades, electron-tube designers tried to augment amplifying tubes with electron multipliers in order to increase gain, but these suffered from short life because the material used for the dynodes "poisoned" the tube's hot cathode. (For instance, the interesting RCA 1630 secondary-emission tube was marketed, but did not last.) However, eventually, Philips of the Netherlands developed the EFP60 tube that had a satisfactory lifetime and was used in at least one product, a laboratory pulse generator. By that time, however, transistors were rapidly improving, making such developments superfluous. 
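The overall gain of the multi-stage multiplication described above grows geometrically with the number of dynodes. The sketch below shows the arithmetic; the secondary-emission ratio and stage count are assumed illustrative figures, not data for a particular device.

```python
# Overall gain of an electron multiplier: each dynode multiplies the electron count
# by its secondary-emission ratio, so n stages give gain = delta ** n.
def multiplier_gain(delta_per_stage, stages):
    return delta_per_stage ** stages

# Assumed example: 4 secondary electrons per primary, 10 dynode stages.
print(f"{multiplier_gain(4, 10):.1e}")  # ≈ 1.0e+06
```

With the 15 or so stages mentioned above, even a modest per-stage ratio yields enormous gain, which is why a single photoelectron can produce a measurable output pulse.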
One variant called a "channel electron multiplier" does not use individual dynodes but consists of a curved tube, such as a helix, coated on the inside with material with good secondary emission. One type had a funnel of sorts to capture the secondary electrons. The continuous dynode was resistive, and its ends were connected to enough voltage to create repeated cascades of electrons. The microchannel plate consists of an array of single stage electron multipliers over an image plane; several of these can then be stacked. This can be used, for instance, as an image intensifier in which the discrete channels substitute for focusing. Tektronix made a high-performance wideband oscilloscope CRT with a channel electron multiplier plate behind the phosphor layer. This plate was a bundled array of a huge number of short individual c.e.m. tubes that accepted a low-current beam and intensified it to provide a display of practical brightness. (The electron optics of the wideband electron gun could not provide enough current to directly excite the phosphor.) Vacuum tubes in the 21st century Industrial, commercial, and military niche applications Although vacuum tubes have been largely replaced by solid-state devices in most amplifying, switching, and rectifying applications, there are certain exceptions. In addition to the special functions noted above, tubes have some niche applications. In general, vacuum tubes are much less susceptible than corresponding solid-state components to transient overvoltages, such as mains voltage surges or lightning, the electromagnetic pulse effect of nuclear explosions, or geomagnetic storms produced by giant solar flares. This property kept them in use for certain military applications long after more practical and less expensive solid-state technology was available for the same applications, as for example with the MiG-25. Vacuum tubes are practical alternatives to solid-state devices in generating high power at radio frequencies in applications such as industrial radio frequency heating, particle accelerators, and broadcast transmitters. This is particularly true at microwave frequencies where such devices as the klystron and traveling-wave tube provide amplification at power levels unattainable using semiconductor devices. The household microwave oven uses a magnetron tube to efficiently generate hundreds of watts of microwave power. Solid-state devices such as gallium nitride are promising replacements, but are very expensive and in early stages of development. In military applications, a high-power vacuum tube can generate a 10–100 megawatt signal that can burn out an unprotected receiver's frontend. Such devices are considered non-nuclear electromagnetic weapons; they were introduced in the late 1990s by both the U.S. and Russia. In music Tube amplifiers remain commercially viable in three niches where their warm sound, performance when overdriven, and ability to replicate prior-era tube-based recording are prized: audiophile equipment, musical instrument amplifiers, and devices used in recording studios. Many guitarists prefer using valve amplifiers to solid-state models, often due to the way they tend to distort when overdriven. Any amplifier can only accurately amplify a signal to a certain volume; past this limit, the amplifier will begin to distort the signal. Different circuits will distort the signal in different ways; some guitarists prefer the distortion characteristics of vacuum tubes. Most popular vintage models use vacuum tubes. 
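One way to see why different circuits "distort in different ways" is to compare clipping curves. The sketch below contrasts abrupt hard clipping with a smooth, gradual limit, a rough stand-in for the softer overload behaviour many players associate with valve amplifiers; it is an illustrative signal-processing model, not a physical simulation of any tube circuit.

```python
import math

def hard_clip(x, limit=1.0):
    """Abrupt clipping: the waveform is simply chopped at the limit."""
    return max(-limit, min(limit, x))

def soft_clip(x, limit=1.0):
    """Gradual clipping: the curve bends smoothly as it approaches the limit."""
    return limit * math.tanh(x / limit)

# Drive a sine wave well past the limit and compare the two transfer curves.
for i in range(5):
    x = 2.0 * math.sin(2 * math.pi * i / 8)   # overdriven input samples
    print(f"{x:+.2f} -> hard {hard_clip(x):+.2f}, soft {soft_clip(x):+.2f}")
```

The smooth curve adds distortion gradually and favours lower-order harmonics as the drive increases, whereas the abrupt one immediately introduces strong higher-order components; this difference in harmonic content is part of what is often described as the character of an overdriven valve amplifier.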
Displays Cathode-ray tube The cathode-ray tube was the dominant display technology for televisions and computer monitors at the start of the 21st century. However, rapid advances and falling prices of LCD flat panel technology soon took the place of CRTs in these devices. By 2010, most CRT production had ended. Vacuum tubes using field electron emitters In the early years of the 21st century there has been renewed interest in vacuum tubes, this time with the electron emitter formed on a flat silicon substrate, as in integrated circuit technology. This subject is now called vacuum nanoelectronics. The most common design uses a cold cathode in the form of a large-area field electron source (for example a field emitter array). With these devices, electrons are field-emitted from a large number of closely spaced individual emission sites. Such integrated microtubes may find application in microwave devices including mobile phones, for Bluetooth and Wi-Fi transmission, and in radar and satellite communication. They have also been studied for possible applications in field emission display technology, but there were significant production problems. As of 2014, NASA's Ames Research Center was reported to be working on vacuum-channel transistors produced using CMOS techniques. Characteristics Space charge of a vacuum tube When a cathode is heated to its operating temperature, free electrons are driven from its surface. These free electrons form a cloud in the empty space between the cathode and the anode, known as the space charge. This space charge cloud supplies the electrons that create the current flow from the cathode to the anode. As electrons are drawn to the anode during the operation of the circuit, new electrons boil off the cathode to replenish the space charge. The space charge itself produces an electric field, which tends to repel further electrons emitted from the cathode. Voltage-current characteristics of a vacuum tube All tubes with one or more control grids are controlled by an AC (alternating current) input voltage applied to the control grid, while the resulting amplified signal appears at the anode as a current. Because of the high voltage placed on the anode, a relatively small change in anode current can represent a considerable increase in power over that of the original signal. The space charge electrons driven off the heated cathode are strongly attracted by the positive anode. The control grid(s) in a tube mediate this current flow by combining the small AC signal voltage with the grid's slightly negative bias. When a sinusoidal (AC) signal is applied to the grid, it rides on this negative bias, swinging the grid voltage more and less negative as the signal changes. This relationship is shown with a set of plate characteristic curves, which visually display how the output current from the anode (I_a) is affected by a small input voltage applied to the grid (V_g), for any given voltage on the plate (anode) (V_a). Every tube has a unique set of such characteristic curves. The curves graphically relate the changes in the instantaneous plate current to a much smaller change in the grid-to-cathode voltage (V_gk) as the input signal varies. The V-I characteristic depends upon the size and material of the plate and cathode, and it expresses the relationship between plate voltage and plate current.
V-I curve (voltage across filaments, plate current)
Plate current versus plate voltage characteristics
DC plate resistance of the plate: the resistance of the path between anode and cathode to direct current
AC plate resistance of the plate: the resistance of the path between anode and cathode to alternating current
Size of electrostatic field: the spacing between two or more plates (electrodes) in the tube
Patents
Instrument for converting alternating electric currents into continuous currents (the Fleming valve patent)
Device for amplifying feeble electrical currents (de Forest's three-electrode Audion)
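The DC and AC plate resistances listed above can be read off a tube's plate characteristic curve. The sketch below does this numerically for a few made-up operating points at a fixed grid voltage; the current values are illustrative, not data from any real data sheet.

```python
# Estimate DC and AC plate resistance from sampled points on a plate characteristic
# (plate voltage in volts -> plate current in amps). Invented values for illustration only.
curve = {200.0: 0.0105, 250.0: 0.0125, 300.0: 0.0146}

v_op = 250.0
i_op = curve[v_op]
r_dc = v_op / i_op                       # DC (static) plate resistance: Ohm's law at the operating point

dv = 300.0 - 200.0
di = curve[300.0] - curve[200.0]
r_ac = dv / di                           # AC (dynamic) plate resistance: slope of the curve

print(f"r_dc ≈ {r_dc:.0f} ohms, r_ac ≈ {r_ac:.0f} ohms")  # ≈ 20000 vs ≈ 24390
```

The static figure is simply Ohm's law at the operating point, while the dynamic figure, the local slope of the curve, is the r_p that appears in the Van der Bijl relation discussed earlier.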
https://en.wikipedia.org/wiki/Volume
Volume
Volume is a measure of regions in three-dimensional space. It is often quantified numerically using SI derived units (such as the cubic metre and litre) or by various imperial or US customary units (such as the gallon, quart, and cubic inch). The definition of volume is interrelated with that of length, since a volume is a length cubed. The volume of a container is generally understood to be the capacity of the container; i.e., the amount of fluid (gas or liquid) that the container could hold, rather than the amount of space the container itself displaces. By metonymy, the term "volume" is sometimes used to refer to the corresponding region (e.g., bounding volume). In ancient times, volume was measured using similar-shaped natural containers. Later on, standardized containers were used. Some simple three-dimensional shapes can have their volume easily calculated using arithmetic formulas. Volumes of more complicated shapes can be calculated with integral calculus if a formula exists for the shape's boundary. Zero-, one- and two-dimensional objects have no volume; in four and higher dimensions, an analogous concept to the normal volume is the hypervolume. History Ancient history Volume measurements in the ancient period were made with only limited precision. The earliest evidence of volume calculation came from ancient Egypt and Mesopotamia as mathematical problems, approximating the volume of simple shapes such as cuboids, cylinders, frustums and cones. These problems were written down in the Moscow Mathematical Papyrus (c. 1820 BCE). In the Reisner Papyrus, the ancient Egyptians recorded concrete units of volume for grain and liquids, as well as a table of length, width, depth, and volume for blocks of material. The Egyptians used their units of length (the cubit, palm, digit) to devise their units of volume, such as the volume cubit or deny (1 cubit × 1 cubit × 1 cubit), volume palm (1 cubit × 1 cubit × 1 palm), and volume digit (1 cubit × 1 cubit × 1 digit). The last three books of Euclid's Elements, written in around 300 BCE, detailed the exact formulas for calculating the volume of parallelepipeds, cones, pyramids, cylinders, and spheres. The formulas were determined by earlier mathematicians using a primitive form of integration, breaking the shapes into smaller and simpler pieces. A century later, Archimedes (c. 287–212 BCE) devised approximate volume formulas for several shapes using the method of exhaustion, deriving solutions from previously known formulas for similar shapes. Primitive integration of shapes was also discovered independently by Liu Hui in the 3rd century CE, by Zu Chongzhi in the 5th century CE, and in the Middle East and India. Archimedes also devised a way to calculate the volume of an irregular object, by submerging it in water and measuring the difference between the initial and final water volumes; the difference is the volume of the object. Though the story is highly popularized, Archimedes probably did not submerge the golden crown to find its volume, and thus its density and purity, because of the extreme precision required. Instead, he likely devised a primitive form of hydrostatic balance, in which the crown and a piece of pure gold of similar weight are placed on the two ends of a weighing scale submerged under water; the scale tips according to Archimedes' principle. Calculus and standardization of units In the Middle Ages, many units for measuring volume were made, such as the sester, amber, coomb, and seam.
The sheer quantity of such units motivated British kings to standardize them, culminating in the Assize of Bread and Ale statute in 1258 by Henry III of England. The statute standardized weight, length and volume, and introduced the peny, ounce, pound, gallon and bushel. In 1618, the London Pharmacopoeia (medicine compound catalog) adopted the Roman gallon or congius as a basic unit of volume and gave a conversion table to the apothecaries' units of weight. Around this time, volume measurements were becoming more precise, and the uncertainty narrowed to between . Around the early 17th century, Bonaventura Cavalieri applied the philosophy of modern integral calculus to calculate the volume of any object. He devised Cavalieri's principle, which said that using thinner and thinner slices of the shape would make the resulting volume more and more accurate. This idea would later be expanded by Pierre de Fermat, John Wallis, Isaac Barrow, James Gregory, Isaac Newton, Gottfried Wilhelm Leibniz and Maria Gaetana Agnesi in the 17th and 18th centuries to form the modern integral calculus, which remains in use in the 21st century. Metrication and redefinitions On 7 April 1795, the metric system was formally defined in French law using six units. Three of these are related to volume: the stère (1 m3) for volume of firewood; the litre (1 dm3) for volumes of liquid; and the gramme for mass, defined as the mass of one cubic centimetre of water at the temperature of melting ice. Thirty years later, in 1824, the imperial gallon was defined to be the volume occupied by ten pounds of water at . This definition was further refined until the United Kingdom's Weights and Measures Act 1985, which defines 1 imperial gallon as exactly 4.54609 litres, with no reference to water. The 1960 redefinition of the metre from the International Prototype Metre to the orange-red emission line of krypton-86 atoms decoupled the metre, cubic metre, and litre from physical objects. This also made the metre and metre-derived units of volume immune to changes in the International Prototype Metre. The metre was redefined again in 1983 in terms of the speed of light and the second (which is derived from the caesium standard), and the definition was reworded for clarity in 2019. Properties As a measure of the Euclidean three-dimensional space, volume cannot be physically measured as a negative value, similar to length and area. Like all continuous monotonic (order-preserving) measures, volumes of bodies can be compared against each other and thus can be ordered. Volumes can also be added together and decomposed indefinitely; the latter property is integral to Cavalieri's principle and to the infinitesimal calculus of three-dimensional bodies. A 'unit' of infinitesimally small volume in integral calculus is the volume element; this formulation is useful when working with different coordinate systems, spaces and manifolds. Measurement The oldest way to roughly measure the volume of an object is with the human body, for example using hand size and pinches. However, variation between human bodies makes this extremely unreliable. A better way to measure volume is to use roughly consistent and durable containers found in nature, such as gourds, sheep or pig stomachs, and bladders. Later, as metallurgy and glass production improved, standardized human-made containers came into use; small volumes are now usually measured with such containers. 
This method is common for measuring small volumes of fluids or granular materials, using a multiple or fraction of the container. For granular materials, the container is shaken or leveled off to form a roughly flat surface. This method is not the most accurate way to measure volume but is often used to measure cooking ingredients. The air displacement pipette is used in biology and biochemistry to measure volumes of fluid at the microscopic scale. Calibrated measuring cups and spoons are adequate for cooking and daily life applications; however, they are not precise enough for laboratories. There, volumes of liquid are measured using graduated cylinders, pipettes and volumetric flasks. The largest such calibrated containers are petroleum storage tanks, some of which can hold up to of fluid. Even at this scale, knowing the petroleum's density and temperature allows very precise volume measurements to be made in these tanks. For even larger volumes such as in a reservoir, the container's volume is modeled by shapes and calculated using mathematics. Units To ease calculations, a unit of volume is equal to the volume occupied by a unit cube (with a side length of one). Because the volume occupies three dimensions, if the metre (m) is chosen as a unit of length, the corresponding unit of volume is the cubic metre (m3). The cubic metre is also an SI derived unit. Therefore, volume has a unit dimension of L3. The metric units of volume use metric prefixes, strictly in powers of ten. When applying prefixes to units of volume, which are expressed in units of length cubed, the cube operators are applied to the unit of length including the prefix. An example of converting cubic centimetre to cubic metre is: 2.3 cm3 = 2.3 (cm)3 = 2.3 (0.01 m)3 = 0.0000023 m3 (five zeros). Commonly used prefixes for cubed length units are the cubic millimetre (mm3), cubic centimetre (cm3), cubic decimetre (dm3), cubic metre (m3) and the cubic kilometre (km3). The conversion between the prefix units is as follows: 1000 mm3 = 1 cm3, 1000 cm3 = 1 dm3, and 1000 dm3 = 1 m3. The metric system also includes the litre (L) as a unit of volume, where 1 L = 1 dm3 = 1000 cm3 = 0.001 m3. For the litre unit, the commonly used prefixes are the millilitre (mL), centilitre (cL), and the litre (L), with 1000 mL = 1 L, 10 mL = 1 cL, 10 cL = 1 dL, and 10 dL = 1 L. Various other imperial or U.S. customary units of volume are also in use, including: cubic inch, cubic foot, cubic yard, acre-foot, cubic mile; minim, drachm, fluid ounce, pint; teaspoon, tablespoon; gill, quart, gallon, barrel; cord, peck, bushel, hogshead. Capacity and volume Capacity is the maximum amount of material that a container can hold, measured in volume or weight. However, the contents need not fill the container to its capacity, or vice versa. Containers can only hold a specific amount of physical volume, not weight (excluding practical concerns). For example, a tank that can just hold of fuel oil will not be able to contain the same of naphtha, due to naphtha's lower density and thus larger volume. Computation Basic shapes Many shapes, such as the cube, cuboid and cylinder, share essentially the same volume formula as the prism: the area of the base multiplied by the height. Integral calculus The calculation of volume is a vital part of integral calculus. One such application is calculating the volume of solids of revolution, obtained by rotating a plane curve around a line in the same plane. 
The washer or disc integration method is used when integrating by an axis parallel to the axis of rotation. The general equation can be written as V = \pi \int_a^b \left| f(x)^2 - g(x)^2 \right| \, dx, where f(x) and g(x) are the plane curve boundaries. The shell integration method is used when integrating by an axis perpendicular to the axis of rotation. The equation can be written as V = 2\pi \int_a^b x \left| f(x) - g(x) \right| \, dx. The volume of a region D in three-dimensional space is given by the triple or volume integral of the constant function f(x, y, z) = 1 over the region. It is usually written as V = \iiint_D 1 \, dx \, dy \, dz. In cylindrical coordinates, the volume integral is V = \iiint_D r \, dr \, d\theta \, dz. In spherical coordinates (using the convention for angles with \theta as the azimuth and \varphi measured from the polar axis; see more on conventions), the volume integral is V = \iiint_D \rho^2 \sin\varphi \, d\rho \, d\varphi \, d\theta. Geometric modeling A polygon mesh is a representation of the object's surface, using polygons. The volume mesh explicitly defines its volume and surface properties. Derived quantities Density is the substance's mass per unit volume, or total mass divided by total volume. Specific volume is total volume divided by mass, or the inverse of density. The volumetric flow rate or discharge is the volume of fluid which passes through a given surface per unit time. The volumetric heat capacity is the heat capacity of the substance divided by its volume.
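As an illustration of the disc method described above, the following minimal Python sketch numerically integrates the circular cross-sections of the solid obtained by rotating f(x) = sqrt(R^2 - x^2) about the x-axis (a sphere of radius R) and compares the result with the closed-form value 4/3 π R^3. The function and variable names are illustrative and are not taken from the article.

```python
import math

def disc_method_volume(f, a, b, n=100_000):
    """Approximate the volume of the solid of revolution obtained by rotating
    y = f(x), a <= x <= b, about the x-axis, using the disc method:
    V = pi * integral of f(x)^2 dx, evaluated here with the midpoint rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h      # midpoint of the i-th slice
        total += f(x) ** 2 * h     # disc area (without pi) times slice thickness
    return math.pi * total

R = 2.0
numeric = disc_method_volume(lambda x: math.sqrt(R * R - x * x), -R, R)
exact = 4.0 / 3.0 * math.pi * R ** 3
print(numeric, exact)  # both are approximately 33.51 cubic units
```

The same routine applies to any non-negative boundary curve f; for a washer-shaped solid with an inner boundary g, the integrand would become f(x)^2 - g(x)^2, as in the general equation above.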
Mathematics
Geometry and topology
null
32500
https://en.wikipedia.org/wiki/Vacuum%20pump
Vacuum pump
A vacuum pump is a type of pump device that draws gas particles from a sealed volume in order to leave behind a partial vacuum. The first vacuum pump was invented in 1650 by Otto von Guericke, and was preceded by the suction pump, which dates to antiquity. History Early pumps The predecessor to the vacuum pump was the suction pump. Dual-action suction pumps were found in the city of Pompeii. Arabic engineer Al-Jazari later described dual-action suction pumps as part of water-raising machines in the 13th century. He also said that a suction pump was used in siphons to discharge Greek fire. The suction pump later appeared in medieval Europe from the 15th century. By the 17th century, water pump designs had improved to the point that they produced measurable vacuums, but this was not immediately understood. What was known was that suction pumps could not pull water beyond a certain height: 18 Florentine yards according to a measurement taken around 1635, or about . This limit was a concern in irrigation projects, mine drainage, and decorative water fountains planned by the Duke of Tuscany, so the duke commissioned Galileo Galilei to investigate the problem. Galileo suggested, incorrectly, in his Two New Sciences (1638) that the column of a water pump will break of its own weight when the water has been lifted to 34 feet. Other scientists took up the challenge, including Gasparo Berti, who replicated it by building the first water barometer in Rome in 1639. Berti's barometer produced a vacuum above the water column, but he could not explain it. A breakthrough was made by Galileo's student Evangelista Torricelli in 1643. Building upon Galileo's notes, he built the first mercury barometer and wrote a convincing argument that the space at the top was a vacuum. The height of the column was then limited to the maximum weight that atmospheric pressure could support; this is the limiting height of a suction pump. In 1650, Otto von Guericke invented the first vacuum pump. Four years later, he conducted his famous Magdeburg hemispheres experiment, showing that teams of horses could not separate two hemispheres from which the air had been evacuated. Robert Boyle improved Guericke's design and conducted experiments on the properties of vacuum. Robert Hooke also helped Boyle produce an air pump that helped to produce the vacuum. By 1709, Francis Hauksbee improved on the design further with his two-cylinder pump, where two pistons worked via a rack-and-pinion design that reportedly "gave a vacuum within about one inch of mercury of perfect." This design remained popular and only slightly changed until well into the nineteenth century. 19th century Heinrich Geissler invented the mercury displacement pump in 1855 and achieved a record vacuum of about 10 Pa (0.1 Torr). A number of electrical properties become observable at this vacuum level, and this renewed interest in vacuum. This, in turn, led to the development of the vacuum tube. The Sprengel pump was a widely used vacuum producer of this time. 20th century The early 20th century saw the invention of many types of vacuum pump, including the molecular drag pump, the diffusion pump, and the turbomolecular pump. Types Pumps can be broadly categorized according to three techniques: positive displacement, momentum transfer, and entrapment. Positive displacement pumps use a mechanism to repeatedly expand a cavity, allow gases to flow in from the chamber, seal off the cavity, and exhaust it to the atmosphere. 
Momentum transfer pumps, also called molecular pumps, use high-speed jets of dense fluid or high-speed rotating blades to knock gas molecules out of the chamber. Entrapment pumps capture gases in a solid or adsorbed state; this includes cryopumps, getters, and ion pumps. Positive displacement pumps are the most effective for low vacuums. Momentum transfer pumps, in conjunction with one or two positive displacement pumps, are the most common configuration used to achieve high vacuums. In this configuration the positive displacement pump serves two purposes. First, it obtains a rough vacuum in the vessel being evacuated before the momentum transfer pump can be used to obtain the high vacuum, as momentum transfer pumps cannot start pumping at atmospheric pressures. Second, it backs up the momentum transfer pump by evacuating to low vacuum the accumulation of displaced molecules in the high vacuum pump. Entrapment pumps can be added to reach ultrahigh vacuums, but they require periodic regeneration of the surfaces that trap air molecules or ions. Due to this requirement their available operational time can be unacceptably short in low and high vacuums, thus limiting their use to ultrahigh vacuums. Pumps also differ in details like manufacturing tolerances, sealing material, pressure, flow, admission or no admission of oil vapor, service intervals, reliability, tolerance to dust, tolerance to chemicals, tolerance to liquids and vibration. Positive displacement pump A partial vacuum may be generated by increasing the volume of a container. To continue evacuating a chamber indefinitely without requiring infinite growth, a compartment of the vacuum can be repeatedly closed off, exhausted, and expanded again. This is the principle behind a positive displacement pump, for example the manual water pump. Inside the pump, a mechanism expands a small sealed cavity to reduce its pressure below that of the atmosphere. Because of the pressure differential, some fluid from the chamber (or the well, in our example) is pushed into the pump's small cavity. The pump's cavity is then sealed from the chamber, opened to the atmosphere, and squeezed back to a minute size. More sophisticated systems are used for most industrial applications, but the basic principle of cyclic volume removal is the same:
Rotary vane pump, the most common type
Diaphragm pump, zero oil contamination
Liquid ring pump, high resistance to dust
Piston pump, fluctuating vacuum
Scroll pump, the highest-speed dry pump
Screw pump (10 Pa)
Wankel pump
External vane pump
Roots blower, also called a booster pump, which has the highest pumping speeds but a low compression ratio
Multistage Roots pump, which combines several stages to provide high pumping speed with a better compression ratio
Toepler pump
Lobe pump
The base pressure of a rubber- and plastic-sealed piston pump system is typically 1 to 50 kPa, while a scroll pump might reach 10 Pa (when new) and a rotary vane oil pump with a clean and empty metallic chamber can easily achieve 0.1 Pa. A positive displacement vacuum pump moves the same volume of gas with each cycle, so its pumping speed is constant unless it is overcome by backstreaming. Momentum transfer pump In a momentum transfer pump (or kinetic pump), gas molecules are accelerated from the vacuum side to the exhaust side (which is usually maintained at a reduced pressure by a positive displacement pump). Momentum transfer pumping is only possible below pressures of about 0.1 kPa. 
Matter flows differently at different pressures based on the laws of fluid dynamics. At atmospheric pressure and mild vacuums, molecules interact with each other and push on their neighboring molecules in what is known as viscous flow. When the distance between the molecules increases, the molecules interact with the walls of the chamber more often than with the other molecules, and molecular pumping becomes more effective than positive displacement pumping. This regime is generally called high vacuum. Molecular pumps sweep out a larger area than mechanical pumps, and do so more frequently, making them capable of much higher pumping speeds. They do this at the expense of the seal between the vacuum and their exhaust. Since there is no seal, a small pressure at the exhaust can easily cause backstreaming through the pump; this is called stall. In high vacuum, however, pressure gradients have little effect on fluid flows, and molecular pumps can attain their full potential. The two main types of molecular pumps are the diffusion pump and the turbomolecular pump. Both types of pumps blow out gas molecules that diffuse into the pump by imparting momentum to the gas molecules. Diffusion pumps blow out gas molecules with jets of an oil or mercury vapor, while turbomolecular pumps use high speed fans to push the gas. Both of these pumps will stall and fail to pump if exhausted directly to atmospheric pressure, so they must be exhausted to a lower grade vacuum created by a mechanical pump, in this case called a backing pump. As with positive displacement pumps, the base pressure will be reached when leakage, outgassing, and backstreaming equal the pump speed, but now minimizing leakage and outgassing to a level comparable to backstreaming becomes much more difficult. Entrapment pump An entrapment pump may be a cryopump, which uses cold temperatures to condense gases to a solid or adsorbed state, a chemical pump, which reacts with gases to produce a solid residue, or an ion pump, which uses strong electrical fields to ionize gases and propel the ions into a solid substrate. A cryomodule uses cryopumping. Other types are the sorption pump, non-evaporative getter pump, and titanium sublimation pump (a type of evaporative getter that can be used repeatedly). Other types Regenerative pump Regenerative pumps utilize the vortex behavior of the fluid (air). The construction is based on a hybrid concept combining the centrifugal pump and the turbopump. Usually it consists of several sets of perpendicular teeth on the rotor that circulate air molecules inside stationary hollow grooves, like a multistage centrifugal pump. Such pumps can reach 1×10−5 mbar (0.001 Pa) when combined with a Holweck pump, and can exhaust directly to atmospheric pressure. Examples of such pumps are the Edwards EPX and the Pfeiffer OnTool Booster 150. This type is sometimes referred to as a side channel pump. Because of their high pumping rate from atmosphere to high vacuum, and the lower contamination that results from the bearing being installed on the exhaust side, pumps of this type are used in load locks in semiconductor manufacturing processes. This type of pump suffers from high power consumption (~1 kW) compared to a turbomolecular pump (<100 W) at low pressure, since most of the power is consumed in backing against atmospheric pressure. This can be reduced nearly tenfold by backing it with a small pump. 
More examples Additional types of pump include the: Venturi vacuum pump (aspirator) (10 to 30 kPa) Steam ejector (vacuum depends on the number of stages, but can be very low) Performance measures Pumping speed refers to the volume flow rate of a pump at its inlet, often measured in volume per unit of time. Momentum transfer and entrapment pumps are more effective on some gases than others, so the pumping rate can be different for each of the gases being pumped, and the average volume flow rate of the pump will vary depending on the chemical composition of the gases remaining in the chamber. Throughput refers to the pumping speed multiplied by the gas pressure at the inlet, and is measured in units of pressure·volume/unit time. At a constant temperature, throughput is proportional to the number of molecules being pumped per unit time, and therefore to the mass flow rate of the pump. When discussing a leak in the system or backstreaming through the pump, throughput refers to the volume leak rate multiplied by the pressure at the vacuum side of the leak, so the leak throughput can be compared to the pump throughput. Positive displacement and momentum transfer pumps have a constant volume flow rate (pumping speed), but as the chamber's pressure drops, this volume contains less and less mass. So although the pumping speed remains constant, the throughput and mass flow rate drop exponentially. Meanwhile, the leakage, evaporation, sublimation and backstreaming rates continue to produce a constant throughput into the system. Techniques Vacuum pumps are combined with chambers and operational procedures into a wide variety of vacuum systems. Sometimes more than one pump will be used (in series or in parallel) in a single application. A partial vacuum, or rough vacuum, can be created using a positive displacement pump that transports a gas load from an inlet port to an outlet (exhaust) port. Because of their mechanical limitations, such pumps can only achieve a low vacuum. To achieve a higher vacuum, other techniques must then be used, typically in series (usually following an initial fast pump down with a positive displacement pump). Some examples might be use of an oil sealed rotary vane pump (the most common positive displacement pump) backing a diffusion pump, or a dry scroll pump backing a turbomolecular pump. There are other combinations depending on the level of vacuum being sought. Achieving high vacuum is difficult because all of the materials exposed to the vacuum must be carefully evaluated for their outgassing and vapor pressure properties. For example, oils, greases, and rubber or plastic gaskets used as seals for the vacuum chamber must not boil off when exposed to the vacuum, or the gases they produce would prevent the creation of the desired degree of vacuum. Often, all of the surfaces exposed to the vacuum must be baked at high temperature to drive off adsorbed gases. Outgassing can also be reduced simply by desiccation prior to vacuum pumping. High-vacuum systems generally require metal chambers with metal gasket seals such as Klein flanges or ISO flanges, rather than the rubber gaskets more common in low vacuum chamber seals. The system must be clean and free of organic matter to minimize outgassing. All materials, solid or liquid, have a small vapour pressure, and their outgassing becomes important when the vacuum pressure falls below this vapour pressure. 
As a result, many materials that work well in low vacuums, such as epoxy, will become a source of outgassing at higher vacuums. With these standard precautions, vacuums of 1 mPa are easily achieved with an assortment of molecular pumps. With careful design and operation, 1 μPa is possible. Several types of pumps may be used in sequence or in parallel. In a typical pumpdown sequence, a positive displacement pump would be used to remove most of the gas from a chamber, starting from atmosphere (760 Torr, 101 kPa) to 25 Torr (3 kPa). Then a sorption pump would be used to bring the pressure down to 10−4 Torr (10 mPa). A cryopump or turbomolecular pump would be used to bring the pressure further down to 10−8 Torr (1 μPa). An additional ion pump can be started below 10−6 Torr to remove gases which are not adequately handled by a cryopump or turbo pump, such as helium or hydrogen. Ultra-high vacuum generally requires custom-built equipment, strict operational procedures, and a fair amount of trial-and-error. Ultra-high vacuum systems are usually made of stainless steel with metal-gasketed vacuum flanges. The system is usually baked, preferably under vacuum, to temporarily raise the vapour pressure of all outgassing materials in the system and boil them off. If necessary, this outgassing of the system can also be performed at room temperature, but this takes much more time. Once the bulk of the outgassing materials are boiled off and evacuated, the system may be cooled to lower vapour pressures to minimize residual outgassing during actual operation. Some systems are cooled well below room temperature by liquid nitrogen to shut down residual outgassing and simultaneously cryopump the system. In ultra-high vacuum systems, some very odd leakage paths and outgassing sources must be considered. The water absorption of aluminium and palladium becomes an unacceptable source of outgassing, and even the absorptivity of hard metals such as stainless steel or titanium must be considered. Some oils and greases will boil off in extreme vacuums. The porosity of the metallic vacuum chamber walls may have to be considered, and the grain direction of the metallic flanges should be parallel to the flange face. The impact of molecular size must be considered. Smaller molecules can leak in more easily and are more easily absorbed by certain materials, and molecular pumps are less effective at pumping gases with lower molecular weights. A system may be able to evacuate nitrogen (the main component of air) to the desired vacuum, but the chamber could still be full of residual atmospheric hydrogen and helium. Vessels lined with a highly gas-permeable material such as palladium (which is a high-capacity hydrogen sponge) create special outgassing problems. 
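The pump-down sequence just described can be given a rough quantitative feel. Assuming an ideal chamber with no leaks or outgassing and a pump with constant pumping speed S and ultimate pressure p_ult (all of the numbers below are illustrative assumptions, not values from this article), the chamber pressure decays exponentially as p(t) = p_ult + (p0 − p_ult)·exp(−S·t/V). A minimal Python sketch:

```python
import math

def pressure_after(p0_torr, p_ult_torr, speed_l_per_s, volume_l, t_s):
    """Idealized pump-down: constant pumping speed, no leaks or outgassing.
    p(t) = p_ult + (p0 - p_ult) * exp(-S * t / V)"""
    return p_ult_torr + (p0_torr - p_ult_torr) * math.exp(-speed_l_per_s * t_s / volume_l)

# Hypothetical 100 L chamber, 5 L/s roughing pump, 0.1 Torr ultimate pressure.
for t in (0, 60, 120, 300):
    p = pressure_after(p0_torr=760.0, p_ult_torr=0.1, speed_l_per_s=5.0, volume_l=100.0, t_s=t)
    print(f"t = {t:3d} s  ->  p = {p:8.2f} Torr")
```

Real systems fall away from this curve once leakage, outgassing and backstreaming become comparable to the pump throughput, which is why the staged combinations of pumps described above are needed to reach high and ultra-high vacuum.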
Applications Vacuum pumps are used in many industrial and scientific processes, including: Vacuum deaerator Composite plastic moulding processes; Production of most types of electric lamps, vacuum tubes, and CRTs where the device is either left evacuated or re-filled with a specific gas or gas mixture; Semiconductor processing, notably ion implantation, dry etch and PVD, ALD, PECVD and CVD deposition and so on in photolithography; Electron microscopy; Medical processes that require suction; Uranium enrichment; Medical applications such as radiotherapy, radiosurgery and radiopharmacy; Analytical instrumentation to analyse gas, liquid, solid, surface and bio materials; Mass spectrometers to create a high vacuum between the ion source and the detector; vacuum coating on glass, metal and plastics for decoration, for durability and for energy saving, such as low-emissivity glass, hard coating for engine components (as in Formula One), ophthalmic coating, milking machines and other equipment in dairy sheds; Vacuum impregnation of porous products such as wood or electric motor windings; Air conditioning service (removing all contaminants from the system before charging with refrigerant); Trash compactor; Vacuum engineering; Sewage systems (see EN1091:1997 standards); Freeze drying; and Fusion research. In the field of oil regeneration and re-refining, vacuum pumps create a low vacuum for oil dehydration and a high vacuum for oil purification. A vacuum may be used to power, or provide assistance to mechanical devices. In hybrid and diesel engine motor vehicles, a pump fitted on the engine (usually on the camshaft) is used to produce a vacuum. In petrol engines, instead, the vacuum is typically obtained as a side-effect of the operation of the engine and the flow restriction created by the throttle plate but may be also supplemented by an electrically operated vacuum pump to boost braking assistance or improve fuel consumption. This vacuum may then be used to power the following motor vehicle components: vacuum servo booster for the hydraulic brakes, motors that move dampers in the ventilation system, throttle driver in the cruise control servomechanism, door locks or trunk releases. In an aircraft, the vacuum source is often used to power gyroscopes in the various flight instruments. To prevent the complete loss of instrumentation in the event of an electrical failure, the instrument panel is deliberately designed with certain instruments powered by electricity and other instruments powered by the vacuum source. Depending on the application, some vacuum pumps may either be electrically driven (using electric current) or pneumatically-driven (using air pressure), or powered and actuated by other means. Hazards Old vacuum-pump oils that were produced before circa 1980 often contain a mixture of several different dangerous polychlorinated biphenyls (PCBs), which are highly toxic, carcinogenic, persistent organic pollutants.
Technology
Hydraulics and pneumatics
null
32502
https://en.wikipedia.org/wiki/Vacuum
Vacuum
A vacuum (plural: vacuums or vacua) is space devoid of matter. The word is derived from the Latin adjective vacuus (neuter vacuum), meaning "vacant" or "void". An approximation to such vacuum is a region with a gaseous pressure much less than atmospheric pressure. Physicists often discuss ideal test results that would occur in a perfect vacuum, which they sometimes simply call "vacuum" or free space, and use the term partial vacuum to refer to an actual imperfect vacuum as one might have in a laboratory or in space. In engineering and applied physics on the other hand, vacuum refers to any space in which the pressure is considerably lower than atmospheric pressure. The Latin term in vacuo is used to describe an object that is surrounded by a vacuum. The quality of a partial vacuum refers to how closely it approaches a perfect vacuum. Other things equal, lower gas pressure means higher-quality vacuum. For example, a typical vacuum cleaner produces enough suction to reduce air pressure by around 20%. But higher-quality vacuums are possible. Ultra-high vacuum chambers, common in chemistry, physics, and engineering, operate below one trillionth (10−12) of atmospheric pressure (100 nPa), and can reach around 100 particles/cm3. Outer space is an even higher-quality vacuum, with the equivalent of just a few hydrogen atoms per cubic meter on average in intergalactic space. Vacuum has been a frequent topic of philosophical debate since ancient Greek times, but was not studied empirically until the 17th century. Clemens Timpler (1605) philosophized about the experimental possibility of producing a vacuum in small tubes. Evangelista Torricelli produced the first laboratory vacuum in 1643, and other experimental techniques were developed as a result of his theories of atmospheric pressure. A Torricellian vacuum is created by filling a tall glass container closed at one end with mercury, and then inverting it in a bowl to contain the mercury. Vacuum became a valuable industrial tool in the 20th century with the introduction of incandescent light bulbs and vacuum tubes, and a wide array of vacuum technologies has since become available. The development of human spaceflight has raised interest in the impact of vacuum on human health, and on life forms in general. Etymology The word vacuum comes from Latin, a noun use of the neuter of vacuus, meaning "empty", related to vacare, meaning "to be empty". Vacuum is one of the few words in the English language that contains two consecutive instances of the vowel u. Historical understanding Historically, there has been much dispute over whether such a thing as a vacuum can exist. Ancient Greek philosophers debated the existence of a vacuum, or void, in the context of atomism, which posited void and atom as the fundamental explanatory elements of physics. Lucretius argued for the existence of vacuum in the first century BC and Hero of Alexandria tried unsuccessfully to create an artificial vacuum in the first century AD. Following Plato, however, even the abstract concept of a featureless void faced considerable skepticism: it could not be apprehended by the senses, it could not, itself, provide additional explanatory power beyond the physical volume with which it was commensurate and, by definition, it was quite literally nothing at all, which cannot rightly be said to exist. Aristotle believed that no void could occur naturally, because the denser surrounding material continuum would immediately fill any incipient rarity that might give rise to a void. 
In his Physics, book IV, Aristotle offered numerous arguments against the void: for example, that motion through a medium which offered no impediment could continue ad infinitum, there being no reason that something would come to rest anywhere in particular. In the medieval Muslim world, the physicist and Islamic scholar Al-Farabi wrote a treatise rejecting the existence of the vacuum in the 10th century. He concluded that air's volume can expand to fill available space, and therefore the concept of a perfect vacuum was incoherent. According to Ahmad Dallal, Abū Rayhān al-Bīrūnī states that "there is no observable evidence that rules out the possibility of vacuum". The suction pump was described by Arab engineer Al-Jazari in the 13th century, and later appeared in Europe from the 15th century. European scholars such as Roger Bacon, Blasius of Parma and Walter Burley in the 13th and 14th century focused considerable attention on issues concerning the concept of a vacuum. The commonly held view that nature abhorred a vacuum was called horror vacui. There was even speculation that God could not create a vacuum even if he wanted to, and the 1277 Paris condemnations of Bishop Étienne Tempier, which required there to be no restrictions on the powers of God, led to the conclusion that God could create a vacuum if he so wished. From the 14th century onward, scholars increasingly departed from the Aristotelian perspective, and by the 17th century it was widely acknowledged that a supernatural void exists beyond the confines of the cosmos itself. This idea, influenced by Stoic physics, helped to segregate natural and theological concerns. Almost two thousand years after Plato, René Descartes also proposed a geometrically based alternative theory of atomism, without the problematic nothing–everything dichotomy of void and atom. Although Descartes agreed with the contemporary position that a vacuum does not occur in nature, the success of his namesake coordinate system and, more implicitly, the spatial–corporeal component of his metaphysics would come to define the philosophically modern notion of empty space as a quantified extension of volume. By the ancient definition, however, directional information and magnitude were conceptually distinct. Medieval thought experiments into the idea of a vacuum considered whether a vacuum was present, if only for an instant, between two flat plates when they were rapidly separated. There was much discussion of whether the air moved in quickly enough as the plates were separated, or, as Walter Burley postulated, whether a 'celestial agent' prevented the vacuum arising. Jean Buridan reported in the 14th century that teams of ten horses could not pull open bellows when the port was sealed. The 17th century saw the first attempts to quantify measurements of partial vacuum. Evangelista Torricelli's mercury barometer of 1643 and Blaise Pascal's experiments both demonstrated a partial vacuum. In 1650, Otto von Guericke invented the first vacuum pump, and in 1654 he conducted his famous Magdeburg hemispheres experiment, showing that, owing to atmospheric pressure outside the hemispheres, teams of horses could not separate two hemispheres from which the air had been partially evacuated. Robert Boyle improved Guericke's design and with the help of Robert Hooke further developed vacuum pump technology. 
Thereafter, research into the partial vacuum lapsed until 1850 when August Toepler invented the Toepler pump and in 1855 when Heinrich Geissler invented the mercury displacement pump, achieving a partial vacuum of about 10 Pa (0.1 Torr). A number of electrical properties become observable at this vacuum level, which renewed interest in further research. While outer space provides the most rarefied example of a naturally occurring partial vacuum, the heavens were originally thought to be seamlessly filled by a rigid indestructible material called aether. Borrowing somewhat from the pneuma of Stoic physics, aether came to be regarded as the rarefied air from which it took its name (see Aether (mythology)). Early theories of light posited a ubiquitous terrestrial and celestial medium through which light propagated. Additionally, the concept informed Isaac Newton's explanations of both refraction and of radiant heat. 19th century experiments into this luminiferous aether attempted to detect a minute drag on the Earth's orbit. While the Earth does, in fact, move through a relatively dense medium in comparison to that of interstellar space, the drag is so minuscule that it could not be detected. In 1912, astronomer Henry Pickering commented: "While the interstellar absorbing medium may be simply the ether, [it] is characteristic of a gas, and free gaseous molecules are certainly there". Thereafter, however, luminiferous aether was discarded. Later, in 1930, Paul Dirac proposed a model of the vacuum as an infinite sea of particles possessing negative energy, called the Dirac sea. This theory helped refine the predictions of his earlier formulated Dirac equation, and successfully predicted the existence of the positron, confirmed two years later. Werner Heisenberg's uncertainty principle, formulated in 1927, predicted a fundamental limit within which instantaneous position and momentum, or energy and time, can be measured. This has far-reaching consequences, calling into question whether the "emptiness" of space between particles truly exists. 
In the theory of classical electromagnetism, free space has the following properties: Electromagnetic radiation travels, when unobstructed, at the speed of light, the defined value 299,792,458 m/s in SI units. The superposition principle is always exactly true. For example, the electric potential generated by two charges is the simple addition of the potentials generated by each charge in isolation. The value of the electric field at any point around these two charges is found by calculating the vector sum of the two electric fields from each of the charges acting alone. The permittivity and permeability are exactly the electric constant ε0 and magnetic constant μ0, respectively (in SI units), or exactly 1 (in Gaussian units). The characteristic impedance (Z0) equals the impedance of free space, approximately 376.73 Ω. The vacuum of classical electromagnetism can be viewed as an idealized electromagnetic medium with the constitutive relations, in SI units, D(r, t) = ε0 E(r, t) and H(r, t) = B(r, t)/μ0, relating the electric displacement field D to the electric field E and the magnetic field or H-field H to the magnetic induction or B-field B. Here r is a spatial location and t is time. Quantum mechanics In quantum mechanics and quantum field theory, the vacuum is defined as the state (that is, the solution to the equations of the theory) with the lowest possible energy (the ground state of the Hilbert space). In quantum electrodynamics this vacuum is referred to as 'QED vacuum' to distinguish it from the vacuum of quantum chromodynamics, denoted as QCD vacuum. QED vacuum is a state with no matter particles (hence the name), and no photons. As described above, this state is impossible to achieve experimentally. (Even if every matter particle could somehow be removed from a volume, it would be impossible to eliminate all the blackbody photons.) Nonetheless, it provides a good model for realizable vacuum, and agrees with a number of experimental observations as described next. QED vacuum has interesting and complex properties. In QED vacuum, the electric and magnetic fields have zero average values, but their variances are not zero. As a result, QED vacuum contains vacuum fluctuations (virtual particles that hop into and out of existence), and a finite energy called vacuum energy. Vacuum fluctuations are an essential and ubiquitous part of quantum field theory. Some experimentally verified effects of vacuum fluctuations include spontaneous emission and the Lamb shift. Coulomb's law and the electric potential in vacuum near an electric charge are modified. Theoretically, in QCD multiple vacuum states can coexist. The starting and ending of cosmological inflation is thought to have arisen from transitions between different vacuum states. For theories obtained by quantization of a classical theory, each stationary point of the energy in the configuration space gives rise to a single vacuum. String theory is believed to have a huge number of vacua – the so-called string theory landscape. Outer space Outer space has very low density and pressure, and is the closest physical approximation of a perfect vacuum. But no vacuum is truly perfect, not even in interstellar space, where there are still a few hydrogen atoms per cubic meter. Stars, planets, and moons keep their atmospheres by gravitational attraction, and as such, atmospheres have no clearly delineated boundary: the density of atmospheric gas simply decreases with distance from the object. 
The Earth's atmospheric pressure drops to about at of altitude, the Kármán line, which is a common definition of the boundary with outer space. Beyond this line, isotropic gas pressure rapidly becomes insignificant when compared to radiation pressure from the Sun and the dynamic pressure of the solar winds, so the definition of pressure becomes difficult to interpret. The thermosphere in this range has large gradients of pressure, temperature and composition, and varies greatly due to space weather. Astrophysicists prefer to use number density to describe these environments, in units of particles per cubic centimetre. But although it meets the definition of outer space, the atmospheric density within the first few hundred kilometers above the Kármán line is still sufficient to produce significant drag on satellites. Most artificial satellites operate in this region, called low Earth orbit, and must fire their engines every couple of weeks or a few times a year (depending on solar activity). The drag here is low enough that it could theoretically be overcome by radiation pressure on solar sails, a proposed propulsion system for interplanetary travel. All of the observable universe is filled with large numbers of photons, the so-called cosmic background radiation, and quite likely a correspondingly large number of neutrinos. The current temperature of this radiation is about . Measurement The quality of a vacuum is indicated by the amount of matter remaining in the system, so that a high quality vacuum is one with very little matter left in it. Vacuum is primarily measured by its absolute pressure, but a complete characterization requires further parameters, such as temperature and chemical composition. One of the most important parameters is the mean free path (MFP) of residual gases, which indicates the average distance that molecules will travel between collisions with each other. As the gas density decreases, the MFP increases, and when the MFP is longer than the chamber, pump, spacecraft, or other objects present, the continuum assumptions of fluid mechanics do not apply. This vacuum state is called high vacuum, and the study of fluid flows in this regime is called particle gas dynamics. The MFP of air at atmospheric pressure is very short, 70 nm, but at 100 mPa the MFP of room temperature air is roughly 100 mm, which is on the order of everyday objects such as vacuum tubes. The Crookes radiometer turns when the MFP is larger than the size of the vanes. Vacuum quality is subdivided into ranges according to the technology required to achieve it or measure it. These ranges were defined in ISO 3529-1:2019 (100 Pa corresponds to 0.75 Torr; Torr is a non-SI unit). Atmospheric pressure is variable, but standard reference values are commonly used. Deep space is generally much more empty than any artificial vacuum. It may or may not meet the definition of high vacuum above, depending on what region of space and astronomical bodies are being considered. For example, the MFP of interplanetary space is smaller than the size of the Solar System, but larger than small planets and moons. As a result, solar winds exhibit continuum flow on the scale of the Solar System, but must be considered a bombardment of particles with respect to the Earth and Moon. Perfect vacuum is an ideal state of no particles at all. It cannot be achieved in a laboratory, although there may be small volumes which, for a brief moment, happen to have no particles of matter in them. 
Even if all particles of matter were removed, there would still be photons, as well as dark energy, virtual particles, and other aspects of the quantum vacuum. Relative versus absolute measurement Vacuum is measured in units of pressure, typically as a subtraction relative to ambient atmospheric pressure on Earth. But the amount of relative measurable vacuum varies with local conditions. On the surface of Venus, where ground-level atmospheric pressure is much higher than on Earth, much higher relative vacuum readings would be possible. On the surface of the Moon, with almost no atmosphere, it would be extremely difficult to create a measurable vacuum relative to the local environment. Similarly, much higher than normal relative vacuum readings are possible deep in the Earth's ocean. A submarine maintaining an internal pressure of 1 atmosphere submerged to a depth of 10 atmospheres (98 metres; a 9.8-metre column of seawater has the equivalent weight of 1 atm) is effectively a vacuum chamber keeping out the crushing exterior water pressures, though the 1 atm inside the submarine would not normally be considered a vacuum. Therefore, to properly understand the following discussions of vacuum measurement, it is important that the reader assumes the relative measurements are being done on Earth at sea level, at exactly 1 atmosphere of ambient atmospheric pressure. Measurements relative to 1 atm The SI unit of pressure is the pascal (symbol Pa), but vacuum is often measured in torrs, named for the Italian physicist Torricelli (1608–1647). A torr is equal to the displacement of a millimeter of mercury (mmHg) in a manometer, with 1 torr equaling 133.3223684 pascals above absolute zero pressure. Vacuum is often also measured on the barometric scale or as a percentage of atmospheric pressure in bars or atmospheres. Low vacuum is often measured in millimeters of mercury (mmHg) or pascals (Pa) below standard atmospheric pressure. "Below atmospheric" means that the absolute pressure is equal to the current atmospheric pressure minus the vacuum reading. In other words, a low vacuum gauge that reads, for example, 50.79 Torr below an ambient pressure of 760 Torr is indicating an absolute pressure of about 709 Torr. Many inexpensive low vacuum gauges have a margin of error and may report a vacuum of 0 Torr, but in practice this generally requires a two-stage rotary vane or other medium type of vacuum pump to go much beyond (lower than) 1 torr. Measuring instruments Many devices are used to measure the pressure in a vacuum, depending on what range of vacuum is needed. Hydrostatic gauges (such as the mercury column manometer) consist of a vertical column of liquid in a tube whose ends are exposed to different pressures. The column will rise or fall until its weight is in equilibrium with the pressure differential between the two ends of the tube. The simplest design is a closed-end U-shaped tube, one side of which is connected to the region of interest. Any fluid can be used, but mercury is preferred for its high density and low vapour pressure. Simple hydrostatic gauges can measure pressures ranging from 1 torr (100 Pa) to above atmospheric. An important variation is the McLeod gauge which isolates a known volume of vacuum and compresses it to multiply the height variation of the liquid column. The McLeod gauge can measure vacuums as high as 10−6 torr (0.1 mPa), which is the lowest direct measurement of pressure that is possible with current technology. Other vacuum gauges can measure lower pressures, but only indirectly by measurement of other pressure-controlled properties. 
These indirect measurements must be calibrated via a direct measurement, most commonly a McLeod gauge. The kenotometer is a particular type of hydrostatic gauge, typically used in power plants using steam turbines. The kenotometer measures the vacuum in the steam space of the condenser, that is, the exhaust of the last stage of the turbine. Mechanical or elastic gauges depend on a Bourdon tube, diaphragm, or capsule, usually made of metal, which will change shape in response to the pressure of the region in question. A variation on this idea is the capacitance manometer, in which the diaphragm makes up a part of a capacitor. A change in pressure leads to the flexure of the diaphragm, which results in a change in capacitance. These gauges are effective from 1,000 torr to 10−4 torr, and beyond. Thermal conductivity gauges rely on the fact that the ability of a gas to conduct heat decreases with pressure. In this type of gauge, a wire filament is heated by running current through it. A thermocouple or Resistance Temperature Detector (RTD) can then be used to measure the temperature of the filament. This temperature is dependent on the rate at which the filament loses heat to the surrounding gas, and therefore on the thermal conductivity. A common variant is the Pirani gauge, which uses a single platinum filament as both the heated element and RTD. These gauges are accurate from 10 torr to 10−3 torr, but they are sensitive to the chemical composition of the gases being measured. Ionization gauges are used in ultrahigh vacuum. They come in two types: hot cathode and cold cathode. In the hot cathode version, an electrically heated filament produces an electron beam. The electrons travel through the gauge and ionize gas molecules around them. The resulting ions are collected at a negative electrode. The current depends on the number of ions, which depends on the pressure in the gauge. Hot cathode gauges are accurate from 10−3 torr to 10−10 torr. The principle behind the cold cathode version is the same, except that electrons are produced by a high-voltage electrical discharge. Cold cathode gauges are accurate from 10−2 torr to 10−9 torr. Ionization gauge calibration is very sensitive to construction geometry, chemical composition of gases being measured, corrosion and surface deposits. Their calibration can be invalidated by activation at atmospheric pressure or low vacuum. The composition of gases at high vacuums will usually be unpredictable, so a mass spectrometer must be used in conjunction with the ionization gauge for accurate measurement. Uses Vacuum is useful in a variety of processes and devices. Its first widespread use was in the incandescent light bulb to protect the filament from chemical degradation. The chemical inertness produced by a vacuum is also useful for electron beam welding, cold welding, vacuum packing and vacuum frying. Ultra-high vacuum is used in the study of atomically clean substrates, as only a very good vacuum preserves atomic-scale clean surfaces for a reasonably long time (on the order of minutes to days). High to ultra-high vacuum removes the obstruction of air, allowing particle beams to deposit or remove materials without contamination. This is the principle behind chemical vapor deposition, physical vapor deposition, and dry etching, which are essential to the fabrication of semiconductors and optical coatings, and to surface science. The reduction of convection provides the thermal insulation of thermos bottles. 
Deep vacuum lowers the boiling point of liquids and promotes low temperature outgassing which is used in freeze drying, adhesive preparation, distillation, metallurgy, and process purging. The electrical properties of vacuum make electron microscopes and vacuum tubes possible, including cathode-ray tubes. Vacuum interrupters are used in electrical switchgear. Vacuum arc processes are industrially important for production of certain grades of steel or high purity materials. The elimination of air friction is useful for flywheel energy storage and ultracentrifuges. Vacuum-driven machines Vacuums are commonly used to produce suction, which has an even wider variety of applications. The Newcomen steam engine used vacuum instead of pressure to drive a piston. In the 19th century, vacuum was used for traction on Isambard Kingdom Brunel's experimental atmospheric railway. Vacuum brakes were once widely used on trains in the UK but, except on heritage railways, they have been replaced by air brakes. Manifold vacuum can be used to drive accessories on automobiles. The best known application is the vacuum servo, used to provide power assistance for the brakes. Obsolete applications include vacuum-driven windscreen wipers and Autovac fuel pumps. Some aircraft instruments (Attitude Indicator (AI) and the Heading Indicator (HI)) are typically vacuum-powered, as protection against loss of all (electrically powered) instruments, since early aircraft often did not have electrical systems, and since there are two readily available sources of vacuum on a moving aircraft, the engine and an external venturi. Vacuum induction melting uses electromagnetic induction within a vacuum. Maintaining a vacuum in the condenser is an important aspect of the efficient operation of steam turbines. A steam jet ejector or liquid ring vacuum pump is used for this purpose. The typical vacuum maintained in the condenser steam space at the exhaust of the turbine (also called condenser backpressure) is in the range 5 to 15 kPa (absolute), depending on the type of condenser and the ambient conditions. Outgassing Evaporation and sublimation into a vacuum is called outgassing. All materials, solid or liquid, have a small vapour pressure, and their outgassing becomes important when the vacuum pressure falls below this vapour pressure. Outgassing has the same effect as a leak and will limit the achievable vacuum. Outgassing products may condense on nearby colder surfaces, which can be troublesome if they obscure optical instruments or react with other materials. This is of great concern to space missions, where an obscured telescope or solar cell can ruin an expensive mission. The most prevalent outgassing product in vacuum systems is water absorbed by chamber materials. It can be reduced by desiccating or baking the chamber, and removing absorbent materials. Outgassed water can condense in the oil of rotary vane pumps and reduce their net speed drastically if gas ballasting is not used. High vacuum systems must be clean and free of organic matter to minimize outgassing. Ultra-high vacuum systems are usually baked, preferably under vacuum, to temporarily raise the vapour pressure of all outgassing materials and boil them off. Once the bulk of the outgassing materials are boiled off and evacuated, the system may be cooled to lower vapour pressures and minimize residual outgassing during actual operation. 
Some systems are cooled well below room temperature by liquid nitrogen to shut down residual outgassing and simultaneously cryopump the system. Pumping and ambient air pressure Fluids cannot generally be pulled, so a vacuum cannot be created by suction. Suction can spread and dilute a vacuum by letting a higher pressure push fluids into it, but the vacuum has to be created first before suction can occur. The easiest way to create an artificial vacuum is to expand the volume of a container. For example, the diaphragm muscle expands the chest cavity, which causes the volume of the lungs to increase. This expansion reduces the pressure and creates a partial vacuum, which is soon filled by air pushed in by atmospheric pressure. To continue evacuating a chamber indefinitely without requiring infinite growth, a compartment of the vacuum can be repeatedly closed off, exhausted, and expanded again. This is the principle behind positive displacement pumps, like the manual water pump for example. Inside the pump, a mechanism expands a small sealed cavity to create a vacuum. Because of the pressure differential, some fluid from the chamber (or the well, in our example) is pushed into the pump's small cavity. The pump's cavity is then sealed from the chamber, opened to the atmosphere, and squeezed back to a minute size. The above explanation is merely a simple introduction to vacuum pumping, and is not representative of the entire range of pumps in use. Many variations of the positive displacement pump have been developed, and many other pump designs rely on fundamentally different principles. Momentum transfer pumps, which bear some similarities to dynamic pumps used at higher pressures, can achieve much higher quality vacuums than positive displacement pumps. Entrapment pumps can capture gases in a solid or absorbed state, often with no moving parts, no seals and no vibration. None of these pumps are universal; each type has important performance limitations. They all share a difficulty in pumping low molecular weight gases, especially hydrogen, helium, and neon. The lowest pressure that can be attained in a system is also dependent on many things other than the nature of the pumps. Multiple pumps may be connected in series, called stages, to achieve higher vacuums. The choice of seals, chamber geometry, materials, and pump-down procedures will all have an impact. Collectively, these are called vacuum technique. And sometimes, the final pressure is not the only relevant characteristic. Pumping systems differ in oil contamination, vibration, preferential pumping of certain gases, pump-down speeds, intermittent duty cycle, reliability, or tolerance to high leakage rates. In ultra high vacuum systems, some very "odd" leakage paths and outgassing sources must be considered. The water absorption of aluminium and palladium becomes an unacceptable source of outgassing, and even the adsorptivity of hard metals such as stainless steel or titanium must be considered. Some oils and greases will boil off in extreme vacuums. The permeability of the metallic chamber walls may have to be considered, and the grain direction of the metallic flanges should be parallel to the flange face. The lowest pressures currently achievable in laboratory are about . However, pressures as low as have been indirectly measured in a cryogenic vacuum system. This corresponds to ≈100 particles/cm3. 
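The correspondence quoted above between a pressure and a number of particles per cubic centimetre follows from the ideal gas law, n = p / (k_B T). A minimal Python sketch of the conversion (the temperature and the final example pressure are illustrative assumptions, not values from this article):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def particles_per_cm3(pressure_pa, temperature_k=295.0):
    """Ideal-gas number density n = p / (k_B * T), converted from m^-3 to cm^-3."""
    return pressure_pa / (K_B * temperature_k) / 1e6

print(f"{particles_per_cm3(101_325):.2e}")  # ~2.5e19 cm^-3 at atmospheric pressure
print(f"{particles_per_cm3(1e-7):.2e}")     # ~2.5e7  cm^-3 at 100 nPa (ultra-high vacuum)
print(f"{particles_per_cm3(4e-13):.2e}")    # ~1e2 cm^-3; a room-temperature pressure of this
                                            # order corresponds to roughly 100 particles/cm3
```

At cryogenic temperatures the same number density corresponds to an even lower pressure, since n scales as p/T.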
Effects on humans and animals Humans and animals exposed to vacuum will lose consciousness after a few seconds and die of hypoxia within minutes, but the symptoms are not nearly as graphic as commonly depicted in media and popular culture. The reduction in pressure lowers the temperature at which blood and other body fluids boil, but the elastic pressure of blood vessels ensures that this boiling point remains above the internal body temperature of 37 °C. Although the blood will not boil, the formation of gas bubbles in bodily fluids at reduced pressures, known as ebullism, is still a concern. The gas may bloat the body to twice its normal size and slow circulation, but tissues are elastic and porous enough to prevent rupture. Swelling and ebullism can be restrained by containment in a flight suit. Shuttle astronauts wore a fitted elastic garment called the Crew Altitude Protection Suit (CAPS), which prevents ebullism at pressures as low as 2 kPa (15 Torr). Rapid boiling will cool the skin and create frost, particularly in the mouth, but this is not a significant hazard. Animal experiments show that rapid and complete recovery is normal for exposures shorter than 90 seconds, while longer full-body exposures are fatal and resuscitation has never been successful. A study by NASA on eight chimpanzees found that all of them survived two-and-a-half-minute exposures to vacuum. There is only a limited amount of data available from human accidents, but it is consistent with animal data. Limbs may be exposed for much longer if breathing is not impaired. Robert Boyle was the first to show, in 1660, that vacuum is lethal to small animals. An experiment indicates that plants are able to survive in a low-pressure environment (1.5 kPa) for about 30 minutes. Cold or oxygen-rich atmospheres can sustain life at pressures much lower than atmospheric, as long as the density of oxygen is similar to that of the standard sea-level atmosphere. The colder air temperatures found at altitudes of up to 3 km generally compensate for the lower pressures there. Above this altitude, oxygen enrichment is necessary to prevent altitude sickness in humans who have not undergone prior acclimatization, and spacesuits are necessary to prevent ebullism above 19 km. Most spacesuits use only 20 kPa (150 Torr) of pure oxygen. This pressure is high enough to prevent ebullism, but decompression sickness and gas embolisms can still occur if decompression rates are not managed. Rapid decompression can be much more dangerous than vacuum exposure itself. Even if the victim does not hold his or her breath, venting through the windpipe may be too slow to prevent the fatal rupture of the delicate alveoli of the lungs. Eardrums and sinuses may be ruptured by rapid decompression, soft tissues may bruise and seep blood, and the stress of shock will accelerate oxygen consumption, leading to hypoxia. Injuries caused by rapid decompression are called barotrauma. A pressure drop of 13 kPa (100 Torr), which produces no symptoms if it is gradual, may be fatal if it occurs suddenly. Some extremophile microorganisms, such as tardigrades, can survive vacuum conditions for periods of days or weeks.
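A rough arithmetic check on the spacesuit figure above: sea-level air supplies oxygen at a partial pressure of about 21 kPa, so a suit filled with pure oxygen at 20 kPa delivers a comparable amount of oxygen despite its much lower total pressure. The constants in this sketch are standard textbook values, used here only for illustration.

# Compare the oxygen partial pressure of sea-level air with a low-pressure pure-oxygen suit.
SEA_LEVEL_PRESSURE_KPA = 101.325
OXYGEN_FRACTION_IN_AIR = 0.209          # approximate mole fraction of O2 in dry air

def oxygen_partial_pressure(total_pressure_kpa, oxygen_fraction):
    """Partial pressure of oxygen, in kPa, for a gas mixture at the given total pressure."""
    return total_pressure_kpa * oxygen_fraction

print(oxygen_partial_pressure(SEA_LEVEL_PRESSURE_KPA, OXYGEN_FRACTION_IN_AIR))  # ~21.2 kPa in ordinary air
print(oxygen_partial_pressure(20.0, 1.0))                                       # 20 kPa in a pure-oxygen suit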
Physical sciences
Physics
null
32505
https://en.wikipedia.org/wiki/Vapor
Vapor
In physics, a vapor (American English) or vapour (Commonwealth English; see spelling differences) is a substance in the gas phase at a temperature lower than its critical temperature, which means that the vapor can be condensed to a liquid by increasing the pressure on it without reducing the temperature of the vapor. A vapor is different from an aerosol. An aerosol is a suspension of tiny particles of liquid, solid, or both within a gas. For example, water has a critical temperature of 374 °C (647 K), which is the highest temperature at which liquid water can exist at any pressure. In the atmosphere at ordinary temperatures, gaseous water (known as water vapor) will condense into a liquid if its partial pressure is increased sufficiently. A vapor may co-exist with a liquid (or a solid). When this is true, the two phases will be in equilibrium, and the partial pressure of the gas will be equal to the equilibrium vapor pressure of the liquid (or solid). Properties Vapor refers to a gas phase at a temperature where the same substance can also exist in the liquid or solid state, below the critical temperature of the substance. (For example, water has a critical temperature of 374 °C (647 K), which is the highest temperature at which liquid water can exist.) If the vapor is in contact with a liquid or solid phase, the two phases will be in a state of equilibrium. The term gas refers to a compressible fluid phase. Fixed gases are gases for which no liquid or solid can form at the temperature of the gas, such as air at typical ambient temperatures. A liquid or solid does not have to boil to release a vapor. Vapor is responsible for the familiar processes of cloud formation and condensation. It is commonly employed to carry out the physical processes of distillation and headspace extraction from a liquid sample prior to gas chromatography. The constituent molecules of a vapor possess vibrational, rotational, and translational motion. These motions are considered in the kinetic theory of gases. Vapor pressure The vapor pressure is the equilibrium pressure from a liquid or a solid at a specific temperature. The equilibrium vapor pressure of a liquid or solid is not affected by the amount of contact with the liquid or solid interface. The normal boiling point of a liquid is the temperature at which the vapor pressure is equal to normal atmospheric pressure. For two-phase systems (e.g., two liquid phases), the vapor pressures of the individual phases are equal. In the absence of stronger inter-species attractions between like-like or like-unlike molecules, the vapor pressure follows Raoult's law, which states that the partial pressure of each component is the product of the vapor pressure of the pure component and its mole fraction in the mixture. The total vapor pressure is the sum of the component partial pressures. Examples Perfumes contain chemicals that vaporize at different temperatures and at different rates in scent accords, known as notes. Atmospheric water vapor is found near the earth's surface, and may condense into small liquid droplets and form meteorological phenomena, such as fog, mist, and haar. Mercury-vapor lamps and sodium vapor lamps produce light from atoms in excited states. A flammable liquid does not itself burn when ignited; it is the vapor cloud above the liquid that burns, if the vapor's concentration is between the lower flammable limit (LFL) and the upper flammable limit (UFL) of the flammable liquid. E-cigarettes produce aerosols, not vapors.
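The Raoult's law statement above translates directly into a small calculation. The sketch below assumes an ideal two-component liquid mixture; the pure-component vapor pressures used in the example are invented for illustration, not measured values.

# Raoult's law for an ideal binary mixture: p_total = x_A * p_A_pure + x_B * p_B_pure.
def raoult_total_pressure(x_a, p_pure_a, p_pure_b):
    """Total vapor pressure (same units as the inputs) for mole fraction x_a of component A."""
    x_b = 1.0 - x_a
    partial_a = x_a * p_pure_a      # partial pressure contributed by component A
    partial_b = x_b * p_pure_b      # partial pressure contributed by component B
    return partial_a + partial_b

# Example: 40 mol% of a volatile component (pure vapor pressure 30 kPa)
# mixed with 60 mol% of a less volatile one (pure vapor pressure 10 kPa).
print(raoult_total_pressure(0.4, 30.0, 10.0))   # 0.4*30 + 0.6*10 = 18.0 kPa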
Measuring vapor Since it is in the gas phase, the amount of vapor present is quantified by the partial pressure of the gas. Also, vapors obey the barometric formula in a gravitational field, just as conventional atmospheric gases do.
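For an isothermal column of gas, the barometric formula mentioned above takes the form p(h) = p0·exp(-Mgh/RT). The sketch below evaluates it with standard-atmosphere constants; the 15 °C temperature and the chosen height are illustrative assumptions.

# Barometric formula: pressure of an isothermal gas column as a function of height.
import math

MOLAR_MASS_AIR = 0.02896     # kg/mol
GRAVITY = 9.80665            # m/s^2
GAS_CONSTANT = 8.314462      # J/(mol*K)

def pressure_at_height(p0_pa, height_m, temperature_k=288.15):
    """Pressure in pascals at the given height, assuming constant temperature."""
    exponent = -MOLAR_MASS_AIR * GRAVITY * height_m / (GAS_CONSTANT * temperature_k)
    return p0_pa * math.exp(exponent)

print(pressure_at_height(101325.0, 5500.0))   # roughly half of sea-level pressure (~53 kPa)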
Physical sciences
States of matter
null
32509
https://en.wikipedia.org/wiki/Vitamin%20C
Vitamin C
Vitamin C (also known as ascorbic acid and ascorbate) is a water-soluble vitamin found in citrus and other fruits, berries and vegetables. It is also a generic prescription medication and in some countries is sold as a non-prescription dietary supplement. As a therapy, it is used to prevent and treat scurvy, a disease caused by vitamin C deficiency. Vitamin C is an essential nutrient involved in the repair of tissue, the formation of collagen, and the enzymatic production of certain neurotransmitters. It is required for the functioning of several enzymes and is important for immune system function. It also functions as an antioxidant. Vitamin C may be taken by mouth or by intramuscular, subcutaneous or intravenous injection. Various health claims exist on the basis that moderate vitamin C deficiency increases disease risk, such as for the common cold, cancer or COVID-19. There are also claims of benefits from vitamin C supplementation in excess of the recommended dietary intake for people who are not considered vitamin C deficient. Vitamin C is generally well tolerated. Large doses may cause gastrointestinal discomfort, headache, trouble sleeping, and flushing of the skin. The United States Institute of Medicine recommends against consuming large amounts. Most animals are able to synthesize their own vitamin C. However, apes (including humans) and monkeys (but not all primates), most bats, most fish, some rodents, and certain other animals must acquire it from dietary sources because a gene for a synthesis enzyme has mutations that render it dysfunctional. Vitamin C was discovered in 1912, isolated in 1928, and in 1933, was the first vitamin to be chemically produced. Partly for its discovery, Albert Szent-Györgyi was awarded the 1937 Nobel Prize in Physiology or Medicine. Chemistry The name "vitamin C" always refers to the L-enantiomer of ascorbic acid and its oxidized form, dehydroascorbate (DHA). Therefore, unless written otherwise, "ascorbate" and "ascorbic acid" refer in the nutritional literature to L-ascorbate and L-ascorbic acid, respectively. Ascorbic acid is a weak sugar acid structurally related to glucose. In biological systems, ascorbic acid can be found only at low pH, but in solutions above pH 5 it is predominantly found in the ionized form, ascorbate. Many analytical methods have been developed for ascorbic acid detection. For example, the vitamin C content of a food sample such as fruit juice can be calculated by measuring the volume of the sample required to decolorize a solution of dichlorophenolindophenol (DCPIP) and then calibrating the result by comparison with a known concentration of vitamin C (a worked sketch of this comparison is given below). Deficiency Plasma vitamin C is the most widely applied test for vitamin C status. Adequate levels are defined as near 50 μmol/L. Hypovitaminosis of vitamin C is defined as less than 23 μmol/L, and deficiency as less than 11.4 μmol/L. For people 20 years of age or above, data from the US 2017–18 National Health and Nutrition Examination Survey showed mean serum concentrations of 53.4 μmol/L. The percentage of people reported as deficient was 5.9%. Globally, vitamin C deficiency is common in low- and middle-income countries, and not uncommon in high-income countries. In the latter, prevalence is higher in males than in females. Plasma levels are considered saturated at about 65 μmol/L, achieved by intakes of 100 to 200 mg/day, which are well above the recommended intakes.
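As referenced above, here is a minimal sketch of the DCPIP comparison: the juice and a standard ascorbic acid solution of known strength are titrated against identical aliquots of DCPIP, and equal decolorizing power implies equal amounts of vitamin C delivered. All volumes and concentrations in the example are invented for illustration.

# Sketch of the DCPIP comparison calculation: equal decolorizing power implies equal vitamin C.
def vitamin_c_mg_per_ml(standard_conc_mg_per_ml, standard_volume_ml, sample_volume_ml):
    """Vitamin C concentration of the sample, given both titrations decolorized the same DCPIP aliquot."""
    vitamin_c_delivered_mg = standard_conc_mg_per_ml * standard_volume_ml
    return vitamin_c_delivered_mg / sample_volume_ml

# Example: 2.0 mL of a 0.5 mg/mL standard and 4.0 mL of juice decolorize the same DCPIP solution.
print(vitamin_c_mg_per_ml(0.5, 2.0, 4.0))   # 0.25 mg of vitamin C per mL of juice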
Even higher oral intake does not further raise plasma nor tissue concentrations because absorption efficiency decreases and any excess that is absorbed is excreted in urine. Diagnostic testing Vitamin C content in plasma is used to determine vitamin status. For research purposes, concentrations can be assessed in leukocytes and tissues, which are normally maintained at an order of magnitude higher than in plasma via an energy-dependent transport system, depleted slower than plasma concentrations during dietary deficiency and restored faster during dietary repletion, but these analysis are difficult to measure, and hence not part of standard diagnostic testing. Diet Recommended consumption Recommendations for vitamin C intake by adults have been set by various national agencies: 40 mg/day: India National Institute of Nutrition, Hyderabad 45 mg/day or 300 mg/week: the World Health Organization 80 mg/day: the European Commission Council on nutrition labeling 90 mg/day (males) and 75 mg/day (females): Health Canada 2007 90 mg/day (males) and 75 mg/day (females): United States National Academy of Sciences 100 mg/day: Japan National Institute of Health and Nutrition 110 mg/day (males) and 95 mg/day (females): European Food Safety Authority In 2000, the chapter on Vitamin C in the North American Dietary Reference Intake was updated to give the Recommended Dietary Allowance (RDA) as 90 milligrams per day for adult men, 75 mg/day for adult women, and setting a Tolerable upper intake level (UL) for adults of 2,000 mg/day. The table (right) shows RDAs for the United States and Canada for children, and for pregnant and lactating women, as well as the ULs for adults. For the European Union, the EFSA set higher recommendations for adults, and also for children: 20 mg/day for ages 1–3, 30 mg/day for ages 4–6, 45 mg/day for ages 7–10, 70 mg/day for ages 11–14, 100 mg/day for males ages 15–17, 90 mg/day for females ages 15–17. For pregnancy 100 mg/day; for lactation 155 mg/day. Cigarette smokers and people exposed to secondhand smoke have lower serum vitamin C levels than nonsmokers. The thinking is that inhalation of smoke causes oxidative damage, depleting this antioxidant vitamin. The US Institute of Medicine estimated that smokers need 35 mg more vitamin C per day than nonsmokers, but did not formally establish a higher RDA for smokers. The US National Center for Health Statistics conducts biannual National Health and Nutrition Examination Survey (NHANES) to assess the health and nutritional status of adults and children in the United States. Some results are reported as What We Eat In America. The 2013–2014 survey reported that for adults ages 20 years and older, men consumed on average 83.3 mg/d and women 75.1 mg/d. This means that half the women and more than half the men are not consuming the RDA for vitamin C. The same survey stated that about 30% of adults reported they consumed a vitamin C dietary supplement or a multi-vitamin/mineral supplement that included vitamin C, and that for these people total consumption was between 300 and 400 mg/d. Tolerable upper intake level In 2000, the Institute of Medicine of the US National Academy of Sciences set a Tolerable upper intake level (UL) for adults of 2,000 mg/day. The amount was chosen because human trials had reported diarrhea and other gastrointestinal disturbances at intakes of greater than 3,000 mg/day. This was the Lowest-Observed-Adverse-Effect Level (LOAEL), meaning that other adverse effects were observed at even higher intakes. 
ULs are progressively lower for younger and younger children. In 2006, the European Food Safety Authority (EFSA) also pointed out the disturbances at that dose level, but reached the conclusion that there was not sufficient evidence to set a UL for vitamin C, as did the Japan National Institute of Health and Nutrition in 2010. Food labeling For US food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For vitamin C labeling purposes, 100% of the Daily Value was 60 mg, but as of May 27, 2016, it was revised to 90 mg to bring it into agreement with the RDA. A table of the old and new adult daily values is provided at Reference Daily Intake. European Union regulations require that labels declare energy, protein, fat, saturated fat, carbohydrates, sugars, and salt. Voluntary nutrients may be shown if present in significant amounts. Instead of Daily Values, amounts are shown as percent of Reference Intakes (RIs). For vitamin C, 100% RI was set at 80 mg in 2011. Sources Although also present in other plant-derived foods, the richest natural sources of vitamin C are fruits and vegetables. Vitamin C is the most widely taken dietary supplement. Plant sources The following table is approximate and shows the relative abundance in different raw plant sources. The amount is given in milligrams per 100 grams of the edible portion of the fruit or vegetable: Animal sources Compared to plant sources, animal-sourced foods do not provide so great an amount of vitamin C, and what there is is largely destroyed by the heat used when it is cooked. For example, raw chicken liver contains 17.9 mg/100 g, but fried, the content is reduced to 2.7 mg/100 g. Vitamin C is present in human breast milk at 5.0 mg/100 g. Cow's milk contains 1.0 mg/100 g, but the heat of pasteurization destroys it. Food preparation Vitamin C chemically decomposes under certain conditions, many of which may occur during the cooking of food. Vitamin C concentrations in various food substances decrease with time in proportion to the temperature at which they are stored. Cooking can reduce the vitamin C content of vegetables by around 60%, possibly due to increased enzymatic destruction. Longer cooking times may add to this effect. Another cause of vitaminC loss from food is leaching, which transfers vitaminC to the cooking water, which is decanted and not consumed. Supplements Vitamin C dietary supplements are available as tablets, capsules, drink mix packets, in multi-vitamin/mineral formulations, in antioxidant formulations, and as crystalline powder. Vitamin C is also added to some fruit juices and juice drinks. Tablet and capsule content ranges from 25 mg to 1500 mg per serving. The most commonly used supplement compounds are ascorbic acid, sodium ascorbate and calcium ascorbate. Vitamin C molecules can also be bound to the fatty acid palmitate, creating ascorbyl palmitate, or else incorporated into liposomes. Food fortification Countries fortify foods with nutrients to address known deficiencies. While many countries mandate or have voluntary programs to fortify wheat flour, maize (corn) flour or rice with vitamins, none include vitamin C in those programs. As described in Vitamin C Fortification of Food Aid Commodities (1997), the United States provides rations to international food relief programs, later under the auspices of the Food for Peace Act and the Bureau for Humanitarian Assistance. 
Vitamin C is added to corn-soy blend and wheat-soy blend products at 40 mg/100 grams. (along with minerals and other vitamins). Supplemental rations of these highly fortified, blended foods are provided to refugees and displaced persons in camps and to beneficiaries of development feeding programs that are targeted largely toward mothers and children. The report adds: "The stability of vitamin C (L-ascorbic acid) is of concern because this is one of the most labile vitamins in foods. Its main loss during processing and storage is from oxidation, which is accelerated by light, oxygen, heat, increased pH, high moisture content (water activity), and the presence of copper or ferrous salts. To reduce oxidation, the vitamin C used in commodity fortification is coated with ethyl cellulose (2.5 percent). Oxidative losses also occur during food processing and preparation, and additional vitamin C may be lost if it dissolves into cooking liquid and is then discarded." Food preservation additive Ascorbic acid and some of its salts and esters are common additives added to various foods, such as canned fruits, mostly to slow oxidation and enzymatic browning. It may be used as a flour treatment agent used in breadmaking. As food additives, they are assigned E numbers, with safety assessment and approval the responsibility of the European Food Safety Authority. The relevant E numbers are: E300 ascorbic acid (approved for use as a food additive in the UK, US Canada, Australia and New Zealand) E301 sodium ascorbate (approved for use as a food additive in the UK, US, Canada, Australia and New Zealand) E302 calcium ascorbate (approved for use as a food additive in the UK, US Canada, Australia and New Zealand) E303 potassium ascorbate (approved in Australia and New Zealand, but not in the UK, US or Canada) E304 fatty acid esters of ascorbic acid such as ascorbyl palmitate (approved for use as a food additive in the UK, US, Canada, Australia and New Zealand) The stereoisomers of Vitamin C have a similar effect in food despite their lack of efficacy in human scurvy. They include erythorbic acid and its sodium salt (E315, E316). Pharmacology Pharmacodynamics is the study of how the drug – in this instance vitamin C – affects the organism, whereas pharmacokinetics is the study of how an organism affects the drug. Pharmacodynamics Pharmacodynamics includes enzymes for which vitamin C is a cofactor, with function potentially compromised in a deficiency state, and any enzyme cofactor or other physiological function affected by administration of vitamin C, orally or injected, in excess of normal requirements. At normal physiological concentrations, vitamin C serves as an enzyme substrate or cofactor and an electron donor antioxidant. The enzymatic functions include the synthesis of collagen, carnitine, and neurotransmitters; the synthesis and catabolism of tyrosine; and the metabolism of microsomes. In nonenzymatic functions it acts as a reducing agent, donating electrons to oxidized molecules and preventing oxidation in order to keep iron and copper atoms in their reduced states. At non-physiological concentrations achieved by intravenous dosing, vitamin C may function as a pro-oxidant, with therapeutic toxicity against cancer cells. Vitamin C functions as a cofactor for the following enzymes: Three groups of enzymes (prolyl-3-hydroxylases, prolyl-4-hydroxylases, and lysyl hydroxylases) that are required for the hydroxylation of proline and lysine in the synthesis of collagen. 
These reactions add hydroxyl groups to the amino acids proline or lysine in the collagen molecule via prolyl hydroxylase and lysyl hydroxylase, both requiring vitamin C as a cofactor. The role of vitamin C as a cofactor is to keep the iron in prolyl hydroxylase and lysyl hydroxylase in its reduced Fe2+ state, reducing it back from Fe3+ whenever it becomes oxidized during the reaction. Hydroxylation allows the collagen molecule to assume its triple helix structure, and thus vitamin C is essential to the development and maintenance of scar tissue, blood vessels, and cartilage. Two enzymes (ε-N-trimethyl-L-lysine hydroxylase and γ-butyrobetaine hydroxylase) are necessary for synthesis of carnitine. Carnitine is essential for the transport of fatty acids into mitochondria for ATP generation. Hypoxia-inducible factor-proline dioxygenase enzymes (isoforms: EGLN1, EGLN2, and EGLN3) allow cells to respond physiologically to low concentrations of oxygen. Dopamine beta-hydroxylase participates in the biosynthesis of norepinephrine from dopamine. Peptidylglycine alpha-amidating monooxygenase amidates peptide hormones by removing the glyoxylate residue from their C-terminal glycine residues. This increases peptide hormone stability and activity. As an antioxidant, ascorbate scavenges reactive oxygen and nitrogen compounds, thus neutralizing the potential tissue damage of these free radical compounds. Dehydroascorbate, the oxidized form, is then recycled back to ascorbate by endogenous antioxidants such as glutathione. In the eye, ascorbate is thought to protect against photolytically generated free-radical damage; higher plasma ascorbate is associated with lower risk of cataracts. Ascorbate may also provide antioxidant protection indirectly by regenerating other biological antioxidants such as α-tocopherol back to an active state. In addition, ascorbate also functions as a non-enzymatic reducing agent for mixed-function oxidases in the microsomal drug-metabolizing system that inactivates a wide variety of substrates such as drugs and environmental carcinogens. Pharmacokinetics Ascorbic acid is absorbed in the body by both active transport and passive diffusion. Approximately 70%–90% of vitamin C is absorbed by active transport when intakes of 30–180 mg/day from a combination of food sources and moderate-dose dietary supplements such as a multi-vitamin/mineral product are consumed. However, when large amounts are consumed, such as a vitamin C dietary supplement, the active transport system becomes saturated, and while the total amount being absorbed continues to increase with dose, absorption efficiency falls to less than 50%. Active transport is managed by Sodium-Ascorbate Co-Transporter proteins (SVCTs) and Hexose Transporter proteins (GLUTs). SVCT1 and SVCT2 import ascorbate across plasma membranes. The Hexose Transporter proteins GLUT1, GLUT3 and GLUT4 transfer only the oxidized dehydroascorbic acid (DHA) form of vitamin C. The amount of DHA found in plasma and tissues under normal conditions is low, as cells rapidly reduce DHA to ascorbate. SVCTs are the predominant system for vitamin C transport within the body. In both vitamin C synthesizers (example: rat) and non-synthesizers (example: human), cells maintain ascorbic acid concentrations much higher than the approximately 50 micromoles/liter (μmol/L) found in plasma. For example, the ascorbic acid content of pituitary and adrenal glands can exceed 2,000 μmol/L, and muscle is at 200–300 μmol/L.
The known coenzymatic functions of ascorbic acid do not require such high concentrations, so there may be other, as yet unknown functions. A consequence of this high concentration in organs is that plasma vitamin C is not a good indicator of whole-body status, and people may vary in the amount of time needed to show symptoms of deficiency when consuming a diet very low in vitamin C. Excretion (via urine) is as ascorbic acid and metabolites. The fraction that is excreted as unmetabolized ascorbic acid increases as intake increases. In addition, ascorbic acid converts (reversibly) to DHA and from that compound non-reversibly to 2,3-diketogulonate and then oxalate. These three metabolites are also excreted via urine. During times of low dietary intake, vitamin C is reabsorbed by the kidneys rather than excreted. This salvage process delays onset of deficiency. Humans are better than guinea pigs at converting DHA back to ascorbate, and thus take much longer to become vitamin C deficient. Synthesis Most animals and plants are able to synthesize vitamin C through a sequence of enzyme-driven steps, which convert monosaccharides to vitamin C. Yeasts do not make L-ascorbic acid but rather its stereoisomer, erythorbic acid. In plants, synthesis is accomplished through the conversion of mannose or galactose to ascorbic acid. In animals, the starting material is glucose. In some species that synthesize ascorbate in the liver (including mammals and perching birds), the glucose is extracted from glycogen; ascorbate synthesis is a glycogenolysis-dependent process. In humans and in animals that cannot synthesize vitamin C, the enzyme L-gulonolactone oxidase (GULO), which catalyzes the last step in the biosynthesis, is highly mutated and non-functional. Animal synthesis There is some information on serum vitamin C concentrations maintained in animal species that are able to synthesize vitamin C. One study of several breeds of dogs reported an average of 35.9 μmol/L. A report on goats, sheep and cattle reported ranges of 100–110, 265–270 and 160–350 μmol/L, respectively. The biosynthesis of ascorbic acid in vertebrates starts with the formation of UDP-glucuronic acid. UDP-glucuronic acid is formed when UDP-glucose undergoes two oxidations catalyzed by the enzyme UDP-glucose 6-dehydrogenase. UDP-glucose 6-dehydrogenase uses the co-factor NAD+ as the electron acceptor. The transferase UDP-glucuronate pyrophosphorylase removes a UMP, and glucuronokinase, with the cofactor ADP, removes the final phosphate, leading to D-glucuronic acid. The aldehyde group of this compound is reduced to a primary alcohol using the enzyme glucuronate reductase and the cofactor NADPH, yielding L-gulonic acid. This is followed by lactone formation, utilizing the hydrolase gluconolactonase, between the carbonyl on C1 and the hydroxyl group on C4. L-Gulonolactone then reacts with oxygen, catalyzed by the enzyme L-gulonolactone oxidase (which is nonfunctional in humans and other Haplorrhini primates; see Unitary pseudogenes) and the cofactor FAD+. This reaction produces 2-oxogulonolactone (2-keto-gulonolactone), which spontaneously undergoes enolization to form ascorbic acid. Reptiles and older orders of birds make ascorbic acid in their kidneys. Recent orders of birds and most mammals make ascorbic acid in their liver. Non-synthesizers Some mammals have lost the ability to synthesize vitamin C, including simians and tarsiers, which together make up one of two major primate suborders, Haplorhini. This group includes humans.
The other more primitive primates (Strepsirrhini) have the ability to make vitamin C. Synthesis does not occur in some species in the rodent family Caviidae, which includes guinea pigs and capybaras, but does occur in other rodents, including rats and mice. Synthesis does not occur in most bat species, but there are at least two species, frugivorous bat Rousettus leschenaultii and insectivorous bat Hipposideros armiger, that retain (or regained) their ability of vitamin C production. A number of species of passerine birds also do not synthesize, but not all of them, and those that do not are not clearly related; it has been proposed that the ability was lost separately a number of times in birds. In particular, the ability to synthesize vitamin C is presumed to have been lost and then later re-acquired in at least two cases. The ability to synthesize vitaminC has also been lost in about 96% of extant fish (the teleosts). On a milligram consumed per kilogram of body weight basis, simian non-synthesizer species consume the vitamin in amounts 10 to 20 times higher than what is recommended by governments for humans. This discrepancy constituted some of the basis of the controversy on human recommended dietary allowances being set too low. However, simian consumption does not indicate simian requirements. Merck's veterinary manual states that daily intake of vitamin C at 3–6 mg/kg prevents scurvy in non-human primates. By way of comparison, across several countries, the recommended dietary intake for adult humans is in the range of 1–2 mg/kg. Evolution of animal synthesis Ascorbic acid is a common enzymatic cofactor in mammals used in the synthesis of collagen, as well as a powerful reducing agent capable of rapidly scavenging a number of reactive oxygen species (ROS). Given that ascorbate has these important functions, it is surprising that the ability to synthesize this molecule has not always been conserved. In fact, anthropoid primates, Cavia porcellus (guinea pigs), teleost fishes, most bats, and some passerine birds have all independently lost the ability to internally synthesize vitamin C in either the kidney or the liver. In all of the cases where genomic analysis was done on an ascorbic acid auxotroph, the origin of the change was found to be a result of loss-of-function mutations in the gene that encodes L-gulono-γ-lactone oxidase, the enzyme that catalyzes the last step of the ascorbic acid pathway outlined above. One explanation for the repeated loss of the ability to synthesize vitamin C is that it was the result of genetic drift; assuming that the diet was rich in vitaminC, natural selection would not act to preserve it. In the case of the simians, it is thought that the loss of the ability to make vitamin C may have occurred much farther back in evolutionary history than the emergence of humans or even apes, since it evidently occurred soon after the appearance of the first primates, yet sometime after the split of early primates into the two major suborders Haplorrhini (which cannot make vitamin C) and its sister suborder of non-tarsier prosimians, the Strepsirrhini ("wet-nosed" primates), which retained the ability to make vitamin C. According to molecular clock dating, these two suborder primate branches parted ways about 63 to 60 million years ago. 
Approximately three to five million years later (58 million years ago), only a short time afterward from an evolutionary perspective, the infraorder Tarsiiformes, whose only remaining family is that of the tarsier (Tarsiidae), branched off from the other haplorrhines. Since tarsiers also cannot make vitamin C, this implies the mutation had already occurred, and thus must have occurred between these two marker points (63 to 58 million years ago). It has also been noted that the loss of the ability to synthesize ascorbate strikingly parallels the inability to break down uric acid, also a characteristic of primates. Uric acid and ascorbate are both strong reducing agents. This has led to the suggestion that, in higher primates, uric acid has taken over some of the functions of ascorbate. Plant synthesis There are many different biosynthesis pathways to ascorbic acid in plants. Most proceed through products of glycolysis and other metabolic pathways. For example, one pathway utilizes plant cell wall polymers. The principal plant ascorbic acid biosynthesis pathway seems to be via L-galactose. The enzyme L-galactose dehydrogenase catalyzes the overall oxidation to the lactone and isomerization of the lactone to the C4-hydroxyl group, resulting in L-galactono-1,4-lactone. L-Galactono-1,4-lactone then reacts with the mitochondrial flavoenzyme L-galactonolactone dehydrogenase to produce ascorbic acid. L-Ascorbic acid has a negative feedback on L-galactose dehydrogenase in spinach. Ascorbic acid efflux by embryos of dicot plants is a well-established mechanism of iron reduction and a step obligatory for iron uptake. All plants synthesize ascorbic acid. Ascorbic acid functions as a cofactor for enzymes involved in photosynthesis and in the synthesis of plant hormones, and as an antioxidant and regenerator of other antioxidants. Plants use multiple pathways to synthesize vitamin C. The major pathway starts with glucose, fructose or mannose (all simple sugars) and proceeds to L-galactose, L-galactonolactone and ascorbic acid. This biosynthesis is regulated following a diurnal rhythm. Enzyme expression peaks in the morning, supporting biosynthesis for when mid-day sunlight intensity demands high ascorbic acid concentrations. Minor pathways may be specific to certain parts of plants; these can be either identical to the vertebrate pathway (including the GLO enzyme), or start with inositol and get to ascorbic acid via L-galactonic acid to L-galactonolactone. Industrial synthesis Vitamin C can be produced from glucose by two main routes. The no longer utilized Reichstein process, developed in the 1930s, used a single fermentation followed by a purely chemical route. The modern two-step fermentation process, originally developed in China in the 1960s, uses additional fermentation to replace part of the later chemical stages. The Reichstein process and the modern two-step fermentation processes both use glucose as the starting material, convert that to sorbitol, and then to sorbose using fermentation. The two-step fermentation process then converts sorbose to 2-keto-l-gulonic acid (KGA) through another fermentation step, avoiding an extra intermediate. Both processes yield approximately 60% vitamin C from the glucose starting point. Researchers are exploring means for one-step fermentation. China produces about 70% of the global vitamin C market. The rest is split among the European Union, India and North America. The global market is expected to exceed 141 thousand metric tons in 2024.
Cost per metric ton (1000 kg) in US dollars was $2,220 in Shanghai, $2,850 in Hamburg and $3,490 in the US. Health effects Vitamin C has a definitive role in treating scurvy, which is a disease caused by vitamin C deficiency. Beyond that, a role for vitamin C as prevention or treatment for various diseases is disputed, with reviews often reporting conflicting results. No effect of vitamin C supplementation on overall mortality has been reported. It is on the World Health Organization's List of Essential Medicines and on the World Health Organization's Model Formulary. In 2022, it was the 226th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Scurvy Scurvy is a disease resulting from a deficiency of vitamin C. Without this vitamin, collagen made by the body is too unstable to perform its function and several other enzymes in the body do not operate correctly. Early symptoms are malaise and lethargy, progressing to shortness of breath, bone pain and susceptibility to bruising. As the disease progresses, it is characterized by spots on and bleeding under the skin and bleeding gums. The skin lesions are most abundant on the thighs and legs. A person with the ailment looks pale, feels depressed, and is partially immobilized. In advanced scurvy there is fever, old wounds may reopen and suppurate, and there may be loss of teeth, convulsions and, eventually, death. Until quite late in the disease the damage is reversible, as healthy collagen replaces the defective collagen with vitamin C repletion. Notable human dietary studies of experimentally induced scurvy were conducted on conscientious objectors during World War II in Britain and on Iowa state prisoners in the late 1960s to the 1980s. Men in the prison study developed the first signs of scurvy about four weeks after starting the vitamin C-free diet, whereas in the earlier British study, six to eight months were required, possibly due to the pre-loading of this group with a 70 mg/day supplement for six weeks before the scorbutic diet was fed. Men in both studies had blood levels of ascorbic acid too low to be accurately measured by the time they developed signs of scurvy. These studies both reported that all obvious symptoms of scurvy could be completely reversed by supplementation of only 10 mg a day. Scurvy can be treated with vitamin C-containing foods, dietary supplements, or injection. Sepsis People in sepsis may have micronutrient deficiencies, including low levels of vitamin C. Intravenous doses much higher than the RDA appear to be needed to maintain normal plasma concentrations in people with sepsis, as the body's demand for vitamin C may increase significantly due to the heightened inflammatory response and oxidative stress. Sepsis mortality may be reduced with administration of intravenous vitamin C. Common cold Research on vitamin C in the common cold has been divided into effects on prevention, duration, and severity. Oral intakes of more than 200 mg/day taken on a regular basis were not effective in preventing the common cold. Restricting the analysis to trials that used at least 1000 mg/day also showed no prevention benefit. However, taking a vitamin C supplement on a regular basis did reduce the average duration of the illness by 8% in adults and 14% in children, and also reduced the severity of colds. Vitamin C taken on a regular basis reduced the duration of severe symptoms but had no effect on the duration of mild symptoms.
Therapeutic use, meaning that the vitamin was not started until people began to feel the onset of a cold, had no effect on the duration or severity of the illness. Vitamin C distributes readily in high concentrations into immune cells, promotes natural killer cell activities, promotes lymphocyte proliferation, and is depleted quickly during infections, effects suggesting a prominent role in immune system function. The European Food Safety Authority concluded that there is a cause-and-effect relationship between the dietary intake of vitamin C and functioning of a normal immune system in adults and in children under three years of age. COVID-19 During March through July 2020, vitamin C was the subject of more US FDA warning letters than any other ingredient for claims for prevention and/or treatment of COVID-19. In April 2021, the US National Institutes of Health (NIH) COVID-19 Treatment Guidelines stated that "there are insufficient data to recommend either for or against the use of vitamin C for the prevention or treatment of COVID-19." In an update posted December 2022, the NIH position was unchanged: There is insufficient evidence for the COVID-19 Treatment Guidelines Panel (the Panel) to recommend either for or against the use of vitamin C for the treatment of COVID-19 in nonhospitalized patients. There is insufficient evidence for the Panel to recommend either for or against the use of vitamin C for the treatment of COVID-19 in hospitalized patients. For people hospitalized with severe COVID-19 there are reports of a significant reduction in the risk of all-cause, in-hospital mortality with the administration of vitamin C relative to no vitamin C. There were no significant differences in ventilation incidence, hospitalization duration or length of intensive care unit stay between the two groups. The majority of the trials incorporated into these meta-analyses used intravenous administration of the vitamin. Acute kidney injury was less frequent in people treated with vitamin C. There were no differences in the frequency of other adverse events due to the vitamin. The conclusion was that further large-scale studies are needed to affirm its mortality benefits before issuing updated guidelines and recommendations. Cancer Higher vitamin C intake appears to reduce the risk for lung cancer. There is no evidence that vitamin C supplementation reduces the risk of prostate cancer, colorectal cancer or breast cancer. Cardiovascular disease There is no evidence that vitamin C supplementation decreases the risk of cardiovascular disease, although there may be an association between higher circulating vitamin C levels or dietary vitamin C and a lower risk of stroke. There is a positive effect of vitamin C on endothelial dysfunction when taken at doses greater than 500 mg per day. (The endothelium is a layer of cells that lines the interior surface of blood vessels.) Blood pressure Serum vitamin C was reported to be 15.13 μmol/L lower in people with hypertension compared to normotensives. The vitamin was inversely associated with both systolic blood pressure (SBP) and diastolic blood pressure (DBP). Oral supplementation of the vitamin resulted in a very modest but statistically significant decrease in SBP in people with hypertension. The proposed explanation is that vitamin C increases intracellular concentrations of tetrahydrobiopterin, an endothelial nitric oxide synthase cofactor that promotes the production of nitric oxide, which is a potent vasodilator.
Vitamin C supplementation might also reverse the effect of the nitric oxide synthase inhibitor NG-monomethyl-L-arginine, and there is also evidence that vitamin C directly enhances the biological activity of nitric oxide. Type 2 diabetes There are contradictory reviews. According to one, vitamin C supplementation cannot be recommended for management of type 2 diabetes. However, another reported that supplementation with high doses of vitamin C can decrease blood glucose, insulin and hemoglobin A1c. Iron deficiency One of the causes of iron-deficiency anemia is reduced absorption of iron. Iron absorption can be enhanced through ingestion of vitamin C alongside iron-containing food or supplements. Vitamin C helps to keep iron in the reduced ferrous state, which is more soluble and more easily absorbed. It also chelates iron into a soluble complex. It specifically helps the absorption of non-heme iron, which is found in non-meat sources and absorbed via DMT1. Alzheimer's disease Lower plasma vitamin C concentrations were reported in people with Alzheimer's disease. Reviews do not report on supplement intervention clinical trials. Eye health Higher dietary intake of vitamin C was associated with lower risk of age-related cataracts. Vitamin C supplementation did not prevent age-related macular degeneration. Periodontal disease Low intake and low serum concentration were associated with greater progression of periodontal disease. Adverse effects Oral intake of vitamin C supplements in excess of requirements is poorly absorbed, and excess amounts in the blood are rapidly excreted in the urine, so vitamin C exhibits low acute toxicity. More than two to three grams, consumed orally, may cause nausea, abdominal cramps and diarrhea. These effects are attributed to the osmotic effect of unabsorbed vitamin C passing through the intestine. In theory, high vitamin C intake may cause excessive absorption of iron. A summary of reviews of supplementation in healthy subjects did not report this problem, but left as untested the possibility that individuals with hereditary hemochromatosis might be adversely affected. There is a longstanding belief among the mainstream medical community that vitamin C increases risk of kidney stones. "Reports of kidney stone formation associated with excess ascorbic acid intake are limited to individuals with renal disease". A 2003 review stated that "data from epidemiological studies do not support an association between excess ascorbic acid intake and kidney stone formation in apparently healthy individuals", although one large, multi-year trial published in 2013 did report a nearly two-fold increase in kidney stones in men who regularly consumed a vitamin C supplement. There is extensive research on the purported benefits of intravenous vitamin C for treatment of sepsis, severe COVID-19 and cancer. Reviews list trials with doses as high as 24 grams per day. Concerns about possible adverse effects are that intravenous high-dose vitamin C leads to a supraphysiological level of vitamin C followed by oxidative degradation to dehydroascorbic acid and hence to oxalate, increasing the risk of oxalate kidney stones and oxalate nephropathy. The risk may be higher in people with renal impairment, as the kidneys normally excrete excess vitamin C efficiently. Second, treatment with high dose vitamin C should be avoided in patients with glucose-6-phosphate dehydrogenase deficiency as it can lead to acute hemolysis.
Third, treatment might interfere with the accuracy of glucometer measurement of blood glucose levels, as both vitamin C and glucose have similar molecular structures, which could lead to false high blood glucose readings. Despite all these concerns, meta-analyses of patients in intensive care for sepsis, septic shock, COVID-19 and other acute conditions reported no increase in new-onset kidney stones, acute kidney injury or requirement for renal replacement therapy for patients receiving short-term, high-dose, intravenous vitamin C treatment. This suggests that intravenous vitamin C is safe under these short-term applications. History Scurvy was known to Hippocrates, described in book two of his Prorrheticorum and in his Liber de internis affectionibus, and cited by James Lind. Symptoms of scurvy were also described by Pliny the Elder and by Strabo in Geographicorum, book 16, both cited in the 1881 International Encyclopedia of Surgery. Scurvy at sea In the 1497 expedition of Vasco da Gama, the curative effects of citrus fruit were known. In the 1500s, Portuguese sailors put in to the island of Saint Helena to avail themselves of planted vegetable gardens and wild-growing fruit trees. Authorities occasionally recommended plant food to prevent scurvy during long sea voyages. John Woodall, the first surgeon to the British East India Company, recommended the preventive and curative use of lemon juice in his 1617 book, The Surgeon's Mate. In 1734, the Dutch writer Johann Bachstrom gave the firm opinion, "scurvy is solely owing to a total abstinence from fresh vegetable food, and greens." Scurvy had long been a principal killer of sailors during long sea voyages. According to Jonathan Lamb, "In 1499, Vasco da Gama lost 116 of his crew of 170; In 1520, Magellan lost 208 out of 230; ... all mainly to scurvy." The first attempt to give a scientific basis for the cause of this disease was by a ship's surgeon in the Royal Navy, James Lind. While at sea in May 1747, Lind provided some crew members with two oranges and one lemon per day, in addition to normal rations, while others continued on cider, vinegar, sulfuric acid or seawater, along with their normal rations, in one of the world's first controlled experiments. The results showed that citrus fruits prevented the disease. Lind published his work in 1753 in his Treatise on the Scurvy. Fresh fruit was expensive to keep on board, whereas boiling it down to juice allowed easy storage, but destroyed the vitamin (especially if it was boiled in copper kettles). It was 1796 before the British navy adopted lemon juice as standard issue at sea. In 1845, ships in the West Indies were provided with lime juice instead, and in 1860 lime juice was used throughout the Royal Navy, giving rise to the American use of the nickname "limey" for the British. Captain James Cook had previously demonstrated the advantages of carrying "Sour krout" on board by taking his crew on a 1772–75 Pacific Ocean voyage without losing any of his men to scurvy. For his report on his methods, the British Royal Society awarded him the Copley Medal in 1776. The name antiscorbutic was used in the eighteenth and nineteenth centuries for foods known to prevent scurvy. These foods included lemons, limes, oranges, sauerkraut, cabbage, malt, and portable soup. In 1928, the Canadian Arctic anthropologist Vilhjalmur Stefansson showed that the Inuit avoided scurvy on a diet largely of raw meat.
Later studies on traditional food diets of the Yukon First Nations, Dene, Inuit, and Métis of Northern Canada showed that their daily intake of vitamin C averaged between 52 and 62 mg/day. Discovery Vitamin C was discovered in 1912, isolated in 1928 and synthesized in 1933, making it the first vitamin to be synthesized. Shortly thereafter Tadeus Reichstein succeeded in synthesizing the vitamin in bulk by what is now called the Reichstein process. This made possible the inexpensive mass-production of vitamin C. In 1934, Hoffmann–La Roche bought the Reichstein process patent, trademarked synthetic vitamin C under the brand name Redoxon, and began to market it as a dietary supplement. In 1907, a laboratory animal model which would help to identify the antiscorbutic factor was serendipitously discovered by the Norwegian physicians Axel Holst and Theodor Frølich, who, when studying shipboard beriberi, fed guinea pigs their test diet of grains and flour and were surprised when scurvy resulted instead of beriberi. Unknown at that time, this species did not make its own vitamin C (being a caviomorph), whereas mice and rats do. In 1912, the Polish biochemist Casimir Funk developed the concept of vitamins. One of these was thought to be the anti-scorbutic factor. In 1928, this was referred to as "water-soluble C", although its chemical structure had not been determined. From 1928 to 1932, Albert Szent-Györgyi and Joseph L. Svirbely's Hungarian team, and Charles Glen King's American team, identified the anti-scorbutic factor. Szent-Györgyi isolated hexuronic acid from animal adrenal glands, and suspected it to be the antiscorbutic factor. In late 1931, Szent-Györgyi gave Svirbely the last of his adrenal-derived hexuronic acid with the suggestion that it might be the anti-scorbutic factor. By the spring of 1932, King's laboratory had proven this, but published the result without giving Szent-Györgyi credit for it. This led to a bitter dispute over priority. Walter Norman Haworth chemically identified the vitamin as L-hexuronic acid, proving this by synthesis in 1933. Haworth and Szent-Györgyi proposed that L-hexuronic acid be named a-scorbic acid, and chemically L-ascorbic acid, in honor of its activity against scurvy. The term's etymology is from Latin, "a-" meaning away, or off from, while -scorbic is from Medieval Latin scorbuticus (pertaining to scurvy), cognate with Old Norse skyrbjugr, French scorbut, Dutch scheurbuik and Low German scharbock. Partly for this discovery, Szent-Györgyi was awarded the 1937 Nobel Prize in Medicine, and Haworth shared that year's Nobel Prize in Chemistry. In 1957, J. J. Burns showed that some mammals are susceptible to scurvy as their liver does not produce the enzyme L-gulonolactone oxidase, the last of the chain of four enzymes that synthesize vitamin C. American biochemist Irwin Stone was the first to exploit vitamin C for its food preservative properties. He later developed the idea that humans possess a mutated form of the L-gulonolactone oxidase coding gene. Stone introduced Linus Pauling to the theory that humans needed to consume vitamin C in quantities far higher than what was considered a recommended daily intake in order to optimize health. In 2008, researchers discovered that in humans and other primates the red blood cells have evolved a mechanism to more efficiently utilize the vitamin C present in the body by recycling oxidized L-dehydroascorbic acid (DHA) back into ascorbic acid for reuse by the body.
The mechanism was not found to be present in mammals that synthesize their own vitamin C. History of large dose therapies Vitamin C megadosage is a term describing the consumption or injection of vitamin C in doses comparable to or higher than the amounts produced by the livers of mammals which are able to synthesize vitamin C. An argument for this, although not the actual term, was described in 1970 in an article by Linus Pauling. Briefly, his position was that for optimal health, humans should be consuming at least 2,300 mg/day to compensate for the inability to synthesize vitamin C. The recommendation also fell into the consumption range for gorillas — a non-synthesizing near-relative to humans. A second argument for high intake is that serum ascorbic acid concentrations increase as intake increases until it plateaus at about 190 to 200 micromoles per liter (μmol/L) once consumption exceeds 1,250 milligrams. As noted, government recommendations are a range of 40 to 110 mg/day and normal plasma is approximately 50 μmol/L, so "normal" is about 25% of what can be achieved when oral consumption is in the proposed megadose range. Pauling popularized the concept of high dose vitamin C as prevention and treatment of the common cold in 1970. A few years later he proposed that vitamin C would prevent cardiovascular disease, and that 10 grams/day, initially administered intravenously and thereafter orally, would cure late-stage cancer. Mega-dosing with ascorbic acid has other champions, among them chemist Irwin Stone and the controversial Matthias Rath and Patrick Holford, who both have been accused of making unsubstantiated treatment claims for treating cancer and HIV infection. The idea that large amounts of intravenous ascorbic acid can be used to treat late-stage cancer or ameliorate the toxicity of chemotherapy is — some forty years after Pauling's seminal paper — still considered unproven and still in need of high quality research. Research directions Cancer research There is research investigating whether high dose intravenous vitamin C administration as a co-treatment will suppress cancer stem cells, which are responsible for tumor recurrence, metastasis and chemoresistance. Skin aging research There is also ongoing research on topical application of vitamin C to prevent signs of skin aging. Human skin physiologically contains small amounts of vitamin C, which supports collagen synthesis, decreases collagen degradation, and assists in antioxidant protection against UV-induced photo-aging, including photocarcinogenesis. This knowledge is often used as a rationale for the marketing of vitamin C as a topical "serum" ingredient to prevent or treat facial skin aging, melasma (dark pigmented spots), and wrinkles; however, these claims are unsubstantiated and are not supported by research conducted so far; the supposed efficacy of topical treatment as opposed to oral intake is poorly understood. The purported mechanism on supposed benefit of topical vitamin C application to slow skin aging is that vitamin C functions as an antioxidant, neutralizing free radicals from sunlight exposure, air pollutants or normal metabolic processes. The clinical trial literature is characterized as insufficient to support health claims; one reason being put forward was that "All the studies used vitamin C in combination with other ingredients or therapeutic mechanisms, thereby complicating any specific conclusions regarding the efficacy of vitamin C." 
Pneumonia Further research is needed to determine if prophylactic vitamin C treatment is helpful for preventing or treating pneumonia.
Biology and health sciences
Vitamins
Health
32512
https://en.wikipedia.org/wiki/Vitamin
Vitamin
Vitamins are organic molecules (or a set of closely related molecules called vitamers) that are essential to an organism in small quantities for proper metabolic function. Essential nutrients cannot be synthesized in the organism in sufficient quantities for survival, and therefore must be obtained through the diet. For example, vitamin C can be synthesized by some species but not by others; it is not considered a vitamin in the first instance but is in the second. Most vitamins are not single molecules, but groups of related molecules called vitamers. For example, there are eight vitamers of vitamin E: four tocopherols and four tocotrienols. The term vitamin does not include the three other groups of essential nutrients: minerals, essential fatty acids, and essential amino acids. Major health organizations list thirteen vitamins: Vitamin A (all-trans-retinols, all-trans-retinyl-esters, as well as all-trans-β-carotene and other provitamin A carotenoids) Vitamin B1 (thiamine) Vitamin B2 (riboflavin) Vitamin B3 (niacin) Vitamin B5 (pantothenic acid) Vitamin B6 (pyridoxine) Vitamin B7 (biotin) Vitamin B9 (folic acid and folates) Vitamin B12 (cobalamins) Vitamin C (ascorbic acid and ascorbates) Vitamin D (calciferols) Vitamin E (tocopherols and tocotrienols) Vitamin K (phylloquinones, menaquinones, and menadiones) Some sources include a fourteenth, choline. Vitamins have diverse biochemical functions. Vitamin A acts as a regulator of cell and tissue growth and differentiation. Vitamin D provides a hormone-like function, regulating mineral metabolism for bones and other organs. The B complex vitamins function as enzyme cofactors (coenzymes) or the precursors for them. Vitamins C and E function as antioxidants. Both deficient and excess intake of a vitamin can potentially cause clinically significant illness, although excess intake of water-soluble vitamins is less likely to do so. All the vitamins were discovered between 1913 and 1948. Historically, when intake of vitamins from diet was lacking, the results were vitamin deficiency diseases. Then, starting in 1935, commercially produced tablets of yeast-extract vitamin B complex and semi-synthetic vitamin C became available. This was followed in the 1950s by the mass production and marketing of vitamin supplements, including multivitamins, to prevent vitamin deficiencies in the general population. Governments have mandated the addition of some vitamins to staple foods such as flour or milk, referred to as food fortification, to prevent deficiencies. Recommendations for folic acid supplementation during pregnancy reduced risk of infant neural tube defects. List of vitamins History The value of eating certain foods to maintain health was recognized long before vitamins were identified. The ancient Egyptians knew that feeding liver to a person may help with night blindness, an illness now known to be caused by a vitamin A deficiency. The advance of ocean voyages during the Age of Discovery resulted in prolonged periods without access to fresh fruits and vegetables, and made illnesses from vitamin deficiency common among ships' crews. In 1747, the Scottish surgeon James Lind discovered that citrus foods helped prevent scurvy, a particularly deadly disease in which collagen is not properly formed, causing poor wound healing, bleeding of the gums, severe pain, and death. In 1753, Lind published his Treatise on the Scurvy, which recommended using lemons and limes to avoid scurvy, which was adopted by the British Royal Navy. 
This led to the nickname limey for British sailors. However, during the 19th century, limes grown in the West Indies were substituted for lemons; these were subsequently found to be much lower in vitamin C. As a result, Arctic expeditions continued to be plagued by scurvy and other deficiency diseases. In the early 20th century, when Robert Falcon Scott made his two expeditions to the Antarctic, the prevailing medical theory was that scurvy was caused by "tainted" canned food. In 1881, Russian medical doctor Nikolai Lunin studied the effects of scurvy at the University of Tartu. He fed mice an artificial mixture of all the separate constituents of milk known at that time, namely the proteins, fats, carbohydrates, and salts. The mice that received only the individual constituents died, while the mice fed by milk itself developed normally. He made a conclusion that "a natural food such as milk must therefore contain, besides these known principal ingredients, small quantities of unknown substances essential to life." However, his conclusions were rejected by his advisor, Gustav von Bunge. A similar result by Cornelis Adrianus Pekelharing appeared in Dutch medical journal Nederlands Tijdschrift voor Geneeskunde in 1905, but it was not widely reported. In East Asia, where polished white rice was the common staple food of the middle class, beriberi resulting from lack of vitamin B1 was endemic. In 1884, Takaki Kanehiro, a British-trained medical doctor of the Imperial Japanese Navy, observed that beriberi was endemic among low-ranking crew who often ate nothing but rice, but not among officers who consumed a Western-style diet. With the support of the Japanese navy, he experimented using crews of two battleships; one crew was fed only white rice, while the other was fed a diet of meat, fish, barley, rice, and beans. The group that ate only white rice documented 161 crew members with beriberi and 25 deaths, while the latter group had only 14 cases of beriberi and no deaths. This convinced Takaki and the Japanese Navy that diet was the cause of beriberi, but they mistakenly believed that sufficient amounts of protein prevented it. That diseases could result from some dietary deficiencies was further investigated by Christiaan Eijkman, who in 1897 discovered that feeding unpolished rice instead of the polished variety to chickens helped to prevent a kind of polyneuritis that was the equivalent of beriberi. The following year, Frederick Hopkins postulated that some foods contained "accessory factors" – in addition to proteins, carbohydrates, fats etc. – that are necessary for the functions of the human body. "Vitamine" to vitamin In 1910, the first vitamin complex was isolated by Japanese scientist Umetaro Suzuki, who succeeded in extracting a water-soluble complex of micronutrients from rice bran and named it aberic acid (later Orizanin). He published this discovery in a Japanese scientific journal. When the article was translated into German, the translation failed to state that it was a newly discovered nutrient, a claim made in the original Japanese article, and hence his discovery failed to gain publicity. In 1912 Polish-born biochemist Casimir Funk, working in London, isolated the same complex of micronutrients and proposed the complex be named "vitamine". It was later to be known as vitamin B3 (niacin), though he described it as "anti-beri-beri-factor" (which would today be called thiamine or vitamin B1). 
Funk proposed the hypothesis that other diseases, such as rickets, pellagra, coeliac disease, and scurvy could also be cured by vitamins. Max Nierenstein a friend and Reader of Biochemistry at Bristol University reportedly suggested the "vitamine" name (from "vital amine"). The name soon became synonymous with Hopkins' "accessory factors", and, by the time it was shown that not all vitamins are amines, the word was already ubiquitous. In 1920, Jack Cecil Drummond proposed that the final "e" be dropped to deemphasize the "amine" reference, hence "vitamin", after researchers began to suspect that not all "vitamines" (in particular, vitamin A) have an amine component. Nobel Prizes for vitamin research The Nobel Prize for Chemistry for 1928 was awarded to Adolf Windaus "for his studies on the constitution of the sterols and their connection with vitamins", the first person to receive an award mentioning vitamins, even though it was not specifically about vitamin D. The Nobel Prize in Physiology or Medicine for 1929 was awarded to Christiaan Eijkman and Frederick Gowland Hopkins for their contributions to the discovery of vitamins. Thirty-five years earlier, Eijkman had observed that chickens fed polished white rice developed neurological symptoms similar to those observed in military sailors and soldiers fed a rice-based diet, and that the symptoms were reversed when the chickens were switched to whole-grain rice. He called this "the anti-beriberi factor", which was later identified as vitamin B1, thiamine. In 1930, Paul Karrer elucidated the correct structure for beta-carotene, the main precursor of vitamin A, and identified other carotenoids. Karrer and Norman Haworth confirmed Albert Szent-Györgyi's discovery of ascorbic acid and made significant contributions to the chemistry of flavins, which led to the identification of lactoflavin. For their investigations on carotenoids, flavins and vitamins A and B2, they both received the Nobel Prize in Chemistry in 1937. In 1931, Albert Szent-Györgyi and a fellow researcher Joseph Svirbely suspected that "hexuronic acid" was actually vitamin C, and gave a sample to Charles Glen King, who proved its activity counter to scurvy in his long-established guinea pig scorbutic assay. In 1937, Szent-Györgyi was awarded the Nobel Prize in Physiology or Medicine for his discovery. In 1943, Edward Adelbert Doisy and Henrik Dam were awarded the Nobel Prize in Physiology or Medicine for their discovery of vitamin K and its chemical structure. In 1938, Richard Kuhn was awarded the Nobel Prize in Chemistry for his work on carotenoids and vitamins, specifically B2 and B6. Five people have been awarded Nobel Prizes for direct and indirect studies of vitamin B12: George Whipple, George Minot and William P. Murphy (1934), Alexander R. Todd (1957), and Dorothy Hodgkin (1964). In 1967, George Wald, Ragnar Granit and Haldan Keffer Hartline were awarded the Nobel Prize in Physiology and Medicine "...for their discoveries concerning the primary physiological and chemical visual processes in the eye." Wald's contribution was discovering the role vitamin A had in the process. History of promotional marketing Once discovered, vitamins were actively promoted in articles and advertisements in McCall's, Good Housekeeping, and other media outlets. Marketers enthusiastically promoted cod-liver oil, a source of vitamin D, as "bottled sunshine", and bananas as a "natural vitality food". 
They promoted foods such as yeast cakes, a source of B vitamins, on the basis of scientifically determined nutritional value, rather than taste or appearance. In 1942, when flour enrichment with nicotinic acid began, a headline in the popular press said "Tobacco in Your Bread." In response, the Council on Foods and Nutrition of the American Medical Association approved of the Food and Nutrition Board's new names niacin and niacin amide for use primarily by non-scientists. It was thought appropriate to choose a name to dissociate nicotinic acid from nicotine, to avoid the perception that vitamins or niacin-rich food contains nicotine, or that cigarettes contain vitamins. The resulting name niacin was derived from nicotinic acid + vitamin. Researchers also focused on the need to ensure adequate nutrition, especially to compensate for what was lost in the manufacture of processed foods. Robert W. Yoder is credited with first using the term vitamania, in 1942, to describe the appeal of relying on nutritional supplements rather than on obtaining vitamins from a varied diet of foods. The continuing preoccupation with a healthy lifestyle led to an obsessive consumption of vitamins and multi-vitamins, the beneficial effects of which are questionable. As one example, in the 1950s, the Wonder Bread company sponsored the Howdy Doody television show, with host Buffalo Bob Smith telling the audience, "Wonder Bread builds strong bodies 8 ways", referring to the number of added nutrients. Etymology The term "vitamin" was derived from "vitamine", a portmanteau coined in 1912 by the biochemist Casimir Funk while working at the Lister Institute of Preventive Medicine. Funk created the name from vital and amine, because it appeared that these organic micronutrient food factors that prevent beriberi and perhaps other similar dietary-deficiency diseases were required for life, hence "vital", and were chemical amines, hence "amine". This was true of thiamine, but after it was found that vitamin C and other such micronutrients were not amines, the word was shortened to "vitamin" in English. Classification Vitamins are classified as either water-soluble or fat-soluble. In humans there are 13 vitamins: 4 fat-soluble (A, D, E, and K) and 9 water-soluble (8 B vitamins and vitamin C). Water-soluble vitamins dissolve easily in water and, in general, are readily excreted from the body, to the degree that urinary output is a strong predictor of vitamin consumption. Because they are not as readily stored, more consistent intake is important. Fat-soluble vitamins are absorbed through the gastrointestinal tract with the help of lipids (fats). Vitamins A and D can accumulate in the body, which can result in dangerous hypervitaminosis. Fat-soluble vitamin deficiency due to malabsorption is of particular significance in cystic fibrosis. Anti-vitamins Anti-vitamins are chemical compounds that inhibit the absorption or actions of vitamins. For example, avidin is a protein in raw egg whites that inhibits the absorption of biotin; it is deactivated by cooking. Pyrithiamine, a synthetic compound, has a molecular structure similar to thiamine, vitamin B1, and inhibits the enzymes that use thiamine. Biochemical functions Each vitamin is typically used in multiple reactions, and therefore most have multiple functions. On fetal growth and childhood development Vitamins are essential for the normal growth and development of a multicellular organism.
Using the genetic blueprint inherited from its parents, a fetus develops from the nutrients it absorbs. It requires certain vitamins and minerals to be present at certain times. These nutrients facilitate the chemical reactions that produce among other things, skin, bone, and muscle. If there is serious deficiency in one or more of these nutrients, a child may develop a deficiency disease. Even minor deficiencies may cause permanent damage. On adult health maintenance Once growth and development are completed, vitamins remain essential nutrients for the healthy maintenance of the cells, tissues, and organs that make up a multicellular organism; they also enable a multicellular life form to efficiently use chemical energy provided by food it eats, and to help process the proteins, carbohydrates, and fats required for cellular respiration. Intake Sources For the most part, vitamins are obtained from the diet, but some are acquired by other means: for example, microorganisms in the gut flora produce vitamin K and biotin; and one form of vitamin D is synthesized in skin cells when they are exposed to a certain wavelength of ultraviolet light present in sunlight. Humans can produce some vitamins from precursors they consume: for example, vitamin A is synthesized from beta carotene; and niacin is synthesized from the amino acid tryptophan. Vitamin C can be synthesized by some species but not by others. Vitamin B12 is the only vitamin or nutrient not available from plant sources. The Food Fortification Initiative lists countries which have mandatory fortification programs for vitamins folic acid, niacin, vitamin A and vitamins B1, B2 and B12. Deficient intake The body's stores for different vitamins vary widely; vitamins A, D, and B12 are stored in significant amounts, mainly in the liver, and an adult's diet may be deficient in vitamins A and D for many months and B12 in some cases for years, before developing a deficiency condition. However, vitamin B3 (niacin and niacinamide) is not stored in significant amounts, so stores may last only a couple of weeks. For vitamin C, the first symptoms of scurvy in experimental studies of complete vitamin C deprivation in humans have varied widely, from a month to more than six months, depending on previous dietary history that determined body stores. Deficiencies of vitamins are classified as either primary or secondary. A primary deficiency occurs when an organism does not get enough of the vitamin in its food. A secondary deficiency may be due to an underlying disorder that prevents or limits the absorption or use of the vitamin, due to a "lifestyle factor", such as smoking, excessive alcohol consumption, or the use of medications that interfere with the absorption or use of the vitamin. People who eat a varied diet are unlikely to develop a severe primary vitamin deficiency, but may be consuming less than the recommended amounts; a national food and supplement survey conducted in the US over 2003–2006 reported that over 90% of individuals who did not consume vitamin supplements were found to have inadequate levels of some of the essential vitamins, notably vitamins D and E. Well-researched human vitamin deficiencies involve thiamine (beriberi), niacin (pellagra), vitamin C (scurvy), folate (neural tube defects) and vitamin D (rickets). In much of the developed world these deficiencies are rare due to an adequate supply of food and the addition of vitamins to common foods. 
In addition to these classical vitamin deficiency diseases, some evidence has also suggested links between vitamin deficiency and a number of different disorders. Excess intake Some vitamins have documented acute or chronic toxicity at larger intakes, which is referred to as hypertoxicity. The European Union and the governments of several countries have established Tolerable upper intake levels (ULs) for those vitamins which have documented toxicity (see table). The likelihood of consuming too much of any vitamin from food is remote, but excessive intake (vitamin poisoning) from dietary supplements does occur. In 2016, overdose exposure to all formulations of vitamins and multi-vitamin/mineral formulations was reported by 63,931 individuals to the American Association of Poison Control Centers with 72% of these exposures in children under the age of five. In the US, analysis of a national diet and supplement survey reported that about 7% of adult supplement users exceeded the UL for folate and 5% of those older than age 50 years exceeded the UL for vitamin A. Effects of cooking The USDA has conducted extensive studies on the percentage losses of various nutrients from food types and cooking methods. Some vitamins may become more "bio-available" – that is, usable by the body – when foods are cooked. The table below shows whether various vitamins are susceptible to loss from heat—such as heat from boiling, steaming, frying, etc. The effect of cutting vegetables can be seen from exposure to air and light. Water-soluble vitamins such as B and C dissolve into the water when a vegetable is boiled, and are then lost when the water is discarded. Recommended levels In setting human nutrient guidelines, government organizations do not necessarily agree on amounts needed to avoid deficiency or maximum amounts to avoid the risk of toxicity. For example, for vitamin C, recommended intakes range from 40 mg/day in India to 155 mg/day for the European Union. The table below shows U.S. Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for vitamins, PRIs for the European Union (same concept as RDAs), followed by what three government organizations deem to be the safe upper intake. RDAs are set higher than EARs to cover people with higher than average needs. Adequate Intakes (AIs) are set when there is not sufficient information to establish EARs and RDAs. Governments are slow to revise information of this nature. For the U.S. values, with the exception of calcium and vitamin D, all of the data date to 1997–2004. All values are consumption per day: EAR US Estimated Average Requirements. RDA US Recommended Dietary Allowances; higher for adults than for children, and may be even higher for women who are pregnant or lactating. AI US and EFSA Adequate Intake; AIs established when there is not sufficient information to set EARs and RDAs. PRI Population Reference Intake is European Union equivalent of RDA; higher for adults than for children, and may be even higher for women who are pregnant or lactating. For Thiamin and Niacin the PRIs are expressed as amounts per MJ of calories consumed. MJ = megajoule = 239 food calories. UL or Upper Limit Tolerable upper intake levels. ND ULs have not been determined. NE EARs have not been established. Supplementation In those who are otherwise healthy, there is little evidence that supplements have any benefits with respect to cancer or heart disease. 
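As a worked example of the per-megajoule convention used for the thiamin and niacin PRIs described above, the short Python sketch below converts a daily energy intake given in food calories into megajoules and scales an assumed per-MJ reference amount. The 0.1 mg of thiamin per MJ used here is purely an illustrative figure, not a recommendation from any of the bodies named above, and the function name is hypothetical.

KCAL_PER_MJ = 239  # 1 megajoule is roughly 239 food calories (kcal), as noted above

def thiamin_reference_mg(daily_kcal, mg_per_mj=0.1):
    # Convert the day's energy intake to megajoules, then scale the assumed per-MJ value.
    energy_mj = daily_kcal / KCAL_PER_MJ
    return mg_per_mj * energy_mj

# Example: a 2000 kcal/day diet is about 8.4 MJ, giving roughly 0.84 mg/day
# under the assumed 0.1 mg-per-MJ reference value.
print(round(thiamin_reference_mg(2000), 2))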
Vitamin A and E supplements not only provide no health benefits for generally healthy individuals, but they may increase mortality, though the two large studies that support this conclusion included smokers for whom it was already known that beta-carotene supplements can be harmful. A 2018 meta-analysis found no evidence that intake of vitamin D or calcium for community-dwelling elderly people reduced bone fractures. Europe has regulations that define limits of vitamin (and mineral) dosages for their safe use as dietary supplements. Most vitamins that are sold as dietary supplements are not supposed to exceed a maximum daily dosage referred to as the tolerable upper intake level (UL or Upper Limit). Vitamin products above these regulatory limits are not considered supplements and should be registered as prescription or non-prescription (over-the-counter drugs) due to their potential side effects. The European Union, United States and Japan establish ULs. Dietary supplements often contain vitamins, but may also include other ingredients, such as minerals, herbs, and botanicals. Scientific evidence supports the benefits of dietary supplements for persons with certain health conditions. In some cases, vitamin supplements may have unwanted effects, especially if taken before surgery, with other dietary supplements or medicines, or if the person taking them has certain health conditions. They may also contain levels of vitamins many times higher, and in different forms, than one may ingest through food. Governmental regulation Most countries place dietary supplements in a special category under the general umbrella of foods, not drugs. As a result, the manufacturer, and not the government, has the responsibility of ensuring that its dietary supplement products are safe before they are marketed. Regulation of supplements varies widely by country. In the United States, a dietary supplement is defined under the Dietary Supplement Health and Education Act of 1994. There is no FDA approval process for dietary supplements, and no requirement that manufacturers prove the safety or efficacy of supplements introduced before 1994. The Food and Drug Administration must rely on its Adverse Event Reporting System to monitor adverse events that occur with supplements. In 2007, the US Code of Federal Regulations (CFR) Title 21, part III took effect, regulating Good Manufacturing Practices (GMPs) in the manufacturing, packaging, labeling, or holding operations for dietary supplements. Even though product registration is not required, these regulations mandate production and quality control standards (including testing for identity, purity and adulterations) for dietary supplements. In the European Union, the Food Supplements Directive requires that only those supplements that have been proven safe can be sold without a prescription. For most vitamins, pharmacopoeial standards have been established. In the United States, the United States Pharmacopeia (USP) sets standards for the most commonly used vitamins and preparations thereof. Likewise, monographs of the European Pharmacopoeia (Ph.Eur.) regulate aspects of identity and purity for vitamins on the European market. Naming The reason that the set of vitamins skips directly from E to K is that the vitamins corresponding to letters F–J were either reclassified over time, discarded as false leads, or renamed because of their relationship to vitamin B, which became a complex of vitamins. 
The Danish-speaking scientists who isolated and described vitamin K (in addition to naming it as such) did so because the vitamin is intimately involved in the coagulation of blood following wounding (from the Danish word Koagulation). At the time, most (but not all) of the letters from F through to J were already designated, so the use of the letter K was considered quite reasonable. The table Nomenclature of reclassified vitamins lists chemicals that had previously been classified as vitamins, as well as the earlier names of vitamins that later became part of the B-complex. The missing numbered B vitamins were reclassified or determined not to be vitamins. For example, B9 is folic acid and five of the folates are in the range B11 through B16. Others, such as PABA (formerly B10), are biologically inactive, toxic, or with unclassifiable effects in humans, or not generally recognised as vitamins by science, such as the highest-numbered, which some naturopath practitioners call B21 and B22. There are also lettered B substances (e.g., Bm) listed at B vitamins that are not recognized as vitamins. There are other "D vitamins" now recognised as other substances, which some sources of the same type number up to D7. The controversial cancer treatment laetrile was at one point lettered as vitamin B17. There appears to be no consensus on the existence of substances that may have at one time been named as vitamins Q, R, T, V, W, X, Y or Z. "Vitamin N" is a term popularized for the mental health benefits of spending time in nature settings. "Vitamin I" is slang among athletes for frequent/daily consumption of ibuprofen as a pain-relieving treatment.
Biology and health sciences
Health and fitness
null
32528
https://en.wikipedia.org/wiki/Visual%20cortex
Visual cortex
The visual cortex of the brain is the area of the cerebral cortex that processes visual information. It is located in the occipital lobe. Sensory input originating from the eyes travels through the lateral geniculate nucleus in the thalamus and then reaches the visual cortex. The area of the visual cortex that receives the sensory input from the lateral geniculate nucleus is the primary visual cortex, also known as visual area 1 (V1), Brodmann area 17, or the striate cortex. The extrastriate areas consist of visual areas 2, 3, 4, and 5 (also known as V2, V3, V4, and V5, or Brodmann area 18 and all Brodmann area 19). Both hemispheres of the brain include a visual cortex; the visual cortex in the left hemisphere receives signals from the right visual field, and the visual cortex in the right hemisphere receives signals from the left visual field. Introduction The primary visual cortex (V1) is located in and around the calcarine fissure in the occipital lobe. Each hemisphere's V1 receives information directly from its ipsilateral lateral geniculate nucleus that receives signals from the contralateral visual hemifield. Neurons in the visual cortex fire action potentials when visual stimuli appear within their receptive field. By definition, the receptive field is the region within the entire visual field that elicits an action potential. But, for any given neuron, it may respond best to a subset of stimuli within its receptive field. This property is called neuronal tuning. In the earlier visual areas, neurons have simpler tuning. For example, a neuron in V1 may fire to any vertical stimulus in its receptive field. In the higher visual areas, neurons have complex tuning. For example, in the inferior temporal cortex (IT), a neuron may fire only when a certain face appears in its receptive field. Furthermore, the arrangement of receptive fields in V1 is retinotopic, meaning neighboring cells in V1 have receptive fields that correspond to adjacent portions of the visual field. This spatial organization allows for a systematic representation of the visual world within V1. Additionally, recent studies have delved into the role of contextual modulation in V1, where the perception of a stimulus is influenced not only by the stimulus itself but also by the surrounding context, highlighting the intricate processing capabilities of V1 in shaping our visual experiences. The visual cortex receives its blood supply primarily from the calcarine branch of the posterior cerebral artery. The size of V1, V2, and V3 can vary three-fold, a difference that is partially inherited. Psychological model of the neural processing of visual information Ventral-dorsal model V1 transmits information to two primary pathways, called the ventral stream and the dorsal stream. The ventral stream begins with V1, goes through visual area V2, then through visual area V4, and to the inferior temporal cortex (IT cortex). The ventral stream, sometimes called the "What Pathway", is associated with form recognition and object representation. It is also associated with storage of long-term memory. The dorsal stream begins with V1, goes through Visual area V2, then to the dorsomedial area (DM/V6) and middle temporal area (MT/V5) and to the posterior parietal cortex. The dorsal stream, sometimes called the "Where Pathway" or "How Pathway", is associated with motion, representation of object locations, and control of the eyes and arms, especially when visual information is used to guide saccades or reaching. The what vs. 
where account of the ventral/dorsal pathways was first described by Ungerleider and Mishkin. More recently, Goodale and Milner extended these ideas and suggested that the ventral stream is critical for visual perception whereas the dorsal stream mediates the visual control of skilled actions. It has been shown that visual illusions such as the Ebbinghaus illusion distort judgements of a perceptual nature, but when the subject responds with an action, such as grasping, no distortion occurs. Work such as that from Franz et al. suggests that both the action and perception systems are equally fooled by such illusions. Other studies, however, provide strong support for the idea that skilled actions such as grasping are not affected by pictorial illusions and suggest that the action/perception dissociation is a useful way to characterize the functional division of labor between the dorsal and ventral visual pathways in the cerebral cortex. Primary visual cortex (V1) The primary visual cortex is the most studied visual area in the brain. In mammals, it is located in the posterior pole of the occipital lobe and is the simplest, earliest cortical visual area. It is highly specialized for processing information about static and moving objects and is excellent in pattern recognition. Moreover, V1 is characterized by a laminar organization, with six distinct layers, each playing a unique role in visual processing. Neurons in the superficial layers (II and III) are often involved in local processing and communication within the cortex, while neurons in the deeper layers (V and VI) often send information to other brain regions involved in higher-order visual processing and decision-making. Research on V1 has also revealed the presence of orientation-selective cells, which respond preferentially to stimuli with a specific orientation, contributing to the perception of edges and contours. The discovery of these orientation-selective cells has been fundamental in shaping our understanding of how V1 processes visual information. Furthermore, V1 exhibits plasticity, allowing it to undergo functional and structural changes in response to sensory experience. Studies have demonstrated that sensory deprivation or exposure to enriched environments can lead to alterations in the organization and responsiveness of V1 neurons, highlighting the dynamic nature of this critical visual processing hub. The primary visual cortex, which is defined by its function or stage in the visual system, is approximately equivalent to the striate cortex, also known as Brodmann area 17, which is defined by its anatomical location. The name "striate cortex" is derived from the line of Gennari, a distinctive stripe visible to the naked eye that represents myelinated axons from the lateral geniculate body terminating in layer 4 of the gray matter. Brodmann area 17 is just one subdivision of the broader Brodmann areas, which are regions of the cerebral cortex defined based on cytoarchitectural differences. In the case of the striate cortex, the line of Gennari corresponds to a band rich in myelinated nerve fibers, providing a clear marker for the primary visual processing region. Additionally, the functional significance of the striate cortex extends beyond its role as the primary visual cortex. It serves as a crucial hub for the initial processing of visual information, such as the analysis of basic features like orientation, spatial frequency, and color. 
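The orientation selectivity described above can be illustrated with a toy model. The sketch below is a minimal, schematic example rather than data from any study: it assumes a single idealized neuron whose firing rate peaks at a preferred orientation and falls off smoothly for other orientations, and the preferred orientation, peak rate, baseline, and tuning width are arbitrary illustrative parameters.

import numpy as np

# Schematic orientation tuning curve for a single, idealized V1-like neuron.
# All parameters below are illustrative values, not measurements.
def firing_rate(theta_deg, preferred_deg=90.0, peak_hz=50.0, baseline_hz=5.0, kappa=4.0):
    # Orientation is periodic over 180 degrees, hence the factor of 2.
    delta = np.deg2rad(2.0 * (theta_deg - preferred_deg))
    tuning = np.exp(kappa * (np.cos(delta) - 1.0))  # 1 at the preferred orientation, smaller elsewhere
    return baseline_hz + (peak_hz - baseline_hz) * tuning

orientations = np.arange(0, 180, 15)      # stimulus orientations in degrees
rates = firing_rate(orientations)         # simulated responses
print(orientations[np.argmax(rates)])     # prints 90, the assumed preferred orientation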
The integration of these features in the striate cortex forms the foundation for more complex visual processing carried out in higher-order visual areas. Recent neuroimaging studies have contributed to a deeper understanding of the dynamic interactions within the striate cortex and its connections with other visual and non-visual brain regions, shedding light on the intricate neural circuits that underlie visual perception. The primary visual cortex is divided into six functionally distinct layers, labeled 1 to 6. Layer 4, which receives most visual input from the lateral geniculate nucleus (LGN), is further divided into four sublayers, labelled 4A, 4B, 4Cα, and 4Cβ. Sublamina 4Cα receives mostly magnocellular input from the LGN, while layer 4Cβ receives input from parvocellular pathways. The average number of neurons in the adult human primary visual cortex in each hemisphere has been estimated at 140 million. The volume of V1 in each hemisphere of an adult human is about 5400 mm³ on average. A study of 25 hemispheres from 15 normal individuals with average age 59 years at autopsy found a very high variation, from 4272 to 7027 mm³ for the right hemisphere (mean 5692 mm³), and from 3185 to 7568 mm³ for the left hemisphere (mean 5119 mm³), with 0.81 correlation between left and right hemispheres. The same study found an average V1 surface area of 2400 mm² per hemisphere, but with very high variability. (Right hemisphere mean 2477 mm², range 1441–3221 mm². Left hemisphere mean 2315 mm², range 1438–3365 mm².) Function The initial stage of visual processing within the cortex, known as V1, plays a fundamental role in shaping our perception of the visual world. V1 possesses a meticulously defined map, referred to as the retinotopic map, which intricately organizes spatial information from the visual field. In humans, the upper bank of the calcarine sulcus in the occipital lobe robustly responds to the lower half of the visual field, while the lower bank responds to the upper half. This retinotopic mapping conceptually represents a projection of the visual image from the retina to V1. The importance of this retinotopic organization lies in its ability to preserve spatial relationships present in the external environment. Neighboring neurons in V1 exhibit responses to adjacent portions of the visual field, creating a systematic representation of the visual scene. This mapping extends both vertically and horizontally, ensuring the conservation of both horizontal and vertical relationships within the visual input. Moreover, the retinotopic map demonstrates a remarkable degree of plasticity, adapting to alterations in visual experience. Studies have revealed that changes in sensory input, such as those induced by visual training or deprivation, can lead to shifts in the retinotopic map. This adaptability underscores the brain's capacity to reorganize in response to varying environmental demands, highlighting the dynamic nature of visual processing. Beyond its spatial processing role, the retinotopic map in V1 establishes intricate connections with other visual areas, forming a network crucial for integrating diverse visual features and constructing a coherent visual percept. This dynamic mapping mechanism is indispensable for our ability to navigate and interpret the visual world effectively. The correspondence between specific locations in V1 and the subjective visual field is exceptionally precise, even extending to map the blind spots of the retina.
Evolutionarily, this correspondence is a fundamental feature found in most animals possessing a V1. In humans and other species with a fovea (cones in the retina), a substantial portion of V1 is mapped to the small central portion of the visual field—a phenomenon termed cortical magnification. This magnification reflects an increased representation and processing capacity devoted to the central visual field, essential for detailed visual acuity and high-resolution processing. Notably, neurons in V1 have the smallest receptive field size, signifying the highest resolution, among visual cortex microscopic regions. This specialization equips V1 with the ability to capture fine details and nuances in the visual input, emphasizing its pivotal role as a critical hub in early visual processing and contributing significantly to our intricate and nuanced visual perception. The tuning properties of V1 neurons (what the neurons respond to) differ greatly over time. Early in time (40 ms and further) individual V1 neurons have strong tuning to a small set of stimuli. That is, the neuronal responses can discriminate small changes in visual orientations, spatial frequencies and colors (as in the optical system of a camera obscura, but projected onto retinal cells of the eye, which are clustered in density and fineness). Each V1 neuron propagates a signal from a retinal cell, in continuation. Furthermore, individual V1 neurons in humans and other animals with binocular vision have ocular dominance, namely tuning to one of the two eyes. In V1, and primary sensory cortex in general, neurons with similar tuning properties tend to cluster together as cortical columns. David Hubel and Torsten Wiesel proposed the classic ice-cube organization model of cortical columns for two tuning properties: ocular dominance and orientation. However, this model cannot accommodate the color, spatial frequency and many other features to which neurons are tuned. The exact organization of all these cortical columns within V1 remains a hot topic of current research. The receptive fields of V1 neurons resemble Gabor functions, so the operation of the visual cortex has been compared to the Gabor transform. Later in time (after 100 ms), neurons in V1 are also sensitive to the more global organisation of the scene. These response properties probably stem from recurrent feedback processing (the influence of higher-tier cortical areas on lower-tier cortical areas) and lateral connections from pyramidal neurons.
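As a rough illustration of the Gabor-function comparison mentioned above, the sketch below builds an odd-symmetric Gabor-like filter and applies it to a toy half-black, half-white image; the strongest responses fall along the luminance border, and taking the location of the single largest response is a toy analogue of the winner-take-all saliency readout discussed below. All parameters (kernel size, wavelength, envelope width, orientation) are arbitrary illustrative choices, and the helper name gabor_kernel is hypothetical.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=21, wavelength=6.0, sigma=4.0, theta_deg=0.0):
    # Odd-symmetric (sine-phase) Gabor: a sinusoid windowed by a Gaussian envelope.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(np.deg2rad(theta_deg)) + y * np.sin(np.deg2rad(theta_deg))
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.sin(2.0 * np.pi * xr / wavelength)
    return envelope * carrier  # effectively zero-mean, so uniform regions give no response

# Toy stimulus: left half black (0), right half white (1), separated by a vertical edge.
image = np.zeros((64, 64))
image[:, 32:] = 1.0

# With theta_deg=0 the carrier varies along x, so the filter responds best to vertical contours.
response = np.abs(convolve2d(image, gabor_kernel(theta_deg=0.0), mode="same", boundary="symm"))

# Toy winner-take-all readout: the single most strongly driven location.
row, col = np.unravel_index(np.argmax(response), response.shape)
print(row, col)  # a location on or next to the vertical black/white border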
While feedforward connections are mainly driving, feedback connections are mostly modulatory in their effects. Evidence shows that feedback originating in higher-level areas such as V4, IT, or MT, with bigger and more complex receptive fields, can modify and shape V1 responses, accounting for contextual or extra-classical receptive field effects. The visual information relayed by V1 is sometimes described as edge detection. As an example, for an image comprising half side black and half side white, the dividing line between black and white has strongest local contrast (that is, edge detection) and is encoded, while few neurons code the brightness information (black or white per se). As information is further relayed to subsequent visual areas, it is coded as increasingly non-local frequency/phase signals. Note that, at these early stages of cortical visual processing, spatial location of visual information is well preserved amid the local contrast encoding (edge detection). In primates, one role of V1 might be to create a saliency map (highlights what is important) from visual inputs to guide the shifts of attention known as gaze shifts. According to the V1 Saliency Hypothesis, V1 does this by transforming visual inputs to neural firing rates from millions of neurons, such that the visual location signaled by the highest firing neuron is the most salient location to attract gaze shift. V1's outputs are received by the superior colliculus (in the mid-brain), among other locations, which reads out the V1 activities to guide gaze shifts. Differences in size of V1 also seem to have an effect on the perception of illusions. V2 Visual area V2, or secondary visual cortex, also called prestriate cortex, receives strong feedforward connections from V1 (direct and via the pulvinar) and sends robust connections to V3, V4, and V5. Additionally, it plays a crucial role in the integration and processing of visual information. The feedforward connections from V1 to V2 contribute to the hierarchical processing of visual stimuli. V2 neurons build upon the basic features detected in V1, extracting more complex visual attributes such as texture, depth, and color. This hierarchical processing is essential for the construction of a more nuanced and detailed representation of the visual scene. Furthermore, the reciprocal feedback connections from V2 to V1 play a significant role in modulating the activity of V1 neurons. This feedback loop is thought to be involved in processes such as attention, perceptual grouping, and figure-ground segregation. The dynamic interplay between V1 and V2 highlights the intricate nature of information processing within the visual system. Moreover, V2's connections with subsequent visual areas, including V3, V4, and V5, contribute to the formation of a distributed network for visual processing. These connections enable the integration of different visual features, such as motion and form, across multiple stages of the visual hierarchy. In terms of anatomy, V2 is split into four quadrants, a dorsal and ventral representation in the left and the right hemispheres. Together, these four regions provide a complete map of the visual world. V2 has many properties in common with V1: Cells are tuned to simple properties such as orientation, spatial frequency, and color. The responses of many V2 neurons are also modulated by more complex properties, such as the orientation of illusory contours, binocular disparity, and whether the stimulus is part of the figure or the ground. 
Recent research has shown that V2 cells show a small amount of attentional modulation (more than V1, less than V4), are tuned for moderately complex patterns, and may be driven by multiple orientations at different subregions within a single receptive field. It is argued that the entire ventral visual-to-hippocampal stream is important for visual memory. This theory, unlike the dominant one, predicts that object-recognition memory (ORM) alterations could result from the manipulation in V2, an area that is highly interconnected within the ventral stream of visual cortices. In the monkey brain, this area receives strong feedforward connections from the primary visual cortex (V1) and sends strong projections to other secondary visual cortices (V3, V4, and V5). Most of the neurons of this area in primates are tuned to simple visual characteristics such as orientation, spatial frequency, size, color, and shape. Anatomical studies implicate layer 3 of area V2 in visual-information processing. In contrast to layer 3, layer 6 of the visual cortex is composed of many types of neurons, and their response to visual stimuli is more complex. In one study, the Layer 6 cells of the V2 cortex were found to play a very important role in the storage of Object Recognition Memory as well as the conversion of short-term object memories into long-term memories. Third visual cortex, including area V3 The term third visual complex refers to the region of cortex located immediately in front of V2, which includes the region named visual area V3 in humans. The "complex" nomenclature is justified by the fact that some controversy still exists regarding the exact extent of area V3, with some researchers proposing that the cortex located in front of V2 may include two or three functional subdivisions. For example, David Van Essen and others (1986) have proposed the existence of a "dorsal V3" in the upper part of the cerebral hemisphere, which is distinct from the "ventral V3" (or ventral posterior area, VP) located in the lower part of the brain. Dorsal and ventral V3 have distinct connections with other parts of the brain, appear different in sections stained with a variety of methods, and contain neurons that respond to different combinations of visual stimulus (for example, colour-selective neurons are more common in the ventral V3). Additional subdivisions, including V3A and V3B have also been reported in humans. These subdivisions are located near dorsal V3, but do not adjoin V2. Dorsal V3 is normally considered to be part of the dorsal stream, receiving inputs from V2 and from the primary visual area and projecting to the posterior parietal cortex. It may be anatomically located in Brodmann area 19. Braddick using fMRI has suggested that area V3/V3A may play a role in the processing of global motion Other studies prefer to consider dorsal V3 as part of a larger area, named the dorsomedial area (DM), which contains a representation of the entire visual field. Neurons in area DM respond to coherent motion of large patterns covering extensive portions of the visual field (Lui and collaborators, 2006). Ventral V3 (VP), has much weaker connections from the primary visual area, and stronger connections with the inferior temporal cortex. 
While earlier studies proposed that VP contained a representation of only the upper part of the visual field (above the point of fixation), more recent work indicates that this area is more extensive than previously appreciated, and like other visual areas it may contain a complete visual representation. The revised, more extensive VP is referred to as the ventrolateral posterior area (VLP) by Rosa and Tweedale. V4 Visual area V4 is one of the visual areas in the extrastriate visual cortex. In macaques, it is located anterior to V2 and posterior to the posterior inferotemporal area (PIT). It comprises at least four regions (left and right V4d, left and right V4v), and some groups report that it contains rostral and caudal subdivisions as well. It is unknown whether the human V4 is as expansive as that of the macaque homologue. This is a subject of debate. V4 is the third cortical area in the ventral stream, receiving strong feedforward input from V2 and sending strong connections to the PIT. It also receives direct input from V1, especially for central space. In addition, it has weaker connections to V5 and the dorsal prelunate gyrus (DP). V4 is the first area in the ventral stream to show strong attentional modulation. Most studies indicate that selective attention can change firing rates in V4 by about 20%. A seminal paper by Moran and Desimone characterizing these effects was the first paper to find attention effects anywhere in the visual cortex. Like V2, V4 is tuned for orientation, spatial frequency, and color. Unlike V2, V4 is tuned for object features of intermediate complexity, like simple geometric shapes, although no one has developed a full parametric description of the tuning space for V4. Visual area V4 is not tuned for complex objects such as faces, as areas in the inferotemporal cortex are. The firing properties of V4 were first described by Semir Zeki in the late 1970s, who also named the area. Before that, V4 was known by its anatomical description, the prelunate gyrus. Originally, Zeki argued that the purpose of V4 was to process color information. Work in the early 1980s proved that V4 was as directly involved in form recognition as earlier cortical areas. This research supported the two-streams hypothesis, first presented by Ungerleider and Mishkin in 1982. Recent work has shown that V4 exhibits long-term plasticity, encodes stimulus salience, is gated by signals coming from the frontal eye fields, and shows changes in the spatial profile of its receptive fields with attention. In addition, it has recently been shown that activation of area V4 in humans (area V4h) is observed during the perception and retention of the color of objects, but not their shape. Middle temporal visual area (V5) The middle temporal visual area (MT or V5) is a region of extrastriate visual cortex. In several species of both New World monkeys and Old World monkeys the MT area contains a high concentration of direction-selective neurons. The MT in primates is thought to play a major role in the perception of motion, the integration of local motion signals into global percepts, and the guidance of some eye movements. Connections MT is connected to a wide array of cortical and subcortical brain areas. Its input comes from visual cortical areas V1, V2 and dorsal V3 (dorsomedial area), the koniocellular regions of the LGN, and the inferior pulvinar. 
The pattern of projections to MT changes somewhat between the representations of the foveal and peripheral visual fields, with the latter receiving inputs from areas located in the midline cortex and retrosplenial region. A standard view is that V1 provides the "most important" input to MT. Nonetheless, several studies have demonstrated that neurons in MT are capable of responding to visual information, often in a direction-selective manner, even after V1 has been destroyed or inactivated. Moreover, research by Semir Zeki and collaborators has suggested that certain types of visual information may reach MT before it even reaches V1. MT sends its major output to areas located in the cortex immediately surrounding it, including areas FST, MST, and V4t (middle temporal crescent). Other projections of MT target the eye movement-related areas of the frontal and parietal lobes (frontal eye field and lateral intraparietal area). Function The first studies of the electrophysiological properties of neurons in MT showed that a large portion of the cells are tuned to the speed and direction of moving visual stimuli. Lesion studies have also supported the role of MT in motion perception and eye movements. Neuropsychological studies of a patient unable to see motion, seeing the world in a series of static 'frames' instead, suggested that V5 in the primate is homologous to MT in the human. However, since neurons in V1 are also tuned to the direction and speed of motion, these early results left open the question of precisely what MT could do that V1 could not. Much work has been carried out on this region, as it appears to integrate local visual motion signals into the global motion of complex objects. For example, lesion to the V5 leads to deficits in perceiving motion and processing of complex stimuli. It contains many neurons selective for the motion of complex visual features (line ends, corners). Microstimulation of a neuron located in the V5 affects the perception of motion. For example, if one finds a neuron with preference for upward motion in a monkey's V5 and stimulates it with an electrode, then the monkey becomes more likely to report 'upward' motion when presented with stimuli containing 'left' and 'right' as well as 'upward' components. There is still much controversy over the exact form of the computations carried out in area MT and some research suggests that feature motion is in fact already available at lower levels of the visual system such as V1. Functional organization MT was shown to be organized in direction columns. DeAngelis argued that MT neurons were also organized based on their tuning for binocular disparity. V6 The dorsomedial area (DM) also known as V6, appears to respond to visual stimuli associated with self-motion and wide-field stimulation. V6 is a subdivision of the visual cortex of primates first described by John Allman and Jon Kaas in 1975. V6 is located in the dorsal part of the extrastriate cortex, near the deep groove through the centre of the brain (medial longitudinal fissure), and typically also includes portions of the medial cortex, such as the parieto-occipital sulcus (POS). DM contains a topographically organized representation of the entire field of vision. There are similarities between the visual area V5 and V6 of the common marmoset. Both areas receive direct connections from the primary visual cortex. And both have a high myelin content, a characteristic that is usually present in brain structures involved in fast transmission of information. 
For many years, it was considered that DM only existed in New World monkeys. However, more recent research has suggested that DM also exists in Old World monkeys and humans. V6 is also sometimes referred to as the parieto-occipital area (PO), although the correspondence is not exact. Properties Neurons in area DM/V6 of night monkeys and common marmosets have unique response properties, including an extremely sharp selectivity for the orientation of visual contours, and preference for long, uninterrupted lines covering large parts of the visual field. However, in comparison with area MT, a much smaller proportion of DM cells shows selectivity for the direction of motion of visual patterns. Another notable difference with area MT is that cells in DM are attuned to low spatial frequency components of an image, and respond poorly to the motion of textured patterns such as a field of random dots. These response properties suggest that DM and MT may work in parallel, with the former analyzing self-motion relative to the environment, and the latter analyzing the motion of individual objects relative to the background. Recently, an area responsive to wide-angle flow fields has been identified in the human and is thought to be a homologue of macaque area V6. Pathways The connections and response properties of cells in DM/V6 suggest that this area is a key node in a subset of the "dorsal stream", referred to by some as the "dorsomedial pathway". This pathway is likely to be important for the control of skeletomotor activity, including postural reactions and reaching movements towards objects The main 'feedforward' connection of DM is to the cortex immediately rostral to it, in the interface between the occipital and parietal lobes (V6A). This region has, in turn, relatively direct connections with the regions of the frontal lobe that control arm movements, including the premotor cortex.
Biology and health sciences
Visual system
Biology
32529
https://en.wikipedia.org/wiki/Velociraptor
Velociraptor
Velociraptor (; ) is a genus of small dromaeosaurid dinosaurs that lived in Asia during the Late Cretaceous epoch, about 75 million to 71 million years ago. Two species are currently recognized, although others have been assigned in the past. The type species is V. mongoliensis, named and described in 1924. Fossils of this species have been discovered in the Djadochta Formation, Mongolia. A second species, V. osmolskae, was named in 2008 for skull material from the Bayan Mandahu Formation, China. Smaller than other dromaeosaurids like Deinonychus and Achillobator, Velociraptor was about long with a body mass around . It nevertheless shared many of the same anatomical features. It was a bipedal, feathered carnivore with a long tail and an enlarged sickle-shaped claw on each hindfoot, which is thought to have been used to tackle and restrain prey. Velociraptor can be distinguished from other dromaeosaurids by its long and low skull, with an upturned snout. Velociraptor (commonly referred to as "raptor") is one of the dinosaur genera most familiar to the general public due to its prominent role in the Jurassic Park films. In reality, however, Velociraptor was roughly the size of a turkey, considerably smaller than the approximately tall and reptiles seen in the novels and films (which were based on members of the related genus Deinonychus). Today, Velociraptor is well known to paleontologists, with over a dozen described fossil skeletons. One particularly famous specimen preserves a Velociraptor locked in combat with a Protoceratops. History of discovery During an American Museum of Natural History expedition to the Flaming Cliffs (Bayn Dzak or Bayanzag) of the Djadochta Formation, Gobi Desert, on 11 August 1923, Peter Kaisen discovered the first Velociraptor fossil known to science—a crushed but complete skull, associated with one of the raptorial second toe claws (AMNH 6515). In 1924, museum president Henry Fairfield Osborn designated the skull and claw (which he assumed to come from the hand) as the type specimen of his new genus, Velociraptor. This name is derived from the Latin words ('swift') and ('robber' or 'plunderer') and refers to the animal's cursorial nature and carnivorous diet. Osborn named the type species V. mongoliensis after its country of origin. Earlier that year, Osborn had informally mentioned the animal in a popular press article, under the name "Ovoraptor djadochtari" (not to be confused with the similarly named Oviraptor), eventually changed into V. mongoliensis during its formal description. While North American teams were shut out of communist Mongolia during the Cold War, expeditions by Soviet and Polish scientists, in collaboration with Mongolian colleagues, recovered several more specimens of Velociraptor. The most famous is part of the "Fighting Dinosaurs" specimen (MPC-D 100/25; formerly IGM, GIN, or GI SPS), discovered by a Polish-Mongolian team in 1971. The fossil preserves a Velociraptor in battle against a Protoceratops. It is considered a national treasure of Mongolia, and in 2000 it was loaned to the American Museum of Natural History in New York City for a temporary exhibition. Between 1988 and 1990, a joint Chinese-Canadian team discovered Velociraptor remains in northern China. American scientists returned to Mongolia in 1990, and a joint Mongolian-American expedition to the Gobi, led by the American Museum of Natural History and the Mongolian Academy of Sciences, turned up several well-preserved skeletons. 
One such specimen, MPC-D 100/980, was nicknamed "Ichabodcraniosaurus" by Norell's team because the fairly complete specimen was found without its skull (an allusion to the Washington Irving character Ichabod Crane). While Norell and Makovicky provisionally considered it a specimen of Velociraptor mongoliensis, it was named as a new species Shri devi in 2021. In 1999, Rinchen Barsbold and Halszka Osmólska reported a juvenile Velociraptor specimen (GIN or IGM 100/2000), represented by a complete skeleton including the skull of a young individual. It was found at the Tugriken Shireh locality of the Djadochta Formation during the context of the Mongolian-Japanese Palaeontological Expeditions. The coauthors stated that detailed descriptions of this and other specimens would be published at a later date. Additional species Maxillae and a lacrimal (the main tooth-bearing bones of the upper jaw, and the bone that forms the anterior margin of the eye socket, respectively) recovered from the Bayan Mandahu Formation in 1999 by the Sino-Belgian Dinosaur Expeditions were found to pertain to Velociraptor, but not to the type species V. mongoliensis. Pascal Godefroit and colleagues named these bones V. osmolskae (for Polish paleontologist Halszka Osmólska) in 2008. However, the 2013 study noted that while "the elongate shape of the maxilla in V. osmolskae is similar to that of V. mongoliensis," phylogenetic analysis found it to be closer to Linheraptor, making the genus paraphyletic; thus, V. osmolskae might not actually belong to the genus Velociraptor and requires reassessment. Paleontologists Mark A. Norell and Peter J. Makovicky in 1997 described new and well preserved specimens of V. mongoliensis, namely MPC-D 100/985 collected from the Tugrik Shireh locality in 1993, and MPC-D 100/986 collected in 1993 from the Chimney Buttes locality. The team briefly mentioned another specimen, MPC-D 100/982, which by the time of this publication remained undescribed. In 1999 Norell and Makovicky provided more insights into the anatomy of Velociraptor with additional specimens. Among these, MPC-D 100/982 was partially described and figured, and referred to V. mongoliensis mainly based on cranial similarities with the holotype skull, although they stated that differences were present between the pelvic region of this specimen and other Velociraptor specimens. This relatively well-preserved specimen including the skull was discovered and collected in 1995 at the Bayn Dzak locality (specifically at the "Volcano" sub-locality). Martin Kundrát in a 2004 abstract compared the neurocranium of MPC-D 100/982 to another Velociraptor specimen, MPC-D 100/976. He concluded that the overall morphology of the former was more derived (advanced) than the latter, suggesting that they could represent distinct taxa. Mark J. Powers in his 2020 master thesis fully described MPC-D 100/982, which he concluded to represent a new and third species of Velociraptor. This species, which he considered distinct, was stated to mainly differ from other Velociraptor species in having a shallow maxilla morphology. Powers and colleagues also in 2020 used morphometric analyses to compare several dromaeosaurid maxillae, and found the maxilla of MPC-D 100/982 to strongly differ from specimens referred to Velociraptor. They indicated that this specimen, based on these results, represents a different species. 
In 2021, Powers and colleagues used principal component analysis to separate dromaeosaurid maxillae, most notably finding that MPC-D 100/982 falls outside the intraspecific variability of V. mongoliensis, arguing for a distinct species. They considered that V. mongoliensis and this new species were ecologically separated based on their skull anatomy. In another 2021 abstract, the team reinforced the species-level separation, noting that additional differences can be found in the hindlimbs.

Description

Velociraptor was a small to medium-sized dromaeosaurid, with adults measuring between long, approximately high at the hips, and weighing about . Prominent quill knobs (the attachment sites of "wing" feathers and a direct indicator of a feather covering) have been reported from the ulna of a single Velociraptor specimen (IGM 100/981), which represents an animal of estimated long and in weight. The spacing of the 6 preserved knobs suggests that 8 additional knobs may have been present, giving a total of 14 quill knobs anchoring large secondaries ("wing" feathers stemming from the forearm). However, the specimen number has been corrected to IGM 100/3503, and its referral to Velociraptor may require reevaluation, pending further study. Nevertheless, there is strong phylogenetic evidence for the presence of feathers in Velociraptor from feathered dromaeosaurid relatives such as Daurlong, Microraptor, and Zhenyuanlong.

Skull

The skull of Velociraptor was rather elongated and grew up to long. It was uniquely up-curved at the snout region, concave on the upper surface and convex on the lower surface. The snout, which occupied about 60% of the entire skull length, was notably narrow and mainly formed by the nasal, premaxilla, and maxilla bones. The premaxilla was the anteriormost bone in the skull, and it was longer than tall. While its posterior end joined the nasal, the main body of the premaxilla touched the maxilla. The maxilla was nearly triangular in shape and the largest element of the snout. On its center or main body there was a depression bearing a small, oval to circular hole, called the maxillary fenestra. In front of this fenestra were two small openings, referred to as promaxillary fenestrae. The posterior border of the maxilla formed (predominantly) the antorbital fenestra, one of the several large holes in the skull. Both premaxilla and maxilla had several alveoli (tooth sockets) on their bottom surfaces. Above the maxilla, and making contact with the premaxilla, there was the nasal bone. It was a thin, narrow, elongated bone contributing to the top surface of the snout. Together, the premaxilla and nasal bones gave form to the naris or narial fenestra (nostril opening), which was relatively large and circular. The posterior end of the nasal was joined by the frontal and lacrimal bones. The back or posterior region of the skull was built by the frontal, lacrimal, postorbital, jugal, parietal, quadrate, and quadratojugal bones. The frontal was a large element, having a vaguely rectangular shape when seen from above. On its posterior end, this bone was in contact with the parietal, and these elements formed the main body of the skull roof. The lacrimal was a T-shaped bone whose main body was thin and delicate. Its lower end met the jugal (often called the cheek bone), which was a large, sub-triangular element. Its lower border was notably straight and horizontal. The postorbital, located just above the jugal, was a stocky and strongly T-shaped bone.
As a whole, the orbit or orbital fenestra (eye socket), formed by the lacrimal, jugal, frontal, and postorbital, was large and nearly circular in shape, being longer than tall. When seen from above, a pair of large and markedly rounded holes were present near the rear of the skull (the temporal fenestrae), whose main components were the postorbital and squamosal. Behind the jugal, an inverted T-shaped bone (also seen in other dromaeosaurids), known as the quadratojugal, was developed. While the upper end of the quadratojugal joined the squamosal, an irregularly shaped element, its inner side met the quadrate. The latter was of great importance for the articulation with the lower jaw. The posteriormost bone was the occipital, and its projection, the occipital condyle, was a rounded, bulbous protuberance that met the first vertebra of the neck. The lower jaw of Velociraptor comprised mainly the dentary, splenial, angular, surangular, and articular bones. The dentary was a very long, weakly curved, narrow element that bore several alveoli on its top surface. At its posterior end, it met the surangular, which had a small hole near its posterior end, called the surangular foramen or fenestra. These two bones were the largest elements of the lower jaw of Velociraptor, contributing to virtually its entire length. Below them were the smaller splenial and angular, closely articulated with each other. The articular, located on the inner side of the surangular, was a small element that joined the quadrate of the upper skull, enabling the articulation with the lower jaw. An elongated, nearly oval hole was developed in the center of the lower jaw (the mandibular fenestra), and it was produced by the junction of the dentary, surangular, and angular bones. The teeth of Velociraptor were fairly homodont (equal in shape) and had several denticles (serrations), each tooth more strongly serrated on the back edge than the front. The premaxilla had 4 alveoli (meaning that 4 teeth were developed), and the maxilla had 11 alveoli. At the dentary, between 14 and 15 alveoli were present. The premaxillary teeth were only weakly curved, and the first two were the longest, the second being characteristically large. The maxillary teeth were more slender and recurved and, most notably, much more strongly serrated on the rear edge than on the front one.

Postcranial skeleton

The arm of Velociraptor was formed by the humerus (upper arm bone), radius and ulna (forearm bones), and manus (hand). Velociraptor, like other dromaeosaurids, had a large manus with three elongated digits (fingers), which ended in strongly curved unguals (claw bones) that were similar in construction and flexibility to the wing bones of modern birds. The second digit was the longest of the three, while the first was the shortest. The structure of the carpal (wrist) bones prevented pronation of the wrist and forced the manus to be held with the palmar surface facing inward (medially), not downward. The pes (foot) anatomy of Velociraptor consisted of the metatarsus, a large element composed of three metatarsals of which the first was extremely reduced in size, and four digits that developed large unguals. The first digit, as in other theropods, was a small dewclaw. The second digit, for which Velociraptor is most famous, was highly modified and held retracted off the ground, which caused Velociraptor and other dromaeosaurids to walk on only their third and fourth digits. It bore a relatively large, sickle-shaped claw, typical of dromaeosaurid and troodontid dinosaurs.
This enlarged claw, which could grow to over long around its outer edge, was most likely a predatory device used to restrain struggling prey. As in other dromaeosaurs, Velociraptor tails had prezygapophyses (long bony projections) on the upper surfaces of the vertebrae, as well as ossified tendons underneath. The prezygapophyses began on the tenth tail (caudal) vertebra and extended forward to brace four to ten additional vertebrae, depending on position in the tail. These were once thought to fully stiffen the tail, forcing the entire tail to act as a single rod-like unit. However, at least one specimen preserves a series of intact tail vertebrae curved sideways into an S-shape, suggesting that there was considerably more horizontal flexibility than once thought.

Classification

Velociraptor is a member of the group Eudromaeosauria, a derived sub-group of the larger family Dromaeosauridae. It is often placed within its own subfamily, Velociraptorinae. In phylogenetic taxonomy, Velociraptorinae is usually defined as "all dromaeosaurs more closely related to Velociraptor than to Dromaeosaurus." However, dromaeosaurid classification is highly variable. Originally, the subfamily Velociraptorinae was erected solely to contain Velociraptor. Other analyses have often included other genera, usually Deinonychus and Saurornitholestes, and more recently Tsaagan. Several studies published during the 2010s, including expanded versions of the analyses that found support for Velociraptorinae, have failed to resolve it as a distinct group, and have instead suggested it is a paraphyletic grade which gave rise to the Dromaeosaurinae. When first described in 1924, Velociraptor was placed in the family Megalosauridae, as was the case with most carnivorous dinosaurs at the time (Megalosauridae, like the genus Megalosaurus, functioned as a sort of 'wastebin' taxon, where many unrelated species were grouped together). As dinosaur discoveries multiplied, Velociraptor was later recognized as a dromaeosaurid. All dromaeosaurids have also been referred to the family Archaeopterygidae by at least one author (which would, in effect, make Velociraptor a flightless bird). In the past, other dromaeosaurid species, including Deinonychus antirrhopus and Saurornitholestes langstoni, have sometimes been classified in the genus Velociraptor. Since Velociraptor was the first to be named, these species were renamed Velociraptor antirrhopus and V. langstoni. At present, the only recognized species of Velociraptor are V. mongoliensis and V. osmolskae. However, several studies have found "V." osmolskae to be distantly related to V. mongoliensis. The position of Velociraptor within Eudromaeosauria was recovered in the phylogenetic analysis conducted by James G. Napoli and team in 2021 during the description of Kuru.

Paleobiology

Feathers

In 2007, Alan H. Turner and colleagues reported the presence of six quill knobs in the ulna of a referred Velociraptor specimen (IGM 100/981) from the Ukhaa Tolgod locality of the Djadochta Formation. Turner and colleagues interpreted the presence of feathers on Velociraptor as evidence against the idea that the larger, flightless maniraptorans lost their feathers secondarily due to larger body size.
Furthermore, they noted that quill knobs are almost never found in flightless bird species today, and that their presence in Velociraptor (presumed to have been flightless due to its relatively large size and short forelimbs) is evidence that the ancestors of dromaeosaurids could fly, making Velociraptor and other large members of this family secondarily flightless, though it is possible that the large wing feathers inferred in the ancestors of Velociraptor had a purpose other than flight. The feathers of the flightless Velociraptor may have been used for display, for covering the nest while brooding, or for added speed and thrust when running up inclined slopes. Because of the presence of another dromaeosaurid, Tsaagan, in Ukhaa Tolgod, Napoli and team have noted that the referral of this specimen to Velociraptor is currently subject to reexamination.

Senses

Examinations of the endocranium of Velociraptor indicate that it was able to detect and hear a wide range of sound frequencies (2,368–3,965 Hz) and could track prey with ease as a result. These examinations also further cemented the view that the dromaeosaur was an agile, swift predator. Fossil evidence that Velociraptor scavenged also indicates that it was an opportunistic as well as an actively predatory animal, feeding on carrion during times of drought or famine, when in poor health, or depending on its age.

Feeding

In 2020, Powers and colleagues re-examined the maxillae of several eudromaeosaur taxa, concluding that most Asian and North American eudromaeosaurs were separated by snout morphology and ecological strategies. They found the maxilla to be a reliable reference when inferring the shape of the premaxilla and the overall snout. For instance, most Asian species (namely velociraptorines) have elongated snouts based on the maxilla, indicating selective feeding in Velociraptor and its relatives, such as picking off small, fast prey. In contrast, most North American eudromaeosaurs, mostly dromaeosaurines, feature a robust and deep maxillary morphology. However, the large dromaeosaurine Achillobator is a unique exception among Asian taxa with its deep maxilla. In 2022, Manabu Sakamoto applied a Bayesian phylogenetic predictive modelling framework to estimate jaw muscle parameters and bite forces for several extinct archosaurs, based on skull widths and the phylogenetic relationships between groups. Among the studied taxa, Velociraptor was estimated to have a bite force of 304 N, lower than that of other dromaeosaurids such as Dromaeosaurus (885 N) or Deinonychus (706 N).

Predatory behavior

The "Fighting Dinosaurs" specimen, found in 1971, preserves a Velociraptor mongoliensis and Protoceratops andrewsi in combat and provides direct evidence of predatory behavior. When originally reported, it was hypothesized that the two animals drowned. However, as the animals were preserved in ancient sand dune deposits, it is now thought that they were buried in sand, either from a collapsing dune or in a sandstorm. Burial must have been extremely rapid, judging from the lifelike poses in which the animals were preserved. Parts of the Protoceratops are missing, which has been seen as evidence of scavenging by other animals.
Comparisons between the scleral rings of Velociraptor, Protoceratops, and modern birds and reptiles indicate that Velociraptor may have been nocturnal, while Protoceratops may have been cathemeral (active throughout the day during short intervals), suggesting that the fight may have occurred at twilight or during low-light conditions. The distinctive claw on the second digit of dromaeosaurids has traditionally been depicted as a slashing weapon, its assumed use being to cut and disembowel prey. In the "Fighting Dinosaurs" specimen, the Velociraptor lies underneath, with one of its sickle claws apparently embedded in the throat of its prey, while the beak of the Protoceratops is clamped down upon the right forelimb of its attacker. This suggests Velociraptor may have used its sickle claw to pierce vital organs of the throat, such as the jugular vein, carotid artery, or trachea (windpipe), rather than slashing the abdomen. The inside edge of the claw was rounded and not unusually sharp, which may have precluded any sort of cutting or slashing action, although only the bony core of the claw is preserved. The thick abdominal wall of skin and muscle of large prey species would have been difficult to slash without a specialized cutting surface. The slashing hypothesis was tested during a 2005 BBC documentary, The Truth About Killer Dinosaurs. The producers of the program created an artificial Velociraptor leg with a sickle claw and used a pork belly to simulate the dinosaur's prey. Though the sickle claw did penetrate the abdominal wall, it was unable to tear it open, indicating that the claw was not used to disembowel prey. Remains of Deinonychus, a closely related dromaeosaurid, have commonly been found in aggregations of several individuals. Deinonychus has also been found in association with the large ornithopod Tenontosaurus, which has been cited as evidence of cooperative (pack) hunting. However, the only solid evidence for social behavior of any kind among dromaeosaurids comes from a Chinese trackway which shows six individuals of a large species moving as a group. Although many isolated fossils of Velociraptor have been found in Mongolia, none were closely associated with other individuals. Therefore, while Velociraptor is commonly depicted as a pack hunter, as in Jurassic Park, there is only limited fossil evidence to support this theory for dromaeosaurids in general and none specific to Velociraptor itself. Dromaeosaur footprints in China suggest that a few other raptor genera may have hunted in packs, but no conclusive examples of pack behavior have been found. In 2011, Denver Fowler and colleagues suggested a new method by which Velociraptor and similar dromaeosaurs may have captured and restrained prey. This model, known as the "raptor prey restraint" (RPR) model of predation, proposes that dromaeosaurs killed their prey in a manner very similar to extant accipitrid birds of prey: by leaping onto their quarry, pinning it under their body weight, and gripping it tightly with the large, sickle-shaped claws. These researchers proposed that, like accipitrids, the dromaeosaur would then begin to feed on the animal while it was still alive, with prey death eventually resulting from blood loss and organ failure. This proposal is based primarily on comparisons between the morphology and proportions of the feet and legs of dromaeosaurs and those of several groups of extant birds of prey with known predatory behaviors.
Fowler found that the feet and legs of dromaeosaurs most closely resemble those of eagles and hawks, especially in terms of having an enlarged second claw and a similar range of grasping motion. The short metatarsus and foot strength, however, would have been more similar to those of owls. The RPR method of predation would be consistent with other aspects of Velociraptor's anatomy, such as its unusual jaw and arm morphology. The arms, which could exert a lot of force but were likely covered in long feathers, may have been used as flapping stabilizers for balance while atop a struggling prey animal, along with the stiff counterbalancing tail. The jaws, thought by Fowler and colleagues to be comparatively weak, would have been useful for saw-like biting motions, as in the modern-day Komodo dragon, which also has a weak bite, to finish off prey if the kicks were not powerful enough. These predatory adaptations working together may also have implications for the origin of flapping in paravians.

Scavenging behavior

In 2010, Hone and colleagues published a paper on their 2008 discovery of shed teeth of what they believed to be a Velociraptor near a tooth-marked jaw bone of what they believed to be a Protoceratops in the Bayan Mandahu Formation. The authors concluded that the find represented "late-stage carcass consumption by Velociraptor," as the predator would have eaten other parts of a freshly killed Protoceratops before biting at the jaw area. The evidence was seen as supporting the inference from the "Fighting Dinosaurs" fossil that Protoceratops was part of the diet of Velociraptor. In 2012, Hone and colleagues published a paper describing a Velociraptor specimen with the long bone of an azhdarchid pterosaur in its gut. This was interpreted as showing scavenging behaviour. A 2024 study by Tse, Miller, Pittman, and colleagues, focusing on the skull morphology and bite forces of various dromaeosaurids, found that Velociraptor had high bite force resistance compared to other dromaeosaurids such as Dromaeosaurus itself and Deinonychus, the latter of which was much larger. The authors theorized that high bite force resistance was an adaptation towards obtaining food through scavenging more often than through active predation in Velociraptor.

Metabolism

Velociraptor was warm-blooded to some degree, as it would have required a significant amount of energy to hunt. Modern animals that possess feathery or furry coats, as Velociraptor did, tend to be warm-blooded, since these coverings function as insulation. However, bone growth rates in dromaeosaurids and some early birds suggest a more moderate metabolism, compared with most modern warm-blooded mammals and birds. The kiwi is similar to dromaeosaurids in anatomy, feather type, bone structure, and even the narrow anatomy of the nasal passages (usually a key indicator of metabolism). The kiwi is a highly active, if specialized, flightless bird, with a stable body temperature and a fairly low resting metabolic rate, making it a good model for the metabolism of primitive birds and dromaeosaurids. In 2023, Seishiro Tada and colleagues examined the nasal cavities of ectotherm (cold-blooded) and endotherm (warm-blooded) species in order to evaluate the thermoregulatory physiology of non-avian dinosaurs against these groups.
They found that the nasal cavity of extant endotherms is larger relative to head size than that of extant ectotherms and, by reconstructing its nasal respiratory cavity, recovered Velociraptor below the extant endotherm level. Tada and colleagues suggested that Velociraptor and most other non-avian dinosaurs may not have possessed a fully developed nasal thermoregulation apparatus as modern endothermic animals do.

Paleopathology

In 1995, Norell and colleagues reported one V. mongoliensis skull bearing two parallel rows of small punctures on its frontal bones that, upon closer examination, match the spacing and size of Velociraptor teeth. They suggested that the wound was likely inflicted by another Velociraptor during a fight within the species. Because the bone shows no sign of healing near the bite wounds and the specimen was not scavenged, the individual was likely killed by this fatal wound. In 2001, Molnar and team noted that this specimen is MPC-D 100/976, hailing from the Tugrik Shireh locality, which has also yielded the Fighting Dinosaurs specimen. In 2012, David Hone and team reported another injured Velociraptor specimen (MPC-D 100/54, roughly a sub-adult individual), found with the bones of an azhdarchid pterosaur within its stomach cavity, that was carrying or recovering from a broken rib. Judging from the pterosaur bones, which were devoid of pitting or deformation from digestion, the Velociraptor died shortly after its meal, possibly from the earlier injury. Nevertheless, the team noted that the broken rib shows signs of bone healing.

Paleoenvironment

Bayan Mandahu Formation

In both the Bayan Mandahu and Djadochta formations many of the same genera were present, though they varied at the species level. These differences in species composition may be due to a natural barrier separating the two formations, which are relatively close to each other geographically. However, given the lack of any known barrier which would cause the specific faunal compositions found in these areas, it is more likely that those differences indicate a slight time difference. V. osmolskae lived alongside the ankylosaurid Pinacosaurus mephistocephalus; alvarezsaurid Linhenykus; closely related dromaeosaurid Linheraptor; oviraptorids Machairasaurus and Wulatelong; protoceratopsids Bagaceratops and Protoceratops hellenikorhinus; and troodontids Linhevenator, Papiliovenator, and Philovenator. Sediments across the formation indicate a depositional environment similar to that of the Djadochta Formation.

Djadochta Formation

Known specimens of Velociraptor mongoliensis have been recovered from the Djadochta Formation (also spelled Djadokhta), in the Mongolian province of Ömnögovi. This geological formation is estimated to date back to the Campanian stage (between 75 million and 71 million years ago) of the Late Cretaceous epoch. The abundant sediments of the Djadochta Formation (sands, sandstones, and caliche) were deposited by eolian (wind) processes in arid settings with fields of sand dunes and only intermittent streams, as indicated by very sparse fluvial (river-deposited) sedimentation, under a semi-arid climate. The Djadochta Formation is separated into a lower Bayn Dzak Member and an upper Turgrugyin Member. V. mongoliensis is known from both members, represented by numerous specimens.
The Bayn Dzak Member (mainly the Bayn Dzak locality) has yielded the oviraptorid Oviraptor; ankylosaurid Pinacosaurus grangeri; protoceratopsid Protoceratops andrewsi; and troodontid Saurornithoides. The younger Turgrugyin Member (mainly the Tugriken Shireh locality) has produced the bird Elsornis; dromaeosaurid Mahakala; ornithomimid Aepyornithomimus; and protoceratopsid Protoceratops andrewsi. V. mongoliensis has been found at many of the most famous and prolific Djadochta localities. The type specimen was discovered at the Flaming Cliffs site (a sublocality of the larger Bayn Dzak locality/region), while the "Fighting Dinosaurs" were found at the Tugrik Shire locality (also known as Tugrugeen Shireh, among many other spellings). The latter is renowned for its exceptional in situ fossil preservation. Based on deposits such as structureless sandstones, it has been concluded that a large number of specimens were buried alive during powerful sand-bearing events, common in these paleoenvironments.

Cultural significance

Velociraptor is commonly perceived as a vicious and cunning killer thanks to its portrayal in the 1990 novel Jurassic Park by Michael Crichton and its 1993 film adaptation, directed by Steven Spielberg. The "raptors" portrayed in Jurassic Park were actually modeled after the closely related dromaeosaurid Deinonychus. Paleontologists in both the novel and the film excavate a skeleton in Montana, far from the central Asian range of Velociraptor but characteristic of the range of Deinonychus. Crichton met with the discoverer of Deinonychus, John Ostrom, several times at Yale University to discuss details of the animal's possible range of behaviors and appearance. Crichton at one point apologetically told Ostrom that he had decided to use the name Velociraptor in place of Deinonychus because the former name was "more dramatic." According to Ostrom, Crichton stated that the Velociraptor of the novel was based on Deinonychus in almost every detail, and that only the name had been changed. The Jurassic Park filmmakers also requested all of Ostrom's published papers on Deinonychus during production. They portrayed the animals with the size, proportions, and snout shape of Deinonychus rather than Velociraptor. Production on Jurassic Park began before the discovery of the large dromaeosaurid Utahraptor was made public in 1991, but as Jody Duncan wrote about this discovery: "Later, after we had designed and built the raptor, there was a discovery of a raptor skeleton in Utah, which they labeled 'super-slasher.' They had uncovered the largest Velociraptor to date and it measured five-and-a-half-feet tall, just like ours. So we designed it, we built it, and then they discovered it. That still boggles my mind." Spielberg's name was briefly considered for the naming of the new dinosaur in exchange for funding of field work, but no agreement was reached. Jurassic Park and its sequel The Lost World: Jurassic Park were released before the discovery that dromaeosaurs had feathers, so the Velociraptor in both films were depicted as scaled and featherless. For Jurassic Park III, the male Velociraptor was given quill-like structures along the back of the head and neck, but these structures do not resemble the feathers that Velociraptor would have had in reality. For reasons of continuity, the Jurassic World sequel trilogy also ignored the feathers of Velociraptor, adhering to the designs from Jurassic Park.
However, the dromaeosaur Pyroraptor was feathered for Jurassic World Dominion, along with other changes such as stiffening the tail to account for ossified tendons and de-pronating the hands.
Biology and health sciences
Dinosaurs and prehistoric reptiles
null
32533
https://en.wikipedia.org/wiki/Euclidean%20vector
Euclidean vector
In mathematics, physics, and engineering, a Euclidean vector or simply a vector (sometimes called a geometric vector or spatial vector) is a geometric object that has magnitude (or length) and direction. Euclidean vectors can be added and scaled to form a vector space. A vector quantity is a vector-valued physical quantity, including units of measurement and possibly a support, formulated as a directed line segment. A vector is frequently depicted graphically as an arrow connecting an initial point A with a terminal point B, and denoted by $\overrightarrow{AB}$. A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "carrier". It was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can all be described with vectors. Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances (except, for example, position or displacement), their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors.

History

The vector concept, as it is known today, is the result of a gradual development over a period of more than 200 years. About a dozen people contributed significantly to its development. In 1835, Giusto Bellavitis abstracted the basic idea when he established the concept of equipollence. Working in a Euclidean plane, he made equipollent any pair of parallel line segments of the same length and orientation. Essentially, he realized an equivalence relation on the pairs of points (bipoints) in the plane, and thus erected the first space of vectors in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum of a real number (also called scalar) and a 3-dimensional vector. Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. As complex numbers use an imaginary unit to complement the real line, Hamilton considered the vector to be the imaginary part of a quaternion, writing a quaternion as q = s + xi + yj + zk, with scalar part s and vector part xi + yj + zk. Several other mathematicians developed vector-like systems in the middle of the nineteenth century, including Augustin Cauchy, Hermann Grassmann, August Möbius, Comte de Saint-Venant, and Matthew O'Brien. Grassmann's 1840 work Theorie der Ebbe und Flut (Theory of the Ebb and Flow) was the first system of spatial analysis similar to today's system, and had ideas corresponding to the cross product, scalar product, and vector differentiation. Grassmann's work was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton.
His 1867 Elementary Treatise on Quaternions included extensive treatment of the nabla or del operator ∇. In 1878, Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product. This approach made vector calculations available to engineers, and to others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901, Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures, which banished any mention of quaternions in the development of vector calculus.

Overview

In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a relative direction. It is formally defined as a directed line segment, or arrow, in a Euclidean space. In pure mathematics, a vector is defined more generally as any element of a vector space. In this context, vectors are abstract entities which may or may not be characterized by a magnitude and a direction. This generalized definition implies that the above-mentioned geometric entities are a special kind of abstract vectors, as they are elements of a special kind of vector space called Euclidean space. This particular article is about vectors strictly defined as arrows in Euclidean space. When it becomes necessary to distinguish these special vectors from vectors as defined in pure mathematics, they are sometimes referred to as geometric, spatial, or Euclidean vectors. A Euclidean vector may possess a definite initial point and terminal point; such a condition may be emphasized by calling the result a bound vector. When only the magnitude and direction of the vector matter, and the particular initial or terminal points are of no importance, the vector is called a free vector. The distinction between bound and free vectors is especially relevant in mechanics, where a force applied to a body has a point of contact (see resultant force and couple). Two arrows $\overrightarrow{AB}$ and $\overrightarrow{A'B'}$ in space represent the same free vector if they have the same magnitude and direction: that is, they are equipollent if the quadrilateral ABB′A′ is a parallelogram. If the Euclidean space is equipped with a choice of origin, then a free vector is equivalent to the bound vector of the same magnitude and direction whose initial point is the origin. The term vector also has generalizations to higher dimensions, and to more formal approaches with much wider applications.

Further information

In classical Euclidean geometry (i.e., synthetic geometry), vectors were introduced (during the 19th century) as equivalence classes under equipollence, of ordered pairs of points; two pairs (A, B) and (C, D) being equipollent if the points A, B, D, C, in this order, form a parallelogram. Such an equivalence class is called a vector, more precisely, a Euclidean vector. The equivalence class of (A, B) is often denoted $\overrightarrow{AB}$. A Euclidean vector is thus an equivalence class of directed segments with the same magnitude (e.g., the length of the line segment AB) and same direction (e.g., the direction from A to B).
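As a minimal computational illustration of equipollence (the coordinates below are arbitrary), a free vector can be modelled as the coordinate difference between the head and tail of a directed segment; two segments with the same difference represent the same free vector:

```python
import numpy as np

# Two directed segments are equipollent (carry the same free vector)
# exactly when the coordinate differences of their endpoints agree.
A, B = np.array([1.0, 2.0]), np.array([4.0, 6.0])
A2, B2 = np.array([-3.0, 0.0]), np.array([0.0, 4.0])

def free_vector(tail, head):
    """Free vector carried by the directed segment tail -> head."""
    return head - tail

print(free_vector(A, B))    # [3. 4.]
print(free_vector(A2, B2))  # [3. 4.] -- the same free vector
print(np.allclose(free_vector(A, B), free_vector(A2, B2)))  # True: equipollent
```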
In physics, Euclidean vectors are used to represent physical quantities that have both magnitude and direction, but are not located at a specific place, in contrast to scalars, which have no direction. For example, velocity, forces, and acceleration are represented by vectors. In modern geometry, Euclidean spaces are often defined from linear algebra. More precisely, a Euclidean space is defined as a set E to which is associated an inner product space of finite dimension over the reals, $\vec{E}$, and a group action of the additive group of $\vec{E}$ which is free and transitive (see Affine space for details of this construction). The elements of $\vec{E}$ are called translations. It has been proven that the two definitions of Euclidean spaces are equivalent, and that the equivalence classes under equipollence may be identified with translations. Sometimes, Euclidean vectors are considered without reference to a Euclidean space. In this case, a Euclidean vector is an element of a normed vector space of finite dimension over the reals, or, typically, an element of the real coordinate space ℝⁿ equipped with the dot product. This makes sense, as the addition in such a vector space acts freely and transitively on the vector space itself. That is, ℝⁿ is a Euclidean space, with itself as an associated vector space, and the dot product as an inner product. The Euclidean space ℝⁿ is often presented as the standard Euclidean space of dimension n. This is motivated by the fact that every Euclidean space of dimension n is isomorphic to the Euclidean space ℝⁿ. More precisely, given such a Euclidean space, one may choose any point O as an origin. By the Gram–Schmidt process, one may also find an orthonormal basis of the associated vector space (a basis such that the inner product of two basis vectors is 0 if they are different and 1 if they are equal). This defines Cartesian coordinates of any point P of the space, as the coordinates on this basis of the vector $\overrightarrow{OP}$. These choices define an isomorphism of the given Euclidean space onto ℝⁿ, by mapping any point to the n-tuple of its Cartesian coordinates, and every vector to its coordinate vector.

Examples in one dimension

Since the physicist's concept of force has a direction and a magnitude, it may be seen as a vector. As an example, consider a rightward force F of 15 newtons. If the positive axis is also directed rightward, then F is represented by the vector 15 N, and if positive points leftward, then the vector for F is −15 N. In either case, the magnitude of the vector is 15 N. Likewise, the vector representation of a displacement Δs of 4 meters would be 4 m or −4 m, depending on its direction, and its magnitude would be 4 m regardless.

In physics and engineering

Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has magnitude, has direction, and adheres to the rules of vector addition. An example is velocity, the magnitude of which is speed. For instance, the velocity 5 meters per second upward could be represented by the vector (0, 5) (in 2 dimensions with the positive y-axis as 'up'). Another quantity represented by a vector is force, since it has a magnitude and direction and follows the rules of vector addition. Vectors also describe many other physical quantities, such as linear displacement, displacement, linear acceleration, angular acceleration, linear momentum, and angular momentum. Other physical vectors, such as the electric and magnetic field, are represented as a system of vectors at each point of a physical space; that is, a vector field.
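As a small illustrative sketch (the field chosen here is hypothetical, not taken from the text), a vector field can be modelled as a function that assigns a vector to each point of space:

```python
import numpy as np

# A vector field assigns a vector to each point. An illustrative example:
# a simple rotational field v(x, y) = (-y, x).
def field(point):
    x, y = point
    return np.array([-y, x])

# Sampling the field at a few points gives "a system of vectors at each
# point of a physical space".
for p in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    v = field(p)
    print(f"at {p}: vector {tuple(v)}, magnitude {np.linalg.norm(v):.3f}")
```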
Examples of quantities that have magnitude and direction, but fail to follow the rules of vector addition, are angular displacement and electric current. Consequently, these are not vectors.

In Cartesian space

In the Cartesian coordinate system, a bound vector can be represented by identifying the coordinates of its initial and terminal point. For instance, the points A = (1, 0, 0) and B = (0, 1, 0) in space determine the bound vector $\overrightarrow{AB}$ pointing from the point x = 1 on the x-axis to the point y = 1 on the y-axis. In Cartesian coordinates, a free vector may be thought of in terms of a corresponding bound vector, in this sense, whose initial point has the coordinates of the origin O = (0, 0, 0). It is then determined by the coordinates of that bound vector's terminal point. Thus the free vector represented by (1, 0, 0) is a vector of unit length pointing along the direction of the positive x-axis. This coordinate representation of free vectors allows their algebraic features to be expressed in a convenient numerical fashion. For example, the sum of the two (free) vectors (1, 2, 3) and (−2, 0, 4) is the (free) vector (1 − 2, 2 + 0, 3 + 4) = (−1, 2, 7).

Euclidean and affine vectors

In the geometrical and physical settings, it is sometimes possible to associate, in a natural way, a length or magnitude and a direction to vectors. In addition, the notion of direction is strictly associated with the notion of an angle between two vectors. If the dot product of two vectors is defined (a scalar-valued product of two vectors), then it is also possible to define a length; the dot product gives a convenient algebraic characterization of both angle (a function of the dot product between any two non-zero vectors) and length (the square root of the dot product of a vector with itself). In three dimensions, it is further possible to define the cross product, which supplies an algebraic characterization of the area and orientation in space of the parallelogram defined by two vectors (used as sides of the parallelogram). In any dimension (and, in particular, higher dimensions), it is possible to define the exterior product, which (among other things) supplies an algebraic characterization of the area and orientation in space of the n-dimensional parallelotope defined by n vectors. In a pseudo-Euclidean space, a vector's squared length can be positive, negative, or zero. An important example is Minkowski space (which is important to our understanding of special relativity). However, it is not always possible or desirable to define the length of a vector. This more general type of spatial vector is the subject of vector spaces (for free vectors) and affine spaces (for bound vectors, as each represented by an ordered pair of "points"). One physical example comes from thermodynamics, where many quantities of interest can be considered vectors in a space with no notion of length or angle.

Generalizations

In physics, as well as mathematics, a vector is often identified with a tuple of components, or list of numbers, that act as scalar coefficients for a set of basis vectors. When the basis is transformed, for example by rotation or stretching, then the components of any vector in terms of that basis also transform in an opposite sense. The vector itself has not changed, but the basis has, so the components of the vector must change to compensate. The vector is called covariant or contravariant, depending on how the transformation of the vector's components is related to the transformation of the basis.
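A minimal numerical sketch of this compensation, assuming (as one common convention) that the columns of a matrix M give the new basis vectors in terms of the old ones:

```python
import numpy as np

# If the basis is transformed by an invertible matrix M (columns = new basis
# expressed in the old one), the components of a fixed vector must transform
# by the inverse, so the vector itself is unchanged.
M = np.array([[2.0, 0.0],          # new basis: old basis stretched by 2 in x
              [0.0, 1.0]])
old_components = np.array([3.0, 4.0])

# Components in the new basis change "contravariantly", i.e. by M^-1:
new_components = np.linalg.inv(M) @ old_components
print(new_components)              # [1.5 4. ]

# Reconstructing the vector from new components and the new basis recovers
# the original vector: the component change cancels the basis change.
print(M @ new_components)          # [3. 4.] -- the same geometric vector
```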
In general, contravariant vectors are "regular vectors" with units of distance (such as a displacement), or distance times some other unit (such as velocity or acceleration); covariant vectors, on the other hand, have units of one-over-distance, such as a gradient. If you change units (a special case of a change of basis) from meters to millimeters, a scale factor of 1/1000, a displacement of 1 m becomes 1000 mm, a contravariant change in numerical value. In contrast, a gradient of 1 K/m becomes 0.001 K/mm, a covariant change in value (for more, see covariance and contravariance of vectors). Tensors are another type of quantity that behave in this way; a vector is one type of tensor. In pure mathematics, a vector is any element of a vector space over some field and is often represented as a coordinate vector. The vectors described in this article are a very special case of this general definition, because they are contravariant with respect to the ambient space. Contravariance captures the physical intuition behind the idea that a vector has "magnitude and direction".

Representations

Vectors are usually denoted in lowercase boldface, as in u, v, and w, or in lowercase italic boldface, as in a. (Uppercase letters are typically used to represent matrices.) Other conventions include an arrow over the symbol, as in $\vec{a}$, or an underline beneath it, especially in handwriting. Alternatively, some use a tilde (~) or a wavy underline drawn beneath the symbol, which is a convention for indicating boldface type. If the vector represents a directed distance or displacement from a point A to a point B, it can also be denoted as $\overrightarrow{AB}$ or AB. In German literature, it was especially common to represent vectors with small fraktur letters such as 𝔞. Vectors are usually shown in graphs or other diagrams as arrows (directed line segments). Here, the point A is called the origin, tail, base, or initial point, and the point B is called the head, tip, endpoint, terminal point, or final point. The length of the arrow is proportional to the vector's magnitude, while the direction in which the arrow points indicates the vector's direction. On a two-dimensional diagram, a vector perpendicular to the plane of the diagram is sometimes desired. These vectors are commonly shown as small circles. A circle with a dot at its centre (Unicode U+2299 ⊙) indicates a vector pointing out of the front of the diagram, toward the viewer. A circle with a cross inscribed in it (Unicode U+2297 ⊗) indicates a vector pointing into and behind the diagram. These can be thought of as viewing the tip of an arrow head-on and viewing the flights of an arrow from the back. In order to calculate with vectors, the graphical representation may be too cumbersome. Vectors in an n-dimensional Euclidean space can be represented as coordinate vectors in a Cartesian coordinate system. The endpoint of a vector can be identified with an ordered list of n real numbers (an n-tuple). These numbers are the coordinates of the endpoint of the vector, with respect to a given Cartesian coordinate system, and are typically called the scalar components (or scalar projections) of the vector on the axes of the coordinate system. As an example in two dimensions, the vector from the origin O = (0, 0) to the point A = (2, 3) is simply written as a = (2, 3). The notion that the tail of the vector coincides with the origin is implicit and easily understood. Thus, the more explicit notation $\overrightarrow{OA}$ is usually deemed not necessary (and is indeed rarely used).
In three-dimensional Euclidean space (or ℝ³), vectors are identified with triples of scalar components: a = (a1, a2, a3), also written a = (ax, ay, az). This can be generalised to n-dimensional Euclidean space (or ℝⁿ). These numbers are often arranged into a column vector or row vector, particularly when dealing with matrices: a is written either as a 3 × 1 column with entries a1, a2, a3 or as the corresponding 1 × 3 row. Another way to represent a vector in n dimensions is to introduce the standard basis vectors. For instance, in three dimensions, there are three of them: e1 = (1, 0, 0), e2 = (0, 1, 0), and e3 = (0, 0, 1). These have the intuitive interpretation as vectors of unit length pointing up the x-, y-, and z-axis of a Cartesian coordinate system, respectively. In terms of these, any vector a in ℝ³ can be expressed in the form a = a1e1 + a2e2 + a3e3, or a = a1 + a2 + a3, where a1 = a1e1, a2 = a2e2, a3 = a3e3 are called the vector components (or vector projections) of a on the basis vectors or, equivalently, on the corresponding Cartesian axes x, y, and z, while a1, a2, a3 are the respective scalar components (or scalar projections). In introductory physics textbooks, the standard basis vectors are often denoted i, j, k instead (or x̂, ŷ, ẑ, in which the hat symbol typically denotes unit vectors). In this case, the scalar and vector components are denoted respectively ax, ay, az, and ax, ay, az (note the difference in boldface). Thus, a = axi + ayj + azk. The notation ei is compatible with the index notation and the summation convention commonly used in higher level mathematics, physics, and engineering.

Decomposition or resolution

As explained above, a vector is often described by a set of vector components that add up to form the given vector. Typically, these components are the projections of the vector on a set of mutually perpendicular reference axes (basis vectors). The vector is said to be decomposed or resolved with respect to that set. The decomposition or resolution of a vector into components is not unique, because it depends on the choice of the axes on which the vector is projected. Moreover, the use of Cartesian unit vectors such as x̂, ŷ, ẑ as a basis in which to represent a vector is not mandated. Vectors can also be expressed in terms of an arbitrary basis, including the unit vectors of a cylindrical coordinate system (ρ̂, φ̂, ẑ) or spherical coordinate system (r̂, θ̂, φ̂). The latter two choices are more convenient for solving problems which possess cylindrical or spherical symmetry, respectively. The choice of a basis does not affect the properties of a vector or its behaviour under transformations. A vector can also be broken up with respect to "non-fixed" basis vectors that change their orientation as a function of time or space. For example, a vector in three-dimensional space can be decomposed with respect to two axes, respectively normal and tangent to a surface. Moreover, the radial and tangential components of a vector relate to the radius of rotation of an object. The former is parallel to the radius and the latter is orthogonal to it. In these cases, each of the components may be in turn decomposed with respect to a fixed coordinate system or basis set (e.g., a global coordinate system, or inertial reference frame).
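A short sketch of such a decomposition, resolving an illustrative vector into radial and tangential components using dot products (all numerical values are hypothetical):

```python
import numpy as np

# Decomposing a vector with respect to a chosen orthonormal basis: the scalar
# component along each basis vector is a dot product, and the resulting
# vector components sum back to the original vector. Here a velocity-like
# vector is resolved into radial and tangential parts at a given position.
position = np.array([3.0, 4.0])
v = np.array([1.0, 2.0])

e_radial = position / np.linalg.norm(position)      # unit radial vector
e_tangent = np.array([-e_radial[1], e_radial[0]])   # unit tangent (90° CCW)

v_radial = np.dot(v, e_radial) * e_radial           # parallel to the radius
v_tangent = np.dot(v, e_tangent) * e_tangent        # orthogonal to it

print(v_radial, v_tangent)
print(np.allclose(v_radial + v_tangent, v))         # True: components re-sum
```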
Properties and operations

The following section uses the Cartesian coordinate system with basis vectors e1, e2, e3 and assumes that all vectors have the origin as a common base point. A vector a will be written as a = a1e1 + a2e2 + a3e3.

Equality

Two vectors are said to be equal if they have the same magnitude and direction. Equivalently, they will be equal if their coordinates are equal. So two vectors a = a1e1 + a2e2 + a3e3 and b = b1e1 + b2e2 + b3e3 are equal if a1 = b1, a2 = b2, and a3 = b3.

Opposite, parallel, and antiparallel vectors

Two vectors are opposite if they have the same magnitude but opposite direction; so two vectors a and b are opposite if a1 = −b1, a2 = −b2, and a3 = −b3. Two vectors are equidirectional (or codirectional) if they have the same direction but not necessarily the same magnitude. Two vectors are parallel if they have either the same or opposite direction, but not necessarily the same magnitude; two vectors are antiparallel if they have strictly opposite direction, but not necessarily the same magnitude.

Addition and subtraction

The sum of two vectors a and b may be defined as a + b = (a1 + b1)e1 + (a2 + b2)e2 + (a3 + b3)e3. The resulting vector is sometimes called the resultant vector of a and b. The addition may be represented graphically by placing the tail of the arrow b at the head of the arrow a, and then drawing an arrow from the tail of a to the head of b. The new arrow drawn represents the vector a + b. This addition method is sometimes called the parallelogram rule because a and b form the sides of a parallelogram and a + b is one of the diagonals. If a and b are bound vectors that have the same base point, this point will also be the base point of a + b. One can check geometrically that a + b = b + a and (a + b) + c = a + (b + c). The difference of a and b is a − b = (a1 − b1)e1 + (a2 − b2)e2 + (a3 − b3)e3. Subtraction of two vectors can be geometrically illustrated as follows: to subtract b from a, place the tails of a and b at the same point, and then draw an arrow from the head of b to the head of a. This new arrow represents the vector (−b) + a, with (−b) being the opposite of b; and (−b) + a = a − b.

Scalar multiplication

A vector may also be multiplied, or re-scaled, by any real number r. In the context of conventional vector algebra, these real numbers are often called scalars (from scale) to distinguish them from vectors. The operation of multiplying a vector by a scalar is called scalar multiplication. The resulting vector is ra = (ra1)e1 + (ra2)e2 + (ra3)e3. Intuitively, multiplying by a scalar r stretches a vector out by a factor of r. Geometrically, this can be visualized (at least in the case when r is an integer) as placing r copies of the vector in a line where the endpoint of one vector is the initial point of the next vector. If r is negative, then the vector changes direction: it flips around by an angle of 180°. Two examples are r = −1, which reverses a vector, and r = 2, which doubles its length. Scalar multiplication is distributive over vector addition in the following sense: r(a + b) = ra + rb for all vectors a and b and all scalars r. One can also show that a − b = a + (−1)b.

Length

The length, magnitude or norm of the vector a is denoted by ‖a‖ or, less commonly, |a|, which is not to be confused with the absolute value (a scalar "norm"). The length of the vector a can be computed with the Euclidean norm ‖a‖ = √(a1² + a2² + a3²), which is a consequence of the Pythagorean theorem, since the basis vectors e1, e2, e3 are orthogonal unit vectors. This happens to be equal to the square root of the dot product, discussed below, of the vector with itself: ‖a‖ = √(a ∙ a).

Unit vector

A unit vector is any vector with a length of one; normally unit vectors are used simply to indicate direction. A vector of arbitrary length can be divided by its length to create a unit vector. This is known as normalizing a vector. A unit vector is often indicated with a hat, as in â. To normalize a vector a, scale the vector by the reciprocal of its length ‖a‖. That is: â = a/‖a‖ = (a1/‖a‖)e1 + (a2/‖a‖)e2 + (a3/‖a‖)e3.
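The coordinate rules above translate directly into code; a minimal sketch with NumPy, using arbitrary sample vectors:

```python
import numpy as np

# The componentwise operations just defined. NumPy arrays already implement
# componentwise addition, subtraction, and scaling.
a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.0, 4.0])

print(a + b)                       # componentwise sum:        [-1.  2.  7.]
print(a - b)                       # componentwise difference: [ 3.  2. -1.]
print(2.0 * a)                     # scalar multiplication:    [ 2.  4.  6.]

length = np.sqrt(np.dot(a, a))     # Euclidean norm via the dot product
print(np.isclose(length, np.linalg.norm(a)))   # True: the same quantity

a_hat = a / length                 # normalization yields a unit vector
print(np.linalg.norm(a_hat))       # 1.0 (up to floating-point rounding)
```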
Zero vector

The zero vector is the vector with length zero. Written out in coordinates, the vector is (0, 0, 0), and it is commonly denoted $\vec{0}$, 0 (in boldface), or simply 0. Unlike any other vector, it has an arbitrary or indeterminate direction, and cannot be normalized (that is, there is no unit vector that is a multiple of the zero vector). The sum of the zero vector with any vector a is a (that is, 0 + a = a).

Dot product

The dot product of two vectors a and b (sometimes called the inner product, or, since its result is a scalar, the scalar product) is denoted by a ∙ b, and is defined as a ∙ b = ‖a‖‖b‖ cos θ, where θ is the measure of the angle between a and b (see trigonometric function for an explanation of cosine). Geometrically, this means that a and b are drawn with a common start point, and then the length of a is multiplied with the length of the component of b that points in the same direction as a. The dot product can also be defined as the sum of the products of the components of each vector: a ∙ b = a1b1 + a2b2 + a3b3.

Cross product

The cross product (also called the vector product or outer product) is only meaningful in three or seven dimensions. The cross product differs from the dot product primarily in that the result of the cross product of two vectors is a vector. The cross product, denoted a × b, is a vector perpendicular to both a and b and is defined as a × b = (‖a‖‖b‖ sin θ) n, where θ is the measure of the angle between a and b, and n is a unit vector perpendicular to both a and b which completes a right-handed system. The right-handedness constraint is necessary because there exist two unit vectors that are perpendicular to both a and b, namely, n and (−n). The cross product a × b is defined so that a, b, and a × b also become a right-handed system (although a and b are not necessarily orthogonal). This is the right-hand rule. The length of a × b can be interpreted as the area of the parallelogram having a and b as sides. The cross product can be written in components as a × b = (a2b3 − a3b2)e1 + (a3b1 − a1b3)e2 + (a1b2 − a2b1)e3. For arbitrary choices of spatial orientation (that is, allowing for left-handed as well as right-handed coordinate systems) the cross product of two vectors is a pseudovector instead of a vector (see below).

Scalar triple product

The scalar triple product (also called the box product or mixed triple product) is not really a new operator, but a way of applying the other two multiplication operators to three vectors. The scalar triple product is sometimes denoted by (a b c) and defined as (a b c) = a ∙ (b × c). It has three primary uses. First, the absolute value of the box product is the volume of the parallelepiped which has edges defined by the three vectors. Second, the scalar triple product is zero if and only if the three vectors are linearly dependent, which can be easily proved by considering that in order for the three vectors to not make a volume, they must all lie in the same plane. Third, the box product is positive if and only if the three vectors a, b and c are right-handed. In components (with respect to a right-handed orthonormal basis), if the three vectors are thought of as rows (or columns, but in the same order), the scalar triple product is simply the determinant of the 3-by-3 matrix having the three vectors as rows. The scalar triple product is linear in all three entries and anti-symmetric in the following sense: (a b c) = (c a b) = (b c a) = −(a c b) = −(b a c) = −(c b a).
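A quick numerical check of the three products and their basic properties (the sample vectors are arbitrary):

```python
import numpy as np

# Dot product, cross product, and scalar triple product on sample vectors.
a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.0, 4.0])
c = np.array([0.0, 1.0, 1.0])

print(np.dot(a, b))        # a1*b1 + a2*b2 + a3*b3 = 10.0
print(np.cross(a, b))      # perpendicular to both a and b: [ 8. -10.  4.]

# The cross product is orthogonal to both of its arguments:
print(np.dot(a, np.cross(a, b)))   # 0.0
print(np.dot(b, np.cross(a, b)))   # 0.0

# Scalar triple product two ways: a . (b x c) equals the determinant of the
# matrix with a, b, c as rows (a right-handed orthonormal basis is assumed).
box1 = np.dot(a, np.cross(b, c))
box2 = np.linalg.det(np.array([a, b, c]))
print(np.isclose(box1, box2))      # True
```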
Conversion between multiple Cartesian bases

All examples thus far have dealt with vectors expressed in terms of the same basis, namely, the e basis {e1, e2, e3}. However, a vector can be expressed in terms of any number of different bases that are not necessarily aligned with each other, and still remain the same vector. In the e basis, a vector a is expressed, by definition, as a = p e1 + q e2 + r e3. The scalar components in the e basis are, by definition, p = a ∙ e1, q = a ∙ e2, and r = a ∙ e3. In another orthonormal basis n = {n1, n2, n3} that is not necessarily aligned with e, the vector a is expressed as a = u n1 + v n2 + w n3, and the scalar components in the n basis are, by definition, u = a ∙ n1, v = a ∙ n2, and w = a ∙ n3. The values of p, q, r, and u, v, w relate to the unit vectors in such a way that the resulting vector sum is exactly the same physical vector a in both cases. It is common to encounter vectors known in terms of different bases (for example, one basis fixed to the Earth and a second basis fixed to a moving vehicle). In such a case it is necessary to develop a method to convert between bases so the basic vector operations such as addition and subtraction can be performed. One way to express u, v, w in terms of p, q, r is to use column matrices along with a direction cosine matrix containing the information that relates the two bases. Such an expression can be formed by substitution of the above equations to form u = (p e1 + q e2 + r e3) ∙ n1, v = (p e1 + q e2 + r e3) ∙ n2, and w = (p e1 + q e2 + r e3) ∙ n3. Distributing the dot-multiplication gives u = p(e1 ∙ n1) + q(e2 ∙ n1) + r(e3 ∙ n1), and likewise for v and w. Replacing each dot product with a unique scalar gives u = c11p + c12q + c13r, v = c21p + c22q + c23r, and w = c31p + c32q + c33r, and these equations can be expressed as the single matrix equation (u, v, w)ᵀ = C (p, q, r)ᵀ, where C is the 3-by-3 matrix of the coefficients cjk. This matrix equation relates the scalar components of a in the n basis (u, v, and w) with those in the e basis (p, q, and r). Each matrix element cjk is the direction cosine relating nj to ek. The term direction cosine refers to the cosine of the angle between two unit vectors, which is also equal to their dot product. Therefore, cjk = nj ∙ ek. By referring collectively to e1, e2, e3 as the e basis and to n1, n2, n3 as the n basis, the matrix containing all the cjk is known as the "transformation matrix from e to n", or the "rotation matrix from e to n" (because it can be imagined as the "rotation" of a vector from one basis to another), or the "direction cosine matrix from e to n" (because it contains direction cosines). The properties of a rotation matrix are such that its inverse is equal to its transpose. This means that the "rotation matrix from e to n" is the transpose of the "rotation matrix from n to e". The properties of a direction cosine matrix C are: the determinant is unity, |C| = 1; the inverse is equal to the transpose; and the rows and columns are orthogonal unit vectors, so the dot products of distinct rows (or of distinct columns) are zero. The advantage of this method is that a direction cosine matrix can usually be obtained independently by using Euler angles or a quaternion to relate the two vector bases, so the basis conversions can be performed directly, without having to work out all the dot products described above. By applying several matrix multiplications in succession, any vector can be expressed in any basis so long as the set of direction cosines relating the successive bases is known.

Other dimensions

With the exception of the cross and triple products, the above formulae generalise to two dimensions and higher dimensions. For example, addition generalises to two dimensions as a + b = (a1 + b1)e1 + (a2 + b2)e2, and in four dimensions as a + b = (a1 + b1)e1 + (a2 + b2)e2 + (a3 + b3)e3 + (a4 + b4)e4. The cross product does not readily generalise to other dimensions, though the closely related exterior product does, whose result is a bivector. In two dimensions this is simply a pseudoscalar, (a1b2 − a2b1)e1∧e2. A seven-dimensional cross product is similar to the cross product in that its result is a vector orthogonal to the two arguments; there is however no natural way of selecting one of the possible such products.
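A minimal sketch of the conversion recipe described above, assuming as an example an n basis obtained by rotating the e basis 30° about e3:

```python
import numpy as np

# Convert the components of a fixed vector from the e basis to an n basis
# using the direction cosine matrix C with entries c_jk = n_j . e_k.
theta = np.radians(30.0)
e = np.eye(3)                                  # rows: e1, e2, e3
n = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])   # rows: n1, n2, n3

C = n @ e.T                                    # c_jk = n_j . e_k
pqr = np.array([1.0, 2.0, 3.0])                # components (p, q, r) in e
uvw = C @ pqr                                  # components (u, v, w) in n
print(uvw)

# A rotation matrix's inverse is its transpose, so converting back is cheap:
print(np.allclose(C.T @ uvw, pqr))             # True
print(np.isclose(np.linalg.det(C), 1.0))       # the determinant is unity
```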
If it represents, for example, a force, the "scale" is of physical dimension length/force. Thus there is typically consistency in scale among quantities of the same dimension, but otherwise scale ratios may vary; for example, if "1 newton" and "5 m" are both represented with an arrow of 2 cm, the scales are 1 m:50 N and 1:250 respectively. Equal length of vectors of different dimension has no particular significance unless there is some proportionality constant inherent in the system that the diagram represents. Also the length of a unit vector (of dimension length, not length/force, etc.) has no coordinate-system-invariant significance. Vector-valued functions Often in areas of physics and mathematics, a vector evolves in time, meaning that it depends on a time parameter t. For instance, if r represents the position vector of a particle, then r(t) gives a parametric representation of the trajectory of the particle. Vector-valued functions can be differentiated and integrated by differentiating or integrating the components of the vector, and many of the familiar rules from calculus continue to hold for the derivative and integral of vector-valued functions. Position, velocity and acceleration The position of a point x = (x1, x2, x3) in three-dimensional space can be represented as a position vector whose base point is the origin, x = x1 e1 + x2 e2 + x3 e3. The position vector has dimensions of length. Given two points x = (x1, x2, x3), y = (y1, y2, y3), their displacement is the vector d = y − x = (y1 − x1) e1 + (y2 − x2) e2 + (y3 − x3) e3, which specifies the position of y relative to x. The length of this vector gives the straight-line distance from x to y. Displacement has the dimensions of length. The velocity v of a point or particle is a vector; its length gives the speed. For constant velocity the position at time t will be x(t) = x0 + t v, where x0 is the position at time t = 0. Velocity is the time derivative of position, v = dx/dt. Its dimensions are length/time. Acceleration a of a point is a vector which is the time derivative of velocity, a = dv/dt. Its dimensions are length/time2. Force, energy, work Force is a vector with dimensions of mass×length/time2, measured in newtons (kg⋅m⋅s−2), and Newton's second law is the scalar multiplication F = m a. Work is the dot product of force and displacement, W = F ∙ d. Vectors, pseudovectors, and transformations An alternative characterization of Euclidean vectors, especially in physics, describes them as lists of quantities which behave in a certain way under a coordinate transformation. A contravariant vector is required to have components that "transform opposite to the basis" under changes of basis. The vector itself does not change when the basis is transformed; instead, the components of the vector make a change that cancels the change in the basis. In other words, if the reference axes (and the basis derived from them) were rotated in one direction, the component representation of the vector would rotate in the opposite way to generate the same final vector. Similarly, if the reference axes were stretched in one direction, the components of the vector would reduce in an exactly compensating way. Mathematically, if the basis undergoes a transformation described by an invertible matrix M, so that a coordinate vector x is transformed to x′ = Mx, then a contravariant vector v must be similarly transformed via v′ = Mv. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities.
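The transformation requirement just stated can be demonstrated numerically. The sketch below (assuming NumPy; the matrix M, positions, and time step are arbitrary example values) shows that velocity components, obtained as a difference quotient of position coordinates, transform with the same matrix M as the coordinates themselves.

```python
import numpy as np

# An invertible change of coordinates x' = M x (an arbitrary example matrix)
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

# Positions of a particle at two nearby times, in the old coordinates
x_t0 = np.array([1.0, 2.0, 3.0])
x_t1 = np.array([1.1, 2.4, 2.9])
dt = 0.1

# Velocity components in the old coordinates (finite-difference approximation)
v_old = (x_t1 - x_t0) / dt

# Transform the positions first and recompute the velocity ...
v_from_new_positions = (M @ x_t1 - M @ x_t0) / dt
# ... or transform the velocity components directly: both agree,
# i.e. velocity components transform contravariantly, v' = M v.
v_new = M @ v_old
assert np.allclose(v_from_new_positions, v_new)
print(v_old, v_new)
```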
For example, if v consists of the x, y, and z-components of velocity, then v is a contravariant vector: if the coordinates of space are stretched, rotated, or twisted, then the components of the velocity transform in the same way. On the other hand, for instance, a triple consisting of the length, width, and height of a rectangular box could make up the three components of an abstract vector, but this vector would not be contravariant, since rotating the box does not change the box's length, width, and height. Examples of contravariant vectors include displacement, velocity, electric field, momentum, force, and acceleration. In the language of differential geometry, the requirement that the components of a vector transform according to the same matrix of the coordinate transition is equivalent to defining a contravariant vector to be a tensor of contravariant rank one. Alternatively, a contravariant vector is defined to be a tangent vector, and the rules for transforming a contravariant vector follow from the chain rule. Some vectors transform like contravariant vectors, except that when they are reflected through a mirror, they gain a minus sign. A transformation that switches right-handedness to left-handedness and vice versa like a mirror does is said to change the orientation of space. A vector which gains a minus sign when the orientation of space changes is called a pseudovector or an axial vector. Ordinary vectors are sometimes called true vectors or polar vectors to distinguish them from pseudovectors. Pseudovectors occur most frequently as the cross product of two ordinary vectors. One example of a pseudovector is angular velocity. Driving in a car, and looking forward, each of the wheels has an angular velocity vector pointing to the left. If the world is reflected in a mirror which switches the left and right side of the car, the reflection of this angular velocity vector points to the right, but the angular velocity vector of the wheel still points to the left, corresponding to the minus sign. Other examples of pseudovectors include magnetic field, torque, or more generally any cross product of two (true) vectors. This distinction between vectors and pseudovectors is often ignored, but it becomes important in studying symmetry properties.
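The sign flip that distinguishes a pseudovector from a true vector can be checked directly: under a reflection (an orientation-reversing transformation), the cross product of two transformed vectors picks up an extra factor of det(M) = −1 relative to the transformed cross product. A minimal sketch (assuming NumPy; the vectors and the choice of mirror plane are arbitrary) follows.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

# Reflection through the x-y plane: flips the z component, det(M) = -1
M = np.diag([1.0, 1.0, -1.0])

# True (polar) vectors transform directly: a -> M a
Ma, Mb = M @ a, M @ b

# For a proper rotation R, (Ra) x (Rb) = R (a x b); for an orientation-reversing
# orthogonal M, an extra factor det(M) = -1 appears, which is the defining
# behaviour of a pseudovector (axial vector) such as a cross product.
lhs = np.cross(Ma, Mb)
rhs = np.linalg.det(M) * (M @ np.cross(a, b))
assert np.allclose(lhs, rhs)
print(lhs, rhs)
```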
Mathematics
Algebra
null
32541
https://en.wikipedia.org/wiki/Vitamin%20K
Vitamin K
Vitamin K is a family of structurally similar, fat-soluble vitamers found in foods and marketed as dietary supplements. The human body requires vitamin K for post-synthesis modification of certain proteins that are required for blood coagulation ("K" from Danish koagulation, for "coagulation") or for controlling binding of calcium in bones and other tissues. The complete synthesis involves final modification of these "Gla proteins" by the enzyme gamma-glutamyl carboxylase that uses vitamin K as a cofactor. Vitamin K is used in the liver as the intermediate VKH2 to deprotonate a glutamate residue and then is reprocessed into vitamin K through a vitamin K oxide intermediate. The presence of uncarboxylated proteins indicates a vitamin K deficiency. Carboxylation allows them to bind (chelate) calcium ions, which they cannot do otherwise. Without vitamin K, blood coagulation is seriously impaired, and uncontrolled bleeding occurs. Research suggests that deficiency of vitamin K may also weaken bones, potentially contributing to osteoporosis, and may promote calcification of arteries and other soft tissues. Chemically, the vitamin K family comprises 2-methyl-1,4-naphthoquinone (3-) derivatives. Vitamin K includes two natural vitamers: vitamin K1 (phylloquinone) and vitamin K2 (menaquinone). Vitamin K2, in turn, consists of a number of related chemical subtypes, with differing lengths of carbon side chains made of isoprenoid groups of atoms. The two most studied are menaquinone-4 (MK-4) and menaquinone-7 (MK-7). Vitamin K1 is made by plants, and is found in highest amounts in green leafy vegetables, being directly involved in photosynthesis. It is active as a vitamin in animals and performs the classic functions of vitamin K, including its activity in the production of blood-clotting proteins. Animals may also convert it to vitamin K2, variant MK-4. Bacteria in the gut flora can also convert K1 into K2. All forms of K2 other than MK-4 can only be produced by bacteria, which use these during anaerobic respiration. Vitamin K3 (menadione), a synthetic form of vitamin K, was used to treat vitamin K deficiency, but because it interferes with the function of glutathione, it is no longer used in this manner in human nutrition. Definition Vitamin K refers to structurally similar, fat-soluble vitamers found in foods and marketed as dietary supplements. "Vitamin K" include several chemical compounds. These are similar in structure in that they share a quinone ring, but differ in the length and degree of saturation of the carbon tail and the number of repeating isoprene units in the side chain (see figures in Chemistry section). Plant-sourced forms are primarily vitamin K1. Animal-sourced foods are primarily vitamin K2. Vitamin K has several roles: an essential nutrient absorbed from food, a product synthesized and marketed as part of a multi-vitamin or as a single-vitamin dietary supplement, and a prescription medication for specific purposes. Dietary recommendations The US National Academy of Medicine does not distinguish between K1 and K2 – both are counted as vitamin K. When recommendations were last updated in 1998, sufficient information was not available to establish an estimated average requirement or recommended dietary allowance, terms that exist for most vitamins. In instances such as these, the academy defines adequate intakes (AIs) as amounts that appear to be sufficient to maintain good health, with the understanding that at some later date, AIs will be replaced by more exact information. 
The current AIs for adult women and men ages 19 and older are 90 and 120 μg/day, respectively, for pregnancy is 90 μg/day, and for lactation is 90 μg/day. For infants up to 12 months, the AI is 2.0–2.5 μg/day; for children ages 1–18 years the AI increases with age from 30 to 75 μg/day. As for safety, the academy sets tolerable upper intake levels (known as "upper limits") for vitamins and minerals when evidence is sufficient. Vitamin K has no upper limit, as human data for adverse effects from high doses are not sufficient. In the European Union, adequate intake is defined the same way as in the US. For women and men over age 18 the adequate intake is set at 70 μg/day, for pregnancy 70 μg/day, and for lactation 70 μg/day. For children ages 1–17 years, adequate intake values increase with age from 12 to 65 μg/day. Japan set adequate intakes for adult women at 65 μg/day and for men at 75 μg/day. The European Union and Japan also reviewed safety and concluded – as had the United States – that there was insufficient evidence to set an upper limit for vitamin K. For US food and dietary supplement labeling purposes, the amount in a serving is expressed as a percentage of daily value. For vitamin K labeling purposes, 100% of the daily value was 80 μg, but on 27 May 2016 it was revised upwards to 120 μg, to bring it into agreement with the highest value for adequate intake. Compliance with the updated labeling regulations was required by 1 January 2020 for manufacturers with US$10 million or more in annual food sales, and by 1 January 2021 for manufacturers with lower volume food sales. A table of the old and new adult daily values is provided at Reference Daily Intake. Fortification According to the Global Fortification Data Exchange, vitamin K deficiency is so rare that no countries require that foods be fortified. The World Health Organization does not have recommendations on vitamin K fortification. Sources Vitamin K1 is primarily from plants, especially leafy green vegetables. Small amounts are provided by animal-sourced foods. Vitamin K2 is primarily from animal-sourced foods, with poultry and eggs much better sources than beef, pork or fish. One exception to the latter is nattō, which is made from bacteria-fermented soybeans. It is a rich food source of vitamin K2 variant MK-7, made by the bacteria. Vitamin K1 Vitamin K2 Animal-sourced foods are a source of vitamin K2. The MK-4 form is from conversion of plant-sourced vitamin K1 in various tissues in the body. Vitamin K deficiency Because vitamin K aids mechanisms for blood clotting, its deficiency may lead to reduced blood clotting, and in severe cases, can result in reduced clotting, increased bleeding, and increased prothrombin time. Normal diets are usually not deficient in vitamin K, indicating that deficiency is uncommon in healthy children and adults. An exception may be infants who are at an increased risk of deficiency regardless of the vitamin status of the mother during pregnancy and breast feeding due to poor transfer of the vitamin to the placenta and low amounts of the vitamin in breast milk. Secondary deficiencies can occur in people who consume adequate amounts, but have malabsorption conditions, such as cystic fibrosis or chronic pancreatitis, and in people who have liver damage or disease. Secondary vitamin K deficiency can also occur in people who have a prescription for a vitamin K antagonist drug, such as warfarin. 
A drug associated with increased risk of vitamin K deficiency is cefamandole, although the mechanism is unknown. Medical uses Treating vitamin deficiency in newborns Vitamin K is given as an injection to newborns to prevent vitamin K deficiency bleeding. The blood clotting factors of newborn babies are roughly 30–60% that of adult values; this appears to be a consequence of poor transfer of the vitamin across the placenta, and thus low fetal plasma vitamin K. Occurrence of vitamin K deficiency bleeding in the first week of the infant's life is estimated at between 1 in 60 and 1 in 250. Human milk contains 0.85–9.2 μg/L (median 2.5 μg/L) of vitamin K1, while infant formula is formulated in range of 24–175 μg/L. Late onset bleeding, with onset 2 to 12 weeks after birth, can be a consequence of exclusive breastfeeding, especially if there was no preventive treatment. Late onset prevalence reported at 35 cases per 100,000 live births in infants who had not received prophylaxis at or shortly after birth. Vitamin K deficiency bleeding occurs more frequently in the Asian population compared to the Caucasian population. Bleeding in infants due to vitamin K deficiency can be severe, leading to hospitalization, brain damage, and death. Intramuscular injection, typically given shortly after birth, is more effective in preventing vitamin K deficiency bleeding than oral administration, which calls for weekly dosing up to three months of age. Managing warfarin therapy Warfarin is an anticoagulant drug. It functions by inhibiting an enzyme that is responsible for recycling vitamin K to a functional state. As a consequence, proteins that should be modified by vitamin K are not, including proteins essential to blood clotting, and are thus not functional. The purpose of the drug is to reduce risk of inappropriate blood clotting, which can have serious, potentially fatal consequences. The proper anticoagulant action of warfarin is a function of vitamin K intake and drug dose. Due to differing absorption of the drug and amounts of vitamin K in the diet, dosing must be monitored and customized for each patient. Some foods are so high in vitamin K1 that medical advice is to avoid those (examples: collard greens, spinach, turnip greens) entirely, and for foods with a modestly high vitamin content, keep consumption as consistent as possible, so that the combination of vitamin intake and warfarin keep the anti-clotting activity in the therapeutic range. Vitamin K is a treatment for bleeding events caused by overdose of the drug. The vitamin can be administered by mouth, intravenously or subcutaneously. Oral vitamin K is used in situations when a person's International normalized ratio is greater than 10 but there is no active bleeding. The newer anticoagulants apixaban, dabigatran and rivaroxaban are not vitamin K antagonists. Treating rodenticide poisoning Coumarin is used in the pharmaceutical industry as a precursor reagent in the synthesis of a number of synthetic anticoagulant pharmaceuticals. One subset, 4-hydroxycoumarins, act as vitamin K antagonists. They block the regeneration and recycling of vitamin K. Some of the 4-hydroxycoumarin anticoagulant class of chemicals are designed to have high potency and long residence times in the body, and these are used specifically as second generation rodenticides ("rat poison"). Death occurs after a period of several days to two weeks, usually from internal hemorrhaging. 
For humans, and for animals that have consumed either the rodenticide or rats poisoned by the rodenticide, treatment is prolonged administration of large amounts of vitamin K. This dosing must sometimes be continued for up to nine months in cases of poisoning by "superwarfarin" rodenticides such as brodifacoum. Oral vitamin K1 is preferred over other vitamin K1 routes of administration because it has fewer side effects. Methods of assessment An increase in prothrombin time, a coagulation assay, has been used as an indicator of vitamin K status, but it lacks sufficient sensitivity and specificity for this application. Serum phylloquinone is the most commonly used marker of vitamin K status. Concentrations <0.15 μg/L are indicative of deficiency. Disadvantages include exclusion of the other vitamin K vitamers and interference from recent dietary intake. Vitamin K is required for the gamma-carboxylation of specific glutamic acid residues within the Gla domain of the 17 vitamin K–dependent proteins. Thus, a rise in uncarboxylated versions of these proteins is an indirect but sensitive and specific marker for vitamin K deficiency. If uncarboxylated prothrombin is being measured, this "Protein induced by Vitamin K Absence/antagonism (PIVKA-II)" is elevated in vitamin K deficiency. The test is used to assess risk of vitamin K–deficient bleeding in newborn infants. Osteocalcin is involved in calcification of bone tissue. The ratio of uncarboxylated osteocalcin to carboxylated osteocalcin increases with vitamin K deficiency. Vitamin K2 has been shown to lower this ratio and improve lumbar vertebrae bone mineral density. Matrix Gla protein must undergo vitamin K dependent phosphorylation and carboxylation. Elevated plasma concentration of dephosphorylated, uncarboxylated MGP is indicative of vitamin K deficiency. Side effects No known toxicity is associated with high oral doses of the vitamin K1 or vitamin K2 forms of vitamin K, so regulatory agencies from US, Japan and European Union concur that no tolerable upper intake levels needs to be set. However, vitamin K1 has been associated with severe adverse reactions such as bronchospasm and cardiac arrest when given intravenously. The reaction is described as a nonimmune-mediated anaphylactoid reaction, with incidence of 3 per 10,000 treatments. The majority of reactions occurred when polyoxyethylated castor oil was used as the solubilizing agent. Non-human uses Menadione, a natural compound sometimes referred to as vitamin K3, is used in the pet food industry because once consumed it is converted to vitamin K2. The US Food and Drug Administration has banned this form from sale as a human dietary supplement because overdoses have been shown to cause allergic reactions, hemolytic anemia, and cytotoxicity in liver cells. 4-amino-2-methyl-1-naphthol ("K5") is not natural and hence not a "vitamin". Research with "K5" suggests it may inhibit fungal growth in fruit juices. Chemistry The structure of phylloquinone, Vitamin K1, is marked by the presence of a phytyl sidechain. Vitamin K1 has an (E) trans double bond responsible for its biological activity, and two chiral centers on the phytyl sidechain. Vitamin K1 appears as a yellow viscous liquid at room temperature due to its absorption of violet light in the UV-visible spectra obtained by ultraviolet–visible spectroscopy. The structures of menaquinones, vitamin K2, are marked by the polyisoprenyl side chain present in the molecule that can contain four to 13 isoprenyl units. MK-4 is the most common form. 
The large size of Vitamin K1 gives many different peaks in mass spectroscopy, most of which involve derivatives of the naphthoquinone ring base and the alkyl side chain. Conversion of vitamin K1 to vitamin K2 In animals, the MK-4 form of vitamin K2 is produced by conversion of vitamin K1 in the testes, pancreas, and arterial walls. While major questions still surround the biochemical pathway for this transformation, the conversion is not dependent on gut bacteria, as it occurs in germ-free rats and in parenterally administered K1 in rats. There is evidence that the conversion proceeds by removal of the phytyl tail of K1 to produce menadione (also referred to as vitamin K3) as an intermediate, which is then prenylated to produce MK-4. Physiology In animals, vitamin K is involved in the carboxylation of certain glutamate residues in proteins to form gamma-carboxyglutamate (Gla) residues. The modified residues are often (but not always) situated within specific protein domains called Gla domains. Gla residues are usually involved in binding calcium, and are essential for the biological activity of all known Gla proteins. 17 human proteins with Gla domains have been discovered; they play key roles in the regulation of three physiological processes: Blood coagulation: prothrombin (factor II), factors VII, IX, and X, and proteins C, S, and Z Bone metabolism: osteocalcin, matrix Gla protein (MGP), periostin, and Gla-rich protein. Vascular biology: Matrix Gla protein, growth arrest – specific protein 6 (Gas6) Unknown functions: proline-rich γ-carboxyglutamyl proteins 1 and 2, and transmembrane γ-carboxy glutamyl proteins 3 and 4. Absorption Vitamin K is absorbed through the jejunum and ileum in the small intestine. The process requires bile and pancreatic juices. Estimates for absorption are on the order of 80% for vitamin K1 in its free form (as a dietary supplement) but much lower when present in foods. For example, the absorption of vitamin K from kale and spinach – foods identified as having a high vitamin K content – are on the order of 4% to 17% regardless of whether raw or cooked. Less information is available for absorption of vitamin K2 from foods. The intestinal membrane protein Niemann–Pick C1-like 1 (NPC1L1) mediates cholesterol absorption. Animal studies show that it also factors into absorption of vitamins E and K1. The same study predicts potential interaction between SR-BI and CD36 proteins as well. The drug ezetimibe inhibits NPC1L1 causing a reduction in cholesterol absorption in humans, and in animal studies, also reduces vitamin E and vitamin K1 absorption. An expected consequence would be that administration of ezetimibe to people who take warfarin (a vitamin K antagonist) would potentiate the warfarin effect. This has been confirmed in humans. Biochemistry Function in animals Vitamin K is distributed differently within animals depending on its specific homologue. Vitamin K1 is mainly present in the liver, heart and pancreas, while MK-4 is better represented in the kidneys, brain and pancreas. The liver also contains longer chain homologues MK-7 to MK-13. The function of vitamin K2 in the animal cell is to add a carboxylic acid functional group to a glutamate (Glu) amino acid residue in a protein, to form a gamma-carboxyglutamate (Gla) residue. This is a somewhat uncommon posttranslational modification of the protein, which is then known as a "Gla protein". 
The presence of two −COOH (carboxylic acid) groups on the same carbon in the gamma-carboxyglutamate residue allows it to chelate calcium ions. The binding of calcium ions in this way very often triggers the function or binding of Gla-protein enzymes, such as the so-called vitamin K–dependent clotting factors discussed below. Within the cell, vitamin K participates in a cyclic process. The vitamin undergoes electron reduction to a reduced form called vitamin K hydroquinone (quinol), catalyzed by the enzyme vitamin K epoxide reductase (VKOR). Another enzyme then oxidizes vitamin K hydroquinone to allow carboxylation of Glu to Gla; this enzyme is called gamma-glutamyl carboxylase or the vitamin K–dependent carboxylase. The carboxylation reaction only proceeds if the carboxylase enzyme is able to oxidize vitamin K hydroquinone to vitamin K epoxide at the same time. The carboxylation and epoxidation reactions are said to be coupled. Vitamin K epoxide is then restored to vitamin K by VKOR. The reduction and subsequent reoxidation of vitamin K coupled with carboxylation of Glu is called the vitamin K cycle. Humans are rarely deficient in vitamin K because, in part, vitamin K2 is continuously recycled in cells. Warfarin and other 4-hydroxycoumarins block the action of VKOR. This results in decreased concentrations of vitamin K and vitamin K hydroquinone in tissues, such that the carboxylation reaction catalyzed by the glutamyl carboxylase is inefficient. This results in the production of clotting factors with inadequate Gla. Without Gla on the amino termini of these factors, they no longer bind stably to the blood vessel endothelium and cannot activate clotting to allow formation of a clot during tissue injury. As it is impossible to predict what dose of warfarin will give the desired degree of clotting suppression, warfarin treatment must be carefully monitored to avoid underdose and overdose. Gamma-carboxyglutamate proteins The following human Gla-containing proteins ("Gla proteins") have been characterized to the level of primary structure: blood coagulation factors II (prothrombin), VII, IX, and X, anticoagulant protein C and protein S, and the factor X-targeting protein Z. The bone Gla protein osteocalcin, the calcification-inhibiting matrix Gla protein (MGP), the cell growth regulating growth arrest specific gene 6 protein, and the four transmembrane Gla proteins, the function of which is at present unknown. The Gla domain is responsible for high-affinity binding of calcium ions (Ca2+) to Gla proteins, which is often necessary for their conformation, and always necessary for their function. Gla proteins are known to occur in a wide variety of vertebrates: mammals, birds, reptiles, and fish. The venom of a number of Australian snakes acts by activating the human blood-clotting system. In some cases, activation is accomplished by snake Gla-containing enzymes that bind to the endothelium of human blood vessels and catalyze the conversion of procoagulant clotting factors into activated ones, leading to unwanted and potentially deadly clotting. Another interesting class of invertebrate Gla-containing proteins is synthesized by the fish-hunting snail Conus geographus. These snails produce a venom containing hundreds of neuroactive peptides, or conotoxins, which is sufficiently toxic to kill an adult human. Several of the conotoxins contain two to five Gla residues. 
Function in plants and cyanobacteria Vitamin K1 is an important chemical in green plants (including land plants and green algae) and some species of cyanobacteria, where it functions as an electron acceptor transferring one electron in photosystem I during photosynthesis. For this reason, vitamin K1 is found in large quantities in the photosynthetic tissues of plants (green leaves, and dark green leafy vegetables such as romaine lettuce, kale, and spinach), but it occurs in far smaller quantities in other plant tissues. Detection of VKORC1 homologues active on the K1 epoxide suggests that K1 may have a non-redox function in these organisms. In plants, but not in cyanobacteria, knockout of this gene shows growth restriction similar to that of mutants lacking the ability to produce K1. Function in other bacteria Many bacteria, including Escherichia coli found in the large intestine, can synthesize vitamin K2 (MK-7 up to MK-11), but not vitamin K1. In the vitamin K2 synthesizing bacteria, menaquinone transfers two electrons between two different small molecules, during oxygen-independent metabolic energy production processes (anaerobic respiration). For example, a small molecule with an excess of electrons (also called an electron donor) such as lactate, formate, or NADH, with the help of an enzyme, passes two electrons to menaquinone. The menaquinone, with the help of another enzyme, then transfers these two electrons to a suitable oxidant, such as fumarate or nitrate (also called an electron acceptor). Adding two electrons to fumarate or nitrate converts the molecule to succinate or nitrite plus water, respectively. Some of these reactions generate a cellular energy source, ATP, in a manner similar to eukaryotic cell aerobic respiration, except the final electron acceptor is not molecular oxygen, but fumarate or nitrate. In aerobic respiration, the final oxidant is molecular oxygen, which accepts four electrons from an electron donor such as NADH to be converted to water. E. coli, as a facultative anaerobe, can carry out both aerobic respiration and menaquinone-mediated anaerobic respiration. History In 1929, Danish scientist Henrik Dam investigated the role of cholesterol by feeding chickens a cholesterol-depleted diet. He initially replicated experiments reported by scientists at the Ontario Agricultural College. McFarlane, Graham and Richardson, working on the chick feed program at OAC, used chloroform to remove all fat from chick chow. They noticed that chicks fed only fat-depleted chow developed hemorrhages and started bleeding from tag sites. Dam found that these defects could not be restored by adding purified cholesterol to the diet. It appeared that – together with the cholesterol – a second compound was extracted from the food, and this compound was called the coagulation vitamin. The new vitamin received the letter K because the initial discoveries were reported in a German journal, in which it was designated as Koagulationsvitamin. Edward Adelbert Doisy of Saint Louis University did much of the research that led to the discovery of the structure and chemical nature of vitamin K. Dam and Doisy shared the 1943 Nobel Prize for medicine for their work on vitamin K1 and K2 published in 1939. Several laboratories synthesized the compound(s) in 1939. For several decades, the vitamin K–deficient chick model was the only method of quantifying vitamin K in various foods: the chicks were made vitamin K–deficient and subsequently fed with known amounts of vitamin K–containing food.
The extent to which blood coagulation was restored by the diet was taken as a measure for its vitamin K content. Three groups of physicians independently found this: Biochemical Institute, University of Copenhagen (Dam and Johannes Glavind), University of Iowa Department of Pathology (Emory Warner, Kenneth Brinkhous, and Harry Pratt Smith), and the Mayo Clinic (Hugh Butt, Albert Snell, and Arnold Osterberg). The first published report of successful treatment with vitamin K of life-threatening hemorrhage in a jaundiced patient with prothrombin deficiency was made in 1938 by Smith, Warner, and Brinkhous. The precise function of vitamin K was not discovered until 1974, when prothrombin, a blood coagulation protein, was confirmed to be vitamin K dependent. When the vitamin is present, prothrombin has amino acids near the amino terminus of the protein as γ-carboxyglutamate instead of glutamate, and is able to bind calcium, part of the clotting process. Research Osteoporosis Vitamin K is required for the gamma-carboxylation of osteocalcin in bone. The risk of osteoporosis, assessed via bone mineral density and fractures, was not affected for people on warfarin therapy – a vitamin K antagonist. Studies investigating whether vitamin K supplementation reduces risk of bone fractures have shown mixed results. Cardiovascular health Matrix Gla protein is a vitamin K-dependent protein found in bone, but also in soft tissues such as arteries, where it appears to function as an anti-calcification protein. In animal studies, animals that lack the gene for MGP exhibit calcification of arteries and other soft tissues. In humans, Keutel syndrome is a rare recessive genetic disorder associated with abnormalities in the gene coding for MGP and characterized by abnormal diffuse cartilage calcification. These observations led to a theory that in humans, inadequately carboxylated MGP, due to low dietary intake of the vitamin, could result in increased risk of arterial calcification and coronary heart disease. In meta-analyses of population studies, low intake of vitamin K was associated with inactive MGP, arterial calcification and arterial stiffness. Lower dietary intakes of vitamin K1 and vitamin K2 were also associated with a higher risk of coronary heart disease. When the blood concentration of circulating vitamin K1 was assessed, low concentrations were linked to an increased risk of all-cause mortality. In contrast to these population studies, a review of randomized trials using supplementation with either vitamin K1 or vitamin K2 reported no role in mitigating vascular calcification or reducing arterial stiffness. The trials were too short to assess any impact on coronary heart disease or mortality. Other Population studies suggest that vitamin K status may have roles in inflammation, brain function, endocrine function and an anti-cancer effect. For all of these, there is not sufficient evidence from intervention trials to draw any conclusions. From a review of observational trials, long-term use of vitamin K antagonists as anticoagulation therapy is associated with lower cancer incidence in general. There are conflicting reviews as to whether vitamin K antagonists reduce the risk of prostate cancer.
Biology and health sciences
Vitamins
Health
32549
https://en.wikipedia.org/wiki/Voltage
Voltage
Voltage, also known as (electrical) potential difference, electric pressure, or electric tension, is the difference in electric potential between two points. In a static electric field, it corresponds to the work needed per unit of charge to move a positive test charge from the first point to the second point. In the International System of Units (SI), the derived unit for voltage is the volt (V). The voltage between points can be caused by the build-up of electric charge (e.g., a capacitor), and from an electromotive force (e.g., electromagnetic induction in a generator). On a macroscopic scale, a potential difference can be caused by electrochemical processes (e.g., cells and batteries), the pressure-induced piezoelectric effect, and the thermoelectric effect. Since it is the difference in electric potential, it is a physical scalar quantity. A voltmeter can be used to measure the voltage between two points in a system. Often a common reference potential such as the ground of the system is used as one of the points. In this case, voltage is often mentioned at a point without completely mentioning the other measurement point. A voltage can be associated with either a source of energy or the loss, dissipation, or storage of energy. Definition The SI unit of work per unit charge is the joule per coulomb, where 1 volt = 1 joule (of work) per 1 coulomb of charge. The old SI definition for volt used power and current; starting in 1990, the quantum Hall and Josephson effects were used, and in 2019 physical constants were given defined values for the definition of all SI units. Voltage is denoted symbolically by ΔV, simplified V, especially in English-speaking countries. Internationally, the symbol U is standardized. It is used, for instance, in the context of Ohm's or Kirchhoff's circuit laws. The electrochemical potential is the voltage that can be directly measured with a voltmeter. The Galvani potential that exists in structures with junctions of dissimilar materials is also work per charge but cannot be measured with a voltmeter in the external circuit (see the section on Galvani potential below). Voltage is defined so that negatively charged objects are pulled towards higher voltages, while positively charged objects are pulled towards lower voltages. Therefore, the conventional current in a wire or resistor always flows from higher voltage to lower voltage. Historically, voltage has been referred to using terms like "tension" and "pressure". Even today, the term "tension" is still used, for example within the phrase "high tension" (HT) which is commonly used in thermionic valve (vacuum tube) based and automotive electronics. Electrostatics In electrostatics, the voltage increase from point A to point B is given by the change in electrostatic potential from A to B. By definition, this is V(B) − V(A) = −∫ E ∙ dℓ taken along a path from A to B, where E is the intensity of the electric field. In this case, the voltage increase from point A to point B is equal to the work done per unit charge, against the electric field, to move the charge from A to B without causing any acceleration. Mathematically, this is expressed as the line integral of the electric field along that path. In electrostatics, this line integral is independent of the path taken. Under this definition, any circuit where there are time-varying magnetic fields, such as AC circuits, will not have a well-defined voltage between nodes in the circuit, since the electric force is not a conservative force in those cases.
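To make the line-integral definition concrete, the following sketch (assuming NumPy; the uniform field and the endpoints are invented for illustration) numerically integrates the electric field along a straight path and compares the result with the exact potential difference.

```python
import numpy as np

# Uniform electric field (V/m) and two points (m)
E = np.array([100.0, 0.0, 0.0])
A = np.array([0.0, 0.0, 0.0])
B = np.array([0.5, 0.2, 0.0])

# Voltage increase from A to B: V(B) - V(A) = -(line integral of E . dl).
# Approximate the integral along a straight path with many small steps.
n_steps = 1000
points = np.linspace(A, B, n_steps + 1)
dl = np.diff(points, axis=0)              # small displacement vectors
E_along_path = np.tile(E, (n_steps, 1))   # E is uniform here
V_B_minus_V_A = -np.sum(np.einsum('ij,ij->i', E_along_path, dl))

# For a uniform field the exact result is -E . (B - A)
exact = -np.dot(E, B - A)
assert np.isclose(V_B_minus_V_A, exact)
print(V_B_minus_V_A, exact)   # -50.0 volts
```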
However, at lower frequencies when the electric and magnetic fields are not rapidly changing, this can be neglected (see electrostatic approximation). Electrodynamics The electric potential can be generalized to electrodynamics, so that differences in electric potential between points are well-defined even in the presence of time-varying fields. However, unlike in electrostatics, the electric field can no longer be expressed only in terms of the electric potential. Furthermore, the potential is no longer uniquely determined up to a constant, and can take significantly different forms depending on the choice of gauge. In this general case, some authors use the word "voltage" to refer to the line integral of the electric field, rather than to differences in electric potential. In this case, the voltage rise along some path P from point A to point B is given by ΔV = −∫P E ∙ dℓ. However, in this case the "voltage" between two points depends on the path taken. Circuit theory In circuit analysis and electrical engineering, lumped element models are used to represent and analyze circuits. These elements are idealized and self-contained circuit elements used to model physical components. When using a lumped element model, it is assumed that the effects of changing magnetic fields produced by the circuit are suitably contained to each element. Under these assumptions, the electric field in the region exterior to each component is conservative, and voltages between nodes in the circuit are well-defined, where V(B) − V(A) = −∫ E ∙ dℓ, as long as the path of integration does not pass through the inside of any component. The above is the same formula used in electrostatics. This integral, with the path of integration being along the test leads, is what a voltmeter will actually measure. If uncontained magnetic fields throughout the circuit are not negligible, then their effects can be modelled by adding mutual inductance elements. In the case of a physical inductor though, the ideal lumped representation is often accurate. This is because the external fields of inductors are generally negligible, especially if the inductor has a closed magnetic path. If external fields are negligible, we find that this line integral is path-independent, and there is a well-defined voltage across the inductor's terminals. This is the reason that measurements with a voltmeter across an inductor are often reasonably independent of the placement of the test leads. Volt The volt (symbol: V) is the derived unit for electric potential, voltage, and electromotive force. The volt is named in honour of the Italian physicist Alessandro Volta (1745–1827), who invented the voltaic pile, possibly the first chemical battery. Hydraulic analogy A simple analogy for an electric circuit is water flowing in a closed circuit of pipework, driven by a mechanical pump. This can be called a "water circuit". The potential difference between two points corresponds to the pressure difference between two points. If the pump creates a pressure difference between two points, then water flowing from one point to the other will be able to do work, such as driving a turbine. Similarly, work can be done by an electric current driven by the potential difference provided by a battery. For example, the voltage provided by a sufficiently-charged automobile battery can "push" a large current through the windings of an automobile's starter motor. If the pump is not working, it produces no pressure difference, and the turbine will not rotate.
Likewise, if the automobile's battery is very weak or "dead" (or "flat"), then it will not turn the starter motor. The hydraulic analogy is a useful way of understanding many electrical concepts. In such a system, the work done to move water is equal to the "pressure drop" (compare p.d.) multiplied by the volume of water moved. Similarly, in an electrical circuit, the work done to move electrons or other charge carriers is equal to "electrical pressure difference" multiplied by the quantity of electrical charges moved. In relation to "flow", the larger the "pressure difference" between two points (potential difference or water pressure difference), the greater the flow between them (electric current or water flow). (See "electric power".) Applications Specifying a voltage measurement requires explicit or implicit specification of the points across which the voltage is measured. When using a voltmeter to measure voltage, one electrical lead of the voltmeter must be connected to the first point, one to the second point. A common use of the term "voltage" is in describing the voltage dropped across an electrical device (such as a resistor). The voltage drop across the device can be understood as the difference between measurements at each terminal of the device with respect to a common reference point (or ground). The voltage drop is the difference between the two readings. Two points in an electric circuit that are connected by an ideal conductor without resistance and not within a changing magnetic field have a voltage of zero. Any two points with the same potential may be connected by a conductor and no current will flow between them. Addition of voltages The voltage between A and C is the sum of the voltage between A and B and the voltage between B and C. The various voltages in a circuit can be computed using Kirchhoff's circuit laws. When talking about alternating current (AC) there is a difference between instantaneous voltage and average voltage. Instantaneous voltages can be added for direct current (DC) and AC, but average voltages can be meaningfully added only when they apply to signals that all have the same frequency and phase. Measuring instruments Instruments for measuring voltages include the voltmeter, the potentiometer, and the oscilloscope. Analog voltmeters, such as moving-coil instruments, work by measuring the current through a fixed resistor, which, according to Ohm's law, is proportional to the voltage across the resistor. The potentiometer works by balancing the unknown voltage against a known voltage in a bridge circuit. The cathode-ray oscilloscope works by amplifying the voltage and using it to deflect an electron beam from a straight path, so that the deflection of the beam is proportional to the voltage. Typical voltages A common voltage for flashlight batteries is 1.5 volts (DC). A common voltage for automobile batteries is 12 volts (DC). Common voltages supplied by power companies to consumers are 110 to 120 volts (AC) and 220 to 240 volts (AC). The voltage in electric power transmission lines used to distribute electricity from power stations can be several hundred times greater than consumer voltages, typically 110 to 1200 kV (AC). The voltage used in overhead lines to power railway locomotives is between 12 kV and 50 kV (AC) or between 0.75 kV and 3 kV (DC). Galvani potential vs. 
electrochemical potential Inside a conductive material, the energy of an electron is affected not only by the average electric potential but also by the specific thermal and atomic environment that it is in. When a voltmeter is connected between two different types of metal, it measures not the electrostatic potential difference, but instead something else that is affected by thermodynamics. The quantity measured by a voltmeter is the negative of the difference of the electrochemical potential of electrons (Fermi level) divided by the electron charge and commonly referred to as the voltage difference, while the pure unadjusted electrostatic potential (not measurable with a voltmeter) is sometimes called Galvani potential. The terms "voltage" and "electric potential" are ambiguous in that, in practice, they can refer to either of these in different contexts. History The term electromotive force was first used by Volta in a letter to Giovanni Aldini in 1798, and first appeared in a published paper in 1801 in Annales de chimie et de physique. Volta meant by this a force that was not an electrostatic force, specifically, an electrochemical force. The term was taken up by Michael Faraday in connection with electromagnetic induction in the 1820s. However, a clear definition of voltage and method of measuring it had not been developed at this time. Volta distinguished electromotive force (emf) from tension (potential difference): the observed potential difference at the terminals of an electrochemical cell when it was open circuit must exactly balance the emf of the cell so that no current flowed.
Physical sciences
Electrostatics
Physics
32557
https://en.wikipedia.org/wiki/Vulture
Vulture
A vulture is a bird of prey that scavenges on carrion. There are 23 extant species of vulture (including condors). Old World vultures include 16 living species native to Europe, Africa, and Asia; New World vultures are restricted to North and South America and consist of seven identified species, all belonging to the Cathartidae family. A particular characteristic of many vultures is a bald, unfeathered head. This bare skin is thought to keep the head clean when feeding, and also plays an important role in thermoregulation. Vultures have been observed to hunch their bodies and tuck in their heads in the cold, and open their wings and stretch their necks in the heat. They also urinate on themselves as a means of cooling their bodies. A group of vultures in flight is called a 'kettle', while the term 'committee' refers to a group of vultures resting on the ground or in trees. A group of vultures that are feeding is termed a 'wake'. Taxonomy Although New World vultures and Old World vultures share many resemblances, they are not very closely related. Rather, they share resemblance because of convergent evolution. Early naturalists placed all vultures under one single biological group. Carl Linnaeus had assigned both Old World vultures and New World vultures in a Vultur genus, even including the harpy eagle. Soon anatomists split Old and New World vultures, with New World vultures being placed in a new suborder, Cathartae, later renamed Cathartidae as per the Rules of Nomenclature (from Greek: carthartes, meaning "purifier") by French ornithologist Frédéric de Lafresnaye. The suborder was later recognised as a family, rather than a suborder. In the late 20th century, some ornithologists argued that New World vultures are more closely related to storks on the basis of karyotype, morphological, and behavioral data. Thus some authorities placed them in the Ciconiiformes family with storks and herons; Sibley and Monroe (1990) even considered them a subfamily of the storks. This was criticized, and an early DNA sequence study was based on erroneous data and subsequently retracted. There was then an attempt to raise the New World vultures to the rank of an independent order, Cathartiformes, not closely associated with either the birds of prey or the storks and herons. Old World The Old World vultures found in Africa, Asia, and Europe belong to the family Accipitridae, which also includes eagles, kites, buzzards, and hawks. Old World vultures find carcasses exclusively by sight. The 16 species in 9 genera are: Cinereous vulture, Aegypius monachus Griffon vulture, Gyps fulvus White-rumped vulture, Gyps bengalensis Rüppell's vulture, Gyps rueppelli Indian vulture, Gyps indicus Slender-billed vulture, Gyps tenuirostris Himalayan vulture, Gyps himalayensis White-backed vulture, Gyps africanus Cape vulture, Gyps coprotheres Hooded vulture, Necrosyrtes monachus Red-headed vulture, Sarcogyps calvus Lappet-faced vulture, Torgos tracheliotos White-headed vulture, Trigonoceps occipitalis Bearded vulture (Lammergeier), Gypaetus barbatus Egyptian vulture, Neophron percnopterus Palm-nut vulture, Gypohierax angolensis New World The New World vultures and condors found in warm and temperate areas of the Americas belong to the family Cathartidae. Recent DNA evidence suggests that they should be included within order Accipitriformes along with birds of prey including hawks, eagles, and Old World vultures . 
Several species have a good sense of smell, unusual for raptors, and are able to smell dead animals from great heights, up to a mile away. The seven species are: Black vulture Coragyps atratus in South America and north to the US Turkey vulture Cathartes aura throughout the Americas to southern Canada Lesser yellow-headed vulture Cathartes burrovianus in South America and north to Mexico Greater yellow-headed vulture Cathartes melambrotus in the Amazon Basin of tropical South America California condor Gymnogyps californianus in California, formerly widespread in the mountains of western North America Andean condor Vultur gryphus in the Andes King vulture Sarcoramphus papa from southern Mexico to northern Argentina Feeding Vultures are scavengers, meaning that they eat dead animals. Outside of the oceans, vultures are the only known obligate scavengers. They rarely attack healthy animals, but may kill the wounded or sick. When a carcass has too thick a hide for its beak to open, it waits for a larger scavenger to eat first. Vast numbers have been seen upon battlefields. They gorge themselves when prey is abundant, until their crops bulge, and sit, sleepy or half torpid, to digest their food. These birds do not carry food to their young in their talons but disgorge it from their crops. The mountain-dwelling bearded vulture is the only vertebrate to specialize in eating bones; it carries bones to the nest for the young, and hunts some live prey. Vultures are of great value as scavengers, especially in hot regions. Vulture stomach acid is exceptionally corrosive (pH=1.0), allowing them to safely digest putrid carcasses infected with botulinum toxin, hog cholera bacteria, and anthrax bacteria that would be lethal to other scavengers and remove these bacteria from the environment. New World vultures often vomit when threatened or approached. Contrary to some accounts, they do not "projectile vomit" on their attacker in defense, but to lighten their stomach load to ease take-off. The vomited meal residue may distract a predator, allowing the bird to escape. In various regions of Africa, the dynamic interplay of vultures and predators such as lions, cheetahs, hyenas, and jackals significantly influences the continent's food web. These avian scavengers actively engage in competition with these predatory animals for sustenance, meticulously tracking their hunting activities. Traditionally, vultures are known to bide their time, patiently observing from a distance or high in the sky as predators bring down their prey and commence feeding. Once these formidable predators have satiated their hunger and moved away from their kills, the vultures swoop in, making the most of the leftovers. New research has revealed that these birds can, in addition to sight, respond to auditory cues indicative of potential foraging opportunities. Interaction between vultures and predators is not strictly sequential or one-sided. Vultures, being opportunistic creatures, will often engage in risky behavior if a prime opportunity arises. Sometimes, when the predator numbers are low or distracted, these large birds might move in earlier, attempting to snatch morsels from the kill before the predators have fully vacated the scene. This daring strategy, while high-risk, underscores the fierce competition and survival instincts prevalent in the harsh realities of the African wild. 
New World vultures also urinate straight down their legs; the uric acid kills bacteria accumulated from walking through carcasses, and also acts as evaporative cooling. Conservation status Vultures in south Asia, mainly in India and Nepal, have declined dramatically since the early 1990s. It has been found that this decline was caused by residues of the drug diclofenac in livestock carcasses. The government of India has taken very late cognizance of this fact and has banned the drug for animals. It may take decades for vultures to come back to their earlier population level, if ever. Without them to pick corpses clean, rabid dogs have multiplied, feeding on the carrion, and age-old practices like the sky burials of the Parsees are coming to an end, permanently reducing the supply of corpses. The same problem is also seen in Nepal where the government has taken some late steps to conserve the remaining vultures. The vulture population is threatened across Africa and Eurasia. There are many human activities that threaten vultures such as poisoning and collisions with wind turbines. In central Africa there have been efforts to conserve the remaining vultures and bring their population numbers back up. The decline is largely due to the trade in vulture meat, "it is estimated that more than of wild animal meat is traded" and vultures take up a large percentage of this bushmeat due to the demand in the fetish market. The substantial drop in vulture populations in the continent of Africa is also said to be the result of both intentional and unintentional poisoning, with one study finding it to be the cause of 61% of the vulture deaths recorded. A recent study in 2016, reported that "of the 22 vulture species, nine are critically endangered, three are endangered, four are near threatened, and six are least concern". The conservation status of vultures is of particular concern to humans. For example, the decline of vulture populations can lead to increased disease transmission and resource damage, through increased populations of disease vector and pest animal populations that scavenge carcasses opportunistically. Vultures control these pests and disease vectors indirectly through competition for carcasses. On 20 June 2019, the corpses of 468 white-backed vultures, 17 white-headed vultures, 28 hooded vultures, 14 lappet-faced vultures and 10 cape vultures, altogether 537 vultures, besides 2 tawny eagles, were found in northern Botswana. It is suspected that they died after eating the corpses of three elephants that were poisoned by poachers, possibly to avoid detection by the birds, which help rangers to track poaching activity by circling above dead animals. In myth and culture In Ancient Egyptian art, Nekhbet, a mythological goddess and patron of both the city of Nekheb and Upper Egypt was depicted as a vulture. Alan Gardiner identified the species that was used in divine iconography as a griffon vulture. Arielle P. Kozloff argues that the vultures in New Kingdom art, with their blue-tipped beaks and loose skin, better resemble the lappet-faced vulture. Many Great Royal Wives wore vulture crowns - a symbol of protection from the goddess Nekhbet. Ancient Egyptians believed that all vultures were female and were spontaneously born from eggs without the intervention of a male, and therefore linked the birds to purity and motherhood, but also the eternal cycle of death and rebirth for their ability to transform the "death" they feed on – i.e. carrion and waste – into life. 
In Pre-Columbian times, vultures were appreciated as extraordinary beings and had high iconographic status. They appear in many Mesoamerican myths, legends, and fables from civilizations such as the Maya and Aztecs, some depicting them negatively, others positively.
Biology and health sciences
Accipitrimorphae
Animals
32565
https://en.wikipedia.org/wiki/Sildenafil
Sildenafil
Sildenafil, sold under the brand name Viagra among others, is a medication used to treat erectile dysfunction and pulmonary arterial hypertension. It is also sometimes used off-label for the treatment of certain symptoms in secondary Raynaud's phenomenon. It is unclear if it is effective for treating sexual dysfunction in females. It can be taken orally (swallowed by mouth), intravenously (injection into a vein), or through the sublingual route (dissolved under the tongue). Onset when taken orally is typically within twenty minutes and lasts for about two hours. Common side effects include headaches, heartburn, and flushed skin. Caution is advised in those with cardiovascular disease. Rare but serious side effects include vision problems, hearing loss, and prolonged erection (priapism) that can lead to damage to the penis. Sildenafil should not be taken by people on nitric oxide donors such as nitroglycerin (glycerin trinitrate), as this may result in a serious drop in blood pressure. Sildenafil acts by blocking phosphodiesterase 5 (PDE5), an enzyme that promotes breakdown of cGMP, which regulates blood flow in the penis. It requires sexual arousal to work, and does not by itself cause or increase sexual arousal. It also results in dilation of the blood vessels in the lungs. Pfizer originally discovered the medication in 1989 while looking for a treatment for angina. It was approved for medical use in the United States and in the European Union in 1998. In 2022, it was the 157th most commonly prescribed medication in the United States, with more than 3million prescriptions. It is available as a generic medication. In the United Kingdom, it is available over-the-counter (OTC). Medical uses Erectile dysfunction The primary indication of sildenafil is treatment of erectile dysfunction (inability to sustain a satisfactory erection to complete sexual intercourse). Its use is now one of the standard treatments for erectile dysfunction, including for males with diabetes mellitus. Antidepressant-associated erectile dysfunction Tentative evidence suggests that sildenafil may help males who experience antidepressant-induced erectile dysfunction. Pulmonary hypertension While sildenafil improves some markers of disease in people with pulmonary arterial hypertension, it does not appear to affect the risk of death or serious side effects. Raynaud's phenomenon Sildenafil and other PDE5 inhibitors are used off-label to alleviate vasospasm and treat severe ischemia and ulcers in fingers and toes for people with secondary Raynaud's phenomenon; these drugs have moderate efficacy for reducing the frequency and duration of vasospastic episodes. their role more generally in Raynaud's was not clear. Altitude sickness Sildenafil has shown some potential for improving exercise performance at high altitudes. However, its overall efficacy is not clear. High-altitude pulmonary edema Sildenafil has been studied for high-altitude pulmonary edema (HAPE), but its use is currently not recommended for that indication. Adverse effects In clinical trials, the most common adverse effects of sildenafil use included headache, flushing, indigestion, nasal congestion, and impaired vision, including photophobia and blurred vision. Some sildenafil users have complained of seeing everything tinted blue (cyanopsia). This cyanopsia can be explained because sildenafil, while selective for PDE5, does have some affinity for PDE6, which is the phosphodiesterase found in the retina. 
Patients thus taking the drug may experience colorvision abnormalities. Some complained of blurriness and loss of peripheral vision. In July 2005, the US Food and Drug Administration (FDA) updated labeling for tadalafil (Cialis), vardenafil (Levitra), and sildenafil (Viagra) to reflect a small number of post-marketing reports of sudden vision loss, while acknowledging that "...it is not possible to determine whether these oral medicines for erectile dysfunction were the cause of the loss of eyesight or whether the problem is related to other factors such as high blood pressure or diabetes, or to a combination of these problems." A careful review of pooled data from clinical trials containing well documented information about the dose and duration of exposure to the drug for a large number of patients, yields no evidence for an increased risk of non-arteritic anterior ischemic optic neuropathy or other adverse ocular events associated with PDE-5 inhibitor use. Rare but serious adverse effects found through postmarketing surveillance include prolonged erections, severe low blood pressure, myocardial infarction (heart attack), ventricular arrhythmias, stroke, increased intraocular pressure, and sudden hearing loss. In October 2007, the FDA announced that the labeling for all PDE5 inhibitors, including sildenafil, required a more prominent warning of the potential risk of sudden hearing loss. Interactions Care should be exercised by people who are also taking protease inhibitors for the treatment of HIV infection. Protease inhibitors inhibit the metabolism of sildenafil, effectively multiplying the plasma levels of sildenafil, increasing the incidence and severity of side effects. Those using protease inhibitors are recommended to limit their use of sildenafil to no more than one 25 mg dose every 48 hours. Other drugs that interfere with the metabolism of sildenafil include erythromycin and cimetidine, both of which can also lead to prolonged plasma half-life levels. The use of sildenafil and an α1 blocker (typically prescribed for hypertension or for urologic conditions, such as benign prostatic hypertrophy) at the same time may lead to low blood pressure, but this effect does not occur if they are taken at least 4 hours apart. Contraindications Contraindications include: Concomitant use of nitric oxide donors, organic nitrites and nitrates, such as: nitroglycerin isosorbide mononitrate isosorbide dinitrate sodium nitroprusside alkyl nitrites (commonly known as "poppers") Concomitant use of soluble guanylyl cyclase stimulators, such as riociguat Known hypersensitivity to sildenafil Sildenafil should not be used if sexual activity is inadvisable due to underlying cardiovascular risk factors. Non-medical use Recreational use Sildenafil's popularity with young adults has increased over the years. Sildenafil's brand name, Viagra, is widely recognized in popular culture, and the drug's association with treating erectile dysfunction has led to its recreational use. The reasons behind such use include the belief that the drug increases libido, improves sexual performance, or permanently increases penis size. Studies on the effects of sildenafil when used recreationally are limited, but suggest it has little effect when used by those who do not have erectile dysfunction. In one study, a 25 mg dose was shown to cause no significant change in erectile quality, but did reduce the postejaculatory refractory time. This study also noted a significant placebo effect in the control group. 
Unprescribed recreational use of sildenafil and other PDE5 inhibitors is noted as particularly high among users of illegal drugs. Sildenafil is sometimes used to counteract the effects of other substances, often illicit. Some users mix it with methylenedioxymethamphetamine (MDMA, ecstasy), other stimulants, or opiates in an attempt to compensate for the common side effect of erectile dysfunction, a combination known as "sextasy", "rockin' and rollin,'" "hammerheading," or "trail mix". Mixing it with amyl nitrite, another vasodilator, is particularly dangerous and potentially fatal. Jet lag research The 2007 Ig Nobel Prize in aviation went to Patricia V. Agostino, Santiago A. Plano, and Diego A. Golombek of Universidad Nacional de Quilmes, Argentina, for their discovery that sildenafil helps treat jet lag recovery in hamsters. Sports Professional athletes have been documented using sildenafil, believing the opening of their blood vessels will enrich their muscles. In turn, they believe it will enhance their performances. Analogs Acetildenafil and other synthetic structural analogs of sildenafil which are PDE5 inhibitors have been found as adulterants in a number of "herbal" aphrodisiac products sold over-the-counter. These analogs have not undergone any of the rigorous testing that drugs like sildenafil have passed, and thus have unknown side-effect profiles. Some attempts have been made to ban these drugs, but progress has been slow so far, as, even in those jurisdictions that have laws targeting designer drugs, the laws are drafted to ban analogs of illegal drugs of abuse, rather than analogs of prescription medicines. However, at least one court case has resulted in a product being taken off the market. The US Food and Drug Administration (FDA) has banned numerous products claiming to be Eurycoma longifolia that, in fact, contain only analogs of sildenafil. Sellers of such fake herbals typically respond by just changing the names of their products. Detection in biological fluids Sildenafil and/or N-desmethylsildenafil, its major active metabolite, may be quantified in plasma, serum, or whole blood to assess pharmacokinetic status in those receiving the drug therapeutically, to confirm the diagnosis in potential poisoning victims, or to assist in the forensic investigation in a case of fatal overdose. Mechanism of action Sildenafil protects cyclic guanosine monophosphate (cGMP) from degradation by cGMP-specific phosphodiesterase type 5 (PDE5) in the corpus cavernosum. Nitric oxide (NO) in the corpus cavernosum of the penis binds to guanylate cyclase receptors, which results in increased levels of cGMP, leading to smooth muscle relaxation (vasodilation) of the intimal cushions of the helicine arteries. This smooth muscle relaxation leads to vasodilation and increased inflow of blood into the spongy tissue of the penis, causing an erection. Robert F. Furchgott, Ferid Murad, and Louis Ignarro won the Nobel Prize in Physiology or Medicine in 1998 for their independent study of the metabolic pathway of nitric oxide in smooth muscle vasodilation. The molecular mechanism of smooth muscle relaxation involves the enzyme CGMP-dependent protein kinase, also known as PKG. This kinase is activated by cGMP and it phosphorylates multiple targets in the smooth muscle cells, namely myosin light chain phosphatase, RhoA, IP3 receptor, phospholipase C, and others. 
Overall, this results in a decrease in intracellular calcium and desensitizing proteins to the effects of calcium, engendering smooth muscle relaxation. Sildenafil is a potent and selective inhibitor of cGMP-specific phosphodiesterase type 5 (PDE5), which is responsible for degradation of cGMP in the corpus cavernosum. The molecular structure of sildenafil is similar to that of cGMP and acts as a competitive binding agent of PDE5 in the corpus cavernosum, resulting in more cGMP and increased penile response to sexual stimulation. Without sexual stimulation, and therefore lack of activation of the NO/cGMP system, sildenafil should not cause an erection. Other drugs that operate by the same mechanism include tadalafil (Cialis) and vardenafil (Levitra). Sildenafil is broken down in the liver by hepatic metabolism using cytochrome p450 enzymes, mainly CYP450 3A4 (major route), but also by CYP2C9 (minor route) hepatic isoenzymes. The major product of metabolisation by these enzymes is N-desmethylated sildenafil, which is metabolised further. This metabolite also has an affinity for the PDE receptors, about 40% of that of sildenafil. Thus, the metabolite is responsible for about 20% of sildenafil's action. Sildenafil is excreted as metabolites predominantly in the feces (about 80% of administered oral dose) and to a lesser extent in the urine (around 13% of the administered oral dose). If taken with a high-fat meal, absorption is reduced; the time taken to reach the maximum plasma concentration increases by around one hour, and the maximum concentration itself is decreased by nearly one-third. Route of administration When taken orally, sildenafil for erectile dysfunction results in an average time to onset of erections of 27 minutes (ranging from 12 to 70 minutes). Sublingual use of sildenafil for erectile dysfunction results in an average onset of action of 15 minutes and lasting for an average of 40 minutes. Chemical synthesis The preparation steps for synthesis of sildenafil are: Methylation of 3-propylpyrazole-5-carboxylic acid ethyl ester with hot dimethyl sulfate Hydrolysis with aqueous sodium hydroxide (NaOH) to free acid Nitration with oleum/fuming nitric acid Carboxamide formation with refluxing thionyl chloride/NH4OH Reduction of nitro group to amino group Acylation with 2-ethoxybenzoyl chloride Cyclization Sulfonation to the chlorosulfonyl derivative Condensation with 1-methylpiperazine. History Sildenafil (compound UK-92,480) was synthesized by a group of pharmaceutical chemists led by Simon Campbell working at Pfizer's Sandwich, Kent, research facility in England. It was initially studied for use in hypertension (high blood pressure) and angina pectoris (a symptom of ischaemic heart disease). The first clinical trials were conducted in Morriston Hospital in Swansea. Phase I clinical trials under the direction of Ian Osterloh suggested the drug had little effect on angina, but it could induce marked penile erections. Pfizer therefore decided to market it for erectile dysfunction, rather than for angina; this decision became an often-cited example of drug repositioning. The drug was patented in 1996, approved for use in erectile dysfunction by the FDA on 27 March 1998, becoming the first oral treatment approved to treat erectile dysfunction in the United States, and offered for sale in the United States later that year. It soon became a great success: annual sales of Viagra peaked in 2008 at US$1.934 billion. 
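As a rough numerical illustration of the absorption behaviour described above (a delayed time to peak and a lower maximum concentration when the drug is taken with a high-fat meal), the following sketch uses a generic one-compartment oral-absorption model; the rate constants are illustrative round numbers chosen for this example, not measured sildenafil parameters.

```python
# Illustrative sketch only: a generic one-compartment oral-absorption model
# (Bateman equation) showing qualitatively how slower absorption (e.g. after
# a high-fat meal) delays and lowers the concentration peak. The rate
# constants below are made-up round numbers, not sildenafil's actual values.
import math

def concentration(t_h, ka, ke, dose=1.0, v_d=1.0, f=1.0):
    """Relative plasma concentration at time t_h (hours) for first-order absorption and elimination."""
    if abs(ka - ke) < 1e-12:
        return f * dose / v_d * ka * t_h * math.exp(-ka * t_h)
    return (f * dose * ka) / (v_d * (ka - ke)) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

def t_max(ka, ke):
    """Time of the concentration peak for the Bateman model."""
    return math.log(ka / ke) / (ka - ke)

fasted_ka, fed_ka, ke = 2.0, 0.8, 0.17   # 1/h; absorption assumed slower when fed
for label, ka in (("fasted", fasted_ka), ("fed", fed_ka)):
    tm = t_max(ka, ke)
    print(f"{label}: Tmax ~ {tm:.1f} h, relative Cmax ~ {concentration(tm, ka, ke):.2f}")
```

With these example constants the peak arrives roughly an hour later and is lower in the "fed" case, which is the qualitative pattern the article describes.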
Counterfeits Counterfeit Viagra, despite generally being cheaper, can contain harmful substances or substances that affect how Viagra works, such as blue printer ink, amphetamines, metronidazole, boric acid, and rat poison. Viagra is one of the world's most counterfeited medicines. According to a 2012 Pfizer study, around 80% of sites claiming to sell Viagra were selling counterfeits. An October 2023 release stated that erectile dysfunction medicines were the most seized drugs by the Interpol accounting for 22% of seizures. International networks may be active. Society and culture Marketing and sales In the US, even though sildenafil is available only by prescription from a doctor, it was advertised directly to consumers on TV (famously being endorsed by former United States Senator Bob Dole and football star Pelé). Numerous sites on the Internet offer Viagra for sale after an "online consultation", often a simple web questionnaire. The Viagra name has become so well known that many fake aphrodisiacs now call themselves "herbal viagra" or are presented as blue tablets imitating the shape and colour of Pfizer's product. Viagra is also informally known as "vitamin V", "the blue pill", or "blue diamond", as well as various other nicknames. Viagra and other products for sexual dysfunction, termed sexuopharmaceuticals, proliferated new types of specialised marketing for such products. Viagra and similar prescription pharmaceuticals were promoted by images in media to the extent of becoming a cultural icon, at the time a relatively new phenomenon known to be permitted only in the United States and New Zealand and which is believed to have significantly contributed to norms regarding male sexuality. One author notes that although the effect of Viagra is only limited to penile blood vessels, advertisements routinely use imagery of couples hugging, smiling and dancing, with the author claiming that pharmaceutical companies were deceptive in the use of such advertisements. In 2000, Viagra sales accounted for 92% of the global market for prescribed erectile dysfunction pills. By 2007, Viagra's global share had plunged to about 50% due to several factors, including the entry of Cialis and Levitra, along with several counterfeits and clones, and reports of vision loss in people taking PDE5 inhibitors. In 2008, the FDA forced Pfizer to remove Viva Cruiser, an advergame for Viagra, from appearing on Forbes, after the game failed to disclose risk information about the drug. In February 2007, it was announced that Boots, the UK pharmacy chain, would try over-the-counter sales of Viagra in stores in Manchester, England. Males between the ages of 30 and 65 would be eligible to buy four tablets after a consultation with a pharmacist. In 2017, the Medicines and Healthcare products Regulatory Agency (MHRA) enacted legislation that expanded this nationwide, allowing a particular branded formulation of Sildenafil, Viagra Connect (50 mg), to be sold over the counter and without a prescription throughout the UK from early 2018. While the sale remains subject to a consultation with a pharmacist, the other restrictions from the trial have been removed, allowing customers over the age of 18 to purchase an unlimited number of pills. The decision was made, in part, to reduce online sales of counterfeit and potentially dangerous erectile dysfunction treatments. In May 2013, Pfizer, which manufactures Viagra, told the Associated Press they will begin selling the drug directly to people on its website. 
Pfizer's patents on Viagra expired outside the US in 2012; in the US they were set to expire, but Pfizer settled litigation with each of Mylan and Teva which agreed that both companies could introduce generics in the US on 11 December 2017. In December 2017, Pfizer released its own generic version of Viagra. , the US Food and Drug Administration has approved fifteen drug manufacturers to market generic sildenafil in the United States. Seven of these companies are based in India. Regional issues United States In 1992, Pfizer filed a patent covering the substance sildenafil and its use to treat cardiovascular diseases. This would be marketed as Revatio. The patent was published in 1993 and expired in 2012. The patent on Revatio (indicated for pulmonary arterial hypertension rather than erectile dysfunction) expired in late 2012. Generic versions of this low-dose form of sildenafil have been available in the US from a number of manufacturers, including Greenstone, Mylan, and Watson, since early 2013. Health care providers may prescribe generic sildenafil for erectile dysfunction. For a time, the generic was not available in the same dosages as branded Viagra, so using dosages typically required for treating ED required patients to take multiple pills. In 1994, Pfizer filed a patent covering the use of sildenafil to treat erectile dysfunction. This would be marketed as Viagra. This patent was published in 2002 and expired in 2019. Teva sued to have the latter patent invalidated, but Pfizer prevailed in an August 2011 federal district court case. An agreement with Pfizer allowed Teva to begin to provide the generic drug in December 2017. In the United States, Pfizer received two patents for sildenafil: one for its indication to treat cardiovascular disease (marketed as Revatio) and another for its indication to treat erectile dysfunction (marketed as Viagra). The substance is the same under both brand names. Sildenafil is available as a generic drug in the United States, labeled for pulmonary arterial hypertension. In the US, Revatio and Viagra are marketed by Viatris after Upjohn was spun off from Pfizer. Brazil Pfizer's patent on sildenafil citrate expired in Brazil in 2010. Canada In Canada, Pfizer's patent 2,324,324 for Revatio (sildenafil used to treat pulmonary hypertension) was found invalid by the Federal Court in June 2010, on an application by Ratiopharm Inc. On 8 November 2012, the Supreme Court of Canada ruled that Pfizer's patent 2,163,446 on Viagra was invalid from the beginning because the company did not provide full disclosure in its application. The decision, Teva Canada Ltd. v. Pfizer Canada Inc., pointed to section 27(3)(b) of The Patent Act which requires that disclosure must include sufficient information "to enable any person skilled in the art or science to which it pertains" to produce it. It added further: "As a matter of policy and sound statutory interpretation, patentees cannot be allowed to 'game' the system in this way. This, in my view, is the key issue in this appeal." Teva Canada launched Novo-Sildenafil, a generic version of Viagra, on the day the Supreme Court of Canada released its decision. To remain competitive, Pfizer then reduced the price of Viagra in Canada. However, on 9 November 2012, Pfizer filed a motion for a re-hearing of the appeal in the Supreme Court of Canada, on the grounds that the court accidentally exceeded its jurisdiction by voiding the patent. 
Finally, on 22 April 2013, the Supreme Court of Canada invalidated Pfizer's patent altogether. China Manufacture and sale of sildenafil citrate drugs is common in China, where Pfizer's patent claim is not widely enforced. Egypt Egypt approved Viagra for sale in 2002, but soon afterwards allowed local companies to produce generic versions of the drug, citing the interests of poor people who would not be able to afford Pfizer's price. European Union In June 2013 Pfizer's patent on sildenafil citrate expired in some member countries of the European Union, including Austria, Denmark, France, Germany, Ireland, Italy, The Netherlands, Spain, Sweden, the United Kingdom, and Switzerland. A UK patent held by Pfizer on the use of PDE5 inhibitors (see below) as treatment of impotence was invalidated in 2000 because of obviousness; this decision was upheld on appeal in 2002. India Manufacture and sale of sildenafil citrate drugs known as "generic Viagra" is common in India, where Pfizer's patent claim does not apply. Brand names include Kamagra (Ajanta Pharma), Silagra (Cipla), Edegra (Sun Pharmaceutical), Penegra (Zydus Cadila), Manly (Cooper Pharma) and Zenegra (Alkem Laboratories). New Zealand Sildenafil was reclassified in New Zealand in 2014 so it could be bought over the counter from a pharmacist. It is thought that this reduced sales over the Internet and was safer as males could be referred for medical advice if appropriate. South Korea In 1999 South Korea granted two patents to Pfizer related to sildenafil. The first document guaranteed sole production and sale of the substance until 2012, while the second gave Pfizer the exclusive use to treating erectile dysfunction with sildenafil until 2014. In 2011 Hanmi Pharmaceutical and CJ CheilJedang launched a suit against the exclusive use patent. The Korean Court system made a ruling against Pfizer in June 2012, allowing for the unhindered domestic production of generic prescription sildenafil. During 2012 Viagra lost its position as the top selling erectile dysfunction treatment in South Korea. This development was credited largely "due to the introduction of generic products." Generic sildenafil became publicly available in May. Sales of PalPal by Hanmi Pharmaceuticals totalled ₩22 billion or about 86% the market share of Viagra that year. By 2017 there were over 50 generic sildenafil pills available. During that year Viagra sales slumped to 38% that of Palpal. United Kingdom There were 2,958,199 prescriptions for Sildenafil in 2016 in England, compared with 1,042,431 in 2006. In 2018, Viagra Connect, a particular formulation of Sildenafil marketed by Pfizer, became available for sale without a prescription in the UK, in an attempt to widen availability and reduce demand for counterfeit products.
Biology and health sciences
Specific drugs
Health
32567
https://en.wikipedia.org/wiki/Volt
Volt
The volt (symbol: V) is the unit of electric potential, electric potential difference (voltage), and electromotive force in the International System of Units (SI). Definition One volt is defined as the electric potential between two points of a conducting wire when an electric current of one ampere dissipates one watt of power between those points. It can be expressed in terms of SI base units (m, kg, s, and A) as 1 V = 1 kg⋅m²⋅s⁻³⋅A⁻¹. Equivalently, it is the potential difference between two points that will impart one joule of energy per coulomb of charge that passes through it; expressed in the same SI base units, 1 V = 1 J/C = 1 kg⋅m²⋅s⁻³⋅A⁻¹. It can also be expressed as amperes times ohms (current times resistance, Ohm's law), webers per second (magnetic flux per time), watts per ampere (power per current), or joules per coulomb (energy per charge), which is also equivalent to electronvolts per elementary charge. Josephson junction definition Historically the "conventional" volt, V90, defined in 1987 by the 18th General Conference on Weights and Measures and in use from 1990 to 2019, was implemented using the Josephson effect for exact frequency-to-voltage conversion, combined with the caesium frequency standard. Though the Josephson effect is still used to realize a volt, the constant used has changed slightly. For the Josephson constant, KJ = 2e/h (where e is the elementary charge and h is the Planck constant), a "conventional" value KJ-90 = 483597.9 GHz/V was used for the purpose of defining the volt. As a consequence of the 2019 revision of the SI, as of 2019 the Josephson constant has an exact value of KJ = 483597.848... GHz/V, which replaced the conventional value KJ-90. This standard is typically realized using a series-connected array of several thousand or tens of thousands of junctions, excited by microwave signals between 10 and 80 GHz (depending on the array design). Empirically, several experiments have shown that the method is independent of device design, material, measurement setup, etc., and no correction terms are required in a practical implementation. Water-flow analogy In the water-flow analogy, sometimes used to explain electric circuits by comparing them with water-filled pipes, voltage (difference in electric potential) is likened to difference in water pressure, while current is proportional to the amount of water flowing. A resistor would be a reduced diameter somewhere in the piping or something akin to a radiator offering resistance to flow. The relationship between voltage and current is defined (in ohmic devices like resistors) by Ohm's law. Ohm's law is analogous to the Hagen–Poiseuille equation, as both are linear models relating flux and potential in their respective systems. Common voltages The voltage produced by each electrochemical cell in a battery is determined by the chemistry of that cell. Cells can be combined in series for multiples of that voltage, or additional circuitry added to adjust the voltage to a different level. Mechanical generators can usually be constructed to any voltage in a range of feasibility.
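The series-combination rule just mentioned (cells in series simply add their voltages) can be sketched as follows, using nominal per-cell voltages that also appear in the list below; the chemistry names and the helper function are illustrative, not a standard API.

```python
# Minimal sketch of the series-combination arithmetic described above:
# cells connected in series add their voltages. Per-cell values are the
# nominal chemistries quoted in this article.
CELL_VOLTAGE = {
    "alkaline": 1.5,    # volts per cell
    "nimh": 1.2,
    "lead_acid": 2.1,
    "lifepo4": 3.3,
}

def pack_voltage(chemistry: str, cells_in_series: int) -> float:
    """Nominal voltage of a battery built from identical cells in series."""
    return CELL_VOLTAGE[chemistry] * cells_in_series

print(pack_voltage("lead_acid", 6))   # "12 V" car battery -> 12.6 V
print(pack_voltage("lead_acid", 12))  # "24 V" battery     -> 25.2 V
print(pack_voltage("alkaline", 6))    # six alkaline cells -> 9.0 V (as in a typical PP3)
```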
Nominal voltages of familiar sources: Nerve cell resting potential: ~ 75 mV Single-cell, rechargeable NiMH or NiCd battery: 1.2 V Single-cell, non-rechargeable (e.g., AAA, AA, C and D cells): alkaline battery: 1.5 V; zinc–carbon battery: 1.56 V if fresh and unused Logic voltage levels: 1.2 V, 1.5 V, 1.8 V, 2.5 V, 3.3 V, 5.0 V LiFePO4 rechargeable battery: 3.3 V Cobalt-based lithium polymer rechargeable battery: 3.75 V (see Comparison of commercial battery types) Transistor–transistor logic/CMOS (TTL) power supply: 5 V USB: 5 V DC PP3 battery: 9 V Automotive battery systems use cells of 2.1 volts; a "12 V" battery has six cells connected in series, which produces 12.6 V; a "24 V" battery has 12 cells connected in series, producing 25.2 V. Some antique vehicles use "6 V" 3-cell batteries, or 6.3 volts. Household mains electricity AC (see Mains electricity by country for a list of countries with mains power plugs, voltages and frequencies) 100 V in Japan 120 V in North America 230 V in Europe, Asia, Africa and Australia Rapid transit third rail: 600–750 V (see List of railway electrification systems) High-speed train overhead power lines: 25 kV at 50 Hz, but see the List of railway electrification systems and 25 kV at 60 Hz for exceptions. High-voltage electric power transmission lines: 110 kV and up (1.15 MV is the record; the highest active voltage is 1.10 MV) Lightning: a maximum of around 150 MV. History In 1800, as the result of a professional disagreement over the galvanic response advocated by Luigi Galvani, Alessandro Volta developed the so-called voltaic pile, a forerunner of the battery, which produced a steady electric current. Volta had determined that the most effective pair of dissimilar metals to produce electricity was zinc and silver. In 1861, Latimer Clark and Sir Charles Bright coined the name "volt" for the unit of resistance. By 1873, the British Association for the Advancement of Science had defined the volt, ohm, and farad. In 1881, the International Electrical Congress, now the International Electrotechnical Commission (IEC), approved the volt as the unit for electromotive force. They made the volt equal to 10⁸ cgs units of voltage, the cgs system at the time being the customary system of units in science. They chose such a ratio because the cgs unit of voltage is inconveniently small and one volt in this definition is approximately the emf of a Daniell cell, the standard source of voltage in the telegraph systems of the day. At that time, the volt was defined as the potential difference [i.e., what is nowadays called the "voltage (difference)"] across a conductor when a current of one ampere dissipates one watt of power. The "international volt" was defined in 1893 as 1/1.434 of the emf of a Clark cell. This definition was abandoned in 1908 in favor of a definition based on the international ohm and international ampere until the entire set of "reproducible units" was abandoned in 1948. The 2019 revision of the SI, including defining the value of the elementary charge, took effect on 20 May 2019.
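Returning to the Josephson junction definition described earlier, the following sketch shows the frequency-to-voltage relation V = n·f/KJ for a single junction and for a series array; the 70 GHz drive frequency and the 10,000-junction array size are example figures, not a description of any particular laboratory standard.

```python
# Sketch of the Josephson voltage relation from the "Josephson junction
# definition" section above: a junction driven at frequency f produces
# quantized voltage steps V = n * f / K_J, and many junctions in series are
# summed to reach practical voltages. Array size and drive frequency below
# are example values only.
K_J = 483_597.848e9   # Josephson constant, Hz per volt (2019 SI value, rounded)

def junction_voltage(frequency_hz: float, step: int = 1) -> float:
    """Voltage of a single Josephson junction biased on the given step."""
    return step * frequency_hz / K_J

f_drive = 70e9                          # 70 GHz microwave drive (within the 10-80 GHz range quoted)
per_junction = junction_voltage(f_drive)
print(f"per junction: {per_junction * 1e6:.1f} uV")                 # ~144.7 uV
print(f"10,000 junctions in series: {10_000 * per_junction:.3f} V")  # ~1.45 V
```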
Physical sciences
Electromagnetism
null
32568
https://en.wikipedia.org/wiki/Vela%20%28constellation%29
Vela (constellation)
Vela is a constellation in the southern sky, which contains the Vela Supercluster. Its name is Latin for the sails of a ship, and it was originally part of a larger constellation, the ship Argo Navis, which was later divided into three parts, the others being Carina and Puppis. With an apparent magnitude of 1.8, its brightest star is the hot blue multiple star Gamma Velorum, one component of which is the closest and brightest Wolf-Rayet star in the sky. Delta and Kappa Velorum, together with Epsilon and Iota Carinae, form the asterism known as the False Cross. 1.95-magnitude Delta is actually a triple or quintuple star system. History Argo Navis was one of the 48 classical constellations listed by the 2nd-century astronomer Ptolemy, and represented the ship Argo, used by Jason and the Argonauts on their quest for the Golden Fleece in Greek mythology. German cartographer Johann Bayer depicted the constellation on his Uranometria of 1603, and gave the stars Bayer designations from Alpha to Omega. However, his chart was inaccurate as the constellation was not fully visible from the Northern Hemisphere. Argo was more accurately charted and subdivided in 1752 by the French astronomer Nicolas Louis de Lacaille, forming Carina (the keel), Vela (the sails), and Puppis (the poop deck). Despite the division, Lacaille kept Argo's Bayer designations. Therefore, Carina has the Alpha, Beta and Epsilon originally assigned to Argo Navis, while Vela's brightest stars are Gamma and Delta, Puppis has Zeta as its brightest star, and so on. Characteristics Vela is bordered by Antlia and Pyxis to the north, Puppis to the northwest, Carina to the south and southwest, and Centaurus to the east. Covering 500 square degrees, it ranks 32nd of the 88 modern constellations in size. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Vel". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of 14 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −37.16° and −57.17°. Features Stars The brightest star in the constellation, Gamma Velorum, is a complex multiple star system. The brighter component, known as Gamma2 Velorum, shines as a blue-white star of apparent magnitude 1.83. It is a spectroscopic binary made up of two very hot blue stars orbiting each other every 78.5 days and separated by somewhere between 0.8 and 1.6 Astronomical Units (AU). The brighter component is a hot blue main-sequence star of spectral type O7.5 and is around 280,000 times as luminous, is around 30 times as massive and is 17 times the diameter of the Sun with a surface temperature of 35,000 K. The second component is an extremely rare example of hot star known as a Wolf–Rayet star, and is the closest and brightest example in the sky. It has a surface temperature of 57,000 and is around 170,000 times as luminous as the Sun, though it radiates most of its energy in the ultraviolet spectrum. Gamma1 is a blue-white star of spectral type B2III and apparent magnitude 4.3. The two pairs are separated by 41 arcseconds, easily separable in binoculars. Parallax measurements give a distance of 1,116 light-years, meaning that they are at least 12,000 AU apart. Further afield are 7.3-magnitude Gamma Velorum C and 9.4-magnitude Gamma Velorum D, lying 62 and 93 arcseconds south-southeast from Gamma2. 
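The "at least 12,000 AU" figure quoted above for the separation of Gamma1 and Gamma2 Velorum follows from the small-angle relation between angular separation and distance; the sketch below works through that arithmetic, noting that the result is only a projected (hence minimum) separation.

```python
# Worked example of the small-angle relation behind the "at least 12,000 AU"
# figure quoted above: the projected separation in AU is approximately the
# angular separation in arcseconds times the distance in parsecs. A true 3-D
# separation can only be larger, hence "at least".
LY_PER_PARSEC = 3.2616

def projected_separation_au(angle_arcsec: float, distance_ly: float) -> float:
    distance_pc = distance_ly / LY_PER_PARSEC
    return angle_arcsec * distance_pc

print(round(projected_separation_au(41.0, 1116.0)))  # ~14,000 AU, consistent with >= 12,000 AU
```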
The next brightest star is Delta Velorum or Alsephina, also a multiple star system and one of the brightest eclipsing binaries in the sky. Together with Kappa Velorum or Markeb, Iota Carinae or Aspidiske and Epsilon Carinae or Avior, it forms the diamond-shaped asterism known as the False Cross—so called because it is sometimes mistaken for the Southern Cross, causing errors in astronavigation. Appearing as a white star of magnitude 1.95, Delta is actually a triple or possibly quintuple star system located around 80 light-years from the Solar System. Delta A has a magnitude of 1.99 and is an eclipsing binary composed of two A-type white stars (Delta Aa and Ab) which orbit each other every 45.2 days and lie 0.5 AU from each other, with a resulting drop in magnitude of 0.4 when the dimmer one passes.in front of the brighter. Delta B is a 5.1 magnitude yellow G-class star of similar dimensions to the Sun which ranges between 26 and 72 AU away from the brighter pair, taking 142 years to complete a revolution. Further out still, at a distance of 1700 AU, are two red dwarfs of magnitudes 11 and 13. If they are part of the multiple system, they take 28000 years to complete an orbit. Also called Markeb, Kappa appears as a blue-white star of spectral type B2IV-V and magnitude 2.47 but is in fact a spectroscopic binary. The two orbit around each other with a period of 116.65 days, but the size, mass and nature of the companion are as yet unclear. The orange-hued Lambda Velorum, or Suhail, is the third-brightest star in the constellation. A supergiant of spectral type K4Ib-II, it varies between magnitudes 2.14 and 2.3, and lies 545 light-years distant. It has around 11,000 times the luminosity, 9 to 12 times the mass and 207 times the diameter of the Sun. AH Velorum is a Cepheid variable located less than a degree to the northeast of Gamma. A yellow-white supergiant of spectral type F7Ib-II, it pulsates between magnitudes 5.5 and 5.89 over 4.2 days. Also lying close to Gamma, V Velorum is a Cepheid of spectral type F6-F9II ranging from magnitude 7.2 to 7.9 over 4.4 days. AI Velorum is located 2.8 degrees north-northeast of Gamma, a Delta Scuti variable of spectral type A2p-F2pIV/V that ranges between magnitudes 6.15 and 6.76 in around 2.7 hours. V390 Velorum is an aged star that has been found to be surrounded by a dusty disk. An RV Tauri variable, it has a spectral type of F3e and ranges between magnitudes 9.01 and 9.27 over nearly 95 days. Omicron Velorum is a blue-white subgiant of spectral type B3III-IV located around 495 light-years from the Solar System. A slowly pulsating B star, it ranges between magnitudes 3.57 and 3.63 over 2.8 days. It is the brightest star in, and gives its name to, the Omicron Velorum Cluster, also known as IC 2391, an open cluster located around 500 light-years away. Seven star systems have been found to have planets. HD 75289 is a Sun-like star of spectral type G0V with a hot Jupiter planetary companion that takes only about 3.51 days to revolve at an orbital distance of 0.0482 AU. WASP-19 is a star of apparent magnitude 12.3 located 815 light-years away, which has a hot Jupiter-like planet that orbits every 0.7 days. HD 73526 is a Sun-like star of spectral type G6V that has two planets around double the mass of Jupiter each with orbits of 187 and 377 days, respectively. HD 85390 is an orange dwarf of spectral type K1.5V lying around 111 light-years distant with a planet 42 times as massive as Earth orbiting every 788 days. 
HD 93385 is a Sun-like star of spectral type G2/G3V located around 138 light-years away that is orbited by two super-Earths with periods of 13 and 46 days and masses 8.3 and 10.1 times that of Earth, respectively. Brown dwarfs The discovery of a binary brown dwarf system named Luhman 16 only 6.6 light-years away, the third-closest system to the Solar System, was announced on 11 March 2013. Deep-sky objects Of the deep-sky objects of interest in Vela is a planetary nebula known as NGC 3132, nicknamed the 'Eight-Burst Nebula' or 'Southern Ring Nebula' (see accompanying photo). It lies on the border of the constellation with Antlia. NGC 2899 is an unusual red-hued example. This constellation has 32 more planetary nebulae. The Gum Nebula is a faint emission nebula, believed to be the remains of a million-year-old supernova. Within it lies the smaller and younger Vela Supernova Remnant. This is the nebula of a supernova explosion that is believed to have been visible from Earth around 10,000 years ago. The remnant contains the Vela Pulsar, the first pulsar to be identified optically. Nearby is NGC 2736, also known as the Pencil Nebula. HH-47 is a Herbig-Haro Object, a young star around 1,400 light-years from the Sun that is ejecting material at tremendous speed (up to a million kilometres per hour) into its surrounds. This material glows as it hits surrounding gas. NGC 2670 is an open cluster located in Vela. It has an overall magnitude of 7.8 and is 3,200 light-years from Earth. The stars of NGC 2670, a Trumpler class II 2 p and Shapley class-d cluster, are in a conformation suggesting a bow and arrow. Its class indicates that it is a poor, loose cluster, though detached from the star field. It is somewhat concentrated at its center, and its less than 50 stars range moderately in brightness. Located 2 degrees south of Gamma Velorum, NGC 2547 is an open cluster containing around 50 stars of magnitudes 7 to 15. NGC 3201 is a globular cluster discovered by James Dunlop on May 28, 1826. Its stellar population is inhomogeneous, varying with distance from the core. The effective temperature of the stars shows an increase with greater distance, with the redder and cooler stars tending to be located closer to the core. As of 2010, is one of only two clusters (including Messier 4) that shows a definite inhomogeneous population. RCW 36 is a star-forming region in Vela, and one of the nearest sites of massive star formation. This star-forming region has given rise to a cluster of several hundred young stars that power an HII region. The star-forming region lies in Clump 6 in the Vela Molecular Ridge Cloud C.
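As a rough consistency check on the Delta Velorum figures quoted earlier (companion B ranging between 26 and 72 AU from the bright pair, with a 142-year period), Kepler's third law gives the total system mass; taking the midpoint of the quoted range as a stand-in for the semi-major axis is an assumption made purely for illustration.

```python
# Rough Kepler's-third-law check on the Delta Velorum figures quoted earlier:
# for a semi-major axis a in AU and a period P in years, the total system
# mass in solar masses is approximately a**3 / P**2. Using the midpoint of
# the quoted 26-72 AU range as the semi-major axis is an assumption for
# illustration only.
def total_mass_solar(a_au: float, period_years: float) -> float:
    return a_au**3 / period_years**2

a_guess = (26 + 72) / 2          # ~49 AU
print(round(total_mass_solar(a_guess, 142), 1))  # ~5.8 solar masses for the whole system
```

A total of several solar masses is plausible for a system containing two A-type stars and a Sun-like companion, so the quoted separation and period are mutually consistent.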
Physical sciences
Other
Astronomy
32571
https://en.wikipedia.org/wiki/Volcano
Volcano
A volcano is commonly defined as a vent or fissure in the crust of a planetary-mass object, such as Earth, that allows hot lava, volcanic ash, and gases to escape from a magma chamber below the surface. On Earth, volcanoes are most often found where tectonic plates are diverging or converging, and because most of Earth's plate boundaries are underwater, most volcanoes are found underwater. For example, a mid-ocean ridge, such as the Mid-Atlantic Ridge, has volcanoes caused by divergent tectonic plates whereas the Pacific Ring of Fire has volcanoes caused by convergent tectonic plates. Volcanoes resulting from divergent tectonic activity are usually non-explosive whereas those resulting from convergent tectonic activity cause violent eruptions. Volcanoes can also form where there is stretching and thinning of the crust's plates, such as in the East African Rift, the Wells Gray-Clearwater volcanic field, and the Rio Grande rift in North America. Volcanism away from plate boundaries most likely arises from upwelling diapirs from the core–mantle boundary called mantle plumes, deep within Earth. This results in hotspot volcanism or intraplate volcanism, in which the plume may cause thinning of the crust and result in a volcanic island chain due to the continuous movement of the tectonic plate, of which the Hawaiian hotspot is an example. Volcanoes are usually not created at transform tectonic boundaries where two tectonic plates slide past one another. Volcanoes, based on their frequency of eruption or volcanism, can be defined as either active, dormant or extinct. Active volcanoes have a recent history of volcanism and are likely to erupt again, dormant ones have not erupted in a long time but may erupt later, while extinct ones are not capable of eruption at all. These categories aren't entirely uniform; they may overlap for certain examples. Large eruptions can affect atmospheric temperature as ash and droplets of sulfuric acid obscure the Sun and cool Earth's troposphere. Historically, large volcanic eruptions have been followed by volcanic winters which have caused catastrophic famines. Other planets besides Earth have volcanoes. For example, volcanoes are very numerous on Venus. Mars has significant volcanoes. In 2009, a paper was published suggesting a new definition for the word 'volcano' that includes processes such as cryovolcanism. It suggested that a volcano be defined as 'an opening on a planet or moon's surface from which magma, as defined for that body, and/or magmatic gas is erupted.' This article mainly covers volcanoes on Earth. See and cryovolcano for more information. Etymology and terminology The word volcano (UK: /vɒlˈkeɪnəʊ/; and US /vɔlˈkeɪnoʊ/) originates from the early 17th century, derived from the Italian vulcano, a volcanic island in the Aeolian Islands of Italy whose name in turn comes from latin volcānus or vulcānus referring to Vulcan, the god of fire in Roman mythology. The set of processes and phenomena involved in volcanic activity is called volcanism [Early 19th century: from volcano + -ism]. The study of volcanism and volcanoes is called volcanology [mid 19th century: from volcano + -logy], sometimes spelled vulcanology. Plate tectonics According to the theory of plate tectonics, Earth's lithosphere, its rigid outer shell, is broken into sixteen larger and several smaller plates. 
These move continuously at a slow pace, due to convection in the underlying ductile mantle, and most volcanic activity on Earth takes place along plate boundaries, where plates are converging (and lithosphere is being destroyed) or are diverging (and new lithosphere is being created). During the development of geological theory, certain concepts that allowed the grouping of volcanoes in time, place, structure and composition have developed that ultimately have had to be explained in the theory of plate tectonics. For example, some volcanoes are polygenetic with more than one period of activity during their history; other volcanoes that become extinct after erupting exactly once are monogenetic (meaning "one life") and such volcanoes are often grouped together in a geographical region. Divergent plate boundaries At the mid-ocean ridges, two tectonic plates diverge from one another as hot mantle rock creeps upwards beneath the thinned oceanic crust. The decrease of pressure in the rising mantle rock leads to adiabatic expansion and the partial melting of the rock, causing volcanism and creating new oceanic crust. Most divergent plate boundaries are at the bottom of the oceans, and so most volcanic activity on Earth is submarine, forming new seafloor. Black smokers (also known as deep sea vents) are evidence of this kind of volcanic activity. Where the mid-oceanic ridge is above sea level, volcanic islands are formed, such as Iceland. Convergent plate boundaries Subduction zones are places where two plates, usually an oceanic plate and a continental plate, collide. The oceanic plate subducts (dives beneath the continental plate), forming a deep ocean trench just offshore. In a process called flux melting, water released from the subducting plate lowers the melting temperature of the overlying mantle wedge, thus creating magma. This magma tends to be extremely viscous because of its high silica content, so it often does not reach the surface but cools and solidifies at depth. When it does reach the surface, however, a volcano is formed. Thus subduction zones are bordered by chains of volcanoes called volcanic arcs. Typical examples are the volcanoes in the Pacific Ring of Fire, such as the Cascade Volcanoes or the Japanese Archipelago, or the eastern islands of Indonesia. Hotspots Hotspots are volcanic areas thought to be formed by mantle plumes, which are hypothesized to be columns of hot material rising from the core-mantle boundary. As with mid-ocean ridges, the rising mantle rock experiences decompression melting which generates large volumes of magma. Because tectonic plates move across mantle plumes, each volcano becomes inactive as it drifts off the plume, and new volcanoes are created where the plate advances over the plume. The Hawaiian Islands are thought to have been formed in such a manner, as has the Snake River Plain, with the Yellowstone Caldera being part of the North American plate currently above the Yellowstone hotspot. However, the mantle plume hypothesis has been questioned. Continental rifting Sustained upwelling of hot mantle rock can develop under the interior of a continent and lead to rifting. Early stages of rifting are characterized by flood basalts and may progress to the point where a tectonic plate is completely split. A divergent plate boundary then develops between the two halves of the split plate. 
However, rifting often fails to completely split the continental lithosphere (such as in an aulacogen), and failed rifts are characterized by volcanoes that erupt unusual alkali lava or carbonatites. Examples include the volcanoes of the East African Rift. Volcanic features A volcano needs a reservoir of molten magma (e.g. a magma chamber), a conduit to allow magma to rise through the crust, and a vent to allow the magma to escape above the surface as lava. The erupted volcanic material (lava and tephra) that is deposited around the vent is known as a , typically a volcanic cone or mountain. The most common perception of a volcano is of a conical mountain, spewing lava and poisonous gases from a crater at its summit; however, this describes just one of the many types of volcano. The features of volcanoes are varied. The structure and behaviour of volcanoes depend on several factors. Some volcanoes have rugged peaks formed by lava domes rather than a summit crater while others have landscape features such as massive plateaus. Vents that issue volcanic material (including lava and ash) and gases (mainly steam and magmatic gases) can develop anywhere on the landform and may give rise to smaller cones such as Puu Ōō on a flank of Kīlauea in Hawaii. Volcanic craters are not always at the top of a mountain or hill and may be filled with lakes such as with Lake Taupō in New Zealand. Some volcanoes can be low-relief landform features, with the potential to be hard to recognize as such and be obscured by geological processes. Other types of volcano include mud volcanoes, which are structures often not associated with known magmatic activity; and cryovolcanoes (or ice volcanoes), particularly on some moons of Jupiter, Saturn, and Neptune. Active mud volcanoes tend to involve temperatures much lower than those of igneous volcanoes except when the mud volcano is actually a vent of an igneous volcano. Fissure vents Volcanic fissure vents are generally found at diverging plate boundaries, they are flat, linear fractures through which basaltic lava emerges. These kinds of volcanoes are non-explosive and the basaltic lava tends to have a low viscosity and solidifies slowly leading to a gentle sloping basaltic lava plateau. They often relate or constitute shield volcanoes Shield volcanoes Shield volcanoes, so named for their broad, shield-like profiles, are formed by the eruption of low-viscosity basaltic or andesitic lava that can flow a great distance from a vent. They generally do not explode catastrophically but are characterized by relatively gentle effusive eruptions. Since low-viscosity magma is typically low in silica, shield volcanoes are more common in oceanic than continental settings. The Hawaiian volcanic chain is a series of shield cones, and they are common in Iceland, as well. Olympus Mons, an extinct martian shield volcano is the largest known volcano in the Solar System. Lava domes Lava domes, also called dome volcanoes, have steep convex sides built by slow eruptions of highly viscous lava, for example, rhyolite. They are sometimes formed within the crater of a previous volcanic eruption, as in the case of Mount St. Helens, but can also form independently, as in the case of Lassen Peak. Like stratovolcanoes, they can produce violent, explosive eruptions, but the lava generally does not flow far from the originating vent. Cryptodomes Cryptodomes are formed when viscous lava is forced upward causing the surface to bulge. The 1980 eruption of Mount St. 
Helens was an example; lava beneath the surface of the mountain created an upward bulge, which later collapsed down the north side of the mountain. Cinder cones Cinder cones result from eruptions of mostly small pieces of scoria and pyroclastics (both resemble cinders, hence the name of this volcano type) that build up around the vent. These can be relatively short-lived eruptions that produce a cone-shaped hill perhaps high. Most cinder cones erupt only once and some may be found in monogenetic volcanic fields that may include other features that form when magma comes into contact with water such as maar explosion craters and tuff rings. Cinder cones may form as flank vents on larger volcanoes, or occur on their own. Parícutin in Mexico and Sunset Crater in Arizona are examples of cinder cones. In New Mexico, Caja del Rio is a volcanic field of over 60 cinder cones. Based on satellite images, it has been suggested that cinder cones might occur on other terrestrial bodies in the Solar system too; on the surface of Mars and the Moon. Stratovolcanoes (composite volcanoes) Stratovolcanoes are tall conical mountains composed of lava flows and tephra in alternate layers, the strata that gives rise to the name. They are also known as composite volcanoes because they are created from multiple structures during different kinds of eruptions; the main conduit bringing magma to the surface branches into multiple secondary conduits and occasional laccoliths or sills, the branching conduits may form parasitic cones on the flanks of the main cone. Classic examples include Mount Fuji in Japan, Mayon Volcano in the Philippines, and Mount Vesuvius and Stromboli in Italy. Ash produced by the explosive eruption of stratovolcanoes has historically posed the greatest volcanic hazard to civilizations. The lavas of stratovolcanoes are higher in silica, and therefore much more viscous, than lavas from shield volcanoes. High-silica lavas also tend to contain more dissolved gas. The combination is deadly, promoting explosive eruptions that produce great quantities of ash, as well as pyroclastic surges like the one that destroyed the city of Saint-Pierre in Martinique in 1902. They are also steeper than shield volcanoes, with slopes of 30–35° compared to slopes of generally 5–10°, and their loose tephra are material for dangerous lahars. Large pieces of tephra are called volcanic bombs. Big bombs can measure more than across and weigh several tons. Supervolcanoes A supervolcano is defined as a volcano that has experienced one or more eruptions that produced over of volcanic deposits in a single explosive event. Such eruptions occur when a very large magma chamber full of gas-rich, silicic magma is emptied in a catastrophic caldera-forming eruption. Ash flow tuffs emplaced by such eruptions are the only volcanic product with volumes rivalling those of flood basalts. Supervolcano eruptions, while the most dangerous type, are very rare; four are known from the last million years, and about 60 historical VEI 8 eruptions have been identified in the geologic record over millions of years. A supervolcano can produce devastation on a continental scale, and severely cool global temperatures for many years after the eruption due to the huge volumes of sulfur and ash released into the atmosphere. Because of the enormous area they cover, and subsequent concealment under vegetation and glacial deposits, supervolcanoes can be difficult to identify in the geologic record without careful geologic mapping. 
Known examples include Yellowstone Caldera in Yellowstone National Park and Valles Caldera in New Mexico (both western United States); Lake Taupō in New Zealand; Lake Toba in Sumatra, Indonesia; and Ngorongoro Crater in Tanzania. Caldera volcanoes Volcanoes that, though large, are not large enough to be called supervolcanoes, may also form calderas (collapsed crater) in the same way. There may be active or dormant cones inside of the caldera or even a lake, such lakes are called Volcanogenic lakes, or simply, volcanic lakes. Submarine volcanoes Submarine volcanoes are common features of the ocean floor. Volcanic activity during the Holocene Epoch has been documented at only 119 submarine volcanoes, but there may be more than one million geologically young submarine volcanoes on the ocean floor. In shallow water, active volcanoes disclose their presence by blasting steam and rocky debris high above the ocean's surface. In the deep ocean basins, the tremendous weight of the water prevents the explosive release of steam and gases; however, submarine eruptions can be detected by hydrophones and by the discoloration of water because of volcanic gases. Pillow lava is a common eruptive product of submarine volcanoes and is characterized by thick sequences of discontinuous pillow-shaped masses which form underwater. Even large submarine eruptions may not disturb the ocean surface, due to the rapid cooling effect and increased buoyancy in water (as compared to air), which often causes volcanic vents to form steep pillars on the ocean floor. Hydrothermal vents are common near these volcanoes, and some support peculiar ecosystems based on chemotrophs feeding on dissolved minerals. Over time, the formations created by submarine volcanoes may become so large that they break the ocean surface as new islands or floating pumice rafts. In May and June 2018, a multitude of seismic signals were detected by earthquake monitoring agencies all over the world. They took the form of unusual humming sounds, and some of the signals detected in November of that year had a duration of up to 20 minutes. An oceanographic research campaign in May 2019 showed that the previously mysterious humming noises were caused by the formation of a submarine volcano off the coast of Mayotte. Subglacial volcanoes Subglacial volcanoes develop underneath ice caps. They are made up of lava plateaus capping extensive pillow lavas and palagonite. These volcanoes are also called table mountains, tuyas, or (in Iceland) mobergs. Very good examples of this type of volcano can be seen in Iceland and in British Columbia. The origin of the term comes from Tuya Butte, which is one of the several tuyas in the area of the Tuya River and Tuya Range in northern British Columbia. Tuya Butte was the first such landform analysed and so its name has entered the geological literature for this kind of volcanic formation. The Tuya Mountains Provincial Park was recently established to protect this unusual landscape, which lies north of Tuya Lake and south of the Jennings River near the boundary with the Yukon Territory. Hydrothermal features Hydrothermal features, for example geysers, fumaroles, mud pools, mud volcanoes, hot springs and acidic hot springs involve water as well as geothermal or magmatic activity. Such features are common around volcanoes and are often indicative of volcanism. 
Mud volcanoes Mud volcanoes or mud domes are conical structures created by eruption of liquids and gases, particularly mud (slurries), water and gases, although several activities may contribute. The largest mud volcanoes are in diameter and reach high. Mud volcanoes can be seen off the shore of Indonesia, on the island of Baratang, in Balochistan and in central Asia. Fumarole Fumaroles are vents on the surface from which hot steam and volcanic gases erupt due to the presence of superheated groundwater, these may indicate volcanic activity. Fumaroles erupting sulfurous gases are also often called solfataras. Geysers Geysers are springs which will occasionally erupt and discharge hot water and steam. Geysers may indicate ongoing magmatism, water underground is heated by hot rocks and steam pressure builds up before being released along with a jet of hot water. Almost half of all active geysers are present in Yellowstone National Park, US. Erupted material The material that is expelled in a volcanic eruption can be classified into three types: Volcanic gases, a mixture made mostly of steam, carbon dioxide, and a sulfur compound (either sulfur dioxide, SO2, or hydrogen sulfide, H2S, depending on the temperature) Lava, the name of magma when it emerges and flows over the surface Tephra, particles of solid material of all shapes and sizes ejected and thrown through the air Volcanic gases The concentrations of different volcanic gases can vary considerably from one volcano to the next. Water vapour is typically the most abundant volcanic gas, followed by carbon dioxide and sulfur dioxide. Other principal volcanic gases include hydrogen sulfide, hydrogen chloride, and hydrogen fluoride. A large number of minor and trace gases are also found in volcanic emissions, for example hydrogen, carbon monoxide, halocarbons, organic compounds, and volatile metal chlorides. Lava flows The form and style of an eruption of a volcano is largely determined by the composition of the lava it erupts. The viscosity (how fluid the lava is) and the amount of dissolved gas are the most important characteristics of magma, and both are largely determined by the amount of silica in the magma. Magma rich in silica is much more viscous than silica-poor magma, and silica-rich magma also tends to contain more dissolved gases. Lava can be broadly classified into four different compositions: If the erupted magma contains a high percentage (>63%) of silica, the lava is described as felsic. Felsic lavas (dacites or rhyolites) are highly viscous and are erupted as domes or short, stubby flows. Lassen Peak in California is an example of a volcano formed from felsic lava and is actually a large lava dome. Because felsic magmas are so viscous, they tend to trap volatiles (gases) that are present, which leads to explosive volcanism. Pyroclastic flows (ignimbrites) are highly hazardous products of such volcanoes since they hug the volcano's slopes and travel far from their vents during large eruptions. Temperatures as high as are known to occur in pyroclastic flows, which will incinerate everything flammable in their path, and thick layers of hot pyroclastic flow deposits can be laid down, often many meters thick. Alaska's Valley of Ten Thousand Smokes, formed by the eruption of Novarupta near Katmai in 1912, is an example of a thick pyroclastic flow or ignimbrite deposit. 
Volcanic ash that is light enough to erupt high into the Earth's atmosphere as an eruption column may travel hundreds of kilometres before it falls back to ground as a fallout tuff. Volcanic gases may remain in the stratosphere for years. Felsic magmas are formed within the crust, usually through the melting of crust rock from the heat of underlying mafic magmas. The lighter felsic magma floats on the mafic magma without significant mixing. Less commonly, felsic magmas are produced by extreme fractional crystallization of more mafic magmas. This is a process in which mafic minerals crystallize out of the slowly cooling magma, which enriches the remaining liquid in silica. If the erupted magma contains 52–63% silica, the lava is of intermediate composition or andesitic. Intermediate magmas are characteristic of stratovolcanoes. They are most commonly formed at convergent boundaries between tectonic plates, by several processes. One process is the hydration melting of mantle peridotite followed by fractional crystallization. Water from a subducting slab rises into the overlying mantle, lowering its melting point, particularly for the more silica-rich minerals. Fractional crystallization further enriches the magma in silica. It has also been suggested that intermediate magmas are produced by the melting of sediments carried downwards by the subducted slab. Another process is magma mixing between felsic rhyolitic and mafic basaltic magmas in an intermediate reservoir before emplacement or lava flow. If the erupted magma contains <52% and >45% silica, the lava is called mafic (because it contains higher percentages of magnesium (Mg) and iron (Fe)) or basaltic. These lavas are usually hotter and much less viscous than felsic lavas. Mafic magmas are formed by partial melting of the dry mantle, with limited fractional crystallization and assimilation of crustal material. Mafic lavas occur in a wide range of settings. These include mid-ocean ridges; shield volcanoes (such as the Hawaiian Islands, including Mauna Loa and Kilauea), on both oceanic and continental crust; and continental flood basalts. Some erupted magmas contain ≤45% silica and produce ultramafic lava. Ultramafic flows, also known as komatiites, are very rare; indeed, very few have been erupted at Earth's surface since the Proterozoic, when the planet's heat flow was higher. They are (or were) the hottest lavas, and were probably more fluid than common mafic lavas, with a viscosity less than a tenth that of hot basalt magma. Mafic lava flows show two varieties of surface texture: aa and pāhoehoe, both Hawaiian words. Aa is characterized by a rough, clinkery surface and is the typical texture of cooler basalt lava flows. Pāhoehoe is characterized by its smooth and often ropey or wrinkly surface and is generally formed from more fluid lava flows. Pāhoehoe flows are sometimes observed to transition to aa flows as they move away from the vent, but never the reverse. More silicic lava flows take the form of block lava, where the flow is covered with angular, vesicle-poor blocks. Rhyolitic flows typically consist largely of obsidian. Tephra Tephra is made when magma inside the volcano is blown apart by the rapid expansion of hot volcanic gases. Magma commonly explodes as the gas dissolved in it comes out of solution as the pressure decreases when it flows to the surface. These violent explosions produce particles of material that can then fly from the volcano. 
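The silica thresholds quoted in the preceding paragraphs lend themselves to a simple classification rule. The following Python sketch is purely illustrative; the function name and example values are made up here, and real classifications also take mineralogy and alkali content into account.

    def classify_lava(silica_percent):
        """Classify a lava by weight-percent silica, using the ranges quoted above."""
        if silica_percent > 63:
            return "felsic (dacite or rhyolite)"
        elif silica_percent >= 52:
            return "intermediate (andesite)"
        elif silica_percent > 45:
            return "mafic (basalt)"
        else:
            return "ultramafic (komatiite)"

    # Example: a dacite at roughly 65% SiO2 classifies as felsic,
    # while a typical Hawaiian basalt at roughly 50% SiO2 classifies as mafic.
    print(classify_lava(65))   # felsic (dacite or rhyolite)
    print(classify_lava(50))   # mafic (basalt)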
Solid particles smaller than 2 mm in diameter (sand-sized or smaller) are called volcanic ash. Tephra and other volcaniclastics (shattered volcanic material) make up more of the volume of many volcanoes than do lava flows. Volcaniclastics may have contributed as much as a third of all sedimentation in the geologic record. The production of large volumes of tephra is characteristic of explosive volcanism. Dissection Through natural processes, mainly erosion, so much of the solidified erupted material that makes up the mantle of a volcano may be stripped away that its inner anatomy becomes apparent. Using the metaphor of biological anatomy, such a process is called "dissection". When the volcano is extinct, a plug forms in its vent; over time, erosion slowly strips away the volcanic cone, leaving the more resistant lava plug standing. Cinder Hill, a feature of Mount Bird on Ross Island, Antarctica, is a prominent example of a dissected volcano. Volcanoes that were recently active on a geological timescale, such as Mount Kaimon in southern Kyūshū, Japan, tend to be undissected. Devils Tower in Wyoming is a famous example of an exposed volcanic plug. Types of volcanic eruptions Eruption styles are broadly divided into magmatic, phreatomagmatic (hydrovolcanic), and phreatic eruptions. The intensity of explosive volcanism is expressed using the volcanic explosivity index (VEI), which ranges from 0 for Hawaiian-type eruptions to 8 for supervolcanic eruptions: Magmatic eruptions are driven primarily by gas release due to decompression. Low-viscosity magma with little dissolved gas produces relatively gentle effusive eruptions. High-viscosity magma with a high content of dissolved gas produces violent explosive eruptions. The range of observed eruption styles is illustrated by historical examples. Hawaiian eruptions are typical of volcanoes that erupt mafic lava with a relatively low gas content. These are almost entirely effusive, producing local lava fountains and highly fluid lava flows but relatively little tephra. They are named after the Hawaiian volcanoes. The eruption columns from these eruptions are relatively low. Strombolian eruptions are characterized by moderate viscosities and dissolved gas levels. They produce frequent but short-lived eruptions, driven by the bursting of gas slugs rising through the magma, that can generate eruptive columns hundreds of meters high. Their primary product is scoria. They are named after Stromboli. Vulcanian eruptions are characterized by yet higher viscosities and partial crystallization of magma, which is often intermediate in composition. Eruptions take the form of short-lived explosions over several hours, which destroy a central dome and eject large lava blocks and bombs. This is followed by an effusive phase that rebuilds the central dome. Vulcanian eruptions are named after Vulcano. Eruption columns from these eruptions are of limited height. Peléan eruptions are more violent still, being characterized by dome growth and collapse that produces various kinds of pyroclastic flows. They are named after Mount Pelée. Plinian eruptions are characterized by sustained huge eruption columns whose collapse produces catastrophic pyroclastic flows. They are named after Pliny the Younger, who chronicled the Plinian eruption of Mount Vesuvius in 79 AD. Ultra-Plinian eruptions, the largest of all volcanic eruptions, are more intense and have a higher eruption rate than Plinian ones; they form higher eruption columns and may form large calderas. 
These eruptions produce rhyolitic lava, tephra, pumice and thick pyroclastic flows that cover vast areas and may produce widespread ash-fall deposits. Examples are Mt. Mazama and Yellowstone. Phreatomagmatic eruptions (hydrovolcanic) are characterized by interaction of rising magma with groundwater. They are driven by the resulting rapid buildup of pressure in the superheated groundwater. Phreatic eruptions are characterized by superheating of groundwater that comes in contact with hot rock or magma. They are distinguished from phreatomagmatic eruptions because the erupted material is all country rock; no magma is erupted. Volcanic activity The Smithsonian Institution's Global Volcanism Program database of volcanic eruptions in the Holocene Epoch (the last 11,700 years) lists 9,901 confirmed eruptions from 859 volcanoes. The database also lists 1,113 uncertain eruptions and 168 discredited eruptions for the same time interval. Volcanoes vary greatly in their level of activity, with individual volcanic systems having an eruption recurrence ranging from several times a year to once in tens of thousands of years. Volcanoes are informally described as erupting, active, dormant, or extinct, but the definitions of these terms are not entirely uniform among volcanologists. The level of activity of most volcanoes falls upon a graduated spectrum, with much overlap between categories, and does not always fit neatly into only one of these categories. Erupting The USGS defines a volcano as "erupting" whenever the ejection of magma from any point on the volcano is visible, including visible magma still contained within the walls of the summit crater. Active While there is no international consensus among volcanologists on how to define an active volcano, the USGS defines a volcano as active whenever subterranean indicators, such as earthquake swarms, ground inflation, or unusually high levels of carbon dioxide or sulfur dioxide, are present. Dormant and reactivated The USGS defines a dormant volcano as any volcano that is not showing any signs of unrest such as earthquake swarms, ground swelling, or excessive noxious gas emissions, but which shows signs that it could yet become active again. Many dormant volcanoes have not erupted for thousands of years, but have still shown signs that they may be likely to erupt again in the future. In an article justifying the re-classification of Alaska's Mount Edgecumbe volcano from "dormant" to "active", volcanologists at the Alaska Volcano Observatory pointed out that the term "dormant" in reference to volcanoes has been deprecated over the past few decades and that "[t]he term "dormant volcano" is so little used and undefined in modern volcanology that the Encyclopedia of Volcanoes (2000) does not contain it in the glossaries or index"; however, the USGS still widely employs the term. Previously, a volcano was often considered to be extinct if there were no written records of its activity. Such a generalization is inconsistent with observation and deeper study, as has occurred recently with the unexpected eruption of the Chaitén volcano in 2008. Modern volcanic activity monitoring techniques, and improvements in the modelling of the factors that produce eruptions, have helped the understanding of why volcanoes may remain dormant for a long time, and then become unexpectedly active again. 
The potential for eruptions, and their style, depend mainly upon the state of the magma storage system under the volcano, the eruption trigger mechanism and its timescale. For example, the Yellowstone volcano has a repose/recharge period of around 700,000 years, and Toba of around 380,000 years. Vesuvius was described by Roman writers as having been covered with gardens and vineyards before its unexpected eruption of 79 CE, which destroyed the towns of Herculaneum and Pompeii. Accordingly, it can sometimes be difficult to distinguish between an extinct volcano and a dormant (inactive) one. A long period of dormancy is also known to reduce public awareness of a volcano's hazards. Pinatubo was an inconspicuous volcano, unknown to most people in the surrounding areas, and initially not seismically monitored before its unanticipated and catastrophic eruption of 1991. Two other examples of volcanoes once thought to be extinct before springing back into eruptive activity are the long-dormant Soufrière Hills volcano on the island of Montserrat, where activity resumed in 1995 (turning its capital Plymouth into a ghost town), and Fourpeaked Mountain in Alaska, which, before its September 2006 eruption, had not erupted since before 8000 BCE. Extinct Extinct volcanoes are those that scientists consider unlikely to erupt again because the volcano no longer has a magma supply. Examples of extinct volcanoes are many volcanoes on the Hawaiian–Emperor seamount chain in the Pacific Ocean (although some volcanoes at the eastern end of the chain are active), Hohentwiel in Germany, Shiprock in New Mexico, US, Capulin in New Mexico, US, Zuidwal volcano in the Netherlands, and many volcanoes in Italy such as Monte Vulture. Edinburgh Castle in Scotland is located atop an extinct volcano, which forms Castle Rock. Whether a volcano is truly extinct is often difficult to determine. Since "supervolcano" calderas can have eruptive lifespans sometimes measured in millions of years, a caldera that has not produced an eruption in tens of thousands of years may be considered dormant instead of extinct. An individual volcano in a monogenetic volcanic field can be extinct, but that does not mean a completely new volcano could not erupt close by with little or no warning, as the field may still have an active magma supply. Volcanic-alert level The three popular classifications of volcanoes can be subjective, and some volcanoes thought to have been extinct have erupted again. To help prevent people from falsely believing they are not at risk when living on or near a volcano, countries have adopted new classifications to describe the various levels and stages of volcanic activity. Some alert systems use different numbers or colours to designate the different stages. Other systems use colours and words. Some systems use a combination of both. Decade volcanoes The Decade Volcanoes are 16 volcanoes identified by the International Association of Volcanology and Chemistry of the Earth's Interior (IAVCEI) as being worthy of particular study in light of their history of large, destructive eruptions and proximity to populated areas. They are named Decade Volcanoes because the project was initiated as part of the United Nations-sponsored International Decade for Natural Disaster Reduction (the 1990s). 
The 16 current Decade Volcanoes are:
Avachinsky-Koryaksky (grouped together), Kamchatka, Russia
Nevado de Colima, Jalisco and Colima, Mexico
Mount Etna, Sicily, Italy
Galeras, Nariño, Colombia
Mauna Loa, Hawaii, US
Mount Merapi, Central Java, Indonesia
Mount Nyiragongo, Democratic Republic of the Congo
Mount Rainier, Washington, US
Sakurajima, Kagoshima Prefecture, Japan
Santa Maria/Santiaguito, Guatemala
Santorini, Cyclades, Greece
Taal Volcano, Luzon, Philippines
Teide, Canary Islands, Spain
Ulawun, New Britain, Papua New Guinea
Mount Unzen, Nagasaki Prefecture, Japan
Vesuvius, Naples, Italy
The Deep Earth Carbon Degassing Project, an initiative of the Deep Carbon Observatory, monitors nine volcanoes, two of which are Decade volcanoes. The focus of the Deep Earth Carbon Degassing Project is to use Multi-Component Gas Analyzer System instruments to measure CO2/SO2 ratios in real-time and in high-resolution to allow detection of the pre-eruptive degassing of rising magmas, improving prediction of volcanic activity. Volcanoes and humans Volcanic eruptions pose a significant threat to human civilization. However, volcanic activity has also provided humans with important resources. Hazards There are many different types of volcanic eruptions and associated activity: phreatic eruptions (steam-generated eruptions), explosive eruptions of high-silica lava (e.g., rhyolite), effusive eruptions of low-silica lava (e.g., basalt), sector collapses, pyroclastic flows, lahars (debris flows) and volcanic gas emissions. These can pose a hazard to humans. Earthquakes, hot springs, fumaroles, mud pots and geysers often accompany volcanic activity. Volcanic gases can reach the stratosphere, where they form sulfuric acid aerosols that can reflect solar radiation and lower surface temperatures significantly. Sulfur dioxide from the eruption of Huaynaputina may have caused the Russian famine of 1601–1603. Chemical reactions of sulfate aerosols in the stratosphere can also damage the ozone layer, and acids such as hydrogen chloride (HCl) and hydrogen fluoride (HF) can fall to the ground as acid rain. Excessive fluoride salts from eruptions have poisoned livestock in Iceland on multiple occasions. Explosive volcanic eruptions release the greenhouse gas carbon dioxide and thus provide a deep source of carbon for biogeochemical cycles. Ash thrown into the air by eruptions can present a hazard to aircraft, especially jet aircraft where the particles can be melted by the high operating temperature; the melted particles then adhere to the turbine blades and alter their shape, disrupting the operation of the turbine. This can cause major disruptions to air travel. A volcanic winter is thought to have taken place around 70,000 years ago after the supereruption of Lake Toba on Sumatra island in Indonesia. This may have created a population bottleneck that affected the genetic inheritance of all humans today. Volcanic eruptions may have contributed to major extinction events, such as the End-Ordovician, Permian-Triassic, and Late Devonian mass extinctions. The 1815 eruption of Mount Tambora created global climate anomalies that became known as the "Year Without a Summer" because of the effect on North American and European weather. The freezing winter of 1740–41, which led to widespread famine in northern Europe, may also owe its origins to a volcanic eruption. Benefits Although volcanic eruptions pose considerable hazards to humans, past volcanic activity has created important economic resources. 
Tuff formed from volcanic ash is a relatively soft rock, and it has been used for construction since ancient times. The Romans often used tuff, which is abundant in Italy, for construction. The Rapa Nui people used tuff to make most of the moai statues on Easter Island. Volcanic ash and weathered basalt produce some of the most fertile soil in the world, rich in nutrients such as iron, magnesium, potassium, calcium, and phosphorus. Volcanic activity is responsible for emplacing valuable mineral resources, such as metal ores. It is accompanied by high rates of heat flow from Earth's interior, which can be tapped as geothermal power. Tourism associated with volcanoes is also a worldwide industry. Safety considerations Many volcanoes near human settlements are heavily monitored with the aim of providing adequate advance warning of imminent eruptions to nearby populations. A better modern understanding of volcanology has also led to better-informed governmental and public responses to unanticipated volcanic activity. While the science of volcanology cannot yet predict the exact times and dates of eruptions far into the future, on suitably monitored volcanoes the tracking of ongoing volcanic indicators can often predict imminent eruptions with warnings of at least hours, and usually days, before an eruption. The diversity of volcanoes and their complexities mean that eruption forecasts for the foreseeable future will be based on probability and the application of risk management. Even then, some eruptions will have no useful warning. An example occurred in March 2017, when flowing lava from a Mount Etna eruption that had been presumed predictable came into contact with accumulated snow, causing a phreatic explosion that injured ten people in a group of tourists witnessing the eruption. Other types of significant eruption are known to give useful warnings of only a few hours at most, even with seismic monitoring. The recent demonstration, under the youngest volcano in central Europe, of a magma chamber with repose times of tens of thousands of years but the potential for rapid recharge (and therefore potentially shortened warning times) leaves open whether more careful monitoring will be useful. Scientists are known to perceive risk, with its social elements, differently from local populations and from those who undertake social risk assessments on their behalf, so both disruptive false alarms and retrospective blame after disasters will continue to occur. Thus, in many cases, while volcanic eruptions may still cause major property destruction, the periodic large-scale loss of human life that was once associated with many volcanic eruptions has recently been significantly reduced in areas where volcanoes are adequately monitored. This life-saving ability derives from such volcanic-activity monitoring programs, from the greater ability of local officials to order timely evacuations based on the modern knowledge of volcanism that is now available, and from improved communications technologies such as cell phones. Such operations tend to provide enough time for people to escape with their lives before an impending eruption. One example of such a recent successful volcanic evacuation was the Mount Pinatubo evacuation of 1991. This evacuation is believed to have saved 20,000 lives. In the case of Mount Etna, a 2021 review found 77 deaths due to eruptions since 1536 but none since 1987. 
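The gas monitoring mentioned above in connection with the Deep Earth Carbon Degassing Project can be illustrated with a toy sketch: a sustained rise in the CO2/SO2 ratio of emitted gases is one indicator volcanologists watch as a possible sign of fresh magma degassing at depth. Everything below (the readings, the alert rule, and the threshold) is invented for illustration and is not an operational procedure.

    def co2_so2_ratios(readings):
        """readings: list of (co2_flux, so2_flux) pairs in arbitrary units."""
        return [co2 / so2 for co2, so2 in readings if so2 > 0]

    def ratio_is_rising(ratios, window=3, factor=1.5):
        """Flag a series if the mean of the last `window` ratios exceeds the
        earlier baseline mean by `factor` (a purely hypothetical alert rule)."""
        if len(ratios) <= window:
            return False
        baseline = sum(ratios[:-window]) / len(ratios[:-window])
        recent = sum(ratios[-window:]) / window
        return recent > factor * baseline

    readings = [(20, 10), (22, 10), (21, 10), (35, 10), (40, 10), (45, 10)]
    print(ratio_is_rising(co2_so2_ratios(readings)))  # True for this invented series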
Citizens who may be concerned about their own exposure to risk from nearby volcanic activity should familiarize themselves with the types of, and quality of, volcano monitoring and public notification procedures being employed by governmental authorities in their areas. Volcanoes on other celestial bodies Earth's Moon has no large volcanoes and no current volcanic activity, although recent evidence suggests it may still possess a partially molten core. However, the Moon does have many volcanic features such as maria (the darker patches seen on the Moon), rilles and domes. The planet Venus has a surface that is 90% basalt, indicating that volcanism played a major role in shaping its surface. The planet may have had a major global resurfacing event about 500 million years ago, from what scientists can tell from the density of impact craters on the surface. Lava flows are widespread and forms of volcanism not present on Earth occur as well. Changes in the planet's atmosphere and observations of lightning have been attributed to ongoing volcanic eruptions, although there is no confirmation of whether or not Venus is still volcanically active. However, radar sounding by the Magellan probe revealed evidence for comparatively recent volcanic activity at Venus's highest volcano Maat Mons, in the form of ash flows near the summit and on the northern flank. However, the interpretation of the flows as ash flows has been questioned. There are several extinct volcanoes on Mars, four of which are vast shield volcanoes far bigger than any on Earth. They include Arsia Mons, Ascraeus Mons, Hecates Tholus, Olympus Mons, and Pavonis Mons. These volcanoes have been extinct for many millions of years, but the European Mars Express spacecraft has found evidence that volcanic activity may have occurred on Mars in the recent past as well. Jupiter's moon Io is the most volcanically active object in the Solar System because of tidal interaction with Jupiter. It is covered with volcanoes that erupt sulfur, sulfur dioxide and silicate rock, and as a result, Io is constantly being resurfaced. Its lavas are the hottest known anywhere in the Solar System, with temperatures exceeding 1,800 K (1,500 °C). In February 2001, the largest recorded volcanic eruptions in the Solar System occurred on Io. Europa, the smallest of Jupiter's Galilean moons, also appears to have an active volcanic system, except that its volcanic activity is entirely in the form of water, which freezes into ice on the frigid surface. This process is known as cryovolcanism, and is apparently most common on the moons of the outer planets of the Solar System. In 1989, the Voyager 2 spacecraft observed cryovolcanoes (ice volcanoes) on Triton, a moon of Neptune, and in 2005 the Cassini–Huygens probe photographed fountains of frozen particles erupting from Enceladus, a moon of Saturn. The ejecta may be composed of water, liquid nitrogen, ammonia, dust, or methane compounds. Cassini–Huygens also found evidence of a methane-spewing cryovolcano on the Saturnian moon Titan, which is believed to be a significant source of the methane found in its atmosphere. It is theorized that cryovolcanism may also be present on the Kuiper Belt Object Quaoar. A 2010 study of the exoplanet COROT-7b, which was detected by transit in 2009, suggested that tidal heating from the host star very close to the planet and neighbouring planets could generate intense volcanic activity similar to that found on Io. 
History of volcano understanding Volcanoes are not distributed evenly over the Earth's surface, but active ones with significant impact were encountered early in human history, as evidenced by hominin footprints found in East African volcanic ash dated at 3.66 million years old. The association of volcanoes with fire and disaster is found in many oral traditions and had religious and thus social significance before the first written record of concepts related to volcanoes. Examples are: (1) the stories in the Athabascan subcultures about humans living inside mountains and a woman who uses fire to escape from a mountain, (2) Pele's migration through the Hawaiian island chain, her ability to destroy forests, and manifestations of the goddess's temper, and (3) the association in Javanese folklore of a king resident in the Mount Merapi volcano and a queen resident at a distant beach, on what is now known to be an earthquake fault that interacts with that volcano. Many ancient accounts ascribe volcanic eruptions to supernatural causes, such as the actions of gods or demigods. The earliest known such example is a Neolithic goddess at Çatalhöyük. The Ancient Greek god Hephaistos and concepts of the underworld were associated with volcanoes in Greek culture. However, others proposed more natural (but still incorrect) causes of volcanic activity. In the fifth century BC, Anaxagoras proposed that eruptions were caused by a great wind. By 65 CE, Seneca the Younger proposed combustion as the cause, an idea also adopted by the Jesuit Athanasius Kircher (1602–1680), who witnessed eruptions of Mount Etna and Stromboli, then visited the crater of Vesuvius and, in Mundus Subterraneus, published his view of an Earth with a central fire connected to numerous others, depicting volcanoes as a type of safety valve. Edward Jorden, in his work on mineral waters, challenged this view; in 1632 he proposed sulfur "fermentation" as a heat source within Earth. The astronomer Johannes Kepler (1571–1630) believed volcanoes were ducts for Earth's tears. In 1650, René Descartes proposed that the core of Earth was incandescent and, by 1785, the works of Descartes and others were synthesized into geology by James Hutton in his writings about igneous intrusions of magma. Lazzaro Spallanzani had demonstrated by 1794 that steam explosions could cause explosive eruptions, and many geologists held this as the universal cause of explosive eruptions until the 1886 eruption of Mount Tarawera, which allowed, in a single event, the differentiation of concurrent phreatomagmatic and hydrothermal eruptions from the dry explosive eruption of what turned out to be a basalt dyke. Alfred Lacroix built upon this knowledge with his studies of the 1902 eruption of Mount Pelée, and by 1928 Arthur Holmes's work had brought together the concepts of radioactive generation of heat, Earth's mantle structure, partial decompression melting of magma, and magma convection. This eventually led to the acceptance of plate tectonics.
Physical sciences
Earth science
null
32572
https://en.wikipedia.org/wiki/Vesicle%20%28biology%20and%20chemistry%29
Vesicle (biology and chemistry)
In cell biology, a vesicle is a structure within or outside a cell, consisting of liquid or cytoplasm enclosed by a lipid bilayer. Vesicles form naturally during the processes of secretion (exocytosis), uptake (endocytosis), and the transport of materials within the plasma membrane. Alternatively, they may be prepared artificially, in which case they are called liposomes (not to be confused with lysosomes). If there is only one phospholipid bilayer, the vesicles are called unilamellar liposomes; otherwise they are called multilamellar liposomes. The membrane enclosing the vesicle is also a lamellar phase, similar to that of the plasma membrane, and intracellular vesicles can fuse with the plasma membrane to release their contents outside the cell. Vesicles can also fuse with other organelles within the cell. A vesicle released from the cell is known as an extracellular vesicle. Vesicles perform a variety of functions. Because it is separated from the cytosol, the inside of the vesicle can be made to be different from the cytosolic environment. For this reason, vesicles are a basic tool used by the cell for organizing cellular substances. Vesicles are involved in metabolism, transport, buoyancy control, and temporary storage of food and enzymes. They can also act as chemical reaction chambers. The 2013 Nobel Prize in Physiology or Medicine was shared by James Rothman, Randy Schekman and Thomas Südhof for their roles in elucidating (building upon earlier research, some of it by their mentors) the makeup and function of cell vesicles, especially in yeasts and in humans, including information on each vesicle's parts and how they are assembled. Vesicle dysfunction is thought to contribute to Alzheimer's disease, diabetes, some hard-to-treat cases of epilepsy, some cancers and immunological disorders and certain neurovascular conditions. Types of vesicular structures Vacuoles Vacuoles are cellular organelles that contain mostly water. Plant cells have a large central vacuole in the center of the cell that is used for osmotic control and nutrient storage. Contractile vacuoles are found in certain protists, especially those in Phylum Ciliophora. These vacuoles take water from the cytoplasm and excrete it from the cell to avoid bursting due to osmotic pressure. Lysosomes Lysosomes are involved in cellular digestion. Food can be taken from outside the cell into food vacuoles by a process called endocytosis. These food vacuoles fuse with lysosomes which break down the components so that they can be used in the cell. This form of cellular eating is called phagocytosis. Lysosomes are also used to destroy defective or damaged organelles in a process called autophagy. They fuse with the membrane of the damaged organelle, digesting it. Transport vesicles Transport vesicles can move molecules between locations inside the cell, e.g., proteins from the rough endoplasmic reticulum to the Golgi apparatus. Membrane-bound and secreted proteins are made on ribosomes found in the rough endoplasmic reticulum. Most of these proteins mature in the Golgi apparatus before going to their final destination which may be to lysosomes, peroxisomes, or outside of the cell. These proteins travel within the cell inside transport vesicles. Secretory vesicles Secretory vesicles contain materials that are to be excreted from the cell. Cells have many reasons to excrete materials. One reason is to dispose of wastes. Another reason is tied to the function of the cell. 
Within a larger organism, some cells are specialized to produce certain chemicals. These chemicals are stored in secretory vesicles and released when needed. Types Synaptic vesicles are located at presynaptic terminals in neurons and store neurotransmitters. When a signal comes down an axon, the synaptic vesicles fuse with the cell membrane, releasing the neurotransmitter so that it can be detected by receptor molecules on the next nerve cell. In animals, endocrine tissues release hormones into the bloodstream. These hormones are stored within secretory vesicles. A good example is the endocrine tissue found in the islets of Langerhans in the pancreas. This tissue contains many cell types that are defined by which hormones they produce. Secretory vesicles hold the enzymes that are used to make the cell walls of plants, protists, fungi, bacteria and archaea cells as well as the extracellular matrix of animal cells. Bacteria, archaea, fungi and parasites release membrane vesicles (MVs) containing varied but specialized toxic compounds and biochemical signal molecules, which are transported to target cells to initiate processes in favour of the microbe, which include invasion of host cells and killing of competing microbes in the same niche. Extracellular vesicles Extracellular vesicles (EVs) are lipid bilayer-delimited particles produced by all domains of life, including complex eukaryotes, both Gram-negative and Gram-positive bacteria, mycobacteria, and fungi. Types Ectosomes/microvesicles are shed directly from the plasma membrane and can range in size from around 30 nm to larger than a micron in diameter. These may include large particles such as apoptotic blebs released by dying cells, large oncosomes released by some cancer cells, or "exophers," released by nematode neurons and mouse cardiomyocytes. Exosomes: membranous vesicles of endocytic origin (30–100 nm in diameter). Different types of EVs may be separated based on density (by gradient differential centrifugation), size, or surface markers. However, EV subtypes have overlapping size and density ranges, and subtype-unique markers must be established on a cell-by-cell basis. Therefore, it is difficult to pinpoint the biogenesis pathway that gave rise to a particular EV after it has left the cell. In humans, endogenous extracellular vesicles likely play a role in coagulation, intercellular signaling and waste management. They are also implicated in the pathophysiological processes involved in multiple diseases, including cancer. Extracellular vesicles have raised interest as a potential source of biomarker discovery because of their role in intercellular communication, release into easily accessible body fluids and the resemblance of their molecular content to that of the releasing cells. The extracellular vesicles of (mesenchymal) stem cells, also known as the secretome of stem cells, are being researched and applied for therapeutic purposes, predominantly for degenerative, auto-immune and/or inflammatory diseases. In Gram-negative bacteria, EVs are produced by the pinching off of the outer membrane; however, how EVs escape the thick cell walls of Gram-positive bacteria, mycobacteria and fungi is still unknown. These EVs contain varied cargo, including nucleic acids, toxins, lipoproteins and enzymes, and have important roles in microbial physiology and pathogenesis. 
In host–pathogen interactions, Gram-negative bacteria produce vesicles which play roles in establishing a colonization niche, carrying and transmitting virulence factors into host cells and modulating host defense and response. Ocean cyanobacteria have been found to continuously release vesicles containing proteins, DNA and RNA into the open ocean. Vesicles carrying DNA from diverse bacteria are abundant in coastal and open-ocean seawater samples. Protocells The RNA world hypothesis assumes that the first self-replicating genomes were strands of RNA. This hypothesis contains the idea that RNA strands formed ribozymes (folded RNA molecules) capable of catalyzing RNA replication. These primordial biological catalysts were considered to be contained within vesicles (protocells) with membranes composed of fatty acids and related amphiphiles. Template-directed RNA synthesis by the copying of RNA templates inside fatty acid vesicles has been demonstrated by Adamala and Szostak. Other types Gas vesicles are used by archaea, bacteria and planktonic microorganisms, possibly to control vertical migration by regulating the gas content and thereby buoyancy, or possibly to position the cell for maximum solar light harvesting. These vesicles are typically lemon-shaped or cylindrical tubes made out of protein; their diameter determines the strength of the vesicle, with larger ones being weaker. The diameter of the vesicle also affects its volume and how efficiently it can provide buoyancy. In cyanobacteria, natural selection has worked to create vesicles that are at the maximum diameter possible while still being structurally stable. The protein skin is permeable to gases but not water, keeping the vesicles from flooding. Matrix vesicles are located within the extracellular space, or matrix. Using electron microscopy, they were discovered independently in 1967 by H. Clarke Anderson and Ermanno Bonucci. These cell-derived vesicles are specialized to initiate biomineralisation of the matrix in a variety of tissues, including bone, cartilage and dentin. During normal calcification, a major influx of calcium and phosphate ions into the cells accompanies cellular apoptosis (genetically determined self-destruction) and matrix vesicle formation. Calcium-loading also leads to formation of phosphatidylserine:calcium:phosphate complexes in the plasma membrane, mediated in part by proteins called annexins. Matrix vesicles bud from the plasma membrane at sites of interaction with the extracellular matrix. Thus, matrix vesicles convey to the extracellular matrix calcium, phosphate, lipids and the annexins, which act to nucleate mineral formation. These processes are precisely coordinated to bring about, at the proper place and time, mineralization of the tissue's matrix. A multivesicular body, or MVB, is a membrane-bound vesicle containing a number of smaller vesicles. Formation and transport Some vesicles are made when part of the membrane pinches off the endoplasmic reticulum or the Golgi complex. Others are made when an object outside of the cell is surrounded by the cell membrane. Vesicle coat and cargo molecules The vesicle "coat" is a collection of proteins that serve to shape the curvature of a donor membrane, forming the rounded vesicle shape. Coat proteins can also function to bind to various transmembrane receptor proteins, called cargo receptors. These receptors help select what material is endocytosed in receptor-mediated endocytosis or intracellular transport. 
There are three types of vesicle coats: clathrin, COPI and COPII. The various types of coat proteins help with sorting of vesicles to their final destination. Clathrin coats are found on vesicles trafficking between the Golgi and plasma membrane, the Golgi and endosomes, and the plasma membrane and endosomes. COPI coated vesicles are responsible for retrograde transport from the Golgi to the ER, while COPII coated vesicles are responsible for anterograde transport from the ER to the Golgi. The clathrin coat is thought to assemble in response to a regulatory G protein. A protein coat assembles and disassembles due to an ADP ribosylation factor (ARF) protein. Vesicle docking Surface proteins called SNAREs identify the vesicle's cargo, and complementary SNAREs on the target membrane act to cause fusion of the vesicle and target membrane. Such v-SNAREs are hypothesised to exist on the vesicle membrane, while the complementary ones on the target membrane are known as t-SNAREs. Often SNAREs associated with vesicles or target membranes are instead classified as Qa, Qb, Qc, or R SNAREs owing to further variation than simply v- or t-SNAREs. An array of different SNARE complexes can be seen in different tissues and subcellular compartments, with 38 isoforms currently identified in humans. Regulatory Rab proteins are thought to inspect the joining of the SNAREs. Rab protein is a regulatory GTP-binding protein and controls the binding of these complementary SNAREs for a long enough time for the Rab protein to hydrolyse its bound GTP and lock the vesicle onto the membrane. SNARE proteins in plants are understudied compared to those in fungi and animals. The cell botanist Natasha Raikhel has done some of the basic research in this area, including Zheng et al. 1999, in which she and her team found AtVTI1a to be essential to Golgi⇄vacuole transport. Vesicle fusion Vesicle fusion can occur in one of two ways: full fusion or kiss-and-run fusion. Fusion requires the two membranes to be brought within 1.5 nm of each other. For this to occur, water must be displaced from the surface of the vesicle membrane. This is energetically unfavorable, and evidence suggests that the process requires ATP, GTP and acetyl-CoA. Fusion is also linked to budding, which is why the term budding and fusing arises. In receptor downregulation Membrane proteins serving as receptors are sometimes tagged for downregulation by the attachment of ubiquitin. After arriving at an endosome via the pathway described above, vesicles begin to form inside the endosome, taking with them the membrane proteins meant for degradation; when the endosome either matures to become a lysosome or is united with one, the vesicles are completely degraded. Without this mechanism, only the extracellular part of the membrane proteins would reach the lumen of the lysosome and only this part would be degraded. It is because of these vesicles that the endosome is sometimes known as a multivesicular body. The pathway to their formation is not completely understood; unlike the other vesicles described above, the outer surface of the vesicles is not in contact with the cytosol. Preparation Isolated vesicles Producing membrane vesicles is one of the methods to investigate various membranes of the cell. After the living tissue is crushed into suspension, various membranes form tiny closed bubbles. Big fragments of the crushed cells can be discarded by low-speed centrifugation and later the fraction of the known origin (plasmalemma, tonoplast, etc.) 
can be isolated by precise high-speed centrifugation in a density gradient. Using osmotic shock, it is possible to temporarily open vesicles (filling them with the required solution), then centrifuge them down again and resuspend them in a different solution. Applying ionophores like valinomycin can create electrochemical gradients comparable to the gradients inside living cells. Vesicles are mainly used in two types of research:
To find and later isolate membrane receptors that specifically bind hormones and various other important substances.
To investigate transport of various ions or other substances across the membrane of the given type.
While transport can be more easily investigated with patch clamp techniques, vesicles can also be isolated from objects for which a patch clamp is not applicable. Artificial vesicles Artificial vesicles are classified into three groups based on their size: small unilamellar liposomes/vesicles (SUVs) with a size range of 20–100 nm, large unilamellar liposomes/vesicles (LUVs) with a size range of 100–1000 nm and giant unilamellar liposomes/vesicles (GUVs) with a size range of 1–200 μm. Smaller vesicles in the same size range as trafficking vesicles found in living cells are frequently used in biochemistry and related fields. For such studies, a homogeneous phospholipid vesicle suspension can be prepared by extrusion or sonication, or by rapid injection of a phospholipid solution into an aqueous buffer solution. In this way, aqueous vesicle solutions of different phospholipid composition, as well as different vesicle sizes, can be prepared. Larger synthetically made vesicles such as GUVs are used for in vitro studies in cell biology in order to mimic cell membranes. These vesicles are large enough to be studied using traditional fluorescence light microscopy. A variety of methods exist to encapsulate biological reactants like protein solutions within such vesicles, making GUVs an ideal system for the in vitro recreation (and investigation) of cell functions in cell-like model membrane environments. These methods include microfluidic methods, which allow for a high-yield production of vesicles with consistent sizes.
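The three size classes given above translate directly into a simple lookup. The Python sketch below is illustrative only; the function name is made up, real preparations are polydisperse, and the boundaries quoted in the literature vary somewhat.

    def classify_artificial_vesicle(diameter_nm):
        """Classify a unilamellar vesicle by diameter, using the ranges quoted above."""
        if diameter_nm < 20:
            return "below the SUV range given here"
        if diameter_nm <= 100:
            return "SUV (small unilamellar vesicle)"
        if diameter_nm <= 1000:
            return "LUV (large unilamellar vesicle)"
        if diameter_nm <= 200_000:          # 1-200 micrometres expressed in nm
            return "GUV (giant unilamellar vesicle)"
        return "above the GUV range given here"

    print(classify_artificial_vesicle(80))      # SUV
    print(classify_artificial_vesicle(400))     # LUV
    print(classify_artificial_vesicle(20_000))  # GUV (20 micrometres)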
Biology and health sciences
Organelles
Biology
32612
https://en.wikipedia.org/wiki/Virtual%20reality
Virtual reality
Virtual reality (VR) is a simulated experience that employs 3D near-eye displays and pose tracking to give the user an immersive feel of a virtual world. Applications of virtual reality include entertainment (particularly video games), education (such as medical, safety or military training) and business (such as virtual meetings). VR is one of the key technologies in the reality-virtuality continuum. As such, it is different from other digital visualization solutions, such as augmented virtuality and augmented reality. Currently, standard virtual reality systems use either virtual reality headsets or multi-projected environments to generate some realistic images, sounds and other sensations that simulate a user's physical presence in a virtual environment. A person using virtual reality equipment is able to look around the artificial world, move around in it, and interact with virtual features or items. The effect is commonly created by VR headsets consisting of a head-mounted display with a small screen in front of the eyes, but can also be created through specially designed rooms with multiple large screens. Virtual reality typically incorporates auditory and video feedback, but may also allow other types of sensory and force feedback through haptic technology. Etymology "Virtual" has had the meaning of "being something in essence or effect, though not actually or in fact" since the mid-1400s. The term "virtual" has been used in the computer sense of "not physically existing but made to appear by software" since 1959. In 1938, French avant-garde playwright Antonin Artaud described the illusory nature of characters and objects in the theatre as "la réalité virtuelle" in a collection of essays, Le Théâtre et son double. The English translation of this book, published in 1958 as The Theater and its Double, is the earliest published use of the term "virtual reality". The term "artificial reality", coined by Myron Krueger, has been in use since the 1970s. The term "virtual reality" was first used in a science fiction context in The Judas Mandala, a 1982 novel by Damien Broderick. Widespread adoption of the term "virtual reality" in the popular media is attributed to Jaron Lanier, who in the late 1980s designed some of the first business-grade virtual reality hardware under his firm VPL Research, and the 1992 film Lawnmower Man, which features use of virtual reality systems. Forms and methods One method of realizing virtual reality is through simulation-based virtual reality. For example, driving simulators give the driver the impression of actually driving a vehicle by predicting vehicular motion based on the driver's input and providing corresponding visual, motion, and audio cues. With avatar image-based virtual reality, people can join the virtual environment in the form of real video as well as an avatar. One can participate in the 3D distributed virtual environment in the form of either a conventional avatar or a real video. Users can select their own type of participation based on the system capability. In projector-based virtual reality, modeling of the real environment plays a vital role in various virtual reality applications, including robot navigation, construction modeling, and airplane simulation. Image-based virtual reality systems have been gaining popularity in computer graphics and computer vision communities. In generating realistic models, it is essential to accurately register acquired 3D data; usually, a camera is used for modeling small objects at a short distance. 
Desktop-based virtual reality involves displaying a 3D virtual world on a regular desktop display without use of any specialized VR positional tracking equipment. Many modern first-person video games can be used as an example, using various triggers, responsive characters, and other such interactive devices to make the user feel as though they are in a virtual world. A common criticism of this form of immersion is that there is no sense of peripheral vision, limiting the user's ability to know what is happening around them. A head-mounted display (HMD) more fully immerses the user in a virtual world. A virtual reality headset typically includes two small high-resolution OLED or LCD displays providing separate images for each eye for stereoscopic rendering of a 3D virtual world, a binaural audio system, and real-time positional and rotational head tracking for six degrees of freedom. Options include motion controllers with haptic feedback for physically interacting within the virtual world in an intuitive way with little to no abstraction, and an omnidirectional treadmill for more freedom of physical movement, allowing the user to perform locomotive motion in any direction. Augmented reality (AR) is a type of virtual reality technology that blends what the user sees in their real surroundings with digital content generated by computer software. The additional software-generated images of the virtual scene typically enhance how the real surroundings look in some way. AR systems layer virtual information over a live camera feed into a headset or smartglasses, or through a mobile device, giving the user the ability to view three-dimensional images. Mixed reality (MR) is the merging of the real world and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real time. A cyberspace is sometimes defined as a networked virtual reality. Simulated reality is a hypothetical virtual reality as truly immersive as the actual reality, enabling an advanced lifelike experience or even virtual eternity. History The development of perspective in Renaissance European art and the stereoscope invented by Sir Charles Wheatstone were both precursors to virtual reality. The first references to the more modern-day concept of virtual reality came from science fiction. 20th century Morton Heilig wrote in the 1950s of an "Experience Theatre" that could encompass all the senses in an effective manner, thus drawing the viewer into the onscreen activity. He built a prototype of his vision dubbed the Sensorama in 1962, along with five short films to be displayed in it while engaging multiple senses (sight, sound, smell, and touch). Predating digital computing, the Sensorama was a mechanical device. Heilig also developed what he referred to as the "Telesphere Mask" (patented in 1960). The patent application described the device as "a telescopic television apparatus for individual use... The spectator is given a complete sensation of reality, i.e. moving three dimensional images which may be in colour, with 100% peripheral vision, binaural sound, scents and air breezes." In 1968, Harvard Professor Ivan Sutherland, with the help of his students including Bob Sproull, created what was widely considered to be the first head-mounted display system for use in immersive simulation applications, called The Sword of Damocles. 
It was primitive both in terms of user interface and visual realism, and the HMD to be worn by the user was so heavy that it had to be suspended from the ceiling, which gave the device a formidable appearance and inspired its name. Technically, the device was an augmented reality device due to optical passthrough. The graphics comprising the virtual environment were simple wire-frame model rooms. 1970–1990 The virtual reality industry mainly provided VR devices for medical, flight simulation, automobile industry design, and military training purposes from 1970 to 1990. David Em became the first artist to produce navigable virtual worlds at NASA's Jet Propulsion Laboratory (JPL) from 1977 to 1984. The Aspen Movie Map, a crude virtual tour in which users could wander the streets of Aspen in one of three modes (summer, winter, and polygons), was created at MIT in 1978. In 1979, Eric Howlett developed the Large Expanse, Extra Perspective (LEEP) optical system. The combined system created a stereoscopic image with a field-of-view wide enough to create a convincing sense of space. Users of the system were impressed by the sensation of depth (field of view) in the scene and the corresponding realism. The original LEEP system was redesigned for NASA's Ames Research Center in 1985 for their first virtual reality installation, the VIEW (Virtual Interactive Environment Workstation) by Scott Fisher. The LEEP system provides the basis for most of the modern virtual reality headsets. By the late 1980s, the term "virtual reality" was popularized by Jaron Lanier, one of the modern pioneers of the field. Lanier had founded the company VPL Research in 1984. VPL Research developed several VR devices, such as the DataGlove, the EyePhone, the Reality Built For Two (RB2), and the AudioSphere. VPL licensed the DataGlove technology to Mattel, which used it to make the Power Glove, an early affordable VR device, released in 1989. That same year Broderbund's U-Force was released. Atari, Inc. founded a research lab for virtual reality in 1982, but the lab was closed after two years due to the video game crash of 1983. However, its former employees, such as Scott Fisher, Michael Naimark, and Brenda Laurel, continued their research and development on VR-related technologies. In 1988, the Cyberspace Project at Autodesk was the first to implement VR on a low-cost personal computer. The project leader Eric Gullichsen left in 1990 to found Sense8 Corporation and develop the WorldToolKit virtual reality SDK, which offered the first real-time graphics with texture mapping on a PC and was widely used throughout industry and academia. 1990–2000 The 1990s saw the first widespread commercial releases of consumer headsets. In 1992, for instance, Computer Gaming World predicted "affordable VR by 1994". In 1991, Sega announced the Sega VR headset for the Mega Drive home console. It used LCD screens in the visor, stereo headphones, and inertial sensors that allowed the system to track and react to the movements of the user's head. In the same year, Virtuality launched and went on to become the first mass-produced, networked, multiplayer VR entertainment system that was released in many countries, including a dedicated VR arcade at Embarcadero Center. Costing up to $73,000 per multi-pod Virtuality system, they featured headsets and exoskeleton gloves that gave one of the first "immersive" VR experiences. That same year, Carolina Cruz-Neira, Daniel J. Sandin and Thomas A. 
DeFanti from the Electronic Visualization Laboratory created the first cubic immersive room, the Cave automatic virtual environment (CAVE). Developed as Cruz-Neira's PhD thesis, it involved a multi-projected environment, similar to the holodeck, allowing people to see their own bodies in relation to others in the room. Antonio Medina, an MIT graduate and NASA scientist, designed a virtual reality system to "drive" Mars rovers from Earth in apparent real time despite the substantial delay of Mars-Earth-Mars signals. In 1992, Nicole Stenger created Angels, the first real-time interactive immersive movie where the interaction was facilitated with a dataglove and high-resolution goggles. That same year, Louis Rosenberg created the virtual fixtures system at the U.S. Air Force's Armstrong Labs using a full upper-body exoskeleton, enabling a physically realistic mixed reality in 3D. The system enabled the overlay of physically real 3D virtual objects registered with a user's direct view of the real world, producing the first true augmented reality experience enabling sight, sound, and touch. By July 1994, Sega had released the VR-1 motion simulator ride attraction in Joypolis indoor theme parks, as well as the Dennou Senki Net Merc arcade game. Both used an advanced head-mounted display dubbed the "Mega Visor Display" developed in conjunction with Virtuality; it was able to track head movement in a 360-degree stereoscopic 3D environment, and in its Net Merc incarnation was powered by the Sega Model 1 arcade system board. Apple released QuickTime VR, which, despite using the term "VR", was unable to represent virtual reality, and instead displayed 360-degree interactive panoramas. Nintendo's Virtual Boy console was released in 1995. A group in Seattle created public demonstrations of a "CAVE-like" 270 degree immersive projection room called the Virtual Environment Theater, produced by entrepreneurs Chet Dagit and Bob Jacobson. Forte released the VFX1, a PC-powered virtual reality headset that same year. In 1999, entrepreneur Philip Rosedale formed Linden Lab with an initial focus on the development of VR hardware. In its earliest form, the company struggled to produce a commercial version of "The Rig", which was realized in prototype form as a clunky steel contraption with several computer monitors that users could wear on their shoulders. The concept was later adapted into the personal computer-based, 3D virtual world program Second Life. 21st century The 2000s were a period of relative public and investment indifference to commercially available VR technologies. In 2001, SAS Cube (SAS3) became the first PC-based cubic room, developed by Z-A Production (Maurice Benayoun, David Nahon), Barco, and Clarté. It was installed in Laval, France. The SAS library gave birth to Virtools VRPack. In 2007, Google introduced Street View, a service that shows panoramic views of an increasing number of worldwide positions such as roads, indoor buildings and rural areas. It also features a stereoscopic 3D mode, introduced in 2010. 2010–present In 2010, Palmer Luckey designed the first prototype of the Oculus Rift. This prototype, built on a shell of another virtual reality headset, was only capable of rotational tracking. However, it boasted a 90-degree field of vision that was previously unseen in the consumer market. Luckey eliminated distortion issues arising from the type of lens used to create the wide field of vision using software that pre-distorted the rendered image in real-time. 
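The pre-distortion step just described can be sketched as a radial polynomial warp applied to each eye's image before display. The sketch below shows the general technique only; it is not the Rift's actual shader, and the coefficients k1 and k2 are invented for illustration rather than taken from any real headset calibration.

    def distortion_sample_coord(u, v, k1=0.22, k2=0.24):
        """Map a normalized screen coordinate (centred at 0,0) to the coordinate
        in the rendered image that should be sampled for it. Scaling the sample
        radius outward compresses the displayed image toward the centre (a barrel
        warp), which the pincushion distortion of a wide-field lens then
        approximately cancels."""
        r2 = u * u + v * v
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        return u * scale, v * scale

    # Near the centre the mapping is almost the identity; near the edge the
    # sample point moves noticeably outward.
    print(distortion_sample_coord(0.1, 0.0))  # approximately (0.100, 0.0)
    print(distortion_sample_coord(0.7, 0.0))  # approximately (0.816, 0.0)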
This initial design served as the basis for subsequent designs. In 2012, the Rift was presented for the first time at the E3 video game trade show by John Carmack. In 2014, Facebook (later Meta) purchased Oculus VR for a price initially stated as $2 billion but later revealed to be closer to $3 billion. This purchase occurred after the first development kits ordered through Oculus' 2012 Kickstarter had shipped in 2013 but before the shipping of their second development kits in 2014. ZeniMax, Carmack's former employer, sued Oculus and Facebook for taking company secrets to Facebook; the verdict was in favour of ZeniMax, and the case was later settled out of court. In 2013, Valve discovered and freely shared the breakthrough of low-persistence displays, which make lag-free and smear-free display of VR content possible. This was adopted by Oculus and was used in all their future headsets. In early 2014, Valve showed off their SteamSight prototype, the precursor to both consumer headsets released in 2016. It shared major features with the consumer headsets, including separate 1K displays per eye, low persistence, positional tracking over a large area, and Fresnel lenses. HTC and Valve announced the virtual reality headset HTC Vive and controllers in 2015. The set included tracking technology called Lighthouse, which utilized wall-mounted "base stations" for positional tracking using infrared light. In 2014, Sony announced Project Morpheus (its code name for the PlayStation VR), a virtual reality headset for the PlayStation 4 video game console. The Chinese headset AntVR was released in late 2014; it was briefly competitive in the Chinese market but ultimately unable to compete with the larger technology companies. In 2015, Google announced Cardboard, a do-it-yourself stereoscopic viewer: the user places their smartphone in the cardboard holder, which they wear on their head. Michael Naimark was appointed Google's first-ever 'resident artist' in their new VR division. The Kickstarter campaign for Gloveone, a pair of gloves providing motion tracking and haptic feedback, was successfully funded, with over $150,000 in contributions. Also in 2015, Razer unveiled its open-source project OSVR. By 2016, there were at least 230 companies developing VR-related products. Amazon, Apple, Facebook, Google, Microsoft, Sony and Samsung all had dedicated AR and VR groups. Dynamic binaural audio was common to most headsets released that year. However, haptic interfaces were not well developed, and most hardware packages incorporated button-operated handsets for touch-based interactivity. Visually, displays were still of a low enough resolution and frame rate that images were still identifiable as virtual. In 2016, HTC shipped its first units of the HTC Vive SteamVR headset. This marked the first major commercial release of sensor-based tracking, allowing for free movement of users within a defined space. A patent filed by Sony in 2017 showed it was developing a location tracking technology similar to the Vive's for PlayStation VR, with the potential for the development of a wireless headset. In 2019, Oculus released the Oculus Rift S and a standalone headset, the Oculus Quest. These headsets utilized inside-out tracking, in contrast to the external outside-in tracking seen in previous generations of headsets. Later in 2019, Valve released the Valve Index.
Notable features include a 130° field of view, off-ear headphones for immersion and comfort, open-handed controllers that allow for individual finger tracking, front-facing cameras, and a front expansion slot meant for extensibility. In 2020, Oculus released the Oculus Quest 2, later renamed the Meta Quest 2. New features included a sharper screen, a reduced price, and increased performance. Facebook (which became Meta a year later) initially required users to log in with a Facebook account in order to use the new headset. In 2021, the Oculus Quest 2 accounted for 80% of all VR headsets sold. In 2021, EASA approved the first virtual reality-based Flight Simulation Training Device. The device, made by Loft Dynamics for rotorcraft pilots, enhances safety by opening up the possibility of practicing risky maneuvers in a virtual environment. This addresses a key risk area in rotorcraft operations, where statistics show that around 20% of accidents occur during training flights. In 2022, Meta released the Meta Quest Pro. This device utilised a thinner, visor-like design that was not fully enclosed, and was the first headset by Meta to target mixed reality applications using high-resolution colour video passthrough. It also included integrated face and eye tracking, pancake lenses, and updated Touch Pro controllers with on-board motion tracking. In 2023, Sony released the PlayStation VR2, a follow-up to its 2016 headset. The device includes inside-out tracking, eye-tracked foveated rendering, higher-resolution OLED displays, controllers with adaptive triggers and haptic feedback, 3D audio, and a wider field of view. While initially exclusive for use with the PlayStation 5 console, a PC adapter was scheduled for August 2024. Later in 2023, Meta released the Meta Quest 3, the successor to the Quest 2. It features the pancake lenses and mixed reality features of the Quest Pro, as well as an increased field of view and resolution compared to the Quest 2. In October 2024, Meta released a lower-cost entry-level headset, the Meta Quest 3S, with the same Fresnel lenses as the Quest 2 and a lower per-eye resolution of 1832×1920, compared with 2064×2208 on the Quest 3. In 2024, Apple released the Apple Vision Pro. The device is a fully enclosed mixed reality headset that relies heavily on video passthrough. While some VR experiences are available on the device, it lacks standard VR headset features such as external controllers or support for OpenXR and is instead branded as a "spatial computer". In 2024, the Federal Aviation Administration approved its first virtual reality flight simulation training device: Loft Dynamics' virtual reality Airbus Helicopters H125 FSTD, the same device EASA qualified. As of September 2024, Loft Dynamics remains the only VR FSTD qualified by both EASA and the FAA. Technology Hardware Modern virtual reality headset displays are based on technology developed for smartphones, including gyroscopes and motion sensors for tracking head, body, and hand positions; small HD screens for stereoscopic displays; and small, lightweight and fast computer processors. These components made headsets relatively affordable for independent VR developers and led to the 2012 Oculus Rift Kickstarter, which offered the first independently developed VR headset.
Independent production of VR images and video has increased alongside the development of affordable omnidirectional cameras, also known as 360-degree cameras or VR cameras, which can record interactive 360-degree photography and video, although at relatively low resolutions or in highly compressed formats for online streaming of 360 video. In contrast, photogrammetry is increasingly used to combine several high-resolution photographs for the creation of detailed 3D objects and environments in VR applications. To create a feeling of immersion, special output devices are needed to display virtual worlds. Well-known formats include head-mounted displays and the CAVE. In order to convey a spatial impression, two images are generated and displayed from different perspectives (stereo projection). There are different technologies available to bring the respective image to the correct eye; a distinction is made between active technologies (e.g. shutter glasses) and passive technologies (e.g. polarizing filters or Infitec). To improve the feeling of immersion, wearable string-based haptic devices can convey the feel of complex geometries in virtual reality: by finely controlling the tension applied to each finger joint, the strings simulate the sensations involved in touching those virtual geometries. Special input devices are required for interaction with the virtual world. Some of the most common input devices are motion controllers and optical tracking sensors. In some cases, wired gloves are used. Controllers typically use optical tracking systems (primarily infrared cameras) for location and navigation, so that the user can move freely without wiring. Some input devices provide the user with force feedback to the hands or other parts of the body, so that the user can orient themselves in the three-dimensional world through haptics and sensor technology as an additional sensory channel and carry out realistic simulations. This allows the viewer to have a sense of direction in the artificial landscape. Additional haptic feedback can be obtained from omnidirectional treadmills (with which walking in virtual space is controlled by real walking movements) and from vibration gloves and suits. Virtual reality cameras can be used to create VR photography using 360-degree panorama videos. VR cameras are available in various formats, with varying numbers of lenses installed in the camera. Software The Virtual Reality Modelling Language (VRML), first introduced in 1994, was intended for the development of "virtual worlds" without dependency on headsets. The Web3D consortium was subsequently founded in 1997 for the development of industry standards for web-based 3D graphics. The consortium subsequently developed X3D from the VRML framework as an archival, open-source standard for web-based distribution of VR content. WebVR is an experimental JavaScript application programming interface (API) that provides support for various virtual reality devices, such as the HTC Vive, Oculus Rift, Google Cardboard or OSVR, in a web browser. Visual immersion experience Display resolution Minimum angle of resolution (MAR) is the smallest angular separation at which a viewer can still distinguish two display pixels as separate. It is often measured in arc-seconds, and the physical pixel spacing corresponding to it depends on the viewing distance. For the general public, this resolution is about 30–65 arc-seconds; combined with the viewing distance, it determines the spatial resolution.
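The relationship between angular resolution and physical pixel spacing can be reproduced with a short calculation. The sketch below is illustrative only: the function name is ours, and the 60 arc-second threshold is an assumption chosen from the middle of the range quoted above.

import math

def max_resolvable_pitch_mm(mar_arcsec: float, viewing_distance_m: float) -> float:
    """Return the pixel spacing (in mm) that subtends mar_arcsec at the given distance.

    Pixels spaced more closely than this cannot be resolved as separate by the viewer.
    """
    mar_rad = math.radians(mar_arcsec / 3600.0)              # arc-seconds -> radians
    return math.tan(mar_rad) * viewing_distance_m * 1000.0   # chord length in millimetres

# Assuming a 60 arc-second (one arc-minute) threshold:
for distance_m in (1.0, 2.0):
    print(f"{distance_m:.0f} m: {max_resolvable_pitch_mm(60.0, distance_m):.2f} mm")
# Output: 1 m: 0.29 mm, 2 m: 0.58 mm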
At viewing distances of 1 m and 2 m respectively, typical viewers will not be able to perceive two pixels as separate if they are less than 0.29 mm apart at 1 m or less than 0.58 mm apart at 2 m. Image latency and display refresh frequency Most small-size displays have a refresh rate of 60 Hz, which adds about 15 ms of additional latency. This is reduced to less than 7 ms if the refresh rate is increased to 120 Hz or even 240 Hz and more. Participants generally feel that the experience is more immersive with higher refresh rates as a result. However, higher refresh rates require a more powerful graphics processing unit. Relationship between display and field of view In assessing the immersion achieved by a VR device, the field of view (FOV) must be considered in addition to image quality. The eyes have a horizontal FOV of about 107 to 110 degrees toward the temporal side and about 60 to 70 degrees toward the nose, and a vertical FOV of about 95 degrees downward and 85 degrees upward; eye movements are estimated at roughly 30 degrees to either side horizontally and 20 degrees vertically. Binocular vision is limited to the 120 to 140 degrees where the right and the left visual fields overlap. With eye movements, the two-eye FOV is roughly 300 by 175 degrees, approximately one third of the full sphere. Applications Virtual reality is most commonly used in entertainment applications such as video games, 3D cinema, amusement park rides (including dark rides), and social virtual worlds. Consumer virtual reality headsets were first released by video game companies in the early-to-mid 1990s. Beginning in the 2010s, next-generation commercial tethered headsets were released by Oculus (Rift), HTC (Vive) and Sony (PlayStation VR), setting off a new wave of application development. 3D cinema has been used for sporting events, pornography, fine art, music videos and short films. Since 2015, roller coasters and theme parks have incorporated virtual reality to match visual effects with haptic feedback. VR not only fits the digital trend of the industry but also enhances a film's visual effects and gives the audience more ways to interact with it. In social sciences and psychology, virtual reality offers a cost-effective tool to study and replicate interactions in a controlled environment. It can be used as a form of therapeutic intervention. One example is virtual reality exposure therapy (VRET), a form of exposure therapy for treating anxiety disorders such as post-traumatic stress disorder (PTSD) and phobias. A VR therapy has been designed to help people with psychosis and agoraphobia manage their avoidance of outside environments. In the therapy, the user wears a headset and a virtual character provides psychological advice and guides them as they explore simulated environments (such as a cafe or a busy street). The UK's National Institute for Health and Care Excellence (NICE) is assessing the therapy to see if it should be recommended on the NHS. During the COVID-19 pandemic, social VR has also been used as a mental-health tool, in the form of self-administered, non-traditional cognitive behavioural therapy. Virtual reality programs are being used in rehabilitation for elderly individuals who have been diagnosed with Alzheimer's disease, giving these patients the opportunity to simulate real experiences that they would not otherwise be able to have because of their condition.
Seventeen recent randomized controlled trials have shown that virtual reality applications are effective in treating cognitive deficits in patients with neurological diagnoses. Loss of mobility in elderly patients can lead to a sense of loneliness and depression. Virtual reality can help those aging in place stay connected to an outside world that they cannot easily navigate. Virtual reality also allows exposure therapy to take place in a safe environment. In medicine, simulated VR surgical environments were first developed in the 1990s. Under the supervision of experts, VR can provide effective and repeatable training at a low cost, allowing trainees to recognize and amend errors as they occur. Virtual reality has been used in physical rehabilitation since the 2000s. Despite numerous studies, good-quality evidence of its efficacy for the treatment of Parkinson's disease, compared to other rehabilitation methods that do not require sophisticated and expensive equipment, is lacking. A 2018 review on the effectiveness of mirror therapy by virtual reality and robotics for any type of pathology reached a similar conclusion. Another study showed the potential of VR to promote mimicry and revealed differences between non-autistic and autistic individuals in their responses to a two-dimensional avatar. Immersive virtual reality technology with myoelectric and motion-tracking control may represent a possible therapy option for treatment-resistant phantom limb pain. Pain scale measurements were taken into account, and an interactive 3D kitchen environment was developed based on the principles of mirror therapy to allow control of virtual hands while wearing a motion-tracked VR headset. A systematic search of PubMed and Embase was performed, and the results were pooled in two meta-analyses, which showed a significant effect in favor of VR therapy for balance. In the fast-paced and globalised business world, meetings in VR are used to create an environment in which interactions with other people (e.g. colleagues, customers, partners) can feel more natural than a phone call or video chat. In customisable meeting rooms, all parties can join using a VR headset and interact as if they were in the same physical room. Presentations, videos or 3D models (of, e.g., products or prototypes) can be uploaded and interacted with. Compared to traditional text-based computer-mediated communication, avatar-based interactions in 3D virtual environments lead to higher levels of consensus, satisfaction, and cohesion among group members. VR can simulate real workspaces for workplace occupational safety and health purposes, educational purposes, and training purposes. It can be used to provide learners with a virtual environment where they can develop their skills without the real-world consequences of failing. It has been used and studied in primary education, anatomy teaching, military training, astronaut training, flight simulation, miner training, medical education, geography education, architectural design, driver training, and bridge inspection. Immersive VR engineering systems enable engineers to see virtual prototypes prior to the availability of any physical prototypes. Supplementing training with virtual training environments has been claimed to offer avenues of realism in military and healthcare training while minimizing cost. It has also been claimed to reduce military training costs by minimizing the amount of ammunition expended during training periods.
VR can be used for healthcare training and the education of medical practitioners. Further, several applications have been developed for multiple types of safety training. The latest results indicate that virtual reality safety training is more effective than traditional training in terms of knowledge acquisition and knowledge retention. In the engineering field, VR has proved very useful for both engineering educators and students. Once prohibitively expensive, the technology has become much more accessible as overall costs have fallen, making it a useful tool for educating future engineers. Its most significant element is the ability of students to interact with 3D models that respond accurately to real-world possibilities, providing the immersion that many students need to grasp complex topics and apply them. Future architects and engineers benefit greatly from developing an understanding of spatial relationships and devising solutions grounded in real-world applications. The first fine art virtual world was created in the 1970s. As the technology developed, more artistic programs were produced throughout the 1990s, including feature films. When commercially available technology became more widespread, VR festivals began to emerge in the mid-2010s. The first uses of VR in museum settings began in the 1990s, seeing a significant increase in the mid-2010s. Additionally, museums have begun making some of their content accessible in virtual reality. Virtual reality's growing market presents an opportunity and an alternative channel for digital marketing. It is also seen as a new platform for e-commerce, particularly in the bid to challenge traditional "brick and mortar" retailers. However, a 2018 study revealed that the majority of goods are still purchased in physical stores. In education, virtual reality has been shown to promote higher-order thinking, student interest and engagement, knowledge acquisition, and mental habits and understanding that are generally useful within an academic context. A case has also been made for including virtual reality technology in public libraries. This would give library users access to cutting-edge technology and unique educational experiences, including access to virtual, interactive copies of rare texts and artifacts and to tours of famous landmarks and archeological digs (as in the case of the Virtual Ganjali Khan Project). Starting in the early 2020s, virtual reality has also been discussed as a technological setting that may support people's grieving process, based on digital recreations of deceased individuals. In 2021, this practice received substantial media attention following a South Korean TV documentary, which invited a grieving mother to interact with a virtual replica of her deceased daughter. Subsequently, scientists have summarized several potential implications of such endeavours, including their potential to facilitate adaptive mourning as well as many ethical challenges. Growing interest in the metaverse has resulted in organizational efforts to incorporate the many diverse applications of virtual reality into ecosystems like VIVERSE, reportedly offering connectivity between platforms for a wide range of uses.
Medical uses of VR Virtual reality (VR) technology has emerged as a significant tool in medical training and education. Specifically, there has been a major leap in innovation in surgical simulation and real-time surgical enhancement. Studies done at North Carolina medical institutions have demonstrated improvement in technical performance and skills among medical students and active surgeons using VR training as compared to traditional training, especially in procedures such as total hip arthroplasty. Alongside this, other VR simulation programs, such as LapSim, improve basic coordination, instrument handling, and procedure-based skills. These simulators aim for highly rated feedback and haptic touch, which provides a more realistic surgical feel. Studies show significant improvement in task completion time and scores after four-week LapSim training sessions. This simulation environment also allows surgeons to practice without risk to real patients, promoting patient safety. In research conducted at University Hospitals Schleswig-Holstein with collaborators from other institutions, both medical students and surgeons with years of experience showed marked performance gains after practicing with LapSim VR technology. Another recent study at the University of North Carolina at Chapel Hill has shown that developing VR and augmented reality (AR) systems has allowed surgeons to keep their eyes on a patient while accessing CT scans. This VR system allows for laparoscopic imaging integration, real-time skin layer visualization, and enhanced surgical precision capabilities. Both are examples of how surgeons can take advantage of virtual reality simulation, which can provide customized scenarios and independent learning with haptic feedback. Such VR systems need to be realistic enough to serve as educational tools while also being able to measure a surgeon's performance. Future challenges for this technology include developing more complex and realistic scenarios, incorporating stress-inducing factors and other realistic simulation elements, improving AR integration to give surgeons better eyes-on precision guidance, and keeping the technology cost-effective and widely available. Concerts In June 2020, Jean Michel Jarre performed in VRChat. In July, Brendan Bradley released FutureStages, a free web-based virtual reality venue for live events and concerts throughout the 2020 shutdown. Justin Bieber performed on November 18, 2021, in WaveXR. On December 2, 2021, non-player characters performed at the Mugar Omni Theater, with audiences interacting with a live performer both in virtual reality and as projected on the IMAX dome screen. Meta's Foo Fighters Super Bowl VR concert was performed on Venues. Post Malone performed in Venues starting July 15, 2022. Megan Thee Stallion performed on AmazeVR at AMC Theaters throughout 2022. On October 24, 2021, Billie Eilish performed on Oculus Venues. Pop group Imagine Dragons performed on June 15, 2022. Concerns and challenges Health and safety There are many health and safety considerations associated with virtual reality. A number of unwanted symptoms have been caused by prolonged use of virtual reality, and these may have slowed proliferation of the technology.
Most virtual reality systems come with consumer warnings, including: seizures; developmental issues in children; trip-and-fall and collision warnings; discomfort; repetitive stress injury; and interference with medical devices. Some users may experience twitches, seizures or blackouts while using VR headsets, even if they do not have a history of epilepsy and have never had blackouts or seizures before. About one in 4,000 people, or 0.025%, may experience these symptoms. Motion sickness, eyestrain, headaches, and discomfort are the most prevalent short-term adverse effects. In addition, because of the heavy weight of virtual reality headsets, discomfort may be more likely among children; therefore, children are advised against using VR headsets. Other problems may occur in physical interactions with one's environment. While wearing VR headsets, people quickly lose awareness of their real-world surroundings and may injure themselves by tripping over, or colliding with, real-world objects. VR headsets may regularly cause eye fatigue, as do all screen-based technologies, because people tend to blink less when watching screens, causing their eyes to become drier. There have been some concerns about VR headsets contributing to myopia, but although VR headsets sit close to the eyes, they may not necessarily contribute to nearsightedness if the focal length of the image being displayed is sufficiently far away. Virtual reality sickness (also known as cybersickness) occurs when a person's exposure to a virtual environment causes symptoms similar to motion sickness. Women are significantly more affected than men by headset-induced symptoms, at rates of around 77% and 33% respectively. The most common symptoms are general discomfort, headache, stomach awareness, nausea, vomiting, pallor, sweating, fatigue, drowsiness, disorientation, and apathy. For example, Nintendo's Virtual Boy received much criticism for its negative physical effects, including "dizziness, nausea, and headaches". These motion sickness symptoms are caused by a disconnect between what is being seen and what the rest of the body perceives. When the vestibular system, the body's internal balancing system, does not experience the motion that it expects from visual input through the eyes, the user may experience VR sickness. This can also happen if the VR system does not have a high enough frame rate, or if there is a lag between the body's movement and the onscreen visual reaction to it. Because approximately 25–40% of people experience some kind of VR sickness when using VR machines, companies are actively looking for ways to reduce it. Vergence-accommodation conflict (VAC) is one of the main causes of virtual reality sickness. In January 2022, The Wall Street Journal found that VR usage could lead to physical injuries, including leg, hand, arm and shoulder injuries. VR usage has also been tied to incidents that resulted in neck injuries (especially injuries to the cervical vertebrae). Children and teenagers in virtual reality Children are becoming increasingly aware of VR, with the proportion in the USA who had never heard of it dropping by about half, from 40% in autumn 2016 to 19% in spring 2017. A 2022 research report by Piper Sandler revealed that only 26% of U.S. teens own a VR device, 5% use it daily, while 48% of teen headset owners "seldom" use it. Of the teens who do not own a VR headset, 9% plan to buy one.
50% of surveyed teens are unsure about the metaverse or have no interest in it, and have no plans to purchase a VR headset. Studies show that young children may respond cognitively and behaviorally to immersive VR in ways that differ from adults. VR places users directly into the media content, potentially making the experience very vivid and real for children. For example, children of 6–18 years of age reported higher levels of presence and "realness" of a virtual environment compared with adults 19–65 years of age. Studies on VR consumer behavior and its effects on children, and a code of ethical conduct involving underage users, are especially needed, given the availability of VR porn and violent content. Related research on violence in video games suggests that exposure to media violence may affect attitudes, behavior, and even self-concept. Self-concept is a key indicator of core attitudes and coping abilities, particularly in adolescents. Early studies on observing versus participating in violent VR games suggest that physiological arousal and aggressive thoughts, but not hostile feelings, are higher for participants than for observers of the virtual reality game. For children, experiencing VR may further involve simultaneously holding the idea of the virtual world in mind while experiencing the physical world. Excessive usage of immersive technology that has very salient sensory features may compromise children's ability to maintain the rules of the physical world, particularly when wearing a VR headset that blocks out the location of objects in the physical world. Immersive VR can provide users with multisensory experiences that replicate reality or create scenarios that are impossible or dangerous in the physical world. Observations of 10 children experiencing VR for the first time suggested that 8- to 12-year-olds were more confident exploring VR content when it was set in a familiar situation; for example, the children enjoyed playing in the kitchen context of Job Simulator, and enjoyed breaking rules by engaging in activities they are not allowed to do in reality, such as setting things on fire. Privacy Digital privacy concerns have been associated with VR platforms; the persistent tracking required by all VR systems makes the technology particularly useful for, and vulnerable to, mass surveillance, including the gathering of information about personal actions, movements and responses. Data from eye tracking sensors, which are projected to become a standard feature in virtual reality headsets, may indirectly reveal information about a user's ethnicity, personality traits, fears, emotions, interests, skills, and physical and mental health conditions. The nature of VR technology means that it can gather a wide range of data about its users. This can include obvious information such as usernames and account information, but also extends to more personal data like physical movements, interaction habits, and responses to virtual environments. In addition, advanced VR systems can capture biometric data like voice patterns, eye movements, and physiological responses to VR experiences. Virtual reality technology has grown substantially since its inception, moving from a niche technology to a mainstream consumer product. As the user base has grown, so too has the amount of personal data collected by these systems. This data can be used to improve VR systems, to provide personalized experiences, or to collect demographic information for marketing purposes.
However, it also raises significant privacy concerns, especially when this data is stored, shared, or sold without the user's explicit consent. Existing data protection and privacy laws like the General Data Protection Regulation (GDPR) in the EU, and the California Consumer Privacy Act (CCPA) in the United States, can be applied to VR. These regulations require companies to disclose how they collect and use data, and give users a degree of control over their personal information. Despite these regulations, enforcing privacy laws in VR can be challenging due to the global nature of the technology and the vast amounts of data collected. Due to its history of privacy issues, the involvement of Meta Platforms (formerly Facebook, Inc.) in the VR market has led to privacy concerns specific to its platforms. In August 2020, Facebook announced that Oculus products would become subject to the terms of use and privacy policy of the Facebook social network, and that a Facebook account would be required to use future Oculus headset models, and all existing models (via deprecation of the separate Oculus account system) beginning January 2023. The announcement was criticized for the mandatory integration of Oculus headsets with Facebook data collection and policies (including the Facebook real-name policy), and preventing use of the hardware if the user's account is suspended. The following month, Facebook halted the sale of Oculus products in Germany due to concerns from regulators that the new policy was a violation of GDPR. In 2022, the company would later establish a separate "Meta account" system. In 2024, researchers from the University of Chicago demonstrated a security vulnerability in Meta Quest's Android-based system software (leveraging "Developer Mode" to inject an infected app), allowing them to obtain users' login credentials and inject false details during online banking sessions. This attack was considered to be difficult to execute outside of research settings but would make its target vulnerable to risks such as phishing, Internet fraud, and grooming. Virtual reality in fiction
Vanilla
Vanilla is a spice derived from orchids of the genus Vanilla, primarily obtained from pods of the flat-leaved vanilla (V. planifolia). Vanilla is not autogamous, so pollination is required to make the plants produce the fruit from which the vanilla spice is obtained. In 1837, Belgian botanist Charles François Antoine Morren discovered this fact and pioneered a method of artificially pollinating the plant. The method proved financially unworkable and was not deployed commercially. In 1841, Edmond Albius, a 12-year-old slave who lived on the French island of Réunion in the Indian Ocean, discovered that the plant could be hand-pollinated. Hand-pollination allowed global cultivation of the plant. Noted French botanist and plant collector Jean Michel Claude Richard falsely claimed to have discovered the technique three or four years earlier. By the end of the 20th century, Albius was considered the true discoverer. Three major species of vanilla currently are grown globally, all derived from a species originally found in Mesoamerica, including parts of modern-day Mexico. They are V. planifolia (syn. V. fragrans), grown on Madagascar, Réunion, and other tropical areas along the Indian Ocean; V. × tahitensis, grown in the South Pacific; and V. pompona, found in the West Indies, Central America, and South America. The majority of the world's vanilla is the V. planifolia species, more commonly known as Bourbon vanilla (after the former name of Réunion, Île Bourbon) or Madagascar vanilla, which is produced in Madagascar and neighboring islands in the southwestern Indian Ocean, and in Indonesia. Madagascar's and Indonesia's cultivations produce two-thirds of the world's supply of vanilla. Measured by weight, vanilla is the second-most expensive spice after saffron, because growing the vanilla seed pods is labor-intensive. Nevertheless, vanilla is widely used in both commercial and domestic baking, perfume production, and aromatherapy, as only small amounts are needed to impart its signature flavor and aroma. History Vanilla planifolia traditionally grew wild around the Gulf of Mexico from Tampico around to the northeast tip of South America, and from Colima to Ecuador on the Pacific side, as well as throughout the Caribbean. The Totonac people, who live along the eastern coast of Mexico in the present-day state of Veracruz, were among the first people to domesticate vanilla, cultivated on farms since at least 1185. The Totonac used vanilla as a fragrance in temples and as a good-luck charm in amulets, as well as flavoring for food and beverages. The cultivation of vanilla was a low-profile affair, as few people from outside these regions knew of it. Although the Totonacs are the most famously associated with human use of vanilla, it is speculated that the Olmecs, who also lived in the regions of wild vanilla growth thousands of years earlier, were one of the first people to use wild vanilla in cuisine. Aztecs from the central highlands of Mexico invaded the Totonacs in 1427, developed a taste for the vanilla pods, and began using vanilla to flavor their foods and drinks, often mixing it with cocoa in a drink called "xocolatl" that later inspired modern hot chocolate. The fruit was named tlilxochitl, wrongly interpreted as "black flower" instead of the more probable "black pod" because the matured fruit shrivels and turns a dark color shortly after being picked. 
For the Aztecs, much like earlier Mesoamerican peoples before them, it is probable that vanilla was used to tame the otherwise bitter taste of cacao, as sugarcane was not harvested in these regions at the time and there were no other sweeteners available. Hernán Cortés is credited with introducing both vanilla and chocolate to Europe in the 1520s. In Europe, vanilla was seen mostly as an additive to chocolate until the early 17th century when Hugh Morgan, a creative apothecary in the employ of Queen Elizabeth I, created chocolate-free, vanilla-flavored "sweetmeats". By the 18th century, the French were using vanilla to flavor ice cream. Until the mid-19th century, Mexico was the chief producer of vanilla. In 1819, French entrepreneurs shipped vanilla fruits to the islands of Réunion and Mauritius in hopes of producing vanilla there. After 1841, when Edmond Albius discovered how to pollinate the flowers quickly by hand, the pods began to thrive. Soon, the tropical orchids were sent from Réunion to the Comoros Islands, Seychelles, and Madagascar, along with instructions for pollinating them. By 1898, Madagascar, Réunion, and the Comoros Islands produced 200 metric tons of vanilla beans, about 80% of world production in that year. According to the United Nations Food and Agriculture Organization 2019 data, Madagascar, followed by Indonesia, were the largest producers of vanilla in 2018. After a tropical cyclone ravaged key croplands, the market price of vanilla rose sharply in the late 1970s and remained high through the early 1980s despite the introduction of Indonesian vanilla. In the mid-1980s, the cartel that had controlled vanilla prices and distribution since its creation in 1930 disbanded. Prices dropped 70% over the next few years, to nearly US$20 per kilogram; prices rose sharply again after tropical cyclone Hudah struck Madagascar in April 2000. The cyclone, political instability, and poor weather in the third year drove vanilla prices to US$500/kg in 2004, bringing new countries into the vanilla industry. A good crop, coupled with decreased demand caused by the production of imitation vanilla, pushed the market price down to the $40/kg range in the middle of 2005. By 2010, prices were down to $20/kg. Cyclone Enawo caused a similar spike to $500/kg in 2017. An estimated 95% of "vanilla" products are artificially flavored with vanillin derived from lignin instead of vanilla fruits. Although vanilla was domesticated in Mesoamerica and subsequently spread to the Old World, the use of an unidentified, Old World-endemic Vanilla species is attested in Canaan/Israel during the Middle Bronze Age and later. Traces of vanillin were found in wine jars in Jerusalem, which were used by the Judahite elite before the city was destroyed in 586 BCE. Etymology The word vanilla is derived from the Spanish word meaning "little pod", the diminutive of vaina derived from the Latin vagina (sheath) describing the shape of the pods. The word "vanilla" entered the English language in 1754, when the botanist Philip Miller wrote about the genus in his Gardener’s Dictionary. Biology Vanilla orchid The main species of vanilla cultivated is V. planifolia. Although it is native to Mesoamerica and South America, it is now widely grown throughout the tropics. Indonesia and Madagascar are the world's largest producers. Additional sources include V. pompona and V. tahitiensis (grown in Niue and Tahiti), although the vanillin content of these species is much less than V. planifolia. 
Vanilla grows as a vine, climbing up an existing tree (also called a tutor), pole, or other support. It can be grown in a wood (on trees), in a plantation (on trees or poles), or in a "shader", in increasing orders of productivity. Its growth environment is referred to as its terroir, and includes not only the adjacent plants, but also the climate, geography, and local geology. Left alone, it will grow as high as possible on the support, with few flowers. Every year, growers fold the higher parts of the plant downward so the plant stays at heights accessible by a standing human. This also greatly stimulates flowering. The distinctively flavored compounds are found in the fruit, which results from the pollination of the flower. These seed pods are roughly a third of an inch (8 mm) by six inches (15 cm), and brownish red to black when ripe. Inside of these pods is an oily liquid full of tiny seeds. One flower produces one fruit. V. planifolia flowers are hermaphroditic: they carry both male (anther) and female (stigma) organs. However, self-pollination is blocked by a membrane which separates those organs. Despite various claims otherwise, the only pollinators definitively documented to date are orchid bees in the genus Eulaema and the Western honey bee. All commercial vanilla production takes place via hand pollination by humans. The first vanilla orchid to flower in Europe was in the London collection of the Honourable Charles Greville in 1806. Cuttings from that plant went to Netherlands and Paris, from which the French first transplanted the vines to their overseas colonies. The vines grew, but would not fruit outside Mexico. The only known way to produce fruits is artificial pollination. Today, even in Mexico, hand pollination is used extensively. In 1837, botanist Charles François Antoine Morren began experimenting with hand pollination of Vanilla orchids in cultivation in Europe. The method proved financially unworkable and was not deployed commercially. A few years later in 1841, a simple and efficient artificial hand-pollination method was developed by a 12-year-old slave named Edmond Albius on Réunion, a method still used today. Using a beveled sliver of bamboo, an agricultural worker lifts the membrane separating the anther and the stigma, then, using the thumb, transfers the pollinia from the anther to the stigma. The flower, self-pollinated, will then produce a fruit. The vanilla flower lasts about one day, sometimes less, so growers have to inspect their plantations every day for open flowers, a labor-intensive task. The fruit, a seed capsule, if left on the plant, ripens and opens at the end; as it dries, the phenolic compounds crystallize, giving the fruits a diamond-dusted appearance, which the French call givre (hoarfrost). It then releases the distinctive vanilla smell. The fruit contains tiny, black seeds. In dishes prepared with whole natural vanilla, these seeds are recognizable as black specks. Both the pod and the seeds are used in cooking. Like other orchids' seeds, vanilla seeds will not germinate without the presence of certain mycorrhizal fungi. Instead, growers reproduce the plant by cutting: they remove sections of the vine with six or more leaf nodes, a root opposite each leaf. The two lower leaves are removed, and this area is buried in loose soil at the base of support. The remaining upper roots cling to the support, and often grow down into the soil. Growth is rapid under good conditions. 
Cultivars Bourbon vanilla or Bourbon-Madagascar vanilla, produced from V. planifolia plants introduced from the Americas, is from Indian Ocean islands such as Madagascar, the Comoros, Mauritius and Réunion, formerly named the Île Bourbon. It is also used to describe the distinctive vanilla flavor derived from V. planifolia grown successfully in tropical countries such as India. However, there is no Bourbon whiskey in Bourbon vanilla extract, despite common confusion about this. Mexican vanilla, made from the native V. planifolia, is produced in much smaller quantities and marketed as the vanilla from the land of its origin. Tahitian vanilla is from French Polynesia, made with V. tahitensis. Genetic analysis shows this species is possibly a cultivar from a hybrid of V. planifolia and V. odorata. The species was introduced by French Admiral François Alphonse Hamelin to French Polynesia from the Philippines, where it had been introduced from Guatemala by the Manila Galleon trade. It comprises less than one percent of vanilla production and is only grown by a handful of skilled growers and preparers. West Indian vanilla is made from V. pompona grown in the Caribbean and Central and South America. The term French vanilla is often used to designate particular preparations with a strong vanilla aroma, containing vanilla grains and sometimes also containing eggs (especially egg yolks). The appellation originates from the French style of making vanilla ice cream with a custard base, using vanilla pods, cream, and egg yolks. Inclusion of vanilla varietals from any of the former French dependencies or overseas France may be a part of the flavoring. Alternatively, French vanilla is taken to refer to a vanilla-custard flavor. Chemistry Vanilla essence occurs in two forms. Real seedpod extract is a complex mixture of several hundred different compounds, including vanillin, acetaldehyde, acetic acid, furfural, hexanoic acid, 4-hydroxybenzaldehyde, eugenol, methyl cinnamate, and isobutyric acid. Synthetic essence consists of a solution of synthetic vanillin in ethanol. The chemical compound vanillin (4-hydroxy-3-methoxybenzaldehyde) is a major contributor to the characteristic flavor and aroma of real vanilla and is the main flavor component of cured vanilla beans. Vanillin was first isolated from vanilla pods by Gobley in 1858. By 1874, it had been obtained from glycosides of pine tree sap, temporarily causing a depression in the natural vanilla industry. Vanillin can be easily synthesized from various raw materials, but the majority of food-grade (> 99% pure) vanillin is made from guaiacol. Cultivation In general, quality vanilla only comes from good vines and through careful production methods. Commercial vanilla production can be performed under open-field and "greenhouse" operations. The two production systems share several similarities: plant height and the number of years before the first grains are produced; shade requirements; the amount of organic matter needed; a tree or frame to grow around (bamboo, coconut or Erythrina lanceolata); and labor intensity (pollination and harvest activities). Vanilla grows best in a hot, humid climate from sea level to an elevation of 1,500 m. The ideal climate has moderate rainfall, 1,500–3,000 mm, evenly distributed through 10 months of the year. Optimum temperatures for cultivation are during the day and during the night. Ideal humidity is around 80%, and under normal greenhouse conditions, it can be achieved by an evaporative cooler.
However, since greenhouse vanilla is grown near the equator and under polymer (HDPE) netting (shading of 50%), this humidity can be achieved by the environment. Most successful vanilla growing and processing is done in the region within 10 to 20° of the equator. Soils for vanilla cultivation should be loose, with high organic matter content and loamy texture. They must be well drained, and a slight slope helps in this condition. Soil pH has not been well documented, but some researchers have indicated an optimum soil pH around 5.3. Mulch is very important for proper growth of the vine, and a considerable portion of mulch should be placed in the base of the vine. Fertilization varies with soil conditions, but general recommendations are: 40 to 60 g of N, 20 to 30 g of P2O5 and 60 to 100 g of K2O should be applied to each plant per year besides organic manures, such as vermicompost, oil cakes, poultry manure, and wood ash. Foliar applications are also good for vanilla, and a solution of 1% NPK (17:17:17) can be sprayed on the plant once a month. Vanilla requires organic matter, so three or four applications of mulch a year are adequate for the plant. Propagation, preparation and type of stock Dissemination of vanilla can be achieved either by stem cutting or by tissue culture. For stem cutting, a progeny garden needs to be established. All plants need to grow under 50% shade, as well as the rest of the crop. Mulching the trenches with coconut husk and micro irrigation provide an ideal microclimate for vegetative growth. Cuttings between should be selected for planting in the field or greenhouse. Cuttings below need to be rooted and raised in a separate nursery before planting. Planting material should always come from unflowered portions of the vine. Wilting of the cuttings before planting provides better conditions for root initiation and establishment. Before planting the cuttings, trees to support the vine must be planted at least three months before sowing the cuttings. Pits of 30 × 30 × 30 cm are dug away from the tree and filled with farm yard manure (vermicompost), sand and top soil mixed well. An average of 2000 cuttings can be planted per hectare (2.5 acres). One important consideration is that when planting the cuttings from the base, four leaves should be pruned and the pruned basal point must be pressed into the soil in a way such that the nodes are in close contact with the soil, and are placed at a depth of . The top portion of the cutting is tied to the tree using natural fibers such as banana or hemp. Tissue culture Tissue culture was first used as a means of creating vanilla plants during the 1980s at Tamil Nadu University. This was the part of the first project to grow V. planifolia in India. At that time, a shortage of vanilla planting stock was occurring in India. The approach was inspired by the work going on to tissue culture other flowering plants. Several methods have been proposed for vanilla tissue culture, but all of them begin from axillary buds of the vanilla vine. In vitro multiplication has also been achieved through culture of callus masses, protocorms, root tips and stem nodes. Description of any of these processes can be obtained from the references listed before, but all of them are successful in generation of new vanilla plants that first need to be grown up to a height of at least before they can be planted in the field or greenhouse. 
Scheduling considerations In the tropics, the ideal time for planting vanilla is from September to November, when the weather is neither too rainy nor too dry, but this recommendation varies with growing conditions. Cuttings take one to eight weeks to establish roots, and show initial signs of growth from one of the leaf axils. A thick mulch of leaves should be provided immediately after planting as an additional source of organic matter. Three years are required for cuttings to grow enough to produce flowers and subsequent pods. As with most orchids, the blossoms grow along stems branching from the main vine. The buds, growing along the stems, bloom and mature in sequence, each at a different interval. Pollination Flowering normally occurs every spring, and without pollination, the blossom wilts and falls, and no vanilla bean can grow. In the wild in the New World, the only organisms ever observed to carry Vanilla pollen are orchid bees in the genus Eulaema, though direct evidence documenting seed set is lacking. Claims that pollination is achieved by stingless bees of the genus Melipona or hummingbirds have never been substantiated, though they do visit the flowers. Even within the range of orchid bees, wild vanilla orchids have only a 1% chance of successful pollination. As a result, all vanilla grown today is pollinated by hand. Each flower must be hand-pollinated within 12 hours of opening. A small splinter of wood or a grass stem is used to lift the rostellum or move the flap upward, so the overhanging anther can be pressed against the stigma and self-pollinate the vine. Generally, one flower per raceme opens per day, so the raceme may be in flower for over 20 days. A healthy vine should produce about 50 to 100 beans per year, but growers are careful to pollinate only five or six flowers from the 20 on each raceme. The first flowers that open in each raceme are usually the only ones that are pollinated, so the resulting beans are similar in age and mature together. This agronomic practice facilitates harvest and increases bean quality, as over-pollination results in diseases and inferior bean quality. The fruits require five to six weeks to develop, but around six months to mature. A vine remains productive between 12 and 14 years. Pest and disease management Vanilla is susceptible to many fungal and viral diseases. Fusarium, Sclerotium, Phytophthora, and Colletrotrichum species cause rots of root, stem, leaf, bean, and shoot apex. Development of most diseases is favoured by unsuitable growing conditions such as overwatering, insufficient drainage, heavy mulch, overpollination, and too much shade. Fungal diseases can be controlled by spraying Bordeaux mixture (1%), carbendazim (0.2%) and copper oxychloride (0.2%). Biological control of the spread of such diseases can be managed by applying to the soil Trichoderma ( per plant in the rhizosphere) and foliar application of pseudomonas (0.2%). Mosaic virus, leaf curl, and cymbidium mosaic potexvirus are the common viral diseases. These diseases are transmitted through the sap, so affected plants must be destroyed. The insect pests of vanilla include beetles and weevils that attack the flower, caterpillars, snakes, and slugs that damage the tender parts of shoot, flower buds, and immature fruit, and grasshoppers that affect cutting shoot tips. If organic agriculture is practiced, insecticides are avoided, and mechanical measures are adopted for pest management. 
Most of these practices are implemented under greenhouse cultivation, since such field conditions are very difficult to achieve. Artificial vanilla Most artificial vanilla products contain vanillin, which can be produced synthetically from lignin, a natural polymer found in wood. Most synthetic vanillin is a byproduct from the pulp used in papermaking, in which the lignin is broken down using sulfites or sulfates. However, vanillin is only one of 171 identified aromatic components of real vanilla fruits. The orchid species Leptotes bicolor is used as a natural vanilla replacement in Paraguay and southern Brazil. In 1996, the US Food and Drug Administration cautioned that some vanilla products sold in Mexico were made from the cheaper tonka bean, which, as well as vanillin, also contains the toxin coumarin. It advised consumers to always check the ingredients label and avoid suspiciously cheap products. Nonplant vanilla flavoring In the United States, castoreum, the exudate from the castor sacs of mature beavers, has been approved by the Food and Drug Administration as a food additive, often referenced simply as a "natural flavoring" in the product's list of ingredients. It is used in both food and beverages, especially as vanilla and raspberry flavoring, with a total annual U.S. production of less than 300 pounds. It is also used to flavor some cigarettes and in perfume-making, and is used by fur trappers as a scent lure. Harvest Harvesting vanilla fruits is as labor-intensive as pollinating the blossoms. Immature, dark green pods are not harvested. Pale yellow discoloration that commences at the distal end of the fruits is not a good indication of the maturity of pods. Each fruit ripens at its own time, requiring a daily harvest. "Current methods for determining the maturity of vanilla (Vanilla planifolia Andrews) beans are unreliable. Yellowing at the blossom end, the current index, occurs before beans accumulate maximum glucovanillin concentrations. Beans left on the vine until they turn brown have higher glucovanillin concentrations but may split and have low quality. Judging bean maturity is difficult as they reach full size soon after pollination. Glucovanillin accumulates from 20 weeks, maximum about 40 weeks after pollination. Mature green beans have 20% dry matter but less than 2% glucovanillin." The accumulation of dry matter and glucovanillin is highly correlated. To ensure the finest flavor from every fruit, each individual pod must be picked by hand just as it begins to split on the end. Overmature fruits are likely to split, causing a reduction in market value. A pod's commercial value is fixed based on its length and appearance. If the fruit is more than in length, it is categorized as first-quality. The largest fruits greater than and up to as much as are usually reserved for the gourmet vanilla market, for sale to top chefs and restaurants. If the fruits are between 10 and 15 cm long, pods are in the second-quality category, and fruits less than in length are in the third-quality category. Each fruit contains thousands of tiny black vanilla seeds. Vanilla fruit yield depends on the care and management given to the hanging and fruiting vines. Any practice directed at stimulating aerial root production has a direct effect on vine productivity. A five-year-old vine can produce between of pods, and this production can increase up to after a few years. The harvested green fruit can be commercialized as such or cured to get a better market price.
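As a rough illustration of the length-based quality categories described above, the classification can be expressed as a small function. The sketch below is illustrative only: the 15 cm and 10 cm cut-offs are inferred from the 10–15 cm second-quality range quoted above (the precise figures vary among sources and are partly elided in the text), the function name is our own, and real grading also weighs appearance and moisture content.

def classify_pod_by_length(length_cm: float) -> str:
    """Assign a quality category to a vanilla pod based on its length in centimetres.

    Thresholds are assumptions inferred from the 10-15 cm second-quality range
    described above; actual grading also considers appearance and moisture.
    """
    if length_cm > 15:
        return "first quality"   # longest pods, including gourmet-market fruit
    elif length_cm >= 10:
        return "second quality"  # pods between 10 and 15 cm
    else:
        return "third quality"   # pods shorter than 10 cm

print(classify_pod_by_length(18))  # first quality
print(classify_pod_by_length(12))  # second quality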
Curing Several methods exist in the market for curing vanilla; nevertheless, all of them consist of four basic steps: killing, sweating, slow-drying, and conditioning of the beans. Killing The vegetative tissue of the vanilla pod is killed to stop the vegetative growth of the pods and disrupt the cells and tissue of the fruits, which initiates enzymatic reactions responsible for the aroma. The method of killing varies, but may be accomplished by heating in hot water, freezing, or scratching, or killing by heating in an oven or exposing the beans to direct sunlight. The different methods give different profiles of enzymatic activity. Testing has shown mechanical disruption of fruit tissues can cause curing processes, including the degeneration of glucovanillin to vanillin, so the reasoning goes that disrupting the tissues and cells of the fruit allow enzymes and enzyme substrates to interact. Hot-water killing may consist of dipping the pods in hot water () for three minutes, or at for 10 seconds. In scratch killing, fruits are scratched along their length. Frozen or quick-frozen fruits must be thawed again for the subsequent sweating stage. Tied in bundles and rolled in blankets, fruits may be placed in an oven at for 36 to 48 hours. Exposing the fruits to sunlight until they turn brown, a method originating in Mexico, was practiced by the Aztecs. Sweating Sweating is a hydrolytic and oxidative process. Traditionally, it consists of keeping fruits, for 7 to 10 days, densely stacked and insulated in wool or other cloth. This retains a temperature of and high humidity. Daily exposure to the sun may also be used, or dipping the fruits in hot water. The fruits are brown and have attained much of the characteristic vanilla flavor and aroma by the end of this process, but still retain a 60–70% moisture content by weight. Drying Reduction of the beans to 25–30% moisture by weight, to prevent rotting and to lock the aroma in the pods, is always achieved by some exposure of the beans to air, and usually (and traditionally) intermittent shade and sunlight. Fruits may be laid out in the sun during the mornings and returned to their boxes in the afternoons, or spread on a wooden rack in a room for three to four weeks, sometimes with periods of sun exposure. Drying is the most problematic of the curing stages; unevenness in the drying process can lead to the loss of vanillin content of some fruits by the time the others are cured. Conditioning Conditioning is performed by storing the pods for five to six months in closed boxes, where the fragrance develops. The processed fruits are sorted, graded, bundled, and wrapped in paraffin paper and preserved for the development of desired bean qualities, especially flavor and aroma. The cured vanilla fruits contain an average of 2.5% vanillin. Grading Once fully cured, the vanilla fruits are sorted by quality and graded. Several vanilla fruit grading systems are in use. Each country which produces vanilla has its own grading system, and individual vendors, in turn, sometimes use their own criteria for describing the quality of the fruits they offer for sale. In general, vanilla fruit grade is based on the length, appearance (color, sheen, presence of any splits, presence of blemishes), and moisture content of the fruit. Whole, dark, plump and oily pods that are visually attractive, with no blemishes, and that have a higher moisture content are graded most highly. 
Such pods are particularly prized by chefs for their appearance and can be featured in gourmet dishes. Beans that show localized signs of disease or other physical defects are cut to remove the blemishes; the shorter fragments left are called "cuts" and are assigned lower grades, as are fruits with lower moisture contents. Lower-grade fruits tend to be favored for uses in which the appearance is not as important, such as in the production of vanilla flavoring extract and in the fragrance industry. Higher-grade fruits command higher prices in the market. However, because grade is so dependent on visual appearance and moisture content, fruits with the highest grade do not necessarily contain the highest concentration of characteristic flavor molecules such as vanillin, and are not necessarily the most flavorful. † moisture content varies among sources cited A simplified, alternative grading system has been proposed for classifying vanilla fruits suitable for use in cooking: Under this scheme, vanilla extract is normally made from Grade B fruits. Production In 2020, world production of vanilla was 7,614 tonnes, led by Madagascar with 39.1% of the total, and Indonesia with 30.3% (table). Due to drought, cyclones, and poor farming practices in Madagascar, there were concerns about the global supply and costs of vanilla in 2017 and 2018. The intensity of criminal enterprises against Madagascar farmers is high, elevating the worldwide cost of using Madagascar vanilla in consumer products. Uses The four main commercial preparations of natural vanilla are: Whole pod Powder (ground pods, kept pure or blended with sugar, starch, or other ingredients) Extract (in alcoholic or occasionally glycerol solution; both pure and imitation forms of vanilla contain at least 35% alcohol) Vanilla sugar, a packaged mix of sugar and vanilla extract Vanilla flavoring in food may be achieved by adding vanilla extract or by cooking vanilla pods in the liquid preparation. A stronger aroma may be attained if the pods are split in two, exposing more of a pod's surface area to the liquid. In this case, the pods' seeds are mixed into the preparation. Natural vanilla gives a brown or yellow color to preparations, depending on the concentration. Good-quality vanilla has a strong, aromatic flavor, but food with small amounts of low-quality vanilla or artificial vanilla-like flavorings are far more common, since true vanilla is much more expensive. Regarded as the world's most popular aroma and flavor, vanilla is a widely used aroma and flavor compound for foods, beverages and cosmetics, as indicated by its popularity as an ice cream flavor. Although vanilla is a prized flavoring agent on its own, it is also used to enhance the flavor of other substances, to which its own flavor is often complementary, such as chocolate, custard, caramel, coffee, and others. Vanilla is a common ingredient in Western sweet baked goods, such as cookies and cakes. Despite the expense, vanilla is highly valued for its flavor. The food industry uses methyl and ethyl vanillin as less-expensive substitutes for real vanilla. Ethyl vanillin is more expensive, but has a stronger note. Cook's Illustrated ran several taste tests pitting vanilla against vanillin in baked goods and other applications, and to the consternation of the magazine editors, tasters could not differentiate the flavor of vanillin from vanilla; however, for the case of vanilla ice cream, natural vanilla won out. 
A more recent and thorough test by the same group produced a more interesting variety of results; namely, high-quality artificial vanilla flavoring is best for cookies, while high-quality real vanilla is slightly better for cakes and significantly better for unheated or lightly heated foods. The liquid extracted from vanilla pods was once believed to have medical properties, helping with various stomach ailments. Contact dermatitis The sap of most species of vanilla orchid which exudes from cut stems or where beans are harvested can cause moderate to severe dermatitis if it comes in contact with bare skin. The sap of vanilla orchids contains calcium oxalate crystals, which are thought to be the main causative agent of contact dermatitis in vanilla plantation workers. Gallery
Biology and health sciences
Monocots
32640
https://en.wikipedia.org/wiki/Vector%20calculus
Vector calculus
Vector calculus or vector analysis is a branch of mathematics concerned with the differentiation and integration of vector fields, primarily in three-dimensional Euclidean space. The term vector calculus is sometimes used as a synonym for the broader subject of multivariable calculus, which spans vector calculus as well as partial differentiation and multiple integration. Vector calculus plays an important role in differential geometry and in the study of partial differential equations. It is used extensively in physics and engineering, especially in the description of electromagnetic fields, gravitational fields, and fluid flow. Vector calculus was developed from the theory of quaternions by J. Willard Gibbs and Oliver Heaviside near the end of the 19th century, and most of the notation and terminology was established by Gibbs and Edwin Bidwell Wilson in their 1901 book, Vector Analysis. In its standard form using the cross product, vector calculus does not generalize to higher dimensions, but the alternative approach of geometric algebra, which uses the exterior product, does (see below for more). Basic objects Scalar fields A scalar field associates a scalar value to every point in a space. The scalar is a mathematical number representing a physical quantity. Examples of scalar fields in applications include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields (known as scalar bosons), such as the Higgs field. These fields are the subject of scalar field theory. Vector fields A vector field is an assignment of a vector to each point in a space. A vector field in the plane, for instance, can be visualized as a collection of arrows with a given magnitude and direction each attached to a point in the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout space, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from point to point. This can be used, for example, to calculate work done over a line. Vectors and pseudovectors In more advanced treatments, one further distinguishes pseudovector fields and pseudoscalar fields, which are identical to vector fields and scalar fields, except that they change sign under an orientation-reversing map: for example, the curl of a vector field is a pseudovector field, and if one reflects a vector field, the curl points in the opposite direction. This distinction is clarified and elaborated in geometric algebra, as described below. Vector algebra The algebraic (non-differential) operations in vector calculus are referred to as vector algebra, being defined for a vector space and then applied pointwise to a vector field. The basic algebraic operations consist of vector addition, scalar multiplication, the dot product, and the cross product. Also commonly used are the two triple products: the scalar triple product and the vector triple product. Operators and theorems Differential operators Vector calculus studies various differential operators defined on scalar or vector fields, which are typically expressed in terms of the del operator (∇), also known as "nabla". The three basic vector operators are the gradient, the divergence, and the curl. Also commonly used are the two Laplace operators: the scalar Laplacian and the vector Laplacian. A quantity called the Jacobian matrix is useful for studying functions when both the domain and range of the function are multivariable, such as a change of variables during integration.
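For reference, the three basic operators and the scalar Laplacian can be written out in Cartesian coordinates. The following is a standard formulation for a scalar field f(x, y, z) and a vector field F = (F₁, F₂, F₃); it is supplied here as a sketch rather than quoted from the article.

```latex
% Standard Cartesian expressions (a reference sketch, not taken from the article).
\[
\operatorname{grad} f = \nabla f
  = \Bigl(\tfrac{\partial f}{\partial x},\ \tfrac{\partial f}{\partial y},\ \tfrac{\partial f}{\partial z}\Bigr)
\]
\[
\operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F}
  = \tfrac{\partial F_1}{\partial x} + \tfrac{\partial F_2}{\partial y} + \tfrac{\partial F_3}{\partial z}
\]
\[
\operatorname{curl}\mathbf{F} = \nabla\times\mathbf{F}
  = \Bigl(\tfrac{\partial F_3}{\partial y}-\tfrac{\partial F_2}{\partial z},\ 
          \tfrac{\partial F_1}{\partial z}-\tfrac{\partial F_3}{\partial x},\ 
          \tfrac{\partial F_2}{\partial x}-\tfrac{\partial F_1}{\partial y}\Bigr)
\]
\[
\Delta f = \nabla\cdot\nabla f
  = \tfrac{\partial^2 f}{\partial x^2}+\tfrac{\partial^2 f}{\partial y^2}+\tfrac{\partial^2 f}{\partial z^2}
\]
```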
Integral theorems The three basic vector operators have corresponding theorems which generalize the fundamental theorem of calculus to higher dimensions: the gradient theorem, the divergence theorem, and the curl (Kelvin–Stokes) theorem. In two dimensions, the divergence and curl theorems reduce to Green's theorem. Applications Linear approximations Linear approximations are used to replace complicated functions with linear functions that are almost the same. Given a differentiable function f(x, y) with real values, one can approximate f(x, y) for (x, y) close to (a, b) by the formula f(x, y) ≈ f(a, b) + ∂f/∂x(a, b)·(x − a) + ∂f/∂y(a, b)·(y − b). The right-hand side is the equation of the plane tangent to the graph of z = f(x, y) at (a, b). Optimization For a continuously differentiable function of several real variables, a point P (that is, a set of values for the input variables, which is viewed as a point in ℝⁿ) is critical if all of the partial derivatives of the function are zero at P, or, equivalently, if its gradient is zero. The critical values are the values of the function at the critical points. If the function is smooth, or at least twice continuously differentiable, a critical point may be either a local maximum, a local minimum or a saddle point. The different cases may be distinguished by considering the eigenvalues of the Hessian matrix of second derivatives. By Fermat's theorem, all local maxima and minima of a differentiable function occur at critical points. Therefore, to find the local maxima and minima, it suffices, theoretically, to compute the zeros of the gradient and the eigenvalues of the Hessian matrix at these zeros. Generalizations Vector calculus can also be generalized to other 3-manifolds and higher-dimensional spaces. Different 3-manifolds Vector calculus is initially defined for Euclidean 3-space, which has additional structure beyond simply being a 3-dimensional real vector space, namely: a norm (giving a notion of length) defined via an inner product (the dot product), which in turn gives a notion of angle, and an orientation, which gives a notion of left-handed and right-handed. These structures give rise to a volume form, and also the cross product, which is used pervasively in vector calculus. The gradient and divergence require only the inner product, while the curl and the cross product also require the handedness of the coordinate system to be taken into account (see below for more detail). Vector calculus can be defined on other 3-dimensional real vector spaces if they have an inner product (or more generally a symmetric nondegenerate form) and an orientation; this is less data than an isomorphism to Euclidean space, as it does not require a set of coordinates (a frame of reference), which reflects the fact that vector calculus is invariant under rotations (the special orthogonal group SO(3)). More generally, vector calculus can be defined on any 3-dimensional oriented Riemannian manifold, or more generally pseudo-Riemannian manifold. This structure simply means that the tangent space at each point has an inner product (more generally, a symmetric nondegenerate form) and an orientation, or more globally that there is a symmetric nondegenerate metric tensor and an orientation, and works because vector calculus is defined in terms of tangent vectors at each point. Other dimensions Most of the analytic results are easily understood, in a more general form, using the machinery of differential geometry, of which vector calculus forms a subset. Grad and div generalize immediately to other dimensions, as do the gradient theorem, divergence theorem, and Laplacian (yielding harmonic analysis), while curl and cross product do not generalize as directly.
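For concreteness, the three-dimensional theorems named in this section can be stated in a standard form; the display below is a reference sketch (with the usual smoothness and orientation assumptions on the fields, curves, surfaces, and regions involved) rather than a quotation from the article.

```latex
% Gradient theorem, for a curve C running from a point p to a point q:
\[ \int_{C} \nabla f \cdot d\mathbf{r} = f(q) - f(p) \]

% Kelvin–Stokes (curl) theorem, for a surface S with boundary curve \partial S:
\[ \iint_{S} (\nabla \times \mathbf{F}) \cdot d\mathbf{S} = \oint_{\partial S} \mathbf{F} \cdot d\mathbf{r} \]

% Divergence theorem, for a solid region V with boundary surface \partial V:
\[ \iiint_{V} (\nabla \cdot \mathbf{F})\, dV = \iint_{\partial V} \mathbf{F} \cdot d\mathbf{S} \]

% Green's theorem, the planar case to which the two theorems above reduce:
\[ \oint_{\partial D} (L\, dx + M\, dy)
   = \iint_{D} \Bigl(\tfrac{\partial M}{\partial x} - \tfrac{\partial L}{\partial y}\Bigr)\, dA \]
```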
From a general point of view, the various fields in (3-dimensional) vector calculus are uniformly seen as being k-vector fields: scalar fields are 0-vector fields, vector fields are 1-vector fields, pseudovector fields are 2-vector fields, and pseudoscalar fields are 3-vector fields. In higher dimensions there are additional types of fields (scalar, vector, pseudovector or pseudoscalar corresponding to 0, 1, n − 1 or n dimensions, which is exhaustive in dimension 3), so one cannot only work with (pseudo)scalars and (pseudo)vectors. In any dimension, assuming a nondegenerate form, grad of a scalar function is a vector field, and div of a vector field is a scalar function, but only in dimension 3 or 7 (and, trivially, in dimension 0 or 1) is the curl of a vector field a vector field, and only in 3 or 7 dimensions can a cross product be defined (generalizations in other dimensionalities either require n − 1 vectors to yield 1 vector, or are alternative Lie algebras, which are more general antisymmetric bilinear products). The generalization of grad and div, and how curl may be generalized is elaborated at Curl § Generalizations; in brief, the curl of a vector field is a bivector field, which may be interpreted as the special orthogonal Lie algebra of infinitesimal rotations; however, this cannot be identified with a vector field because the dimensions differ – there are 3 dimensions of rotations in 3 dimensions, but 6 dimensions of rotations in 4 dimensions (and more generally n(n − 1)/2 dimensions of rotations in n dimensions). There are two important alternative generalizations of vector calculus. The first, geometric algebra, uses k-vector fields instead of vector fields (in 3 or fewer dimensions, every k-vector field can be identified with a scalar function or vector field, but this is not true in higher dimensions). This replaces the cross product, which is specific to 3 dimensions, taking in two vector fields and giving as output a vector field, with the exterior product, which exists in all dimensions and takes in two vector fields, giving as output a bivector (2-vector) field. This product yields Clifford algebras as the algebraic structure on vector spaces (with an orientation and nondegenerate form). Geometric algebra is mostly used in generalizations of physics and other applied fields to higher dimensions. The second generalization uses differential forms (k-covector fields) instead of vector fields or k-vector fields, and is widely used in mathematics, particularly in differential geometry, geometric topology, and harmonic analysis, in particular yielding Hodge theory on oriented pseudo-Riemannian manifolds. From this point of view, grad, curl, and div correspond to the exterior derivative of 0-forms, 1-forms, and 2-forms, respectively, and the key theorems of vector calculus are all special cases of the general form of Stokes' theorem. From the point of view of both of these generalizations, vector calculus implicitly identifies mathematically distinct objects, which makes the presentation simpler but the underlying mathematical structure and generalizations less clear. From the point of view of geometric algebra, vector calculus implicitly identifies k-vector fields with vector fields or scalar functions: 0-vectors and 3-vectors with scalars, 1-vectors and 2-vectors with vectors. From the point of view of differential forms, vector calculus implicitly identifies k-forms with scalar fields or vector fields: 0-forms and 3-forms with scalar fields, 1-forms and 2-forms with vector fields.
Thus for example the curl naturally takes as input a vector field or 1-form, but naturally has as output a 2-vector field or 2-form (hence pseudovector field), which is then interpreted as a vector field, rather than directly taking a vector field to a vector field; this is reflected in the curl of a vector field in higher dimensions not having as output a vector field.
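A compact way to summarize the identifications described in the last two paragraphs, under the usual assumptions (three dimensions, a metric, and an orientation used to identify 1- and 2-forms with vector fields and 0- and 3-forms with scalar fields), is the following schematic; it is a sketch of the standard correspondence, not a formula taken from the article.

```latex
% The exterior derivative d on forms versus grad, curl, div in three dimensions.
\[
0\text{-forms} \xrightarrow{\;d\;\cong\;\operatorname{grad}\;} 1\text{-forms}
  \xrightarrow{\;d\;\cong\;\operatorname{curl}\;} 2\text{-forms}
  \xrightarrow{\;d\;\cong\;\operatorname{div}\;} 3\text{-forms}
\]
% The identity d \circ d = 0 then encodes the familiar vector identities:
\[
\operatorname{curl}(\operatorname{grad} f) = \mathbf{0},
\qquad
\operatorname{div}(\operatorname{curl}\mathbf{F}) = 0 .
\]
```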
Mathematics
Calculus and analysis
32653
https://en.wikipedia.org/wiki/Vaccine
Vaccine
A vaccine is a biological preparation that provides active acquired immunity to a particular infectious or malignant disease. The safety and effectiveness of vaccines has been widely studied and verified. A vaccine typically contains an agent that resembles a disease-causing microorganism and is often made from weakened or killed forms of the microbe, its toxins, or one of its surface proteins. The agent stimulates the body's immune system to recognize the agent as a threat, destroy it, and recognize further and destroy any of the microorganisms associated with that agent that it may encounter in the future. Vaccines can be prophylactic (to prevent or alleviate the effects of a future infection by a natural or "wild" pathogen), or therapeutic (to fight a disease that has already occurred, such as cancer). Some vaccines offer full sterilizing immunity, in which infection is prevented. The administration of vaccines is called vaccination. Vaccination is the most effective method of preventing infectious diseases; widespread immunity due to vaccination is largely responsible for the worldwide eradication of smallpox and the restriction of diseases such as polio, measles, and tetanus from much of the world. The World Health Organization (WHO) reports that licensed vaccines are currently available for twenty-five different preventable infections. The first recorded use of inoculation to prevent smallpox occurred in the 16th century in China, with the earliest hints of the practice in China coming during the 10th century. It was also the first disease for which a vaccine was produced. The folk practice of inoculation against smallpox was brought from Turkey to Britain in 1721 by Lady Mary Wortley Montagu. The terms vaccine and vaccination are derived from Variolae vaccinae (smallpox of the cow), the term devised by Edward Jenner (who both developed the concept of vaccines and created the first vaccine) to denote cowpox. He used the phrase in 1798 for the long title of his Inquiry into the Variolae vaccinae Known as the Cow Pox, in which he described the protective effect of cowpox against smallpox. In 1881, to honor Jenner, Louis Pasteur proposed that the terms should be extended to cover the new protective inoculations then being developed. The science of vaccine development and production is termed vaccinology. Effects There is overwhelming scientific consensus that vaccines are a very safe and effective way to fight and eradicate infectious diseases. The immune system recognizes vaccine agents as foreign, destroys them, and "remembers" them. When the virulent version of an agent is encountered, the body recognizes the protein coat on the agent, and thus is prepared to respond, by first neutralizing the target agent before it can enter cells, and secondly by recognizing and destroying infected cells before that agent can multiply to vast numbers. Limitations to their effectiveness, nevertheless, exist. Sometimes, protection fails for vaccine-related reasons such as failures in vaccine attenuation, vaccination regimens or administration. Failure may also occur for host-related reasons if the host's immune system does not respond adequately or at all. Host-related lack of response occurs in an estimated 2-10% of individuals, due to factors including genetics, immune status, age, health and nutritional status. 
One type of primary immunodeficiency disorder resulting in genetic failure is X-linked agammaglobulinemia, in which the absence of an enzyme essential for B cell development prevents the host's immune system from generating antibodies to a pathogen. Host–pathogen interactions and responses to infection are dynamic processes involving multiple pathways in the immune system. A host does not develop antibodies instantaneously: while the body's innate immunity may be activated in as little as twelve hours, adaptive immunity can take 1–2 weeks to fully develop. During that time, the host can still become infected. Once antibodies are produced, they may promote immunity in any of several ways, depending on the class of antibodies involved. Their success in clearing or inactivating a pathogen will depend on the amount of antibodies produced and on the extent to which those antibodies are effective at countering the strain of the pathogen involved, since different strains may be differently susceptible to a given immune reaction. In some cases vaccines may result in partial immune protection (in which immunity is less than 100% effective but still reduces risk of infection) or in temporary immune protection (in which immunity wanes over time) rather than full or permanent immunity. They can still raise the reinfection threshold for the population as a whole and make a substantial impact. They can also mitigate the severity of infection, resulting in a lower mortality rate, lower morbidity, faster recovery from illness, and a wide range of other effects. Those who are older often display less of a response than those who are younger, a pattern known as Immunosenescence. Adjuvants commonly are used to boost immune response, particularly for older people whose immune response to a simple vaccine may have weakened. The efficacy or performance of the vaccine is dependent on several factors: the disease itself (for some diseases vaccination performs better than for others) the strain of vaccine (some vaccines are specific to, or at least most effective against, particular strains of the disease) whether the vaccination schedule has been properly observed. idiosyncratic response to vaccination; some individuals are "non-responders" to certain vaccines, meaning that they do not generate antibodies even after being vaccinated correctly. assorted factors such as ethnicity, age, or genetic predisposition. If a vaccinated individual does develop the disease vaccinated against (breakthrough infection), the disease is likely to be less virulent than in unvaccinated cases. Important considerations in an effective vaccination program: careful modeling to anticipate the effect that an immunization campaign will have on the epidemiology of the disease in the medium to long term ongoing surveillance for the relevant disease following introduction of a new vaccine maintenance of high immunization rates, even when a disease has become rare In 1958, there were 763,094 cases of measles in the United States; 552 deaths resulted. After the introduction of new vaccines, the number of cases dropped to fewer than 150 per year (median of 56). In early 2008, there were 64 suspected cases of measles. Fifty-four of those infections were associated with importation from another country, although only thirteen percent were actually acquired outside the United States; 63 of the 64 individuals either had never been vaccinated against measles or were uncertain whether they had been vaccinated. 
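The efficacy discussed above is usually quantified in field studies by comparing attack rates in vaccinated and unvaccinated groups. The sketch below gives the conventional epidemiological formula, which the article itself does not spell out; the function name and the illustrative numbers are assumptions for demonstration only.

```python
def vaccine_efficacy(attack_rate_unvaccinated: float, attack_rate_vaccinated: float) -> float:
    """Conventional vaccine efficacy: the proportional reduction in the attack
    rate among vaccinated people relative to unvaccinated people,
    VE = (ARU - ARV) / ARU."""
    if attack_rate_unvaccinated <= 0:
        raise ValueError("unvaccinated attack rate must be positive")
    return (attack_rate_unvaccinated - attack_rate_vaccinated) / attack_rate_unvaccinated

# Hypothetical illustration: if 2% of unvaccinated contacts and 0.1% of
# vaccinated contacts fall ill, the measured efficacy is 95%.
print(vaccine_efficacy(0.02, 0.001))  # 0.95
```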
Vaccines led to the eradication of smallpox, one of the most contagious and deadly diseases in humans. Other diseases such as rubella, polio, measles, mumps, chickenpox, and typhoid are nowhere near as common as they were a hundred years ago thanks to widespread vaccination programs. As long as the vast majority of people are vaccinated, it is much more difficult for an outbreak of disease to occur, let alone spread. This effect is called herd immunity. Polio, which is transmitted only among humans, is targeted by an extensive eradication campaign that has seen endemic polio restricted to only parts of three countries (Afghanistan, Nigeria, and Pakistan). However, the difficulty of reaching all children, cultural misunderstandings, and disinformation have caused the anticipated eradication date to be missed several times. Vaccines also help prevent the development of antibiotic resistance. For example, by greatly reducing the incidence of pneumonia caused by Streptococcus pneumoniae, vaccine programs have greatly reduced the prevalence of infections resistant to penicillin or other first-line antibiotics. The measles vaccine is estimated to prevent a million deaths every year. Adverse effects Vaccinations given to children, adolescents, or adults are generally safe. Adverse effects, if any, are generally mild. The rate of side effects depends on the vaccine in question. Some common side effects include fever, pain around the injection site, and muscle aches. Additionally, some individuals may be allergic to ingredients in the vaccine. MMR vaccine is rarely associated with febrile seizures. Host-("vaccinee")-related determinants that render a person susceptible to infection, such as genetics, health status (underlying disease, nutrition, pregnancy, sensitivities or allergies), immune competence, age, and economic impact or cultural environment can be primary or secondary factors affecting the severity of infection and response to a vaccine. Elderly (above age 60), allergen-hypersensitive, and obese people have susceptibility to compromised immunogenicity, which prevents or inhibits vaccine effectiveness, possibly requiring separate vaccine technologies for these specific populations or repetitive booster vaccinations to limit virus transmission. Severe side effects are extremely rare. Varicella vaccine is rarely associated with complications in immunodeficient individuals, and rotavirus vaccines are moderately associated with intussusception. At least 19 countries have no-fault compensation programs to provide compensation for those with severe adverse effects of vaccination. The United States' program is known as the National Childhood Vaccine Injury Act, and the United Kingdom employs the Vaccine Damage Payment. Types Vaccines typically contain attenuated, inactivated or dead organisms or purified products derived from them. There are several types of vaccines in use. These represent different strategies used to try to reduce the risk of illness while retaining the ability to induce a beneficial immune response. Attenuated Some vaccines contain live, attenuated microorganisms. Many of these are active viruses that have been cultivated under conditions that disable their virulent properties, or that use closely related but less dangerous organisms to produce a broad immune response. Although most attenuated vaccines are viral, some are bacterial in nature. Examples include the viral diseases yellow fever, measles, mumps, and rubella, and the bacterial disease typhoid. 
The live Mycobacterium tuberculosis vaccine developed by Calmette and Guérin is not made of a contagious strain but contains a virulently modified strain called "BCG" used to elicit an immune response to the vaccine. The live attenuated vaccine containing strain Yersinia pestis EV is used for plague immunization. Attenuated vaccines have some advantages and disadvantages. Attenuated, or live, weakened, vaccines typically provoke more durable immunological responses. Attenuated vaccines also elicit a cellular and humoral response. However, they may not be safe for use in immunocompromised individuals, and on rare occasions mutate to a virulent form and cause disease. Inactivated Some vaccines contain microorganisms that have been killed or inactivated by physical or chemical means. Examples include IPV (polio vaccine), hepatitis A vaccine, rabies vaccine and most influenza vaccines. Toxoid Toxoid vaccines are made from inactivated toxic compounds that cause illness rather than the microorganism. Examples of toxoid-based vaccines include tetanus and diphtheria. Not all toxoids are for microorganisms; for example, Crotalus atrox toxoid is used to vaccinate dogs against rattlesnake bites. Subunit Rather than introducing an inactivated or attenuated microorganism to an immune system (which would constitute a "whole-agent" vaccine), a subunit vaccine uses a fragment of it to create an immune response. One example is the subunit vaccine against hepatitisB, which is composed of only the surface proteins of the virus (previously extracted from the blood serum of chronically infected patients but now produced by recombination of the viral genes into yeast). Other examples include the Gardasil virus-like particle human papillomavirus (HPV) vaccine, the hemagglutinin and neuraminidase subunits of the influenza virus, and edible algae vaccines. A subunit vaccine is being used for plague immunization. Conjugate Certain bacteria have a polysaccharide outer coat that is poorly immunogenic. By linking these outer coats to proteins (e.g., toxins), the immune system can be led to recognize the polysaccharide as if it were a protein antigen. This approach is used in the Haemophilus influenzae type B vaccine. Outer membrane vesicle Outer membrane vesicles (OMVs) are naturally immunogenic and can be manipulated to produce potent vaccines. The best known OMV vaccines are those developed for serotype B meningococcal disease. Heterotypic Heterologous vaccines also known as "Jennerian vaccines", are vaccines that are pathogens of other animals that either do not cause disease or cause mild disease in the organism being treated. The classic example is Jenner's use of cowpox to protect against smallpox. A current example is the use of BCG vaccine made from Mycobacterium bovis to protect against tuberculosis. Genetic vaccine Genetic vaccines are based on the principle of uptake of a nucleic acid into cells, whereupon a protein is produced according to the nucleic acid template. This protein is usually the immunodominant antigen of the pathogen or a surface protein that enables the formation of neutralizing antibodies. The subgroup of genetic vaccines encompass viral vector vaccines, RNA vaccines and DNA vaccines. Viral vector Viral vector vaccines use a safe virus to insert pathogen genes in the body to produce specific antigens, such as surface proteins, to stimulate an immune response. Viruses being researched for use as viral vectors include adenovirus, vaccinia virus, and VSV. 
RNA An mRNA vaccine (or RNA vaccine) is a novel type of vaccine which is composed of the nucleic acid RNA, packaged within a vector such as lipid nanoparticles. Among the COVID-19 vaccines are a number of RNA vaccines to combat the COVID-19 pandemic and some have been approved or have received emergency use authorization in some countries. For example, the Pfizer-BioNTech vaccine and Moderna mRNA vaccine are approved for use in adults and children in the US. DNA A DNA vaccine uses a DNA plasmid (pDNA) that encodes for an antigenic protein originating from the pathogen upon which the vaccine will be targeted. pDNA is inexpensive, stable, and relatively safe, making it an excellent option for vaccine delivery. This approach offers a number of potential advantages over traditional approaches, including the stimulation of both B- and T-cell responses, improved vaccine stability, the absence of any infectious agent and the relative ease of large-scale manufacture. Experimental Many innovative vaccines are also in development and use. Dendritic cell vaccines combine dendritic cells with antigens to present the antigens to the body's white blood cells, thus stimulating an immune reaction. These vaccines have shown some positive preliminary results for treating brain tumors and are also tested in malignant melanoma. Recombinant vector – by combining the physiology of one microorganism and the DNA of another, immunity can be created against diseases that have complex infection processes. An example is the RVSV-ZEBOV vaccine licensed to Merck that is being used in 2018 to combat Ebola in Congo. T-cell receptor peptide vaccines are under development for several diseases using models of Valley Fever, stomatitis, and atopic dermatitis. These peptides have been shown to modulate cytokine production and improve cell-mediated immunity. Targeting of identified bacterial proteins that are involved in complement inhibition would neutralize the key bacterial virulence mechanism. The use of plasmids has been validated in preclinical studies as a protective vaccine strategy for cancer and infectious diseases. However, in human studies, this approach has failed to provide clinically relevant benefit. The overall efficacy of plasmid DNA immunization depends on increasing the plasmid's immunogenicity while also correcting for factors involved in the specific activation of immune effector cells. Bacterial vector – Similar in principle to viral vector vaccines, but using bacteria instead. Antigen-presenting cell Technologies which may allow rapid vaccine deployment in response to a novel pathogen include the use of virus-like particles or protein nanoparticles. Inverse vaccines are vaccines that train the immune system to not respond to certain substances. While most vaccines are created using inactivated or attenuated compounds from microorganisms, synthetic vaccines are composed mainly or wholly of synthetic peptides, carbohydrates, or antigens. Valence Vaccines may be monovalent (also called univalent) or multivalent (also called polyvalent). A monovalent vaccine is designed to immunize against a single antigen or single microorganism. A multivalent or polyvalent vaccine is designed to immunize against two or more strains of the same microorganism, or against two or more microorganisms. The valency of a multivalent vaccine may be denoted with a Greek or Latin prefix (e.g., bivalent, trivalent, or tetravalent/quadrivalent).
In certain cases, a monovalent vaccine may be preferable for rapidly developing a strong immune response. Interactions When two or more vaccines are mixed in the same formulation, the two vaccines can interfere. This most frequently occurs with live attenuated vaccines, where one of the vaccine components is more robust than the others and suppresses the growth and immune response to the other components. This phenomenon was first noted in the trivalent Sabin polio vaccine, where the amount of serotype 2 virus in the vaccine had to be reduced to stop it from interfering with the "take" of the serotype 1 and 3 viruses in the vaccine. It was also noted in a 2001 study to be a problem with dengue vaccines, where the DEN-3 serotype was found to predominate and suppress the response to DEN-1, -2 and -4 serotypes. Other contents Adjuvants Vaccines typically contain one or more adjuvants, used to boost the immune response. Tetanus toxoid, for instance, is usually adsorbed onto alum. This presents the antigen in such a way as to produce a greater action than the simple aqueous tetanus toxoid. People who have an adverse reaction to adsorbed tetanus toxoid may be given the simple vaccine when the time comes for a booster. In the preparation for the 1990 Persian Gulf campaign, the whole cell pertussis vaccine was used as an adjuvant for anthrax vaccine. This produces a more rapid immune response than giving only the anthrax vaccine, which is of some benefit if exposure might be imminent. Preservatives Vaccines may also contain preservatives to prevent contamination with bacteria or fungi. Until recent years, the preservative thiomersal (Thimerosal in the US and Japan) was used in many vaccines that did not contain live viruses. As of 2005, the only childhood vaccine in the U.S. that contains thiomersal in greater than trace amounts is the influenza vaccine, which is currently recommended only for children with certain risk factors. Single-dose influenza vaccines supplied in the UK do not list thiomersal in the ingredients. Preservatives may be used at various stages of the production of vaccines, and the most sophisticated methods of measurement might detect traces of them in the finished product, as they may in the environment and population as a whole. Many vaccines need preservatives to prevent serious adverse effects such as Staphylococcus infection, which in one 1928 incident killed 12 of 21 children inoculated with a diphtheria vaccine that lacked a preservative. Several preservatives are available, including thiomersal, phenoxyethanol, and formaldehyde. Thiomersal is more effective against bacteria, has a better shelf-life, and improves vaccine stability, potency, and safety; but, in the U.S., the European Union, and a few other affluent countries, it is no longer used as a preservative in childhood vaccines, as a precautionary measure due to its mercury content. Although controversial claims have been made that thiomersal contributes to autism, no convincing scientific evidence supports these claims. Furthermore, a 10–11-year study of 657,461 children found that the MMR vaccine does not cause autism and actually reduced the risk of autism by seven percent. Excipients Besides the active vaccine itself, the following excipients and residual manufacturing compounds are present or may be present in vaccine preparations: Aluminum salts or gels are added as adjuvants.
Adjuvants are added to promote an earlier, more potent response, and more persistent immune response to the vaccine; they allow for a lower vaccine dosage. Antibiotics are added to some vaccines to prevent the growth of bacteria during production and storage of the vaccine. Egg protein is present in the influenza vaccine and yellow fever vaccine as they are prepared using chicken eggs. Other proteins may be present. Formaldehyde is used to inactivate bacterial products for toxoid vaccines. Formaldehyde is also used to inactivate unwanted viruses and kill bacteria that might contaminate the vaccine during production. Monosodium glutamate (MSG) and 2-phenoxyethanol are used as stabilizers in a few vaccines to help the vaccine remain unchanged when the vaccine is exposed to heat, light, acidity, or humidity. Thiomersal is a mercury-containing antimicrobial that is added to vials of vaccines that contain more than one dose to prevent contamination and growth of potentially harmful bacteria. Due to the controversy surrounding thiomersal, it has been removed from most vaccines except multi-use influenza, where it was reduced to levels so that a single dose contained less than a microgram of mercury, a level similar to eating ten grams of canned tuna. Nomenclature Various fairly standardized abbreviations for vaccine names have developed, although the standardization is by no means centralized or global. For example, the vaccine names used in the United States have well-established abbreviations that are also widely known and used elsewhere. An extensive list of them provided in a sortable table and freely accessible is available at a US Centers for Disease Control and Prevention web page. The page explains that "The abbreviations [in] this table (Column 3) were standardized jointly by staff of the Centers for Disease Control and Prevention, ACIP Work Groups, the editor of the Morbidity and Mortality Weekly Report (MMWR), the editor of Epidemiology and Prevention of Vaccine-Preventable Diseases (the Pink Book), ACIP members, and liaison organizations to the ACIP." Some examples are "DTaP" for diphtheria and tetanus toxoids and acellular pertussis vaccine, "DT" for diphtheria and tetanus toxoids, and "Td" for tetanus and diphtheria toxoids. At its page on tetanus vaccination, the CDC further explains that "Upper-case letters in these abbreviations denote full-strength doses of diphtheria (D) and tetanus (T) toxoids and pertussis (P) vaccine. Lower-case "d" and "p" denote reduced doses of diphtheria and pertussis used in the adolescent/adult-formulations. The 'a' in DTaP and Tdap stands for 'acellular', meaning that the pertussis component contains only a part of the pertussis organism." Another list of established vaccine abbreviations is at the CDC's page called "Vaccine Acronyms and Abbreviations", with abbreviations used on U.S. immunization records. The United States Adopted Name system has some conventions for the word order of vaccine names, placing head nouns first and adjectives postpositively. This is why the USAN for "OPV" is "poliovirus vaccine live oral" rather than "oral poliovirus vaccine". 
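As a small illustration of the convention just described (uppercase letters for full-strength components, lowercase for reduced doses, and "a"/"ap" for an acellular pertussis component), the example abbreviations mentioned in the text could be tabulated as follows; the wording of the descriptions is an editorial paraphrase, not the CDC's own.

```python
# Expansion of the example abbreviations discussed above.
# Uppercase letter = full-strength component, lowercase = reduced dose,
# "a"/"ap" = acellular pertussis component.
VACCINE_ABBREVIATIONS = {
    "DTaP": "diphtheria and tetanus toxoids with acellular pertussis (full-strength, pediatric)",
    "DT":   "diphtheria and tetanus toxoids (full-strength, no pertussis)",
    "Tdap": "tetanus toxoid with reduced diphtheria and reduced acellular pertussis (adolescent/adult)",
    "Td":   "tetanus toxoid with reduced diphtheria (adolescent/adult)",
}

for abbreviation, meaning in VACCINE_ABBREVIATIONS.items():
    print(f"{abbreviation}: {meaning}")
```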
Licensing A vaccine licensure occurs after the successful conclusion of the development cycle and further the clinical trials and other programs involved through PhasesI–III demonstrating safety, immunoactivity, immunogenetic safety at a given specific dose, proven effectiveness in preventing infection for target populations, and enduring preventive effect (time endurance or need for revaccination must be estimated). Because preventive vaccines are predominantly evaluated in healthy population cohorts and distributed among the general population, a high standard of safety is required. As part of a multinational licensing of a vaccine, the World Health Organization Expert Committee on Biological Standardization developed guidelines of international standards for manufacturing and quality control of vaccines, a process intended as a platform for national regulatory agencies to apply for their own licensing process. Vaccine manufacturers do not receive licensing until a complete clinical cycle of development and trials proves the vaccine is safe and has long-term effectiveness, following scientific review by a multinational or national regulatory organization, such as the European Medicines Agency (EMA) or the US Food and Drug Administration (FDA). Upon developing countries adopting WHO guidelines for vaccine development and licensure, each country has its own responsibility to issue a national licensure, and to manage, deploy, and monitor the vaccine throughout its use in each nation. Building trust and acceptance of a licensed vaccine among the public is a task of communication by governments and healthcare personnel to ensure a vaccination campaign proceeds smoothly, saves lives, and enables economic recovery. When a vaccine is licensed, it will initially be in limited supply due to variable manufacturing, distribution, and logistical factors, requiring an allocation plan for the limited supply and which population segments should be prioritized to first receive the vaccine. World Health Organization Vaccines developed for multinational distribution via the United Nations Children's Fund (UNICEF) require pre-qualification by the WHO to ensure international standards of quality, safety, immunogenicity, and efficacy for adoption by numerous countries. The process requires manufacturing consistency at WHO-contracted laboratories following Good Manufacturing Practice (GMP). When UN agencies are involved in vaccine licensure, individual nations collaborate by 1) issuing marketing authorization and a national license for the vaccine, its manufacturers, and distribution partners; and 2) conducting postmarketing surveillance, including records for adverse events after the vaccination program. The WHO works with national agencies to monitor inspections of manufacturing facilities and distributors for compliance with GMP and regulatory oversight. Some countries choose to buy vaccines licensed by reputable national organizations, such as EMA, FDA, or national agencies in other affluent countries, but such purchases typically are more expensive and may not have distribution resources suitable to local conditions in developing countries. European Union In the European Union (EU), vaccines for pandemic pathogens, such as seasonal influenza, are licensed EU-wide where all the member states comply ("centralized"), are licensed for only some member states ("decentralized"), or are licensed on an individual national level. 
Generally, all EU states follow regulatory guidance and clinical programs defined by the European Committee for Medicinal Products for Human Use (CHMP), a scientific panel of the European Medicines Agency (EMA) responsible for vaccine licensure. The CHMP is supported by several expert groups who assess and monitor the progress of a vaccine before and after licensure and distribution. United States Under the FDA, the process of establishing evidence for vaccine clinical safety and efficacy is the same as for the approval process for prescription drugs. If successful through the stages of clinical development, the vaccine licensing process is followed by a Biologics License Application which must provide a scientific review team (from diverse disciplines, such as physicians, statisticians, microbiologists, chemists) and comprehensive documentation for the vaccine candidate having efficacy and safety throughout its development. Also during this stage, the proposed manufacturing facility is examined by expert reviewers for GMP compliance, and the label must have a compliant description to enable health care providers' definition of vaccine-specific use, including its possible risks, to communicate and deliver the vaccine to the public. After licensure, monitoring of the vaccine and its production, including periodic inspections for GMP compliance, continue as long as the manufacturer retains its license, which may include additional submissions to the FDA of tests for potency, safety, and purity for each vaccine manufacturing step. India In India, the Drugs Controller General, the head of department of the Central Drugs Standard Control Organization, India's national regulatory body for cosmetics, pharmaceuticals and medical devices, is responsible for the approval of licences for specified categories of drugs such as vaccines and other medicinal items, such as blood or blood products, IV fluids, and sera. Postmarketing surveillance Until a vaccine is in use amongst the general population, all potential adverse events from the vaccine may not be known, requiring manufacturers to conduct PhaseIV studies for postmarketing surveillance of the vaccine while it is used widely in the public. The WHO works with UN member states to implement post-licensing surveillance. The FDA relies on a Vaccine Adverse Event Reporting System to monitor safety concerns about a vaccine throughout its use in the American public. Scheduling In order to provide the best protection, children are recommended to receive vaccinations as soon as their immune systems are sufficiently developed to respond to particular vaccines, with additional "booster" shots often required to achieve "full immunity". This has led to the development of complex vaccination schedules. Global recommendations of vaccination schedule are issued by Strategic Advisory Group of Experts and will be further translated by advisory committee at the country level with considering of local factors such as disease epidemiology, acceptability of vaccination, equity in local populations, and programmatic and financial constraint. In the United States, the Advisory Committee on Immunization Practices, which recommends schedule additions for the Centers for Disease Control and Prevention, recommends routine vaccination of children against hepatitis A, hepatitis B, polio, mumps, measles, rubella, diphtheria, pertussis, tetanus, HiB, chickenpox, rotavirus, influenza, meningococcal disease and pneumonia. 
The large number of vaccines and boosters recommended (up to 24 injections by age two) has led to problems with achieving full compliance. To combat declining compliance rates, various notification systems have been instituted and many combination injections are now marketed (e.g., Pentavalent vaccine and MMRV vaccine), which protect against multiple diseases. Besides recommendations for infant vaccinations and boosters, many specific vaccines are recommended for other ages or for repeated injections throughout life, most commonly for measles, tetanus, influenza, and pneumonia. Pregnant women are often screened for continued resistance to rubella. The human papillomavirus vaccine is recommended in the U.S. (as of 2011) and UK (as of 2009). Vaccine recommendations for the elderly concentrate on pneumonia and influenza, which are more deadly to that group. In 2006, a vaccine was introduced against shingles, a disease caused by the chickenpox virus, which usually affects the elderly. Scheduling and dosing of a vaccination may be tailored to the level of immunocompetence of an individual and to optimize population-wide deployment of a vaccine when its supply is limited, e.g. in the setting of a pandemic. Economics of development One challenge in vaccine development is economic: Many of the diseases most demanding a vaccine, including HIV, malaria and tuberculosis, exist principally in poor countries. Pharmaceutical firms and biotechnology companies have little incentive to develop vaccines for these diseases because there is little revenue potential. Even in more affluent countries, financial returns are usually minimal and the financial and other risks are great. Most vaccine development to date has relied on "push" funding by government, universities and non-profit organizations. Many vaccines have been highly cost effective and beneficial for public health. The number of vaccines actually administered has risen dramatically in recent decades. This increase, particularly in the number of different vaccines administered to children before entry into schools, may be due to government mandates and support, rather than economic incentive. Patents According to the World Health Organization, the biggest barrier to vaccine production in less developed countries has not been patents, but the substantial financial, infrastructure, and workforce requirements needed for market entry. Vaccines are complex mixtures of biological compounds, and unlike the case for prescription drugs, there are no true generic vaccines. The vaccine produced by a new facility must undergo complete clinical testing for safety and efficacy by the manufacturer. For most vaccines, specific processes in technology are patented. These can be circumvented by alternative manufacturing methods, but this requires R&D infrastructure and a suitably skilled workforce. In the case of a few relatively new vaccines, such as the human papillomavirus vaccine, the patents may impose an additional barrier. When increased production of vaccines was urgently needed during the COVID-19 pandemic in 2021, the World Trade Organization and governments around the world evaluated whether to waive intellectual property rights and patents on COVID-19 vaccines, which would "eliminate all potential barriers to the timely access of affordable COVID-19 medical products, including vaccines and medicines, and scale up the manufacturing and supply of essential medical products".
Production Vaccine production is fundamentally different from other kinds of manufacturing, including regular pharmaceutical manufacturing, in that vaccines are intended to be administered to millions of people of whom the vast majority are perfectly healthy. This fact drives an extraordinarily rigorous production process with strict compliance requirements that go far beyond what is required of other products. Depending upon the antigen, it can cost anywhere from US$50 to $500 million to build a vaccine production facility, which requires highly specialized equipment, clean rooms, and containment rooms. There is a global scarcity of personnel with the right combination of skills, expertise, knowledge, competence and personality to staff vaccine production lines. With the notable exceptions of Brazil, China, and India, many developing countries' educational systems are unable to provide enough qualified candidates, and vaccine makers based in such countries must hire expatriate personnel to keep production going. Vaccine production has several stages. First, the antigen itself is generated. Viruses are grown either on primary cells such as chicken eggs (e.g., for influenza) or on continuous cell lines such as cultured human cells (e.g., for hepatitis A). Bacteria are grown in bioreactors (e.g., Haemophilus influenzae type b). Likewise, a recombinant protein derived from the viruses or bacteria can be generated in yeast, bacteria, or cell cultures. After the antigen is generated, it is isolated from the cells used to generate it. A virus may need to be inactivated, possibly with no further purification required. Recombinant proteins need many operations involving ultrafiltration and column chromatography. Finally, the vaccine is formulated by adding adjuvant, stabilizers, and preservatives as needed. The adjuvant enhances the immune response to the antigen, stabilizers increase the storage life, and preservatives allow the use of multidose vials. Combination vaccines are harder to develop and produce, because of potential incompatibilities and interactions among the antigens and other ingredients involved. The final stage in vaccine manufacture before distribution is fill and finish, which is the process of filling vials with vaccines and packaging them for distribution. Although this is a conceptually simple part of the vaccine manufacture process, it is often a bottleneck in the process of distributing and administering vaccines. Vaccine production techniques are evolving. Cultured mammalian cells are expected to become increasingly important, compared to conventional options such as chicken eggs, due to greater productivity and low incidence of problems with contamination. Recombination technology that produces genetically detoxified vaccines is expected to grow in popularity for the production of bacterial vaccines that use toxoids. Combination vaccines are expected to reduce the quantities of antigens they contain, and thereby decrease undesirable interactions, by using pathogen-associated molecular patterns. Vaccine manufacturers The companies with the highest market share in vaccine production are Merck, Sanofi, GlaxoSmithKline, Pfizer and Novartis, with 70% of vaccine sales concentrated in the EU or US (2013). Vaccine manufacturing plants require large capital investments ($50 million up to $300 million) and may take between 4 and 6 years to construct, with the full process of vaccine development taking between 10 and 15 years.
Manufacturing in developing countries is playing an increasing role in supplying these countries, specifically with regards to older vaccines and in Brazil, India and China. The manufacturers in India are the most advanced in the developing world and include the Serum Institute of India, one of the largest producers of vaccines by number of doses and an innovator in processes, recently improving efficiency of producing the measles vaccine by 10 to 20-fold, due to switching to a MRC-5 cell culture instead of chicken eggs. China's manufacturing capabilities are focused on supplying their own domestic need, with Sinopharm (CNPGC) alone providing over 85% of the doses for 14 different vaccines in China. Brazil is approaching the point of supplying its own domestic needs using technology transferred from the developed world. Delivery systems One of the most common methods of delivering vaccines into the human body is injection. The development of new delivery systems raises the hope of vaccines that are safer and more efficient to deliver and administer. Lines of research include liposomes and ISCOM (immune stimulating complex). Notable developments in vaccine delivery technologies have included oral vaccines. Early attempts to apply oral vaccines showed varying degrees of promise, beginning early in the 20th century, at a time when the very possibility of an effective oral antibacterial vaccine was controversial. By the 1930s there was increasing interest in the prophylactic value of an oral typhoid fever vaccine for example. An oral polio vaccine turned out to be effective when vaccinations were administered by volunteer staff without formal training; the results also demonstrated increased ease and efficiency of administering the vaccines. Effective oral vaccines have many advantages; for example, there is no risk of blood contamination. Vaccines intended for oral administration need not be liquid, and as solids, they commonly are more stable and less prone to damage or spoilage by freezing in transport and storage. Such stability reduces the need for a "cold chain": the resources required to keep vaccines within a restricted temperature range from the manufacturing stage to the point of administration, which, in turn, may decrease costs of vaccines. A microneedle approach, which is still in stages of development, uses "pointed projections fabricated into arrays that can create vaccine delivery pathways through the skin". An experimental needle-free vaccine delivery system is undergoing animal testing. A stamp-size patch similar to an adhesive bandage contains about 20,000 microscopic projections per square cm. This dermal administration potentially increases the effectiveness of vaccination, while requiring less vaccine than injection. In veterinary medicine Vaccinations of animals are used both to prevent their contracting diseases and to prevent transmission of disease to humans. Both animals kept as pets and animals raised as livestock are routinely vaccinated. In some instances, wild populations may be vaccinated. This is sometimes accomplished with vaccine-laced food spread in a disease-prone area and has been used to attempt to control rabies in raccoons. Where rabies occurs, rabies vaccination of dogs may be required by law. Other canine vaccines include canine distemper, canine parvovirus, infectious canine hepatitis, adenovirus-2, leptospirosis, Bordetella, canine parainfluenza virus, and Lyme disease, among others. 
Cases of veterinary vaccines used in humans have been documented, whether intentional or accidental, with some cases of resultant illness, most notably with brucellosis. However, the reporting of such cases is rare and very little has been studied about the safety and results of such practices. With the advent of aerosol vaccination in veterinary clinics, human exposure to pathogens not naturally carried in humans, such as Bordetella bronchiseptica, has likely increased in recent years. In some cases, most notably rabies, the parallel veterinary vaccine against a pathogen may be as much as orders of magnitude more economical than the human one. DIVA vaccines DIVA (Differentiation of Infected from Vaccinated Animals), also known as SIVA (Segregation of Infected from Vaccinated Animals) vaccines, make it possible to differentiate between infected and vaccinated animals. DIVA vaccines carry at least one epitope less than the equivalent wild microorganism. An accompanying diagnostic test that detects the antibody against that epitope assists in identifying whether the animal has been vaccinated or not. The first DIVA vaccines (formerly termed marker vaccines and since 1999 coined as DIVA vaccines) and companion diagnostic tests were developed by J. T. van Oirschot and colleagues at the Central Veterinary Institute in Lelystad, The Netherlands. They found that some existing vaccines against pseudorabies (also termed Aujeszky's disease) had deletions in their viral genome (among which was the gE gene). Monoclonal antibodies were produced against that deletion and selected to develop an ELISA that demonstrated antibodies against gE. In addition, novel genetically engineered gE-negative vaccines were constructed. Along the same lines, DIVA vaccines and companion diagnostic tests against bovine herpesvirus1 infections have been developed. The DIVA strategy has been applied in various countries to successfully eradicate pseudorabies virus from those countries. Swine populations were intensively vaccinated and monitored by the companion diagnostic test and, subsequently, the infected pigs were removed from the population. Bovine herpesvirus1 DIVA vaccines are also widely used in practice. Considerable efforts are ongoing to apply the DIVA principle to a wide range of infectious diseases, such as classical swine fever, avian influenza, Actinobacillus pleuropneumonia and Salmonella infections in pigs. History Prior to the introduction of vaccination with material from cases of cowpox (heterotypic immunisation), smallpox could be prevented by deliberate variolation with smallpox virus. The earliest hints of the practice of variolation for smallpox in China come during the tenth century. The Chinese also practiced the oldest documented use of variolation, dating back to the fifteenth century. They implemented a method of "nasal insufflation" administered by blowing powdered smallpox material, usually scabs, up the nostrils. Various insufflation techniques have been recorded throughout the sixteenth and seventeenth centuries within China. Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700; one by Martin Lister who received a report by an employee of the East India Company stationed in China and another by Clopton Havers. In France, Voltaire reports that the Chinese have practiced variolation "these hundred years". 
Mary Wortley Montagu, who had witnessed variolation in Turkey, had her four-year-old daughter variolated in the presence of physicians of the Royal Court in 1721 upon her return to England. Later that year, Charles Maitland conducted an experimental variolation of six prisoners in Newgate Prison in London. The experiment was a success, and soon variolation was drawing attention from the royal family, who helped promote the procedure. However, in 1783, several days after Prince Octavius of Great Britain was inoculated, he died. In 1796, the physician Edward Jenner took pus from the hand of a milkmaid with cowpox, scratched it into the arm of an 8-year-old boy, James Phipps, and six weeks later variolated the boy with smallpox, afterwards observing that he did not catch smallpox. Jenner extended his studies and, in 1798, reported that his vaccine was safe in children and adults, and could be transferred from arm to arm, which reduced reliance on uncertain supplies from infected cows. In 1804, the Spanish Balmis smallpox vaccination expedition to Spain's colonies in Mexico and the Philippines used arm-to-arm transfer of the cowpox vaccine to get around the fact that the vaccine survived for only 12 days in vitro. Since vaccination with cowpox was much safer than smallpox inoculation, the latter, though still widely practiced in England, was banned in 1840. Following on from Jenner's work, the second generation of vaccines was introduced in the 1880s by Louis Pasteur, who developed vaccines for chicken cholera and anthrax, and from the late nineteenth century vaccines were considered a matter of national prestige. National vaccination policies were adopted and compulsory vaccination laws were passed. In 1931, Alice Miles Woodruff and Ernest Goodpasture documented that the fowlpox virus could be grown in embryonated chicken eggs. Soon scientists began cultivating other viruses in eggs. Eggs were used for virus propagation in the development of a yellow fever vaccine in 1935 and an influenza vaccine in 1945. In 1959, growth media and cell culture replaced eggs as the standard method of virus propagation for vaccines. Vaccinology flourished in the twentieth century, which saw the introduction of several successful vaccines, including those against diphtheria, measles, mumps, and rubella. Major achievements included the development of the polio vaccine in the 1950s and the eradication of smallpox during the 1960s and 1970s. Maurice Hilleman was the most prolific vaccine developer of the twentieth century. As vaccines became more common, many people began taking them for granted. However, vaccines remain elusive for many important diseases, including herpes simplex, malaria, gonorrhea, and HIV. Generations of vaccines First generation vaccines are whole-organism vaccines, either live and weakened or killed forms. Live, attenuated vaccines, such as smallpox and polio vaccines, are able to induce killer T-cell (TC or CTL) responses, helper T-cell (TH) responses and antibody immunity. However, attenuated forms of a pathogen can convert to a dangerous form and may cause disease in immunocompromised vaccine recipients (such as those with AIDS). While killed vaccines do not have this risk, they cannot generate specific killer T-cell responses and may not work at all for some diseases. Second generation vaccines were developed to reduce the risks from live vaccines.
These are subunit vaccines, consisting of specific protein antigens (such as tetanus or diphtheria toxoid) or recombinant protein components (such as the hepatitis B surface antigen). They can generate TH and antibody responses, but not killer T cell responses. RNA vaccines and DNA vaccines are examples of third generation vaccines. In 2016 a DNA vaccine for the Zika virus began testing at the National Institutes of Health. Separately, Inovio Pharmaceuticals and GeneOne Life Science began tests of a different DNA vaccine against Zika in Miami. Manufacturing the vaccines in volume was unsolved as of 2016. Clinical trials for DNA vaccines to prevent HIV are underway. mRNA vaccines such as BNT162b2 were developed in the year 2020 with the help of Operation Warp Speed and massively deployed to combat the COVID-19 pandemic. In 2021, Katalin Karikó and Drew Weissman received Columbia University's Horwitz Prize for their pioneering research in mRNA vaccine technology. Trends Since at least 2013, scientists have been trying to develop synthetic third-generation vaccines by reconstructing the outside structure of a virus; it was hoped that this will help prevent vaccine resistance. Principles that govern the immune response can now be used in tailor-made vaccines against many noninfectious human diseases, such as cancers and autoimmune disorders. For example, the experimental vaccine CYT006-AngQb has been investigated as a possible treatment for high blood pressure. Factors that affect the trends of vaccine development include progress in translatory medicine, demographics, regulatory science, political, cultural, and social responses. Plants as bioreactors for vaccine production The idea of vaccine production via transgenic plants was identified as early as 2003. Plants such as tobacco, potato, tomato, and banana can have genes inserted that cause them to produce vaccines usable for humans. In 2005, bananas were developed that produce a human vaccine against hepatitis B. Vaccine hesitancy Vaccine hesitancy is a delay in acceptance, or refusal of vaccines despite the availability of vaccine services. The term covers outright refusals to vaccinate, delaying vaccines, accepting vaccines but remaining uncertain about their use, or using certain vaccines but not others. There is an overwhelming scientific consensus that vaccines are generally safe and effective. Vaccine hesitancy often results in disease outbreaks and deaths from vaccine-preventable diseases. The World Health Organization therefore characterized vaccine hesitancy as one of the top ten global health threats in 2019.
Biology and health sciences
Drugs and medication
null
32654
https://en.wikipedia.org/wiki/Veterinary%20medicine
Veterinary medicine
Veterinary medicine is the branch of medicine that deals with the prevention, management, diagnosis, and treatment of disease, disorder, and injury in non-human animals. The scope of veterinary medicine is wide, covering all animal species, both domesticated and wild, with a wide range of conditions that can affect different species. Veterinary medicine is widely practiced, both with and without professional supervision. Professional care is most often led by a veterinary physician (also known as a veterinarian, veterinary surgeon, or "vet"), but also by paraveterinary workers, such as veterinary nurses, veterinary technicians, and veterinary assistants. This can be augmented by other paraprofessionals with specific specialties, such as animal physiotherapy or dentistry, and species-relevant roles such as farriers. Veterinary science helps human health through the monitoring and control of zoonotic disease (infectious disease transmitted from nonhuman animals to humans), food safety, and through human applications via medical research. They also help to maintain food supply through livestock health monitoring and treatment, and mental health by keeping pets healthy and long-living. Veterinary scientists often collaborate with epidemiologists and other health or natural scientists, depending on type of work. Ethically, veterinarians are usually obliged to look after animal welfare. Veterinarians diagnose, treat, and help keep animals safe and healthy. History Premodern era Archeological evidence, in the form of a cow skull upon which trepanation had been performed, shows that people were performing veterinary procedures in the Neolithic (3400–3000 BCE). The Egyptian Papyrus of Kahun (Twelfth Dynasty of Egypt) is the first extant record of veterinary medicine. The Shalihotra Samhita, dating from the time of Ashoka, is an early Indian veterinary treatise. The edicts of Asoka read: "Everywhere King Piyadasi (Asoka) made two kinds of medicine (चिकित्सा) available, medicine for people, and medicine for animals. Where no healing herbs for people and animals were available, he ordered that they be bought and planted." Hippiatrica is a Byzantine compilation of hippiatrics, dated to the fifth or sixth century AD. The first attempts to organize and regulate the practice of treating animals tended to focus on horses because of their economic significance. In the Middle Ages, farriers combined their work in horseshoeing with the more general task of "horse doctoring". The Arabic tradition of Bayṭara, or Shiyāt al-Khayl, originates with the treatise of Ibn Akhī Hizām (fl. late ninth century). In 1356, the Lord Mayor of London, Sir Henry Picard, concerned at the poor standard of care given to horses in the city, requested that all farriers operating within a 7-mile (11-km) radius of the City of London form a "fellowship" to regulate and improve their practices. This ultimately led to the establishment of the Worshipful Company of Farriers in 1674. Meanwhile, Carlo Ruini's book (Anatomy of the Horse) was published in 1598. It was the first comprehensive treatise on the anatomy of a nonhuman species. Establishment of profession The first veterinary school was founded in Lyon, France, in 1762 by Claude Bourgelat. According to Lupton, after observing the devastation being caused by cattle plague to the French herds, Bourgelat devoted his time to seeking out a remedy. 
This resulted in founding a veterinary school in Lyon in 1761, from which establishment he dispatched students to combat the disease; in a short time, the plague was stayed and the health of stock restored, through the assistance rendered to agriculture by veterinary science and art. The school received immediate international recognition in the 18th century and its pedagogical model drew on the existing fields of human medicine, natural history, and comparative anatomy. The Swedish veterinary education received funding 1774, and was officially started May 8th 1775 when the king Gustaf III signed the document. Peter Hernquist, who had studied for Carl von Linné in Uppsala, and also studied in Lyon with Claude Bourgelat, was head of school and is considered father of veterinary medicine in Sweden. The Odiham Agricultural Society was founded in 1783 in England to promote agriculture and industry, and played an important role in the foundation of the veterinary profession in Britain. A founding member, Thomas Burgess, began to take up the cause of animal welfare and campaign for the more humane treatment of sick animals. A 1785 society meeting resolved to "promote the study of Farriery upon rational scientific principles." Physician James Clark wrote a treatise entitled Prevention of Disease in which he argued for the professionalization of the veterinary trade, and the establishment of veterinary colleges. This was finally achieved in 1790, through the campaigning of Granville Penn, who persuaded Frenchman Benoit Vial de St. Bel to accept the professorship of the newly established veterinary college in London. The Royal College of Veterinary Surgeons was established by royal charter in 1844. Veterinary science came of age in the late 19th century, with notable contributions from Sir John McFadyean, credited by many as having been the founder of modern veterinary research. In the United States, the first schools were established in the early 19th century in Boston, New York City, and Philadelphia. In 1879, Iowa Agricultural College became the first land-grant college to establish a school of veterinary medicine. Veterinary workers Veterinary physicians Veterinary care and management are usually led by a veterinary physician (usually called a veterinarian, veterinary surgeon or "vet") who has received their doctor of veterinary medicine degree. This role is the equivalent of a physician or surgeon (medical doctor) in human medicine, and involves postgraduate study and qualification. In many countries, the local nomenclature for a vet is a protected term, meaning that people without the prerequisite qualifications and/or registration are not able to use the title, and in many cases, the activities that may be undertaken by a vet (such as animal treatment or surgery) are restricted only to those people who are registered as vet. For instance, in the United Kingdom, as in other jurisdictions, animal treatment may be performed only by registered vets (with a few designated exceptions, such as paraveterinary workers), calling oneself a vet without being registered or performing any treatment is illegal. Most vets work in clinical settings, treating animals directly. 
They may be involved in a general practice, treating animals of all types; may be specialized in a specific group of animals such as companion animals, livestock, laboratory animals, zoo animals, or horses; or may specialize in a narrow medical discipline such as veterinary surgery, dermatology, cardiology, neurology, laboratory animal medicine, internal medicine, and more. As healthcare professionals, vets face ethical decisions about the care of their patients. Current debates within the profession include the veterinary ethics of purely cosmetic procedures on animals, such as declawing of cats, docking of tails, cropping of ears, and debarking on dogs. A wide range of surgeries and operations is performed on various types of animals, but not all of them are carried out by vets. In a case in Iran, for instance, an eye surgeon managed to perform a successful cataract surgery on a rooster for the first time in the world. Paraveterinary workers Paraveterinary workers, including veterinary nurses, veterinary technicians, and veterinary assistants, either assist vets in their work, or may work within their own scope of practice, depending on skills and qualifications, including in some cases, performing minor surgery. The role of paraveterinary workers is less homogeneous globally than that of a vet, and qualification levels, and the associated skill mix, vary widely. Allied professions A number of professions exist within the scope of veterinary medicine, but may not necessarily be performed by vets or veterinary nurses. This includes those performing roles which are also found in human medicine, such as practitioners dealing with musculoskeletal disorders, including osteopaths, chiropractors, and physiotherapists. Some roles are specific to animals, but which have parallels in human society, such as animal grooming and animal massage. Some roles are specific to a species or group of animals, such as farriers, who are involved in the shoeing of horses, and in many cases have a major role to play in ensuring the medical fitness of horses. Veterinary research Veterinary research includes prevention, control, diagnosis, and treatment of diseases of animals, and basic biology, welfare, and care of animals. Veterinary research transcends species boundaries and includes the study of spontaneously occurring and experimentally induced models of both human and animal diseases and research at human-animal interfaces, such as food safety, wildlife and ecosystem health, zoonotic diseases, and public policy. By value the most important Animal Health pharmaceutical supplier worldwide is by far Zoetis (United States). Clinical veterinary research As in medicine, randomized controlled trials also are fundamental in veterinary medicine to establish the effectiveness of a treatment. Clinical veterinary research is far behind human medical research, though, with fewer randomized controlled trials, that have a lower quality and are mostly focused on research animals. Possible improvement consists in creation of networks for inclusion of private veterinary practices in randomized controlled trials. Although the FDA approves drugs for use in humans, the FDA keeps a separate "Green Book", which lists drugs approved specifically for veterinary medicine (about half of which are separately approved for use in humans). No studies exist on the effect of community animal health services on improving household wealth and the health status of low-income farmers. 
The first recorded use of regenerative stem-cell therapy to treat lesions in a wild animal occurred in 2011 in Brazil, where stem cells were used to treat a maned wolf that had been run over by a car; the animal made a full recovery and was returned to the wild.
Biology and health sciences
General concepts
null
32664
https://en.wikipedia.org/wiki/Virial%20theorem
Virial theorem
In mechanics, the virial theorem provides a general equation that relates the average over time of the total kinetic energy of a stable system of discrete particles, bound by a conservative force (where the work done is independent of path), with that of the total potential energy of the system. Mathematically, the theorem states that where is the total kinetic energy of the particles, represents the force on the th particle, which is located at position , and angle brackets represent the average over time of the enclosed quantity. The word virial for the right-hand side of the equation derives from , the Latin word for "force" or "energy", and was given its technical definition by Rudolf Clausius in 1870. The significance of the virial theorem is that it allows the average total kinetic energy to be calculated even for very complicated systems that defy an exact solution, such as those considered in statistical mechanics; this average total kinetic energy is related to the temperature of the system by the equipartition theorem. However, the virial theorem does not depend on the notion of temperature and holds even for systems that are not in thermal equilibrium. The virial theorem has been generalized in various ways, most notably to a tensor form. If the force between any two particles of the system results from a potential energy that is proportional to some power of the interparticle distance , the virial theorem takes the simple form Thus, twice the average total kinetic energy equals times the average total potential energy . Whereas represents the potential energy between two particles of distance , represents the total potential energy of the system, i.e., the sum of the potential energy over all pairs of particles in the system. A common example of such a system is a star held together by its own gravity, where History In 1870, Rudolf Clausius delivered the lecture "On a Mechanical Theorem Applicable to Heat" to the Association for Natural and Medical Sciences of the Lower Rhine, following a 20-year study of thermodynamics. The lecture stated that the mean vis viva of the system is equal to its virial, or that the average kinetic energy is one half of the average potential energy. The virial theorem can be obtained directly from Lagrange's identity as applied in classical gravitational dynamics, the original form of which was included in Lagrange's "Essay on the Problem of Three Bodies" published in 1772. Carl Jacobi's generalization of the identity to N bodies and to the present form of Laplace's identity closely resembles the classical virial theorem. However, the interpretations leading to the development of the equations were very different, since at the time of development, statistical dynamics had not yet unified the separate studies of thermodynamics and classical dynamics. The theorem was later utilized, popularized, generalized and further developed by James Clerk Maxwell, Lord Rayleigh, Henri Poincaré, Subrahmanyan Chandrasekhar, Enrico Fermi, Paul Ledoux, Richard Bader and Eugene Parker. Fritz Zwicky was the first to use the virial theorem to deduce the existence of unseen matter, which is now called dark matter. Richard Bader showed that the charge distribution of a total system can be partitioned into its kinetic and potential energies that obey the virial theorem. As another example of its many applications, the virial theorem has been used to derive the Chandrasekhar limit for the stability of white dwarf stars. 
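Written out in conventional notation (symbols as standard: T the total kinetic energy, F_k the force on particle k at position r_k, and angle brackets denoting time averages), the statement of the theorem and its power-law special case are:

```latex
\langle T \rangle \;=\; -\frac{1}{2} \sum_{k=1}^{N} \bigl\langle \mathbf{F}_k \cdot \mathbf{r}_k \bigr\rangle

% If the pair potential scales as V(r) = a r^n, then
2\,\langle T \rangle \;=\; n\,\langle V_\mathrm{TOT} \rangle

% and for a self-gravitating system (n = -1), such as a star bound by its own gravity,
2\,\langle T \rangle \;=\; -\,\langle V_\mathrm{TOT} \rangle
```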
Illustrative special case Consider particles with equal mass , acted upon by mutually attractive forces. Suppose the particles are at diametrically opposite points of a circular orbit with radius . The velocities are and , which are normal to forces and . The respective magnitudes are fixed at and . The average kinetic energy of the system in an interval of time from to is Taking center of mass as the origin, the particles have positions and with fixed magnitude . The attractive forces act in opposite directions as positions, so . Applying the centripetal force formula results in as required. Note: If the origin is displaced, then we'd obtain the same result. This is because the dot product of the displacement with equal and opposite forces , results in net cancellation. Statement and derivation Although the virial theorem depends on averaging the total kinetic and potential energies, the presentation here postpones the averaging to the last step. For a collection of point particles, the scalar moment of inertia about the origin is where and represent the mass and position of the th particle. is the position vector magnitude. Consider the scalar where is the momentum vector of the th particle. Assuming that the masses are constant, is one-half the time derivative of this moment of inertia: In turn, the time derivative of is where is the mass of the th particle, is the net force on that particle, and is the total kinetic energy of the system according to the velocity of each particle, Connection with the potential energy between particles The total force on particle is the sum of all the forces from the other particles in the system: where is the force applied by particle on particle . Hence, the virial can be written as Since no particle acts on itself (i.e., for ), we split the sum in terms below and above this diagonal and add them together in pairs: where we have used Newton's third law of motion, i.e., (equal and opposite reaction). It often happens that the forces can be derived from a potential energy that is a function only of the distance between the point particles and . Since the force is the negative gradient of the potential energy, we have in this case which is equal and opposite to , the force applied by particle on particle , as may be confirmed by explicit calculation. Hence, Thus Special case of power-law forces In a common special case, the potential energy between two particles is proportional to a power of their distance : where the coefficient and the exponent are constants. In such cases, the virial is where is the total potential energy of the system. Thus For gravitating systems the exponent equals −1, giving Lagrange's identity which was derived by Joseph-Louis Lagrange and extended by Carl Jacobi. Time averaging The average of this derivative over a duration is defined as from which we obtain the exact equation The virial theorem states that if , then There are many reasons why the average of the time derivative might vanish. One often-cited reason applies to stably bound systems, that is, to systems that hang together forever and whose parameters are finite. In this case, velocities and coordinates of the particles of the system have upper and lower limits, so that is bounded between two extremes, and , and the average goes to zero in the limit of infinite : Even if the average of the time derivative of is only approximately zero, the virial theorem holds to the same degree of approximation. 
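The derivation just sketched can be condensed to a few standard steps; the notation below (masses m_k, momenta p_k, positions r_k) follows the usual conventions rather than any particular source.

```latex
% Moment of inertia about the origin and the auxiliary scalar G:
I = \sum_k m_k r_k^2, \qquad
G = \sum_k \mathbf{p}_k \cdot \mathbf{r}_k = \tfrac{1}{2}\,\frac{\mathrm{d}I}{\mathrm{d}t}

% Its time derivative splits into a kinetic part and the virial:
\frac{\mathrm{d}G}{\mathrm{d}t} = 2T + \sum_k \mathbf{F}_k \cdot \mathbf{r}_k

% Averaging over a time \tau; for a bound system the left-hand side vanishes as \tau \to \infty:
\frac{1}{\tau}\int_0^{\tau} \frac{\mathrm{d}G}{\mathrm{d}t}\,\mathrm{d}t
  = \frac{G(\tau) - G(0)}{\tau} \;\longrightarrow\; 0
\quad\Longrightarrow\quad
2\,\langle T \rangle = -\sum_k \bigl\langle \mathbf{F}_k \cdot \mathbf{r}_k \bigr\rangle
```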
For power-law forces with an exponent , the general equation holds: For gravitational attraction, , and the average kinetic energy equals half of the average negative potential energy: This general result is useful for complex gravitating systems such as planetary systems or galaxies. A simple application of the virial theorem concerns galaxy clusters. If a region of space is unusually full of galaxies, it is safe to assume that they have been together for a long time, and the virial theorem can be applied. Doppler effect measurements give lower bounds for their relative velocities, and the virial theorem gives a lower bound for the total mass of the cluster, including any dark matter. If the ergodic hypothesis holds for the system under consideration, the averaging need not be taken over time; an ensemble average can also be taken, with equivalent results. In quantum mechanics Although originally derived for classical mechanics, the virial theorem also holds for quantum mechanics, as first shown by Fock using the Ehrenfest theorem. Evaluate the commutator of the Hamiltonian with the position operator and the momentum operator of particle , Summing over all particles, one finds that for the commutator is where is the kinetic energy. The left-hand side of this equation is just , according to the Heisenberg equation of motion. The expectation value of this time derivative vanishes in a stationary state, leading to the quantum virial theorem: Pokhozhaev's identity In the field of quantum mechanics, there exists another form of the virial theorem, applicable to localized solutions to the stationary nonlinear Schrödinger equation or Klein–Gordon equation, is Pokhozhaev's identity, also known as Derrick's theorem. Let be continuous and real-valued, with . Denote . Let be a solution to the equation in the sense of distributions. Then satisfies the relation In special relativity For a single particle in special relativity, it is not the case that . Instead, it is true that , where is the Lorentz factor and . We have The last expression can be simplified to Thus, under the conditions described in earlier sections (including Newton's third law of motion, , despite relativity), the time average for particles with a power law potential is In particular, the ratio of kinetic energy to potential energy is no longer fixed, but necessarily falls into an interval: where the more relativistic systems exhibit the larger ratios. Examples The virial theorem has a particularly simple form for periodic motion. It can be used to perform perturbative calculation for nonlinear oscillators. It can also be used to study motion in a central potential. If the central potential is of the form , the virial theorem simplifies to . In particular, for gravitational or electrostatic (Coulomb) attraction, . Driven damped harmonic oscillator Analysis based on Sivardiere, 1986. For a one-dimensional oscillator with mass , position , driving force , spring constant , and damping coefficient , the equation of motion is When the oscillator has reached a steady state, it performs a stable oscillation , where is the amplitude, and is the phase angle. Applying the virial theorem, we have , which simplifies to , where is the natural frequency of the oscillator. To solve the two unknowns, we need another equation. In steady state, the power lost per cycle is equal to the power gained per cycle: which simplifies to . 
Now we have two equations that yield the solution Ideal-gas law Consider a container filled with an ideal gas consisting of point masses. The force applied to the point masses is the negative of the forces applied to the wall of the container, which is of the form , where is the unit normal vector pointing outwards. Then the virial theorem states that By the divergence theorem, . And since the average total kinetic energy , we have . Dark matter In 1933, Fritz Zwicky applied the virial theorem to estimate the mass of Coma Cluster, and discovered a discrepancy of mass of about 450, which he explained as due to "dark matter". He refined the analysis in 1937, finding a discrepancy of about 500. Theoretical analysis He approximated the Coma cluster as a spherical "gas" of stars of roughly equal mass , which gives . The total gravitational potential energy of the cluster is , giving . Assuming the motion of the stars are all the same over a long enough time (ergodicity), . Zwicky estimated as the gravitational potential of a uniform ball of constant density, giving . So by the virial theorem, the total mass of the cluster is Data Zwicky estimated that there are galaxies in the cluster, each having observed stellar mass (suggested by Hubble), and the cluster has radius . He also measured the radial velocities of the galaxies by doppler shifts in galactic spectra to be . Assuming equipartition of kinetic energy, . By the virial theorem, the total mass of the cluster should be . However, the observed mass is , meaning the total mass is 450 times that of observed mass. Generalizations Lord Rayleigh published a generalization of the virial theorem in 1900, which was partially reprinted in 1903. Henri Poincaré proved and applied a form of the virial theorem in 1911 to the problem of formation of the Solar System from a proto-stellar cloud (then known as cosmogony). A variational form of the virial theorem was developed in 1945 by Ledoux. A tensor form of the virial theorem was developed by Parker, Chandrasekhar and Fermi. The following generalization of the virial theorem has been established by Pollard in 1964 for the case of the inverse square law: A boundary term otherwise must be added. Inclusion of electromagnetic fields The virial theorem can be extended to include electric and magnetic fields. The result is where is the moment of inertia, is the momentum density of the electromagnetic field, is the kinetic energy of the "fluid", is the random "thermal" energy of the particles, and are the electric and magnetic energy content of the volume considered. Finally, is the fluid-pressure tensor expressed in the local moving coordinate system and is the electromagnetic stress tensor, A plasmoid is a finite configuration of magnetic fields and plasma. With the virial theorem it is easy to see that any such configuration will expand if not contained by external forces. In a finite configuration without pressure-bearing walls or magnetic coils, the surface integral will vanish. Since all the other terms on the right hand side are positive, the acceleration of the moment of inertia will also be positive. It is also easy to estimate the expansion time . If a total mass is confined within a radius , then the moment of inertia is roughly , and the left hand side of the virial theorem is . The terms on the right hand side add up to about , where is the larger of the plasma pressure or the magnetic pressure. 
Equating these two terms and solving for , we find where is the speed of the ion acoustic wave (or the Alfvén wave, if the magnetic pressure is higher than the plasma pressure). Thus the lifetime of a plasmoid is expected to be on the order of the acoustic (or Alfvén) transit time. Relativistic uniform system In case when in the physical system the pressure field, the electromagnetic and gravitational fields are taken into account, as well as the field of particles’ acceleration, the virial theorem is written in the relativistic form as follows: where the value exceeds the kinetic energy of the particles by a factor equal to the Lorentz factor of the particles at the center of the system. Under normal conditions we can assume that , then we can see that in the virial theorem the kinetic energy is related to the potential energy not by the coefficient , but rather by the coefficient close to 0.6. The difference from the classical case arises due to considering the pressure field and the field of particles’ acceleration inside the system, while the derivative of the scalar is not equal to zero and should be considered as the material derivative. An analysis of the integral theorem of generalized virial makes it possible to find, on the basis of field theory, a formula for the root-mean-square speed of typical particles of a system without using the notion of temperature: where is the speed of light, is the acceleration field constant, is the mass density of particles, is the current radius. Unlike the virial theorem for particles, for the electromagnetic field the virial theorem is written as follows: where the energy considered as the kinetic field energy associated with four-current , and sets the potential field energy found through the components of the electromagnetic tensor. In astrophysics The virial theorem is frequently applied in astrophysics, especially relating the gravitational potential energy of a system to its kinetic or thermal energy. Some common virial relations are for a mass , radius , velocity , and temperature . The constants are Newton's constant , the Boltzmann constant , and proton mass . Note that these relations are only approximate, and often the leading numerical factors (e.g. or ) are neglected entirely. Galaxies and cosmology (virial mass and radius) In astronomy, the mass and size of a galaxy (or general overdensity) is often defined in terms of the "virial mass" and "virial radius" respectively. Because galaxies and overdensities in continuous fluids can be highly extended (even to infinity in some models, such as an isothermal sphere), it can be hard to define specific, finite measures of their mass and size. The virial theorem, and related concepts, provide an often convenient means by which to quantify these properties. In galaxy dynamics, the mass of a galaxy is often inferred by measuring the rotation velocity of its gas and stars, assuming circular Keplerian orbits. Using the virial theorem, the velocity dispersion can be used in a similar way. Taking the kinetic energy (per particle) of the system as , and the potential energy (per particle) as we can write Here is the radius at which the velocity dispersion is being measured, and is the mass within that radius. The virial mass and radius are generally defined for the radius at which the velocity dispersion is a maximum, i.e. 
As numerous approximations have been made, in addition to the approximate nature of these definitions, order-unity proportionality constants are often omitted (as in the above equations). These relations are thus only accurate in an order of magnitude sense, or when used self-consistently. An alternate definition of the virial mass and radius is often used in cosmology where it is used to refer to the radius of a sphere, centered on a galaxy or a galaxy cluster, within which virial equilibrium holds. Since this radius is difficult to determine observationally, it is often approximated as the radius within which the average density is greater, by a specified factor, than the critical density where is the Hubble parameter and is the gravitational constant. A common choice for the factor is 200, which corresponds roughly to the typical over-density in spherical top-hat collapse (see Virial mass), in which case the virial radius is approximated as The virial mass is then defined relative to this radius as Stars The virial theorem is applicable to the cores of stars, by establishing a relation between gravitational potential energy and thermal kinetic energy (i.e. temperature). As stars on the main sequence convert hydrogen into helium in their cores, the mean molecular weight of the core increases and it must contract to maintain enough pressure to support its own weight. This contraction decreases its potential energy and, the virial theorem states, increases its thermal energy. The core temperature increases even as energy is lost, effectively a negative specific heat. This continues beyond the main sequence, unless the core becomes degenerate since that causes the pressure to become independent of temperature and the virial relation with equals −1 no longer holds.
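The main relations used in the astrophysical applications above can be collected in one place. These are the standard order-of-magnitude forms (numerical factors of order unity omitted, as the text notes); M is a mass, R a radius, σ a velocity dispersion, H the Hubble parameter, and ρ_c the critical density.

```latex
% Virial mass estimate for a cluster or galaxy from a measured velocity dispersion \sigma:
M \langle v^2 \rangle \;\sim\; \frac{G M^2}{R}
\quad\Longrightarrow\quad
M_\mathrm{vir} \;\sim\; \frac{\sigma^2 R}{G}

% Cosmological virial radius and mass, defined via a mean overdensity of 200\,\rho_c:
\rho_c = \frac{3 H^2}{8 \pi G}, \qquad
M_{200} = \frac{4}{3}\pi\, r_{200}^{3} \cdot 200\,\rho_c

% Stellar "negative specific heat": for a bound self-gravitating configuration (n = -1)
E \;=\; \langle T \rangle + \langle V_\mathrm{TOT} \rangle \;=\; -\,\langle T \rangle ,
% so when the star radiates energy away (E decreases), \langle T \rangle and the core temperature rise.
```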
Physical sciences
Basics_4
Physics
32712
https://en.wikipedia.org/wiki/Vega
Vega
Vega is the brightest star in the northern constellation of Lyra. It has the Bayer designation α Lyrae, which is Latinised to Alpha Lyrae and abbreviated Alpha Lyr or α Lyr. This star is relatively close at only from the Sun, and one of the most luminous stars in the Sun's neighborhood. It is the fifth-brightest star in the night sky, and the second-brightest star in the northern celestial hemisphere, after Arcturus. Vega has been extensively studied by astronomers, leading it to be termed "arguably the next most important star in the sky after the Sun". Vega was the northern pole star around 12,000 BCE and will be so again around the year 13,727, when its declination will be . Vega was the first star other than the Sun to have its image and spectrum photographed. It was one of the first stars whose distance was estimated through parallax measurements. Vega has functioned as the baseline for calibrating the photometric brightness scale and was one of the stars used to define the zero point for the UBV photometric system. Vega is only about a tenth of the age of the Sun, but since it is 2.1 times as massive, its expected lifetime is also one tenth of that of the Sun; both stars are at present approaching the midpoint of their main sequence lifetimes. Compared with the Sun, Vega has a lower abundance of elements heavier than helium. Vega is also a variable star—that is, a star whose brightness fluctuates. It is rotating rapidly with a speed of at the equator. This causes the equator to bulge outward due to centrifugal effects, and, as a result, there is a variation of temperature across the star's photosphere that reaches a maximum at the poles. From Earth, Vega is observed from the direction of one of these poles. Based on observations of more infrared radiation than expected, Vega appears to have a circumstellar disk of dust. This dust is likely to be the result of collisions between objects in an orbiting debris disk, which is analogous to the Kuiper belt in the Solar System. Stars that display an infrared excess due to dust emission are termed Vega-like stars. Observations by the James Webb Space Telescope show that the disk is exceptionally smooth, with no evidence of shaping by massive planets, though there is some evidence that there may be one or more Neptune-mass planets closer to the star. Nomenclature α Lyrae (Latinised to Alpha Lyrae) is the star's Bayer designation. The traditional name Vega (earlier Wega) comes from a loose transliteration of the Arabic word (Arabic: واقع) meaning "falling" or "landing", via the phrase (Arabic: النّسر الْواقع), "the falling eagle". In 2016, the International Astronomical Union (IAU) organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Vega for this star. It is now so entered in the IAU Catalog of Star Names. Observation Vega can often be seen near the zenith in the mid-northern latitudes during the evening in the Northern Hemisphere summer. From mid-southern latitudes, it can be seen low above the northern horizon during the Southern Hemisphere winter. With a declination of +38.78°, Vega can only be viewed at latitudes north of 51° S. Therefore, it does not rise at all anywhere in Antarctica or in the southernmost part of South America, including Punta Arenas, Chile (53° S). At latitudes to the north of 51° N, Vega remains continuously above the horizon as a circumpolar star. 
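The visibility limits quoted above follow from simple spherical-astronomy arithmetic: ignoring refraction, a star of declination δ never rises for observers south of latitude δ − 90° and is circumpolar for observers north of latitude 90° − δ. A quick numerical check with Vega's declination:

```python
declination = 38.78  # Vega's declination in degrees, as quoted above

# Southern limit of visibility: below this latitude the star never rises.
never_rises_south_of = declination - 90.0      # ≈ -51.2°, i.e. about 51° S

# Northern limit for rising and setting: above this latitude the star is circumpolar.
circumpolar_north_of = 90.0 - declination      # ≈ 51.2° N

print(f"not visible south of latitude {never_rises_south_of:.1f}°")
print(f"circumpolar north of latitude {circumpolar_north_of:.1f}° N")
```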
Around July 1, Vega reaches midnight culmination when it crosses the meridian at that time. Complementarily, Vega swoops down and kisses the horizon at true North at midnight on Dec 31/Jan 1, as seen from 51° N. Each night the positions of the stars appear to change as the Earth rotates. However, when a star is located along the Earth's axis of rotation, it will remain in the same position and thus is called a pole star. The direction of the Earth's axis of rotation gradually changes over time in a process known as the precession of the equinoxes. A complete precession cycle requires 25,770 years, during which time the pole of the Earth's rotation follows a circular path across the celestial sphere that passes near several prominent stars. At present the pole star is Polaris, but around 12,000 BCE the pole was pointed only five degrees away from Vega. Through precession, the pole will again pass near Vega around 14,000 CE. Vega is the brightest of the successive northern pole stars. In 210,000 years, Vega will become the brightest star in the night sky, and will peak in brightness in 290,000 years with an apparent magnitude of –0.81. This star lies at a vertex of a widely spaced asterism called the Summer Triangle, which consists of Vega plus the two first-magnitude stars Altair, in Aquila, and Deneb in Cygnus. This formation is the approximate shape of a right triangle, with Vega located at its right angle. The Summer Triangle is recognizable in the northern skies for there are few other bright stars in its vicinity. Observational history Astrophotography, the photography of celestial objects, began in 1840 when John William Draper took an image of the Moon using the daguerreotype process. On 17 July 1850, Vega became the first star (other than the Sun) to be photographed, when it was imaged by William Bond and John Adams Whipple at the Harvard College Observatory, also with a daguerreotype. In August 1872, Henry Draper took a photograph of Vega's spectrum, the first photograph of a star's spectrum showing absorption lines. Similar lines had already been identified in the spectrum of the Sun. In 1879, William Huggins used photographs of the spectra of Vega and similar stars to identify a set of twelve "very strong lines" that were common to this stellar category. These were later identified as lines from the Hydrogen Balmer series. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. The distance to Vega can be determined by measuring its parallax shift against the background stars as the Earth orbits the Sun. Giuseppe Calandrelli noted stellar parallax in 1805-6 and came up with a 4-second value for the star which was a gross overestimate. The first person to publish a star's parallax was Friedrich G. W. von Struve, when he announced a value of 0.125 arcsecond () for Vega. Friedrich Bessel was skeptical about Struve's data, and, when Bessel published a parallax of 0.314″ for the star system 61 Cygni, Struve revised his value for Vega's parallax to nearly double the original estimate. This change cast further doubt on Struve's data. Thus most astronomers at the time, including Struve, credited Bessel with the first published parallax result. However, Struve's initial result was actually close to the currently accepted value of 0.129″, as determined by the Hipparcos astrometry satellite. The brightness of a star, as seen from Earth, is measured with a standardized, logarithmic scale. 
This apparent magnitude is a numerical value that decreases in value with increasing brightness of the star. The faintest stars visible to the unaided eye are sixth magnitude, while the brightest in the night sky, Sirius, is of magnitude −1.46. To standardize the magnitude scale, astronomers chose Vega and several similar stars and averaged their brightness to represent magnitude zero at all wavelengths. Thus, for many years, Vega was used as a baseline for the calibration of absolute photometric brightness scales. However, this is no longer the case, as the apparent magnitude zero point is now commonly defined in terms of a particular numerically specified flux. This approach is more convenient for astronomers, since Vega is not always available for calibration and varies in brightness. The UBV photometric system measures the magnitude of stars through ultraviolet, blue and yellow filters, producing U, B and V values, respectively. Vega is one of six A0V stars that were used to set the initial mean values for this photometric system when it was introduced in the 1950s. The mean magnitudes for these six stars were defined as: = = 0. In effect, the magnitude scale has been calibrated so that the magnitude of these stars is the same in the yellow, blue and ultraviolet parts of the electromagnetic spectrum. Thus, Vega has a relatively flat electromagnetic spectrum in the visual region—wavelength range 350–850 nanometers, most of which can be seen with the human eye—so the flux densities are roughly equal; 2,000–. However, the flux density of Vega drops rapidly in the infrared, and is near at . Photometric measurements of Vega during the 1930s appeared to show that the star had a low-magnitude variability on the order of ±0.03 magnitude (around ±2.8% luminosity). This range of variability was near the limits of observational capability for that time, and so the subject of Vega's variability has been controversial. The magnitude of Vega was measured again in 1981 at the David Dunlap Observatory and showed some slight variability. Thus it was suggested that Vega showed occasional low-amplitude pulsations associated with a Delta Scuti variable. This is a category of stars that oscillate in a coherent manner, resulting in periodic pulsations in the star's luminosity. Although Vega fits the physical profile for this type of variable, other observers have found no such variation. Thus the variability was thought to possibly be the result of systematic errors in measurement. However, a 2007 article surveyed these and other results, and concluded that "A conservative analysis of the foregoing results suggests that Vega is quite likely variable in the 1–2% range, with possible occasional excursions to as much as 4% from the mean". Also, a 2011 article affirms that "The long-term (year-to-year) variability of Vega was confirmed". Vega became the first solitary main-sequence star beyond the Sun known to be an X-ray emitter when in 1979 it was observed from an imaging X-ray telescope launched on an Aerobee 350 from the White Sands Missile Range. In 1983, Vega became the first star found to have a disk of dust. The Infrared Astronomical Satellite (IRAS) discovered an excess of infrared radiation coming from the star, and this was attributed to energy emitted by the orbiting dust as it was heated by the star. Physical characteristics Vega's spectral class is A0V, making it a blue-tinged white main-sequence star that is fusing hydrogen to helium in its core. 
Since more massive stars use their fusion fuel more quickly than smaller ones, Vega's main-sequence lifetime is roughly one billion years, a tenth of the Sun's. The current age of this star is about 455 million years, or up to about half its expected total main-sequence lifespan. After leaving the main sequence, Vega will become a class-M red giant and shed much of its mass, finally becoming a white dwarf. At present, Vega has more than twice the mass of the Sun and its bolometric luminosity is about 40 times the Sun's. Because it is rotating rapidly, approximately once every 16.5 hours, and seen nearly pole-on, its apparent luminosity, calculated assuming it was the same brightness all over, is about 57 times the Sun's. If Vega is variable, then it may be a Delta Scuti type with a period of about 0.107 day. Most of the energy produced at Vega's core is generated by the carbon–nitrogen–oxygen cycle (CNO cycle), a nuclear fusion process that combines protons to form helium nuclei through intermediary nuclei of carbon, nitrogen and oxygen. This process becomes dominant at a temperature of about 17 million K, which is slightly higher than the core temperature of the Sun, but is less efficient than the Sun's proton–proton chain fusion reaction. The CNO cycle is highly temperature sensitive, which results in a convection zone about the core that evenly distributes the 'ash' from the fusion reaction within the core region. The overlying atmosphere is in radiative equilibrium. This is in contrast to the Sun, which has a radiation zone centered on the core with an overlying convection zone. The energy flux from Vega has been precisely measured against standard light sources. At , the flux density is with an error margin of 2%. The visual spectrum of Vega is dominated by absorption lines of hydrogen; specifically by the hydrogen Balmer series with the electron at the n=2 principal quantum number. The lines of other elements are relatively weak, with the strongest being ionized magnesium, iron and chromium. The X-ray emission from Vega is very low, demonstrating that the corona for this star must be very weak or non-existent. However, as the pole of Vega is facing Earth and a polar coronal hole may be present, confirmation of a corona as the likely source of the X-rays detected from Vega (or the region very close to Vega) may be difficult as most of any coronal X-rays would not be emitted along the line of sight. Using spectropolarimetry, a magnetic field has been detected on the surface of Vega by a team of astronomers at the Observatoire du Pic du Midi. This is the first such detection of a magnetic field on a spectral class A star that is not an Ap chemically peculiar star. The average line of sight component of this field has a strength of gauss (G). This is comparable to the mean magnetic field on the Sun. Magnetic fields of roughly 30 G have been reported for Vega, compared to about 1 G for the Sun. In 2015, bright starspots were detected on the star's surface—the first such detection for a normal A-type star, and these features show evidence of rotational modulation with a period of 0.68 day. Rotation Vega has a rotation period of 16.3 hours, much faster than the Sun's rotational period but similar to, and slightly slower than, those of Jupiter and Saturn. Because of that, Vega is significantly oblate like those two planets. When the radius of Vega was measured to high accuracy with an interferometer, it resulted in an unexpectedly large estimated value of times the radius of the Sun. 
This is 60% larger than the radius of the star Sirius, while stellar models indicated it should only be about 12% larger. However, this discrepancy can be explained if Vega is a rapidly rotating star that is being viewed from the direction of its pole of rotation. Observations by the CHARA array in 2005–06 confirmed this deduction. The pole of Vega—its axis of rotation—is inclined no more than five degrees from the line-of-sight to the Earth. At the high end of estimates for the rotation velocity for Vega is along the equator, much higher than the observed (i.e. projected) rotational velocity because Vega is seen almost pole-on. This is 88% of the speed that would cause the star to start breaking up from centrifugal effects. This rapid rotation of Vega produces a pronounced equatorial bulge, so the radius of the equator is 19% larger than the polar radius, compared to just under 11% for Saturn, the most oblate of the Solar System's planets. (The estimated polar radius of this star is solar radii, while the equatorial radius is solar radii.) From the Earth, this bulge is being viewed from the direction of its pole, producing the overly large radius estimate. The local surface gravity at the poles is greater than at the equator, which produces a variation in effective temperature over the star: the polar temperature is near , while the equatorial temperature is about . This large temperature difference between the poles and the equator produces a strong gravity darkening effect. As viewed from the poles, this results in a darker (lower-intensity) limb than would normally be expected for a spherically symmetric star. The temperature gradient may also mean that Vega has a convection zone around the equator, while the remainder of the atmosphere is likely to be in almost pure radiative equilibrium. By the Von Zeipel theorem, the local luminosity is higher at the poles. As a result, if Vega were viewed along the plane of its equator instead of almost pole-on, then its overall brightness would be lower. As Vega had long been used as a standard star for calibrating telescopes, the discovery that it is rapidly rotating may challenge some of the underlying assumptions that were based on it being spherically symmetric. With the viewing angle and rotation rate of Vega now better known, this will allow improved instrument calibrations. Element abundance In astronomy, those elements with higher atomic numbers than helium are termed "metals". The metallicity of Vega's photosphere is only about 32% of the abundance of heavy elements in the Sun's atmosphere. (Compare this, for example, to a threefold metallicity abundance in the similar star Sirius as compared to the Sun.) For comparison, the Sun has an abundance of elements heavier than helium of about ZSol = . Thus, in terms of abundances, only about 0.54% of Vega consists of elements heavier than helium. Nitrogen is slightly more abundant, oxygen is only marginally less abundant and sulfur abundance is about 50% of solar. On the other hand, Vega has only 10% to 30% of the solar abundance for most other major elements with barium and scandium below 10%. The unusually low metallicity of Vega makes it a weak Lambda Boötis star. However, the reason for the existence of such chemically peculiar, spectral class A0–F0 stars remains unclear. One possibility is that the chemical peculiarity may be the result of diffusion or mass loss, although stellar models show that this would normally only occur near the end of a star's hydrogen-burning lifespan. 
Another possibility is that the star formed from an interstellar medium of gas and dust that was unusually metal-poor. The observed helium to hydrogen ratio in Vega is , which is about 40% lower than the Sun. This may be caused by the disappearance of a helium convection zone near the surface. Energy transfer is instead performed by the radiative process, which may be causing an abundance anomaly through diffusion. Kinematics The radial velocity of Vega is the component of this star's motion along the line-of-sight to the Earth. Movement away from the Earth will cause the light from Vega to shift to a lower frequency (toward the red), or to a higher frequency (toward the blue) if the motion is toward the Earth. Thus the velocity can be measured from the amount of shift of the star's spectrum. Precise measurements of this blueshift give a value of . The minus sign indicates a relative motion toward the Earth. Motion transverse to the line of sight causes the position of Vega to shift with respect to the more distant background stars. Careful measurement of the star's position allows this angular movement, known as proper motion, to be calculated. Vega's proper motion is (mas) per year in right ascension—the celestial equivalent of longitude—and in declination, which is equivalent to a change in latitude. The net proper motion of Vega is , which results in angular movement of a degree every . In the galactic coordinate system, the space velocity components of Vega are (U, V, W) = , for a net space velocity of . The radial component of this velocity—in the direction of the Sun—is , while the transverse velocity is . Although Vega is at present only the fifth-brightest star in the night sky, the star is slowly brightening as proper motion causes it to approach the Sun. Vega will make its closest approach in an estimated 264,000 years at a perihelion distance of . Based on this star's kinematic properties, it appears to belong to a stellar association called the Castor Moving Group. However, Vega may be much older than this group, so the membership remains uncertain. This group contains about 16 stars, including Alpha Librae, Alpha Cephei, Castor, Fomalhaut and Vega. All members of the group are moving in nearly the same direction with similar space velocities. Membership in a moving group implies a common origin for these stars in an open cluster that has since become gravitationally unbound. The estimated age of this moving group is , and they have an average space velocity of . Possible planetary system Infrared excess One of the early results from the Infrared Astronomy Satellite (IRAS) was the discovery of excess infrared flux coming from Vega, beyond what would be expected from the star alone. This excess was measured at wavelengths of 25, 60 and , and came from within an angular radius of () centered on the star. At the measured distance of Vega, this corresponded to an actual radius of (AU), where an AU is the average radius of the Earth's orbit around the Sun. It was proposed that this radiation came from a field of orbiting particles with a dimension on the order of a millimetre, as anything smaller would eventually be removed from the system by radiation pressure or drawn into the star by means of Poynting–Robertson drag. The latter is the result of radiation pressure creating an effective force that opposes the orbital motion of a dust particle, causing it to spiral inward. This effect is most pronounced for tiny particles that are closer to the star. 
Subsequent measurements of Vega at showed a lower than expected flux for the hypothesized particles, suggesting that they must instead be on the order of or less. To maintain this amount of dust in orbit around Vega, a continual source of replenishment would be required. A proposed mechanism for maintaining the dust was a disk of coalesced bodies that were in the process of collapsing to form a planet. Models fitted to the dust distribution around Vega indicate that it is a 120-astronomical-unit-radius circular disk viewed from nearly pole-on. In addition, there is a hole in the center of the disk with a radius of no less than . Following the discovery of an infrared excess around Vega, other stars have been found that display a similar anomaly that is attributable to dust emission. As of 2002, about 400 of these stars have been found, and they have come to be termed "Vega-like" or "Vega-excess" stars. It is believed that these may provide clues to the origin of the Solar System. Debris disks By 2005, the Spitzer Space Telescope had produced high-resolution infrared images of the dust around Vega. It was shown to extend out to 43″ () at a wavelength of , 70″ () at and () at . These much wider disks were found to be circular and free of clumps, with dust particles ranging from 1– in size. The estimated total mass of this dust is about 3×10−3 times the mass of the Earth (around 7.5 times more massive than the asteroid belt). Production of the dust would require collisions between asteroids in a population corresponding to the Kuiper Belt around the Sun. Thus the dust is more likely created by a debris disk around Vega, rather than from a protoplanetary disk as was earlier thought. The inner boundary of the debris disk was estimated at , or 70–. The disk of dust is produced as radiation pressure from Vega pushes debris from collisions of larger objects outward. However, continuous production of the amount of dust observed over the course of Vega's lifetime would require an enormous starting mass—estimated as hundreds of times the mass of Jupiter. Hence it is more likely to have been produced as the result of a relatively recent breakup of a moderate-sized (or larger) comet or asteroid, which then further fragmented as the result of collisions between the smaller components and other bodies. This dusty disk would be relatively young on the time scale of the star's age, and it will eventually be removed unless other collision events supply more dust. Observations, first with the Palomar Testbed Interferometer by David Ciardi and Gerard van Belle in 2001 and then later confirmed with the CHARA array at Mt. Wilson in 2006 and the Infrared Optical Telescope Array at Mt. Hopkins in 2011, revealed evidence for an inner dust band around Vega. Originating within of the star, this exozodiacal dust may be evidence of dynamical perturbations within the system. This may be caused by an intense bombardment of comets or meteors, and may be evidence for the existence of a planetary system. The disk was also observed with ALMA in 2020, the LMT in 2022 and with Hubble STIS and JWST MIRI in 2024. The ALMA image resolved the outer disk for the first time. The Hubble observation provided the first image of the disk in scattered light and revealed an outer halo made up of small dust grains. JWST observations also detected the halo, the outer disk and, for the first time, the inner disk. The infrared observations also showed a gap at 60 AU for the first time. 
The dust interior to the outer disk is consistent with dust being dragged inward by the Poynting–Robertson effect. The inner edge of the inner disk is hidden behind the coronagraph, but it was inferred to be 3–5 AU from photometry. The star is also surrounded by hot infrared excess, located in the sub-AU region, leaving a second gap between the inner disk and the hot dust around the star. This hot infrared excess lies within about 0.2 AU of the star and is made up of small grains, like graphite and iron and manganese oxides, as had been inferred previously. Possible planets Observations from the James Clerk Maxwell Telescope in 1997 revealed an "elongated bright central region" that peaked at 9″ () to the northeast of Vega. This was hypothesized as either a perturbation of the dust disk by a planet or else an orbiting object that was surrounded by dust. However, images by the Keck telescope had ruled out a companion down to magnitude 16, which would correspond to a body with more than 12 times the mass of Jupiter. Astronomers at the Joint Astronomy Centre in Hawaii and at UCLA suggested that the image may indicate a planetary system still undergoing formation. Determining the nature of the planet has not been straightforward; a 2002 paper hypothesizes that the clumps are caused by a roughly Jupiter-mass planet on an eccentric orbit. Dust would collect in orbits that have mean-motion resonances with this planet—where their orbital periods form integer fractions with the period of the planet—producing the resulting clumpiness. In 2003, it was hypothesized that these clumps could be caused by a roughly Neptune-mass planet having migrated outward from about 40 AU over 56 million years, ending in an orbit large enough to allow the formation of smaller rocky planets closer to Vega. The migration of this planet would likely require gravitational interaction with a second, higher-mass planet in a smaller orbit. Using a coronagraph on the Subaru Telescope in Hawaii in 2005, astronomers were able to further constrain the size of a planet orbiting Vega to no more than 5–10 times the mass of Jupiter. The issue of possible clumps in the debris disk was revisited in 2007 using newer, more sensitive instrumentation on the Plateau de Bure Interferometer. The observations showed that the debris ring is smooth and symmetric. No evidence was found of the blobs reported earlier, casting doubts on the hypothesized giant planet. The smooth structure has been confirmed in follow-up observations by Hughes et al. (2012) and the Herschel Space Telescope. Although a planet has yet to be directly observed around Vega, the presence of a planetary system cannot yet be ruled out. Thus there could be smaller, terrestrial planets orbiting closer to the star. The inclination of planetary orbits around Vega is likely to be closely aligned to the equatorial plane of this star. From the perspective of an observer on a hypothetical planet around Vega, the Sun would appear as a faint 4.3-magnitude star in the Columba constellation. In 2021, a paper analyzing 10 years of spectra of Vega detected a candidate 2.43-day signal around Vega, statistically estimated to have only a 1% chance of being a false positive. Considering the amplitude of the signal, the authors estimated a minimum mass of Earth masses, but because Vega itself is viewed nearly pole-on, inclined only 6.2° from Earth's perspective, the planet's orbit may be aligned with this plane as well, giving it an actual mass of Earth masses. 
The researchers also detected a faint -day signal which could translate to Earth masses ( at 6.2° inclination) but is too faint to claim as a real signal with available data. Observations of the disk with JWST MIRI revealed a very circular, face-on disk. The morphology indicates that there is no planet more massive than Saturn beyond 10 AU. The disk has a gap at around 60 AU. Gap-opening planets have been inferred for disks around other stars, and the team tested this idea for Vega by running simulations. The simulations showed that a planet with a mass of <6 at 65 AU would introduce interior asymmetric structures that are not seen in the disk of Vega. Any gap-opening planet would need to be less massive. Additionally, the inner edge of the inner disk was inferred to be at 3–5 AU. Vega also shows evidence for hot infrared excess in the sub-AU region. The inner boundary of the warm debris might indicate that there is a Neptune-mass planet inside, shepherding it. Etymology and cultural significance The name is believed to be derived from the Arabic term Al Nesr al Waki النسر الواقع which appeared in the Al Achsasi al Mouakket star catalogue and was translated into Latin as Vultur Cadens, "the falling eagle/vulture". The constellation was represented as a vulture in ancient Egypt, and as an eagle or vulture in ancient India. The Arabic name then appeared in the western world in the Alfonsine tables, which were drawn up between 1215 and 1270 by order of King Alfonso X. Medieval astrolabes of England and Western Europe used the names Wega and Alvaca, and depicted it and Altair as birds. Among the northern Polynesian people, Vega was known as whetu o te tau, the year star. For a period of history it marked the start of their new year when the ground would be prepared for planting. Eventually this function became denoted by the Pleiades. The Assyrians named this pole star Dayan-same, the "Judge of Heaven", while in Akkadian it was Tir-anna, "Life of Heaven". In Babylonian astronomy, Vega may have been one of the stars named Dilgan, "the Messenger of Light". To the ancient Greeks, the constellation Lyra was formed from the harp of Orpheus, with Vega as its handle. For the Roman Empire, the start of autumn was based upon the hour at which Vega set below the horizon. In Chinese, (), meaning Weaving Girl (asterism), refers to an asterism consisting of Vega, ε Lyrae and ζ1 Lyrae. Consequently, the Chinese name for Vega is (, ). In Chinese mythology, there is a love story of Qixi () in which Niulang (, Altair) and his two children (β Aquilae and γ Aquilae) are separated from their mother Zhinü (, lit. "weaver girl", Vega) who is on the far side of the river, the Milky Way. However, one day per year on the seventh day of the seventh month of the Chinese lunisolar calendar, magpies make a bridge so that Niulang and Zhinü can be together again for a brief encounter. The Japanese Tanabata festival, in which Vega is known as Orihime (織姫), is also based on this legend. In Zoroastrianism, Vega was sometimes associated with Vanant, a minor divinity whose name means "conqueror". The indigenous Boorong people of north-western Victoria, Australia, named it Neilloan, "the flying loan". In the Srimad Bhagavatam, Shri Krishna tells Arjuna that among the Nakshatras he is Abhijit, a remark that indicates the auspiciousness of this Nakshatra. Medieval astrologers counted Vega as one of the Behenian stars and related it to chrysolite and winter savory. 
Cornelius Agrippa listed its kabbalistic sign under Vultur cadens, a literal Latin translation of the Arabic name. Medieval star charts also listed the alternate names Waghi, Vagieh and Veka for this star. W. H. Auden's 1933 poem "A Summer Night (to Geoffrey Hoyland)" famously opens with the couplet, "Out on the lawn I lie in bed,/Vega conspicuous overhead". Vega became the first star to have a car named after it with the French Facel Vega line of cars from 1954 onwards; later, in America, Chevrolet launched the Vega in 1971. Other vehicles named after Vega include ESA's Vega launch system and the Lockheed Vega aircraft.
Physical sciences
Notable stars
null
32718
https://en.wikipedia.org/wiki/Condorcet%20paradox
Condorcet paradox
In social choice theory, Condorcet's voting paradox is a fundamental discovery by the Marquis de Condorcet that majority rule is inherently self-contradictory. The result implies that it is logically impossible for any voting system to guarantee that a winner will have support from a majority of voters: for example, there can be a rock-paper-scissors scenario where a majority of voters will prefer A to B, B to C, and also C to A, even if every voter's individual preferences are rational and avoid self-contradiction. Examples of Condorcet's paradox are called Condorcet cycles or cyclic ties. In such a cycle, every possible choice is rejected by the electorate in favor of another alternative, which is preferred by more than half of all voters. Thus, any attempt to ground social decision-making in majoritarianism must accept such self-contradictions (commonly called spoiler effects). Systems that attempt to do so, while minimizing the rate of such self-contradictions, are called Condorcet methods. Condorcet's paradox is a special case of Arrow's paradox, which shows that any kind of social decision-making process is either self-contradictory, a dictatorship, or incorporates information about the strength of different voters' preferences (e.g. cardinal utility or rated voting). History Condorcet's paradox was first discovered by Catalan philosopher and theologian Ramon Llull in the 13th century, during his investigations into church governance, but his work was lost until the 21st century. The mathematician and political philosopher Marquis de Condorcet rediscovered the paradox in the late 18th century. Condorcet's discovery means he arguably identified the key result of Arrow's impossibility theorem, albeit under stronger conditions than required by Arrow: Condorcet cycles create situations where any ranked voting system that respects majorities must have a spoiler effect. Example Suppose we have three candidates, A, B, and C, and that there are three voters with preferences as follows: voter 1 ranks A > B > C, voter 2 ranks B > C > A, and voter 3 ranks C > A > B. If C is chosen as the winner, it can be argued that B should win instead, since two voters (1 and 2) prefer B to C and only one voter (3) prefers C to B. However, by the same argument A is preferred to B, and C is preferred to A, by a margin of two to one on each occasion. Thus the society's preferences show cycling: A is preferred over B which is preferred over C which is preferred over A. As a result, any attempt to appeal to the principle of majority rule will lead to logical self-contradiction. Regardless of which alternative we select, we can find another alternative that would be preferred by most voters. Practical scenario The voters in Cactus County prefer the incumbent county executive Alex of the Farmers' Party over rival Beatrice of the Solar Panel Party by about a 2-to-1 margin. This year a third candidate, Charlie, is running as an independent. Charlie is a wealthy and outspoken businessman, of whom the voters hold polarized views. The voters divide into three groups: Group 1 revere Charlie for saving the high school football team. They rank Charlie first, and then Alex above Beatrice as usual (CAB). Group 2 despise Charlie for his sharp business practices. They rank Charlie last, and then Alex above Beatrice as usual (ABC). Group 3 are Beatrice's core supporters. They want the Farmers' Party out of office in favor of the Solar Panel Party, and regard Charlie's candidacy as a sideshow. They rank Beatrice first and Alex last as usual, and Charlie second by default (BCA). 
Therefore a majority of voters prefer Alex to Beatrice (A > B), as they always have. A majority of voters are either Beatrice-lovers or Charlie-haters, so prefer Beatrice to Charlie (B > C). And a majority of voters are either Charlie-lovers or Alex-haters, so prefer Charlie to Alex (C > A). Combining the three preferences gives us A > B > C > A, a Condorcet cycle. Likelihood It is possible to estimate the probability of the paradox by extrapolating from real election data, or using mathematical models of voter behavior, though the results depend strongly on which model is used. Impartial culture model We can calculate the probability of seeing the paradox for the special case where voter preferences are uniformly distributed among the candidates. (This is the "impartial culture" model, which is known to be a "worst-case scenario"—most models show substantially lower probabilities of Condorcet cycles.) For n voters providing a preference list of three candidates A, B, C, write X_n (resp. Y_n, Z_n) for the random variable equal to the number of voters who placed A in front of B (respectively B in front of C, C in front of A). The sought probability is then p_n = 2 P(X_n > n/2, Y_n > n/2, Z_n > n/2) (we double because there is also the symmetric case A > C > B > A). For odd n, this can be expressed in terms of the joint distribution of X_n and Y_n alone, and by writing p_{n;i,j} = P(X_n = i, Y_n = j) one obtains a relation that makes it possible to compute this joint distribution by recurrence over n. The resulting values of p_n appear to tend towards a finite limit. Using the central limit theorem, this limit can be expressed in terms of a variable following a Cauchy distribution (the constant is quoted in the OEIS). The asymptotic probability of encountering the Condorcet paradox is therefore about 8.77%. Some results for the case of more than three candidates have been calculated and simulated. The simulated likelihood for an impartial culture model with 25 voters increases with the number of candidates. The likelihood of a Condorcet cycle for related models approaches these values for three-candidate elections with large electorates: impartial anonymous culture (IAC), 6.25%; uniform culture (UC), 6.25%; maximal culture condition (MC), 9.17%. All of these models are unrealistic, but can be investigated to establish an upper bound on the likelihood of a cycle. Group coherence models When modeled with more realistic voter preferences, Condorcet paradoxes in elections with a small number of candidates and a large number of voters become very rare. Spatial model A study of three-candidate elections analyzed 12 different models of voter behavior, and found the spatial model of voting to be the most accurate match to real-world ranked-ballot election data. Analyzing this spatial model, they found the likelihood of a cycle to decrease to zero as the number of voters increases, with likelihoods of 5% for 100 voters, 0.5% for 1000 voters, and 0.06% for 10,000 voters. Another spatial model found likelihoods of 2% or less in all simulations of 201 voters and 5 candidates, whether two- or four-dimensional, with or without correlation between dimensions, and with two different dispersions of candidates. Empirical studies Many attempts have been made at finding empirical examples of the paradox. Empirical identification of a Condorcet paradox presupposes extensive data on the decision-makers' preferences over all alternatives—something that is only very rarely available. 
While examples of the paradox seem to occur occasionally in small settings (e.g., parliaments) very few examples have been found in larger groups (e.g. electorates), although some have been identified. A summary of 37 individual studies, covering a total of 265 real-world elections, large and small, found 25 instances of a Condorcet paradox, for a total likelihood of 9.4% (and this may be a high estimate, since cases of the paradox are more likely to be reported on than cases without). An analysis of 883 three-candidate elections extracted from 84 real-world ranked-ballot elections of the Electoral Reform Society found a Condorcet cycle likelihood of 0.7%. These derived elections had between 350 and 1,957 voters. A similar analysis of data from the 1970–2004 American National Election Studies thermometer scale surveys found a Condorcet cycle likelihood of 0.4%. These derived elections had between 759 and 2,521 "voters". Andrew Myers, who operates the Condorcet Internet Voting Service, analyzed 10,354 nonpolitical CIVS elections and found cycles in 17% of elections with at least 10 votes, with the figure dropping to 2.1% for elections with at least 100 votes, and 1.2% for ≥300 votes. Real world instances A database of 189 ranked United States elections from 2004 to 2022 contained only one Condorcet cycle: the 2021 Minneapolis City Council election in Ward 2, with a narrow circular tie between candidates of the Green Party (Cam Gordon), the Minnesota Democratic–Farmer–Labor Party, (Yusra Arab) and an independent democratic socialist (Robin Wonsley). Voters' preferences were non-transitive: Arab was preferred over Gordon, Gordon over Wonsley, and Wonsley over Arab, creating a cyclical pattern with no clear winner. Additionally, the election exhibited a downward monotonicity paradox, as well as a paradox akin to Simpson’s paradox. Implications When a Condorcet method is used to determine an election, the voting paradox of cyclical societal preferences implies that the election has no Condorcet winner: no candidate who can win a one-on-one election against each other candidate. There will still be a smallest group of candidates, known as the Smith set, such that each candidate in the group can win a one-on-one election against each of the candidates outside the group. The several variants of the Condorcet method differ on how they resolve such ambiguities when they arise to determine a winner. The Condorcet methods which always elect someone from the Smith set when there is no Condorcet winner are known as Smith-efficient. Note that using only rankings, there is no fair and deterministic resolution to the trivial example given earlier because each candidate is in an exactly symmetrical situation. Situations having the voting paradox can cause voting mechanisms to violate the axiom of independence of irrelevant alternatives—the choice of winner by a voting mechanism could be influenced by whether or not a losing candidate is available to be voted for. Two-stage voting processes One important implication of the possible existence of the voting paradox in a practical situation is that in a paired voting process like those of standard parliamentary procedure, the eventual winner will depend on the way the majority votes are ordered. For example, say a popular bill is set to pass, before some other group offers an amendment; this amendment passes by majority vote. 
This may result in a majority of a legislature rejecting the bill as a whole, thus creating a paradox (where a popular amendment to a popular bill has made it unpopular). This logical inconsistency is the origin of the poison pill amendment, which deliberately engineers a false Condorcet cycle to kill a bill. Likewise, the order of votes in a legislature can be manipulated by the person arranging them to ensure their preferred outcome wins. Despite frequent objections by social choice theorists about the logically incoherent results of such procedures, and the existence of better alternatives for choosing between multiple versions of a bill, the procedure of pairwise majority rule is widely used and is codified into the by-laws or parliamentary procedures of almost every kind of deliberative assembly. Spoiler effects Condorcet paradoxes imply that majoritarian methods fail independence of irrelevant alternatives. Label the three candidates in a race Rock, Paper, and Scissors. In a one-on-one race, Rock loses to Paper, Paper to Scissors, etc. Without loss of generality, say that Rock wins the election with a certain method. Then, Scissors is a spoiler candidate for Paper: if Scissors were to drop out, Paper would win the only one-on-one race (Paper defeats Rock). The same reasoning applies regardless of the winner. This example also shows why Condorcet elections are rarely (if ever) spoiled: spoilers can only happen when there is no Condorcet winner. Condorcet cycles are rare in large elections, and the median voter theorem shows cycles are impossible whenever candidates are arrayed on a left-right spectrum.
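The pairwise comparisons described in this article can be checked directly by computer. The following Python sketch is purely illustrative (the function and variable names are chosen here, not taken from any cited study): it verifies that the three-ballot example has no Condorcet winner, and estimates the impartial-culture cycle probability by Monte Carlo simulation, which for large odd electorates should approach the roughly 8.8% figure quoted above.

```python
import itertools
import random

def condorcet_winner(ballots, candidates):
    """Return the Condorcet winner, or None if there is a cycle (paradox).
    Each ballot is a tuple listing candidates from most to least preferred."""
    for c in candidates:
        # c wins if it beats every other candidate in a head-to-head strict majority.
        if all(sum(b.index(c) < b.index(d) for b in ballots) * 2 > len(ballots)
               for d in candidates if d != c):
            return c
    return None

# The three-voter example: A>B>C, B>C>A, C>A>B -- a cycle, so no winner.
example = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]
print(condorcet_winner(example, "ABC"))  # -> None (Condorcet paradox)

# Impartial culture model: every voter draws one of the 6 orderings uniformly.
def cycle_probability(n_voters, trials=20_000, seed=1):
    rng = random.Random(seed)
    orders = list(itertools.permutations("ABC"))
    cycles = 0
    for _ in range(trials):
        ballots = [rng.choice(orders) for _ in range(n_voters)]
        if condorcet_winner(ballots, "ABC") is None:
            cycles += 1
    return cycles / trials

# For a large, odd electorate the estimate should approach roughly 8.8%.
print(cycle_probability(101))
```

Running the simulation with larger electorates (and more trials) drives the estimate toward the asymptotic value discussed under the impartial culture model, while smaller electorates give somewhat lower rates.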
Mathematics
Game theory
null
32745
https://en.wikipedia.org/wiki/Venus
Venus
Venus is the second planet from the Sun. It is a terrestrial planet and is the closest in mass and size to its orbital neighbour Earth. Venus has by far the densest atmosphere of the terrestrial planets, composed mostly of carbon dioxide with a thick, global sulfuric acid cloud cover. At the surface it has a mean temperature of and a pressure 92 times that of Earth's at sea level. These extreme conditions compress carbon dioxide into a supercritical state at Venus's surface. Internally, Venus has a core, mantle, and crust. Venus lacks an internal dynamo, and its weakly induced magnetosphere is caused by atmospheric interactions with the solar wind. Internal heat escapes through active volcanism, resulting in resurfacing instead of plate tectonics. Venus is one of two planets in the Solar System, the other being Mercury, that have no moons. Conditions perhaps favourable for life on Venus have been identified at its cloud layers. Venus may have had liquid surface water early in its history with a habitable environment, before a runaway greenhouse effect evaporated any water and turned Venus into its present state. The rotation of Venus has been slowed and turned against its orbital direction (retrograde) by the currents and drag of its atmosphere. It takes 224.7 Earth days for Venus to complete an orbit around the Sun, and a Venusian solar year is just under two Venusian days long. The orbits of Venus and Earth are the closest between any two Solar System planets, approaching each other in synodic periods of 1.6 years. Venus and Earth have the lowest difference in gravitational potential of any pair of Solar System planets. This allows Venus to be the most accessible destination and a useful gravity assist waypoint for interplanetary flights from Earth. Venus figures prominently in human culture and in the history of astronomy. Orbiting inferiorly (inside of Earth's orbit), it always appears close to the Sun in Earth's sky, as either a "morning star" or an "evening star". While this is also true for Mercury, Venus appears more prominent, since it is the third brightest object in Earth's sky after the Moon and the Sun. In 1961, Venus became the target of the first interplanetary flight, Venera 1, followed by many essential interplanetary firsts, such as the first soft landing on another planet by Venera 7 in 1970. These probes demonstrated the extreme surface conditions, an insight that has informed predictions about global warming on Earth. This finding ended the theories and then popular science fiction about Venus being a habitable or inhabited planet. Physical characteristics Venus is one of the four terrestrial planets in the Solar System, meaning that it is a rocky body like Earth. It is similar to Earth in size and mass and is often described as Earth's "sister" or "twin". Venus is close to spherical due to its slow rotation. Venus has a diameter of —only less than Earth's—and its mass is 81.5% of Earth's, making it the third-smallest planet in the Solar System. Conditions on the Venusian surface differ radically from those on Earth because its dense atmosphere is 96.5% carbon dioxide, with most of the remaining 3.5% being nitrogen. The surface pressure is , and the average surface temperature is , above the critical points of both major constituents and making the surface atmosphere a supercritical fluid out of mainly supercritical carbon dioxide and some supercritical nitrogen. 
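As a rough consistency check of the surface conditions described above, the sketch below compares approximate Venusian surface temperature and pressure with the critical points of carbon dioxide and nitrogen, and converts a 92-bar surface pressure into an equivalent depth of water on Earth. The critical-point constants and the rounded surface values are standard reference figures supplied here as assumptions, not numbers quoted in this article.

```python
# Rough consistency check for Venus's surface conditions (approximate values).
SURFACE_T_K = 737.0          # ~464 degC mean surface temperature (assumed)
SURFACE_P_PA = 9.3e6         # ~92-93 bar surface pressure (assumed)

# Standard critical points (textbook values): (T_c in K, P_c in Pa).
CRITICAL = {
    "CO2": (304.1, 7.38e6),
    "N2":  (126.2, 3.39e6),
}

for gas, (t_c, p_c) in CRITICAL.items():
    supercritical = SURFACE_T_K > t_c and SURFACE_P_PA > p_c
    print(f"{gas}: supercritical at Venus's surface -> {supercritical}")

# Equivalent depth of water on Earth exerting the same excess pressure:
#   depth = (P_venus - P_earth) / (rho_water * g)
rho_water, g, p_earth = 1000.0, 9.81, 1.013e5
depth_m = (SURFACE_P_PA - p_earth) / (rho_water * g)
print(f"Equivalent ocean depth: ~{depth_m:.0f} m")   # roughly 900-950 m
```

Both gases come out well above their critical points, which is why the near-surface atmosphere behaves as a supercritical fluid rather than an ordinary gas.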
Geography The Venusian surface was a subject of speculation until some of its secrets were revealed by planetary science in the 20th century. Venera landers in 1975 and 1982 returned images of a surface covered in sediment and relatively angular rocks. The surface was mapped in detail by Magellan in 1990–91. The ground shows evidence of extensive volcanism, and the sulphur in the atmosphere may indicate that there have been recent eruptions. About 80% of the Venusian surface is covered by smooth, volcanic plains, consisting of 70% plains with wrinkle ridges and 10% smooth or lobate plains. Two highland "continents" make up the rest of its surface area, one lying in the planet's northern hemisphere and the other just south of the equator. The northern continent is called Ishtar Terra after Ishtar, the Babylonian goddess of love, and is about the size of Australia. Maxwell Montes, the highest mountain on Venus, lies on Ishtar Terra. Its peak is above the Venusian average surface elevation. The southern continent is called Aphrodite Terra, after the Greek mythological goddess of love, and is the larger of the two highland regions at roughly the size of South America. A network of fractures and faults covers much of this area. There is recent evidence of lava flow on Venus (2024), such as flows on Sif Mons, a shield volcano, and on Niobe Planitia, a flat plain. There are visible calderas. The planet has few impact craters, demonstrating that the surface is relatively young, at 300–600million years old. Venus has some unique surface features in addition to the impact craters, mountains, and valleys commonly found on rocky planets. Among these are flat-topped volcanic features called "farra", which look somewhat like pancakes and range in size from across, and from high; radial, star-like fracture systems called "novae"; features with both radial and concentric fractures resembling spider webs, known as "arachnoids"; and "coronae", circular rings of fractures sometimes surrounded by a depression. These features are volcanic in origin. Most Venusian surface features are named after historical and mythological women. Exceptions are Maxwell Montes, named after James Clerk Maxwell, and highland regions Alpha Regio, Beta Regio, and Ovda Regio. The last three features were named before the current system was adopted by the International Astronomical Union, the body which oversees planetary nomenclature. The longitude of physical features on Venus is expressed relative to its prime meridian. The original prime meridian passed through the radar-bright spot at the centre of the oval feature Eve, located south of Alpha Regio. After the Venera missions were completed, the prime meridian was redefined to pass through the central peak in the crater Ariadne on Sedna Planitia. The stratigraphically oldest tessera terrains have consistently lower thermal emissivity than the surrounding basaltic plains measured by Venus Express and Magellan, indicating a different, possibly a more felsic, mineral assemblage. The mechanism to generate a large amount of felsic crust usually requires the presence of water ocean and plate tectonics, implying that habitable condition had existed on early Venus with large bodies of water at some point. However, the nature of tessera terrains is far from certain. Studies reported on 26 October 2023 suggest for the first time that Venus may have had plate tectonics during ancient times and, as a result, may have had a more habitable environment, possibly one capable of sustaining life. 
Venus has gained interest as a case for research into the development of Earth-like planets and their habitability. Volcanism Much of the Venusian surface appears to have been shaped by volcanic activity. Venus has several times as many volcanoes as Earth, and it has 167 large volcanoes that are over across. The only volcanic complex of this size on Earth is the Big Island of Hawaii. More than 85,000 volcanoes on Venus were identified and mapped. This is not because Venus is more volcanically active than Earth, but because its crust is older and is not subject to the same erosion process. Earth's oceanic crust is continually recycled by subduction at the boundaries of tectonic plates, and has an average age of about 100 million years, whereas the Venusian surface is estimated to be 300–600million years old. Several lines of evidence point to ongoing volcanic activity on Venus. Sulfur dioxide concentrations in the upper atmosphere dropped by a factor of 10 between 1978 and 1986, jumped in 2006, and again declined 10-fold. This may mean that levels had been boosted several times by large volcanic eruptions. It has been suggested that Venusian lightning (discussed below) could originate from volcanic activity (i.e. volcanic lightning). In January 2020, astronomers reported evidence that suggests that Venus is currently volcanically active, specifically the detection of olivine, a volcanic product that would weather quickly on the planet's surface. This massive volcanic activity is fuelled by a superheated interior, which models say could be explained by energetic collisions from when the planet was young. Impacts would have had significantly higher velocity than on Earth, both because Venus's orbit is faster due to its closer proximity to the Sun and because objects would require higher orbital eccentricities to collide with the planet. In 2008 and 2009, the first direct evidence for ongoing volcanism was observed by Venus Express, in the form of four transient localized infrared hot spots within the rift zone Ganis Chasma, near the shield volcano Maat Mons. Three of the spots were observed in more than one successive orbit. These spots are thought to represent lava freshly released by volcanic eruptions. The actual temperatures are not known, because the size of the hot spots could not be measured, but are likely to have been in the range, relative to a normal temperature of . In 2023, scientists reexamined topographical images of the Maat Mons region taken by the Magellan orbiter. Using computer simulations, they determined that the topography had changed during an 8-month interval, and concluded that active volcanism was the cause. Craters Almost a thousand impact craters on Venus are evenly distributed across its surface. On other cratered bodies, such as Earth and the Moon, craters show a range of states of degradation. On the Moon, degradation is caused by subsequent impacts, whereas on Earth it is caused by wind and rain erosion. On Venus, about 85% of the craters are in pristine condition. The number of craters, together with their well-preserved condition, indicates the planet underwent a global resurfacing event 300–600million years ago, followed by a decay in volcanism. Whereas Earth's crust is in continuous motion, Venus is thought to be unable to sustain such a process. Without plate tectonics to dissipate heat from its mantle, Venus instead undergoes a cyclical process in which mantle temperatures rise until they reach a critical level that weakens the crust. 
Then, over a period of about 100million years, subduction occurs on an enormous scale, completely recycling the crust. Venusian craters range from in diameter. No craters are smaller than 3km, because of the effects of the dense atmosphere on incoming objects. Objects with less than a certain kinetic energy are slowed so much by the atmosphere that they do not create an impact crater. Incoming projectiles less than in diameter will fragment and burn up in the atmosphere before reaching the ground. Internal structure Without data from reflection seismology or knowledge of its moment of inertia, little direct information is available about the internal structure and geochemistry of Venus. The similarity in size and density between Venus and Earth suggests that they share a similar internal structure: a core, mantle, and crust. Like that of Earth, the Venusian core is most likely at least partially liquid because the two planets have been cooling at about the same rate, although a completely solid core cannot be ruled out. The slightly smaller size of Venus means pressures are 24% lower in its deep interior than Earth's. The predicted values for the moment of inertia based on planetary models suggest a core radius of 2,900–3,450 km. This is in line with the first observation-based estimate of 3,500 km. The principal difference between the two planets is the lack of evidence for plate tectonics on Venus, possibly because its crust is too strong to subduct without water to make it less viscous. This results in reduced heat loss from the planet, preventing it from cooling and providing a likely explanation for its lack of an internally generated magnetic field. Instead, Venus may lose its internal heat in periodic major resurfacing events. Magnetic field and core In 1967, Venera 4 found Venus's magnetic field to be much weaker than that of Earth. This magnetic field is induced by an interaction between the ionosphere and the solar wind, rather than by an internal dynamo as in the Earth's core. Venus's small induced magnetosphere provides negligible protection to the atmosphere against solar and cosmic radiation. The lack of an intrinsic magnetic field on Venus was surprising, given that it is similar to Earth in size and was expected to contain a dynamo at its core. A dynamo requires three things: a conducting liquid, rotation, and convection. The core is thought to be electrically conductive and, although its rotation is often thought to be too slow, simulations show it is adequate to produce a dynamo. This implies that the dynamo is missing because of a lack of convection in Venus's core. On Earth, convection occurs in the liquid outer layer of the core because the bottom of the liquid layer is much higher in temperature than the top. On Venus, a global resurfacing event may have shut down plate tectonics and led to a reduced heat flux through the crust. This insulating effect would cause the mantle temperature to increase, thereby reducing the heat flux out of the core. As a result, no internal geodynamo is available to drive a magnetic field. Instead, the heat from the core is reheating the crust. One possibility is that Venus has no solid inner core, or that its core is not cooling, so that the entire liquid part of the core is at approximately the same temperature. Another possibility is that its core has already been completely solidified. The state of the core is highly dependent on the concentration of sulphur, which is unknown at present. 
Another possibility is that the absence of a late, large impact on Venus (contra the Earth's "Moon-forming" impact) left the core of Venus stratified from the core's incremental formation, and without the forces to initiate/sustain convection, and thus a "geodynamo". The weak magnetosphere around Venus means that the solar wind is interacting directly with its outer atmosphere. Here, ions of hydrogen and oxygen are being created by the dissociation of water molecules from ultraviolet radiation. The solar wind then supplies energy that gives some of these ions sufficient velocity to escape Venus's gravity field. This erosion process results in a steady loss of low-mass hydrogen, helium, and oxygen ions, whereas higher-mass molecules, such as carbon dioxide, are more likely to be retained. Atmospheric erosion by the solar wind could have led to the loss of most of Venus's water during the first billion years after it formed. However, the planet may have retained a dynamo for its first 2–3 billion years, so the water loss may have occurred more recently. The erosion has increased the ratio of higher-mass deuterium to lower-mass hydrogen in the atmosphere 100 times compared to the rest of the solar system. Atmosphere and climate Venus has a dense atmosphere composed of 96.5% carbon dioxide, 3.5% nitrogen—both exist as supercritical fluids at the planet's surface with a density 6.5% that of water—and traces of other gases including sulphur dioxide. The mass of its atmosphere is 92 times that of Earth's, whereas the pressure at its surface is about 93 times that at Earth's—a pressure equivalent to that at a depth of nearly under Earth's ocean surfaces. The density at the surface is , 6.5% that of water or 50 times as dense as Earth's atmosphere at at sea level. The -rich atmosphere generates the strongest greenhouse effect in the Solar System, creating surface temperatures of at least . This makes the Venusian surface hotter than Mercury's, which has a minimum surface temperature of and maximum surface temperature of , even though Venus is nearly twice Mercury's distance from the Sun and thus receives only 25% of Mercury's solar irradiance, of 2,600 W/m2 (double that of Earth). Because of its runaway greenhouse effect, Venus has been identified by scientists such as Carl Sagan as a warning and research object linked to climate change on Earth. Venus's atmosphere is rich in primordial noble gases compared to that of Earth. This enrichment indicates an early divergence from Earth in evolution. An unusually large comet impact or accretion of a more massive primary atmosphere from solar nebula have been proposed to explain the enrichment. However, the atmosphere is depleted of radiogenic argon, a proxy for mantle degassing, suggesting an early shutdown of major magmatism. Studies have suggested that billions of years ago, Venus's atmosphere could have been much more like the one surrounding the early Earth, and that there may have been substantial quantities of liquid water on the surface. After a period of 600 million to several billion years, solar forcing from rising luminosity of the Sun and possibly large volcanic resurfacing caused the evaporation of the original water and the current atmosphere. A runaway greenhouse effect was created once a critical level of greenhouse gases (including water) was added to its atmosphere. 
Although the surface conditions on Venus are no longer hospitable to any Earth-like life that may have formed before this event, there is speculation on the possibility that life exists in the upper cloud layers of Venus, up from the surface, where the atmospheric conditions are the most Earth-like in the Solar System, with temperatures ranging between , and the pressure and radiation being about the same as at Earth's surface, but with acidic clouds and the carbon dioxide air. Venus's atmosphere could also have a potential thermal habitable zone at elevations of 54 to 48 km, with lower elevations inhibiting cell growth and higher elevations exceeding evaporation temperature. The putative detection of an absorption line of phosphine in Venus's atmosphere, with no known pathway for abiotic production, led to speculation in September 2020 that there could be extant life currently present in the atmosphere. Later research attributed the spectroscopic signal that was interpreted as phosphine to sulphur dioxide, or found that in fact there was no absorption line. Thermal inertia and the transfer of heat by winds in the lower atmosphere mean that the temperature of Venus's surface does not vary significantly between the planet's two hemispheres, those facing and not facing the Sun, despite Venus's slow rotation. Winds at the surface are slow, moving at a few kilometres per hour, but because of the high density of the atmosphere at the surface, they exert a significant amount of force against obstructions, and transport dust and small stones across the surface. This alone would make it difficult for a human to walk through, even without the heat, pressure, and lack of oxygen. Above the dense layer are thick clouds, consisting mainly of sulfuric acid, which is formed by sulphur dioxide and water through a chemical reaction resulting in sulfuric acid hydrate. Additionally, the clouds consist of approximately 1% ferric chloride. Other possible constituents of the cloud particles are ferric sulfate, aluminium chloride and phosphoric anhydride. Clouds at different levels have different compositions and particle size distributions. These clouds reflect, similar to thick cloud cover on Earth, about 70% of the sunlight that falls on them back into space, and since they cover the whole planet they prevent visual observation of Venus's surface. The permanent cloud cover means that although Venus is closer than Earth to the Sun, it receives less sunlight on the ground, with only 10% of the received sunlight reaching the surface, resulting in average daytime levels of illumination at the surface of 14,000 lux, comparable to that on Earth "in the daytime with overcast clouds". Strong winds at the cloud tops go around Venus about every four to five Earth days. Winds on Venus move at up to 60 times the speed of its rotation, whereas Earth's fastest winds are only 10–20% rotation speed. The surface of Venus is effectively isothermal; it retains a constant temperature not only between the two hemispheres but between the equator and the poles. Venus's minute axial tilt—less than 3°, compared to 23° on Earth—also minimizes seasonal temperature variation. Altitude is one of the few factors that affect Venusian temperatures. The highest point on Venus, Maxwell Montes, is therefore the coolest point on Venus, with a temperature of about and an atmospheric pressure of about . 
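The effect of altitude on temperature mentioned above can be illustrated with a back-of-the-envelope estimate. The sketch below assumes a mean surface temperature of about 737 K, a near-adiabatic lapse rate of roughly 8 K per kilometre in the lower atmosphere, and a summit height of about 11 km for Maxwell Montes; all three numbers are assumptions chosen for illustration, not values given in this article.

```python
# Rough estimate of how altitude lowers the surface temperature on Venus.
SURFACE_T_K = 737.0          # mean surface temperature (assumed)
LAPSE_RATE_K_PER_KM = 8.0    # approximate lower-atmosphere lapse rate (assumed)
MAXWELL_HEIGHT_KM = 11.0     # approximate height of Maxwell Montes (assumed)

t_summit = SURFACE_T_K - LAPSE_RATE_K_PER_KM * MAXWELL_HEIGHT_KM
print(f"Estimated summit temperature: ~{t_summit:.0f} K (~{t_summit - 273.15:.0f} degC)")
# -> roughly 650 K (~380 degC), noticeably cooler than the lowland plains
```

The estimate shows why the highest terrain is the coolest part of the planet even though the global temperature is otherwise nearly uniform.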
In 1995, the Magellan spacecraft imaged a highly reflective substance at the tops of the highest mountain peaks, a "Venus snow" that bore a strong resemblance to terrestrial snow. This substance likely formed from a similar process to snow, albeit at a far higher temperature. Too volatile to condense on the surface, it rose in gaseous form to higher elevations, where it is cooler and could precipitate. The identity of this substance is not known with certainty, but speculation has ranged from elemental tellurium to lead sulfide (galena). Although Venus has no seasons, in 2019 astronomers identified a cyclical variation in sunlight absorption by the atmosphere, possibly caused by opaque, absorbing particles suspended in the upper clouds. The variation causes observed changes in the speed of Venus's zonal winds and appears to rise and fall in time with the Sun's 11-year sunspot cycle. The existence of lightning in the atmosphere of Venus has been controversial since the first suspected bursts were detected by the Soviet Venera probes. In 2006–07, Venus Express clearly detected whistler mode waves, the signatures of lightning. Their intermittent appearance indicates a pattern associated with weather activity. According to these measurements, the lightning rate is at least half that on Earth, however other instruments have not detected lightning at all. The origin of any lightning remains unclear, but could originate from clouds or Venusian volcanoes. In 2007, Venus Express discovered that a huge double atmospheric polar vortex exists at the south pole. Venus Express discovered, in 2011, that an ozone layer exists high in the atmosphere of Venus. On 29 January 2013, ESA scientists reported that the ionosphere of Venus streams outwards in a manner similar to "the ion tail seen streaming from a comet under similar conditions." In December 2015, and to a lesser extent in April and May 2016, researchers working on Japan's Akatsuki mission observed bow-shaped objects in the atmosphere of Venus. This was considered direct evidence of the existence of perhaps the largest stationary gravity waves in the solar system. Orbit and rotation Venus orbits the Sun at an average distance of about , and completes an orbit every 224.7 days. Although all planetary orbits are elliptical, Venus's orbit is currently the closest to circular, with an eccentricity of less than 0.01. Simulations of the early solar system orbital dynamics have shown that the eccentricity of the Venus orbit may have been substantially larger in the past, reaching values as high as 0.31 and possibly impacting early climate evolution. All planets in the Solar System orbit the Sun in an anticlockwise direction as viewed from above Earth's north pole. Most planets rotate on their axes in an anticlockwise direction, but Venus rotates clockwise in retrograde rotation once every 243 Earth days—the slowest rotation of any planet. This Venusian sidereal day lasts therefore longer than a Venusian year (243 versus 224.7 Earth days). Slowed by its strong atmospheric current the length of the day also fluctuates by up to 20 minutes. Venus's equator rotates at , whereas Earth's rotates at . Venus's rotation period measured with Magellan spacecraft data over a 500-day period is smaller than the rotation period measured during the 16-year period between the Magellan spacecraft and Venus Express visits, with a difference of about 6.5minutes. 
Because of the retrograde rotation, the length of a solar day on Venus is significantly shorter than the sidereal day, at 116.75 Earth days (making the Venusian solar day shorter than Mercury's 176 Earth days — the 116-day figure is close to the average number of days it takes Mercury to slip underneath the Earth in its orbit [the number of days of Mercury's synodic orbital period]). One Venusian year is about 1.92Venusian solar days. To an observer on the surface of Venus, the Sun would rise in the west and set in the east, although Venus's opaque clouds prevent observing the Sun from the planet's surface. Venus may have formed from the solar nebula with a different rotation period and obliquity, reaching its current state because of chaotic spin changes caused by planetary perturbations and tidal effects on its dense atmosphere, a change that would have occurred over the course of billions of years. The rotation period of Venus may represent an equilibrium state between tidal locking to the Sun's gravitation, which tends to slow rotation, and an atmospheric tide created by solar heating of the thick Venusian atmosphere. The 584-day average interval between successive close approaches to Earth is almost exactly equal to 5Venusian solar days (5.001444 to be precise), but the hypothesis of a spin-orbit resonance with Earth has been discounted. Venus has no natural satellites. It has several trojan asteroids: the quasi-satellite and two other temporary trojans, and . In the 17th century, Giovanni Cassini reported a moon orbiting Venus, which was named Neith and numerous sightings were reported over the following , but most were determined to be stars in the vicinity. Alex Alemi's and David Stevenson's 2006 study of models of the early Solar System at the California Institute of Technology shows Venus likely had at least one moon created by a huge impact event billions of years ago. About 10millionyears later, according to the study, another impact reversed the planet's spin direction and the resulting tidal deceleration caused the Venusian moon gradually to spiral inward until it collided with Venus. If later impacts created moons, these were removed in the same way. An alternative explanation for the lack of satellites is the effect of strong solar tides, which can destabilize large satellites orbiting the inner terrestrial planets. The orbital space of Venus has a dust ring-cloud, with a suspected origin either from Venus–trailing asteroids, interplanetary dust migrating in waves, or the remains of the Solar System's original circumstellar disc that formed the planetary system. Orbit in respect to Earth Earth and Venus have a near orbital resonance of 13:8 (Earth orbits eight times for every 13 orbits of Venus). Therefore, they approach each other and reach inferior conjunction in synodic periods of 584 days, on average. The path that Venus makes in relation to Earth viewed geocentrically draws a pentagram over five synodic periods, shifting every period by 144°. This pentagram of Venus is sometimes referred to as the petals of Venus due to the path's visual similarity to a flower. When Venus lies between Earth and the Sun in inferior conjunction, it makes the closest approach to Earth of any planet at an average distance of . Because of the decreasing eccentricity of Earth's orbit, the minimum distances will become greater over tens of thousands of years. From the year1 to 5383, there are 526 approaches less than ; then, there are none for about 60,158 years. 
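The day-length and conjunction-interval figures quoted above follow from simple relations between the rotation and orbital periods. The short sketch below reproduces them; the length of Earth's year is supplied here as an assumption, while the other inputs are the values given in the text.

```python
# Relations between Venus's rotation, solar day, and synodic period.
SIDEREAL_DAY = 243.0      # Earth days, retrograde (from the text)
ORBITAL_PERIOD = 224.7    # Earth days (from the text)
EARTH_YEAR = 365.256      # Earth days (assumed here)

# For a retrograde rotator the solar day satisfies
#   1/solar_day = 1/sidereal_day + 1/orbital_period
solar_day = 1.0 / (1.0 / SIDEREAL_DAY + 1.0 / ORBITAL_PERIOD)
print(f"Solar day: ~{solar_day:.2f} Earth days")      # ~116.8, as quoted above

# Synodic period (interval between successive inferior conjunctions):
#   1/synodic = 1/orbital_period - 1/earth_year
synodic = 1.0 / (1.0 / ORBITAL_PERIOD - 1.0 / EARTH_YEAR)
print(f"Synodic period: ~{synodic:.0f} Earth days")   # ~584
print(f"Synodic period in Venusian solar days: ~{synodic / solar_day:.3f}")  # ~5.00
```

The last line recovers the near-equality of the 584-day conjunction interval with five Venusian solar days noted above.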
While Venus approaches Earth the closest, Mercury is more often the closest to Earth of all planets. Venus has the lowest gravitational potential difference to Earth than any other planet, needing the lowest delta-v to transfer between them. Tidally Venus exerts the third strongest tidal force on Earth, after the Moon and the Sun, though significantly less. Observability To the naked eye, Venus appears as a white point of light brighter than any other planet or star (apart from the Sun). The planet's mean apparent magnitude is −4.14 with a standard deviation of 0.31. The brightest magnitude occurs during the crescent phase about one month before or after an inferior conjunction. Venus fades to about magnitude −3 when it is backlit by the Sun. The planet is bright enough to be seen in broad daylight, but is more easily visible when the Sun is low on the horizon or setting. As an inferior planet, it always lies within about 47° of the Sun. Venus "overtakes" Earth every 584 days as it orbits the Sun. As it does so, it changes from the "Evening Star", visible after sunset, to the "Morning Star", visible before sunrise. Although Mercury, the other inferior planet, reaches a maximum elongation of only 28° and is often difficult to discern in twilight, Venus is hard to miss when it is at its brightest. Its greater maximum elongation means it is visible in dark skies long after sunset. As the brightest point-like object in the sky, Venus is a commonly misreported "unidentified flying object". Phases As it orbits the Sun, Venus displays phases like those of the Moon in a telescopic view. The planet appears as a small and "full" disc when it is on the opposite side of the Sun (at superior conjunction). Venus shows a larger disc and "quarter phase" at its maximum elongations from the Sun, and appears at its brightest in the night sky. The planet presents a much larger thin "crescent" in telescopic views as it passes along the near side between Earth and the Sun. Venus displays its largest size and "new phase" when it is between Earth and the Sun (at inferior conjunction). Its atmosphere is visible through telescopes by the halo of sunlight refracted around it. The phases are clearly visible in a 4" telescope. Although naked eye visibility of Venus's phases is disputed, records exist of observations of its crescent. Daylight apparitions When Venus is sufficiently bright with enough angular distance from the sun, it is easily observed in a clear daytime sky with the naked eye, though most people do not know to look for it. Astronomer Edmund Halley calculated its maximum naked eye brightness in 1716, when many Londoners were alarmed by its appearance in the daytime. French emperor Napoleon Bonaparte once witnessed a daytime apparition of the planet while at a reception in Luxembourg. Another historical daytime observation of the planet took place during the inauguration of the American president Abraham Lincoln in Washington, D.C., on 4March 1865. Transits A transit of Venus is the appearance of Venus in front of the Sun, during inferior conjunction. Since the orbit of Venus is slightly inclined relative to Earth's orbit, most inferior conjunctions with Earth, which occur every synodic period of 1.6 years, do not produce a transit of Venus above Earth. Consequently, Venus transits above Earth only occur when an inferior conjunction takes place during some days of June or December, the time where the orbits of Venus and Earth cross a straight line with the Sun. 
This results in Venus transiting above Earth in a sequence of currently , , and , forming cycles of . Historically, transits of Venus were important, because they allowed astronomers to determine the size of the astronomical unit, and hence the size of the Solar System as shown by Jeremiah Horrocks in 1639 with the first known observation of a Venus transit (after history's first observed planetary transit in 1631, of Mercury). Only seven Venus transits have been observed so far, since their occurrences were calculated in the 1621 by Johannes Kepler. Captain Cook sailed to Tahiti in 1768 to record the third observed transit of Venus, which subsequently resulted in the exploration of the east coast of Australia. The latest pair was June 8, 2004 and June 5–6, 2012. The transit could be watched live from many online outlets or observed locally with the right equipment and conditions. The preceding pair of transits occurred in December 1874 and December 1882. The next transit will occur in December 2117 and December 2125. Ashen light A long-standing mystery of Venus observations is the so-called ashen light—an apparent weak illumination of its dark side, seen when the planet is in the crescent phase. The first claimed observation of ashen light was made in 1643, but the existence of the illumination has never been reliably confirmed. Observers have speculated it may result from electrical activity in the Venusian atmosphere, but it could be illusory, resulting from the physiological effect of observing a bright, crescent-shaped object. The ashen light has often been sighted when Venus is in the evening sky, when the evening terminator of the planet is towards Earth. Observation and exploration history Early observation Venus is in Earth's sky bright enough to be visible without aid, making it one of the classical planets that human cultures have known and identified throughout history, particularly for being the third brightest object in Earth's sky after the Sun and the Moon. Because the movements of Venus appear to be discontinuous (it disappears due to its proximity to the sun, for many days at a time, and then reappears on the other horizon), some cultures did not recognize Venus as a single entity; instead, they assumed it to be two separate stars on each horizon: the morning and evening star. Nonetheless, a cylinder seal from the Jemdet Nasr period and the Venus tablet of Ammisaduqa from the First Babylonian dynasty indicate that the ancient Sumerians already knew that the morning and evening stars were the same celestial object. In the Old Babylonian period, the planet Venus was known as Ninsi'anna, and later as Dilbat. The name "Ninsi'anna" translates to "divine lady, illumination of heaven", which refers to Venus as the brightest visible "star". Earlier spellings of the name were written with the cuneiform sign si4 (= SU, meaning "to be red"), and the original meaning may have been "divine lady of the redness of heaven", in reference to the colour of the morning and evening sky. The Chinese historically referred to the morning Venus as "the Great White" ( ) or "the Opener (Starter) of Brightness" ( ), and the evening Venus as "the Excellent West One" ( ). The ancient Greeks initially believed Venus to be two separate stars: Phosphorus, the morning star, and Hesperus, the evening star. 
Pliny the Elder credited the realization that they were a single object to Pythagoras in the sixth century BC, while Diogenes Laërtius argued that Parmenides (early fifth century) was probably responsible for this discovery. Though they recognized Venus as a single object, the ancient Romans continued to designate the morning aspect of Venus as Lucifer, literally "Light-Bringer", and the evening aspect as Vesper, both of which are literal translations of their traditional Greek names. In the second century, in his astronomical treatise Almagest, Ptolemy theorized that both Mercury and Venus were located between the Sun and the Earth. The 11th-century Persian astronomer Avicenna claimed to have observed a transit of Venus (although there is some doubt about it), which later astronomers took as confirmation of Ptolemy's theory. In the 12th century, the Andalusian astronomer Ibn Bajjah observed "two planets as black spots on the face of the Sun"; these were thought to be the transits of Venus and Mercury by 13th-century Maragha astronomer Qotb al-Din Shirazi, though this cannot be true as there were no Venus transits in Ibn Bajjah's lifetime. Venus and early modern astronomy When the Italian physicist Galileo Galilei first observed the planet with a telescope in the early 17th century, he found it showed phases like the Moon, varying from crescent to gibbous to full and vice versa. When Venus is furthest from the Sun in the sky, it shows a half-lit phase, and when it is closest to the Sun in the sky, it shows as a crescent or full phase. This could be possible only if Venus orbited the Sun, and this was among the first observations to clearly contradict the Ptolemaic geocentric model that the Solar System was concentric and centred on Earth. The 1639 transit of Venus was accurately predicted by Jeremiah Horrocks and observed by him and his friend, William Crabtree, at each of their respective homes, on 4December 1639 (24 November under the Julian calendar in use at that time). The atmosphere of Venus was discovered in 1761 by Russian polymath Mikhail Lomonosov. Venus's atmosphere was observed in 1790 by German astronomer Johann Schröter. Schröter found when the planet was a thin crescent, the cusps extended through more than 180°. He correctly surmised this was due to scattering of sunlight in a dense atmosphere. Later, American astronomer Chester Smith Lyman observed a complete ring around the dark side of the planet when it was at inferior conjunction, providing further evidence for an atmosphere. The atmosphere complicated efforts to determine a rotation period for the planet, and observers such as Italian-born astronomer Giovanni Cassini and Schröter incorrectly estimated periods of about from the motions of markings on the planet's apparent surface. Early 20th century advances Little more was discovered about Venus until the 20th century. Its almost featureless disc gave no hint what its surface might be like, and it was only with the development of spectroscopic and ultraviolet observations that more of its secrets were revealed. Spectroscopic observations in the 1900s gave the first clues about the Venusian rotation. Vesto Slipher tried to measure the Doppler shift of light from Venus, but found he could not detect any rotation. He surmised the planet must have a much longer rotation period than had previously been thought. The first ultraviolet observations were carried out in the 1920s, when Frank E. 
Ross found that ultraviolet photographs revealed considerable detail that was absent in visible and infrared radiation. He suggested this was due to a dense, yellow lower atmosphere with high cirrus clouds above it. It had been noted that Venus had no discernible oblateness in its disk, suggesting a slow rotation, and some astronomers concluded based on this that it was tidally locked like Mercury was believed to be at the time; but other researchers had detected a significant quantity of heat coming from the planet's nightside, suggesting a quick rotation (a high surface temperature was not suspected at the time), confusing the issue. Later work in the 1950s showed the rotation was retrograde. Space age Humanity's first interplanetary spaceflight was achieved in 1961 with the robotic space probe Venera 1 of the Soviet Venera programme flying to Venus, but it lost contact en route. The first successful interplanetary mission, also to Venus, was Mariner 2 of the United States' Mariner programme, passing on 14 December 1962 at above the surface of Venus and gathering data on the planet's atmosphere. Additionally radar observations of Venus were first carried out in the 1960s, and provided the first measurements of the rotation period, which were close to the actual value. Venera 3, launched in 1966, became humanity's first probe and lander to reach and impact another celestial body other than the Moon, but could not return data as it crashed into the surface of Venus. In 1967, Venera 4 was launched and successfully deployed science experiments in the Venusian atmosphere before impacting. Venera 4 showed the surface temperature was hotter than Mariner 2 had calculated, at almost , determined that the atmosphere was 95% carbon dioxide (), and discovered that Venus's atmosphere was considerably denser than Venera 4 designers had anticipated. In an early example of space cooperation the data of Venera 4 was joined with the 1967 Mariner 5 data, analysed by a combined Soviet–American science team in a series of colloquia over the following year. On 15 December 1970, Venera 7 became the first spacecraft to soft land on another planet and the first to transmit data from there back to Earth. In 1974, Mariner 10 swung by Venus to bend its path towards Mercury and took ultraviolet photographs of the clouds, revealing the extraordinarily high wind speeds in the Venusian atmosphere. This was the first interplanetary gravity assist ever used, a technique which would be used by later probes. Radar observations in the 1970s revealed details of the Venusian surface for the first time. Pulses of radio waves were beamed at the planet using the radio telescope at Arecibo Observatory, and the echoes revealed two highly reflective regions, designated the Alpha and Beta regions. The observations revealed a bright region attributed to mountains, which was called Maxwell Montes. These three features are now the only ones on Venus that do not have female names. In 1975, the Soviet Venera 9 and 10 landers transmitted the first images from the surface of Venus, which were in black and white. NASA obtained additional data with the Pioneer Venus project, consisting of two separate missions: the Pioneer Venus Multiprobe and Pioneer Venus Orbiter, orbiting Venus between 1978 and 1992. In 1982 the first colour images of the surface were obtained with the Soviet Venera 13 and 14 landers. 
After Venera 15 and 16 operated in orbit between 1983 and 1984, conducting detailed mapping of 25% of Venus's terrain (from the north pole to 30°N latitude), the Soviet Venera programme came to a close. In 1985 the Soviet Vega programme's Vega 1 and Vega 2 missions carried the last entry probes and the first extraterrestrial aerobots, inflatable balloons that achieved the first atmospheric flight outside Earth. Between 1990 and 1994, Magellan operated in orbit, mapping the surface of Venus until it was deorbited. Furthermore, probes like Galileo (1990), Cassini–Huygens (1998/1999), and MESSENGER (2006/2007) visited Venus with flybys en route to other destinations. In April 2006, Venus Express, the first dedicated Venus mission by the European Space Agency (ESA), entered orbit around Venus and provided unprecedented observation of Venus's atmosphere. ESA concluded the Venus Express mission in December 2014 and deorbited the spacecraft in January 2015. In 2010, IKAROS, the first successful interplanetary solar sail spacecraft, travelled to Venus for a flyby. Between 2015 and 2024 Japan's Akatsuki probe was active in orbit around Venus, and BepiColombo performed flybys in 2020/2021. Active and future missions Currently, NASA's Parker Solar Probe and BepiColombo have been performing flybys of Venus. Besides these flybys, several probes are under development, and multiple proposed missions are still in their early conceptual stages. Venus has been identified as an important case for future research into understanding the origins of the Solar System and Earth, and whether planetary systems like ours are common or rare in the universe; how planetary bodies evolve from their primordial states to today's diverse objects; and the development of conditions leading to habitable environments and life. Search for life Speculation on the possibility of life on Venus's surface decreased significantly after the early 1960s, when it became clear that conditions were extreme compared to those on Earth. Venus's extreme temperatures and atmospheric pressure make water-based life, as currently known, unlikely. Some scientists have speculated that thermoacidophilic extremophile microorganisms might exist in the cooler, acidic upper layers of the Venusian atmosphere. Such speculations go back to 1967, when Carl Sagan and Harold J. Morowitz suggested in a Nature article that tiny objects detected in Venus's clouds might be organisms similar to Earth's bacteria (which are of approximately the same size): While the surface conditions of Venus make the hypothesis of life there implausible, the clouds of Venus are a different story altogether. As was pointed out some years ago, water, carbon dioxide and sunlight—the prerequisites for photosynthesis—are plentiful in the vicinity of the clouds. In August 2019, astronomers led by Yeon Joo Lee reported that a long-term pattern of absorbance and albedo changes in the atmosphere of Venus, caused by "unknown absorbers" that may be chemicals or even large colonies of microorganisms high in the atmosphere, affects the planet's climate. Their light absorbance is almost identical to that of micro-organisms in Earth's clouds. Similar conclusions have been reached by other studies. 
In September 2020, a team of astronomers led by Jane Greaves from Cardiff University announced the likely detection of phosphine, a gas not known to be produced by any chemical process on the Venusian surface or in its atmosphere, in the upper levels of the planet's clouds. One proposed source for this phosphine is living organisms. The phosphine was detected at heights of at least above the surface, and primarily at mid-latitudes with none detected at the poles. The discovery prompted NASA administrator Jim Bridenstine to publicly call for a new focus on the study of Venus, describing the phosphine find as "the most significant development yet in building the case for life off Earth". Subsequent analysis of the data processing used to identify phosphine in the atmosphere of Venus raised concerns that the detection line may be an artefact: the use of a 12th-order polynomial fit may have amplified noise and generated a false reading (see Runge's phenomenon). Observations of the atmosphere of Venus at other parts of the electromagnetic spectrum in which a phosphine absorption line would be expected did not detect phosphine, and by late October 2020, re-analysis of the data with a proper subtraction of background did not show a statistically significant detection of phosphine. Members of Greaves's team are working, as part of a project led by MIT with the rocket company Rocket Lab, to send the first private interplanetary spacecraft, a probe that would enter the atmosphere of Venus to look for organics, set to launch in January 2025. Planetary protection The Committee on Space Research is a scientific organization established by the International Council for Science. Among its responsibilities is the development of recommendations for avoiding interplanetary contamination. For this purpose, space missions are categorized into five groups. Because of its harsh surface environment, Venus falls under planetary protection category two, which indicates that there is only a remote chance that spacecraft-borne contamination could compromise investigations. Human presence Venus was the site of the first interplanetary human presence, mediated through robotic missions, including the first successful landings on another planet and on any extraterrestrial body other than the Moon. Akatsuki is currently in orbit, and other probes routinely use Venus for gravity assist manoeuvres, capturing some data about the planet on the way. The only nation that has sent lander probes to the surface of Venus is the Soviet Union, a fact that Russian officials have used to call Venus a "Russian planet". Crewed flight Since the 1960s, studies of routes for crewed missions to Mars have proposed opposition-class missions with Venus gravity-assist flybys instead of direct conjunction-class missions, arguing that they would be quicker and safer, with better return or abort flight windows and the same or less radiation exposure than direct Mars flights. Early in the space age, the Soviet Union and the United States proposed the TMK-MAVR and Manned Venus Flyby crewed flyby missions to Venus, though they were never realized. Habitation While the surface conditions of Venus are inhospitable, the atmospheric pressure, temperature, and solar and cosmic radiation 50 km above the surface are similar to those at Earth's surface. With this in mind, Soviet engineer Sergey Zhitomirskiy (Сергей Житомирский, 1929–2004) in 1971 and NASA aerospace engineer Geoffrey A. 
Landis in 2003 suggested the use of aerostats for crewed exploration, and possibly for permanent "floating cities", in the Venusian atmosphere, an alternative to the popular idea of living on planetary surfaces such as Mars. Among the many engineering challenges for any human presence in the atmosphere of Venus are the corrosive amounts of sulfuric acid in the atmosphere. NASA's High Altitude Venus Operational Concept is a mission concept that proposed a crewed aerostat design. In culture Venus is a primary feature of the night sky, and so has been of remarkable importance in mythology, astrology and fiction throughout history and in different cultures. Several hymns praise Inanna in her role as the goddess of the planet Venus. Theology professor Jeffrey Cooley has argued that, in many myths, Inanna's movements may correspond with the movements of the planet Venus in the sky. The discontinuous movements of Venus relate both to mythology and to Inanna's dual nature. In Inanna's Descent to the Underworld, unlike any other deity, Inanna is able to descend into the netherworld and return to the heavens. The planet Venus appears to make a similar descent, setting in the west and then rising again in the east. An introductory hymn describes Inanna leaving the heavens and heading for Kur, presumably the mountains, replicating the rising and setting of Inanna in the west. The myths Inanna and Shukaletuda and Inanna's Descent into the Underworld appear to parallel the motion of the planet Venus. In Inanna and Shukaletuda, Shukaletuda is described as scanning the heavens in search of Inanna, possibly searching the eastern and western horizons. In the same myth, while searching for her attacker, Inanna herself makes several movements that correspond with the movements of Venus in the sky. The ancient Egyptians and ancient Greeks possibly knew, by the second millennium BC or at the latest by the Late Period under Mesopotamian influence, that the morning star and the evening star were one and the same. The Egyptians knew the morning star as Tioumoutiri and the evening star as Ouaiti. They depicted Venus at first as a phoenix or heron (see Bennu), calling it "the crosser" or "star with crosses" and associating it with Osiris, and later depicted it as two-headed with human or falcon heads and associated it with Horus, son of Isis (who, during the even later Hellenistic period, was identified together with Hathor with Aphrodite). The Greeks used the names Phōsphoros (Φωσϕόρος), meaning "light-bringer" (whence the element phosphorus; alternately Ēōsphoros (Ἠωσϕόρος), meaning "dawn-bringer"), for the morning star, and Hesperos (Ἕσπερος), meaning "Western one", for the evening star, both children of the dawn goddess Eos and therefore grandchildren of Aphrodite. Though by the Roman era they were recognized as one celestial object, known as "the star of Venus", the traditional two Greek names continued to be used, though usually translated to Latin as Lūcifer and Vesper. Classical poets such as Homer, Sappho, Ovid and Virgil spoke of the star and its light, and later poets such as William Blake, Robert Frost, Letitia Elizabeth Landon, Alfred Lord Tennyson and William Wordsworth wrote odes to it. The composer Gustav Holst included it as the second movement of his suite The Planets. In India, Shukra Graha ("the planet Shukra") is named after the powerful saint Shukra; the name, which is used in Indian Vedic astrology, means "clear, pure" or "brightness, clearness" in Sanskrit. 
One of the nine Navagraha, it is held to affect wealth, pleasure and reproduction; it was the son of Bhrgu, preceptor of the Daityas, and guru of the Asuras. The English name of the planet Venus is its ancient Roman name: the Romans named it after their goddess of love, who was in turn based on the ancient Greek goddess of love Aphrodite, herself based on the similar Sumerian goddess Inanna (Ishtar in Akkadian religion), all of whom were associated with the planet. The weekday of the planet and these goddesses is Friday, named after the Germanic goddess Frigg, who has been associated with the Roman goddess Venus. Venus is known as Kejora in Indonesian and Malaysian Malay. In Chinese the planet is called Jīn-xīng (金星), the golden planet of the metal element. Modern Chinese, Japanese, Korean and Vietnamese cultures refer to the planet literally as the "metal star" (), based on the Five elements. The Maya considered Venus to be the most important celestial body after the Sun and Moon. They called it Chac ek, or Noh Ek', "the Great Star". The cycles of Venus were important to their calendar and were described in some of their books, such as the Maya Codex of Mexico and the Dresden Codex. The Estrella Solitaria ("Lone Star") Flag of Chile depicts Venus. Modern culture The impenetrable Venusian cloud cover gave science fiction writers free rein to speculate on conditions at its surface, all the more so when early observations showed that not only was it similar in size to Earth, it possessed a substantial atmosphere. Closer to the Sun than Earth, the planet was often depicted as warmer, but still habitable by humans. The genre reached its peak between the 1930s and 1950s, at a time when science had revealed some aspects of Venus, but not yet the harsh reality of its surface conditions. Findings from the first missions to Venus showed reality to be quite different and brought this particular genre to an end. As scientific knowledge of Venus advanced, science fiction authors tried to keep pace, particularly by conjecturing human attempts to terraform Venus. Symbols The symbol of a circle with a small cross beneath is the so-called Venus symbol, so named because it is used as the astronomical symbol for Venus. The symbol is of ancient Greek origin and more generally represents femininity; it was adopted by biology as the gender symbol for female, just as the Mars symbol is used for male and sometimes the Mercury symbol for hermaphrodite. This gendered association of Venus and Mars has been used to pair them heteronormatively, describing women and men stereotypically as being so different that they can be understood as coming from different planets, an understanding popularized in 1992 by the book titled Men Are from Mars, Women Are from Venus. The Venus symbol was also used in Western alchemy to represent the element copper (just as the symbol of Mercury is also the symbol of the element mercury), and since polished copper has been used for mirrors since antiquity, the symbol for Venus has sometimes been called the Venus mirror, representing the mirror of the goddess, although this origin is regarded as unlikely. Besides the Venus symbol, many other symbols have been associated with Venus; other common ones are the crescent and, particularly, the star, as with the Star of Ishtar.
Physical sciences
Astronomy
null
32754
https://en.wikipedia.org/wiki/Valve
Valve
A valve is a device or natural object that regulates, directs or controls the flow of a fluid (gases, liquids, fluidized solids, or slurries) by opening, closing, or partially obstructing various passageways. Valves are technically fittings, but are usually discussed as a separate category. In an open valve, fluid flows in a direction from higher pressure to lower pressure. The word is derived from the Latin valva, the moving part of a door, in turn from volvere, to turn, roll. The simplest, and very ancient, valve is simply a freely hinged flap which swings down to obstruct fluid (gas or liquid) flow in one direction, but is pushed up by the flow itself when the flow is moving in the opposite direction. This is called a check valve, as it prevents or "checks" the flow in one direction. Modern control valves may regulate pressure or flow downstream and operate on sophisticated automation systems. Valves have many uses, including controlling water for irrigation, industrial uses for controlling processes, residential uses such as on/off and pressure control to dish and clothes washers and taps in the home. Valves are also used in the military and transport sectors. In HVAC ductwork and other near-atmospheric air flows, valves are instead called dampers. In compressed air systems, however, valves are used with the most common type being ball valves. Applications Valves are found in virtually every industrial process, including water and sewage processing, mining, power generation, processing of oil, gas and petroleum, food manufacturing, chemical and plastic manufacturing and many other fields. People in developed nations use valves in their daily lives, including plumbing valves, such as taps for tap water, gas control valves on cookers, small valves fitted to washing machines and dishwashers, safety devices fitted to hot water systems, and poppet valves in car engines. In nature, there are valves, for example one-way valves in veins controlling the blood circulation, and heart valves controlling the flow of blood in the chambers of the heart and maintaining the correct pumping action. Valves may be operated manually, either by a handle or grip, lever, pedal or wheel. Valves may also be automatic, driven by changes in pressure, temperature, or flow. These changes may act upon a diaphragm or a piston which in turn activates the valve, examples of this type of valve found commonly are safety valves fitted to hot water systems or boilers. More complex control systems using valves requiring automatic control based on an external input (i.e., regulating flow through a pipe to a changing set point) require an actuator. An actuator will stroke the valve depending on its input and set-up, allowing the valve to be positioned accurately, and allowing control over a variety of requirements. Variation Valves vary widely in form and application. Sizes typically range from 0.1 mm to 60 cm. Special valves can have a diameter exceeding 5 meters. Valve costs range from simple inexpensive disposable valves to specialized valves which cost thousands of dollars (US) per inch of the diameter of the valve. Disposable valves may be found in common household items including mini-pump dispensers and aerosol cans. A common use of the term valve refers to the poppet valves found in the vast majority of modern internal combustion engines such as those in most fossil fuel powered vehicles which are used to control the intake of the fuel-air mixture and allow exhaust gas venting. 
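The hinged-flap check valve described above admits flow in one direction only. The following is a minimal sketch of that behaviour, assuming a generic square-root (orifice-style) pressure–flow relationship; the coefficient and pressure values are illustrative assumptions, not specifications of any real valve:

import math

def check_valve_flow(p_upstream_kpa: float, p_downstream_kpa: float, k: float = 0.8) -> float:
    """Return forward flow (arbitrary units); zero when the pressure difference would drive reverse flow."""
    dp = p_upstream_kpa - p_downstream_kpa
    if dp <= 0:
        return 0.0                 # flap swings shut: reverse flow is "checked"
    return k * math.sqrt(dp)       # flap pushed open by forward flow

print(check_valve_flow(300.0, 100.0))  # forward pressure difference -> positive flow
print(check_valve_flow(100.0, 300.0))  # reverse pressure difference -> 0.0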
Types Valves are quite diverse and may be classified into a number of basic types. Valves may also be classified by how they are actuated: Hydraulic Pneumatic Manual Solenoid valve Motor Components The main parts of the most usual type of valve are the body and the bonnet. These two parts form the casing that holds the fluid going through the valve. Body The valve's body is the outer casing of most or all of the valve that contains the internal parts or trim. The bonnet is the part of the encasing through which the stem (see below) passes and that forms a guide and seal for the stem. The bonnet typically screws into or is bolted to the valve body. Valve bodies are usually metallic or plastic. Brass, bronze, gunmetal, cast iron, steel, alloy steels and stainless steels are very common. Seawater applications, like desalination plants, often use duplex valves, as well as super duplex valves, due to their corrosion resistant properties, particularly against warm seawater. Alloy 20 valves are typically used in sulphuric acid plants, whilst monel valves are used in hydrofluoric acid (HF Acid) plants. Hastelloy valves are often used in high temperature applications, such as nuclear plants, whilst inconel valves are often used in hydrogen applications. Plastic bodies are used for relatively low pressures and temperatures. PVC, PP, PVDF and glass-reinforced nylon are common plastics used for valve bodies. Bonnet A bonnet acts as a cover on the valve body. It is commonly semi-permanently screwed into the valve body or bolted onto it. During manufacture of the valve, the internal parts are put into the body and then the bonnet is attached to hold everything together inside. To access internal parts of a valve, a user would take off the bonnet, usually for maintenance. Many valves do not have bonnets; for example, plug valves usually do not have bonnets. Many ball valves do not have bonnets since the valve body is put together in a different style, such as being screwed together at the middle of the valve body. Ports Ports are passages that allow fluid to pass through the valve. Ports are obstructed by the valve member or disc to control flow. Valves most commonly have 2 ports, but may have as many as 20. The valve is almost always connected at its ports to pipes or other components. Connection methods include threadings, compression fittings, glue, cement, flanges, or welding. Handle or actuator A handle is used to manually control a valve from outside the valve body. Automatically controlled valves often do not have handles, but some may have a handle (or something similar) anyway to manually override automatic control, such as a stop-check valve. An actuator is a mechanism or device to automatically or remotely control a valve from outside the body. Some valves have neither handle nor actuator because they automatically control themselves from inside; for example, check valves and relief valves may have neither. Disc A disc, also known as a valve member, is a movable obstruction inside the stationary body that adjustably restricts flow through the valve. Although traditionally disc-shaped, discs come in various shapes. Depending on the type of valve, a disc can move linearly inside a valve, or rotate on the stem (as in a butterfly valve), or rotate on a hinge or trunnion (as in a check valve). A ball is a round valve member with one or more paths between ports passing through it. By rotating the ball, flow can be directed between different ports. 
Ball valves use spherical rotors with a cylindrical hole drilled as a fluid passage. Plug valves use cylindrical or conically tapered rotors called plugs. Other round shapes for rotors are possible as well in rotor valves, as long as the rotor can be turned inside the valve body. However, not all round or spherical discs are rotors; for example, a ball check valve uses the ball to block reverse flow, but is not a rotor because operating the valve does not involve rotation of the ball. Seat The "seat" is the interior surface of the body which contacts the disc to form a leak-tight seal. In discs that move linearly or swing on a hinge or trunnion, the disc comes into contact with the seat only when the valve is shut. In disks that rotate, the seat is always in contact with the disk, but the area of contact changes as the disc is turned. The seat always remains stationary relative to the body. Seats are classified by whether they are cut directly into the body, or if they are made of a different material: Hard seats are integral to the valve body. Nearly all hard seated metal valves have a small amount of leakage. Soft seats are fitted to the valve body and made of softer materials such as PTFE or various elastomers such as NBR, EPDM, or FKM depending on the maximum operating temperature. A closed soft seated valve is much less liable to leak when shut while hard seated valves are more durable. Gate, globe, and check valves are usually hard seated while butterfly, ball, plug, and diaphragm valves are usually soft seated. Stem The stem transmits motion from the handle or controlling device to the disc. The stem typically passes through the bonnet when present. In some cases, the stem and the disc can be combined in one piece, or the stem and the handle are combined in one piece. The motion transmitted by the stem may be a linear force, a rotational torque, or some combination of these (Angle valve using torque reactor pin and Hub Assembly). The valve and stem can be threaded such that the stem can be screwed into or out of the valve by turning it in one direction or the other, thus moving the disc back or forth inside the body. Packing is often used between the stem and the bonnet to maintain a seal. Some valves have no external control and do not need a stem as in most check valves. Valves whose disc is between the seat and the stem and where the stem moves in a direction into the valve to shut it are normally-seated or front seated. Valves whose seat is between the disc and the stem and where the stem moves in a direction out of the valve to shut it are reverse-seated or back seated. These terms don't apply to valves with no stem or valves using rotors. Gaskets Gaskets are the mechanical seals, or packings, used to prevent the leakage of a gas or fluids from valves. Valve balls A valve ball is also used for severe duty, high-pressure, high-tolerance applications. They are typically made of stainless steel, titanium, Stellite, Hastelloy, brass, or nickel. They can also be made of different types of plastic, such as ABS, PVC, PP or PVDF. Spring Many valves have a spring for spring-loading, to normally shift the disc into some position by default but allow control to reposition the disc. Relief valves commonly use a spring to keep the valve shut, but allow excessive pressure to force the valve open against the spring-loading. Coil springs are normally used. Typical spring materials include zinc plated steel, stainless steel, and for high temperature applications Inconel X750. 
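The spring-loaded relief valve mentioned in the spring paragraph above opens once the fluid's force on the disc exceeds the spring preload. A minimal sketch of that balance, with a made-up seat area and preload chosen only for illustration:

SEAT_AREA_M2 = 5.0e-4        # assumed effective disc area exposed to the fluid
SPRING_PRELOAD_N = 500.0     # assumed force holding the disc on its seat

def relief_valve_open(pressure_pa: float) -> bool:
    """True once the pressure force on the disc overcomes the spring preload."""
    return pressure_pa * SEAT_AREA_M2 > SPRING_PRELOAD_N

set_point_pa = SPRING_PRELOAD_N / SEAT_AREA_M2   # 1.0 MPa for these assumed numbers
print(relief_valve_open(0.8e6))   # below the set point -> False (valve stays shut)
print(relief_valve_open(1.2e6))   # above the set point -> True (valve lifts)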
Trim The internal elements of a valve are collectively referred to as a valve's trim. According to API Standards 600, "Steel Gate Valve-Flanged and Butt-welding Ends, Bolted Bonnets", the trim consists of stem, seating surface in the body, gate seating surface, bushing or a deposited weld for the backseat and stem hole guide, and small internal parts that normally contact the service fluid, excluding the pin that is used to make a stem-to-gate connection (this pin shall be made of an austenitic stainless steel material). Valve operating positions Valve positions are operating conditions determined by the position of the disc or rotor in the valve. Some valves are made to be operated in a gradual change between two or more positions. Return valves and non-return valves allow fluid to move in 2 or 1 directions respectively. Two-port valves Operating positions for 2-port valves can be either shut (closed) so that no flow at all goes through, fully open for maximum flow, or sometimes partially open to any degree in between. Many valves are not designed to precisely control intermediate degree of flow; such valves are considered to be either open or shut. Some valves are specially designed to regulate varying amounts of flow. Such valves have been called by various names such as regulating, throttling, metering, or needle valves. For example, needle valves have elongated conically tapered discs and matching seats for fine flow control. For some valves, there may be a mechanism to indicate by how much the valve is open, but in many cases other indications of flow rate are used, such as separate flow meters. In plants with remote-controlled process operation, such as oil refineries and petrochemical plants, some 2-way valves can be designated as normally closed (NC) or normally open (NO) during regular operation. Examples of normally-closed valves are sampling valves, which are only opened while a sample is taken. Other examples of normally-closed valves are emergency shutdown valves, which are kept open when the system is in operation and will automatically shut by taking away the power supply. This happens when there is a problem with a unit or a section of a fluid system such as a leak in order to isolate the problem from the rest of the system. Examples of normally-open valves are purge-gas supply valves or emergency-relief valves. When there is a problem these valves open (by switching them 'off') causing the unit to be flushed and emptied. Although many 2-way valves are made in which the flow can go in either direction between the two ports, when a valve is placed into a certain application, flow is often expected to go from one certain port on the upstream side of the valve, to the other port on the downstream side. Pressure regulators are variations of valves in which flow is controlled to produce a certain downstream pressure, if possible. They are often used to control flow of gas from a gas cylinder. A back-pressure regulator is a variation of a valve in which flow is controlled to maintain a certain upstream pressure, if possible. Three-port valves Valves with three ports serve many different functions. A few of the possibilities are listed here. Three-way ball valves come with T- or L-shaped fluid passageways inside the rotor. The T valve might be used to permit connection of one inlet to either or both outlets or connection of the two outlets. The L valve could be used to permit disconnection of both or connection of either but not both of two inlets to one outlet. 
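To illustrate the T- and L-shaped passageways just described for three-way ball valves, the sketch below models which ports a hypothetical rotor joins at each quarter-turn position; the port names and the mapping of angles to connections are assumptions made for the example, not taken from any standard:

# Hypothetical three-port ball valve with two inlets ("left", "right") and one outlet ("bottom").
# Each entry maps a quarter-turn rotor angle to the set of ports joined by the internal passage.
T_PORT_POSITIONS = {
    0:   {"left", "right", "bottom"},   # T passage: both inlets connected to the outlet
    90:  {"left", "bottom"},            # left inlet to outlet only
    180: {"left", "right"},             # inlet to inlet, outlet shut off
    270: {"right", "bottom"},           # right inlet to outlet only
}
L_PORT_POSITIONS = {
    0:   {"left", "bottom"},            # L passage: one inlet at a time
    90:  {"right", "bottom"},
    180: set(),                         # neither inlet connected to the outlet
    270: set(),
}

def connected(positions: dict, angle: int, a: str, b: str) -> bool:
    """Return True if ports a and b are joined at the given rotor angle."""
    joined = positions.get(angle, set())
    return a in joined and b in joined

# An L-port rotor selects either inlet but never both at once; a T-port rotor can join both.
assert connected(L_PORT_POSITIONS, 0, "left", "bottom") and not connected(L_PORT_POSITIONS, 0, "right", "bottom")
assert connected(T_PORT_POSITIONS, 0, "left", "bottom") and connected(T_PORT_POSITIONS, 0, "right", "bottom")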
Shuttle valves automatically connect the higher pressure inlet to the outlet while (in some configurations) preventing flow from one inlet to the other. Single handle mixer valves produce a variable mixture of hot and cold water at a variable flow rate under control of a single handle. Thermostatic mixing valves mix hot and cold water to produce a constant temperature in the presence of variable pressures and temperatures on the two input ports. Four-port valves A 4-port valve is a valve whose body has four ports equally spaced round the body and the disc has two passages to connect adjacent ports. It is operated with two positions. It can be used to isolate and to simultaneously bypass a sampling cylinder installed on a pressurized water line. It is useful to take a fluid sample without affecting the pressure of a hydraulic system and to avoid degassing (no leak, no gas loss or air entry, no external contamination).... Control Many valves are controlled manually with a handle attached to the stem. If the handle is turned ninety degrees between operating positions, the valve is called a quarter-turn valve. Butterfly, ball valves, and plug valves are often quarter-turn valves. If the handle is circular with the stem as the axis of rotation in the center of the circle, then the handle is called a handwheel. Valves can also be controlled by actuators attached to the stem. They can be electromechanical actuators such as an electric motor or solenoid, pneumatic actuators which are controlled by air pressure, or hydraulic actuators which are controlled by the pressure of a liquid such as oil or water. Actuators can be used for the purposes of automatic control such as in washing machine cycles, remote control such as the use of a centralised control room, or because manual control is too difficult such as when the valve is very large. Pneumatic actuators and hydraulic actuators need pressurised air or liquid lines to supply the actuator: an inlet line and an outlet line. Pilot valves are valves which are used to control other valves. Pilot valves in the actuator lines control the supply of air or liquid going to the actuators. The fill valve in a toilet water tank is a liquid level-actuated valve. When a high water level is reached, a mechanism shuts the valve which fills the tank. In some valve designs, the pressure of the flow fluid itself or pressure difference of the flow fluid between the ports automatically controls flow through the valve. Other considerations Valves are typically rated for maximum temperature and pressure by the manufacturer. The wetted materials in a valve are usually identified also. Some valves rated at very high pressures are available. When a designer, engineer, or user decides to use a valve for an application, he/she should ensure the rated maximum temperature and pressure are never exceeded and that the wetted materials are compatible with the fluid the valve interior is exposed to. In Europe, valve design and pressure ratings are subject to statutory regulation under the Pressure Equipment Directive 97/23/EC (PED). Some fluid system designs, especially in chemical or power plants, are schematically represented in piping and instrumentation diagrams. In such diagrams, different types of valves are represented by certain symbols. Valves in good condition should be leak-free. However, valves may eventually wear out from use and develop a leak, either between the inside and outside of the valve or, when the valve is shut to stop flow, between the disc and the seat. 
A particle trapped between the seat and disc could also cause such leakage. 
Technology
Components_2
null
32771
https://en.wikipedia.org/wiki/Venom
Venom
Venom or zootoxin is a type of toxin produced by an animal that is actively delivered through a wound by means of a bite, sting, or similar action. The toxin is delivered through a specially evolved venom apparatus, such as fangs or a stinger, in a process called envenomation. Venom is often distinguished from poison, which is a toxin that is passively delivered by being ingested, inhaled, or absorbed through the skin, and toxungen, which is actively transferred to the external surface of another animal via a physical delivery mechanism. Venom has evolved in terrestrial and marine environments and in a wide variety of animals: both predators and prey, and both vertebrates and invertebrates. Venoms kill through the action of at least four major classes of toxin, namely necrotoxins and cytotoxins, which kill cells; neurotoxins, which affect nervous systems; myotoxins, which damage muscles; and haemotoxins, which disrupt blood clotting. Venomous animals cause tens of thousands of human deaths per year. Venoms are often complex mixtures of toxins of differing types. Toxins from venom are used to treat a wide range of medical conditions including thrombosis, arthritis, and some cancers. Studies in venomics are investigating the potential use of venom toxins for many other conditions. Evolution The use of venom across a wide variety of taxa is an example of convergent evolution. It is difficult to conclude exactly how this trait came to be so intensely widespread and diversified. The multigene families that encode the toxins of venomous animals are actively selected, creating more diverse toxins with specific functions. Venoms adapt to their environment and victims, evolving to become maximally efficient on a predator's particular prey (particularly the precise ion channels within the prey). Consequently, venoms become specialized to an animal's standard diet. Mechanisms Venoms cause their biological effects via the many toxins that they contain; some venoms are complex mixtures of toxins of differing types. Major classes of toxin in venoms include: Necrotoxins, which cause necrosis (i.e., death) in the cells they encounter. The venoms of vipers and bees contain phospholipases; viper venoms often also contain trypsin-like serine proteases. Neurotoxins, which primarily affect the nervous systems of animals, such as ion channel toxins. These are found in many venomous taxa, including black widow spiders, scorpions, box jellyfish, cone snails, centipedes and blue-ringed octopuses. Myotoxins, which damage muscles by binding to a receptor. These small, basic peptides are found in snake (such as rattlesnake) and lizard venoms. Cytotoxins, which kill individual cells and are found in the apitoxin of honey bees and the venom of black widow spiders. Taxonomic range Venom is widely distributed taxonomically, being found in both invertebrates and vertebrates, in aquatic and terrestrial animals, and among both predators and prey. The major groups of venomous animals are described below. Arthropods Venomous arthropods include spiders, which use fangs on their chelicerae to inject venom, and centipedes, which use modified to deliver venom, while scorpions and stinging insects inject venom with a sting. In bees and wasps, the stinger is a modified ovipositor (egg-laying device). In Polistes fuscatus, the female continuously releases a venom that contains a sex pheromone that induces copulatory behavior in males. 
In wasps such as Polistes exclamans, venom is used as an alarm pheromone, coordinating a response from the nest and attracting nearby wasps to attack the predator. In some species, such as Parischnogaster striatula, venom is applied all over the body as an antimicrobial protection. Many caterpillars have defensive venom glands associated with specialized bristles on the body called urticating hairs. These are usually merely irritating, but those of the Lonomia moth can be fatal to humans. Bees synthesize and employ an acidic venom (apitoxin) to defend their hives and food stores, whereas wasps use a chemically different venom to paralyse prey, so their prey remains alive to provision the food chambers of their young. The use of venom is much more widespread than just these examples; many other insects, such as true bugs and many ants, also produce venom. The ant species Polyrhachis dives uses venom topically for the sterilisation of pathogens. Other invertebrates There are venomous invertebrates in several phyla, including jellyfish such as the dangerous box jellyfish, the Portuguese man-of-war (a siphonophore) and sea anemones among the Cnidaria, sea urchins among the Echinodermata, and cone snails and cephalopods, including octopuses, among the Molluscs. Vertebrates Fish Venom is found in some 200 cartilaginous fishes, including stingrays, sharks, and chimaeras; the catfishes (about 1000 venomous species); and 11 clades of spiny-rayed fishes (Acanthomorpha), containing the scorpionfishes (over 300 species), stonefishes (over 80 species), gurnard perches, blennies, rabbitfishes, surgeonfishes, some velvetfishes, some toadfishes, coral crouchers, red velvetfishes, scats, rockfishes, deepwater scorpionfishes, waspfishes, weevers, and stargazers. Amphibians Some salamanders can extrude sharp venom-tipped ribs. Two frog species in Brazil have tiny spines around the crown of their skulls which, on impact, deliver venom into their targets. Reptiles Some 450 species of snake are venomous. Snake venom is produced by glands below the eye (the mandibular glands) and delivered to the target through tubular or channeled fangs. Snake venoms contain a variety of peptide toxins, including proteases, which hydrolyze protein peptide bonds; nucleases, which hydrolyze the phosphodiester bonds of DNA; and neurotoxins, which disrupt signalling in the nervous system. Snake venom causes symptoms including pain, swelling, tissue necrosis, low blood pressure, convulsions, haemorrhage (varying by species of snake), respiratory paralysis, kidney failure, coma, and death. Snake venom may have originated with duplication of genes that had been expressed in the salivary glands of ancestors. Venom is found in a few other reptiles such as the Mexican beaded lizard, the gila monster, and some monitor lizards, including the Komodo dragon. Mass spectrometry showed that the mixture of proteins present in their venom is as complex as the mixture of proteins found in snake venom. Some lizards possess a venom gland; they form a hypothetical clade, Toxicofera, containing the suborders Serpentes and Iguania and the families Varanidae, Anguidae, and Helodermatidae. Mammals Euchambersia, an extinct genus of therocephalians, is hypothesized to have had venom glands attached to its canine teeth. A few species of living mammals are venomous, including solenodons, shrews, the European mole, vampire bats, male platypuses, and slow lorises. Shrews have venomous saliva and most likely evolved their trait similarly to snakes. 
The presence of tarsal spurs akin to those of the platypus in many non-therian Mammaliaformes groups suggests that venom was an ancestral characteristic among mammals. Extensive research on platypuses shows that their toxin was initially formed from gene duplication, but data provides evidence that the further evolution of platypus venom does not rely as much on gene duplication as was once thought. Modified sweat glands are what evolved into platypus venom glands. Although it is proven that reptile and platypus venom have independently evolved, it is thought that there are certain protein structures that are favored to evolve into toxic molecules. This provides more evidence of why venom has become a homoplastic trait and why very different animals have convergently evolved. Venom and humans Envenomation resulted in 57,000 human deaths in 2013, down from 76,000 deaths in 1990. Venoms, found in over 173,000 species, have potential to treat a wide range of diseases, explored in over 5,000 scientific papers. In medicine, snake venom proteins are used to treat conditions including thrombosis, arthritis, and some cancers. Gila monster venom contains exenatide, used to treat type 2 diabetes. Solenopsins extracted from fire ant venom has demonstrated biomedical applications, ranging from cancer treatment to psoriasis. A branch of science, venomics, has been established to study the proteins associated with venom and how individual components of venom can be used for pharmaceutical means. Resistance Venom is used as a trophic weapon by many predator species. The coevolution between predators and prey is the driving force of venom resistance, which has evolved multiple times throughout the animal kingdom. The coevolution between venomous predators and venom-resistant prey has been described as a chemical arms race. Predator/prey pairs are expected to coevolve over long periods of time. As the predator capitalizes on susceptible individuals, the surviving individuals are limited to those able to evade predation. Resistance typically increases over time as the predator becomes increasingly unable to subdue resistant prey. The cost of developing venom resistance is high for both predator and prey. The payoff for the cost of physiological resistance is an increased chance of survival for prey, but it allows predators to expand into underutilised trophic niches. The California ground squirrel has varying degrees of resistance to the venom of the Northern Pacific rattlesnake. The resistance involves toxin scavenging and depends on the population. Where rattlesnake populations are denser, squirrel resistance is higher. Rattlesnakes have responded locally by increasing the effectiveness of their venom. The kingsnakes of the Americas are constrictors that prey on many venomous snakes. They have evolved resistance which does not vary with age or exposure. They are immune to the venom of snakes in their immediate environment, like copperheads, cottonmouths, and North American rattlesnakes, but not to the venom of, for example, king cobras or black mambas. Among marine animals, eels are resistant to sea snake venoms, which contain complex mixtures of neurotoxins, myotoxins, and nephrotoxins, varying according to species. Eels are especially resistant to the venom of sea snakes that specialise in feeding on them, implying coevolution; non-prey fishes have little resistance to sea snake venom. 
Clownfish always live among the tentacles of venomous sea anemones (an obligatory symbiosis for the fish), and are resistant to their venom. Only 10 known species of anemones are hosts to clownfish and only certain pairs of anemones and clownfish are compatible. All sea anemones produce venoms delivered through discharging nematocysts and mucous secretions. The toxins are composed of peptides and proteins. They are used to acquire prey and to deter predators by causing pain, loss of muscular coordination, and tissue damage. Clownfish have a protective mucus that acts as a chemical camouflage or macromolecular mimicry preventing "not self" recognition by the sea anemone and nematocyst discharge. Clownfish may acclimate their mucus to resemble that of a specific species of sea anemone.
Biology and health sciences
Animal: General
null
32781
https://en.wikipedia.org/wiki/Voyager%201
Voyager 1
Voyager 1 is a space probe launched by NASA on September 5, 1977, as part of the Voyager program to study the outer Solar System and the interstellar space beyond the Sun's heliosphere. It was launched 16 days after its twin, Voyager 2. It communicates through the NASA Deep Space Network (DSN) to receive routine commands and to transmit data to Earth. Real-time distance and velocity data are provided by NASA and JPL. At a distance of from Earth , it is the most distant human-made object from Earth. The probe made flybys of Jupiter, Saturn, and Saturn's largest moon, Titan. NASA had a choice of either doing a Pluto or Titan flyby; exploration of the moon took priority because it was known to have a substantial atmosphere. Voyager 1 studied the weather, magnetic fields, and rings of the two gas giants and was the first probe to provide detailed images of their moons. As part of the Voyager program and like its sister craft Voyager 2, the spacecraft's extended mission is to locate and study the regions and boundaries of the outer heliosphere and to begin exploring the interstellar medium. Voyager 1 crossed the heliopause and entered interstellar space on August 25, 2012, making it the first spacecraft to do so. Two years later, Voyager 1 began experiencing a third wave of coronal mass ejections from the Sun that continued to at least December 15, 2014, further confirming that the probe is in interstellar space. In 2017, the Voyager team successfully fired the spacecraft's trajectory correction maneuver (TCM) thrusters for the first time since 1980, enabling the mission to be extended by two to three years. Voyager 1s extended mission is expected to continue to return scientific data until at least 2025, with a maximum lifespan of until 2030. Its radioisotope thermoelectric generators (RTGs) may supply enough electric power to return engineering data until 2036. Mission background History A 1960s proposal for a Grand Tour to study the outer planets led NASA to begin work on a mission during the early 1970s. Information gathered by the Pioneer 10 spacecraft helped engineers design Voyager to better cope with the intense radiation around Jupiter. Still, shortly before launch, strips of kitchen-grade aluminum foil were applied to certain cables to improve radiation shielding. Initially, Voyager 1 was planned as Mariner 11 of the Mariner program. Due to budget cuts, the mission was reduced to a flyby of Jupiter and Saturn and renamed the Mariner Jupiter-Saturn probes. The name was changed to Voyager when the probe designs began to differ substantially from Mariner missions. Spacecraft components Voyager 1 was built by the Jet Propulsion Laboratory (JPL). It has 16 hydrazine thrusters, three-axis stabilization gyroscopes, and referencing instruments to keep the probe's radio antenna pointed toward Earth. Collectively, these instruments are part of the Attitude and Articulation Control Subsystem (AACS), along with redundant units of most instruments and eight backup thrusters. The spacecraft also included 11 scientific instruments to study celestial objects such as planets as it travels through space. Communication system The radio communication system of Voyager 1 was designed to be used up to and beyond the limits of the Solar System. It has a diameter high-gain Cassegrain antenna to send and receive radio waves via the three Deep Space Network stations on the Earth. 
The spacecraft normally transmits data to Earth over Deep Space Network Channel 18, using a frequency of either 2.3 GHz or 8.4 GHz, while signals from Earth to Voyager are transmitted at 2.1 GHz. When Voyager 1 is unable to communicate with the Earth, its digital tape recorder (DTR) can record about 67 kilobytes of data for later transmission. , signals from Voyager 1 take more than 22 hours to reach Earth. Power Voyager 1 has three radioisotope thermoelectric generators (RTGs) mounted on a boom. Each MHW-RTG contains 24 pressed plutonium-238 oxide spheres. The RTGs generated about 470 W of electric power at the time of launch, with the remainder being dissipated as waste heat. The power output of the RTGs declines over time due to the 87.7-year half-life of the fuel and degradation of the thermocouples, but they will continue to support some of its operations until at least 2025. Computers Unlike Voyager's other instruments, the operation of the cameras for visible light is not autonomous, but is controlled by an imaging parameter table contained in one of the digital computers, the Flight Data Subsystem (FDS). Since the 1990s, most space probes have been equipped with completely autonomous cameras. The computer command subsystem (CCS) controls the cameras. The CCS contains fixed computer programs, such as command decoding, fault-detection and fault-correction routines, antenna pointing routines, and spacecraft sequencing routines. This computer is an improved version of the one that was used in the 1970s Viking orbiters. The Attitude and Articulation Control Subsystem (AACS) controls the spacecraft orientation (its attitude). It keeps the high-gain antenna pointing towards the Earth, controls attitude changes, and points the scan platform. The custom-built AACS systems on both Voyagers are the same. Scientific instruments Mission profile Timeline of travel Launch and trajectory The Voyager 1 probe was launched on September 5, 1977, from Launch Complex 41 at the Cape Canaveral Air Force Station, aboard a Titan IIIE launch vehicle. The Voyager 2 probe had been launched two weeks earlier, on August 20, 1977. Despite being launched later, Voyager 1 reached both Jupiter and Saturn sooner, following a shorter trajectory. Voyager 1s launch almost failed because Titan's LR-91 second stage shut down prematurely, leaving of propellant unburned. Recognizing the deficiency, the Centaur stage's on-board computers ordered a burn that was far longer than planned in order to compensate. Centaur extended its own burn and was able to give Voyager 1 the additional velocity it needed. At cutoff, the Centaur was only 3.4 seconds from propellant exhaustion. If the same failure had occurred during Voyager 2s launch a few weeks earlier, the Centaur would have run out of propellant before the probe reached the correct trajectory. Jupiter was in a more favorable position vis-à-vis Earth during the launch of Voyager 1 than during the launch of Voyager 2. Voyager 1 initial orbit had an aphelion of , just a little short of Saturn's orbit of . Voyager 2s initial orbit had an aphelion of , well short of Saturn's orbit. Flyby of Jupiter Voyager 1 began photographing Jupiter in January 1979. Its closest approach to Jupiter was on March 5, 1979, at a distance of about from the planet's center. 
Because of the greater photographic resolution allowed by a closer approach, most observations of the moons, rings, magnetic fields, and the radiation belt environment of the Jovian system were made during the 48-hour period that bracketed the closest approach. Voyager 1 finished photographing the Jovian system in April 1979. The discovery of ongoing volcanic activity on the moon Io was probably the greatest surprise. It was the first time active volcanoes had been seen on another body in the Solar System. It appears that activity on Io affects the entire Jovian system. Io appears to be the primary source of matter that pervades the Jovian magnetosphere – the region of space that surrounds the planet influenced by the planet's strong magnetic field. Sulfur, oxygen, and sodium, apparently erupted by Io's volcanoes and sputtered off the surface by the impact of high-energy particles, were detected at the outer edge of the magnetosphere of Jupiter. The two Voyager space probes made a number of important discoveries about Jupiter, its satellites, its radiation belts, and its never-before-seen planetary rings. Flyby of Saturn The gravitational assist trajectories at Jupiter were successfully carried out by both Voyagers, and the two spacecraft went on to visit Saturn and its system of moons and rings. Voyager 1 encountered Saturn in November 1980, with the closest approach on November 12, 1980, when the space probe came within of Saturn's cloud-tops. The space probe's cameras detected complex structures in the rings of Saturn, and its remote sensing instruments studied the atmospheres of Saturn and its giant moon Titan. Voyager 1 found that about seven percent of the volume of Saturn's upper atmosphere is helium (compared with 11 percent of Jupiter's atmosphere), while almost all the rest is hydrogen. Since Saturn's internal helium abundance was expected to be the same as Jupiter's and the Sun's, the lower abundance of helium in the upper atmosphere may imply that the heavier helium may be slowly sinking through Saturn's hydrogen; that might explain the excess heat that Saturn radiates over energy it receives from the Sun. Winds blow at high speeds on Saturn. Near the equator, the Voyagers measured winds about . The wind blows mostly in an easterly direction. The Voyagers found aurora-like ultraviolet emissions of hydrogen at mid-latitudes in the atmosphere, and auroras at polar latitudes (above 65 degrees). The high-level auroral activity may lead to the formation of complex hydrocarbon molecules that are carried toward the equator. The mid-latitude auroras, which occur only in sunlit regions, remain a puzzle, since bombardment by electrons and ions, known to cause auroras on Earth, occurs primarily at high latitudes. Both Voyagers measured the rotation of Saturn (the length of a day) at 10 hours, 39 minutes, 24 seconds. Voyager 1s mission included a flyby of Titan, Saturn's largest moon, which had long been known to have an atmosphere. Images taken by Pioneer 11 in 1979 had indicated the atmosphere was substantial and complex, further increasing interest. The Titan flyby occurred as the spacecraft entered the system to avoid any possibility of damage closer to Saturn compromising observations, and approached to within , passing behind Titan as seen from Earth and the Sun. Voyager's measurement of the atmosphere's effect on sunlight and Earth-based measurement of its effect on the probe's radio signal were used to determine the atmosphere's composition, density, and pressure. 
Titan's mass was also measured by observing its effect on the probe's trajectory. The thick haze prevented any visual observation of the surface, but the measurement of the atmosphere's composition, temperature, and pressure led to speculation that lakes of liquid hydrocarbons could exist on the surface. Because observations of Titan were considered vital, the trajectory chosen for Voyager 1 was designed around the optimum Titan flyby, which took it below the south pole of Saturn and out of the plane of the ecliptic, ending its planetary science mission. Had Voyager 1 failed or been unable to observe Titan, Voyager 2's trajectory would have been altered to incorporate the Titan flyby, precluding any visit to Uranus and Neptune. The trajectory Voyager 1 was launched into would not have allowed it to continue on to Uranus and Neptune, but could have been altered to avoid a Titan flyby and travel from Saturn to Pluto, arriving in 1986. Exit from the heliosphere On February 14, 1990, Voyager 1 took the first "family portrait" of the Solar System as seen from outside, which includes the image of planet Earth known as Pale Blue Dot. Soon afterward, its cameras were deactivated to conserve energy and computer resources for other equipment. The camera software has been removed from the spacecraft, so it would now be complex to get them working again. Earth-side software and computers for reading the images are also no longer available. On February 17, 1998, Voyager 1 reached a distance of from the Sun and overtook Pioneer 10 as the most distant spacecraft from Earth. Traveling at about , it has the fastest heliocentric recession speed of any spacecraft. As Voyager 1 headed for interstellar space, its instruments continued to study the Solar System. Jet Propulsion Laboratory scientists used the plasma wave experiments aboard Voyager 1 and 2 to look for the heliopause, the boundary at which the solar wind transitions into the interstellar medium. , the probe was moving with a relative velocity to the Sun of about . With the velocity the probe is currently maintaining, Voyager 1 is traveling about per year, or about one light-year per 18,000 years. Termination shock Scientists at the Johns Hopkins University Applied Physics Laboratory believe that Voyager 1 entered the termination shock in February 2003. This marks the point where the solar wind slows to subsonic speeds. Some other scientists expressed doubt and discussed this in the journal Nature of November 6, 2003. The issue would not be resolved until other data became available, since Voyager 1 solar-wind detector ceased functioning in 1990. This failure meant that termination shock detection would have to be inferred from the data from the other instruments on board. In May 2005, a NASA press release said that the consensus was that Voyager 1 was then in the heliosheath. In a scientific session at the American Geophysical Union meeting in New Orleans on May 25, 2005, Ed Stone presented evidence that the craft crossed the termination shock in late 2004. This event is estimated to have occurred on December 15, 2004, at a distance of from the Sun. Heliosheath On March 31, 2006, amateur radio operators from AMSAT in Germany tracked and received radio waves from Voyager 1 using the dish at Bochum with a long integration technique. Retrieved data was checked and verified against data from the Deep Space Network station at Madrid, Spain. This seems to be the first such amateur tracking of Voyager 1. 
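The amateur detection at Bochum described above relied on a long integration technique: a carrier far too weak to see in any single snapshot becomes visible once many spectra are averaged, because the averaged noise floor becomes very smooth while a steady tone keeps adding up in the same bin. The sketch below is a generic signal-processing toy with made-up numbers, not a model of the actual Voyager link.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_fft, n_segments = 1000.0, 1024, 2000   # arbitrary illustrative values
tone_hz, tone_amp = 125.0, 0.05              # steady carrier buried well below the noise

t = np.arange(n_fft) / fs
avg_spectrum = np.zeros(n_fft // 2)
for _ in range(n_segments):
    x = tone_amp * np.sin(2 * np.pi * tone_hz * t) + rng.normal(0.0, 1.0, n_fft)
    avg_spectrum += np.abs(np.fft.rfft(x)[:n_fft // 2]) ** 2
avg_spectrum /= n_segments

peak_bin = int(np.argmax(avg_spectrum))
floor = np.median(avg_spectrum)
scatter = np.std(np.delete(avg_spectrum, peak_bin))
# In one segment the tone is invisible; after averaging it stands far above the floor.
print(f"peak at ~{peak_bin * fs / n_fft:.1f} Hz, "
      f"{(avg_spectrum[peak_bin] - floor) / scatter:.0f} sigma above the averaged noise floor")
```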
It was confirmed on December 13, 2010, that Voyager 1 had passed the reach of the radial outward flow of the solar wind, as measured by the Low Energy Charged Particle device. It is suspected that solar wind at this distance turns sideways because of interstellar wind pushing against the heliosphere. Since June 2010, detection of solar wind had been consistently at zero, providing conclusive evidence of the event. On this date, the spacecraft was approximately from the Sun. Voyager 1 was commanded to change its orientation to measure the sideways motion of the solar wind at that location in space in March 2011 (~33yr 6mo from launch). A test roll done in February had confirmed the spacecraft's ability to maneuver and reorient itself. The course of the spacecraft was not changed. It rotated 70 degrees counterclockwise with respect to Earth to detect the solar wind. This was the first time the spacecraft had done any major maneuvering since the Family Portrait photograph of the planets was taken in 1990. After the first roll the spacecraft had no problem in reorienting itself with Alpha Centauri, Voyager 1's guide star, and it resumed sending transmissions back to Earth. Voyager 1 was expected to enter interstellar space "at any time". Voyager 2 was still detecting outward flow of solar wind at that point but it was estimated that in the following months or years it would experience the same conditions as Voyager 1. The spacecraft was reported at 12.44° declination and 17.163 hours right ascension, and at an ecliptic latitude of 34.9° (the ecliptic latitude changes very slowly), placing it in the constellation Ophiuchus as observed from the Earth on May 21, 2011. On December 1, 2011, it was announced that Voyager 1 had detected the first Lyman-alpha radiation originating from the Milky Way galaxy. Lyman-alpha radiation had previously been detected from other galaxies, but because of interference from the Sun, the radiation from the Milky Way was not detectable. NASA announced on December 5, 2011, that Voyager 1 had entered a new region referred to as a "cosmic purgatory". Within this stagnation region, charged particles streaming from the Sun slow and turn inward, and the Solar System's magnetic field is doubled in strength as interstellar space appears to be applying pressure. Energetic particles originating in the Solar System decline by nearly half, while the detection of high-energy electrons from outside increases 100-fold. The inner edge of the stagnation region is located approximately 113 AU from the Sun. Heliopause NASA announced in June 2012 that the probe was detecting changes in the environment that were suspected to correlate with arrival at the heliopause. Voyager 1 had reported a marked increase in its detection of charged particles from interstellar space, which are normally deflected by the solar winds within the heliosphere from the Sun. The craft thus began to enter the interstellar medium at the edge of the Solar System. Voyager 1 became the first spacecraft to cross the heliopause in August 2012, then at a distance of from the Sun, although this was not confirmed for another year. As of September 2012, sunlight took 16.89 hours to get to Voyager 1 which was at a distance of 121 AU. The apparent magnitude of the Sun from the spacecraft was −16.3 (about 30 times brighter than the full Moon). The spacecraft was traveling at relative to the Sun. At this rate, it would need about 17,565 years at this speed to travel a single light-year. 
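Two of the figures just quoted can be reproduced from first principles: the Sun's apparent magnitude dims with distance according to the inverse-square law, and the travel time per light-year follows directly from the speed. The sketch assumes the Sun's apparent magnitude at 1 AU (about −26.74) and a heliocentric speed of roughly 17 km/s; neither value is stated in the text above.

```python
import math

LY_KM = 9.4607e12          # kilometers per light-year
M_SUN_1AU = -26.74         # apparent magnitude of the Sun seen from 1 AU (assumed)

def sun_apparent_magnitude(distance_au):
    """Inverse-square dimming: +5 magnitudes per factor of 10 in distance."""
    return M_SUN_1AU + 5 * math.log10(distance_au)

def years_per_light_year(speed_km_s):
    return LY_KM / speed_km_s / (365.25 * 24 * 3600)

print(f"Sun from 121 AU: magnitude {sun_apparent_magnitude(121):.1f}")          # ~ -16.3
print(f"At ~17 km/s: {years_per_light_year(17.0):,.0f} years per light-year")   # ~17,600
```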
To compare, Proxima Centauri, the closest star to the Sun, is about 4.2 light-years () distant. If the spacecraft was traveling in the direction of that star, it would take 73,775 years to reach it. (Voyager 1 is heading in the direction of the constellation Ophiuchus.) In late 2012, researchers reported that particle data from the spacecraft suggested that the probe had passed through the heliopause. Measurements from the spacecraft revealed a steady rise since May in collisions with high energy particles (above 70 MeV), which are thought to be cosmic rays emanating from supernova explosions far beyond the Solar System, with a sharp increase in these collisions in late August. At the same time, in late August, there was a dramatic drop in collisions with low-energy particles, which are thought to originate from the Sun. Ed Roelof, space scientist at Johns Hopkins University and principal investigator for the Low-Energy Charged Particle instrument on the spacecraft, declared that "most scientists involved with Voyager 1 would agree that [these two criteria] have been sufficiently satisfied". However, the last criterion for officially declaring that Voyager 1 had crossed the boundary, the expected change in magnetic field direction (from that of the Sun to that of the interstellar field beyond), had not been observed (the field had changed direction by only 2 degrees), which suggested to some that the nature of the edge of the heliosphere had been misjudged. On December 3, 2012, Voyager project scientist Ed Stone of the California Institute of Technology said, "Voyager has discovered a new region of the heliosphere that we had not realized was there. We're still inside, apparently. But the magnetic field now is connected to the outside. So it's like a highway letting particles in and out." The magnetic field in this region was 10 times more intense than Voyager 1 encountered before the termination shock. It was expected to be the last barrier before the spacecraft exited the Solar System completely and entered interstellar space. Interstellar medium In March 2013, it was announced that Voyager 1 might have become the first spacecraft to enter interstellar space, having detected a marked change in the plasma environment on August 25, 2012. However, until September 12, 2013, it was still an open question as to whether the new region was interstellar space or an unknown region of the Solar System. At that time, the former alternative was officially confirmed. In 2013 Voyager 1 was exiting the Solar System at a speed of about per year, which is 61,602 km/h, 4.83 times the diameter of Earth (12,742 km) per hour; whereas Voyager 2 is going slower, leaving the Solar System at per year. Each year, Voyager 1 increases its lead over Voyager 2. Voyager 1 reached a distance of from the Sun on May 18, 2016. On September 5, 2017, that had increased to about from the Sun, or just over 19 light-hours; at that time, Voyager 2 was from the Sun. Its progress can be monitored at NASA's website. On September 12, 2013, NASA officially confirmed that Voyager 1 had reached the interstellar medium in August 2012 as previously observed. The generally accepted date of arrival is August 25, 2012 (approximately 10 days before the 35th anniversary of its launch), the date durable changes in the density of energetic particles were first detected. 
By this point, most space scientists had abandoned the hypothesis that a change in magnetic field direction must accompany a crossing of the heliopause; a new model of the heliopause predicted that no such change would be found. A key finding that persuaded many scientists that the heliopause had been crossed was an indirect measurement of an 80-fold increase in electron density, based on the frequency of plasma oscillations observed beginning on April 9, 2013, triggered by a solar outburst that had occurred in March 2012 (electron density is expected to be two orders of magnitude higher outside the heliopause than within). Weaker sets of oscillations measured in October and November 2012 provided additional data. An indirect measurement was required because Voyager 1's plasma spectrometer had stopped working in 1980. In September 2013, NASA released recordings of audio transductions of these plasma waves, the first to be measured in interstellar space. While Voyager 1 is commonly spoken of as having left the Solar System simultaneously with having left the heliosphere, the two are not the same. The Solar System is usually defined as the vastly larger region of space populated by bodies that orbit the Sun. The craft is presently less than one-seventh the distance to the aphelion of Sedna, and it has not yet entered the Oort cloud, the source region of long-period comets, regarded by astronomers as the outermost zone of the Solar System. In October 2020, astronomers reported a significant unexpected increase in density in the space beyond the Solar System as detected by the Voyager 1 and Voyager 2 space probes. According to the researchers, this implies that "the density gradient is a large-scale feature of the VLISM (very local interstellar medium) in the general direction of the heliospheric nose". In May 2021, NASA reported on the continuous measurement, for the first time, of the density of material in interstellar space and, as well, the detection of interstellar sounds for the first time. Communication issues In May 2022, NASA reported that Voyager 1 had begun transmitting "mysterious" and "peculiar" telemetric data to the Deep Space Network (DSN). It confirmed that the operational status of the craft remained unchanged, but that the issue stemmed from the Attitude Articulation and Control System (AACS). NASA's Jet Propulsion Laboratory published a statement on May 18, 2022, that the AACS was functional but sending invalid data. The problem was eventually traced to the AACS sending its telemetry through a computer that had been non-operational for years, resulting in data corruption. In August 2022, NASA transmitted a command to the AACS to use another computer, which resolved the problem. An investigation into what caused the initial switch is underway, though engineers have hypothesized that the AACS had executed a bad command from another onboard computer. Voyager 1 began transmitting unreadable data on November 14, 2023. On December 12, 2023, NASA announced that Voyager 1 flight data system was unable to use its telemetry modulation unit, preventing it from transmitting scientific data. On March 24, 2024, NASA announced that they had made significant progress on interpreting the data being received from the spacecraft. Engineers reported in April 2024 that the failure was likely in a memory bank of the Flight Data Subsystem (FDS), one of the three onboard computer systems, probably from being struck by a high-energy particle or that it simply wore out due to age. 
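The electron-density estimate described above rests on a standard plasma-physics relation: the frequency of electron plasma oscillations scales with the square root of the electron density, roughly f_pe ≈ 8.98 kHz × sqrt(n_e) with n_e in cm⁻³. The sketch below applies that textbook formula; the 2.6 kHz example frequency is an assumed illustrative value rather than a figure taken from the text, and note that an 80-fold jump in density corresponds to only about a 9-fold jump in oscillation frequency.

```python
import math

def electron_density_from_plasma_freq(f_hz):
    """n_e in cm^-3 from the electron plasma frequency f_pe ~ 8980 Hz * sqrt(n_e)."""
    return (f_hz / 8980.0) ** 2

def plasma_freq_from_density(n_e_cm3):
    return 8980.0 * math.sqrt(n_e_cm3)

# Illustrative value: oscillations near 2.6 kHz imply a density of roughly 0.08 cm^-3.
print(f"{electron_density_from_plasma_freq(2600):.3f} cm^-3")
# An 80-fold increase in density raises the plasma frequency by sqrt(80) ~ 8.9x.
print(f"frequency ratio for 80x density: {plasma_freq_from_density(80) / plasma_freq_from_density(1):.1f}")
```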
The FDS was not communicating properly with the telemetry modulation unit (TMU), which began transmitting a repeating sequence of ones and zeros, indicating that the system was in a stuck condition. After a reboot of the FDS, communications remained unusable. The probe still received commands from Earth and was sending a carrier tone indicating it was still operational. Commands sent to alter the modulation of the tone succeeded, confirming that the probe was still responsive. The Voyager team began developing a workaround, and on April 20 communication of health and status was restored by rearranging code away from the defective FDS memory chip, three percent of which was corrupted beyond repair. Because the memory was corrupted, the affected code needed to be relocated, but there was no room for an extra 256 bits; the spacecraft's total memory is only 69.63 kilobytes. To make room, the engineers deleted unused code, such as the routine used to transmit data from Jupiter, which cannot be used at the current transmission rate. All the data from the "anomaly period" was lost. On May 22, NASA announced that Voyager 1 "resumed returning science data from two of its four instruments", with work towards the others ongoing. On June 13, NASA confirmed that the probe was returning data from all four instruments. In October 2024, the probe turned off the X-band radio transmitter it used for communications with the DSN. The shutdown was triggered by the probe's fault protection system, which activated after NASA turned on one of the heaters on October 16. The fault protection system lowered the transmission rate, but the engineers were still able to find the signal. Later, on October 19, the transmission stopped; the fault protection system was triggered once again and switched to the S-band transmitter, which had last been used in 1981. NASA reported that the team reactivated the X-band transmitter and then resumed collecting data in mid-November. Future of the probe Remaining lifespan In December 2017, NASA successfully fired all four of Voyager 1's trajectory correction maneuver (TCM) thrusters for the first time since 1980. The TCM thrusters were used in place of a degraded set of jets to help keep the probe's antenna pointed towards Earth. Using the TCM thrusters allowed Voyager 1 to continue transmitting data to NASA for two to three more years. Due to the diminishing electrical power available, the Voyager team has had to prioritize which instruments to keep on and which to turn off. Heaters and other spacecraft systems have been turned off one by one as part of power management. The fields and particles instruments that are the most likely to send back key data about the heliosphere and interstellar space have been prioritized to keep operating. Engineers expect the spacecraft to continue operating at least one science instrument until around 2025. Concerns with the orientation thrusters Some thrusters needed to control the attitude of the spacecraft and point its high-gain antenna in the direction of Earth are out of use due to clogging problems in their hydrazine lines. The spacecraft no longer has a backup available for its thruster system and "everything onboard is single-string," according to Suzanne Dodd, Voyager project manager at JPL, in an interview with Ars Technica. NASA has accordingly decided to modify the spacecraft's computer software in order to reduce the rate at which the hydrazine lines clog.
NASA will first deploy the modified software on Voyager 2, which is less distant from Earth, before deploying it on Voyager 1. In September 2024, NASA performed a "thruster swap", switching from a clogged set of thrusters to less clogged ones that had not been used since 2018. Far future Provided Voyager 1 does not collide with anything and is not retrieved, the New Horizons space probe will never pass it, despite being launched from Earth at a higher speed than either Voyager spacecraft. The Voyager spacecraft benefited from multiple planetary flybys that increased their heliocentric velocities, whereas New Horizons received only a single such boost, from its Jupiter flyby in 2007. New Horizons is traveling at about , slower than Voyager 1, and, being closer to the Sun, is slowing more rapidly. Voyager 1 is expected to reach the theorized Oort cloud in about 300 years and take about 30,000 years to pass through it. Though it is not heading towards any particular star, in about 40,000 years it will pass within of the star Gliese 445, which is at present in the constellation Camelopardalis and 17.1 light-years from Earth. That star is generally moving towards the Solar System at about . NASA says that "The Voyagers are destined—perhaps eternally—to wander the Milky Way." In 300,000 years, Voyager 1 will pass within less than 1 light-year of the M3V star TYC 3135–52–1. Golden record Both Voyager space probes carry a gold-plated audio-visual disc, a compilation meant to showcase the diversity of life and culture on Earth in the event that either spacecraft is ever found by any extraterrestrial discoverer. The record, made under the direction of a team including Carl Sagan and Timothy Ferris, includes photos of the Earth and its lifeforms, a range of scientific information, spoken greetings from people such as the Secretary-General of the United Nations (Kurt Waldheim) and the President of the United States (Jimmy Carter), and a medley, "Sounds of Earth", that includes the sounds of whales, a baby crying, waves breaking on a shore, and a collection of music spanning different cultures and eras, including works by Wolfgang Amadeus Mozart, Blind Willie Johnson, Chuck Berry and Valya Balkanska. Other Eastern and Western classics are included, as well as performances of indigenous and folk music from around the world. The record also contains greetings in 55 different languages. The project aimed to portray the richness of life on Earth and stand as a testament to human creativity and the desire to connect with the cosmos.
Technology
Unmanned spacecraft
null
32782
https://en.wikipedia.org/wiki/Voyager%202
Voyager 2
Voyager 2 is a space probe launched by NASA on August 20, 1977, as a part of the Voyager program. It was launched on a trajectory towards the gas giants Jupiter and Saturn and enabled further encounters with the ice giants Uranus and Neptune. It remains the only spacecraft to have visited either of the ice giant planets, and was the third of five spacecraft to achieve Solar escape velocity, which allowed it to leave the Solar System. Launched 16 days before its twin Voyager 1, the primary mission of the spacecraft was to study the outer planets and its extended mission is to study interstellar space beyond the Sun's heliosphere. Voyager 2 successfully fulfilled its primary mission of visiting the Jovian system in 1979, the Saturnian system in 1981, Uranian system in 1986, and the Neptunian system in 1989. The spacecraft is now in its extended mission of studying the interstellar medium. It is at a distance of from Earth . The probe entered the interstellar medium on November 5, 2018, at a distance of from the Sun and moving at a velocity of relative to the Sun. Voyager 2 has left the Sun's heliosphere and is traveling through the interstellar medium, though still inside the Solar System, joining Voyager 1, which had reached the interstellar medium in 2012. Voyager 2 has begun to provide the first direct measurements of the density and temperature of the interstellar plasma. Voyager 2 remains in contact with Earth through the NASA Deep Space Network. Communications are the responsibility of Australia's DSS 43 communication antenna, located near Canberra. History Background In the early space age, it was realized that a periodic alignment of the outer planets would occur in the late 1970s and enable a single probe to visit Jupiter, Saturn, Uranus, and Neptune by taking advantage of the then-new technique of gravity assists. NASA began work on a Grand Tour, which evolved into a massive project involving two groups of two probes each, with one group visiting Jupiter, Saturn, and Pluto and the other Jupiter, Uranus, and Neptune. The spacecraft would be designed with redundant systems to ensure survival throughout the entire tour. By 1972 the mission was scaled back and replaced with two Mariner program-derived spacecraft, the Mariner Jupiter-Saturn probes. To keep apparent lifetime program costs low, the mission would include only flybys of Jupiter and Saturn, but keep the Grand Tour option open. As the program progressed, the name was changed to Voyager. The primary mission of Voyager 1 was to explore Jupiter, Saturn, and Saturn's largest moon, Titan. Voyager 2 was also to explore Jupiter and Saturn, but on a trajectory that would have the option of continuing on to Uranus and Neptune, or being redirected to Titan as a backup for Voyager 1. Upon successful completion of Voyager 1's objectives, Voyager 2 would get a mission extension to send the probe on towards Uranus and Neptune. Titan was selected due to the interest developed after the images taken by Pioneer 11 in 1979, which had indicated the atmosphere of the moon was substantial and complex. Hence the trajectory was designed for optimum Titan flyby. Spacecraft design Constructed by the Jet Propulsion Laboratory (JPL), Voyager 2 included 16 hydrazine thrusters, three-axis stabilization, gyroscopes and celestial referencing instruments (Sun sensor/Canopus Star Tracker) to maintain pointing of the high-gain antenna toward Earth. 
Collectively these instruments are part of the Attitude and Articulation Control Subsystem (AACS) along with redundant units of most instruments and 8 backup thrusters. The spacecraft also included 11 scientific instruments to study celestial objects as it traveled through space. Communications Built with the intent for eventual interstellar travel, Voyager 2 included a large, parabolic, high-gain antenna (see diagram) to transceive data via the Deep Space Network on Earth. Communications are conducted over the S-band (about 13 cm wavelength) and X-band (about 3.6 cm wavelength) providing data rates as high as 115.2 kilobits per second at the distance of Jupiter, and then ever-decreasing as distance increases, because of the inverse-square law. When the spacecraft is unable to communicate with Earth, the Digital Tape Recorder (DTR) can record about 64 megabytes of data for transmission at another time. Power Voyager 2 is equipped with three multihundred-watt radioisotope thermoelectric generators (MHW RTGs). Each RTG includes 24 pressed plutonium oxide spheres. At launch, each RTG provided enough heat to generate approximately 157 W of electrical power. Collectively, the RTGs supplied the spacecraft with 470 watts at launch (halving every 87.7 years). They were predicted to allow operations to continue until at least 2020, and continued to provide power to five scientific instruments through the early part of 2023. In April 2023 JPL began using a reservoir of backup power intended for an onboard safety mechanism. As a result, all five instruments had been expected to continue operation through 2026. In October 2024 NASA announced that the plasma science instrument had been turned off, preserving power for the remaining four instruments. Attitude control and propulsion Because of the energy required to achieve a Jupiter trajectory boost with an payload, the spacecraft included a propulsion module made of a solid-rocket motor and eight hydrazine monopropellant rocket engines, four providing pitch and yaw attitude control, and four for roll control. The propulsion module was jettisoned shortly after the successful Jupiter burn. Sixteen hydrazine Aerojet MR-103 thrusters on the mission module provide attitude control. Four are used to execute trajectory correction maneuvers; the others in two redundant six-thruster branches, to stabilize the spacecraft on its three axes. Only one branch of attitude control thrusters is needed at any time. Thrusters are supplied by a single diameter spherical titanium tank. It contained of hydrazine at launch, providing enough fuel until 2034. Scientific instruments Mission profile Launch and trajectory The Voyager 2 probe was launched on August 20, 1977, by NASA from Space Launch Complex 41 at Cape Canaveral, Florida, aboard a Titan IIIE/Centaur launch vehicle. Two weeks later, the twin Voyager 1 probe was launched on September 5, 1977. However, Voyager 1 reached both Jupiter and Saturn sooner, as Voyager 2 had been launched into a longer, more circular trajectory. Voyager 1s initial orbit had an aphelion of , just a little short of Saturn's orbit of . Whereas, Voyager 2s initial orbit had an aphelion of , well short of Saturn's orbit. In April 1978, no commands were transmitted to Voyager 2 for a period of time, causing the spacecraft to switch from its primary radio receiver to its backup receiver. Sometime afterwards, the primary receiver failed altogether. 
The backup receiver was functional, but a failed capacitor in the receiver meant that it could only receive transmissions that were sent at a precise frequency, and this frequency would be affected by the Earth's rotation (due to the Doppler effect) and the onboard receiver's temperature, among other things. Encounter with Jupiter Voyager 2s closest approach to Jupiter occurred at 22:29 UT on July 9, 1979. It came within of the planet's cloud tops. Jupiter's Great Red Spot was revealed as a complex storm moving in a counterclockwise direction. Other smaller storms and eddies were found throughout the banded clouds. Voyager 2 returned images of Jupiter, as well as its moons Amalthea, Io, Callisto, Ganymede, and Europa. During a 10-hour "volcano watch", it confirmed Voyager 1s observations of active volcanism on the moon Io, and revealed how the moon's surface had changed in the four months since the previous visit. Together, the Voyagers observed the eruption of nine volcanoes on Io, and there is evidence that other eruptions occurred between the two Voyager fly-bys. Jupiter's moon Europa displayed a large number of intersecting linear features in the low-resolution photos from Voyager 1. At first, scientists believed the features might be deep cracks, caused by crustal rifting or tectonic processes. Closer high-resolution photos from Voyager 2, however, were puzzling: the features lacked topographic relief, and one scientist said they "might have been painted on with a felt marker". Europa is internally active due to tidal heating at a level about one-tenth that of Io. Europa is thought to have a thin crust (less than thick) of water ice, possibly floating on a -deep ocean. Two new, small satellites, Adrastea and Metis, were found orbiting just outside the ring. A third new satellite, Thebe, was discovered between the orbits of Amalthea and Io. Encounter with Saturn The closest approach to Saturn occurred at 03:24:05 UT on August 26, 1981. When Voyager 2 passed behind Saturn, viewed from Earth, it utilized its radio link to investigate Saturn's upper atmosphere, gathering data on both temperature and pressure. In the highest regions of the atmosphere, where the pressure was measured at , Voyager 2 recorded a temperature of . Deeper within the atmosphere, where the pressure was recorded to be , the temperature rose to . The spacecraft also observed that the north pole was approximately cooler at than mid-latitudes, a variance potentially attributable to seasonal shifts (see also Saturn Oppositions). After its Saturn fly-by, Voyager 2s scan platform experienced an anomaly causing its azimuth actuator to seize. This malfunction led to some data loss and posed challenges for the spacecraft's continued mission. The anomaly was traced back to a combination of issues, including a design flaw in the actuator shaft bearing and gear lubrication system, corrosion, and debris build-up. While overuse and depleted lubricant were factors, other elements, such as dissimilar metal reactions and a lack of relief ports, compounded the problem. Engineers on the ground were able to issue a series of commands, rectifying the issue to a degree that allowed the scan platform to resume its function. Voyager 2, which would have been diverted to perform the Titan flyby if Voyager 1 had been unable to, did not pass near Titan due to the malfunction, and subsequently, proceeded with its mission to explore the Uranian system. 
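The failed capacitor described above left the backup receiver able to lock on only within a very narrow frequency band, so ground controllers must predict the Doppler shift that Earth's motion imposes on the uplink and pre-adjust the transmitted frequency. The sketch below is a simplified illustration of that idea; the 2.1 GHz uplink value and the line-of-sight velocities are round assumed numbers, and real predictions also account for the receiver's temperature drift.

```python
C_M_S = 299_792_458.0          # speed of light, m/s
F_UPLINK_HZ = 2.1e9            # assumed nominal S-band uplink frequency

def received_frequency(f_tx_hz, los_velocity_m_s):
    """First-order Doppler; positive velocity means station and spacecraft are closing."""
    return f_tx_hz * (1.0 + los_velocity_m_s / C_M_S)

def required_transmit_frequency(f_target_hz, los_velocity_m_s):
    """Frequency to transmit so the spacecraft hears exactly f_target_hz."""
    return f_target_hz / (1.0 + los_velocity_m_s / C_M_S)

# Earth's rotation alone swings a station's line-of-sight speed by a few hundred m/s
# over a tracking pass (illustrative values below), shifting the uplink by kilohertz.
for v in (-400.0, 0.0, 400.0):
    shift = received_frequency(F_UPLINK_HZ, v) - F_UPLINK_HZ
    print(f"LOS velocity {v:+6.0f} m/s -> Doppler shift {shift:+7.0f} Hz, "
          f"required transmit frequency {required_transmit_frequency(F_UPLINK_HZ, v):.0f} Hz")
```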
Encounter with Uranus The closest approach to Uranus occurred on January 24, 1986, when Voyager 2 came within of the planet's cloudtops. Voyager 2 also discovered 11 previously unknown moons: Cordelia, Ophelia, Bianca, Cressida, Desdemona, Juliet, Portia, Rosalind, Belinda, Puck and Perdita. The mission also studied the planet's unique atmosphere, caused by its axial tilt of 97.8°, and examined the Uranian ring system. The length of a day on Uranus as measured by Voyager 2 is 17 hours, 14 minutes. Uranus was shown to have a magnetic field that was misaligned with its rotational axis, unlike other planets that had been visited to that point, and a helix-shaped magnetic tail stretching 10 million kilometers (6 million miles) away from the Sun. When Voyager 2 visited Uranus, many of its cloud features were hidden by a layer of haze; however, false-color and contrast-enhanced images show bands of concentric clouds around its south pole. This area was also found to radiate large amounts of ultraviolet light, a phenomenon that is called "dayglow". The average atmospheric temperature is about . The illuminated and dark poles, and most of the planet, exhibit nearly the same temperatures at the cloud tops. The Voyager 2 Planetary Radio Astronomy (PRA) experiment observed 140 lightning flashes, or Uranian electrostatic discharges (UEDs), at frequencies of 0.9–40 MHz. The UEDs were detected over a 24-hour period from a distance of 600,000 km from Uranus; most were not visible. Microphysical modeling suggests that Uranian lightning occurs in convective storms in deep tropospheric water clouds. If this is the case, the lightning will not be visible because of the thick cloud layers above the troposphere. Uranian lightning has a power of around 10^8 W, emits 1×10^7 J – 2×10^7 J of energy, and lasts an average of 120 ms. Detailed images from Voyager 2's flyby of the Uranian moon Miranda showed huge canyons made from geological faults. One hypothesis suggests that Miranda might consist of a reaggregation of material following an earlier event when Miranda was shattered into pieces by a violent impact. Voyager 2 discovered two previously unknown Uranian rings. Measurements showed that the Uranian rings are different from those at Jupiter and Saturn. The Uranian ring system might be relatively young, and it did not form at the same time that Uranus did. The particles that make up the rings might be the remnants of a moon that was broken up by either a high-velocity impact or torn up by tidal effects. In March 2020, NASA astronomers reported the detection of a large atmospheric magnetic bubble, also known as a plasmoid, released into outer space from the planet Uranus, after reevaluating old data recorded during the flyby. Encounter with Neptune Following a course correction in 1987, Voyager 2's closest approach to Neptune occurred on August 25, 1989. Through repeated computerized test simulations of trajectories through the Neptunian system conducted in advance, flight controllers determined the best way to route Voyager 2 through the Neptune–Triton system. Since the plane of the orbit of Triton is tilted significantly with respect to the plane of the ecliptic, Voyager 2 was directed, through course corrections, into a path about above the north pole of Neptune. Five hours after Voyager 2 made its closest approach to Neptune, it performed a close fly-by of Triton, Neptune's largest moon, passing within about .
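The Uranian lightning figures quoted above are mutually consistent: dividing the radiated energy by the average flash duration recovers the quoted power,

\[ P \approx \frac{E}{\Delta t} = \frac{(1\ \text{to}\ 2)\times 10^{7}\ \mathrm{J}}{0.12\ \mathrm{s}} \approx (0.8\ \text{to}\ 1.7)\times 10^{8}\ \mathrm{W} \sim 10^{8}\ \mathrm{W}. \]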
In 1989, the Voyager 2 Planetary Radio Astronomy (PRA) experiment observed around 60 lightning flashes, or Neptunian electrostatic discharges emitting energies over 7×10 J. A plasma wave system (PWS) detected 16 electromagnetic wave events with a frequency range of 50 Hz – 12 kHz at magnetic latitudes 7˚-33˚. These plasma wave detections were possibly triggered by lightning over 20 minutes in the ammonia clouds of the magnetosphere. During Voyager 2s closest approach to Neptune, the PWS instrument provided Neptune’s first plasma wave detections at a sample rate of 28,800 samples per second. The measured plasma densities range from 10 – 10 cm. Voyager 2 discovered previously unknown Neptunian rings, and confirmed six new moons: Despina, Galatea, Larissa, Proteus, Naiad and Thalassa. While in the neighborhood of Neptune, Voyager 2 discovered the "Great Dark Spot", which has since disappeared, according to observations by the Hubble Space Telescope. The Great Dark Spot was later hypothesized to be a region of clear gas, forming a window in the planet's high-altitude methane cloud deck. Interstellar mission Once its planetary mission was over, Voyager 2 was described as working on an interstellar mission, which NASA is using to find out what the Solar System is like beyond the heliosphere. Voyager 2 is transmitting scientific data at about 160 bits per second. Information about continuing telemetry exchanges with Voyager 2 is available from Voyager Weekly Reports. In 1992, Voyager 2 observed the nova V1974 Cygni in the far-ultraviolet, first of its kind. The further increase in the brightness at those wavelengths helped in the more detailed study of the nova. In July 1994, an attempt was made to observe the impacts from fragments of the comet Comet Shoemaker–Levy 9 with Jupiter. The craft's position meant it had a direct line of sight to the impacts and observations were made in the ultraviolet and radio spectrum. Voyager 2 failed to detect anything, with calculations showing that the fireballs were just below the craft's limit of detection. On November 29, 2006, a telemetered command to Voyager 2 was incorrectly decoded by its on-board computer—in a random error—as a command to turn on the electrical heaters of the spacecraft's magnetometer. These heaters remained turned on until December 4, 2006, and during that time, there was a resulting high temperature above , significantly higher than the magnetometers were designed to endure, and a sensor rotated away from the correct orientation. On August 30, 2007, Voyager 2 passed the termination shock and then entered into the heliosheath, approximately closer to the Sun than Voyager 1 did. This is due to the interstellar magnetic field of deep space. The southern hemisphere of the Solar System's heliosphere is being pushed in. On April 22, 2010, Voyager 2 encountered scientific data format problems. On May 17, 2010, JPL engineers revealed that a flipped bit in an on-board computer had caused the problem, and scheduled a bit reset for May 19. On May 23, 2010, Voyager 2 resumed sending science data from deep space after engineers fixed the flipped bit. In 2013, it was originally thought that Voyager 2 would enter interstellar space in two to three years, with its plasma spectrometer providing the first direct measurements of the density and temperature of the interstellar plasma. But the Voyager project scientist, Edward C. 
Stone and his colleagues said they lacked evidence of what would be the key signature of interstellar space: a shift in the direction of the magnetic field. Finally, in December 2018, Stone announced that Voyager 2 reached interstellar space on November 5, 2018. Maintenance to the Deep Space Network cut outbound contact with the probe for eight months in 2020. Contact was reestablished on November 2, when a series of instructions was transmitted, subsequently executed, and relayed back with a successful communication message. On February 12, 2021, full communications were restored after a major ground station antenna upgrade that took a year to complete. In October 2020, astronomers reported a significant unexpected increase in density in the space beyond the Solar System as detected by the Voyager 1 and Voyager 2; this implies that "the density gradient is a large-scale feature of the VLISM (very local interstellar medium) in the general direction of the heliospheric nose". On July 18, 2023, Voyager 2 overtook Pioneer 10 as the second farthest spacecraft from the Sun. On July 21, 2023, a programming error misaligned Voyager 2's high gain antenna 2 degrees away from Earth, breaking communications with the spacecraft. By August 1, the spacecraft's carrier signal was detected using multiple antennas of the Deep Space Network. A high-power "shout" on August 4 sent from the Canberra station successfully commanded the spacecraft to reorient towards Earth, resuming communications. As a failsafe measure, the probe is also programmed to autonomously reset its orientation to point towards Earth, which would have occurred by October 15. Reductions in capabilities As the power from the RTG slowly reduces, various items of equipment have been turned off on the spacecraft. The first science equipment turned off on Voyager 2 was the PPS in 1991, which saved 1.2 watts. Concerns with the orientation thrusters Some thrusters needed to control the correct attitude of the spacecraft and to point its high-gain antenna in the direction of Earth are out of use due to clogging problems in their hydrazine injector. The spacecraft no longer has backups available for its thruster system and "everything onboard is running on single-string" as acknowledged by Suzanne Dodd, Voyager project manager at JPL, in an interview with Ars Technica. NASA has decided to patch the computer software in order to modify the functioning of the remaining thrusters to slow down the clogging of the small diameter hydrazine injector jets. Before uploading the software update on the Voyager 1 computer, NASA will first try the procedure with Voyager 2, which is closer to Earth. Future of the probe The probe is expected to keep transmitting weak radio messages until at least the mid-2020s, more than 48 years after it was launched. NASA says that "The Voyagers are destined—perhaps eternally—to wander the Milky Way." Voyager 2 is not headed toward any particular star. The nearest star is 4.2 light-years away, and at 15.341 km/s, the spacecraft travels one light-year in about 19,541 years - during which time the nearby stars will also move substantially. In roughly 42,000 years, Voyager 2 will pass the star Ross 248 (10.30 light-years away from Earth) at a distance of 1.7 light-years. If undisturbed for 296,000 years, Voyager 2 should pass by the star Sirius (8.6 light-years from Earth) at a distance of 4.3 light-years. 
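The travel-time figure quoted above follows directly from the spacecraft's speed and the length of a light-year (about 9.46 × 10^12 km):

\[ \frac{9.4607\times 10^{12}\ \mathrm{km}}{15.341\ \mathrm{km/s}} \approx 6.17\times 10^{11}\ \mathrm{s} \approx 19{,}500\ \mathrm{years}, \]

consistent with the roughly 19,541-year figure given above.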
Golden record Both Voyager space probes carry a gold-plated audio-visual disc, a compilation meant to showcase the diversity of life and culture on Earth in the event that either spacecraft is ever found by any extraterrestrial discoverer. The record, made under the direction of a team including Carl Sagan and Timothy Ferris, includes photos of the Earth and its lifeforms, a range of scientific information, spoken greetings from people such as the Secretary-General of the United Nations and the President of the United States and a medley, "Sounds of Earth", that includes the sounds of whales, a baby crying, waves breaking on a shore, and a collection of music spanning different cultures and eras including works by Wolfgang Amadeus Mozart, Blind Willie Johnson, Chuck Berry and Valya Balkanska. Other Eastern and Western classics are included, as well as performances of indigenous music from around the world. The record also contains greetings in 55 different languages. The project aimed to portray the richness of life on Earth and stand as a testament to human creativity and the desire to connect with the cosmos.
Technology
Unmanned spacecraft
null
32786
https://en.wikipedia.org/wiki/V-2%20rocket
V-2 rocket
The V2 (), with the technical name Aggregat 4 (A4), was the world's first long-range guided ballistic missile. The missile, powered by a liquid-propellant rocket engine, was developed during the Second World War in Nazi Germany as a "vengeance weapon" and assigned to attack Allied cities as retaliation for the Allied bombings of German cities. The rocket also became the first artificial object to travel into space by crossing the Kármán line (edge of space) with the vertical launch of MW 18014 on 20 June 1944. Research of military use of long-range rockets began when the graduate studies of Wernher von Braun were noticed by the German Army. A series of prototypes culminated in the A4, which went to war as the . Beginning in September 1944, more than 3,000 were launched by the Wehrmacht against Allied targets, first London and later Antwerp and Liège. According to a 2011 BBC documentary, the attacks from resulted in the deaths of an estimated 9,000 civilians and military personnel, while a further 12,000 laborers and concentration camp prisoners died as a result of their forced participation in the production of the weapons. The rockets travelled at supersonic speeds, impacted without audible warning, and proved unstoppable. No effective defense existed. Teams from the Allied forces—the United States, the United Kingdom, France and the Soviet Union—raced to seize major German manufacturing facilities, procure the Germans' missile technology, and capture the V-2s' launching sites. Von Braun and more than 100 core R&D personnel surrendered to the Americans, and many of the original team transferred their work to the Redstone Arsenal, where they were relocated as part of Operation Paperclip. The US also captured enough hardware to build approximately 80 of the missiles. The Soviets gained possession of the manufacturing facilities after the war, re-established production, and moved it to the Soviet Union. Development history During the late 1920s, a young Wernher von Braun bought a copy of Hermann Oberth's book, Die Rakete zu den Planetenräumen (The Rocket into Interplanetary Spaces). In 1928 a Raketenrummel or "Rocket Rumble" fad in the popular media was initiated by Fritz von Opel and Max Valier, a collaborator of Oberth, by experimenting with rockets, including public demonstrations of manned rocket cars and rocket planes. The “Rocket Rumble” was highly influential on von Braun as a teenage space enthusiast. He was so enthusiastic after seeing one of the public Opel-RAK rocket car demonstrations, that he constructed and launched his own homemade toy rocket car in a crowded sidewalk and was later taken in for questioning by the local police, until released to his father for disciplinary action. Starting in 1930, von Braun attended the Technische Hochschule in Charlottenburg (now Technische Universität Berlin), where he assisted Oberth in liquid-fueled rocket motor tests. Von Braun was working on his doctorate when the Nazi Party gained power in Germany. An artillery captain, Walter Dornberger, arranged an Ordnance Department research grant for von Braun, who from then on worked next to Dornberger's existing solid-fuel rocket test site at Kummersdorf. Von Braun's thesis, Construction, Theoretical, and Experimental Solution to the Problem of the Liquid Propellant Rocket (dated 16 April 1934), was kept classified by the German Army and was not published until 1960. By the end of 1934, his group had successfully launched two rockets that reached heights of . 
At the time, many Germans were interested in American physicist Robert H. Goddard's research. Before 1939, German engineers and scientists occasionally contacted Goddard directly with technical questions. Von Braun used Goddard's plans from various journals and incorporated them into the building of the Aggregate (A) series of rockets, named for the German word for mechanism or mechanical system. After successes at Kummersdorf with the first two Aggregate series rockets, Braun and Walter Riedel began thinking of a much larger rocket in the summer of 1936, based on a projected thrust engine. In addition, Dornberger specified the military requirements needed to include a 1-ton payload, a range of 172 miles with a dispersion of 2 or 3 miles, and transportable using road vehicles. After the A-4 project was postponed due to unfavorable aerodynamic stability testing of the A-3 in July 1936, Braun specified the A-4 performance in 1937, and, after an "extensive" series of test firings of the A-5 scale test model, using a motor redesigned from the troublesome A-3 by Walter Thiel, A-4 design and construction was ordered 1938–39. During 28–30 September 1939, (English: The Day of Wisdom) conference met at Peenemünde to initiate the funding of university research to solve rocket problems. By late 1941, the Army Research Center at Peenemünde possessed the technologies essential to the success of the A-4. The four main technologies for the A-4 were large liquid-fuel rocket engines, supersonic aerodynamics, gyroscopic guidance and rudders in jet control. At the time, Adolf Hitler was not particularly impressed by the V-2; he opined that it was merely an artillery shell with a longer range and much higher cost. During early September 1943, Braun promised the Long-Range Bombardment Commission that the A-4 development was "practically complete/concluded", but even by the middle of 1944, a complete A-4 parts list was still unavailable. Hitler was sufficiently impressed by the enthusiasm of its developers, and needed a "wonder weapon" to maintain German morale, so he authorized its deployment in large numbers. The V-2s were constructed at the Mittelwerk site by prisoners from Mittelbau-Dora, a concentration camp where 20,000 prisoners died. In 1943, the Austrian resistance group including Heinrich Maier managed to send exact drawings of the V-2 rocket to the American Office of Strategic Services. Location sketches of V-rocket manufacturing facilities, such as those in Peenemünde, were also sent to the Allied general staff in order to enable Allied bombers to perform airstrikes. This information was particularly important for Operation Crossbow and Operation Hydra, both preliminary missions for Operation Overlord. The group was gradually captured by the Gestapo and most of the members were executed. Technical details The A4 used a 75% ethanol/25% water mixture (B-Stoff) for fuel and liquid oxygen (LOX) (A-Stoff) for oxidizer. The water reduced the flame temperature, acted as a coolant by turning to steam and augmented the thrust, tended to produce a smoother burn, and reduced thermal stress. Rudolf Hermann's supersonic wind tunnel was used to measure the A4's aerodynamic characteristics and center of pressure, using a model of the A4 within a 40 square centimeter chamber. Measurements were made using a Mach 1.86 blowdown nozzle on 8 August 1940. Tests at Mach numbers 1.56 and 2.5 were made after 24 September 1940. 
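The Mach numbers quoted above for the Peenemünde blowdown tunnel are set by nozzle geometry: for a given supersonic test-section Mach number, isentropic flow fixes the ratio of test-section area to throat area. The sketch below evaluates that standard relation for air (γ = 1.4); it is a textbook calculation, not a description of the actual Peenemünde hardware.

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def area_ratio(mach):
    """Isentropic area ratio A/A* needed to reach a given supersonic Mach number."""
    g = GAMMA
    term = (2.0 / (g + 1.0)) * (1.0 + (g - 1.0) / 2.0 * mach ** 2)
    return (1.0 / mach) * term ** ((g + 1.0) / (2.0 * (g - 1.0)))

for m in (1.56, 1.86, 2.5):   # Mach numbers mentioned in the text
    print(f"Mach {m}: test section / throat area ratio = {area_ratio(m):.2f}")
```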
At launch the A4 propelled itself for up to 65 seconds on its own power, and a program motor held the inclination at the specified angle until engine shutdown, after which the rocket continued on a ballistic free-fall trajectory. The rocket reached a height of or 264,000 ft after shutting off the engine. The fuel and oxidizer pumps were driven by a steam turbine, and the steam was produced by concentrated hydrogen peroxide (T-Stoff) with sodium permanganate (Z-Stoff) catalyst. Both the alcohol and oxygen tanks were an aluminum-magnesium alloy. The turbopump, rotating at 4,000 rpm, forced the alcohol and oxygen into the combustion chamber at 125 liters (33 US gallons) per second, where they were ignited by a spinning electrical igniter. Thrust increased from 8 tons during this preliminary stage whilst the fuel was gravity-fed, before increasing to 25 tons as the turbopump pressurised the fuel, lifting the 13.5 ton rocket. Combustion gases exited the chamber at , and a speed of per second. The oxygen to fuel mixture was 1.0:0.85 at 25 tons of thrust, but as ambient pressure decreased with flight altitude, thrust increased until it reached 29 tons. The turbopump assembly contained two centrifugal pumps, one for the alcohol, and one for the oxygen, The turbine connects directly by a shaft to the alcohol pump and through a flexible joint and shaft to the oxygen pump. Hydrogen peroxide converted to steam, using a sodium permanganate catalyst powered the pump, which delivered of alcohol and of liquid oxygen per second to a combustion chamber at . Dr. Thiel's development of the 25 ton rocket motor relied on pump feeding, rather than on the earlier pressure feeding. The motor used centrifugal injection, while using both regenerative cooling and film cooling. Film cooling admitted alcohol into the combustion chamber and exhaust nozzle under slight pressure through four rings of small perforations. The mushroom-shaped injection head was removed from the combustion chamber to a mixing chamber, the combustion chamber was made more spherical while being shortened from 6 to 1-foot in length, and the connection to the nozzle was made cone shaped. The resultant 1.5 ton chamber operated at a combustion pressure of . Thiel's 1.5 ton chamber was then scaled up to a 4.5 ton motor by arranging three injection heads above the combustion chamber. By 1939, eighteen injection heads in two concentric circles at the head of the thick sheet-steel chamber, were used to make the 25 ton motor. The warhead was a source of trouble. The explosive used was amatol 60/40 detonated by an electric contact fuze. Amatol had the advantage of stability, and the warhead was protected by a thick layer of glass wool, but even so it could still explode during the re-entry phase. The warhead weighed and contained of explosive. The warhead's percentage by weight that was explosive was 93%, a very great percentage when compared with other types of munition. A protective layer of glass wool was also used for the fuel tanks so the A-4 did not have a tendency to form ice, a problem which plagued other early ballistic missiles such as the balloon tank-design SM-65 Atlas which entered US service in 1959. The tanks held of ethyl alcohol and of oxygen. The V-2 was guided by four external rudders on the tail fins, and four internal graphite vanes in the jet stream at the exit of the motor. 
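The flight profile described above, roughly 65 seconds of powered ascent under a pitch program followed by ballistic free fall, can be sketched with a simple numerical integration. The model below is a deliberately crude illustration: the thrust, masses and pitch schedule are assumptions in the right general range, and air drag is ignored, so it only loosely echoes the altitude and range figures in the article.

```python
import math

# Toy two-phase model of a V-2-style flight: powered ascent under a simple
# pitch program, then unpowered ballistic flight to impact.
THRUST_N, M0_KG, MDOT_KG_S = 265e3, 12_500.0, 127.0   # assumed round values
BURN_S, G, DT = 65.0, 9.81, 0.1

def pitch_from_vertical(t):
    """0 rad = straight up; tilt linearly to 45 degrees between t = 4 s and t = 50 s."""
    if t < 4.0:
        return 0.0
    return math.radians(min(45.0, 45.0 * (t - 4.0) / 46.0))

x = y = vx = vy = 0.0
t, m, apogee = 0.0, M0_KG, 0.0
while y >= 0.0:
    if t <= BURN_S:                       # powered phase: thrust along the pitch axis
        accel = THRUST_N / m
        ang = pitch_from_vertical(t)
        ax, ay = accel * math.sin(ang), accel * math.cos(ang) - G
        m -= MDOT_KG_S * DT
    else:                                 # ballistic free fall after engine cutoff
        ax, ay = 0.0, -G
    vx, vy = vx + ax * DT, vy + ay * DT
    x, y = x + vx * DT, y + vy * DT
    apogee = max(apogee, y)
    t += DT

print(f"apogee ~{apogee/1000:.0f} km, impact ~{x/1000:.0f} km downrange after {t/60:.1f} min")
```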
These 8 control surfaces were controlled by Helmut Hölzer's analog computer, the , via electrical-hydraulic servomotors, based on electrical signals from the gyros. The Siemens Vertikant LEV-3 guidance system consisted of two free gyroscopes (a horizontal for pitch and a vertical with two degrees of freedom for yaw and roll) for lateral stabilization, coupled with a PIGA accelerometer, or the Walter Wolman radio control system, to control engine cutoff at a specified velocity. Other gyroscopic systems used in the A-4 included Kreiselgeräte's SG-66 and SG-70. The V-2 was launched from a pre-surveyed location, so the distance and azimuth to the target were known. Fin 1 of the missile was aligned to the target azimuth. Some later V-2s used "guide beams", radio signals transmitted from the ground, to keep the missile on course, but the first models used a simple analog computer that adjusted the azimuth for the rocket, and the flying distance was controlled by the timing of the engine cut-off, Brennschluss, ground-controlled by a Doppler system or by different types of on-board integrating accelerometers. Thus, range was a function of engine burn time, which ended when a specific velocity was achieved. Just before engine cutoff, thrust was reduced to eight tons, in an effort to avoid any water hammer problems a rapid cutoff could cause. Dr. Friedrich Kirchstein of Siemens of Berlin developed the V-2 radio control for motor-cut-off (). For velocity measurement, Professor Wolman of Dresden created an alternative of his Doppler tracking system in 1940–41, which used a ground signal transponded by the A-4 to measure the velocity of the missile. By 9 February 1942, Peenemünde engineer Gerd had documented the radio interference area of a V-2 as around the "Firing Point", and the first successful A-4 flight on 3 October 1942, used radio control for . Although Hitler commented on 22 September 1943 that "It is a great load off our minds that we have dispensed with the radio guiding-beam; now no opening remains for the British to interfere technically with the missile in flight", about 20% of the operational V-2 launches were beam-guided. The Operation Pinguin V-2 offensive began on 8 September 1944, when (English: 'Training and Testing Battery 444') launched a single rocket guided by a radio beam directed at Paris. Wreckage of combat V-2s occasionally contained the transponder for velocity and fuel cutoff. The painting of the operational V-2s was mostly a ragged-edged pattern with several variations, but at the end of the war a plain olive green rocket was also used. During tests the rocket was painted in a characteristic black-and-white chessboard pattern, which aided in determining if the rocket was spinning around its longitudinal axis. The original German designation of the rocket was "V2", unhyphenated – exactly as used for any Third Reich-era "second prototype" example of an RLM-registered German aircraft design – but U.S. publications such as Life magazine were using the hyphenated form "V-2" as early as December 1944. Testing The first successful test flight was on 3 October 1942, reaching an altitude of . On that day, Walter Dornberger declared in a meeting at Peenemünde: Two test launches were recovered by the Allies: the Bäckebo rocket, the remnants of which landed in Sweden on 13 June 1944, and one recovered by the Polish resistance on 30 May 1944 from the Blizna V-2 missile launch site and transported to the UK during Operation Most III. 
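As described above, the missile's range was set mainly by the velocity at which the engine was cut off. Even a drag-free, flat-Earth projectile formula is enough to show why that cutoff had to be timed precisely; the cutoff speed and angle below are illustrative round numbers, not the V-2's exact burnout conditions.

```python
import math

G = 9.81  # m/s^2

def vacuum_range_m(cutoff_speed_m_s, cutoff_angle_deg=45.0):
    """Flat-Earth, drag-free range for a projectile released at ground level."""
    return cutoff_speed_m_s ** 2 * math.sin(math.radians(2 * cutoff_angle_deg)) / G

v = 1600.0                                  # illustrative cutoff speed, m/s
print(f"range at {v:.0f} m/s cutoff: {vacuum_range_m(v)/1000:.0f} km")
print(f"range change per 1 m/s of cutoff error: "
      f"{vacuum_range_m(v + 1) - vacuum_range_m(v):.0f} m")
```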
The highest altitude reached during the war was (20 June 1944). Test launches of V-2 rockets were made at Peenemünde, Blizna and Tuchola Forest, and after the war, at Cuxhaven by the British, White Sands Proving Grounds and Cape Canaveral by the U.S., and Kapustin Yar by the USSR. Various design issues were identified and solved during V-2 development and testing: To reduce tank pressure and weight, rapid flow turbopumps were used to increase pressure. A short and lighter combustion chamber without burn-through was developed by using centrifugal injection nozzles, a mixing compartment, and a converging nozzle to the throat for homogeneous combustion. Film cooling was used to prevent burn-through at the nozzle throat. Relay contacts were made more durable to withstand vibration and prevent thrust cut-off just after lift-off. Ensuring that the fuel pipes had tension-free curves reduced the likelihood of explosions at . Fins were shaped with clearance to prevent damage as the exhaust jet expanded with altitude. To control trajectory at liftoff and supersonic speeds, heat-resistant graphite vanes were used as rudders in the exhaust jet. Air burst problem Through mid-March 1944, only four of the 26 successful Blizna launches had satisfactorily reached the Sarnaki target area due to in-flight breakup () on re-entry into the atmosphere. (As mentioned above, one rocket was collected by the Polish Home Army, with parts of it transported to London for tests.) Initially, the German developers suspected excessive alcohol tank pressure, but by April 1944, after five months of test firings, the cause was still not determined. Major-General Rossmann, the Army Weapons Office department chief, recommended stationing observers in the target area – May/June, Dornberger and von Braun set up a camp at the centre of the Poland target zone. After moving to the Heidekraut, SS Mortar Battery 500 of the 836th Artillery Battalion (Motorized) was ordered on 30 August to begin test launches of eighty 'sleeved' rockets. Testing confirmed that the so-called 'tin trousers' – a tube designed to strengthen the forward end of the rocket cladding – reduced the likelihood of air bursts. Production On 27 March 1942, Dornberger proposed production plans and the building of a launching site on the Channel coast. In December, Speer ordered Major Thom and Dr. Steinhoff to reconnoitre the site near Watten. Assembly rooms were established at Peenemünde and in the Friedrichshafen facilities of Zeppelin Works. In 1943, a third factory, Raxwerke, was added. On 22 December 1942, Hitler signed the order for mass production, when Albert Speer assumed final technical data would be ready by July 1943. However, many issues still remained to be solved even by the autumn of 1943. On 8 January 1943, Dornberger and von Braun met with Speer. Speer stated, "As head of the Todt organisation I will take it on myself to start at once with the building of the launching site on the Channel coast," and established an A-4 production committee under Degenkolb. On 26 May 1943, the Long-Range Bombardment Commission, chaired by AEG director Petersen, met at Peenemünde to review the V-1 and V-2 automatic long-range weapons. In attendance were Speer, Air Marshal Erhard Milch, Admiral Karl Dönitz, Col. General Friedrich Fromm, and Karl Saur. Both weapons had reached the final stage of development, and the commission decided to recommend to Hitler that both weapons be mass-produced. 
As Dornberger observed, "The disadvantages of the one would be compensated by the other's advantages." On 7 July 1943, Major General Dornberger, von Braun, and Dr. Steinhoff briefed Hitler in his Wolf's Lair. Also in attendance were Speer, Wilhelm Keitel, and Alfred Jodl. The briefing included von Braun narrating a movie showing the successful launch on 3 October 1942, with scale models of the Channel coast firing bunker and supporting vehicles. Hitler then gave Peenemünde top priority in the German armaments program, stating, "Why was it I could not believe in the success of your work? If we had had these rockets in 1939 we should never have had this war..." Hitler also wanted a second launch bunker built. Saur planned to build 2,000 rockets per month, between the existing three factories and the Nordhausen Mittelwerk factory then being built. However, alcohol production was dependent upon the potato harvest. A production line was nearly ready at Peenemünde when the Operation Hydra attack occurred. The main targets of the attack included the test stands, the development works, the Pre-Production Works, the settlement where the scientists and technicians lived, the Trassenheide camp, and the harbor sector. According to Dornberger, "Serious damage to the works, contrary to first impressions, was surprisingly small." Work resumed after a delay of four to six weeks, and because of camouflage to mimic complete destruction, there were no more raids during the next nine months. The raid resulted in 735 lives lost, with heavy losses at Trassenheide, while 178 people were killed in the settlement, including Dr. Thiel, his family, and Chief Engineer Walther. The Germans eventually moved production to the underground Mittelwerk in the Kohnstein, where 5,200 V-2 rockets were built with the use of forced labour. Launch sites After the Operation Crossbow bombing, initial plans for launching from the massive underground Watten, Wizernes and Sottevast bunkers or from fixed pads such as near the Château du Molay were dismissed in favour of mobile launching. Eight main storage dumps were planned and four had been completed by July 1944 (the one at Mery-sur-Oise was begun during August 1943 and completed by February 1944). The missile could be launched practically anywhere, roads running through forests being a particular favourite. The system was so mobile and small that only one launch was ever caught in action by Allied aircraft, during the Operation Bodenplatte attack on 1 January 1945 near Lochem by a USAAF 4th Fighter Group aircraft, although Raymond Baxter described flying over a site during a launch and his wingman firing at the missile without hitting it. It was estimated that a sustained rate of 350 V-2s could be launched per week, with 100 per day at maximum effort, given sufficient supply of the rockets. Operational history The LXV Armeekorps z.b.V., formed in France during the last days of November 1943 and commanded by General der Artillerie z.V. Erich Heinemann, was responsible for the operational use of the V-2. Three launch battalions were formed in late 1943: Artillerie Abteilung 836 (Mot.) at Grossborn, Artillerie Abteilung 485 (Mot.) at Naugard, and Artillerie Abteilung 962 (Mot.). Combat operations commenced in September 1944, when the training and testing Batterie 444 deployed. On 2 September 1944, the SS Werfer-Abteilung 500 was formed, and by October the SS, under the command of SS Lt. Gen. Hans Kammler, took operational control of all units. He formed Gruppe Süd with Art. Abt. 836 at Merzig, and Gruppe Nord with Art. Abt.
485 and Batterie 444 at Burgsteinfurt and The Hague. After Hitler's 29 August 1944 declaration to begin V-2 attacks as soon as possible, the offensive began on 7 September 1944, when two were launched at Paris (which the Allies had liberated less than two weeks earlier), but both crashed soon after launch. On 8 September a single rocket was launched at Paris, which caused modest damage near the Porte d'Italie. Two more launches by the 485th followed on the same day, including one from The Hague against London at 6:43 pm – the first landed at Staveley Road, Chiswick, killing 63-year-old Mrs. Ada Harrison, three-year-old Rosemary Clarke, and Sapper Bernard Browning, on leave from the Royal Engineers – and one that hit Epping with no casualties. The British government, concerned about spreading panic or giving away vital intelligence to German forces, initially attempted to conceal the cause of the explosions by making no official announcement and euphemistically blaming them on defective gas mains. The public did not believe this explanation and began referring to the V-2s as "flying gas mains". The Germans themselves finally announced the V-2 on 8 November 1944, and only then, on 10 November 1944, did Winston Churchill inform Parliament, and the world, that England had been under rocket attack "for the last few weeks". In September 1944, control of the V-2 mission was transferred to the Waffen-SS and Division z.V. Positions of the German launch units changed a number of times. For example, Batterie 444 arrived in the southwest Netherlands (in Zeeland) in September 1944. From a field near the village of Serooskerke, five V-2s were launched on 15 and 16 September, with one more successful and one failed launch on the 18th. That same day, a transport carrying a missile took a wrong turn and ended up in Serooskerke itself, giving a villager the opportunity to surreptitiously take some photographs of the weapon; these were smuggled to London by the Dutch Resistance. After that the unit moved to the woods near Rijs, Gaasterland, in the northwest Netherlands, to ensure that the technology was not captured by the Allies. From Gaasterland V-2s were launched against Ipswich and Norwich from 25 September (London being out of range). Because of their inaccuracy, these V-2s did not hit their target cities. Soon afterwards only London and Antwerp remained as designated targets, as ordered by Adolf Hitler himself, Antwerp being targeted in the period of 12 to 20 October, after which the unit moved to The Hague. Targets During the succeeding months about 3,172 V-2 rockets were fired at the following targets: Belgium, 1,664: Antwerp (1,610), Liège (27), Hasselt (13), Tournai (9), Mons (3), Diest (2); United Kingdom, 1,402: London (1,358), Norwich (43), Ipswich (1); France, 76: Lille (25), Paris (22), Tourcoing (19), Arras (6), Cambrai (4); Netherlands, 19: Maastricht (19); Germany, 11: Remagen (Ludendorff Bridge) (11). Antwerp, Belgium, was a target for a large number of V-weapon attacks from October 1944 through to the virtual end of the war in March 1945, leaving 1,736 dead and 4,500 injured in greater Antwerp. Thousands of buildings were damaged or destroyed as the city was struck by 590 direct hits. The largest loss of life from a single rocket attack during the war came on 16 December 1944, when the roof of the crowded Cine Rex cinema was struck, leaving 567 dead and 291 injured.
An estimated 2,754 civilians were killed in London by V-2 attacks, with another 6,523 injured – roughly two people killed per V-2 rocket fired at the city. During early use the death toll in London did not meet the Nazis' full expectations, as they had not yet perfected the accuracy of the V-2 and many rockets were misdirected and exploded harmlessly. Accuracy increased during the war, particularly for batteries where the radio guide-beam system was used. Missile strikes that did hit targets could cause large numbers of deaths; 160 people were killed and 108 seriously injured in one explosion at 12:26 pm on 25 November 1944, at a Woolworth's department store in New Cross, south-east London. British intelligence also helped impede the effectiveness of the Nazi weapon, sending false reports via their Double-Cross System implying that the rockets were over-shooting their London target. This tactic worked; more than half of the V-2s aimed at London landed short of the London Civil Defence Region. Most landed on less heavily populated areas in Kent due to erroneous recalibration. For the remainder of the war, British intelligence maintained the ruse by repeatedly sending bogus reports implying that these failed rockets were striking the British capital with heavy loss of life. Possible use during Operation Bodenplatte At least one V-2 missile on a mobile Meillerwagen launch trailer was observed being elevated to launch position near the town of Lochem on 1 January 1945 by a USAAF 4th Fighter Group pilot defending against the massive New Year's Day Operation Bodenplatte strike by the Luftwaffe along the northern German attack route. Possibly because the launch crew had sighted the American fighter, the rocket was quickly lowered from a near-launch-ready 85° elevation to 30°. Tactical use on a German target After the US Army captured the Ludendorff Bridge during the Battle of Remagen on 7 March 1945, the Germans were desperate to destroy it. On 17 March 1945, they fired eleven V-2 missiles at the bridge, their first use against a tactical target and the only time they were fired on a German target during the war. They could not employ the more accurate guide-beam apparatus because it was oriented towards Antwerp and could not be easily adjusted for another target. Fired from near Hellendoorn, the Netherlands, one of the missiles landed as far away as Cologne, to the north, while the closest missed the bridge only narrowly. They also struck the town of Remagen, destroying a number of buildings and killing at least six American soldiers. Final use The final two rockets exploded on 27 March 1945. One of these killed the last British civilian and final civilian casualty of the war on British soil: Ivy Millichamp, aged 34, killed in her home in Kynaston Road, Orpington, in Kent. A scientific reconstruction performed in 2010 demonstrated that the V-2 creates a large crater, ejecting approximately 3,000 tons of material into the air. Countermeasures Big Ben and Operation Crossbow Unlike the V-1, the V-2's speed and trajectory made it practically invulnerable to anti-aircraft guns and fighters, as it dropped from high altitude at up to three times the speed of sound at sea level. Nevertheless, the threat of what was then code-named "Big Ben" was great enough that efforts were made to seek countermeasures.
The situation was similar to the pre-war concerns about manned bombers and resulted in a similar solution, the formation of the Crossbow Committee, to collect, examine and develop countermeasures. Early on, it was believed that the V-2 employed some form of radio guidance, a belief that persisted in spite of several rockets being examined without discovering anything like a radio receiver. This resulted in efforts to jam this non-existent guidance system as early as September 1944, using both ground and air-based jammers flying over the UK. In October, a group had been sent to jam the missiles during launch. By December it was clear these systems were not having any obvious effect, and jamming efforts ended. Anti-aircraft gun system (proposed) General Frederick Alfred Pile, commander of Anti-Aircraft Command, studied the problem and proposed that enough anti-aircraft guns were available to produce a barrage of fire in the rocket's path, but only if provided with a reasonable prediction of the trajectory. The first estimates suggested that 320,000 shells would have to be fired for each rocket. About 2% of these were expected to fall back to the ground, almost 90 tons of rounds, which would cause far more damage than the missile. At a 25 August 1944 meeting of the Crossbow Committee, the concept was rejected. Pile continued studying the problem and returned with a proposal to fire only 150 shells at a single rocket, with those shells using a new fuse that would greatly reduce the number that fell back to Earth unexploded. Some low-level analysis suggested that this would be successful against 1 in 50 rockets, provided that accurate trajectories were forwarded to the gunners in time. Work on this basic concept continued and developed into a plan to deploy a large number of guns in Hyde Park that were provided with pre-configured firing data for grids of the London area. After the trajectory was determined, the guns would aim and fire between 60 and 500 rounds. At a Crossbow meeting on 15 January 1945 Pile's updated plan was presented with some strong advocacy from Roderic Hill and Charles Drummond Ellis. However, the Committee suggested that a test not be performed as no technique for tracking the missiles with sufficient accuracy had yet been developed. By March this had changed significantly, with 81% of incoming missiles correctly allotted to the grid square each fell into, or the one beside it. At a 26 March meeting Pile was directed to a subcommittee with RV Jones and Ellis to further develop the statistics. Three days later the team returned a report stating that if the guns fired 2,000 rounds at a missile there was a 1 in 60 chance of shooting it down. Plans for an operational test began, but as Pile later put it, "Monty beat us to it", as the attacks ended with the Allied capture of their launching areas. With the Germans no longer in control of any part of the continent that could be used as a launching site capable of striking London, they began targeting Antwerp. Plans were made to move the Pile system to protect that city, but the war ended before anything could be done. Direct attack and disinformation The only effective defences against the V-2 campaign were to destroy the launch infrastructure—expensive in terms of bomber resources and casualties—or to cause the Germans to aim at the wrong place by disinformation. The British were able to convince the Germans to direct V-1s and V-2s aimed at London to less populated areas east of the city. 
This was done by sending deceptive reports on the sites hit and the damage caused via the German espionage network in Britain, which was secretly controlled by the British (the Double-Cross System). According to the BBC television presenter Raymond Baxter, who served with the RAF during the war, in February 1945 his squadron was flying a mission against a V-2 launch site when they saw one missile being launched. One member of Baxter's squadron opened fire on it, without effect. On 3 March 1945, the Allies attempted to destroy V-2s and launching equipment in the "Haagse Bos" in The Hague with a large-scale bombardment, but due to navigational errors the Bezuidenhout quarter was destroyed, killing 511 Dutch civilians. Assessment The German V-weapons (V-1 and V-2) cost the equivalent of about US$500 million. Given the relatively smaller size of the German economy, this represented an industrial effort comparable to, though slightly smaller than, that of the U.S. Manhattan Project, which produced the atomic bomb. 6,048 V-2s were built; 3,225 were launched. SS General Hans Kammler, who as an engineer had constructed several concentration camps, including Auschwitz, had a reputation for brutality and had originated the idea of using concentration camp prisoners as slave laborers for the rocket program. More people died manufacturing the V-2 than were killed by its deployment. The V-2 consumed a third of Germany's fuel alcohol production and major portions of other critical technologies: distilling the fuel alcohol for one V-2 launch required 30 tonnes of potatoes at a time when food was becoming scarce. Due to a lack of explosives, some warheads were simply filled with concrete, relying on kinetic energy alone for destruction, and sometimes the warhead contained photographic propaganda showing German citizens who had died in Allied bombings. The psychological effect of the V-2 was considerable: travelling faster than the speed of sound, it gave no warning before impact (unlike bombing planes or the V-1 flying bomb, which made a characteristic buzzing sound), there was no effective defence, and there was no risk of pilot or crew casualties. An example of the impression it made is the reaction of American pilot and future nuclear strategist and Congressional aide William Liscum Borden, who in November 1944, while returning from a nighttime air mission over Holland, saw a V-2 in flight on its way to strike London: "It resembled a meteor, streaming red sparks and whizzing past us as though the aircraft were motionless. I became convinced that it was only a matter of time until rockets would expose the United States to direct, transoceanic attack." With the war all but lost, regardless of the factory output of conventional weapons, the Nazis resorted to V-weapons as a tenuous last hope of influencing the war militarily (hence Antwerp as a V-2 target), as an extension of their desire to "punish" their foes and, most importantly, to give hope to their sympathizers with a miracle weapon. The V-2 did not affect the outcome of the war, but it led to the development of the intercontinental ballistic missiles of the Cold War, which were also used for space exploration. Unfulfilled plans A submarine-towed launch platform was tested successfully, making it the prototype for submarine-launched ballistic missiles. The project, codenamed "Test Stand XII", was sometimes termed the rocket U-boat.
If deployed, it would have allowed a U-boat to launch V-2 missiles against United States cities, though only with considerable effort (and limited effect). Hitler, in July 1944, and Speer, in January 1945, made speeches alluding to the scheme, though Germany did not possess the capability to fulfill these threats. These schemes were met by the Americans with Operation Teardrop. While interned after the war by the British at CSDIC Camp 11, Dornberger was recorded saying that he had begged the Führer to stop the V-weapon propaganda, because nothing more could be expected from one ton of explosive. To this Hitler had replied that Dornberger might not expect more, but he (Hitler) certainly did. According to decrypted messages from the Japanese embassy in Germany, twelve dismantled V-2 rockets were shipped to Japan. These left Bordeaux in August 1944 on two transport U-boats, which reached Jakarta in December 1944. A civilian V-2 expert was a passenger on another U-boat bound for Japan in May 1945 when the war ended in Europe. The fate of these V-2 rockets is unknown. Post-war use At the end of the war, a competition began between the United States and the USSR to retrieve as many V-2 rockets and staff as possible. Three hundred rail-car loads of V-2s and parts were captured and shipped to the United States, and 126 of the principal designers, including Wernher von Braun and Walter Dornberger, were captives of the Americans. Von Braun, his brother Magnus von Braun, and seven others decided to surrender to the United States military (Operation Paperclip) to ensure they were not captured by the advancing Soviets or shot dead by the Nazis to prevent their capture. After the Nazi defeat, German engineers were relocated to the United States, the USSR, France and the United Kingdom, where they further developed the V-2 rocket for military and civilian purposes. The V-2 also laid the foundation for the liquid-fuel missiles and space launchers used later. United States Operation Paperclip recruited German engineers, and Special Mission V-2 transported the captured V-2 parts to the United States. At the close of the Second World War, more than 300 rail cars filled with V-2 engines, fuselages, propellant tanks, gyroscopes, and associated equipment were brought to the railyards in Las Cruces, New Mexico, so they could be placed on trucks and driven to the White Sands Proving Grounds, also in New Mexico. In addition to V-2 hardware, the U.S. Government delivered German mechanization equations for the V-2 guidance, navigation, and control systems, as well as for advanced development concept vehicles, to U.S. defence contractors for analysis. During the 1950s, some of these documents were useful to U.S. contractors in developing direction cosine matrix transformations and other inertial navigation architecture concepts that were applied to early U.S. programs such as the Atlas and Minuteman guidance systems, as well as the Navy's Subs Inertial Navigation System. A committee of military and civilian scientists was formed to review payload proposals for the reassembled V-2 rockets. By January 1946, the U.S. Army Ordnance Corps had invited civilian scientists and engineers to participate in developing a space research program using the V-2. The committee was initially named the "V2 Rocket Panel", then the "V2 Upper Atmosphere Research Panel", and finally the "Upper Atmosphere Rocket Research Panel".
This resulted in an eclectic array of experiments that flew on V-2s and helped prepare for American manned space exploration. Devices were sent aloft to sample the air at all levels to determine atmospheric pressures and to see what gases were present. Other instruments measured the level of cosmic radiation. Only 68 percent of the V-2 trials were considered successful. A supposed V-2 launched on 29 May 1947 landed near Juarez, Mexico, and was actually a Hermes B-1 vehicle. The U.S. Navy attempted to launch a German V-2 rocket at sea – one test launch from the aircraft carrier USS Midway was performed on 6 September 1947 as part of the Navy's Operation Sandy. The test launch was a partial success; the V-2 went off the pad but splashed down in the ocean not far from the carrier. The launch setup on the Midway's deck is notable in that it used foldaway arms to prevent the missile from falling over. The arms pulled away just after the engine ignited, releasing the missile. The setup may look similar to the R-7 Semyorka launch procedure, but in the case of the R-7 the trusses hold the full weight of the rocket, rather than just reacting to side forces. The PGM-11 Redstone rocket is a direct descendant of the V-2. USSR The USSR captured a number of V-2s and staff, letting them stay in Germany for a time. The first work contracts were signed in the middle of 1945. During October 1946 (as part of Operation Osoaviakhim) they were obliged to relocate to Branch 1 of NII-88 on Gorodomlya Island in Lake Seliger, where Helmut Gröttrup directed a group of 150 engineers. In October 1947, a group of German scientists supported the USSR in launching rebuilt V-2s at Kapustin Yar. The German team was indirectly overseen by Sergei Korolev, one of the leaders of the Soviet rocketry program. The first Soviet missile was the R-1, a duplicate of the V-2 manufactured completely in the USSR, which was first launched in October 1948. From 1947 until the end of 1950, the German team elaborated concepts and improvements for extended payload and range for the projects G-1, G-2 and G-4. The German team had to remain on Gorodomlya Island until as late as 1952–53. In parallel, Soviet work emphasized larger missiles, the R-2 and R-5, based on further development of the V-2 technology and drawing on ideas from the German concept studies. Details of Soviet achievements were unknown to the German team and completely underestimated by Western intelligence until, in October 1957, the satellite Sputnik 1 was launched successfully to orbit by the Sputnik rocket based on the R-7, the world's first intercontinental ballistic missile. France Between May and September 1946, CEPA, the forerunner of today's French space agency CNES, undertook the recruitment of approximately thirty German engineers who had previous experience working on rocket programs for Nazi Germany at the Peenemünde Army Research Center. Much like their counterparts in the United Kingdom, the United States, and the Soviet Union, France's objective was to acquire and advance the rocket technology developed by Germany during World War II. The initial initiative, known as the Super V-2 program, had plans for four rocket variants of progressively greater range and warhead weight. However, this program was canceled in 1948.
From 1950 to 1969, the research done on the Super V-2 program was repurposed to develop the Véronique sounding rocket, which became the first liquid-fuel research rocket in Western Europe and was ultimately capable of carrying a payload to high altitude. The Véronique program then led to the Diamant rocket and the Ariane rocket family. UK During October 1945, the Allied Operation Backfire assembled a small number of V-2 missiles and launched three of them from a site in northern Germany. The engineers involved had already agreed to relocate to the US when the test firings were complete. The Backfire report, published in January 1946, contains extensive technical documentation of the rocket, including all support procedures, tailored vehicles and fuel composition. In 1946, the British Interplanetary Society proposed an enlarged man-carrying version of the V-2, named Megaroc. It could have enabled sub-orbital spaceflight similar to, but at least a decade earlier than, the Mercury-Redstone flights of 1961. China The first Chinese Dongfeng missile, the DF-1, was a licensed copy of the Soviet R-2; this design was produced during the 1960s. Surviving V-2 examples and components At least 20 V-2s still existed in 2014. Australia One at the Australian War Memorial, Canberra, including a complete Meillerwagen transporter. The rocket has the most complete set of guidance components of all surviving A4s, and the Meillerwagen is the most complete of the three examples known to exist. Another A4 was on display at the RAAF Museum at Point Cook outside Melbourne; both rockets are now in Canberra. Netherlands One example, partly skeletonized, is in the collection of the Nationaal Militair Museum. In this collection are also a launching table and some loose parts, as well as the remains of a V-2 that crashed in The Hague immediately after launch. Poland Several large components, including the hydrogen peroxide tank and reaction chamber, the propellant turbopump and the HWK rocket engine chamber (partly cut-out), are displayed at the Polish Aviation Museum in Kraków. A reconstruction of a V-2 missile containing multiple original recovered parts is on display at the Armia Krajowa Museum in Kraków. France One engine on display in Toulouse. A V-2 display, including an engine, parts, a rocket body and many documents and photographs relating to the missile's development and use, is at the La Coupole museum, Wizernes, Pas de Calais; also on display there are one rocket body with no engine, one complete engine, one lower engine section and one wrecked engine. One engine complete with steering pallets, feed lines and tank bottoms, plus one cut-out thrust chamber and one cut-out turbopump, at the Snecma (Space Engines Div.) museum in Vernon. One complete rocket in the WWII wing of the Musée de l'Armée (Army Museum) in Paris. Germany One complete V-2 rocket and several engines at the Deutsches Museum in Munich. One engine at the German Museum of Technology in Berlin. One engine at the Deutsches Historisches Museum in Berlin. One rusty engine in the original V-2 underground production facilities at the Dora-Mittelbau concentration camp memorial site. One rusty engine in Buchenwald concentration camp. One replica was constructed for the Historical and Technical Information Centre in Peenemünde, where it is displayed near what remains of the factory where it was built. United Kingdom One at the Science Museum, London. One at the Imperial War Museum, London, on loan from Cranfield University.
The RAF Museum possesses two rockets, one of which is displayed at its Cosford site. The Museum also owns several items of ground equipment, including a Strabo crane and a firing table with towing dolly. One at the Royal Engineers Museum in Chatham, Kent. A propulsion unit (minus injectors) is in the Norfolk and Suffolk Aviation Museum near Bungay. A complete turbopump is at the Solway Aviation Museum, Carlisle Airport, as part of the Blue Streak Rocket exhibition. The venturi segment of a V-2 discovered in April 2012 was donated to the Harwich Sailing Club after it was found buried in a mudflat. A fuel combustion chamber recovered from the sea near Clacton is at the East Essex Aviation Museum, St Osyth. A gyroscope unit, a turbopump unit and a steam-generating chamber are on display at the National Space Centre in Leicester. United States Complete missiles One at the Flying Heritage Collection, Everett, Washington. One at the National Museum of the United States Air Force, Dayton, Ohio. One (checkerboard-painted) at the Cosmosphere in Hutchinson, Kansas. One at the National Air and Space Museum, Washington, D.C. One at the Fort Bliss Air Defense Museum, El Paso, Texas. One (yellow and black) at Missile Park, White Sands Missile Range in White Sands, New Mexico. One at Marshall Space Flight Center in Huntsville, Alabama. One at the U.S. Space & Rocket Center in Huntsville, Alabama. Components One engine at the Stafford Air & Space Museum in Weatherford, Oklahoma. One engine at the U.S. Space & Rocket Center in Huntsville, Alabama. Two engines at the National Museum of the United States Air Force (one was transferred from the United States Army Ordnance Museum in Aberdeen, Maryland, in about 2005 when that museum closed). Combustion chambers and other components, plus a U.S.-built engine, at the Steven F. Udvar-Hazy Center in Dulles, Virginia. One engine at the Museum of Science and Industry in Chicago. One rocket body at Picatinny Arsenal in Dover, New Jersey. One engine in the Auburn University Engineering Laboratory. One engine in the Exhibit Hall adjacent to the Blockhouse building on the Historic Cape Canaveral Tour in Cape Canaveral, Florida. One engine at Parks College of Engineering, Aviation and Technology in St. Louis, Missouri. One engine and tail section at the New Mexico Museum of Space History in Alamogordo, New Mexico.
Very-large-scale integration
Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining millions or billions of MOS transistors onto a single chip. VLSI began in the 1970s when MOS integrated circuit (metal oxide semiconductor) chips were developed and then widely adopted, enabling complex semiconductor and telecommunications technologies. The microprocessor and memory chips are VLSI devices. Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI enables IC designers to add all of these into one chip. History Background The history of the transistor dates to the 1920s when several inventors attempted devices that were intended to control current in solid-state diodes and convert them into triodes. Success came after World War II, when the use of silicon and germanium crystals as radar detectors led to improvements in fabrication and theory. Scientists who had worked on radar returned to solid-state device development. With the invention of the first transistor at Bell Labs in 1947, the field of electronics shifted from vacuum tubes to solid-state devices. With the small transistor at their hands, electrical engineers of the 1950s saw the possibilities of constructing far more advanced circuits. However, as the complexity of circuits grew, problems arose. One problem was the size of the circuit. A complex circuit like a computer was dependent on speed. If the components were large, the wires interconnecting them must be long. The electric signals took time to go through the circuit, thus slowing the computer. The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by making all the components and the chip out of the same block (monolith) of semiconductor material. The circuits could be made smaller, and the manufacturing process could be automated. This led to the idea of integrating all components on a single-crystal silicon wafer, which led to small-scale integration (SSI) in the early 1960s, and then medium-scale integration (MSI) in the late 1960s. VLSI General Microelectronics introduced the first commercial MOS integrated circuit in 1964. In the early 1970s, MOS integrated circuit technology allowed the integration of more than 10,000 transistors in a single chip. This paved the way for VLSI in the 1970s and 1980s, with tens of thousands of MOS transistors on a single chip (later hundreds of thousands, then millions, and now billions). The first semiconductor chips held two transistors each. Subsequent advances added more transistors, and as a consequence, more individual functions or systems were integrated over time. The first integrated circuits held only a few devices, perhaps as many as ten diodes, transistors, resistors and capacitors, making it possible to fabricate one or more logic gates on a single device. Now known retrospectively as small-scale integration (SSI), improvements in technique led to devices with hundreds of logic gates, known as medium-scale integration (MSI). Further improvements led to large-scale integration (LSI), i.e. systems with at least a thousand logic gates. Current technology has moved far past this mark and today's microprocessors have many millions of gates and billions of individual transistors. At one time, there was an effort to name and calibrate various levels of large-scale integration above VLSI. Terms like ultra-large-scale integration (ULSI) were used. 
But the huge number of gates and transistors available on common devices has rendered such fine distinctions moot. Terms suggesting greater than VLSI levels of integration are no longer in widespread use. In 2008, billion-transistor processors became commercially available. This became more commonplace as semiconductor fabrication advanced from the then-current generation of 65 nm processors. Current designs, unlike the earliest devices, use extensive design automation and automated logic synthesis to lay out the transistors, enabling higher levels of complexity in the resulting logic functionality. Certain high-performance logic blocks, like the SRAM (static random-access memory) cell, are still designed by hand to ensure the highest efficiency. Structured design Structured VLSI design is a modular methodology originated by Carver Mead and Lynn Conway for saving microchip area by minimizing the interconnect fabric area. This is obtained by the repetitive arrangement of rectangular macro blocks which can be interconnected using wiring by abutment. An example is partitioning the layout of an adder into a row of identical bit-slice cells. In complex designs this structuring may be achieved by hierarchical nesting. Structured VLSI design was popular in the early 1980s but later lost popularity with the advent of automated placement and routing tools, which waste a considerable amount of area on routing; this waste is tolerated because of the progress of Moore's law. When introducing the hardware description language KARL in the mid-1970s, Reiner Hartenstein coined the term "structured VLSI design" (originally as "structured LSI design"), echoing Edsger Dijkstra's structured programming approach of procedure nesting to avoid chaotic spaghetti-structured programs. Difficulties As microprocessors become more complex due to technology scaling, microprocessor designers have encountered several challenges which force them to think beyond the design plane and look ahead to post-silicon: Process variation – As photolithography techniques get closer to the fundamental laws of optics, achieving high accuracy in doping concentrations and etched wires is becoming more difficult and prone to errors due to variation. Designers now must simulate across multiple fabrication process corners before a chip is certified ready for production, or use system-level techniques for dealing with the effects of variation. Stricter design rules – Due to lithography and etch issues with scaling, design rule checking for layout has become increasingly stringent. Designers must keep in mind an ever-increasing list of rules when laying out custom circuits. The overhead for custom design is now reaching a tipping point, with many design houses opting to switch to electronic design automation (EDA) tools to automate their design process. Timing/design closure – As clock frequencies tend to scale up, designers are finding it more difficult to distribute and maintain low clock skew between these high-frequency clocks across the entire chip. This has led to a rising interest in multicore and multiprocessor architectures, since an overall speedup can be obtained even with lower clock frequency by using the computational power of all the cores. First-pass success – As die sizes shrink (due to scaling) and wafer sizes go up (due to lower manufacturing costs), the number of dies per wafer increases, and the complexity of making suitable photomasks goes up rapidly. A mask set for a modern technology can cost several million dollars.
This non-recurring expense deters the old iterative philosophy involving several "spin-cycles" to find errors in silicon, and encourages first-pass silicon success. Several design philosophies have been developed to aid this new design flow, including design for manufacturing (DFM), design for test (DFT), and Design for X.
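To make the bit-slice idea from the structured-design discussion above concrete, here is a minimal behavioural sketch in Python rather than in a hardware description language: an n-bit ripple-carry adder assembled from one reusable full-adder cell, mirroring the repetitive arrangement of identical macro blocks. It models only the logic, not physical layout, and all names are invented for the example.

def full_adder(a: int, b: int, cin: int):
    """One bit slice: returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_carry_adder(a_bits, b_bits):
    """Chain identical bit slices, least significant bit first."""
    carry = 0
    sum_bits = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)   # the same cell instantiated per bit
        sum_bits.append(s)
    return sum_bits, carry

# Example: 11 (1011) + 6 (0110) = 17, built from four identical slices
a = [1, 1, 0, 1]   # LSB first: 11
b = [0, 1, 1, 0]   # LSB first: 6
s, c = ripple_carry_adder(a, b)
value = sum(bit << i for i, bit in enumerate(s)) + (c << len(s))
print(value)        # 17

In a structured layout the analogous step would be abutting identical full-adder macro cells in a row so that their power, carry and data wires line up without extra routing fabric.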
Wiki
A wiki is a form of hypertext publication on the internet which is collaboratively edited and managed by its audience directly through a web browser. A typical wiki contains multiple pages that can either be edited by the public or limited to use within an organization for maintaining its internal knowledge base. Wikis are powered by wiki software, also known as wiki engines. Being a form of content management system, these differ from other web-based systems such as blog software or static site generators in that the content is created without any defined owner or leader. Wikis have little inherent structure, allowing one to emerge according to the needs of the users. Wiki engines usually allow content to be written using a lightweight markup language and sometimes edited with the help of a rich-text editor. There are dozens of different wiki engines in use, both standalone and as part of other software, such as bug tracking systems. Some wiki engines are free and open-source, whereas others are proprietary. Some permit control over different functions (levels of access); for example, editing rights may permit changing, adding, or removing material. Others may permit access without enforcing access control. Further rules may be imposed to organize content. In addition to hosting user-authored content, wikis allow those users to interact, hold discussions, and collaborate. There are hundreds of thousands of wikis in use, both public and private, including wikis functioning as knowledge management resources, note-taking tools, community websites, and intranets. Ward Cunningham, the developer of the first wiki software, WikiWikiWeb, originally described wiki as "the simplest online database that could possibly work". "Wiki" is a Hawaiian word meaning "quick". The online encyclopedia project Wikipedia is the most popular wiki-based website, as well as being one of the internet's most popular websites, having been ranked consistently as such since at least 2007. Wikipedia is not a single wiki but rather a collection of hundreds of wikis, with each one pertaining to a specific language. The English-language Wikipedia has the largest collection of articles. Characteristics In their 2001 book The Wiki Way: Quick Collaboration on the Web, Cunningham and co-author Bo Leuf described the essence of the wiki concept: "A wiki invites all users—not just experts—to edit any page or to create new pages within the wiki website, using only a standard 'plain-vanilla' Web browser without any extra add-ons." "Wiki promotes meaningful topic associations between different pages by making page link creation intuitively easy and showing whether an intended target page exists or not." "A wiki is not a carefully crafted site created by experts and professional writers and designed for casual visitors. Instead, it seeks to involve the typical visitor/user in an ongoing process of creation and collaboration that constantly changes the website landscape." Editing Source editing Some wikis will present users with an edit button or link directly on the page being viewed. This will open an interface for writing, formatting, and structuring page content. The interface may be a source editor, which is text-based and employs a lightweight markup language (also known as wikitext, wiki markup, or wikicode), or a visual editor. For example, in a source editor, starting lines of text with asterisks could create a bulleted list.
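As a rough illustration of how a source editor's lightweight markup can be turned into HTML, the sketch below handles just two constructs: a leading asterisk producing a list item, and a double-bracketed page name producing a hyperlink. This is a minimal sketch only; it is not the parser of any particular wiki engine, and the bracket syntax and the "/wiki/" URL scheme are assumptions chosen for the example.

import re

LINK = re.compile(r"\[\[([^\]|]+)\]\]")   # matches [[Page name]]

def link_to_html(match):
    title = match.group(1)
    href = "/wiki/" + title.replace(" ", "_")   # assumed URL scheme
    return f'<a href="{href}">{title}</a>'

def wikitext_to_html(text):
    html_lines = []
    for line in text.splitlines():
        line = LINK.sub(link_to_html, line)      # turn bracketed terms into links
        if line.startswith("*"):                 # leading asterisk becomes a list item
            html_lines.append("<li>" + line[1:].strip() + "</li>")
        else:
            html_lines.append("<p>" + line + "</p>")
    return "\n".join(html_lines)

print(wikitext_to_html("* See [[Main Page]] for details"))
# <li>See <a href="/wiki/Main_Page">Main Page</a> for details</li>

Real wiki engines support far richer grammars (headings, emphasis, templates, tables) and typically sanitize the generated output to prevent script injection.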
The syntax and features of wiki markup languages for denoting style and structure can vary greatly among implementations. Some allow the use of HTML and CSS, while others prevent their use in order to foster uniformity in appearance. Visual editing While wiki engines have traditionally offered source editing to users, in recent years some implementations have added a rich-text editing mode. This is usually implemented, using JavaScript, as an interface which translates formatting instructions chosen from a toolbar into the corresponding wiki markup or HTML. This is generated and submitted to the server transparently, shielding users from the technical detail of markup editing and making it easier for them to change the content of pages. An example of such an interface is the VisualEditor in MediaWiki, the wiki engine used by Wikipedia. WYSIWYG editors may not provide all the features available in wiki markup, and some users prefer not to use them, so a source editor will often be available simultaneously. Version history Some wiki implementations keep a record of changes made to wiki pages, and may store every version of the page permanently. This allows authors to revert a page to an older version to rectify a mistake or to counteract a malicious or inappropriate edit to its content. These stored versions are typically presented for each page in a list, called a "log" or "edit history", available from the page via a link in the interface. The list displays metadata for each revision to the page, such as the time and date when it was stored and the name of the person who created it, alongside a link to view that specific revision. A diff (short for "difference") feature may be available, which highlights the changes between any two revisions. Edit summaries The edit history view in many wiki implementations will include edit summaries written by users when submitting changes to a page. Similar to the function of a log message in a revision control system, an edit summary is a short piece of text which summarizes and perhaps explains the change, for example "Corrected grammar" or "Fixed table formatting to not extend past page width". It is not inserted into the article's main text. Navigation Traditionally, wikis offer free navigation between their pages via hypertext links in page text, rather than requiring users to follow a formal or structured navigation scheme. Users may also create indexes or table-of-contents pages, hierarchical categorization via a taxonomy, or other forms of ad hoc content organization. Wiki implementations can provide one or more ways to categorize or tag pages to support the maintenance of such index pages, such as a backlink feature which displays all pages that link to a given page. Adding categories or tags to a page makes it easier for other users to find it. Most wikis allow page titles to be searched, and some offer full-text search of all stored content. Navigation between wikis Some wiki communities have established navigational networks between each other using a system called WikiNodes. A WikiNode is a page on a wiki which describes and links to other, related wikis. Some wikis operate a structure of neighbors and delegates, wherein a neighbor wiki is one which discusses similar content or is otherwise of interest, and a delegate wiki is one which has agreed to have certain content delegated to it.
WikiNode networks act as webrings which may be navigated from one node to another to find a wiki which addresses a specific subject. Linking to and naming pages The syntax used to create internal hyperlinks varies between wiki implementations. Beginning with the WikiWikiWeb in 1995, most wikis used camel case to name pages, in which the words in a phrase are capitalized and the spaces between them removed. In this system, the phrase "camel case" would be rendered as "CamelCase". In early wiki engines, when a page was displayed, any instance of a camel case phrase would be transformed into a link to another page named with the same phrase. While this system made it easy to link to pages, it had the downside of requiring pages to be named in a form deviating from standard spelling, and titles consisting of a single word required abnormally capitalizing one of the letters (e.g. "WiKi" instead of "Wiki"). Some wiki implementations attempt to improve the display of camel case page titles and links by reinserting spaces and possibly also reverting to lower case, but this simplistic method is not able to correctly present titles of mixed capitalization. For example, "Kingdom of France" as a page title would be written as "KingdomOfFrance" and displayed as "Kingdom Of France". To avoid this problem, the syntax of wiki markup gained free links, wherein a term in natural language could be wrapped in special characters to turn it into a link without modifying it. The concept was given the name "free links" in its first implementation, in UseModWiki in February 2001. In that implementation, link terms were wrapped in a double set of square brackets, for example [[Kingdom of France]]. This syntax was adopted by a number of later wiki engines. It is typically possible for users of a wiki to create links to pages that do not yet exist, as a way of inviting the creation of those pages. Such links are usually differentiated visually in some fashion, such as being colored red instead of the default blue, which was the case in the original WikiWikiWeb, or by appearing as a question mark next to the linked words. History WikiWikiWeb was the first wiki. Ward Cunningham started developing it in 1994 and installed it on the Internet domain c2.com on March 25, 1995. Cunningham gave it the name after remembering a Honolulu International Airport counter employee telling him to take the "Wiki Wiki Shuttle" bus that runs between the airport's terminals, later observing that "I chose wiki-wiki as an alliterative substitute for 'quick' and thereby avoided naming this stuff quick-web." Cunningham's system was inspired by his having used Apple's hypertext software HyperCard, which allowed users to create interlinked "stacks" of virtual cards. HyperCard, however, was single-user, and Cunningham was inspired to build upon the ideas of Vannevar Bush, whose memex concept anticipated hypertext, by allowing users to "comment on and change one another's text." Cunningham says his goals were to link together people's experiences to create a new literature to document programming patterns, and to harness people's natural desire to talk and tell stories with a technology that would feel comfortable to those not used to "authoring". Launched in January 2001, Wikipedia became the most famous wiki site, entering the top ten most popular websites in 2007. In the early 2000s, wikis were increasingly adopted in enterprise as collaborative software. Common uses included project communication, intranets, and documentation, initially for technical users.
Some companies use wikis as their collaborative software and as a replacement for static intranets, and some schools and universities use wikis to enhance group learning. On March 15, 2007, the word wiki was listed in the online Oxford English Dictionary. Alternative definitions In the late 1990s and early 2000s, the word "wiki" was used to refer to both user-editable websites and the software that powers them, and the latter definition is still occasionally in use. By 2014, Ward Cunningham's thinking on the nature of wikis had evolved, leading him to write that the word "wiki" should not be used to refer to a single website, but rather to a mass of user-editable pages or sites so that a single website is not "a wiki" but "an instance of wiki". In this concept of wiki federation, in which the same content can be hosted and edited in more than one location in a manner similar to distributed version control, the idea of a single discrete "wiki" no longer made sense. Implementations The software which powers a wiki may be implemented as a series of scripts which operate an existing web server, a standalone application server that runs on one or more web servers, or in the case of personal wikis, run as a standalone application on a single computer. Some wikis use flat file databases to store page content, while others use a relational database, as indexed database access is faster on large wikis, particularly for searching. Hosting Wikis can also be created on wiki hosting services (also known as wiki farms), where the server-side software is implemented by the wiki farm owner, and may do so at no charge in exchange for advertisements being displayed on the wiki's pages. Some hosting services offer private, password-protected wikis requiring authentication to access. Free wiki farms generally contain advertising on every page. Trust and security Access control The four basic types of users who participate in wikis are readers, authors, wiki administrators and system administrators. System administrators are responsible for the installation and maintenance of the wiki engine and the container web server. Wiki administrators maintain content and, through having elevated privileges, are granted additional functions (including, for example, preventing edits to pages, deleting pages, changing users' access rights, or blocking them from editing). Controlling changes Wikis are generally designed with a soft security philosophy in which it is easy to correct mistakes or harmful changes, rather than attempting to prevent them from happening in the first place. This allows them to be very open while providing a means to verify the validity of recent additions to the body of pages. Most wikis offer a recent changes page which shows recent edits, or a list of edits made within a given time frame. Some wikis can filter the list to remove edits flagged by users as "minor" and automated edits. The version history feature allows harmful changes to be reverted quickly and easily. Some wiki engines provide additional content control, allowing remote monitoring and management of a page or set of pages to maintain quality. A person willing to maintain pages will be alerted of modifications to them, allowing them to verify the validity of new editions quickly. Such a feature is often called a watchlist. Some wikis also implement patrolled revisions, in which editors with the requisite credentials can mark edits as being legitimate. 
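The version-history, revert, and diff behaviour described in the preceding sections can be sketched as a small data model. This is illustrative only; the class names, fields, and methods below are invented for the example and do not correspond to the storage schema of any particular wiki engine.

from dataclasses import dataclass, field
from datetime import datetime, timezone
import difflib

@dataclass
class Revision:
    text: str
    author: str
    summary: str                      # the "edit summary"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class PageHistory:
    def __init__(self):
        self.revisions = []           # every version is kept

    def edit(self, text, author, summary=""):
        self.revisions.append(Revision(text, author, summary))

    def revert_to(self, index, author):
        """Restore an older revision by recording it as a new revision."""
        old = self.revisions[index]
        self.edit(old.text, author, summary=f"Reverted to revision {index}")

    def diff(self, i, j):
        """Unified diff between two stored revisions."""
        a = self.revisions[i].text.splitlines()
        b = self.revisions[j].text.splitlines()
        return "\n".join(difflib.unified_diff(a, b, lineterm=""))

page = PageHistory()
page.edit("Wikis are edited in a browser.", "alice", "create page")
page.edit("Wikis are edited in a browzer!!!", "mallory")
page.revert_to(0, "bob")              # counteract the harmful edit
print(page.diff(1, 2))                # show what the revert changed

A recent-changes listing or watchlist notification would, in this sketch, simply be a view over the newest Revision entries across pages.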
A flagged revisions system can prevent edits from going live until they have been reviewed. Wikis may allow any person on the web to edit their content without having to register an account on the site first (anonymous editing), or require registration as a condition of participation. On implementations where an administrator is able to restrict editing of a page or group of pages to a specific group of users, they may have the option to prevent anonymous editing while allowing it for registered users. Trustworthiness and reliability of content Critics of publicly editable wikis argue that they could be easily tampered with by malicious individuals, or even by well-meaning but unskilled users who introduce errors into the content. Proponents maintain that these issues will be caught and rectified by a wiki's community of users. High editorial standards in medicine and health sciences articles, in which users typically use peer-reviewed journals or university textbooks as sources, have led to the idea of expert-moderated wikis. Wiki implementations that retain, and allow access to, specific versions of articles have been useful to the scientific community, allowing expert peer reviewers to provide links to trusted versions of articles which they have analyzed. Security Trolling and cybervandalism on wikis, where content is changed to something deliberately incorrect or a hoax, offensive material or nonsense is added, or content is maliciously removed, can be a major problem. On larger wiki sites it is possible for such changes to go unnoticed for a long period. In addition to using the approach of soft security for protecting themselves, larger wikis may employ sophisticated methods, such as bots that automatically identify and revert vandalism. For example, on Wikipedia, the bot ClueBot NG uses machine learning to identify likely harmful changes, and reverts these changes within minutes or even seconds. Disagreements between users over the content or appearance of pages may cause edit wars, where competing users repetitively change a page back to a version that they favor. Some wiki software allows administrators to prevent pages from being editable until a decision has been made on what version of the page would be most appropriate. Some wikis may be subject to external structures of governance which address the behavior of persons with access to the system, for example in academic contexts. Harmful external links As most wikis allow the creation of hyperlinks to other sites and services, the addition of malicious hyperlinks, such as links to sites infected with malware, can also be a problem. For example, in 2006 a German Wikipedia article about the Blaster Worm was edited to include a hyperlink to a malicious website, and users of vulnerable Microsoft Windows systems who followed the link had their systems infected with the worm. Some wiki engines offer a blacklist feature which prevents users from adding hyperlinks to specific sites that have been placed on the list by the wiki's administrators. Communities Applications The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all Web sites in terms of traffic. Other large wikis include the WikiWikiWeb, Memory Alpha, Wikivoyage, and previously Susning.nu, a Swedish-language knowledge base. Medical and health-related wiki examples include Ganfyd, an online collaborative medical reference that is edited by medical professionals and invited non-medical experts.
Many wiki communities are private, particularly within enterprises. They are often used as internal documentation for in-house systems and applications. Some companies use wikis to allow customers to help produce software documentation. A study of corporate wiki users found that they could be divided into "synthesizers" and "adders" of content. Synthesizers' frequency of contribution was affected more by their impact on other wiki users, while adders' contribution frequency was affected more by being able to accomplish their immediate work. From a study of thousands of wiki deployments, Jonathan Grudin concluded careful stakeholder analysis and education are crucial to successful wiki deployment. In 2005, the Gartner Group, noting the increasing popularity of wikis, estimated that they would become mainstream collaboration tools in at least 50% of companies by 2009. Wikis can be used for project management. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. In the mid-2000s, the increasing trend among industries toward collaboration placed a heavier impetus upon educators to make students proficient in collaborative work, inspiring even greater interest in wikis being used in the classroom. Wikis have found some use within the legal profession and within the government. Examples include the Central Intelligence Agency's Intellipedia, designed to share and collect intelligence assessments, DKosopedia, which was used by the American Civil Liberties Union to assist with review of documents about the internment of detainees in Guantánamo Bay; and the wiki of the United States Court of Appeals for the Seventh Circuit, used to post court rules and allow practitioners to comment and ask questions. The United States Patent and Trademark Office operates Peer-to-Patent, a wiki to allow the public to collaborate on finding prior art relevant to the examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. Cornell Law School founded a wiki-based legal dictionary called Wex, whose growth has been hampered by restrictions on who can edit. In academic contexts, wikis have also been used as project collaboration and research support systems. City wikis A city wiki or local wiki is a wiki used as a knowledge base and social network for a specific geographical locale. The term city wiki is sometimes also used for wikis that cover not just a city, but a small town or an entire region. Such a wiki contains information about specific instances of things, ideas, people and places. 
Such highly localized information might be appropriate for a wiki targeted at local viewers, and could include: Details of public establishments such as public houses, bars, accommodation or social centers Owner name, opening hours and statistics for a specific shop Statistical information about a specific road in a city Flavors of ice cream served at a local ice cream parlor A biography of a local mayor and other persons Growth factors A study of several hundred wikis in 2008 showed that a relatively high number of administrators for a given content size is likely to reduce growth; access controls restricting editing to registered users tends to reduce growth; a lack of such access controls tends to fuel new user registration; and that a higher ratio of administrators to regular users has no significant effect on content or population growth. Legal environment Joint authorship of articles, in which different users participate in correcting, editing, and compiling the finished product, can also cause editors to become tenants in common of the copyright, making it impossible to republish without permission of all co-owners, some of whose identities may be unknown due to pseudonymous or anonymous editing. Some copyright issues can be alleviated through the use of an open content license. Version 2 of the GNU Free Documentation License includes a specific provision for wiki relicensing, and Creative Commons licenses are also popular. When no license is specified, an implied license to read and add content to a wiki may be deemed to exist on the grounds of business necessity and the inherent nature of a wiki. Wikis and their users can be held liable for certain activities that occur on the wiki. If a wiki owner displays indifference and forgoes controls (such as banning copyright infringers) that they could have exercised to stop copyright infringement, they may be deemed to have authorized infringement, especially if the wiki is primarily used to infringe copyrights or obtains a direct financial benefit, such as advertising revenue, from infringing activities. In the United States, wikis may benefit from Section 230 of the Communications Decency Act, which protects sites that engage in "Good Samaritan" policing of harmful material, with no requirement on the quality or quantity of such self-policing. It has also been argued that a wiki's enforcement of certain rules, such as anti-bias, verifiability, reliable sourcing, and no-original-research policies, could pose legal risks. When defamation occurs on a wiki, theoretically, all users of the wiki can be held liable, because any of them had the ability to remove or amend the defamatory material from the "publication". It remains to be seen whether wikis will be regarded as more akin to an internet service provider, which is generally not held liable due to its lack of control over publications' contents, than a publisher. It has been recommended that trademark owners monitor what information is presented about their trademarks on wikis, since courts may use such content as evidence pertaining to public perceptions, and they can edit entries to rectify misinformation. Conferences Active conferences and meetings about wiki-related topics include: Atlassian Summit, an annual conference for users of Atlassian software, including Confluence. OpenSym (called WikiSym until 2014), an academic conference dedicated to research about wikis and open collaboration. SMWCon, a bi-annual conference for users and developers of Semantic MediaWiki. 
TikiFest, a frequently held meeting for users and developers of Tiki Wiki CMS Groupware. Wikimania, an annual conference dedicated to the research and practice of Wikimedia Foundation projects like Wikipedia. Former wiki-related events include: RecentChangesCamp (2006–2012), an unconference on wiki-related topics. RegioWikiCamp (2009–2013), a semi-annual unconference on "regiowikis", or wikis on cities and other geographic areas.
Technology
Internet
null
33118
https://en.wikipedia.org/wiki/Woodworking
Woodworking
Woodworking is the skill of making items from wood, and includes cabinetry, furniture making, wood carving, joinery, carpentry, and woodturning. History Along with stone, clay and animal parts, wood was one of the first materials worked by early humans. Microwear analysis of the Mousterian stone tools used by the Neanderthals shows that many were used to work wood. The development of civilization was closely tied to the development of increasingly greater degrees of skill in working these materials. Among early finds of wooden tools are the worked sticks from Kalambo Falls, Clacton-on-Sea and Lehringen. The spears from Schöningen (Germany) provide some of the first examples of wooden hunting implements. Flint tools were used for carving. Since Neolithic times, carved wooden vessels are known, for example, from the Linear Pottery culture wells at Kückhofen and Eythra. Examples of Bronze Age wood-carving include tree trunks worked into coffins from northern Germany and Denmark and wooden folding-chairs. The site of Fellbach-Schmieden in Germany has provided fine examples of wooden animal statues from the Iron Age. Wooden idols from the La Tène period are known from a sanctuary at the source of the Seine in France. Ancient Egypt There is significant evidence of advanced woodworking in ancient Egypt. Woodworking is depicted in many extant ancient Egyptian drawings, and a considerable amount of ancient Egyptian furniture (such as stools, chairs, tables, beds, chests) has been preserved. Tombs represent a large collection of these artifacts and the inner coffins found in the tombs were also made of wood. The metal used by the Egyptians for woodworking tools was originally copper and eventually, after 2000 BC, bronze, as ironworking was unknown until much later. Commonly used woodworking tools included axes, adzes, chisels, pull saws, and bow drills. Mortise and tenon joints are attested from the earliest Predynastic period. These joints were strengthened using pegs, dowels and leather or cord lashings. Animal glue came to be used only in the New Kingdom period. Ancient Egyptians invented the art of veneering and used varnishes for finishing, though the composition of these varnishes is unknown. Although different native acacias were used, as was the wood from the local sycamore and tamarisk trees, deforestation in the Nile valley resulted in the need for the importation of wood, notably cedar, but also Aleppo pine, boxwood and oak, starting from the Second Dynasty. Ancient Rome Woodworking was essential to the Romans. It provided material for buildings, transportation, tools, and household items. Wood also provided pipes, dye, waterproofing materials, and energy for heat. Although most examples of Roman woodworking have been lost, the literary record preserved much of the contemporary knowledge. Vitruvius dedicates an entire chapter of his De architectura to timber, preserving many details. Pliny, while not a botanist, dedicated six books of his Natural History to trees and woody plants, providing a wealth of information on trees and their uses. Ancient China The progenitors of Chinese woodworking are considered to be Lu Ban (魯班) and his wife Lady Yun, from the Spring and Autumn period (771 to 476 BC). Lu Ban is said to have introduced the plane, chalk-line, and other tools to China. His teachings were supposedly left behind in the book Lu Ban Jing (魯班經, "Manuscript of Lu Ban"). Despite this, it is believed that the text was written some 1500 years after his death.
This book is filled largely with descriptions of dimensions for use in building various items such as flower pots, tables, altars, etc., and also contains extensive instructions concerning Feng Shui. It mentions almost nothing of the intricate glue-less and nail-less joinery for which Chinese furniture was so famous. Modern day With the advances in modern technology and the demands of industry, woodwork as a field has changed. The development of Computer Numeric Controlled (CNC) machines, for example, has made it possible to mass-produce and reproduce products faster, with less waste, and often with more complex design than ever before. CNC wood routers can carve complicated and highly detailed shapes into flat stock, to create signs or art. Rechargeable power tools speed up creation of many projects and require much less body strength than in the past, for example when boring multiple holes. Skilled fine woodworking, however, remains a craft pursued by many. Demand remains for hand-crafted work such as furniture and art, although, given the rate and cost of production, the price to consumers is much higher. Modern woodcarving usually refers to works of wood art produced by woodcarvers in the form of contemporary art. This type of wood carving often combines traditional techniques with more modern artistic styles and concepts. Modern woodcarving can be produced in a variety of forms and styles, from realist to abstract carvings, and often uses unusual wood materials such as rainwood or wood with unique textures to highlight the uniqueness of the work. In recent years, the art of modern wood carving has become increasingly popular among woodworkers and visual art enthusiasts not only in Asia, but also around the world. Modern woodcarving art is often exhibited in art galleries and museums, and can be seen in several global contemporary art exhibitions. Styles and designs Woodworking, especially furniture making, encompasses many different designs and styles, and these have changed throughout its history. Some of the more common styles are listed below. Traditional furniture styles usually include styles that have been around for long periods of time and have been a mark of wealth and luxury for centuries. More modern furniture styles have come into common use over the past few hundred years. Materials
Technology
Material and chemical
null
33125
https://en.wikipedia.org/wiki/Wavelength
Wavelength
In physics and mathematics, wavelength or spatial period of a wave or periodic function is the distance over which the wave's shape repeats. In other words, it is the distance between consecutive corresponding points of the same phase on the wave, such as two adjacent crests, troughs, or zero crossings. Wavelength is a characteristic of both traveling waves and standing waves, as well as other spatial wave patterns. The inverse of the wavelength is called the spatial frequency. Wavelength is commonly designated by the Greek letter lambda (λ). The term "wavelength" is also sometimes applied to modulated waves, and to the sinusoidal envelopes of modulated waves or waves formed by interference of several sinusoids. Assuming a sinusoidal wave moving at a fixed wave speed, wavelength is inversely proportional to the frequency of the wave: waves with higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. Wavelength depends on the medium (for example, vacuum, air, or water) that a wave travels through. Examples of waves are sound waves, light, water waves and periodic electrical signals in a conductor. A sound wave is a variation in air pressure, while in light and other electromagnetic radiation the strength of the electric and the magnetic field vary. Water waves are variations in the height of a body of water. In a crystal lattice vibration, atomic positions vary. The range of wavelengths or frequencies for wave phenomena is called a spectrum. The name originated with the visible light spectrum but now can be applied to the entire electromagnetic spectrum as well as to a sound spectrum or vibration spectrum. Sinusoidal waves In linear media, any wave pattern can be described in terms of the independent propagation of sinusoidal components. The wavelength λ of a sinusoidal waveform traveling at constant speed is given by λ = v/f, where v is called the phase speed (magnitude of the phase velocity) of the wave and f is the wave's frequency. In a dispersive medium, the phase speed itself depends upon the frequency of the wave, making the relationship between wavelength and frequency nonlinear. In the case of electromagnetic radiation—such as light—in free space, the phase speed is the speed of light, about 3 × 10⁸ m/s. Thus the wavelength of a 100 MHz electromagnetic (radio) wave is about: 3 × 10⁸ m/s divided by 10⁸ Hz = 3 m. The wavelength of visible light ranges from deep red, roughly 700 nm, to violet, roughly 400 nm (for other examples, see electromagnetic spectrum). For sound waves in air, the speed of sound is 343 m/s (at room temperature and atmospheric pressure). The wavelengths of sound frequencies audible to the human ear (20 Hz–20 kHz) are thus between approximately 17 m and 17 mm, respectively. Somewhat higher frequencies are used by bats so they can resolve targets smaller than 17 mm. Wavelengths in audible sound are much longer than those in visible light. Standing waves A standing wave is an undulatory motion that stays in one place. A sinusoidal standing wave includes stationary points of no motion, called nodes, and the wavelength is twice the distance between nodes. The upper figure shows three standing waves in a box. The walls of the box are considered to require the wave to have nodes at the walls of the box (an example of boundary conditions), thus determining the allowed wavelengths.
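A small numeric sketch of the relationships above, using the approximate wave speeds quoted in the text. The box length used for the standing-wave modes is an illustrative value, and the mode formula λn = 2L/n assumes nodes at both walls.

SPEED_OF_LIGHT = 3.0e8      # m/s, approximate value used in the text
SPEED_OF_SOUND = 343.0      # m/s, air at room temperature

def wavelength(speed, frequency):
    # λ = v / f for a sinusoidal wave at constant speed
    return speed / frequency

print(wavelength(SPEED_OF_LIGHT, 100e6))   # 100 MHz radio wave -> 3.0 m
print(wavelength(SPEED_OF_SOUND, 20))      # 20 Hz sound -> about 17 m
print(wavelength(SPEED_OF_SOUND, 20e3))    # 20 kHz sound -> about 17 mm

L = 1.0                                     # box length in metres (illustrative)
allowed = [2 * L / n for n in range(1, 4)]  # longest allowed standing-wave wavelengths
print(allowed)                              # [2.0, 1.0, 0.666...]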
For example, for an electromagnetic wave, if the box has ideal conductive walls, the condition for nodes at the walls results because the conductive walls cannot support a tangential electric field, forcing the wave to have zero amplitude at the wall. The stationary wave can be viewed as the sum of two traveling sinusoidal waves of oppositely directed velocities. Consequently, wavelength, period, and wave velocity are related just as for a traveling wave. For example, the speed of light can be determined from observation of standing waves in a metal box containing an ideal vacuum. Mathematical representation Traveling sinusoidal waves are often represented mathematically in terms of their velocity v (in the x direction), frequency f and wavelength λ as: y(x, t) = A sin(2π(x − vt)/λ), where y is the value of the wave at any position x and time t, and A is the amplitude of the wave. They are also commonly expressed in terms of wavenumber k (2π times the reciprocal of wavelength) and angular frequency ω (2π times the frequency) as: y(x, t) = A sin(kx − ωt), in which wavelength and wavenumber are related to velocity and frequency as: k = 2π/λ = ω/v, or λ = 2π/k = v/f. In the second form given above, the phase (kx − ωt) is often generalized to (k·r − ωt), by replacing the wavenumber k with a wave vector that specifies the direction and wavenumber of a plane wave in 3-space, parameterized by position vector r. In that case, the wavenumber k, the magnitude of k, is still in the same relationship with wavelength as shown above, with v being interpreted as scalar speed in the direction of the wave vector. The first form, using reciprocal wavelength in the phase, does not generalize as easily to a wave in an arbitrary direction. Generalizations to sinusoids of other phases, and to complex exponentials, are also common; see plane wave. The typical convention of using the cosine phase instead of the sine phase when describing a wave is based on the fact that the cosine is the real part of the complex exponential in the wave A e^(i(kx − ωt)). General media The speed of a wave depends upon the medium in which it propagates. In particular, the speed of light in a medium is less than in vacuum, which means that the same frequency will correspond to a shorter wavelength in the medium than in vacuum, as shown in the figure at right. This change in speed upon entering a medium causes refraction, or a change in direction of waves that encounter the interface between media at an angle. For electromagnetic waves, this change in the angle of propagation is governed by Snell's law. The wave velocity in one medium not only may differ from that in another, but the velocity typically varies with wavelength. As a result, the change in direction upon entering a different medium changes with the wavelength of the wave. For electromagnetic waves the speed in a medium is governed by its refractive index according to v = c / n(λ0), where c is the speed of light in vacuum and n(λ0) is the refractive index of the medium at wavelength λ0, where the latter is measured in vacuum rather than in the medium. The corresponding wavelength in the medium is λ = λ0 / n(λ0). When wavelengths of electromagnetic radiation are quoted, the wavelength in vacuum usually is intended unless the wavelength is specifically identified as the wavelength in some other medium. In acoustics, where a medium is essential for the waves to exist, the wavelength value is given for a specified medium. The variation in speed of light with wavelength is known as dispersion, and is also responsible for the familiar phenomenon in which light is separated into component colours by a prism.
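A brief numeric sketch of the wavelength change in a medium just described: the frequency is unchanged, so the in-medium wavelength is the vacuum wavelength divided by the refractive index. The refractive indices below are typical textbook values, used purely for illustration.

def wavelength_in_medium(vacuum_wavelength_nm, refractive_index):
    # λ_medium = λ_vacuum / n, since the frequency does not change
    return vacuum_wavelength_nm / refractive_index

print(wavelength_in_medium(589.0, 1.00))   # vacuum: 589 nm (yellow light)
print(wavelength_in_medium(589.0, 1.33))   # water:  about 443 nm
print(wavelength_in_medium(589.0, 1.52))   # crown glass: about 388 nm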
Separation occurs when the refractive index inside the prism varies with wavelength, so different wavelengths propagate at different speeds inside the prism, causing them to refract at different angles. The mathematical relationship that describes how the speed of light within a medium varies with wavelength is known as a dispersion relation. Nonuniform media Wavelength can be a useful concept even if the wave is not periodic in space. For example, in an ocean wave approaching shore, shown in the figure, the incoming wave undulates with a varying local wavelength that depends in part on the depth of the sea floor compared to the wave height. The analysis of the wave can be based upon comparison of the local wavelength with the local water depth. Waves that are sinusoidal in time but propagate through a medium whose properties vary with position (an inhomogeneous medium) may propagate at a velocity that varies with position, and as a result may not be sinusoidal in space. The figure at right shows an example. As the wave slows down, the wavelength gets shorter and the amplitude increases; after a place of maximum response, the short wavelength is associated with a high loss and the wave dies out. The analysis of differential equations of such systems is often done approximately, using the WKB method (also known as the Liouville–Green method). The method integrates phase through space using a local wavenumber, which can be interpreted as indicating a "local wavelength" of the solution as a function of time and space. This method treats the system locally as if it were uniform with the local properties; in particular, the local wave velocity associated with a frequency is the only thing needed to estimate the corresponding local wavenumber or wavelength. In addition, the method computes a slowly changing amplitude to satisfy other constraints of the equations or of the physical system, such as for conservation of energy in the wave. Crystals Waves in crystalline solids are not continuous, because they are composed of vibrations of discrete particles arranged in a regular lattice. This produces aliasing because the same vibration can be considered to have a variety of different wavelengths, as shown in the figure. Descriptions using more than one of these wavelengths are redundant; it is conventional to choose the longest wavelength that fits the phenomenon. The range of wavelengths sufficient to provide a description of all possible waves in a crystalline medium corresponds to the wave vectors confined to the Brillouin zone. This indeterminacy in wavelength in solids is important in the analysis of wave phenomena such as energy bands and lattice vibrations. It is mathematically equivalent to the aliasing of a signal that is sampled at discrete intervals. More general waveforms The concept of wavelength is most often applied to sinusoidal, or nearly sinusoidal, waves, because in a linear system the sinusoid is the unique shape that propagates with no shape change – just a phase change and potentially an amplitude change. The wavelength (or alternatively wavenumber or wave vector) is a characterization of the wave in space, that is functionally related to its frequency, as constrained by the physics of the system. Sinusoids are the simplest traveling wave solutions, and more complex solutions can be built up by superposition. In the special case of dispersion-free and uniform media, waves other than sinusoids propagate with unchanging shape and constant velocity. 
In certain circumstances, waves of unchanging shape also can occur in nonlinear media; for example, the figure shows ocean waves in shallow water that have sharper crests and flatter troughs than those of a sinusoid, typical of a cnoidal wave, a traveling wave so named because it is described by the Jacobi elliptic function of m-th order, usually denoted as cn(x; m). Large-amplitude ocean waves with certain shapes can propagate unchanged, because of properties of the nonlinear surface-wave medium. If a traveling wave has a fixed shape that repeats in space or in time, it is a periodic wave. Such waves are sometimes regarded as having a wavelength even though they are not sinusoidal. As shown in the figure, wavelength is measured between consecutive corresponding points on the waveform. Wave packets Localized wave packets, "bursts" of wave action where each wave packet travels as a unit, find application in many fields of physics. A wave packet has an envelope that describes the overall amplitude of the wave; within the envelope, the distance between adjacent peaks or troughs is sometimes called a local wavelength. An example is shown in the figure. In general, the envelope of the wave packet moves at a speed different from the constituent waves. Using Fourier analysis, wave packets can be analyzed into infinite sums (or integrals) of sinusoidal waves of different wavenumbers or wavelengths. Louis de Broglie postulated that all particles with a specific value of momentum p have a wavelength λ = h/p, where h is the Planck constant. This hypothesis was at the basis of quantum mechanics. Nowadays, this wavelength is called the de Broglie wavelength. For example, the electrons in a CRT display have a De Broglie wavelength of about . To prevent the wave function for such a particle being spread over all space, de Broglie proposed using wave packets to represent particles that are localized in space. The spatial spread of the wave packet, and the spread of the wavenumbers of sinusoids that make up the packet, correspond to the uncertainties in the particle's position and momentum, the product of which is bounded by the Heisenberg uncertainty principle. Interference and diffraction Double-slit interference When sinusoidal waveforms add, they may reinforce each other (constructive interference) or cancel each other (destructive interference) depending upon their relative phase. This phenomenon is used in the interferometer. A simple example is an experiment due to Young where, as shown in the figure, light is passed through two slits and shines on a screen. The path of the light to a position on the screen is different for the two slits, and depends upon the angle θ the path makes with the screen. If we suppose the screen is far enough from the slits (that is, s is large compared to the slit separation d) then the paths are nearly parallel, and the path difference is simply d sin θ. Accordingly, the condition for constructive interference is: d sin θ = mλ, where m is an integer, and for destructive interference is: d sin θ = (m + 1/2)λ. Thus, if the wavelength of the light is known, the slit separation can be determined from the interference pattern or fringes, and vice versa. For multiple slits, the pattern is Iq = I1 [sin(qπg sin θ / λ) / sin(πg sin θ / λ)]², where q is the number of slits, and g is the grating constant. The first factor, I1, is the single-slit result, which modulates the more rapidly varying second factor that depends upon the number of slits and their spacing. In the figure I1 has been set to unity, a very rough approximation.
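A short numeric sketch of the double-slit geometry above: for a distant screen, the m-th bright fringe lies at approximately x = mλs/d, so the fringe spacing is λs/d. The laser wavelength, slit separation and screen distance below are illustrative values.

wavelength = 633e-9      # m, red laser light (illustrative)
d = 0.25e-3              # m, slit separation (illustrative)
s = 1.0                  # m, slit-to-screen distance (illustrative)

fringe_spacing = wavelength * s / d                          # distance between bright fringes
bright_fringes_mm = [1e3 * m * fringe_spacing for m in range(4)]
print(round(fringe_spacing * 1e3, 3), "mm between bright fringes")   # ~2.532 mm
print([round(x, 3) for x in bright_fringes_mm])                      # 0.0, 2.532, 5.064, 7.596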
The effect of interference is to redistribute the light, so the energy contained in the light is not altered, just where it shows up. Single-slit diffraction The notion of path difference and constructive or destructive interference used above for the double-slit experiment applies as well to the display of a single slit of light intercepted on a screen. The main result of this interference is to spread out the light from the narrow slit into a broader image on the screen. This distribution of wave energy is called diffraction. Two types of diffraction are distinguished, depending upon the separation between the source and the screen: Fraunhofer diffraction or far-field diffraction at large separations and Fresnel diffraction or near-field diffraction at close separations. In the analysis of the single slit, the non-zero width of the slit is taken into account, and each point in the aperture is taken as the source of one contribution to the beam of light (Huygens' wavelets). On the screen, the light arriving from each position within the slit has a different path length, albeit possibly a very small difference. Consequently, interference occurs. In the Fraunhofer diffraction pattern sufficiently far from a single slit, within a small-angle approximation, the intensity spread S is related to position x via a squared sinc function: S(u) = sinc²(u), with u = Lx/(λR), where L is the slit width, R is the distance of the pattern (on the screen) from the slit, and λ is the wavelength of light used. The function S has zeros where u is a non-zero integer; these occur at x values whose separation is proportional to the wavelength. Diffraction-limited resolution Diffraction is the fundamental limitation on the resolving power of optical instruments, such as telescopes (including radiotelescopes) and microscopes. For a circular aperture, the diffraction-limited image spot is known as an Airy disk; the distance x in the single-slit diffraction formula is replaced by radial distance r and the sine is replaced by 2J1, where J1 is a first order Bessel function. The resolvable spatial size of objects viewed through a microscope is limited according to the Rayleigh criterion, the radius to the first null of the Airy disk, to a size proportional to the wavelength of the light used and depending on the numerical aperture: r = 1.22 λ / (2 NA), where the numerical aperture is defined as NA = n sin θ, for θ being the half-angle of the cone of rays accepted by the microscope objective and n the refractive index of the medium in which the objective works. The angular size of the central bright portion (radius to first null of the Airy disk) of the image diffracted by a circular aperture, a measure most commonly used for telescopes and cameras, is: δ = 1.22 λ / D, where λ is the wavelength of the waves that are focused for imaging, D the entrance pupil diameter of the imaging system, in the same units, and the angular resolution δ is in radians. As with other diffraction patterns, the pattern scales in proportion to wavelength, so shorter wavelengths can lead to higher resolution. Subwavelength The term subwavelength is used to describe an object having one or more dimensions smaller than the length of the wave with which the object interacts. For example, the term subwavelength-diameter optical fibre means an optical fibre whose diameter is less than the wavelength of light propagating through it. A subwavelength particle is a particle smaller than the wavelength of light with which it interacts (see Rayleigh scattering). Subwavelength apertures are holes smaller than the wavelength of light propagating through them.
Such structures have applications in extraordinary optical transmission, and zero-mode waveguides, among other areas of photonics. Subwavelength may also refer to a phenomenon involving subwavelength objects; for example, subwavelength imaging. Angular wavelength A quantity related to the wavelength is the angular wavelength (also known as reduced wavelength), usually symbolized by ƛ ("lambda-bar" or barred lambda). It is equal to the ordinary wavelength reduced by a factor of 2π (ƛ = λ/2π), with SI units of meter per radian. It is the inverse of the angular wavenumber (k = 2π/λ). It is usually encountered in quantum mechanics, where it is used in combination with the reduced Planck constant (symbol ħ, h-bar) and the angular frequency (symbol ω).
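Drawing together two of the quantities above, the following small sketch evaluates the diffraction-limited angular resolution δ = 1.22 λ/D of a circular aperture and the reduced wavelength ƛ = λ/2π. The aperture diameter is an illustrative value.

import math

wavelength = 550e-9          # m, green light (illustrative)
aperture_diameter = 0.2      # m, a small telescope (illustrative)

delta_radians = 1.22 * wavelength / aperture_diameter
print(delta_radians, "rad")                                  # about 3.4e-6 rad
print(math.degrees(delta_radians) * 3600, "arcseconds")      # about 0.69 arcsec

reduced_wavelength = wavelength / (2 * math.pi)              # lambda-bar, metres per radian
print(reduced_wavelength, "m/rad")                           # about 8.75e-8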
Physical sciences
Waves
null
33139
https://en.wikipedia.org/wiki/World%20Wide%20Web
World Wide Web
The World Wide Web (WWW or simply the Web) is an information system that enables content sharing over the Internet through user-friendly ways meant to appeal to users beyond IT specialists and hobbyists. It allows documents and other web resources to be accessed over the Internet according to specific rules of the Hypertext Transfer Protocol (HTTP). The Web was invented by English computer scientist Tim Berners-Lee while at CERN in 1989 and opened to the public in 1993. It was conceived as a "universal linked information system". Documents and other media content are made available to the network through web servers and can be accessed by programs such as web browsers. Servers and resources on the World Wide Web are identified and located through character strings called uniform resource locators (URLs). The original and still very common document type is a web page formatted in Hypertext Markup Language (HTML). This markup language supports plain text, images, embedded video and audio contents, and scripts (short programs) that implement complex user interaction. The HTML language also supports hyperlinks (embedded URLs) which provide immediate access to other web resources. Web navigation, or web surfing, is the common practice of following such hyperlinks across multiple websites. Web applications are web pages that function as application software. The information in the Web is transferred across the Internet using HTTP. Multiple web resources with a common theme and usually a common domain name make up a website. A single web server may provide multiple websites, while some websites, especially the most popular ones, may be provided by multiple servers. Website content is provided by a myriad of companies, organizations, government agencies, and individual users; and comprises an enormous amount of educational, entertainment, commercial, and government information. The Web has become the world's dominant information systems platform. It is the primary tool that billions of people worldwide use to interact with the Internet. History The Web was invented by English computer scientist Tim Berners-Lee while working at CERN. He was motivated by the problem of storing, updating, and finding documents and data files in that large and constantly changing organization, as well as distributing them to collaborators outside CERN. In his design, Berners-Lee dismissed the common tree structure approach, used for instance in the existing CERNDOC documentation system and in the Unix filesystem, as well as approaches that relied in tagging files with keywords, as in the VAX/NOTES system. Instead he adopted concepts he had put into practice with his private ENQUIRE system (1980) built at CERN. When he became aware of Ted Nelson's hypertext model (1965), in which documents can be linked in unconstrained ways through hyperlinks associated with "hot spots" embedded in the text, it helped to confirm the validity of his concept. The model was later popularized by Apple's HyperCard system. Unlike Hypercard, Berners-Lee's new system from the outset was meant to support links between multiple databases on independent computers, and to allow simultaneous access by many users from any computer on the Internet. He also specified that the system should eventually handle other media besides text, such as graphics, speech, and video. Links could refer to mutable data files, or even fire up programs on their server computer. 
He also conceived "gateways" that would allow access through the new system to documents organized in other ways (such as traditional computer file systems or the Usenet). Finally, he insisted that the system should be decentralized, without any central control or coordination over the creation of links. Berners-Lee submitted a proposal to CERN in May 1989, without giving the system a name. He got a working system implemented by the end of 1990, including a browser called WorldWideWeb (which became the name of the project and of the network) and an HTTP server running at CERN. As part of that development he defined the first version of the HTTP protocol, the basic URL syntax, and implicitly made HTML the primary document format. The technology was released outside CERN to other research institutions starting in January 1991, and then to the whole Internet on 23 August 1991. The Web was a success at CERN, and began to spread to other scientific and academic institutions. Within the next two years, there were 50 websites created. CERN made the Web protocol and code available royalty free in 1993, enabling its widespread use. After the NCSA released the Mosaic web browser later that year, the Web's popularity grew rapidly as thousands of websites sprang up in less than a year. Mosaic was a graphical browser that could display inline images and submit forms that were processed by the HTTPd server. Marc Andreessen and Jim Clark founded Netscape the following year and released the Navigator browser, which introduced Java and JavaScript to the Web. It quickly became the dominant browser. Netscape became a public company in 1995 which triggered a frenzy for the Web and started the dot-com bubble. Microsoft responded by developing its own browser, Internet Explorer, starting the browser wars. By bundling it with Windows, it became the dominant browser for 14 years. Berners-Lee founded the World Wide Web Consortium (W3C) which created XML in 1996 and recommended replacing HTML with stricter XHTML. In the meantime, developers began exploiting an IE feature called XMLHttpRequest to make Ajax applications and launched the Web 2.0 revolution. Mozilla, Opera, and Apple rejected XHTML and created the WHATWG which developed HTML5. In 2009, the W3C conceded and abandoned XHTML. In 2019, it ceded control of the HTML specification to the WHATWG. The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet. Nomenclature Tim Berners-Lee states that World Wide Web is officially spelled as three separate words, each capitalised, with no intervening hyphens. Nonetheless, it is often called simply the Web, and also often the web; see Capitalization of Internet for details. In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng (), which satisfies www and literally means "10,000-dimensional net", a translation that reflects the design concept and proliferation of the World Wide Web. Use of the www prefix has been declining, especially when web applications sought to brand their domain names and make them easily pronounceable. As the mobile Web grew in popularity, services like Gmail.com, Outlook.com, Myspace.com, Facebook.com and Twitter.com are most often mentioned without adding "www." (or, indeed, ".com") to the domain. In English, www is usually read as double-u double-u double-u. Some users pronounce it dub-dub-dub, particularly in New Zealand. 
Stephen Fry, in his "Podgrams" series of podcasts, pronounces it wuh wuh wuh. The English writer Douglas Adams once quipped in The Independent on Sunday (1999): "The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for". Function The terms Internet and World Wide Web are often used without much distinction. However, the two terms do not mean the same thing. The Internet is a global system of computer networks interconnected through telecommunications and optical networking. In contrast, the World Wide Web is a global collection of documents and other resources, linked by hyperlinks and URIs. Web resources are accessed using HTTP or HTTPS, which are application-level Internet protocols that use the Internet transport protocols. Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser or by following a hyperlink to that page or resource. The web browser then initiates a series of background communication messages to fetch and display the requested page. In the 1990s, using a browser to view web pages—and to move from one web page to another through hyperlinks—came to be known as 'browsing,' 'web surfing' (after channel surfing), or 'navigating the Web'. Early studies of this new behaviour investigated user patterns in using web browsers. One study, for example, found five user patterns: exploratory surfing, window surfing, evolved surfing, bounded navigation and targeted navigation. The following example demonstrates the functioning of a web browser when accessing a page at the URL . The browser resolves the server name of the URL () into an Internet Protocol address using the globally distributed Domain Name System (DNS). This lookup returns an IP address such as 203.0.113.4 or 2001:db8:2e::7334. The browser then requests the resource by sending an HTTP request across the Internet to the computer at that address. It requests service from a specific TCP port number that is well known for the HTTP service so that the receiving host can distinguish an HTTP request from other network protocols it may be servicing. HTTP normally uses port number 80 and for HTTPS it normally uses port number 443. The content of the HTTP request can be as simple as two lines of text: GET /home.html HTTP/1.1 Host: example.org The computer receiving the HTTP request delivers it to web server software listening for requests on port 80. If the web server can fulfil the request it sends an HTTP response back to the browser indicating success: HTTP/1.1 200 OK Content-Type: text/html; charset=UTF-8 followed by the content of the requested page. Hypertext Markup Language (HTML) for a basic web page might look like this: <html> <head> <title>Example.org – The World Wide Web</title> </head> <body> <p>The World Wide Web, abbreviated as WWW and commonly known ...</p> </body> </html> The web browser parses the HTML and interprets the markup (<title>, <p> for paragraph, and such) that surrounds the words to format the text on the screen. Many web pages use HTML to reference the URLs of other resources such as images, other embedded media, scripts that affect page behaviour, and Cascading Style Sheets that affect page layout. The browser makes additional HTTP requests to the web server for these other Internet media types. As it receives their content from the web server, the browser progressively renders the page onto the screen as specified by its HTML and these additional resources. 
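The browser behaviour described above (DNS lookup, TCP connection to port 80, plain-text GET request) can be sketched directly with sockets. This is a minimal illustration rather than a real browser; the host name and path follow the example.org example used in the text, and real sites may require HTTPS instead of plain HTTP.

import socket

host, path = "example.org", "/home.html"            # values taken from the example above
ip_address = socket.gethostbyname(host)             # DNS lookup to an IP address
print("Resolved", host, "to", ip_address)

with socket.create_connection((ip_address, 80)) as conn:     # port 80 is the well-known HTTP port
    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    conn.sendall(request.encode("ascii"))
    response = b""
    while chunk := conn.recv(4096):                  # read until the server closes the connection
        response += chunk

print(response.decode("utf-8", errors="replace")[:200])      # status line and first headers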
HTML Hypertext Markup Language (HTML) is the standard markup language for creating web pages and web applications. With Cascading Style Sheets (CSS) and JavaScript, it forms a triad of cornerstone technologies for the World Wide Web. Web browsers receive HTML documents from a web server or from local storage and render the documents into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document. HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects such as interactive forms may be embedded into the rendered page. HTML provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets. Tags such as and directly introduce content into the page. Other tags such as surround and provide information about document text and may include other tags as sub-elements. Browsers do not display the HTML tags, but use them to interpret the content of the page. HTML can embed programs written in a scripting language such as JavaScript, which affects the behaviour and content of web pages. Inclusion of CSS defines the look and layout of content. The World Wide Web Consortium (W3C), maintainer of both the HTML and the CSS standards, has encouraged the use of CSS over explicit presentational HTML Linking Most web pages contain hyperlinks to other related pages and perhaps to downloadable files, source documents, definitions and other web resources. In the underlying HTML, a hyperlink looks like this: <a href="http://example.org/home.html">Example.org Homepage</a>. Such a collection of useful, related resources, interconnected via hypertext links is dubbed a web of information. Publication on the Internet created what Tim Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November 1990. The hyperlink structure of the web is described by the webgraph: the nodes of the web graph correspond to the web pages (or URLs) the directed edges between them to the hyperlinks. Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This makes hyperlinks obsolete, a phenomenon referred to in some circles as link rot, and the hyperlinks affected by it are often called "dead" links. The ephemeral nature of the Web has prompted many efforts to archive websites. The Internet Archive, active since 1996, is the best known of such efforts. WWW prefix Many hostnames used for the World Wide Web begin with www because of the long-standing practice of naming Internet hosts according to the services they provide. The hostname of a web server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a Usenet news server. These hostnames appear as Domain Name System (DNS) or subdomain names, as in www.example.com. The use of www is not required by any technical or policy standard and many websites do not use it; the first web server was nxoc01.cern.ch. 
According to Paolo Palazzi, who worked at CERN along with Tim Berners-Lee, the popular use of www as subdomain was accidental; the World Wide Web project page was intended to be published at www.cern.ch while info.cern.ch was intended to be the CERN home page; however the DNS records were never switched, and the practice of prepending www to an institution's website domain name was subsequently copied. Many established websites still use the prefix, or they employ other subdomain names such as www2, secure or en for special purposes. Many such web servers are set up so that both the main domain name (e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the same site; others require one form or the other, or they may map to different web sites. The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since, currently, only a subdomain can be used in a CNAME, the same result cannot be achieved by using the bare domain root. When a user submits an incomplete domain name to a web browser in its address bar input field, some web browsers automatically try adding the prefix "www" to the beginning of it and possibly ".com", ".org" and ".net" at the end, depending on what might be missing. For example, entering "" may be transformed to http://www.microsoft.com/ and "openoffice" to http://www.openoffice.org. This feature started appearing in early versions of Firefox, when it still had the working title 'Firebird' in early 2003, from an earlier practice in browsers such as Lynx. It is reported that Microsoft was granted a US patent for the same idea in 2008, but only for mobile devices. Scheme specifiers The scheme specifiers http:// and https:// at the start of a web URI refer to Hypertext Transfer Protocol or HTTP Secure, respectively. They specify the communication protocol to use for the request and response. The HTTP protocol is fundamental to the operation of the World Wide Web, and the added encryption layer in HTTPS is essential when browsers send or retrieve confidential data, such as passwords or banking information. Web browsers usually automatically prepend http:// to user-entered URIs, if omitted. Pages A web page (also written as webpage) is a document that is suitable for the World Wide Web and web browsers. A web browser displays a web page on a monitor or mobile device. The term web page usually refers to what is visible, but may also refer to the contents of the computer file itself, which is usually a text file containing hypertext written in HTML or a comparable markup language. Typical web pages provide hypertext for browsing to other web pages via hyperlinks, often referred to as links. Web browsers will frequently have to access multiple web resource elements, such as reading style sheets, scripts, and images, while presenting each web page. On a network, a web browser can retrieve a web page from a remote web server. The web server may restrict access to a private network such as a corporate intranet. The web browser uses the Hypertext Transfer Protocol (HTTP) to make such requests to the web server. A static web page is delivered exactly as stored, as web content in the web server's file system. In contrast, a dynamic web page is generated by a web application, usually driven by server-side software. Dynamic web pages are used when each user may require completely different information, for example, bank websites, web email etc. 
Static page A static web page (sometimes called a flat page/stationary page) is a web page that is delivered to the user exactly as stored, in contrast to dynamic web pages which are generated by a web application. Consequently, a static web page displays the same information for all users, from all contexts, subject to modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so. Dynamic pages A server-side dynamic web page is a web page whose construction is controlled by an application server processing server-side scripts. In server-side scripting, parameters determine how the assembly of every new web page proceeds, including the setting up of more client-side processing. A client-side dynamic web page processes the web page using JavaScript running in the browser. JavaScript programs can interact with the document via Document Object Model, or DOM, to query page state and alter it. The same client-side techniques can then dynamically update or change the DOM in the same way. A dynamic web page is then reloaded by the user or by a computer program to change some variable content. The updating information could come from the server, or from changes made to that page's DOM. This may or may not truncate the browsing history or create a saved version to go back to, but a dynamic web page update using Ajax technologies will neither create a page to go back to nor truncate the web browsing history forward of the displayed page. Using Ajax technologies the end user gets one dynamic page managed as a single page in the web browser while the actual web content rendered on that page can vary. The Ajax engine sits only on the browser requesting parts of its DOM, the DOM, for its client, from an application server. Dynamic HTML, or DHTML, is the umbrella term for technologies and methods used to create web pages that are not static web pages, though it has fallen out of common use since the popularization of AJAX, a term which is now itself rarely used. Client-side-scripting, server-side scripting, or a combination of these make for the dynamic web experience in a browser. JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within web pages. The standardised version is ECMAScript. To make web pages more interactive, some web applications also use JavaScript techniques such as Ajax (asynchronous JavaScript and XML). Client-side script is delivered with the page that can make additional HTTP requests to the server, either in response to user actions such as mouse movements or clicks, or based on elapsed time. The server's responses are used to modify the current page rather than creating a new page with each response, so the server needs only to provide limited, incremental information. Multiple Ajax requests can be handled at the same time, and users can interact with the page while data is retrieved. Web pages may also regularly poll the server to check whether new information is available. Website A website is a collection of related web resources including web pages, multimedia content, typically identified with a common domain name, and published on at least one web server. Notable examples are wikipedia.org, google.com, and amazon.com. 
A website may be accessible via a public Internet Protocol (IP) network, such as the Internet, or a private local area network (LAN), by referencing a uniform resource locator (URL) that identifies the site. Websites can have many functions and can be used in various fashions; a website can be a personal website, a corporate website for a company, a government website, an organization website, etc. Websites are typically dedicated to a particular topic or purpose, ranging from entertainment and social networking to providing news and education. All publicly accessible websites collectively constitute the World Wide Web, while private websites, such as a company's website for its employees, are typically a part of an intranet. Web pages, which are the building blocks of websites, are documents, typically composed in plain text interspersed with formatting instructions of Hypertext Markup Language (HTML, XHTML). They may incorporate elements from other websites with suitable markup anchors. Web pages are accessed and transported with the Hypertext Transfer Protocol (HTTP), which may optionally employ encryption (HTTP Secure, HTTPS) to provide security and privacy for the user. The user's application, often a web browser, renders the page content according to its HTML markup instructions onto a display terminal. Hyperlinking between web pages conveys to the reader the site structure and guides the navigation of the site, which often starts with a home page containing a directory of the site web content. Some websites require user registration or subscription to access content. Examples of subscription websites include many business sites, news websites, academic journal websites, gaming websites, file-sharing websites, message boards, web-based email, social networking websites, websites providing real-time price quotations for different types of markets, as well as sites providing various other services. End users can access websites on a range of devices, including desktop and laptop computers, tablet computers, smartphones and smart TVs. Browser A web browser (commonly referred to as a browser) is a software user agent for accessing information on the World Wide Web. To connect to a website's server and display its pages, a user needs to have a web browser program. This is the program that the user runs to download, format, and display a web page on the user's computer. In addition to allowing users to find, display, and move between web pages, a web browser will usually have features like keeping bookmarks, recording history, managing cookies (see below), and home pages and may have facilities for recording passwords for logging into websites. The most popular browsers are Chrome, Safari, Edge, and Firefox. Server A Web server is server software, or hardware dedicated to running said software, that can satisfy World Wide Web client requests. A web server can, in general, contain one or more websites. A web server processes incoming network requests over HTTP and several other related protocols. The primary function of a web server is to store, process and deliver web pages to clients. The communication between client and server takes place using the Hypertext Transfer Protocol (HTTP). Pages delivered are most frequently HTML documents, which may include images, style sheets and scripts in addition to the text content. 
A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP and the server responds with the content of that resource or an error message if unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the webserver is implemented. While the primary function is to serve content, full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files. Many generic web servers also support server-side scripting using Active Server Pages (ASP), PHP (Hypertext Preprocessor), or other scripting languages. This means that the behaviour of the webserver can be scripted in separate files, while the actual server software remains unchanged. Usually, this function is used to generate HTML documents dynamically ("on-the-fly") as opposed to returning static documents. The former is primarily used for retrieving or modifying information from databases. The latter is typically much faster and more easily cached but cannot deliver dynamic content. Web servers can also frequently be found embedded in devices such as printers, routers, webcams and serving only a local network. The web server may then be used as a part of a system for monitoring or administering the device in question. This usually means that no additional software has to be installed on the client computer since only a web browser is required (which now is included with most operating systems). Optical Networking Optical networking is a sophisticated infrastructure that utilizes optical fiber to transmit data over long distances, connecting countries, cities, and even private residences. The technology uses optical microsystems like tunable lasers, filters, attenuators, switches, and wavelength-selective switches to manage and operate these networks. The large quantity of optical fiber installed throughout the world at the end of the twentieth century set the foundation of the Internet as it’s used today. The information highway relies heavily on optical networking, a method of sending messages encoded in light to relay information in various telecommunication networks. The Advanced Research Projects Agency Network (ARPANET) was one of the first iterations of the Internet, created in collaboration with universities and researchers 1969. However, access to the ARPANET was limited to researchers, and in 1985, the National Science Foundation founded the National Science Foundation Network (NSFNET), a program that provided supercomputer access to researchers. Limited public access to the Internet led to pressure from consumers and corporations to privatize the network. In 1993, the US passed the National Information Infrastructure Act, which dictated that the National Science Foundation must hand over control of the optical capabilities to commercial operators. The privatization of the Internet and the release of the World Wide Web to the public in 1993 led to an increased demand for Internet capabilities. This spurred developers to seek solutions to reduce the time and cost of laying new fiber and increase the amount of information that can be sent on a single fiber, in order to meet the growing needs of the public. 
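Returning to the web-server behaviour described earlier, the sketch below shows a server generating an HTML document dynamically ("on-the-fly") for each request, using Python's standard http.server module. The port number and page content are illustrative, and a production site would typically use a full web framework rather than this minimal handler.

from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime

class DynamicPageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Build the HTML for this request rather than reading a static file.
        body = f"<html><body><p>Generated at {datetime.now().isoformat()}</p></body></html>"
        data = body.encode("utf-8")
        self.send_response(200)                                   # HTTP/1.1 200 OK
        self.send_header("Content-Type", "text/html; charset=UTF-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DynamicPageHandler).serve_forever()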
Cookie An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie) is a small piece of data sent from a website and stored on the user's computer by the user's web browser while the user is browsing. Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items added in the shopping cart in an online store) or to record the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to remember arbitrary pieces of information that the user previously entered into form fields such as names, addresses, passwords, and credit card numbers. Cookies perform essential functions in the modern web. Perhaps most importantly, authentication cookies are the most common method used by web servers to know whether the user is logged in or not, and which account they are logged in with. Without such a mechanism, the site would not know whether to send a page containing sensitive information or require the user to authenticate themselves by logging in. The security of an authentication cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by a hacker, used to gain access to user data, or used to gain access (with the user's credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site request forgery for examples). Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals' browsing histories a potential privacy concern that prompted European and U.S. lawmakers to take action in 2011. European law requires that all websites targeting European Union member states gain "informed consent" from users before storing non-essential cookies on their device. Google Project Zero researcher Jann Horn describes ways cookies can be read by intermediaries, like Wi-Fi hotspot providers. When in such circumstances, he recommends using the browser in private browsing mode (widely known as Incognito mode in Google Chrome). Search engine A web search engine or Internet search engine is a software system that is designed to carry out web search (Internet search), which means to search the World Wide Web in a systematic way for particular information specified in a web search query. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of web pages, images, videos, infographics, articles, research papers, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler. Internet content that is not capable of being searched by a web search engine is generally described as the deep web. Deep web The deep web, invisible web, or hidden web are parts of the World Wide Web whose contents are not indexed by standard web search engines. The opposite term to the deep web is the surface web, which is accessible to anyone using the Internet. Computer scientist Michael K. Bergman is credited with coining the term deep web in 2001 as a search indexing term. 
The content of the deep web is hidden behind HTTP forms, and includes many very common uses such as web mail, online banking, and services that users must pay for, and which are protected by a paywall, such as video on demand, some online magazines and newspapers, among others. The content of the deep web can be located and accessed by a direct URL or IP address and may require a password or other security access past the public website page. Caching A web cache is a server computer located either on the public Internet or within an enterprise that stores recently accessed web pages to improve response time for users when the same content is requested within a certain time after the original request. Most web browsers also implement a browser cache by writing recently obtained data to a local data storage device. HTTP requests by a browser may ask only for data that has changed since the last access. Web pages and resources may contain expiration information to control caching to secure sensitive data, such as in online banking, or to facilitate frequently updated sites, such as news media. Even sites with highly dynamic content may permit basic resources to be refreshed only occasionally. Web site designers find it worthwhile to collate resources such as CSS data and JavaScript into a few site-wide files so that they can be cached efficiently. Enterprise firewalls often cache Web resources requested by one user for the benefit of many users. Some search engines store cached content of frequently accessed websites. Security For criminals, the Web has become a venue to spread malware and engage in a range of cybercrime, including (but not limited to) identity theft, fraud, espionage, and intelligence gathering. Web-based vulnerabilities now outnumber traditional computer security concerns, and as measured by Google, about one in ten web pages may contain malicious code. Most web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia. The most common of all malware threats are SQL injection attacks against websites. Through HTML and URIs, the Web was vulnerable to attacks like cross-site scripting (XSS) that came with the introduction of JavaScript and were exacerbated to some degree by Web 2.0 and Ajax web design that favours the use of scripts. In one 2007 estimate, 70% of all websites were open to XSS attacks on their users. Phishing is another common threat to the Web. In February 2013, RSA (the security division of EMC) estimated the global losses from phishing at $1.5 billion in 2012. Two of the well-known phishing methods are Covert Redirect and Open Redirect. Proposed solutions vary. Large security companies like McAfee already design governance and compliance suites to meet post-9/11 regulations, and some, like Finjan Holdings, have recommended active real-time inspection of programming code and all content regardless of its source. Some have argued that enterprises should see Web security as a business opportunity rather than a cost centre, while others call for "ubiquitous, always-on digital rights management" enforced in the infrastructure to replace the hundreds of companies that secure data and networks. Jonathan Zittrain has said users sharing responsibility for computing safety is far preferable to locking down the Internet. Privacy Every time a client requests a web page, the server can identify the request's IP address. Web servers usually log IP addresses in a log file. 
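As a rough illustration of such logging, the sketch below scans an access log for client IP addresses. It assumes the widely used "combined" log format, in which the client address is the first whitespace-separated field; the format and the file name are assumptions for the example, not something the text above specifies.

```python
# Hedged sketch: counting client IP addresses in a web server access log,
# assuming the common "combined" format where the IP is the first field.
from collections import Counter

def ip_counts(log_path: str) -> Counter:
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            fields = line.split()
            if fields:                 # skip blank lines
                counts[fields[0]] += 1  # client IP is the first field
    return counts

if __name__ == "__main__":
    # "access.log" is a hypothetical path; substitute a real log file.
    for ip, n in ip_counts("access.log").most_common(10):
        print(ip, n)
```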
Also, unless set not to do so, most web browsers record requested web pages in a viewable history feature, and usually cache much of the content locally. Unless the server-browser communication uses HTTPS encryption, web requests and responses travel in plain text across the Internet and can be viewed, recorded, and cached by intermediate systems. Another way to hide personally identifiable information is by using a virtual private network. A VPN encrypts traffic between the client and the VPN server, and masks the original IP address, lowering the chance of user identification. When a web page asks for, and the user supplies, personally identifiable information, such as their real name, address, e-mail address, etc., web-based entities can associate current web traffic with that individual. If the website uses HTTP cookies, username, and password authentication, or other tracking techniques, it can relate other web visits, before and after, to the identifiable information provided. In this way, a web-based organization can develop and build a profile of the individual people who use its site or sites. It may be able to build a record for an individual that includes information about their leisure activities, their shopping interests, their profession, and other aspects of their demographic profile. These profiles are of potential interest to marketers, advertisers, and others. Depending on the website's terms and conditions and the local laws that apply, information from these profiles may be sold, shared, or passed to other organizations without the user being informed. For many ordinary people, this means little more than some unexpected emails in their inbox or some uncannily relevant advertising on a future web page. For others, it can mean that time spent indulging an unusual interest can result in a deluge of further targeted marketing that may be unwelcome. Law enforcement, counterterrorism, and espionage agencies can also identify, target, and track individuals based on their interests or proclivities on the Web. Social networking sites usually try to get users to use their real names, interests, and locations, rather than pseudonyms, as their executives believe that this makes the social networking experience more engaging for users. On the other hand, uploaded photographs or unguarded statements can be identified to an individual, who may regret this exposure. Employers, schools, parents, and other relatives may be influenced by aspects of social networking profiles, such as text posts or digital photos, that the posting individual did not intend for these audiences. Online bullies may make use of personal information to harass or stalk users. Modern social networking websites allow fine-grained control of the privacy settings for each posting, but these can be complex and not easy to find or use, especially for beginners. Photographs and videos posted onto websites have caused particular problems, as they can add a person's face to an online profile. With modern and potential facial recognition technology, it may then be possible to relate that face with other, previously anonymous, images, events, and scenarios that have been imaged elsewhere. Due to image caching, mirroring, and copying, it is difficult to remove an image from the World Wide Web. Standards Web standards include many interdependent standards and specifications, some of which govern aspects of the Internet, not just the World Wide Web. 
Even when not web-focused, such standards directly or indirectly affect the development and administration of websites and web services. Considerations include the interoperability, accessibility and usability of web pages and web sites. Web standards, in the broader sense, consist of the following: Recommendations published by the World Wide Web Consortium (W3C) "Living Standard" made by the Web Hypertext Application Technology Working Group (WHATWG) Request for Comments (RFC) documents published by the Internet Engineering Task Force (IETF) Standards published by the International Organization for Standardization (ISO) Standards published by Ecma International (formerly ECMA) The Unicode Standard and various Unicode Technical Reports (UTRs) published by the Unicode Consortium Name and number registries maintained by the Internet Assigned Numbers Authority (IANA) Web standards are not fixed sets of rules but are constantly evolving sets of finalized technical specifications of web technologies. Web standards are developed by standards organizations—groups of interested and often competing parties chartered with the task of standardization—not technologies developed and declared to be a standard by a single individual or company. It is crucial to distinguish those specifications that are under development from the ones that already reached the final development status (in the case of W3C specifications, the highest maturity level). Accessibility There are methods for accessing the Web in alternative mediums and formats to facilitate use by individuals with disabilities. These disabilities may be visual, auditory, physical, speech-related, cognitive, neurological, or some combination. Accessibility features also help people with temporary disabilities, like a broken arm, or ageing users as their abilities change. The Web is used for receiving information as well as for providing information and interacting with society. The World Wide Web Consortium claims that it is essential that the Web be accessible, so it can provide equal access and equal opportunity to people with disabilities. Tim Berners-Lee once noted, "The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect." Many countries regulate web accessibility as a requirement for websites. International co-operation in the W3C Web Accessibility Initiative led to simple guidelines that web content authors as well as software developers can use to make the Web accessible to persons who may or may not be using assistive technology. Internationalisation The W3C Internationalisation Activity assures that web technology works in all languages, scripts, and cultures. Beginning in 2004 or 2005, Unicode gained ground and eventually in December 2007 surpassed both ASCII and Western European encodings as the Web's most frequently used character encoding. Originally, a resource could be identified by a URI using only characters from a subset of US-ASCII; the Internationalized Resource Identifier (IRI) allows more characters—any character in the Universal Character Set—and a resource can now be identified by an IRI in any language.
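As a simplified illustration of that mapping, the sketch below converts an IRI containing non-ASCII characters into a URI by applying IDNA encoding to the hostname and percent-encoding to the path and query. It is only an approximation of the full rules of the relevant specifications, and the example address is made up.

```python
# Simplified sketch of mapping an IRI with non-ASCII characters to a URI:
# the hostname is converted with IDNA, path/query/fragment are percent-encoded
# as UTF-8. Illustrative only; it does not implement every rule of the specs.
from urllib.parse import urlsplit, urlunsplit, quote

def iri_to_uri(iri: str) -> str:
    parts = urlsplit(iri)
    host = parts.hostname.encode("idna").decode("ascii") if parts.hostname else ""
    netloc = host + (f":{parts.port}" if parts.port else "")
    path = quote(parts.path, safe="/%")        # keep "/" and existing "%XX" escapes
    query = quote(parts.query, safe="=&%")     # keep key=value&key=value structure
    fragment = quote(parts.fragment, safe="%")
    return urlunsplit((parts.scheme, netloc, path, query, fragment))

if __name__ == "__main__":
    # Hypothetical IRI used purely as an example.
    print(iri_to_uri("https://例え.テスト/ページ?q=日本語"))
```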
Technology
Internet
null
33173
https://en.wikipedia.org/wiki/Web%20browser
Web browser
A web browser is an application for accessing websites. When a user requests a web page from a particular website, the browser retrieves its files from a web server and then displays the page on the user's screen. Browsers are used on a range of devices, including desktops, laptops, tablets, and smartphones. By 2020, an estimated 4.9 billion people had used a browser. The most-used browser is Google Chrome, with a 67% global market share on all devices, followed by Safari with 18%. A web browser is not the same thing as a search engine, though the two are often confused. A search engine is a website that provides links to other websites. However, to connect to a website's server and display its web pages, a user must have a web browser installed. In some technical contexts, browsers are referred to as user agents. Function The purpose of a web browser is to fetch content and display it on the user's device. This process begins when the user inputs a Uniform Resource Locator (URL), such as https://en.wikipedia.org/, into the browser. Virtually all URLs on the Web start with either http: or https: which means they are retrieved with the Hypertext Transfer Protocol (HTTP). For secure mode (HTTPS), the connection between the browser and web server is encrypted, providing a secure and private data transfer. Web pages usually contain hyperlinks to other pages and resources. Each link contains a URL, and when it is clicked or tapped, the browser navigates to the new resource. Most browsers use an internal cache of web page resources to improve loading times for subsequent visits to the same page. The cache can store many items, such as large images, so they do not need to be downloaded from the server again. Cached items are usually only stored for as long as the web server stipulates in its HTTP response messages. Privacy During the course of browsing, cookies received from various websites are stored by the browser. Some of them contain login credentials or site preferences. However, others are used for tracking user behavior over long periods of time, so browsers typically provide a section in the menu for deleting cookies. Finer-grained management of cookies usually requires a browser extension. History The first web browser, called WorldWideWeb, was created in 1990 by Sir Tim Berners-Lee. He then recruited Nicola Pellow to write the Line Mode Browser, which displayed web pages on dumb terminals. The Mosaic web browser was released in April 1993, and was later credited as the first web browser to find mainstream popularity. Its innovative graphical user interface made the World Wide Web easy to navigate and thus more accessible to the average person. This, in turn, sparked the Internet boom of the 1990s, when the Web grew at a very rapid rate. The lead developers of Mosaic then founded the Netscape corporation, which released the Mosaic-influenced Netscape Navigator in 1994. Navigator quickly became the most popular browser. Microsoft debuted Internet Explorer in 1995, leading to a browser war with Netscape. Within a few years, Microsoft gained a dominant position in the browser market for two reasons: it bundled Internet Explorer with its popular Windows operating system and did so as freeware with no restrictions on usage. The market share of Internet Explorer peaked at over 95% in the early 2000s. In 1998, Netscape launched what would become the Mozilla Foundation to create a new browser using the open-source software model. 
This work evolved into the Firefox browser, first released by Mozilla in 2004. Firefox's market share peaked at 32% in 2010. Apple released its Safari browser in 2003; it remains the dominant browser on Apple devices, though it did not become popular elsewhere. Google debuted its Chrome browser in 2008, which steadily took market share from Internet Explorer and became the most popular browser in 2012. Chrome has remained dominant ever since. In 2015, Microsoft replaced Internet Explorer with Edge [Legacy] for the Windows 10 release. Since the early 2000s, browsers have greatly expanded their HTML, CSS, JavaScript, and multimedia capabilities. One reason has been to enable more sophisticated websites, such as web apps. Another factor is the significant increase of broadband connectivity in many parts of the world, enabling people to access data-intensive content, such as streaming HD video on YouTube, that was not possible during the era of dial-up modems. Browser market Google Chrome has been the dominant browser since the mid-2010s and currently has a 67% global market share on all devices. The vast majority of its source code comes from Google's open-source Chromium project; this code is also the basis for many other browsers, including Microsoft Edge, currently in third place with about a 5% share, as well as Samsung Internet and Opera in fifth and sixth places respectively with over 2% market share each. The other two browsers in the top four are made from different codebases. Safari, based on Apple's WebKit code, is the second most popular web browser and is dominant on Apple devices, resulting in an 18% global share. Firefox, in fourth place, with about 3% market share, is based on Mozilla's code. Both of these codebases are open-source, so a number of small niche browsers are also made from them. Features The most popular browsers share many features in common. They automatically log users' browsing history, unless the users turn off their browsing history or use the non-logging private mode. They also allow users to set bookmarks, customize the browser with extensions, and manage user passwords. Some provide a sync service and web accessibility features. Common user interface (UI) features: Allowing the user to have multiple pages open at the same time, either in different browser windows or in different tabs of the same window. Back and forward buttons to go back to the previous page visited or forward to the next one. A refresh or reload and a stop button to reload and cancel loading the current page. (In most browsers, the stop button is merged with the reload button.) A home button to return to the start page. An address bar to input the URL of a page and display it, and a search bar to input queries into a search engine. (In most browsers, the search bar is merged with the address bar.) While mobile browsers have similar UI features to desktop versions, the limitations of touch screens require mobile UIs to be simpler. The difference is significant for users accustomed to keyboard shortcuts. The most popular desktop browsers also have sophisticated web development tools. Security Web browsers are popular targets for hackers, who exploit security holes to steal information, destroy files, and carry out other malicious activities. Browser vendors regularly patch these security holes, so users are strongly encouraged to keep their browser software updated. Other protective measures include using antivirus software and being aware of scams.
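The cache behaviour described in the Function section above can be sketched as a conditional request: the client keeps the validator (here an ETag) from a first response and presents it on a later request, and a 304 Not Modified reply means the cached copy can be reused. The URL is a placeholder, and the snippet is an illustration of the general HTTP mechanism, not of how any particular browser is implemented.

```python
# Hedged sketch of browser-style cache revalidation using an HTTP validator.
# The URL is a placeholder; real browsers do this transparently for cached items.
import urllib.request
import urllib.error

URL = "https://example.org/"  # placeholder resource

# First request: fetch the resource and remember its ETag validator, if any.
with urllib.request.urlopen(URL) as resp:
    body = resp.read()
    etag = resp.headers.get("ETag")

# Later request: revalidate instead of re-downloading, if a validator exists.
if etag:
    req = urllib.request.Request(URL, headers={"If-None-Match": etag})
    try:
        with urllib.request.urlopen(req) as resp:
            body = resp.read()    # 200: resource changed, use the new copy
    except urllib.error.HTTPError as err:
        if err.code == 304:
            pass                  # 304 Not Modified: cached copy is still valid
        else:
            raise
```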
Technology
Computer software
null
33282
https://en.wikipedia.org/wiki/Coregonus
Coregonus
Coregonus is a diverse genus of fish in the salmon family (Salmonidae). The Coregonus species are known as whitefishes. The genus contains at least 68 described extant taxa, but the true number of species is a matter of debate. The type species of the genus is Coregonus lavaretus. Most Coregonus species inhabit lakes and rivers, and several species, including the Arctic cisco (C. autumnalis), the Bering cisco (C. laurettae), and the least cisco (C. sardinella) are anadromous, moving between salt water and fresh water. Many whitefish species or ecotypes, especially from the Great Lakes and the Alpine lakes of Europe, have gone extinct over the past century or are endangered. Among 12 freshwater fish considered extinct in Europe, 6 are Coregonus. All Coregonus species are protected under appendix III of the Bern Convention, and Annex IV of the EC Habitats Directive (92/43/EEC) Taxonomy Phylogenetic evidence indicates that the most basal member of the genus is the highly endangered Atlantic whitefish (C. huntsmani), which is endemic to a single river basin in Nova Scotia, Canada. The Atlantic whitefish is thought to have diverged from the rest of the genus during the mid-Miocene, about 15 million years ago. The genus was previously subdivided into two subgenera Coregonus ("true whitefishes") and Leucichthys ("ciscoes"), Coregonus comprising taxa with sub-terminal mouth and usually a benthic feeding habit, Leucichthys those with terminal or supra-terminal mouth and usually a pelagic plankton-feeding habit. This classification is not natural however: based on molecular data, ciscoes comprise two distinct lineages within the genus. Moreover, the genus Stenodus is not phylogenetically distinct from Coregonus; although Stenodus occupies a basal position within the genus, phylogenetic evidence indicates that C. huntsmani is even more basal than it. The scientific name given to this genus of fish comes from the Greek κόρη (kórē) "daughter ; eye pupil" and γωνία (gōnía) "angle", because their pupil makes an angle, even though they share this feature with a large number of other fish. Species diversity There is much uncertainty and confusion in the classification of the many species of this genus. Particularly, one extreme view of diversity recognises just two main species in Northern and Central Europe, the common whitefish C. lavaretus and the vendace C. albula, whereas others would divide these into numerous, often narrowly distributed species. A drastic increase in number of recognized species occurred in 2007, when a review advocated that more than 50 local European populations should be considered as distinct based on morphological differences. It has been estimated that several of them are very young, having separated from each other less than 15,000 years ago. Many of these were primarily defined based on number of gill rakers. Although this largely is hereditary, the number is highly variable (even within single populations and species), can change relatively fast in response to changes and genetic studies have shown that they often are of limited use in predicting relationships among populations (a large difference in gill raker number does not necessarily equal a distant relationship). Genetic differences between several of the recently proposed species, even ones that are relatively distinct morphologically, are very limited and sometimes they are not monophyletic. Various Coregonus, whether regarded as separate species or not, readily interbreed with each other. 
A review of whitefish in the United Kingdom found that the identification key provided in 2007 did not match most individuals and that solid evidence for more than one species in that region is lacking. Many European lakes have more than one Coregonus morph differing in ecology and morphology (especially gill rakers). Such morphs are sometimes partially reproductively isolated from each other, leading to suggestions of recognizing them as separate but clinal species. The morphs or clinal species may rapidly disappear (in 15 years or less, equalling three Coregonus generations) by merging into a single in response to changes in the habitat. A similar pattern can be seen in North America where the ciscoes of the Coregonus artedi complex in the Great Lakes and elsewhere comprise several, often co-occurring morphs or ecotypes, whose taxonomic status remains controversial. Species In 2017, FishBase listed 78 species, including the more than 50 proposed for Europe in 2007. Some of these are recently extinct (marked with a dagger, "†") and C. reighardi is likely extinct. Coregonus acrinasus Oliver M. Selz, Carmela J. Dönz, Pascal Vonlanthen, Ole Seehausen, 2020 Coregonus albellus Fatio, 1890 (autumn brienzlig) Coregonus albula Linnaeus, 1758 (vendace) †Coregonus alpenae (Koelz, 1924) (longjaw cisco) Coregonus alpinus Fatio, 1885 (kropfer) Coregonus anaulorum Chereshnev, 1996 Coregonus arenicolus Kottelat, 1997 Coregonus artedi Lesueur, 1818 (northern cisco or lake herring) Coregonus atterensis Kottelat, 1997 Coregonus austriacus C. C. Vogt, 1909 Coregonus autumnalis (Pallas, 1776) (Arctic cisco) Coregonus baerii Kessler, 1864 Coregonus baicalensis Dybowski, 1874 Coregonus baunti Mukhomediyarov, 1948 Coregonus bavaricus Hofer, 1909 †Coregonus bezola Fatio, 1888 (bezoule) Coregonus brienzii Oliver M. Selz, Carmela J. Dönz, Pascal Vonlanthen, Ole Seehausen, 2020 Coregonus candidus Goll, 1883 Coregonus chadary Dybowski, 1869 (Khadary whitefish) Coregonus clupeaformis (Mitchill, 1818) (lake whitefish) Coregonus clupeoides Lacépède, 1803 (powan) Coregonus confusus Fatio, 1885 Coregonus danneri C. C. Vogt, 1908 Coregonus duplex Fatio, 1890 Coregonus fatioi Kottelat, 1997 †Coregonus fera Jurine, 1825 (fera) Coregonus fontanae M. Schulz & Freyhof, 2003 (Stechlin cisco) †Coregonus gutturosus (C. C. Gmelin (de), 1818) Coregonus heglingus Schinz, 1822 †Coregonus hiemalis Jurine, 1825 (gravenche) Coregonus hoferi L. S. Berg, 1932 Coregonus holsata Thienemann, 1916 Coregonus hoyi (Milner, 1874) (bloater) Coregonus huntsmani W. B. Scott, 1987 (Atlantic whitefish) †Coregonus johannae (G. Wagner, 1910) (deepwater cisco) Coregonus kiletz Michailovsky, 1903 Coregonus kiyi (Koelz, 1921) (kiyi) Coregonus ladogae Pravdin, Golubev & Belyaeva, 1938 Coregonus laurettae T. H. Bean, 1881 (Bering cisco) Coregonus lavaretus Linnaeus, 1758 (common whitefish, European whitefish; lavaret) Coregonus lucinensis Thienemann, 1933 Coregonus lutokka Kottelat, Bogutskaya & Freyhof, 2005 Coregonus macrophthalmus Nüsslin, 1882 Coregonus maraena (Bloch, 1779) (maraena whitefish) Coregonus maraenoides L. S. Berg, 1916 Coregonus maxillaris Günther, 1866 Coregonus megalops Widegren, 1863 (lacustrine fluvial whitefish) Coregonus migratorius (Georgi, 1775) (omul) Coregonus muksun (Pallas, 1814) (muksun) Coregonus nasus (Pallas, 1776) (broad whitefish) Coregonus nelsonii T. H. 
Bean, 1884 (Alaska whitefish) †Coregonus nigripinnis (Milner, 1874) (blackfin cisco) Coregonus nilssoni Valenciennes, 1848 Coregonus nipigon (Koelz, 1925) Coregonus nobilis Haack, 1882 †Coregonus oxyrinchus Linnaeus, 1758 (houting) Coregonus palaea G. Cuvier, 1829 Coregonus pallasii Valenciennes, 1848 Coregonus peled (J. F. Gmelin, 1789) (peled) Coregonus pennantii Valenciennes, 1848 (gwyniad) Coregonus pidschian (J. F. Gmelin, 1789) (humpback whitefish) Coregonus pollan W. Thompson, 1835 (Irish pollan) Coregonus pravdinellus Dulkeit, 1949 Coregonus profundus Oliver M. Selz, Carmela J. Dönz, Pascal Vonlanthen, Ole Seehausen, 2020 Coregonus reighardi (Koelz, 1924) (shortnose cisco) Coregonus renke (Schrank, 1783) †Coregonus restrictus Fatio, 1885 Coregonus sardinella Valenciennes, 1848 (Sardine cisco) Coregonus steinmanni Oliver M. Selz, Carmela J. Dönz, Pascal Vonlanthen, Ole Seehausen, 2020 Coregonus stigmaticus Regan, 1908 (schelly) Coregonus subautumnalis Kaganowsky, 1932 Coregonus suidteri Fatio, 1885 Coregonus trybomi Svärdson (sv), 1979 Coregonus tugun (Pallas, 1814) Coregonus ussuriensis L. S. Berg, 1906 (Amur whitefish) Coregonus vandesius J. Richardson, 1836 (vendace) Coregonus vessicus Dryagin, 1932 Coregonus wartmanni (Bloch, 1784) Coregonus widegreni Malmgren, 1863 (Valaam whitefish) Coregonus zenithicus (D. S. Jordan & Evermann, 1909) (shortjaw cisco) Coregonus zuerichensis Nüsslin, 1882 Coregonus zugensis Nüsslin, 1882
Biology and health sciences
Salmoniformes
Animals
33303
https://en.wikipedia.org/wiki/Wankel%20engine
Wankel engine
The Wankel engine (, ) is a type of internal combustion engine using an eccentric rotary design to convert pressure into rotating motion. The concept was proven by German engineer Felix Wankel, followed by a commercially feasible engine designed by German engineer Hanns-Dieter Paschke. The Wankel engine's rotor, which creates the turning motion, is similar in shape to a Reuleaux triangle, with the sides having less curvature. The rotor spins inside a figure-eight-like epitrochoidal housing around a fixed-toothed gearing. The midpoint of the rotor moves in a circle around the output shaft, rotating the shaft via a cam. In its basic gasoline fuelled form, the Wankel engine has lower thermal efficiency and higher exhaust emissions relative to the four-stroke reciprocating piston engine. The thermal inefficiency has restricted the engine to limited use since its introduction in the 1960s. However, many disadvantages have mainly been overcome over the succeeding decades as the production of road-going vehicles progressed. The advantages of compact design, smoothness, lower weight, and fewer parts over the reciprocating piston internal combustion engines make the Wankel engine suited for applications such as chainsaws, auxiliary power units (APUs), loitering munitions, aircraft, jet skis, snowmobiles, and range extenders in cars. The Wankel engine was also used to power motorcycles and racing cars. Concept The Wankel engine is a type of rotary piston engine and exists in two primary forms, the Drehkolbenmotor (DKM, "rotary piston engine"), designed by Felix Wankel (see Figure 2.) and the Kreiskolbenmotor (KKM, "circuitous piston engine"), designed by Hanns-Dieter Paschke (see Figure 3.), of which only the latter has left the prototype stage. Thus, all production Wankel engines are of the KKM type. In a DKM engine, there are two rotors: the inner, trochoid-shaped rotor, and the outer rotor, which has an outer circular shape, and an inner figure eight shape. The center shaft is stationary, and torque is taken off the outer rotor, which is geared to the inner rotor. In a KKM engine, the outer rotor is part of the stationary housing (thus not a moving part). The inner shaft is a moving part with an eccentric lobe for the inner rotor to spin around. The rotor spins around its center and around the axis of the eccentric shaft in a hula hoop fashion, resulting in the rotor making one complete revolution for every three revolutions of the eccentric shaft. In the KKM engine, torque is taken off the eccentric shaft, making it a much simpler design to be adopted to conventional powertrains. Wankel engine development Felix Wankel designed a rotary compressor in the 1920s, and received his first patent for a rotary type of engine in 1934. He realized that the triangular rotor of the rotary compressor could have intake and exhaust ports added producing an internal combustion engine. Eventually, in 1951, Wankel began working at German firm NSU Motorenwerke to design a rotary compressor as a supercharger for NSU's motorcycle engines. Wankel conceived the design of a triangular rotor in the compressor. With the assistance of Prof. from Stuttgart University of Applied Sciences, the concept was defined mathematically. The supercharger he designed was used for one of NSU's 50 cm3 one-cylinder two-stroke engines. The engine produced a power output of at 12,000rpm. 
In 1954, NSU agreed to develop a rotary internal combustion engine with Felix Wankel, based upon Wankel's supercharger design for their motorcycle engines. Since Wankel was known as a "difficult colleague", the development work for the DKM was carried out at Wankel's private Lindau design bureau. According to John B. Hege, Wankel received help from his friend Ernst Höppner, who was a "brilliant engineer". The first working prototype, DKM 54 (see figure 2.), first ran on 1 February 1957, at the NSU research and development department Versuchsabteilung TX. It produced . Soon after that, a second prototype of the DKM was built. It had a working chamber volume Vk of 125 cm3 and also produced at 17,000rpm. It could even reach speeds of up to 25,000rpm. However, these engine speeds distorted the outer rotor's shape, thus proving impractical. According to Mazda Motors engineers and historians, four units of the DKM engine were built; the design is described to have a displacement Vh of 250 cm3 (equivalent to a working chamber volume Vk of 125 cm3). The fourth unit built is said to have received several design changes, and eventually produced at 17,000 rpm; it could reach speeds up to 22,000 rpm. One of the four engines built has been on static display at the Deutsches Museum Bonn (see figure. 2). Due to its complicated design with a stationary center shaft, the DKM engine was impractical. Wolf-Dieter Bensinger explicitly mentions that proper engine cooling cannot be achieved in a DKM engine, and argues that this is the reason why the DKM design had to be abandoned. NSU development chief engineer Walter Froede solved this problem by using Hanns-Dieter Paschke's design and converting the DKM into what would later be known as the KKM (see figure 5.). The KKM proved to be a much more practical engine, as it has easily accessible spark plugs, a simpler cooling design, and a conventional power take-off shaft. Wankel disliked Froede's KKM engine because of its inner rotor's eccentric motion, which was not a pure circular motion, as Wankel had intended. He remarked that his "race horse" was turned into a "plough horse". Wankel also complained that more stresses would be placed on the KKM's apex seals due to the eccentric hula-hoop motion of the rotor. NSU could not afford to finance developing both the DKM and the KKM, and eventually decided to drop the DKM in favor of the KKM, because the latter seemed to be the more practical design. Wankel obtained the US patent 2,988,065 on the KKM engine on 13 June 1961. Throughout the design phase of the KKM, Froede's engineering team had to solve problems such as repeated bearing seizures, the oil flow inside the engine, and the engine cooling. The first fully functioning KKM engine, the KKM 125, weighing in at only displaced 125 cm3 and produced at 11,000rpm. Its first run was on 1 July 1958. In 1963, NSU produced the first series-production Wankel engine for a car, the KKM 502 (see Figure 6.). It was used in the NSU Spider sports car, of which about 2,000 were made. Despite its "teething troubles", the KKM 502 was a powerful engine with decent potential, smooth operation, and low noise emissions at high engine speeds. It was a single-rotor PP engine with a displacement of , a rated power of at 6,000rpm and a BMEP of . 
Operation and design The Wankel engine has a spinning eccentric power take-off shaft, with a rotary piston riding on eccentrics on the shaft in a hula-hoop fashion, and with a crown gear having one and a half times the number of teeth as on the eccentric shaft. Thus the Wankel is a 2:3 type of rotary engine, i.e., its housing's inner side resembles a two-lobed, oval-like epitrochoid (equivalent to a peritrochoid). In contrast, its rotary piston has a three-vertex trochoid shape (similar to a Reuleaux triangle). Thus, the Wankel engine's rotor constantly forms three moving working chambers. The Wankel engine's basic geometry is depicted in figure 7. Seals at the rotor's apices seal against the housing's periphery. The rotor moves in its rotating motion guided by gears and the eccentric output shaft, not being guided by the external chamber. The rotor does not make contact with the external engine housing. The force of expanded gas pressure on the rotor exerts pressure on the center of the eccentric part of the output shaft. All practical Wankel engines are four-cycle (i.e., four-stroke) engines. In theory, two-cycle engines are possible, but they are impractical because the intake gas and the exhaust gas cannot be properly separated. The operating principle is similar to the Otto operating principle; the Diesel operating principle with its compression ignition cannot be used in a practical Wankel engine. Therefore, Wankel engines typically have a high-voltage spark ignition system. In a Wankel engine, one side of the triangular rotor completes the four-stage Otto cycle of intake, compression, expansion, and exhaust in each revolution of the rotor (equivalent to three shaft revolutions, see Figure 8.). The shape of the rotor between the fixed apexes is designed to minimize the volume of the geometric combustion chamber and to maximize the compression ratio. As the rotor has three sides, this gives three power pulses per revolution of the rotor. Wankel engines have a much lower degree of irregularity relative to a reciprocating piston engine, making the Wankel engine run much more smoothly. This is because the Wankel engine has a lower moment of inertia and less excess torque area due to its more uniform torque delivery. For example, a two-rotor Wankel engine runs more than twice as smoothly as a four-cylinder piston engine. The eccentric output shaft of a Wankel engine also does not have the stress-related contours of a reciprocating piston engine's crankshaft. The maximum revolutions of a Wankel engine are thus mainly limited by tooth load on the synchronizing gears. Hardened steel gears are used for extended operation above 7,000 or 8,000rpm. In practice, automotive Wankel engines are not operated at much higher output shaft speeds than reciprocating piston engines of similar output power. Wankel engines in auto racing are operated at speeds up to 10,000rpm, but so are four-stroke reciprocating piston engines with relatively small displacement per cylinder. In aircraft, they are used conservatively, up to 6500 or 7500rpm. Chamber volume In a Wankel rotary engine, the chamber volume is equivalent to the product of the rotor surface and the rotor path . The rotor surface is given by the rotor tips' path across the rotor housing and determined by the generating radius , the rotor width , and the parallel transfers of the rotor and the inner housing . Since the rotor has a trochoid ("triangular") shape, the sine of 60 degrees describes the interval at which the rotors get closest to the rotor housing. 
Therefore, The rotor path may be integrated via the eccentricity as follows: Therefore, For convenience, may be omitted because it is difficult to determine and small: A different approach to this is introducing as the farthest, and as the shortest parallel transfer of the rotor and the inner housing and assuming that and . Then, Including the parallel transfers of the rotor and the inner housing provides sufficient accuracy for determining chamber volume. Equivalent displacement and power output Different approaches have been used over time to evaluate the total displacement of a Wankel engine in relation to a reciprocating engine: considering only one, two, or all three chambers. Part of this dispute was because of Europe vehicle taxation being dependent on engine displacement, as reported by Karl Ludvigsen. If is the number of chambers considered for each rotor and the number of rotors, then the total displacement is: If is the mean effective pressure, the shaft rotational speed and the number of shaft revolutions needed to complete a cycle ( is the frequency of the thermodynamic cycle), then the total power output is: Considering one chamber Kenichi Yamamoto and Walter G. Froede placed and : With these values, a single-rotor Wankel engine produces the same average power as a single-cylinder two-stroke engine, with the same average torque, with the shaft running at the same speed, operating the Otto cycles at triple the frequency. Considering two chambers Richard Franz Ansdale, Wolf-Dieter Bensinger and Felix Wankel based their analogy on the number of cumulative expansion strokes per shaft revolution. In a Wankel rotary engine, the eccentric shaft must make three full rotations (1080°) per combustion chamber to complete all four phases of a four-stroke engine. Since a Wankel rotary engine has three combustion chambers, all four phases of a four-stroke engine are completed within one full rotation of the eccentric shaft (360°), and one power pulse is produced at each revolution of the shaft. This is different from a four-stroke piston engine, which needs to make two full rotations per combustion chamber to complete all four phases of a four-stroke engine. Thus, in a Wankel rotary engine, according to Bensinger, displacement () is: If power is to be derived from BMEP, the four-stroke engine formula applies: Considering three chambers Eugen Wilhelm Huber, and Karl-Heinz Küttner counted all the chambers, since each one operates its own thermodynamic cycle. So and : With these values, a single-rotor Wankel engine produces the same average power as a three-cylinder four-stroke engine, with 3/2 of the average torque, with the shaft running at 2/3 the speed, operating the Otto cycles at the same frequency: Applying a 2/3 gear set to the output shaft of the three-cylinder (or a 3/2 one to the Wankel), the two are analogous from the thermodynamic and mechanical output point of view, as pointed out by Huber. Examples (counting two chambers) KKM 612 (NSU Ro80) e=14 mm R=100 mm a=2 mm B=67 mm i=2 Mazda 13B-REW (Mazda RX-7) e=15 mm R=103 mm a=2 mm B=80 mm i=2 Licenses issued NSU licensed the Wankel engine design to companies worldwide, in various forms, with many companies implementing continual improvements. In his 1973 book Rotationskolben-Verbrennungsmotoren, German engineer Wolf-Dieter Bensinger describes the following licensees, in chronological order, which is confirmed by John B. 
Hege: Curtiss-Wright: All types of engines, both air- and water-cooled, , from 1958; license sold to Deere & Company in 1984 Fichtel & Sachs: Industrial and marine engines, , from 1960 Yanmar Diesel: Marine engines up to , and engines running on diesel fuel up to , from 1961 Toyo Kogyo (Mazda): Motor vehicle engines up to , from 1961 Perkins Engines: All types of engines, up to , from 1961 until <1972 Klöckner-Humboldt-Deutz: Engines running on diesel fuel; development ended by 1972 Daimler Benz: All types of engines from up to , from 1961 until 1976. MAN: Engines running on diesel fuel; development ended by 1972 Krupp: Engines running on diesel fuel; development ended by 1972 Rheinstahl-Hanomag: Petrol engines, , from 1963; by 1972 merged into Daimler-Benz Alfa Romeo: Motor vehicle engines, , from 1964 Rolls-Royce: Engines for diesel fuel or multifuel operation, , from 1965 VEB Automobilbau: Automotive engines from and , from 1965; license abandoned by 1972 Porsche: Sportscar engines from , from 1965 Outboard Marine: Marine engines from , from 1966 Comotor (NSU Motorenwerke and Citroën): Petrol engines from , from 1967 Graupner: Model engines from , from 1967 Savkel: Industrial petrol engines from , from 1969 Nissan: Car engines from , from 1970 General Motors: All types of engines, excluding aircraft engines, up to four-rotor engines, from 1970 Suzuki: Motorcycle engines from , from 1970 Toyota: Car engines from , from 1971 Ford Germany: (including Ford Motor Company): Car engines from , from 1971 BSA Company : Petrol engines from , from 1972 Yamaha Motor Company: Petrol engines from , from 1972 Kawasaki Heavy Industries: Petrol engines from , from 1972 Brunswick Corporation Engines from , from 1972 Ingersoll Rand: Engines from , from 1972 American Motors Company: Petrol engines from , from 1973 In 1961, the Soviet research organizations of NATI, NAMI, and VNIImotoprom began developing a Wankel engine. Eventually, in 1974, development was transferred to a special design bureau at the AvtoVAZ plant. John B. Hege argues that no license was issued to any Soviet car manufacturer. Engineering Felix Wankel managed to overcome most of the problems that made prior attempts to perfect the rotary engines fail, by developing a configuration with vane seals having a tip radius equal to the amount of "oversize" of the rotor housing form, relative to the theoretical epitrochoid, to minimize radial apex seal motion plus introducing a cylindrical gas-loaded apex pin which abutted all sealing elements to seal around the three planes at each rotor apex. In the early days, unique, dedicated production machines had to be built for different housing dimensional arrangements. However, patented designs such as , G. J. Watt, 1974, for a "Wankel Engine Cylinder Generating Machine", , "Apparatus for machining and/or treatment of trochoidal surfaces" and , "Device for machining trochoidal inner walls", and others, solved the problem. Wankel engines have a problem not found in reciprocating piston four-stroke engines in that the block housing has intake, compression, combustion, and exhaust occurring at fixed locations around the housing. This causes a very uneven thermal load on the rotor housing. In contrast, four-stroke reciprocating engines perform these four strokes in one chamber, so that extremes of "freezing" intake and "flaming" exhaust are averaged and shielded by a boundary layer from overheating working parts. 
The University of Florida proposed the use of heat pipes in an air-cooled Wankel to overcome this uneven heating of the block housing. Pre-heating of certain housing sections with exhaust gas improved performance and fuel economy, also reducing wear and emissions. The boundary layer shields and the oil film act as thermal insulation, leading to a low temperature of the lubricating film (approximate maximum on a water-cooled Wankel engine). This gives a more constant surface temperature. The temperature around the spark plug is about the same as in the combustion chamber of a reciprocating engine. With circumferential or axial flow cooling, the temperature difference remains tolerable. Problems arose during research in the 1950s and 1960s. For a while, engineers were faced with what they called "chatter marks" and "devil's scratch" in the inner epitrochoid surface, resulting in chipping of the chrome coating of the trochoidal surfaces. They discovered that the cause was the apex seals reaching a resonating vibration, and the problem was solved by reducing the thickness and weight of the apex seals as well as using more suitable materials. Scratches disappeared after introducing more compatible materials for seals and housing coatings. Yamamoto experimentally lightened apex seals with holes. Now, weight was identified as the main cause. Mazda then used aluminum-impregnated carbon apex seals in their early production engines. NSU used carbon antimony-impregnated apex seals against chrome. NSU developed ELNISIL coating to production maturity and returned to a metal sealing strip for the RO80. Mazda continued to use chrome, but provided the aluminum housing with a steel jacket, which was then coated with a thin dimensional galvanized chrome layer. This allowed Mazda to return to the 3mm and later even 2mm thick metal apex seals. Another early problem was the build-up of cracks in the stator surface near the plug hole, which was eliminated by installing the spark plugs in a separate metal insert/ copper sleeve in the housing instead of a plug being screwed directly into the block housing. Toyota found that substituting a glow-plug for the leading site spark plug improved low rpm, part load, specific fuel consumption by 7%, and emissions and idle. A later alternative solution to spark plug boss cooling was provided with a variable coolant velocity scheme for water-cooled rotaries, which has had widespread use, being patented by Curtiss-Wright, with the last-listed for better air-cooled engine spark plug boss cooling. These approaches did not require a high-conductivity copper insert, but did not preclude its use. Ford tested a Wankel engine with the plugs placed in the side plates, instead of the usual placement in the housing working surface (, 1978). Torque delivery Wankel engines are capable of high-speed operation, meaning they do not necessarily need to produce high torque to produce high power. The positioning of the intake port and intake port closing greatly affect the engine's torque production. Early closing of the intake port increases low-end torque, but reduces high-end torque (and thus power). In contrast, late closing of the intake port reduces low-end torque while increasing torque at high engine speeds, thus resulting in more power at higher engine speeds. 
A peripheral intake port gives the highest mean effective pressure; however, side intake porting produces a more steady idle, because it helps to prevent blow-back of burned gases into the intake ducts, which cause "misfirings" caused by alternating cycles where the mixture ignites and fails to ignite. Peripheral porting (PP) gives the best mean effective pressure throughout the rpm range, but PP was also linked to worse idle stability and part-load performance. Early work by Toyota led to the addition of a fresh air supply to the exhaust port. It also proved that a Reed-valve in the intake port or ducts improved the low rpm and partial load performance of Wankel engines, by preventing blow-back of exhaust gas into the intake port and ducts, and reducing the misfire-inducing high EGR, at the cost of a slight loss of power at top rpm. Elasticity is improved with a greater rotor eccentricity, analogous to a longer stroke in a reciprocating engine. Wankel engines operate better with a low-pressure exhaust system. Higher exhaust back pressure reduces mean effective pressure, more severely in peripheral intake port engines. The Mazda RX-8 Renesis engine improved performance by doubling the exhaust port area relative to earlier designs, and there have been studies of the effect of intake and exhaust piping configuration on the performance of Wankel engines. Side intake ports (as used in Mazda's Renesis engine) were first proposed by Hanns-Dieter Paschke in the late 1950s. Paschke predicted that precisely calculated intake ports and intake manifolds could make a side port engine as powerful as a PP engine. Materials As formerly described, the Wankel engine is affected by unequal thermal expansion due to the four cycles taking place in fixed places of the engine. While this puts great demands on the materials used, the simplicity of the Wankel makes it easier to use alternative materials, such as exotic alloys and ceramics. A commonplace method is, for engine housings made of aluminum, to use a spurted molybdenum layer on the engine housing for the combustion chamber area, and a spurted steel layer elsewhere. Engine housings cast from iron can be induction-brazed to make the material suited for withstanding combustion heat stress. Among the alloys cited for Wankel housing use are A-132, Inconel 625, and 356 treated to T6 hardness. Several materials have been used for plating the housing working surface, Nikasil being one. Citroën, Daimler-Benz, Ford, A P Grazen, and others applied for patents in this field. For the apex seals, the choice of materials has evolved along with the experience gained, from carbon alloys, to steel, ferritic stainless, Ferro-TiC, and other materials. The combination of housing plating and the apex and side seal materials was determined experimentally, to obtain the best duration of both seals and housing cover. For the shaft, steel alloys with little deformation on load are preferred, the use of Maraging steel has been proposed for this. Leaded petrol fuel was the predominant type available in the first years of the Wankel engine's development. Lead is a solid lubricant, and leaded petrol is designed to reduce the wearing of seals and housings. The first engines had the oil supply calculated with consideration of petrol's lubricating qualities. As leaded petrol was being phased out, Wankel engines needed an increased mix of oil in the petrol to provide lubrication to critical engine parts. 
An SAE paper by David Garside extensively described Norton's choices of materials and cooling fins. Sealing Early engine designs had a high incidence of sealing loss, both between the rotor and the housing and also between the various pieces making up the housing. Also, in earlier model Wankel engines, carbon particles could become trapped between the seal and the casing, jamming the engine and requiring a partial rebuild. It was common for very early Mazda engines to require rebuilding after . Further sealing problems arose from the uneven thermal distribution within the housings causing distortion and loss of sealing and compression. This thermal distortion also caused uneven wear between the apex seal and the rotor housing, evident on higher mileage engines. The problem was exacerbated when the engine was stressed before reaching operating temperature. However, Mazda Wankel engines solved these initial problems. Current engines have nearly 100 seal-related parts. The problem of clearance for hot rotor apexes passing between the axially closer side housings in the cooler intake lobe areas was dealt with by using an axial rotor pilot radially inboard of the oil seals, plus improved inertia oil cooling of the rotor interior (C-W , C. Jones, 5/8/63, , M. Bentele, C. Jones. A.H. Raye. 7/2/62), and slightly "crowned" apex seals (different height in the center and in the extremes of seal). Fuel economy and emissions As is described in the thermodynamic disadvantages section, the early Wankel engines had poor fuel economy. This is caused by the Wankel engine's combustion chamber shape and its large surface area. The Wankel engine's design is, on the other hand, much less prone to engine knocking, which allows using low-octane fuels without reducing compression. NSU tested low octane gasoline at the suggestion of Felix Wankel. On a trial basis, 40-octane gasoline was produced by BV Aral, which was used in the Wankel DKM54 test engine with a compression ratio of 8:1; it ran without complaint. This upset the petrochemical industry in Europe, which had invested considerable sums of money in new plants for the production of higher quality gasoline. Direct injection stratified charge engines can be operated with fuels with particularly low octane numbers, such as diesel fuel, which has an octane number of only about 25. As a result of the poor efficiency, a Wankel engine with peripheral exhaust porting has a larger amount of unburnt hydrocarbons (HC) released into the exhaust. The exhaust is, however, relatively low in nitrogen oxide (NOx) emissions, because the combustion is slow, and temperatures are lower than in other engines, and also because of the Wankel engine's good exhaust gas recirculation (EGR) behavior. Carbon monoxide (CO) emissions of Wankel and Otto engines are about the same. The Wankel engine has a significantly higher (Δt > 100 K) exhaust gas temperature than an Otto engine, especially under low and medium load conditions. This is because of the higher combustion frequency and slower combustion. Exhaust gas temperatures can exceed 1300 K under high load at engine speeds of 6,000 rpm. To improve the exhaust gas behavior of the Wankel engine, a thermal reactor or catalytic converter may be used to reduce hydrocarbon and carbon monoxide from the exhaust. Mazda uses a dual ignition system with two spark plugs per chamber. This increases the power output and at the same time reduces HC emissions. 
At the same time, HC emissions can be lowered by reducing the pre-ignition of the T leading plug relative to the L trailing plug. This leads to internal afterburning and reduces HC emissions. On the other hand, the same ignition timing of L and T leads to a higher energy conversion. Hydrocarbons adhering to the combustion chamber wall are expelled into the exhaust at the peripheral outlet. Mazda used 3 spark plugs in their R26B engine per chamber. The third spark plug ignites the mixture in the trailing side before the squish is generated, causing the mixture to burn completely and, also speeding up flame propagation, which improves fuel consumption. According to Curtiss-Wright research, the factor that controls the amount of unburnt hydrocarbons in the exhaust is the rotor surface temperature, with higher temperatures resulting in fewer hydrocarbons in the exhaust. Curtiss-Wright widened the rotor, keeping the rest of engine's architecture unchanged, thus reducing friction losses and increasing displacement and power output. The limiting factor for this widening was mechanical, especially shaft deflection at high rotative speeds. Quenching is the dominant source of hydrocarbon at high speeds and leakage at low speeds. Using side-porting which enables closing the exhaust port around the top-dead center and reducing intake and exhaust overlap helps improving fuel consumption. Mazda's RX-8 car with the Renesis engine (that was first presented in 1999), met in 2004 the United States' low emissions vehicle (LEV-II) standard. This was mainly achieved by using side porting: The exhaust ports, which in earlier Mazda rotary engines were located in the rotor housings, were moved to the side of the combustion chamber. This approach allowed Mazda to eliminate overlap between intake and exhaust port openings, while simultaneously increasing the exhaust port area. This design improved the combustion stability in the low-speed and light load range. The HC emissions from the side exhaust port rotary engine are 35–50% less than those from the peripheral exhaust port Wankel engine. Peripheral ported rotary engines have a better mean effective pressure, especially at high rpm and with a rectangular-shaped intake port. However, the RX-8 was not improved to meet Euro 5 emission regulations, and it was discontinued in 2012. The new Mazda 8C of the Mazda MX-30 R-EV meets the Euro 6d-ISC-FCM emissions standard. Laser ignition Laser ignition was first proposed in 2011, but first studies of laser ignition were only conducted in 2021. It is assumed that laser ignition of lean fuel mixtures in Wankel engines could improve fuel consumption and exhaust gas behavior. In a 2021 study, a Wankel model engine was tested with laser ignition and various gaseous and liquid fuels. Laser ignition leads to a faster center of combustion development, thus improving combustion speed, leading to a reduction in NOx emissions. The laser pulse energy required for proper ignition is "reasonable", in the low single-digit mJ-range. A significant modification of the Wankel engine is not required for laser ignition. Compression-ignition Wankel Research has occurred into rotary compression ignition engines. The basic design parameters of the Wankel engine preclude obtaining a compression ratio sufficient for Diesel operation in a practical engine. The Rolls-Royce and Yanmar compression-ignition approach was to use a two-stage unit (see figure 16.), with one rotor acting as compressor, while combustion takes place in the other. 
Neither engine was functional. Multifuel Wankel engine A different approach from a compression ignition (Diesel) Wankel engine is a non-CI, multifuel Wankel engine that is capable of operating on a wide variety of fuels: diesel, petrol, kerosene, methanol, natural gas, and hydrogen. German engineer Dankwart Eiermann designed this engine at Wankel SuperTec (WST) in the early 2000s. It has a chamber volume of 500 cm3 (cc) and an indicated power output of per rotor. Versions with one to four rotors are possible. The WST engine has a common-rail direct injection system operating on a stratified charge principle. Similar to a Diesel engine, and unlike a conventional Wankel engine, the WST engine compresses only air rather than an air–fuel mixture during the compression phase. Fuel is only injected into the compressed air shortly before top-dead centre, which results in stratified charge (i.e., no homogeneous mixture). A spark plug is used to initiate combustion. The pressure at the end of the compression phase and during combustion is lower than in a conventional Diesel engine, and the fuel consumption is equivalent to that of a small indirect injection compression ignition engine (i.e., >250 g/(kW·h)). Diesel-fuel-powered variants of the WST Wankel engine are being used as APUs in 60 Deutsche Bahn diesel locomotives. The WST diesel fuel engines can produce up to . Hydrogen fuel As a hydrogen/air fuel mixture is quicker to ignite with a faster burning rate than gasoline, an important issue of hydrogen internal combustion engines is to prevent pre-ignition and backfire. In a rotary engine, each phase of the Otto cycle occurs in a different chamber. Importantly, the intake chamber is separated from the combustion chamber, keeping the air/fuel mixture away from localized hot spots. Wankel engines also do not have hot exhaust valves, which eases adapting them to hydrogen operation. Another problem concerns the hydrogenate attack on the lubricating film in reciprocating engines. In a Wankel engine, the problem of a hydrogenate attack is circumvented by using ceramic apex seals. In a prototype Wankel engine fitted to a Mazda RX-8 to research hydrogen operation, Wakayama et al. found that hydrogen operation improved thermal efficiency by 23% over petrol fuel operation. Although the lean operation emits little NOx, the total amount of engine-out NOx exceeds the Japanese SULEV standard. The supplementary stoichiometric operation combined with a catalyst provides additional NOx reduction. Accordingly, the vehicle satisfies the SULEV standard. Advantages Prime advantages of the Wankel engine are: A far higher power-to-weight ratio than a piston engine Easier to package in small engine spaces than an equivalent piston engine Able to reach higher engine speeds than a comparable piston engine Operating with almost no vibration Not prone to engine-knock Cheaper to mass-produce, because the engine contains fewer parts Supplying torque for about two-thirds of the combustion cycle rather than one-quarter for a piston engine Easily adapted and highly suitable for the use of hydrogen fuel. Wankel engines are considerably lighter and simpler, containing far fewer moving parts than piston engines of equivalent power output. Valves or complex valve trains are eliminated by using simple ports cut into the walls of the rotor housing. Since the rotor rides directly on a large bearing on the output shaft, there are no connecting rods and no crankshaft. 
The elimination of reciprocating mass gives Wankel engines a low non-uniformity coefficient, meaning that they operate much smoother than comparable reciprocating piston engines. For example, a two-rotor Wankel engine is more than twice as smooth in its operation as a four-cylinder reciprocating piston engine. A four-stroke cylinder produces a power stroke only every other rotation of the crankshaft, with three strokes being pumping losses. The Wankel engine also has higher volumetric efficiency than a reciprocating piston engine. Because of the quasi-overlap of the power strokes, the Wankel engine is very quick to react to power increases, delivering power quickly when demanded, especially at higher engine speeds. This difference is more pronounced relative to four-cylinder reciprocating engines and less pronounced relative to higher cylinder counts. Due to the absence of hot exhaust valves, the fuel octane requirements of Wankel engines are lower than in reciprocating piston engines. As a rule of thumb, it may be assumed that a Wankel engine with a working chamber volume Vk of 500 cm3 and a compression of ε=9 runs well on mediocre-quality petrol with an octane rating of just 91 RON. If in a reciprocating piston engine, the compression must be reduced by one unit of compression to avoid knock, then, in a comparable Wankel engine, a reduction in compression may not be required. Because of the lower injector count, fuel injection systems in Wankel engines are cheaper than in reciprocating piston engines. An injection system that allows stratified charge operation may help reduce rich mixture areas in undesirable parts of the engine, which improves fuel efficiency. Disadvantages Thermodynamic disadvantages Wankel rotary engines mainly suffer from poor thermodynamics caused by the Wankel engine's design with its huge surface area and poor combustion chamber shape. As an effect of this, the Wankel engine has slow and incomplete combustion, which results in high fuel consumption and bad exhaust gas behavior. Wankel engines can reach a typical maximum efficiency of about 30 percent. In a Wankel rotary engine, fuel combustion is slow, because the combustion chamber is long, thin, and moving. Flame travel occurs almost exclusively in the direction of rotor movement, adding to the poor quenching of the fuel and air mixture, being the main source of unburnt hydrocarbons at high engine speeds: The trailing side of the combustion chamber naturally produces a "squeeze stream" that prevents the flame from reaching the chamber's trailing edge, which worsens the consequences of the fuel and air mixture quenching poorly. Direct fuel injection, in which fuel is injected towards the leading edge of the combustion chamber, can minimize the amount of unburnt fuel in the exhaust. Mechanical disadvantages Although many of the disadvantages are the subject of ongoing research, the current disadvantages of the Wankel engine in production are the following: Rotor sealing The engine housing has vastly different temperatures in each separate chamber section. The different expansion coefficients of the materials lead to imperfect sealing. Additionally, both sides of the apex seals are exposed to fuel, and the design does not allow for controlling the lubrication of the rotors accurately and precisely. 
Rotary engines tend to be overlubricated at all engine speeds and loads, and have relatively high oil consumption and other problems resulting from excess oil in the combustion areas of the engine, such as carbon formation and excessive emissions from burning oil. By comparison, a piston engine has all functions of a cycle in the same chamber giving a more stable temperature for piston rings to act against. Additionally, only one side of the piston in a (four-stroke) piston engine is exposed to fuel, allowing oil to lubricate the cylinders from the other side. Piston engine components can also be designed to increase ring sealing and oil control as cylinder pressures and power levels increase. To overcome the problems in a Wankel engine of differences in temperatures between different regions of housing and side and intermediary plates, and the associated thermal dilatation inequities, a heat pipe has been used to transport heat from the hot to the cold parts of the engine. The "heat pipes" effectively direct hot exhaust gas to the cooler parts of the engine, resulting in decreases in efficiency and performance. In small-displacement, charge-cooled rotor, air-cooled housing Wankel engines, that has been shown to reduce the maximum engine temperature from , and the maximum difference between hotter and colder regions of the engine from . Apex seal lifting Centrifugal force pushes the apex seal onto the housing surface forming a firm seal. Gaps can develop between the apex seal and trochoid housing in light-load operation when imbalances in centrifugal force and gas pressure occur. At low engine-rpm ranges, or under low-load conditions, the gas pressure in the combustion chamber can cause the seal to lift off the surface, resulting in combustion gas leaking into the next chamber. Mazda developed a solution, changing the shape of the trochoid housing, which meant that the seals remained flush with the housing. Using the Wankel engine at sustained higher revolutions helps eliminate apex seal lift-off, making it viable in applications such as electricity generation. In motor vehicles, the engine is suited to series-hybrid applications. NSU circumvented this problem by adding slots on one side of the apex seals, thus directing the gas pressure into the base of the apex. This effectively prevented the apex seals from lifting off. Although in two dimensions the seal system of a Wankel looks to be even simpler than that of a corresponding multi-cylinder piston engine, in three dimensions the opposite is true. As well as the rotor apex seals evident in the conceptual diagram, the rotor must also seal against the chamber ends. Piston rings in reciprocating engines are not perfect seals; each has a gap to allow for expansion. The sealing at the apexes of the Wankel rotor is less critical because leakage is between adjacent chambers on adjacent strokes of the cycle, rather than to the mainshaft case. Although sealing has improved over the years, the less-than-effective sealing of the Wankel, which is mostly due to lack of lubrication, remains a factor reducing its efficiency. The trailing side of the rotary engine's combustion chamber develops a squeeze stream that pushes back the flame front. With the conventional one or two-spark-plug system and homogenous mixture, this squeeze stream prevents the flame from propagating to the combustion chamber's trailing side in the mid and high-engine speed ranges. 
Kawasaki dealt with that problem in its US patent ; Toyota obtained a 7% economy improvement by placing a glow-plug in the leading side, and using Reed-Valves in intake ducts. In two-stroke engines, metal reeds last about while carbon fiber, around . This poor combustion in the trailing side of the chamber is one of the reasons why there is more carbon monoxide and unburned hydrocarbons in a Wankel's exhaust stream. A side-port exhaust, as is used in the Mazda Renesis, avoids port overlap, one of the causes of this, because the unburned mixture cannot escape. The Mazda 26B avoided this problem through the use of a three spark-plug ignition system and obtained a complete conversion of the aspirated mixture. In the 26B, the upper late trailing spark plug ignites before the onset of the squeeze flow. Regulations and taxation National agencies that tax automobiles according to displacement and regulatory bodies in automobile racing use a variety of equivalency factors to compare Wankel engines to four-stroke piston engines. Greece, for example, taxed cars based on the working chamber volume (the face of one rotor), multiplied by the number of rotors, lowering the cost of ownership. Japan did the same, but applied an equivalency factor of 1.5, making Mazda's 13B engine fit just under the 2-liter tax limit. FIA used an equivalency factor of 1.8 but later increased it to 2.0, using the displacement formula described by Bensinger. However, the DMSB applies an equivalency factor of 1.5 in motorsport. Car applications The first rotary-engined car for sale was the 1964 NSU Rotary Spider. Rotary engines were continuously fitted in cars until 2012 when Mazda discontinued the RX-8. Mazda introduced a rotary-engined hybrid electric car, the MX-30 R-EV in 2023. NSU and Mazda Mazda and NSU signed a study contract to develop the Wankel engine in 1961 and competed to bring the first Wankel-powered automobile to the market. Although Mazda produced an experimental rotary that year, NSU was the first with a rotary automobile for sale, the sporty NSU Spider in 1964; Mazda countered with a display of two- and four-rotor rotary engines at that year's Tokyo Motor Show. In 1967, NSU began production of a rotary-engined luxury car, the Ro 80. NSU had not produced reliable apex seals on the rotor, though, unlike Mazda and Curtiss-Wright. NSU had problems with apex seals' wear, poor shaft lubrication, and poor fuel economy, leading to frequent engine failures, not solved until 1972, which led to large warranty costs curtailing further NSU rotary engine development. This premature release of the new rotary engine gave a poor reputation for all makes, and even when these issues were solved in the last engines produced by NSU in the second half of the '70s, sales did not recover. By early 1978, Audi engineers Richard van Basshuysen and Gottlieb Wilmers had designed a new generation of the Audi NSU Wankel engine, the KKM 871. It was a two-rotor unit with a chamber volume Vk of 746.6 cm3, derived from an eccentricity of 17 mm, a generating radius of 118.5 mm, and equidistance of 4 mm and a housing width of 69 mm. It had double side intake ports, and a peripheral exhaust port; it was fitted with a continuously injecting Bosch K-Jetronic multipoint manifold injection system. According to the DIN 70020 standard, it produced 121 kW at 6500 rpm, and could provide a max. torque of 210 N·m at 3500 rpm. Van Basshuysen and Wilmers designed the engine with either a thermal reactor, or a catalytic converter for emissions control. 
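As a rough check of the KKM 871 figures just given, the single-chamber displacement of a Wankel engine is commonly approximated from its basic geometry as Vk ≈ 3√3·e·(R + a)·B. The sketch below applies that relation, and then the displacement-times-factor arithmetic used by the taxation and racing bodies mentioned above; the 654 cm3 chamber volume assumed for Mazda's 13B is a widely published figure that does not appear in this article.

```python
import math

# Single-chamber volume of a Wankel engine from its basic geometry, using the
# commonly quoted relation Vk = 3*sqrt(3) * e * (R + a) * B, where e is the
# eccentricity, R the generating radius, a the equidistance and B the housing
# width. Checked against the KKM 871 data given above.
def chamber_volume_cm3(e_mm, R_mm, a_mm, B_mm):
    return 3 * math.sqrt(3) * e_mm * (R_mm + a_mm) * B_mm / 1000.0

print(round(chamber_volume_cm3(17, 118.5, 4, 69), 1))   # ~746.6 cm^3, as stated

# Taxable "equivalent displacement" as described above:
# chamber volume x number of rotors x national equivalency factor.
# 654 cm^3 per chamber for Mazda's 13B is an assumed (widely published) figure.
def equivalent_displacement(vk_cm3, rotors, factor):
    return vk_cm3 * rotors * factor

print(equivalent_displacement(654, 2, 1.5))   # 1962 cm^3, just under 2 litres for Japan
```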
The engine had a mass of 142 kg, and a BSFC of approximately 315 g/(kW·h) at 3000 rpm and a BMEP of 900 kPa. For testing, two KKM 871 engines were installed in Audi 100 Type 43 test cars, one with a five-speed manual gearbox, and one with a three-speed automatic gearbox. Mazda Mazda claimed to have solved the apex seal problem, operating test engines at high speed for 300 hours without failure. After years of development, Mazda's first rotary engine car was the 1967 Cosmo 110S. The company followed with several Wankel ("rotary" in the company's terminology) vehicles, including a bus and a pickup truck. Customers often cited the cars' smoothness of operation. However, Mazda chose a method to comply with hydrocarbon emission standards which, while less expensive to produce, increased fuel consumption. Mazda later abandoned the rotary in most of their automotive designs, continuing to use the engine in their sports car range only. The company normally used two-rotor designs. A more advanced twin-turbo three-rotor engine was fitted in the 1990 Eunos Cosmo sports car. In 2003, Mazda introduced the Renesis engine fitted in the RX-8. The Renesis engine relocated the ports for exhaust from the periphery of the rotary housing to the sides, allowing for larger overall ports, and better airflow. The Renesis is capable of with improved fuel economy, reliability, and lower emissions than prior Mazda rotary engines, all from a nominal 2.6 L displacement, but this was not enough to meet more stringent emissions standards. Mazda ended production of their rotary engine in 2012 after the engine failed to meet the more stringent Euro 5 emission standards, leaving no automotive company selling a rotary-powered road vehicle until 2023. Mazda launched the MX-30 R-EV hybrid fitted with a Wankel engine range extender in March 2023. The Wankel engine has no direct connection to the wheels and serves only to charge the battery. It is a single-rotor unit with a engine and a rated power output of . The engine has petrol direct injection, exhaust gas recirculation, and an exhaust-gas treatment system with a Three-way catalyst and a particulate filter. The engine is Euro 6d-ISC-FCM-compliant. Citroën Citroën did much research, producing the M35 and GS Birotor cars, and the helicopter, using engines produced by Comotor, a joint venture by Citroën and NSU. Daimler-Benz Daimler-Benz fitted a Wankel engine in their C111 concept car. The C 111-II's engine was naturally aspirated, fitted with petrol direct injection, and had four rotors. The total displacement was , and the compression ration was 9.3:1 It provided a maximum torque of at 5,000rpm and a power output of at 6,000rpm. American Motors American Motors Corporation (AMC) was so convinced "... that the rotary engine will play an important role as a powerplant for cars and trucks of the future ...", that the chairman, Roy D. Chapin Jr., signed an agreement in February 1973 after a year's negotiations, to build rotary engines for both passenger cars and military vehicles, and the right to sell any rotary engines it produced to other companies. AMC's president, William Luneburg, did not expect dramatic development through to 1980, but Gerald C. Meyers, AMC's vice president of the engineering product group, suggested that AMC should buy the engines from Curtiss-Wright before developing its own rotary engines, and predicted a total transition to rotary power by 1984. Plans called for the engine to be used in the AMC Pacer, but development was pushed back. 
American Motors designed the unique Pacer around the engine. By 1974, AMC had decided to purchase the General Motors (GM) rotary instead of building an engine in-house. Both GM and AMC confirmed the relationship would be beneficial in marketing the new engine, with AMC claiming that the GM rotary achieved good fuel economy. GM's engines had not reached production when the Pacer was launched onto the market. The 1973 oil crisis played a part in frustrating the use of the rotary engine. Rising fuel prices and speculation about proposed US emission standards legislation also increased concerns. General Motors At its annual meeting in May 1973, General Motors unveiled the Wankel engine it planned to use in the Chevrolet Vega. By 1974, GM R&D had not succeeded in producing a Wankel engine meeting both the emission requirements and good fuel economy, leading to a decision by the company to cancel the project. Because of that decision, the R&D team only partly released the results of its most recent research, which claimed to have solved the fuel-economy problem and built reliable engines with a lifespan above . Those findings were not taken into account when the cancellation order was issued. The ending of GM's rotary project required AMC, who was to purchase the engine, to reconfigure the Pacer to house its AMC straight-6 engine driving the rear wheels. AvtoVAZ In 1974, the Soviet Union created a special engine-design bureau, which, in 1978, designed an engine designated as VAZ-311 fitted into a VAZ-2101 car. In 1980, the company began delivering the VAZ-411 twin-rotor Wankel engine in VAZ-2106 cars, with about 200 being manufactured. Most of the production went to the security services. Ford Ford conducted research in rotary engines, resulting in patents granted: , 1974, a method for fabricating housings; 1974, side plates coating; , 1975, housing coating; , 1978: Housings alignment; , 1979, reed-valve assembly. In 1972, Henry Ford II stated that the rotary probably would not replace the piston in "my lifetime". Car racing The Sigma MC74 powered by a Mazda 12A engine was the first engine and only team from outside Western Europe or the United States to finish the entire 24hours of the 24 Hours of Le Mans race, in 1974. Yojiro Terada was the driver of the MC74. Mazda was the first team from outside Western Europe or the United States to win Le Mans outright. It was also the only non-piston engined car to win Le Mans, which the company accomplished in 1991 with their four-rotor 787B ( displacement), rated by FIA formula at ). In the C2 class, all participants had the same amount of fuel. The only exception was the unregulated C1 Category 1. This category only allowed naturally aspirated engines. The Mazdas were classified as naturally aspirated to start with 830 kg weight, 170 kg less than the supercharged competitors. The cars under the Group C1 Category 1 regulations for 1991 were allowed to be another 80 kg lighter than the 787B. In addition, Group C1 Category 1 had only permitted 3.5-liter naturally aspirated engines and had no fuel quantity limits. As a vehicle range extender Due to the compact size and the high power-to-weight ratio of a Wankel engine, it has been proposed for electric vehicles as range extenders to provide supplementary power when electric battery levels are low. 
A Wankel engine used as a generator has packaging, noise, vibration, and harshness advantages when used in a passenger car, maximizing interior passenger and luggage space, as well as providing a good noise and vibration emissions profile. However, it is questionable whether or not the inherent disadvantages of the Wankel engine allow the usage of the Wankel engine as a range extender for passenger cars. In 2010, Audi unveiled a prototype series-hybrid electric car, the A1 e-tron. It incorporated a Wankel engine with a chamber volume Vk of 254 cm3, capable of producing 18 kW at 5000 rpm. It was mated to an electric generator, which recharged the car's batteries as needed and provided electricity directly to the electric driving motor. The package had a mass of 70 kg and could produce 15 kW of electric power. In November 2013, Mazda announced to the motoring press a series-hybrid prototype car, the Mazda2 EV, using a Wankel engine as a range extender. The generator engine, located under the rear luggage floor, is a tiny, almost inaudible, single-rotor 330-cc unit, generating at 4,500rpm and maintaining a continuous electric output of 20 kW. Mazda introduced the MX-30 R-EV fitted with a Wankel engine range extender in March 2023. The car's Wankel engine is a naturally aspirated single-rotor unit with a chamber volume Vk of , a compression of 11.9, and a rated power output of . It has petrol direct injection, exhaust gas recirculation, and an exhaust-gas treatment system with a TWC and a particulate filter. According to auto motor und sport, the engine is Euro 6d-ISC-FCM-compliant. Motorcycle applications The first Wankel-engined motorcycle was an MZ-built MZ ES 250, fitted with a water-cooled KKM 175 W Wankel engine. An air-cooled version followed this in 1965, called the KKM 175 L. The engine produced at 6,750rpm, but the motorcycle never went into series production. Norton In Britain, Norton Motorcycles developed a Wankel rotary engine for motorcycles, based on the Sachs air-cooled rotor Wankel that powered the DKW/Hercules W-2000 motorcycle. This two-rotor engine was included in the Commander and F1. Norton improved on Sachs's air cooling, introducing a plenum chamber. Suzuki also made a production motorcycle powered by a Wankel engine, the RE-5, using ferroTiC alloy apex seals and an NSU rotor in a successful attempt to prolong the engine's life. In the early 1980s, using earlier work at BSA, Norton produced the air-cooled twin-rotor Classic, followed by the liquid-cooled Commander and the Interpol2 (a police version). Subsequent Norton Wankel bikes included the Norton F1, F1 Sports, RC588, Norton RCW588, and NRS588. Norton proposed a new 588-cc twin-rotor model called the "NRV588" and a 700-cc version called the "NRV700". A former mechanic at Norton, Brian Crighton, started developing his own rotary engined motorcycles line named "Roton", which won several Australian races. Despite successes in racing, no motorcycles powered by Wankel engines have been produced for sale to the general public for road use since 1992. Yamaha In 1972, Yamaha introduced the RZ201 at the Tokyo Motor Show, a prototype with a Wankel engine, weighing 220 kg and producing from a twin-rotor 660-cc engine (US patent N3964448). In 1972, Kawasaki presented its two-rotor Kawasaki X99 rotary engine prototype (US patents N 3848574 &3991722). 
Both Yamaha and Kawasaki claimed to have solved the problems of poor fuel economy, high exhaust emissions, and poor engine longevity in early Wankels, but neither prototype reached production. Hercules In 1974, Hercules produced W-2000 Wankel motorcycles, but low production numbers meant the project was unprofitable, and production ceased in 1977. Suzuki From 1975 to 1976, Suzuki produced its RE5 single-rotor Wankel motorcycle. It was a complex design, with both liquid cooling and oil cooling, and multiple lubrication and carburetor systems. It worked well and was smooth, but it did not sell well because it was heavy and had a modest power output of . Suzuki opted for a complicated oil-cooling and water-cooling system. The exhaust pipes become very hot, with Suzuki opting for a finned exhaust manifold, twin-skinned exhausted pipes with cooling grilles, heatproof pipe wrappings, and silencers with heat shields. Suzuki had three lube systems, while Garside had a single total-loss oil injection system that fed both the main bearings and the intake manifolds. Suzuki chose a single rotor that was fairly smooth, but with rough patches at 4,000 rpm. Suzuki mounted the massive rotor high in the frame. Although it was described to handle well, the result was that the Suzuki was heavy, overcomplicated, expensive to manufacture, and, at 62 bhp, short on power. Van Veen Dutch motorcycle importer and manufacturer Van Veen produced small quantities of a dual-rotor Wankel-engined OCR-1000 motorcycle between 1978 and 1980, using surplus Comotor engines. The OCR 1000 engine used a modified KKM 624 engine initially intended for the Citroën GS Birotor car. Whereby an electronic map ignition from Hartig replaced the ignition distributor. Non-road vehicle applications Aircraft In principle, rotary engines are ideal for light aircraft, being light, compact, almost vibrationless, and with a high power-to-weight ratio. Further aviation benefits of a rotary engine include: The engine is not susceptible to "shock-cooling" during descent; The engine does not require an enriched mixture for cooling at high power; Having no reciprocating parts, there is less vulnerability to damage when the engine revolves at a higher rate than the designed maximum. Unlike cars and motorcycles, a rotary aero-engine will be sufficiently warm before full power is applied because of the time taken for pre-flight checks. Also, the journey to the runway has minimum cooling, which further permits the engine to reach the operating temperature for full power on take-off. A Wankel aero-engine spends most of its operational time at high power outputs, with little idling. Since rotary engines operate at a relatively high rotational speed, at 6,000rpm of the output shaft, the rotor spins only at about one-third of that speed. With relatively low torque, propeller-driven aircraft must use a propeller speed reduction unit to maintain propellers within the designed speed range. Experimental aircraft with Wankel engines use propeller speed reduction units; for example, the MidWest twin-rotor engine has a 2.95:1 reduction gearbox. The first rotary engine aircraft was in the late-1960s in the experimental Lockheed Q-Star civilian version of the United States Army's reconnaissance QT-2, essentially a powered Schweizer sailplane. The plane was powered by a Curtiss-Wright RC2-60 Wankel rotary engine. The same engine model was also used in a Cessna Cardinal and a helicopter, as well as other airplanes. 
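The speed relations described above are straightforward to work out. The sketch below uses the one-third rotor-to-shaft ratio and the 2.95:1 reduction gearbox quoted for the MidWest engine; the 6,000 rpm shaft speed is the representative value from the text.

```python
# Speed relations for a Wankel aero-engine, using figures quoted above.
shaft_rpm = 6000                  # representative output-shaft speed from the text
rotor_rpm = shaft_rpm / 3         # rotor turns at one third of shaft speed -> 2000 rpm
prop_rpm = shaft_rpm / 2.95       # MidWest 2.95:1 reduction gearbox -> ~2034 rpm

print(rotor_rpm, round(prop_rpm))
```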
The French company Citroën developed a rotary-powered helicopter in the 1970s. In Germany in the mid-1970s, a pusher ducted fan airplane powered by a modified NSU multi-rotor rotary engine was developed in both civilian and military versions, Fanliner and Fantrainer. At roughly the same time as the first experiments with full-scale aircraft powered with rotary engines, model aircraft-sized versions were pioneered by a combination of the well-known Japanese O.S. Engines firm and the then-extant German Graupner aeromodelling products firm, under license from NSU. The Graupner model Wankel engine has a chamber volume Vk of 4.9 cm3, and produces 460 W at 16,000 rpm−1; its mass is 370 g. It was produced by O.S. engines of Japan. Rotary engines have been fitted in homebuilt experimental aircraft, such as the ARV Super2, a couple of which were powered by the British MidWest aero-engine. Most are Mazda 12A and 13B automobile engines, converted for aviation use. This is a very cost-effective alternative to certified aircraft engines, providing engines ranging from 100 to at a fraction of the cost of traditional piston engines. These conversions were initially in the early 1970s. Peter Garrison, a contributing editor for Flying magazine, wrote "in my opinion … the most promising engine for aviation use is the Mazda rotary." The sailplane manufacturer Schleicher uses an Austro Engine AE50R engine in its self-launching models ASK-21 Mi, ASH-26E, ASH-25 M/Mi, ASH-30 Mi, ASH-31 Mi, ASW-22 BLE, and ASG-32 Mi. In 2013, e-Go airplanes, based in Cambridge, United Kingdom, announced that a rotary engine from Rotron Power will power its new single-seater canard aircraft. The DA36 E-Star, an aircraft designed by Siemens, Diamond Aircraft and EADS, employs a series hybrid powertrain with the propeller being turned by a Siemens electric motor. The aim is to reduce fuel consumption and emissions by up to 25%. An onboard Austro Engine engine and generator provide the electricity. A propeller speed reduction unit is eliminated. The electric motor uses electricity stored in batteries, with the generator engine off, to take off and climb reducing sound emissions. The series-hybrid powertrain using the Wankel engine reduces the plane's weight by 100 kg relative to its predecessor. The DA36 E-Star first flew in June 2013, making this the first-ever flight of a series-hybrid powertrain. Diamond Aircraft claims that rotary engine technology is scalable to a 100-seat aircraft. Trains Since 2015, a total of 60 trains in Germany have been equipped with Wankel-engined auxiliary power systems that burn diesel fuel. The locomotives use the WST KKM 351 Wankel diesel fuel engine. Other uses The Wankel engine is well-suited for devices in which a human operator is close to the engine, e.g., hand-held devices such as chainsaws. The excellent starting behavior and low mass make the Wankel engine also a good powerplant for portable fire pumps and portable power generators. Small Wankel engines are being found in applications such as go-karts, personal watercraft, and auxiliary power units for aircraft. Kawasaki patented mixture-cooled rotary engine (US patent 3991722). Japanese diesel engine manufacturer Yanmar and Dolmar-Sachs of Germany had a rotary-engined chain saw (SAE paper 760642) and outboard boat engines, and the French Outils Wolf, made a lawnmower (Rotondor) powered by a Wankel rotary engine. The rotor was in a horizontal position to save on production costs, and there were no seals on the downside. 
The simplicity of the rotary engine makes it well-suited for mini, micro, and micro-mini engine designs. The Microelectromechanical systems (MEMS) Rotary Engine Lab at the University of California, Berkeley, formerly researched developing rotary engines down to 1 mm in diameter, with displacements less than 0.1 cc. Materials include silicon, and motive power includes compressed air. The goal of such research was to eventually develop an internal combustion engine with the ability to deliver 100 milliwatts of electrical power, with the engine serving as the rotor of the electric generator, with magnets built into the engine rotor. Development of the miniature rotary engine stopped at UC Berkeley at the end of the DARPA contract. In 1976, Road & Track reported that Ingersoll-Rand would develop a Wankel engine with a chamber volume Vk of with a rated power of per rotor. Eventually, 13 units of the proposed engine were built, albeit with a larger displacement, and covered over 90,000 operating hours combined. The engine was made with a chamber volume Vk of , and a power output of per rotor. Both single, and twin-rotor engines were made (producing or respectively). The engines ran on natural gas and had a relatively low engine speed due to its application. Deere & Company acquired the Curtiss-Wright rotary division in February 1984, making large multi-fuel prototypes, some with an 11-liter rotor for large vehicles. The developers attempted to use a stratified charge concept. The technology was transferred to RPI in 1991. Yanmar of Japan produced small, charge-cooled rotary engines for chainsaws and outboard engines. One of its products is the LDR (rotor recess in the leading edge of the combustion chamber) engine, which has better exhaust emissions profiles, and reed-valve controlled intake ports, which improve part-load and low rpm performance. In 1971 and 1972, Arctic Cat produced snowmobiles powered by Sachs KM 914 303-cc and KC-24 294-cc Wankel engines made in Germany. In the early 1970s, Outboard Marine Corporation sold snowmobiles under the Johnson and other brands, which were powered by OMC engines. Aixro of Germany produces and sells a go-kart engine with a 294-cc-chamber charge-cooled rotor and liquid-cooled housings. Other makers include Wankel AG, Cubewano, Rotron, and Precision Technology. Non-internal combustion In addition to applications as an internal combustion engine, the basic Wankel design has also been used for gas compressors, and superchargers for internal combustion engines, but in these cases, although the design still offers advantages in reliability, the primary advantages of the Wankel in size and weight over the four-stroke internal combustion engine are irrelevant. In a design using a Wankel supercharger on a Wankel engine, the supercharger is twice the size of the engine. The Wankel design is used in the seat belt pre-tensioner system in some Mercedes-Benz and Volkswagen cars. When the deceleration sensors detect a potential crash, small explosive cartridges are triggered electrically, and the resulting pressurized gas feeds into tiny Wankel engines, which rotate to take up the slack in the seat belt systems, anchoring the driver and passengers firmly in the seat before a collision.
Technology
Engines
null
33306
https://en.wikipedia.org/wiki/Water
Water
Water is an inorganic compound with the chemical formula . It is a transparent, tasteless, odorless, and nearly colorless chemical substance. It is the main constituent of Earth's hydrosphere and the fluids of all known living organisms (in which it acts as a solvent). It is vital for all known forms of life, despite not providing food energy or organic micronutrients. Its chemical formula, , indicates that each of its molecules contains one oxygen and two hydrogen atoms, connected by covalent bonds. The hydrogen atoms are attached to the oxygen atom at an angle of 104.45°. In liquid form, is also called "water" at standard temperature and pressure. Because Earth's environment is relatively close to water's triple point, water exists on Earth as a solid, a liquid, and a gas. It forms precipitation in the form of rain and aerosols in the form of fog. Clouds consist of suspended droplets of water and ice, its solid state. When finely divided, crystalline ice may precipitate in the form of snow. The gaseous state of water is steam or water vapor. Water covers about 71% of the Earth's surface, with seas and oceans making up most of the water volume (about 96.5%). Small portions of water occur as groundwater (1.7%), in the glaciers and the ice caps of Antarctica and Greenland (1.7%), and in the air as vapor, clouds (consisting of ice and liquid water suspended in air), and precipitation (0.001%). Water moves continually through the water cycle of evaporation, transpiration (evapotranspiration), condensation, precipitation, and runoff, usually reaching the sea. Water plays an important role in the world economy. Approximately 70% of the fresh water used by humans goes to agriculture. Fishing in salt and fresh water bodies has been, and continues to be, a major source of food for many parts of the world, providing 6.5% of global protein. Much of the long-distance trade of commodities (such as oil, natural gas, and manufactured products) is transported by boats through seas, rivers, lakes, and canals. Large quantities of water, ice, and steam are used for cooling and heating in industry and homes. Water is an excellent solvent for a wide variety of substances, both mineral and organic; as such, it is widely used in industrial processes and in cooking and washing. Water, ice, and snow are also central to many sports and other forms of entertainment, such as swimming, pleasure boating, boat racing, surfing, sport fishing, diving, ice skating, snowboarding, and skiing. Etymology The word water comes from Old English , from Proto-Germanic (source also of Old Saxon , Old Frisian , Dutch , Old High German , German , , Gothic ()), from Proto-Indo-European , suffixed form of root (; ). Also cognate, through the Indo-European root, with Greek (; from Ancient Greek (), whence English ), Russian (), Irish , and Albanian . History On Earth Properties Water () is a polar inorganic compound. At room temperature it is a tasteless and odorless liquid, nearly colorless with a hint of blue. The simplest hydrogen chalcogenide, it is by far the most studied chemical compound and is sometimes described as the "universal solvent" for its ability to dissolve more substances than any other liquid, though it is poor at dissolving nonpolar substances. This allows it to be the "solvent of life": indeed, water as found in nature almost always includes various dissolved substances, and special steps are required to obtain chemically pure water. 
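The 104.45° bond angle mentioned above, together with the molecule's charge separation, is what makes water so strongly polar. The following sketch estimates the dipole moment from the geometry; the O–H bond length (about 95.8 pm) and the effective partial charge on each hydrogen (about +0.33 e) are assumed textbook-style model values, not figures from this article.

```python
import math

# Hedged sketch: estimate water's electric dipole moment from its geometry.
# The H-O-H angle (104.45 deg) is from the article; the O-H bond length and the
# effective partial charge per hydrogen are assumed model values.
angle_deg = 104.45
bond_m = 95.8e-12                 # O-H bond length in metres (assumed)
q_hydrogen = 0.33 * 1.602e-19     # effective partial charge in coulombs (assumed)

# Both O-H bond dipoles add along the bisector of the H-O-H angle.
p_si = 2 * q_hydrogen * bond_m * math.cos(math.radians(angle_deg / 2))
print(round(p_si / 3.336e-30, 2))   # ~1.86 debye, close to the measured ~1.85 D
```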
Water is the only common substance to exist as a solid, liquid, and gas under normal terrestrial conditions.
States
Along with oxidane, water is one of the two official names for the chemical compound H2O; it is also the liquid phase of H2O. The other two common states of matter of water are the solid phase, ice, and the gaseous phase, water vapor or steam. The addition or removal of heat can cause phase transitions: freezing (water to ice), melting (ice to water), vaporization (water to vapor), condensation (vapor to water), sublimation (ice to vapor) and deposition (vapor to ice).
Density
Water is one of only a few common naturally occurring substances which, for some temperature ranges, become less dense as they cool, and the only known naturally occurring substance which does so while liquid. In addition, it is unusual in becoming significantly less dense as it freezes, though it is not unique in that respect. At 1 atm pressure, it reaches its maximum density of about 1,000 kg/m3 at 3.98 °C. Below that temperature, but above the freezing point of 0 °C, it expands, becoming less dense until it reaches the freezing point, where its liquid-phase density is about 999.8 kg/m3. Once it freezes into ice, it expands by about 9%, to a density of about 917 kg/m3. This expansion can exert enormous pressure, bursting pipes and cracking rocks. As a solid, it displays the usual behavior of contracting and becoming denser as it cools. These unusual thermal properties have important consequences for life on Earth. In a lake or ocean, water at about 4 °C sinks to the bottom, and ice forms on the surface, floating on the liquid water. This ice insulates the water below, preventing it from freezing solid. Without this protection, most aquatic organisms residing in lakes would perish during the winter. In addition, this anomalous behavior is an important part of the thermohaline circulation, which distributes heat around the planet's oceans.
Magnetism
Water is a diamagnetic material. Although the interaction is weak, a notable effect can be achieved with superconducting magnets.
Phase transitions
At a pressure of one atmosphere (atm), ice melts or water freezes (solidifies) at 0 °C and water boils or vapor condenses at 100 °C. However, even below the boiling point, water can change to vapor at its surface by evaporation (vaporization throughout the liquid is known as boiling). Sublimation and deposition also occur on surfaces. For example, frost is deposited on cold surfaces while snowflakes form by deposition on an aerosol particle or ice nucleus. In the process of freeze-drying, a food is frozen and then stored at low pressure so the ice on its surface sublimates. The melting and boiling points depend on pressure. A good approximation for the rate of change of the melting temperature with pressure is given by the Clausius–Clapeyron relation:

$$\frac{\mathrm{d}T_\mathrm{m}}{\mathrm{d}P} = \frac{T_\mathrm{m}\,(V_\mathrm{L} - V_\mathrm{S})}{L_\mathrm{f}}$$

where $V_\mathrm{L}$ and $V_\mathrm{S}$ are the molar volumes of the liquid and solid phases, and $L_\mathrm{f}$ is the molar latent heat of melting. In most substances, the volume increases when melting occurs, so the melting temperature increases with pressure. However, because ice is less dense than liquid water, the melting temperature of ice decreases with pressure. In glaciers, pressure melting can occur under sufficiently thick volumes of ice, resulting in subglacial lakes. The Clausius–Clapeyron relation also applies to the boiling point, but because the vapor phase has a much lower density than the liquid phase, the boiling point increases with pressure. Water can remain in a liquid state at high temperatures in the deep ocean or underground.
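A quick numeric check of the pressure-melting statement above: the sketch below evaluates the Clausius–Clapeyron slope for the ice/water melting line. The molar latent heat of fusion (about 6.01 kJ/mol) and the densities of ice and cold liquid water used here are standard reference values, not figures given in this article.

```python
# Hedged numeric check of the Clausius-Clapeyron slope for the ice/water
# melting line, using standard reference values for the inputs.
M = 0.018015             # kg/mol, molar mass of water
V_liq = M / 999.8        # m^3/mol, liquid water near 0 degC
V_ice = M / 917.0        # m^3/mol, ice Ih
L_fus = 6010.0           # J/mol, molar latent heat of melting
T_melt = 273.15          # K

dT_dP = T_melt * (V_liq - V_ice) / L_fus      # K per Pa
print(dT_dP * 101_325)   # about -0.0075 K per atmosphere of added pressure
```

The result, a drop of roughly 0.007 °C per atmosphere, is small, which is why pressure melting only becomes significant under very thick ice.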
For example, temperatures exceed in Old Faithful, a geyser in Yellowstone National Park. In hydrothermal vents, the temperature can exceed . At sea level, the boiling point of water is . As atmospheric pressure decreases with altitude, the boiling point decreases by 1 °C every 274 meters. High-altitude cooking takes longer than sea-level cooking. For example, at , cooking time must be increased by a fourth to achieve the desired result. Conversely, a pressure cooker can be used to decrease cooking times by raising the boiling temperature. In a vacuum, water will boil at room temperature. Triple and critical points On a pressure/temperature phase diagram (see figure), there are curves separating solid from vapor, vapor from liquid, and liquid from solid. These meet at a single point called the triple point, where all three phases can coexist. The triple point is at a temperature of and a pressure of ; it is the lowest pressure at which liquid water can exist. Until 2019, the triple point was used to define the Kelvin temperature scale. The water/vapor phase curve terminates at and . This is known as the critical point. At higher temperatures and pressures the liquid and vapor phases form a continuous phase called a supercritical fluid. It can be gradually compressed or expanded between gas-like and liquid-like densities; its properties (which are quite different from those of ambient water) are sensitive to density. For example, for suitable pressures and temperatures it can mix freely with nonpolar compounds, including most organic compounds. This makes it useful in a variety of applications including high-temperature electrochemistry and as an ecologically benign solvent or catalyst in chemical reactions involving organic compounds. In Earth's mantle, it acts as a solvent during mineral formation, dissolution and deposition. Phases of ice and water The normal form of ice on the surface of Earth is ice Ih, a phase that forms crystals with hexagonal symmetry. Another with cubic crystalline symmetry, ice Ic, can occur in the upper atmosphere. As the pressure increases, ice forms other crystal structures. As of 2024, twenty have been experimentally confirmed and several more are predicted theoretically. The eighteenth form of ice, ice XVIII, a face-centred-cubic, superionic ice phase, was discovered when a droplet of water was subject to a shock wave that raised the water's pressure to millions of atmospheres and its temperature to thousands of degrees, resulting in a structure of rigid oxygen atoms in which hydrogen atoms flowed freely. When sandwiched between layers of graphene, ice forms a square lattice. The details of the chemical nature of liquid water are not well understood; some theories suggest that its unusual behavior is due to the existence of two liquid states. Taste and odor Pure water is usually described as tasteless and odorless, although humans have specific sensors that can feel the presence of water in their mouths, and frogs are known to be able to smell it. However, water from ordinary sources (including mineral water) usually has many dissolved substances that may give it varying tastes and odors. Humans and other animals have developed senses that enable them to evaluate the potability of water in order to avoid water that is too salty or putrid. Color and appearance Pure water is visibly blue due to absorption of light in the region c. 600–800 nm. The color can be easily observed in a glass of tap-water placed against a pure white background, in daylight. 
The principal absorption bands responsible for the color are overtones of the O–H stretching vibrations. The apparent intensity of the color increases with the depth of the water column, following Beer's law. This also applies, for example, to a swimming pool when the light source is sunlight reflected from the pool's white tiles. In nature, the color may also be modified from blue to green due to the presence of suspended solids or algae. In industry, near-infrared spectroscopy is used with aqueous solutions, as the greater intensity of the lower overtones of water means that glass cuvettes with short path lengths may be employed. To observe the fundamental stretching absorption spectrum of water or of an aqueous solution in the region around 3,500 cm⁻¹ (2.85 μm), a path length of about 25 μm is needed. Also, the cuvette must be both transparent around 3,500 cm⁻¹ and insoluble in water; calcium fluoride is one material in common use for cuvette windows with aqueous solutions. The Raman-active fundamental vibrations may be observed with, for example, a 1 cm sample cell. Aquatic plants, algae, and other photosynthetic organisms can live in water up to hundreds of meters deep, because sunlight can reach them. Practically no sunlight reaches the parts of the oceans below about 1,000 m of depth. The refractive index of liquid water (1.333 at 20 °C) is much higher than that of air (1.0), similar to those of alkanes and ethanol, but lower than those of glycerol (1.473), benzene (1.501), carbon disulfide (1.627), and common types of glass (1.4 to 1.6). The refractive index of ice (1.31) is lower than that of liquid water.
Molecular polarity
In a water molecule, the hydrogen atoms form a 104.5° angle with the oxygen atom. The hydrogen atoms are close to two corners of a tetrahedron centered on the oxygen. At the other two corners are lone pairs of valence electrons that do not participate in the bonding. In a perfect tetrahedron, the atoms would form a 109.5° angle, but the repulsion between the lone pairs is greater than the repulsion between the hydrogen atoms. The O–H bond length is about 0.096 nm. Other substances have a tetrahedral molecular structure, for example methane (CH4) and hydrogen sulfide (H2S). However, oxygen is more electronegative than most other elements, so the oxygen atom has a negative partial charge while the hydrogen atoms are partially positively charged. Along with the bent structure, this gives the molecule an electrical dipole moment, and it is classified as a polar molecule. Water is a good polar solvent, dissolving many salts and hydrophilic organic molecules such as sugars and simple alcohols such as ethanol. Water also dissolves many gases, such as oxygen and carbon dioxide (the latter giving the fizz of carbonated beverages, sparkling wines and beers). In addition, many substances in living organisms, such as proteins, DNA and polysaccharides, are dissolved in water. The interactions between water and the subunits of these biomacromolecules shape protein folding, DNA base pairing, and other phenomena crucial to life (hydrophobic effect). Many organic substances (such as fats, oils, and alkanes) are hydrophobic, that is, insoluble in water. Many inorganic substances are insoluble too, including most metal oxides, sulfides, and silicates.
Hydrogen bonding
Because of its polarity, a molecule of water in the liquid or solid state can form up to four hydrogen bonds with neighboring molecules.
Hydrogen bonds are about ten times as strong as the Van der Waals force that attracts molecules to each other in most liquids. This is the reason why the melting and boiling points of water are much higher than those of other analogous compounds like hydrogen sulfide. They also explain its exceptionally high specific heat capacity (about 4.2 J/(g·K)), heat of fusion (about 333 J/g), heat of vaporization (), and thermal conductivity (between 0.561 and 0.679 W/(m·K)). These properties make water more effective at moderating Earth's climate, by storing heat and transporting it between the oceans and the atmosphere. The hydrogen bonds of water are around 23 kJ/mol (compared to a covalent O-H bond at 492 kJ/mol). Of this, it is estimated that 90% is attributable to electrostatics, while the remaining 10% is partially covalent. These bonds are the cause of water's high surface tension and capillary forces. The capillary action refers to the tendency of water to move up a narrow tube against the force of gravity. This property is relied upon by all vascular plants, such as trees. Self-ionization Water is a weak solution of hydronium hydroxide—there is an equilibrium , in combination with solvation of the resulting hydronium and hydroxide ions. Electrical conductivity and electrolysis Pure water has a low electrical conductivity, which increases with the dissolution of a small amount of ionic material such as common salt. Liquid water can be split into the elements hydrogen and oxygen by passing an electric current through it—a process called electrolysis. The decomposition requires more energy input than the heat released by the inverse process (285.8 kJ/mol, or 15.9 MJ/kg). Mechanical properties Liquid water can be assumed to be incompressible for most purposes: its compressibility ranges from 4.4 to in ordinary conditions. Even in oceans at 4 km depth, where the pressure is 400 atm, water suffers only a 1.8% decrease in volume. The viscosity of water is about 10 Pa·s or 0.01 poise at , and the speed of sound in liquid water ranges between depending on temperature. Sound travels long distances in water with little attenuation, especially at low frequencies (roughly 0.03 dB/km for 1 kHz), a property that is exploited by cetaceans and humans for communication and environment sensing (sonar). Reactivity Metallic elements which are more electropositive than hydrogen, particularly the alkali metals and alkaline earth metals such as lithium, sodium, calcium, potassium and cesium displace hydrogen from water, forming hydroxides and releasing hydrogen. At high temperatures, carbon reacts with steam to form carbon monoxide and hydrogen. On Earth Hydrology is the study of the movement, distribution, and quality of water throughout the Earth. The study of the distribution of water is hydrography. The study of the distribution and movement of groundwater is hydrogeology, of glaciers is glaciology, of inland waters is limnology and distribution of oceans is oceanography. Ecological processes with hydrology are in the focus of ecohydrology. The collective mass of water found on, under, and over the surface of a planet is called the hydrosphere. Earth's approximate water volume (the total water supply of the world) is . Liquid water is found in bodies of water, such as an ocean, sea, lake, river, stream, canal, pond, or puddle. The majority of water on Earth is seawater. Water is also present in the atmosphere in solid, liquid, and vapor states. It also exists as groundwater in aquifers. 
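Two of the figures quoted above can be cross-checked with a few lines of arithmetic: the electrolysis energy of 285.8 kJ/mol converted to a per-kilogram value, and the stated 1.8% volume decrease at 400 atm. The compressibility used below (4.5×10⁻¹⁰ Pa⁻¹) is an assumed representative number from within the quoted range.

```python
# 1) Electrolysis energy per kilogram of water, from the per-mole figure above.
kJ_per_mol = 285.8
molar_mass_kg_per_mol = 0.018015
print(kJ_per_mol / molar_mass_kg_per_mol / 1000)   # ~15.9 MJ/kg, as stated

# 2) Relative volume decrease of water under 400 atm of pressure,
#    assuming a representative compressibility of 4.5e-10 1/Pa.
kappa = 4.5e-10                   # 1/Pa
delta_p = 400 * 101_325           # Pa
print(kappa * delta_p * 100)      # ~1.8 % decrease in volume, as stated
```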
Water is important in many geological processes. Groundwater is present in most rocks, and the pressure of this groundwater affects patterns of faulting. Water in the mantle is responsible for the melt that produces volcanoes at subduction zones. On the surface of the Earth, water is important in both chemical and physical weathering processes. Water, and to a lesser but still significant extent, ice, are also responsible for a large amount of sediment transport that occurs on the surface of the earth. Deposition of transported sediment forms many types of sedimentary rocks, which make up the geologic record of Earth history. Water cycle The water cycle (known scientifically as the hydrologic cycle) is the continuous exchange of water within the hydrosphere, between the atmosphere, soil water, surface water, groundwater, and plants. Water moves perpetually through each of these regions in the water cycle consisting of the following transfer processes: evaporation from oceans and other water bodies into the air and transpiration from land plants and animals into the air. precipitation, from water vapor condensing from the air and falling to the earth or ocean. runoff from the land usually reaching the sea. Most water vapors found mostly in the ocean returns to it, but winds carry water vapor over land at the same rate as runoff into the sea, about 47 Tt per year while evaporation and transpiration happening in land masses also contribute another 72 Tt per year. Precipitation, at a rate of 119 Tt per year over land, has several forms: most commonly rain, snow, and hail, with some contribution from fog and dew. Dew is small drops of water that are condensed when a high density of water vapor meets a cool surface. Dew usually forms in the morning when the temperature is the lowest, just before sunrise and when the temperature of the earth's surface starts to increase. Condensed water in the air may also refract sunlight to produce rainbows. Water runoff often collects over watersheds flowing into rivers. Through erosion, runoff shapes the environment creating river valleys and deltas which provide rich soil and level ground for the establishment of population centers. A flood occurs when an area of land, usually low-lying, is covered with water which occurs when a river overflows its banks or a storm surge happens. On the other hand, drought is an extended period of months or years when a region notes a deficiency in its water supply. This occurs when a region receives consistently below average precipitation either due to its topography or due to its location in terms of latitude. Water resources Water resources are natural resources of water that are potentially useful for humans, for example as a source of drinking water supply or irrigation water. Water occurs as both "stocks" and "flows". Water can be stored as lakes, water vapor, groundwater or aquifers, and ice and snow. Of the total volume of global freshwater, an estimated 69 percent is stored in glaciers and permanent snow cover; 30 percent is in groundwater; and the remaining 1 percent in lakes, rivers, the atmosphere, and biota. The length of time water remains in storage is highly variable: some aquifers consist of water stored over thousands of years but lake volumes may fluctuate on a seasonal basis, decreasing during dry periods and increasing during wet ones. 
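The land branch of the water cycle described above closes as a simple mass balance: net vapor transport from sea to land plus evapotranspiration over land equals precipitation over land. A minimal check using the fluxes quoted above (all in teratonnes of water per year):

```python
# Mass balance of the land branch of the water cycle, using the fluxes quoted above.
vapor_from_sea_to_land = 47      # net atmospheric transport, equal to runoff back to sea
evapotranspiration_land = 72     # evaporation and transpiration over land
precipitation_land = 119         # precipitation over land

print(vapor_from_sea_to_land + evapotranspiration_land == precipitation_land)  # True: 47 + 72 = 119
```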
A substantial fraction of the water supply for some regions consists of water extracted from water stored in stocks, and when withdrawals exceed recharge, stocks decrease. By some estimates, as much as 30 percent of total water used for irrigation comes from unsustainable withdrawals of groundwater, causing groundwater depletion. Seawater and tides Seawater contains about 3.5% sodium chloride on average, plus smaller amounts of other substances. The physical properties of seawater differ from fresh water in some important respects. It freezes at a lower temperature (about ) and its density increases with decreasing temperature to the freezing point, instead of reaching maximum density at a temperature above freezing. The salinity of water in major seas varies from about 0.7% in the Baltic Sea to 4.0% in the Red Sea. (The Dead Sea, known for its ultra-high salinity levels of between 30 and 40%, is really a salt lake.) Tides are the cyclic rising and falling of local sea levels caused by the tidal forces of the Moon and the Sun acting on the oceans. Tides cause changes in the depth of the marine and estuarine water bodies and produce oscillating currents known as tidal streams. The changing tide produced at a given location is the result of the changing positions of the Moon and Sun relative to the Earth coupled with the effects of Earth rotation and the local bathymetry. The strip of seashore that is submerged at high tide and exposed at low tide, the intertidal zone, is an important ecological product of ocean tides. Effects on life From a biological standpoint, water has many distinct properties that are critical for the proliferation of life. It carries out this role by allowing organic compounds to react in ways that ultimately allow replication. All known forms of life depend on water. Water is vital both as a solvent in which many of the body's solutes dissolve and as an essential part of many metabolic processes within the body. Metabolism is the sum total of anabolism and catabolism. In anabolism, water is removed from molecules (through energy requiring enzymatic chemical reactions) in order to grow larger molecules (e.g., starches, triglycerides, and proteins for storage of fuels and information). In catabolism, water is used to break bonds in order to generate smaller molecules (e.g., glucose, fatty acids, and amino acids to be used for fuels for energy use or other purposes). Without water, these particular metabolic processes could not exist. Water is fundamental to both photosynthesis and respiration. Photosynthetic cells use the sun's energy to split off water's hydrogen from oxygen. In the presence of sunlight, hydrogen is combined with (absorbed from air or water) to form glucose and release oxygen. All living cells use such fuels and oxidize the hydrogen and carbon to capture the sun's energy and reform water and in the process (cellular respiration). Water is also central to acid-base neutrality and enzyme function. An acid, a hydrogen ion (, that is, a proton) donor, can be neutralized by a base, a proton acceptor such as a hydroxide ion () to form water. Water is considered to be neutral, with a pH (the negative log of the hydrogen ion concentration) of 7 in an ideal state. Acids have pH values less than 7 while bases have values greater than 7. Aquatic life forms Earth's surface waters are filled with life. The earliest life forms appeared in water; nearly all fish live exclusively in water, and there are many types of marine mammals, such as dolphins and whales. 
Some kinds of animals, such as amphibians, spend portions of their lives in water and portions on land. Plants such as kelp and algae grow in the water and are the basis for some underwater ecosystems. Plankton is generally the foundation of the ocean food chain. Aquatic vertebrates must obtain oxygen to survive, and they do so in various ways. Fish have gills instead of lungs, although some species of fish, such as the lungfish, have both. Marine mammals, such as dolphins, whales, otters, and seals need to surface periodically to breathe air. Some amphibians are able to absorb oxygen through their skin. Invertebrates exhibit a wide range of modifications to survive in poorly oxygenated waters including breathing tubes (see insect and mollusc siphons) and gills (Carcinus). However, as invertebrate life evolved in an aquatic habitat most have little or no specialization for respiration in water. Effects on human civilization Civilization has historically flourished around rivers and major waterways; Mesopotamia, one of the so-called cradles of civilization, was situated between the major rivers Tigris and Euphrates; the ancient society of the Egyptians depended entirely upon the Nile. The early Indus Valley civilization () developed along the Indus River and tributaries that flowed out of the Himalayas. Rome was also founded on the banks of the Italian river Tiber. Large metropolises like Rotterdam, London, Montreal, Paris, New York City, Buenos Aires, Shanghai, Tokyo, Chicago, and Hong Kong owe their success in part to their easy accessibility via water and the resultant expansion of trade. Islands with safe water ports, like Singapore, have flourished for the same reason. In places such as North Africa and the Middle East, where water is more scarce, access to clean drinking water was and is a major factor in human development. Health and pollution Water fit for human consumption is called drinking water or potable water. Water that is not potable may be made potable by filtration or distillation, or by a range of other methods. More than 660 million people do not have access to safe drinking water. Water that is not fit for drinking but is not harmful to humans when used for swimming or bathing is called by various names other than potable or drinking water, and is sometimes called safe water, or "safe for bathing". Chlorine is a skin and mucous membrane irritant that is used to make water safe for bathing or drinking. Its use is highly technical and is usually monitored by government regulations (typically 1 part per million (ppm) for drinking water, and 1–2 ppm of chlorine not yet reacted with impurities for bathing water). Water for bathing may be maintained in satisfactory microbiological condition using chemical disinfectants such as chlorine or ozone or by the use of ultraviolet light. Water reclamation is the process of converting wastewater (most commonly sewage, also called municipal wastewater) into water that can be reused for other purposes. There are 2.3 billion people who reside in nations with water scarcities, which means that each individual receives less than of water annually. of municipal wastewater are produced globally each year. Freshwater is a renewable resource, recirculated by the natural hydrologic cycle, but pressures over access to it result from the naturally uneven distribution in space and time, growing economic demands by agriculture and industry, and rising populations. 
Currently, nearly a billion people around the world lack access to safe, affordable water. In 2000, the United Nations established the Millennium Development Goals for water, aiming to halve by 2015 the proportion of people worldwide without access to safe water and sanitation. Progress toward that goal was uneven, and in 2015 the UN committed to the Sustainable Development Goals of achieving universal access to safe and affordable water and sanitation by 2030. Poor water quality and bad sanitation are deadly; some five million deaths a year are caused by water-related diseases. The World Health Organization estimates that safe water could prevent 1.4 million child deaths from diarrhea each year. In developing countries, 90% of all municipal wastewater still goes untreated into local rivers and streams. Some 50 countries, with roughly a third of the world's population, also suffer from medium or high water scarcity, and 17 of these extract more water annually than is recharged through their natural water cycles. The strain not only affects surface freshwater bodies like rivers and lakes, but it also degrades groundwater resources. Human uses Agriculture The most substantial human use of water is for agriculture, including irrigated agriculture, which accounts for as much as 80 to 90 percent of total human water consumption. In the United States, 42% of freshwater withdrawn for use is for irrigation, but the vast majority of water "consumed" (used and not returned to the environment) goes to agriculture. Access to fresh water is often taken for granted, especially in developed countries that have built sophisticated water systems for collecting, purifying, and delivering water, and removing wastewater. But growing economic, demographic, and climatic pressures are increasing concerns about water issues, leading to increasing competition for fixed water resources and giving rise to the concept of peak water. As populations and economies continue to grow, consumption of water-thirsty meat expands, and new demands rise for biofuels or new water-intensive industries, new water challenges are likely. An assessment of water management in agriculture was conducted in 2007 by the International Water Management Institute in Sri Lanka to see if the world had sufficient water to provide food for its growing population. It assessed the current availability of water for agriculture on a global scale and mapped out locations suffering from water scarcity. It found that a fifth of the world's people, more than 1.2 billion, live in areas of physical water scarcity, where there is not enough water to meet all demands. A further 1.6 billion people live in areas experiencing economic water scarcity, where the lack of investment in water or insufficient human capacity makes it impossible for authorities to satisfy the demand for water. The report found that it would be possible to produce the food required in the future, but that continuation of today's food production and environmental trends would lead to crises in many parts of the world. To avoid a global water crisis, farmers will have to strive to increase productivity to meet growing demands for food, while industries and cities find ways to use water more efficiently. Water scarcity is also caused by the production of water-intensive products. For example, 1 kg of cotton—the equivalent of a pair of jeans—requires thousands of liters of water to produce. While cotton accounts for 2.4% of world water use, the water is consumed in regions that are already at risk of water shortage.
Significant environmental damage has been caused: for example, the diversion of water by the former Soviet Union from the Amu Darya and Syr Darya rivers to produce cotton was largely responsible for the disappearance of the Aral Sea. As a scientific standard On 7 April 1795, the gram was defined in France to be equal to "the absolute weight of a volume of pure water equal to a cube of one-hundredth of a meter, and at the temperature of melting ice". For practical purposes, though, a metallic reference standard one thousand times more massive, the kilogram, was required. Work was therefore commissioned to determine precisely the mass of one liter of water. In spite of the fact that the decreed definition of the gram specified water at 0 °C—a highly reproducible temperature—the scientists chose to redefine the standard and to perform their measurements at the temperature of highest water density, which was measured at the time as 4 °C. The Kelvin temperature scale of the SI system was based on the triple point of water, defined as exactly 273.16 K, but as of May 2019 it is based on the Boltzmann constant instead. The scale is an absolute temperature scale with the same increment as the Celsius temperature scale, which was originally defined according to the boiling point (set to 100 °C) and melting point (set to 0 °C) of water. Natural water consists mainly of the isotopes hydrogen-1 and oxygen-16, but there is also a small quantity of the heavier isotopes oxygen-18, oxygen-17, and hydrogen-2 (deuterium). The percentage of the heavier isotopes is very small, but it still affects the properties of water. Water from rivers and lakes tends to contain fewer heavy isotopes than seawater. Therefore, standard water is defined in the Vienna Standard Mean Ocean Water specification. For drinking The human body contains from 55% to 78% water, depending on body size. To function properly, the body requires from one to several liters of water per day to avoid dehydration; the precise amount depends on the level of activity, temperature, humidity, and other factors. Most of this is ingested through foods or beverages other than drinking straight water. It is not clear how much water intake is needed by healthy people, though the British Dietetic Association advises that 2.5 liters of total water daily is the minimum to maintain proper hydration, including 1.8 liters (6 to 7 glasses) obtained directly from beverages. Medical literature favors a lower consumption, typically 1 liter of water for an average male, excluding extra requirements due to fluid loss from exercise or warm weather. Healthy kidneys can excrete 0.8 to 1 liter of water per hour, but stress such as exercise can reduce this amount. People can drink far more water than necessary while exercising, putting them at risk of water intoxication (hyperhydration), which can be fatal. The popular claim that "a person should consume eight glasses of water per day" seems to have no real basis in science. Studies have shown that extra water intake, especially at mealtime, was associated with weight loss. Adequate fluid intake is helpful in preventing constipation. An original recommendation for water intake in 1945 by the Food and Nutrition Board of the U.S. National Research Council read: "An ordinary standard for diverse persons is 1 milliliter for each calorie of food. Most of this quantity is contained in prepared foods." The latest dietary reference intake report by the U.S.
National Research Council in general recommended, based on the median total water intake from US survey data (including food sources), approximately 3.7 liters of water daily for men and 2.7 liters for women, noting that water contained in food provided approximately 19% of total water intake in the survey. In particular, pregnant and breastfeeding women need additional fluids to stay hydrated. The US Institute of Medicine recommends a higher average intake for men than for women; pregnant women should increase their intake, and breastfeeding women should get 3 liters (12 cups), since an especially large amount of fluid is lost during nursing. Also noted is that normally, about 20% of water intake comes from food, while the rest comes from drinking water and beverages (caffeinated included). Water is excreted from the body in multiple forms: through urine and feces, through sweating, and by exhalation of water vapor in the breath. With physical exertion and heat exposure, water loss will increase and daily fluid needs may increase as well. Humans require water with few impurities. Common impurities include metal salts and oxides, including copper, iron, calcium and lead, and harmful bacteria, such as Vibrio. Some solutes are acceptable and even desirable for taste enhancement and to provide needed electrolytes. The single largest (by volume) freshwater resource suitable for drinking is Lake Baikal in Siberia. Washing Transportation Chemical uses Water is widely used in chemical reactions as a solvent or reactant and less commonly as a solute or catalyst. In inorganic reactions, water is a common solvent, dissolving many ionic compounds, as well as other polar compounds such as ammonia and compounds closely related to water. In organic reactions, it is not usually used as a reaction solvent, because it does not dissolve the reactants well and is amphoteric (acidic and basic) and nucleophilic. Nevertheless, these properties are sometimes desirable. Also, acceleration of Diels-Alder reactions by water has been observed. Supercritical water has recently been a topic of research. Oxygen-saturated supercritical water combusts organic pollutants efficiently. Heat exchange Water and steam are commonly used as heat-exchange fluids, both for cooling and heating, due to water's availability and high heat capacity. Cool water may even be naturally available from a lake or the sea. Transporting heat through the vaporization and condensation of water is especially effective because of water's large latent heat of vaporization. A disadvantage is that metals commonly used in industry, such as steel and copper, are oxidized faster by untreated water and steam. In almost all thermal power stations, water is used as the working fluid (used in a closed loop between boiler, steam turbine, and condenser) and as the coolant (used to exchange the waste heat to a water body or carry it away by evaporation in a cooling tower). In the United States, the cooling of power plants is the largest use of water. In the nuclear power industry, water can also be used as a neutron moderator. In most nuclear reactors, water is both a coolant and a moderator. This provides something of a passive safety measure, as removing the water from the reactor also slows the nuclear reaction down. However, other methods are favored for stopping a reaction, and it is preferred to keep the nuclear core covered with water so as to ensure adequate cooling. Fire considerations Water has a high heat of vaporization and is relatively inert, which makes it a good fire extinguishing fluid.
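A rough sense of scale can be sketched numerically. The short calculation below uses standard handbook values for water's specific heat and latent heat of vaporization (assumed values, not figures quoted in this article) to estimate how much heat one kilogram of water absorbs when it is warmed from room temperature and boiled away:

```python
# Rough estimate of the heat absorbed by water used on a fire.
# Constants are standard handbook values, not figures from this article.
SPECIFIC_HEAT = 4.19      # kJ/(kg*K), liquid water
LATENT_HEAT_VAP = 2257.0  # kJ/kg, latent heat of vaporization near 100 C

def heat_absorbed(mass_kg: float, start_temp_c: float = 20.0) -> float:
    """Heat (kJ) taken up warming water from start_temp_c to 100 C and boiling it off."""
    warming = SPECIFIC_HEAT * (100.0 - start_temp_c) * mass_kg
    vaporizing = LATENT_HEAT_VAP * mass_kg
    return warming + vaporizing

print(f"1 kg of water absorbs about {heat_absorbed(1.0):.0f} kJ "
      "when heated from 20 C and fully evaporated")
# roughly 2,600 kJ in total, most of it in the evaporation step
```

In this estimate the evaporation step accounts for the large majority of the absorbed heat.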
The evaporation of water carries heat away from the fire. It is dangerous to use water on fires involving oils and organic solvents, because many organic materials float on water and the water tends to spread the burning liquid. Use of water in firefighting should also take into account the hazards of a steam explosion, which may occur when water is used on very hot fires in confined spaces, and of a hydrogen explosion, when substances which react with water, such as certain metals or hot carbon in the form of coal, charcoal, coke, or graphite, decompose the water, producing water gas. The power of such explosions was seen in the Chernobyl disaster, although the water involved in this case did not come from firefighting but from the reactor's own water cooling system. A steam explosion occurred when the extreme overheating of the core caused water to flash into steam. A hydrogen explosion may have occurred as a result of a reaction between steam and hot zirconium. Some metallic oxides, most notably those of alkali metals and alkaline earth metals, produce so much heat in reaction with water that a fire hazard can develop. The alkaline earth oxide quicklime, also known as calcium oxide, is a mass-produced substance that is often transported in paper bags. If these are soaked through, they may ignite as their contents react with water. Recreation Humans use water for many recreational purposes, as well as for exercising and for sports. Some of these include swimming, waterskiing, boating, surfing and diving. In addition, some sports, like ice hockey and ice skating, are played on ice. Lakesides, beaches and water parks are popular places for people to go to relax and enjoy recreation. Many find the sound and appearance of flowing water to be calming, and fountains and other flowing water structures are popular decorations. Some keep fish and other flora and fauna inside aquariums or ponds for show, fun, and companionship. Humans also use water for snow sports such as skiing, sledding, snowmobiling or snowboarding, which require the water to be at a low temperature, either as ice or crystallized into snow. Water industry The water industry provides drinking water and wastewater services (including sewage treatment) to households and industry. Water supply facilities include water wells, cisterns for rainwater harvesting, water supply networks, water purification facilities, water tanks, water towers, and water pipes, including old aqueducts. Atmospheric water generators are in development. Drinking water is often collected at springs, extracted from artificial borings (wells) in the ground, or pumped from lakes and rivers. Building more wells in adequate places is thus a possible way to produce more water, assuming the aquifers can supply an adequate flow. Other water sources include rainwater collection. Water may require purification for human consumption. This may involve the removal of undissolved substances, dissolved substances and harmful microbes. Popular methods include filtering with sand, which removes only undissolved material, and chlorination and boiling, which kill harmful microbes. Distillation does all three functions. More advanced techniques exist, such as reverse osmosis. Desalination of abundant seawater is a more expensive solution used in coastal arid climates. The distribution of drinking water is done through municipal water systems, tanker delivery or as bottled water. Governments in many countries have programs to distribute water to the needy at no charge.
Reducing usage by using drinking (potable) water only for human consumption is another option. In some cities such as Hong Kong, seawater is extensively used for flushing toilets citywide in order to conserve freshwater resources. Polluting water may be the biggest single misuse of water; to the extent that a pollutant limits other uses of the water, it becomes a waste of the resource, regardless of benefits to the polluter. Like other types of pollution, this does not enter standard accounting of market costs, being conceived as externalities for which the market cannot account. Thus other people pay the price of water pollution, while the polluting firms' profits are not redistributed to the local population that is harmed by this pollution. Pharmaceuticals consumed by humans often end up in the waterways and can have detrimental effects on aquatic life if they bioaccumulate and if they are not biodegradable. Municipal and industrial wastewater are typically treated at wastewater treatment plants. Mitigation of polluted surface runoff is addressed through a variety of prevention and treatment techniques. Industrial applications Many industrial processes rely on reactions using chemicals dissolved in water, suspension of solids in water slurries, or using water to dissolve and extract substances, or to wash products or process equipment. Processes such as mining, chemical pulping, pulp bleaching, paper manufacturing, textile production, dyeing, printing, and cooling of power plants use large amounts of water, requiring a dedicated water source, and often cause significant water pollution. Water is used in power generation. Hydroelectricity is electricity obtained from hydropower. Hydroelectric power comes from water driving a water turbine connected to a generator. Hydroelectricity is a low-cost, non-polluting, renewable energy source. The energy is supplied by the motion of water. Typically a dam is constructed on a river, creating an artificial lake behind it. Water flowing out of the lake is forced through turbines that turn generators. Pressurized water is used in water blasting and water jet cutters. High-pressure water guns are used for precise cutting. The technique works well, is relatively safe, and is not harmful to the environment. Water is also used to cool machinery and saw blades to prevent overheating. Water is also used in many industrial processes and machines, such as the steam turbine and heat exchanger, in addition to its use as a chemical solvent. Discharge of untreated water from industrial uses is pollution. Pollution includes discharged solutes (chemical pollution) and discharged coolant water (thermal pollution). Industry requires pure water for many applications and uses a variety of purification techniques both in water supply and discharge. Food processing Boiling, steaming, and simmering are popular cooking methods that often require immersing food in water or its gaseous state, steam. Water is also used for dishwashing. Water also plays many critical roles within the field of food science. Solutes such as salts and sugars found in water affect the physical properties of water. The boiling and freezing points of water are affected by solutes, as well as by air pressure, which is in turn affected by altitude. Water boils at lower temperatures with the lower air pressure that occurs at higher elevations.
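This pressure dependence can be estimated with a short numerical sketch. The snippet below is illustrative only: it combines the isothermal barometric formula with the integrated Clausius–Clapeyron equation, using standard textbook constants (assumed values, not figures taken from this article), to estimate water's boiling point at a few altitudes:

```python
import math

R = 8.314          # J/(mol*K), gas constant
DH_VAP = 40660.0   # J/mol, enthalpy of vaporization of water near 100 C
P0 = 101325.0      # Pa, standard sea-level pressure
T0 = 373.15        # K, boiling point of water at P0
M_AIR = 0.0290     # kg/mol, molar mass of air
G = 9.81           # m/s^2, gravitational acceleration
T_ATM = 288.0      # K, assumed mean atmospheric temperature

def pressure_at_altitude(h_m: float) -> float:
    """Isothermal barometric estimate of air pressure (Pa) at altitude h (meters)."""
    return P0 * math.exp(-M_AIR * G * h_m / (R * T_ATM))

def boiling_point_at_pressure(p_pa: float) -> float:
    """Boiling temperature (K) at pressure p via the integrated Clausius-Clapeyron equation."""
    inv_t = 1.0 / T0 - (R / DH_VAP) * math.log(p_pa / P0)
    return 1.0 / inv_t

for h in (0, 1000, 2000, 4000):
    p = pressure_at_altitude(h)
    t_c = boiling_point_at_pressure(p) - 273.15
    print(f"{h:>5} m: P = {p / 1000:6.1f} kPa, water boils near {t_c:5.1f} C")
```

The estimate reproduces the familiar rule of thumb that the boiling point falls by roughly 1 °C for every 300 m of elevation near sea level.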
One mole of sucrose (sugar) per kilogram of water raises the boiling point of water by about 0.51 °C, and one mole of salt per kg raises the boiling point by about 1.02 °C; similarly, increasing the number of dissolved particles lowers water's freezing point. Solutes in water also affect water activity, which affects many chemical reactions and the growth of microbes in food. Water activity can be described as a ratio of the vapor pressure of water in a solution to the vapor pressure of pure water. Solutes in water lower water activity—this is important to know because most bacterial growth ceases at low levels of water activity. Not only does microbial growth affect the safety of food, but also the preservation and shelf life of food. Water hardness is also a critical factor in food processing and may be altered or treated by using a chemical ion exchange system. It can dramatically affect the quality of a product, as well as playing a role in sanitation. Water hardness is classified based on the concentration of calcium carbonate the water contains. Water is classified as soft if it contains less than 100 mg/L (UK) or less than 60 mg/L (US). According to a report published by the Water Footprint organization in 2010, a single kilogram of beef requires roughly 15,000 liters of water; however, the authors also make clear that this is a global average and circumstantial factors determine the amount of water used in beef production. Medical use Water for injection is on the World Health Organization's list of essential medicines. Distribution in nature In the universe Much of the universe's water is produced as a byproduct of star formation. The formation of stars is accompanied by a strong outward wind of gas and dust. When this outflow of material eventually impacts the surrounding gas, the shock waves that are created compress and heat the gas. The water observed is quickly produced in this warm dense gas. On 22 July 2011, a report described the discovery of a gigantic cloud of water vapor containing "140 trillion times more water than all of Earth's oceans combined" around a quasar located 12 billion light years from Earth. According to the researchers, the "discovery shows that water has been prevalent in the universe for nearly its entire existence". Water has been detected in interstellar clouds within the Milky Way. Water probably exists in abundance in other galaxies, too, because its components, hydrogen and oxygen, are among the most abundant elements in the universe. Based on models of the formation and evolution of the Solar System and that of other star systems, most other planetary systems are likely to have similar ingredients. Water vapor Water is present as vapor in:
- Atmosphere of the Sun: in detectable trace amounts
- Atmosphere of Mercury: 3.4%, and large amounts of water in Mercury's exosphere
- Atmosphere of Venus: 0.002%
- Earth's atmosphere: ≈0.40% over the full atmosphere, typically 1–4% at the surface; as well as that of the Moon in trace amounts
- Atmosphere of Mars: 0.03%
- Atmosphere of Ceres
- Atmosphere of Jupiter: 0.0004% – in ices only; and that of its moon Europa
- Atmosphere of Saturn – in ices only; Enceladus: 91% and Dione (exosphere)
- Atmosphere of Uranus – in trace amounts below 50 bar
- Atmosphere of Neptune – found in the deeper layers
- Extrasolar planet atmospheres: including those of HD 189733 b and HD 209458 b, Tau Boötis b, HAT-P-11b, XO-1b, WASP-12b, WASP-17b, and WASP-19b
- Stellar atmospheres: not limited to cooler stars; detected even in giants and supergiants such as Betelgeuse, Mu Cephei, Antares and Arcturus
- Circumstellar disks: including those of more than half of T Tauri stars such as AA Tauri, as well as TW Hydrae, IRC +10216 and APM 08279+5255, VY Canis Majoris and S Persei
Liquid water Liquid water is present on Earth, covering 71% of its surface. Liquid water is also occasionally present in small amounts on Mars. Scientists believe liquid water is present in the Saturnian moons Enceladus, as a roughly 10-kilometre-thick ocean approximately 30–40 kilometers below its south polar surface, and Titan, as a subsurface layer possibly mixed with ammonia. Jupiter's moon Europa has surface characteristics which suggest a subsurface liquid water ocean. Liquid water may also exist on Jupiter's moon Ganymede as a layer sandwiched between high-pressure ice and rock. Water ice Water is present as ice on:
- Mars: under the regolith and at the poles
- the Earth–Moon system: mainly as ice sheets on Earth and in lunar craters and volcanic rocks; NASA reported the detection of water molecules by NASA's Moon Mineralogy Mapper aboard the Indian Space Research Organization's Chandrayaan-1 spacecraft in September 2009
- Ceres
- Jupiter's moons: Europa's surface and also that of Ganymede and Callisto
- Saturn: in the planet's ring system and on the surface and mantle of Titan and Enceladus
- the Pluto–Charon system
- comets and other related Kuiper belt and Oort cloud objects
Ice is also likely present on:
- Mercury's poles
- Tethys
Exotic forms Water and other volatiles probably comprise much of the internal structures of Uranus and Neptune, and the water in the deeper layers may be in the form of ionic water, in which the molecules break down into a soup of hydrogen and oxygen ions, and deeper still as superionic water, in which the oxygen crystallizes but the hydrogen ions float about freely within the oxygen lattice. Water and planetary habitability The existence of liquid water, and to a lesser extent its gaseous and solid forms, on Earth is vital to the existence of life on Earth as we know it. The Earth is located in the habitable zone of the Solar System; if it were slightly closer to or farther from the Sun (about 5%, or about 8 million kilometers), the conditions which allow the three forms to be present simultaneously would be far less likely to exist. Earth's gravity allows it to hold an atmosphere. Water vapor and carbon dioxide in the atmosphere provide a temperature buffer (greenhouse effect) which helps maintain a relatively steady surface temperature. If Earth were smaller, a thinner atmosphere would allow temperature extremes, thus preventing the accumulation of water except in polar ice caps (as on Mars). The surface temperature of Earth has been relatively constant through geologic time despite varying levels of incoming solar radiation (insolation), indicating that a dynamic process governs Earth's temperature via a combination of greenhouse gases and surface or atmospheric albedo. This proposal is known as the Gaia hypothesis. The state of water on a planet depends on ambient pressure, which is determined by the planet's gravity. If a planet is sufficiently massive, the water on it may be solid even at high temperatures, because of the high pressure caused by gravity, as was observed on the exoplanets Gliese 436 b and GJ 1214 b. Law, politics, and crisis Water politics is politics affected by water and water resources. Water, particularly fresh water, is a strategic resource across the world and an important element in many political conflicts. It causes health impacts and damage to biodiversity.
Access to safe drinking water has improved over the last decades in almost every part of the world, but approximately one billion people still lack access to safe water and over 2.5 billion lack access to adequate sanitation. However, some observers have estimated that by 2025 more than half of the world population will be facing water-based vulnerability. A report, issued in November 2009, suggests that by 2030, in some developing regions of the world, water demand will exceed supply by 50%. 1.6 billion people have gained access to a safe water source since 1990. The proportion of people in developing countries with access to safe water is calculated to have improved from 30% in 1970 to 71% in 1990, 79% in 2000, and 84% in 2004. A 2006 United Nations report stated that "there is enough water for everyone", but that access to it is hampered by mismanagement and corruption. In addition, global initiatives to improve the efficiency of aid delivery, such as the Paris Declaration on Aid Effectiveness, have not been taken up by water sector donors as effectively as they have in education and health, potentially leaving multiple donors working on overlapping projects and recipient governments without empowerment to act. The authors of the 2007 Comprehensive Assessment of Water Management in Agriculture cited poor governance as one reason for some forms of water scarcity. Water governance is the set of formal and informal processes through which decisions related to water management are made. Good water governance is primarily about knowing what processes work best in a particular physical and socioeconomic context. Mistakes have sometimes been made by trying to apply 'blueprints' that work in the developed world to developing world locations and contexts. The Mekong river is one example; a review by the International Water Management Institute of policies in six countries that rely on the Mekong river for water found that thorough and transparent cost-benefit analyses and environmental impact assessments were rarely undertaken. They also discovered that Cambodia's draft water law was much more complex than it needed to be. In 2004, the UK charity WaterAid reported that a child dies every 15 seconds from easily preventable water-related diseases, which are often tied to a lack of adequate sanitation. Since 2003, the UN World Water Development Report, produced by the UNESCO World Water Assessment Programme, has provided decision-makers with tools for developing sustainable water policies. The 2023 report states that two billion people (26% of the population) do not have access to drinking water and 3.6 billion (46%) lack access to safely managed sanitation. People in urban areas (2.4 billion) will face water scarcity by 2050. Water scarcity has been described as endemic, due to overconsumption and pollution. The report states that 10% of the world's population lives in countries with high or critical water stress. Yet over the past 40 years, water consumption has increased by around 1% per year, and is expected to grow at the same rate until 2050. Since 2000, flooding in the tropics has quadrupled, while flooding in northern mid-latitudes has increased by a factor of 2.5. The cost of these floods between 2000 and 2019 was 100,000 deaths and $650 million. Organizations concerned with water protection include the International Water Association (IWA), WaterAid, Water 1st, and the American Water Resources Association. 
The International Water Management Institute undertakes projects with the aim of using effective water management to reduce poverty. Water-related conventions include the United Nations Convention to Combat Desertification (UNCCD), the International Convention for the Prevention of Pollution from Ships, the United Nations Convention on the Law of the Sea, and the Ramsar Convention. World Day for Water takes place on 22 March and World Oceans Day on 8 June. In culture Religion Water is considered a purifier in most religions. Faiths that incorporate ritual washing (ablution) include Christianity, Hinduism, Islam, Judaism, the Rastafari movement, Shinto, Taoism, and Wicca. Immersion (or aspersion or affusion) of a person in water is a central Sacrament of Christianity (where it is called baptism); it is also a part of the practice of other religions, including Islam (Ghusl), Judaism (mikvah) and Sikhism (Amrit Sanskar). In addition, a ritual bath in pure water is performed for the dead in many religions including Islam and Judaism. In Islam, the five daily prayers can be done in most cases after washing certain parts of the body using clean water (wudu), unless water is unavailable (see Tayammum). In Shinto, water is used in almost all rituals to cleanse a person or an area (e.g., in the ritual of misogi). In Christianity, holy water is water that has been sanctified by a priest for the purpose of baptism, the blessing of persons, places, and objects, or as a means of repelling evil. In Zoroastrianism, water (āb) is respected as the source of life. Philosophy The Ancient Greek philosopher Empedocles saw water as one of the four classical elements (along with fire, earth, and air), and regarded it as an ylem, or basic substance of the universe. Thales, whom Aristotle portrayed as an astronomer and an engineer, theorized that the earth, which is denser than water, emerged from the water. Thales, a monist, believed further that all things are made from water. Plato believed that the shape of water is an icosahedron – flowing easily compared to the cube-shaped earth. The theory of the four bodily humors associated water with phlegm, as being cold and moist. The classical element of water was also one of the five elements in traditional Chinese philosophy (along with earth, fire, wood, and metal). Some traditional and popular Asian philosophical systems take water as a role model. James Legge's 1891 translation of the Dao De Jing states, "The highest excellence is like (that of) water. The excellence of water appears in its benefiting all things, and in its occupying, without striving (to the contrary), the low place which all men dislike. Hence (its way) is near to (that of) the Tao" and "There is nothing in the world more soft and weak than water, and yet for attacking things that are firm and strong there is nothing that can take precedence of it—for there is nothing (so effectual) for which it can be changed." Guanzi in the "Shui di" 水地 chapter further elaborates on the symbolism of water, proclaiming that "man is water" and attributing natural qualities of the people of different Chinese regions to the character of local water resources. Folklore "Living water" features in Germanic and Slavic folktales as a means of bringing the dead back to life. Note the Grimm fairy-tale ("The Water of Life") and the Russian folkloric dichotomy of living water and dead water. The Fountain of Youth represents a related concept of magical waters allegedly preventing aging.
Art and activism In the significant modernist novel Ulysses (1922) by Irish writer James Joyce, the chapter "Ithaca" takes the form of a catechism of 309 questions and answers, one of which is known as the "water hymn". According to Richard E. Madtes, the hymn is not merely a "monotonous string of facts"; rather, its phrases, like their subject, "ebb and flow, heave and swell, gather and break, until they subside into the calm quiescence of the concluding 'pestilential fens, faded flowerwater, stagnant pools in the waning moon.'" The hymn is considered one of the most remarkable passages in Ithaca, and according to literary critic Hugh Kenner, achieves "the improbable feat of raising to poetry all the clutter of footling information that has accumulated in schoolbooks." The literary motif of water represents the novel's theme of "everlasting, everchanging life," and the hymn represents the culmination of the motif in the novel. Painter and activist Fredericka Foster curated The Value of Water at the Cathedral of St. John the Divine in New York City, which anchored a year-long initiative by the Cathedral on human dependence on water. The largest exhibition ever to appear at the Cathedral, it featured over forty artists, including Jenny Holzer, Robert Longo, Mark Rothko, William Kentridge, April Gornik, Kiki Smith, Pat Steir, Alice Dalton Brown, Teresita Fernandez and Bill Viola. Foster created Think About Water, an ecological collective of artists who use water as their subject or medium. Members include Basia Irland, Aviva Rahmani, Betsy Damon, Diane Burko, Leila Daw, Stacy Levy, Charlotte Coté, Meridel Rubenstein, and Anna Macleod. To mark the 10th anniversary of access to water and sanitation being declared a human right by the UN, the charity WaterAid commissioned ten visual artists to show the impact of clean water on people's lives. Dihydrogen monoxide parody 'Dihydrogen monoxide' is a technically correct but rarely used chemical name for water. This name has been used in a series of hoaxes and pranks that mock scientific illiteracy. This began in 1983, when an April Fools' Day article appeared in a newspaper in Durand, Michigan. The false story described purported safety concerns about the substance. Music The word "Water" has been used by many Florida-based rappers as a catchphrase or ad-lib. Rappers who have done this include BLP Kosher and Ski Mask the Slump God. Some have gone further and dedicated whole songs to the water in Florida, such as the 2023 Danny Towers song "Florida Water". Others have dedicated entire songs to water itself, such as XXXTentacion and Ski Mask the Slump God with their song "H2O".
Physical sciences
Science and medicine
null
33426
https://en.wikipedia.org/wiki/Wave%E2%80%93particle%20duality
Wave–particle duality
Wave-particle duality is the concept in quantum mechanics that fundamental entities of the universe, like photons and electrons, exhibit particle or wave properties according to the experimental circumstances. It expresses the inability of the classical concepts such as particle or wave to fully describe the behavior of quantum objects. During the 19th and early 20th centuries, light was found to behave as a wave then later discovered to have a particulate behavior, whereas electrons behaved like particles in early experiments then later discovered to have wavelike behavior. The concept of duality arose to name these seeming contradictions. History Wave-particle duality of light In the late 17th century, Sir Isaac Newton had advocated that light was corpuscular (particulate), but Christiaan Huygens took an opposing wave description. While Newton had favored a particle approach, he was the first to attempt to reconcile both wave and particle theories of light, and the only one in his time to consider both, thereby anticipating modern wave-particle duality. Thomas Young's interference experiments in 1801, and François Arago's detection of the Poisson spot in 1819, validated Huygens' wave models. However, the wave model was challenged in 1901 by Planck's law for black-body radiation. Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E, that was proportional to the frequency of its associated electromagnetic wave. In 1905 Einstein interpreted the photoelectric effect also with discrete energies for photons. These both indicate particle behavior. Despite confirmation by various experimental observations, the photon theory (as it came to be called) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light. The experimental evidence of particle-like momentum and energy seemingly contradicted the earlier work demonstrating wave-like interference of light. Wave-particle duality of matter The contradictory evidence from electrons arrived in the opposite order. Many experiments by J. J. Thomson, Robert Millikan, and Charles Wilson among others had shown that free electrons had particle properties, for instance, the measurement of their mass by Thomson in 1897. In 1924, Louis de Broglie introduced his theory of electron waves in his PhD thesis Recherches sur la théorie des quanta. He suggested that an electron around a nucleus could be thought of as being a standing wave and that electrons and all matter could be considered as waves. He merged the idea of thinking about them as particles, and of thinking of them as waves. He proposed that particles are bundles of waves (wave packets) that move with a group velocity and have an effective mass. Both of these depend upon the energy, which in turn connects to the wavevector and the relativistic formulation of Albert Einstein a few years before. Following de Broglie's proposal of wave–particle duality of electrons, in 1925 to 1926, Erwin Schrödinger developed the wave equation of motion for electrons. This rapidly became part of what was called by Schrödinger undulatory mechanics, now called the Schrödinger equation and also "wave mechanics". In 1926, Max Born gave a talk in an Oxford meeting about using the electron diffraction experiments to confirm the wave–particle duality of electrons. 
In his talk, Born cited experimental data from Clinton Davisson in 1923. It happened that Davisson also attended that talk. Davisson returned to his lab in the US and switched his experimental focus to testing the wave properties of electrons. In 1927, the wave nature of electrons was empirically confirmed by two experiments. The Davisson–Germer experiment at Bell Labs measured electrons scattered from Ni metal surfaces. George Paget Thomson and Alexander Reid at Cambridge University scattered electrons through thin metal films and observed concentric diffraction rings. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned. These experiments were rapidly followed by the first non-relativistic diffraction model for electrons by Hans Bethe based upon the Schrödinger equation, which is very close to how electron diffraction is now described. Significantly, Davisson and Germer noticed that their results could not be interpreted using a Bragg's law approach as the positions were systematically different; the approach of Bethe, which includes the refraction due to the average potential, yielded more accurate results. Davisson and Thomson were awarded the Nobel Prize in 1937 for the experimental verification of the wave properties of electrons by diffraction experiments. Similar crystal diffraction experiments were carried out by Otto Stern in the 1930s using beams of helium atoms and hydrogen molecules. These experiments further verified that wave behavior is not limited to electrons and is a general property of matter on a microscopic scale. Classical waves and particles Before proceeding further, it is critical to introduce some definitions of waves and particles both in a classical sense and in quantum mechanics. Waves and particles are two very different models for physical systems, each with an exceptionally large range of application. Classical waves obey the wave equation; they have continuous values at many points in space that vary with time; their spatial extent can vary with time due to diffraction, and they display wave interference. Physical systems exhibiting wave behavior and described by the mathematics of wave equations include water waves, seismic waves, sound waves, radio waves, and more. Classical particles obey classical mechanics; they have some center of mass and extent; they follow trajectories characterized by positions and velocities that vary over time; in the absence of forces their trajectories are straight lines. Stars, planets, spacecraft, tennis balls, bullets, sand grains: particle models work across a huge scale. Unlike waves, particles do not exhibit interference. Some experiments on quantum systems show wave-like interference and diffraction; some experiments show particle-like collisions. Quantum systems obey wave equations that predict particle probability distributions. These particles are associated with discrete values called quanta for properties such as spin, electric charge and magnetic moment. These particles arrive one at a time, randomly, but build up a pattern. The probability that an experiment will detect a particle at a point in space is the square of the magnitude of a complex-valued wave. Experiments can be designed to exhibit diffraction and interference of the probability amplitude. Thus, statistically, large numbers of the random particle appearances can display wave-like properties. Similar equations govern collective excitations called quasiparticles.
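This statistical picture can be illustrated with a short numerical sketch. The toy calculation below is illustrative only: the wavelength and geometry are chosen simply to make the fringes easy to resolve and are not values from any particular experiment. It models each slit of a double-slit arrangement as a point source of a complex wave and compares the detection probability obtained by adding the two amplitudes before squaring (interference) with the probability obtained by adding the squared amplitudes directly (no interference):

```python
import numpy as np

# Toy model: each slit acts as a point source of a complex wave.
wavelength = 50e-9          # assumed wavelength, m (chosen for visible fringes)
k = 2 * np.pi / wavelength  # wavenumber
slit_sep = 2e-6             # slit separation, m
screen_dist = 1.0           # slit-to-screen distance, m

x = np.linspace(-0.05, 0.05, 2001)                 # positions on the detector screen, m
r1 = np.sqrt(screen_dist**2 + (x - slit_sep / 2) ** 2)
r2 = np.sqrt(screen_dist**2 + (x + slit_sep / 2) ** 2)

psi1 = np.exp(1j * k * r1) / np.sqrt(r1)           # amplitude from slit 1
psi2 = np.exp(1j * k * r2) / np.sqrt(r2)           # amplitude from slit 2

p_interference = np.abs(psi1 + psi2) ** 2          # both slits open: fringes
p_no_interference = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # probabilities added directly

contrast = lambda p: (p.max() - p.min()) / p.max()
print("fringe contrast when amplitudes are superposed:", round(contrast(p_interference), 3))
print("fringe contrast when probabilities are added:   ", round(contrast(p_no_interference), 3))
```

With the amplitudes superposed the computed contrast is close to 1, while the direct sum of probabilities varies only slightly across the screen, mirroring the distinction between wave-like and particle-like predictions described above.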
Electrons behaving as waves and particles The electron double slit experiment is a textbook demonstration of wave-particle duality. A modern version of the experiment is shown schematically in the figure below. Electrons from the source hit a wall with two thin slits. A mask behind the slits can expose either one slit or both slits. The results for high electron intensity are shown on the right, first for each slit individually, then with both slits open. With either slit open there is a smooth intensity variation due to diffraction. When both slits are open the intensity oscillates, characteristic of wave interference. Having observed wave behavior, now change the experiment, lowering the intensity of the electron source until only one or two electrons are detected per second, appearing as individual particles, dots in the video. As shown in the movie clip below, the dots on the detector seem at first to be random. After some time a pattern emerges, eventually forming an alternating sequence of light and dark bands. The experiment shows a wave interference pattern revealed one particle at a time: quantum mechanical electrons display both wave and particle behavior. Similar results have been shown for atoms and even large molecules. Observing photons as particles While electrons were thought to be particles until their wave properties were discovered, for photons it was the opposite. In 1887, Heinrich Hertz observed that when light with sufficient frequency hits a metallic surface, the surface emits cathode rays, what are now called electrons. In 1902, Philipp Lenard discovered that the maximum possible energy of an ejected electron is unrelated to the intensity of the light. This observation is at odds with classical electromagnetism, which predicts that the electron's energy should be proportional to the intensity of the incident radiation. In 1905, Albert Einstein suggested that the energy of the light must occur in a finite number of energy quanta. He postulated that electrons can receive energy from an electromagnetic field only in discrete units (quanta or photons): an amount of energy E that was related to the frequency f of the light by E = hf, where h is the Planck constant (6.626×10−34 J⋅s). Only photons of a high enough frequency (above a certain threshold value set by the work function) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal he used, but photons of red light did not. One photon of light above the threshold frequency could release only one electron; the higher the frequency of a photon, the higher the kinetic energy of the emitted electron, but no amount of light below the threshold frequency could release an electron. Despite confirmation by various experimental observations, the photon theory (as it came to be called later) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light. Both discrete (quantized) energies and also momentum are, classically, particle attributes. There are many other examples where photons display particle-type properties, for instance in solar sails, where sunlight could propel a space vehicle, and laser cooling, where the momentum is used to slow down (cool) atoms. These are a different aspect of wave-particle duality. Which slit experiments In a "which way" experiment, particle detectors are placed at the slits to determine which slit the electron traveled through.
When these detectors are inserted, quantum mechanics predicts that the interference pattern disappears because the detected part of the electron wave has changed (loss of coherence). Many similar proposals have been made, and many have been converted into experiments and tried out. Every single one shows the same result: as soon as electron trajectories are detected, interference disappears. A simple example of these "which way" experiments uses a Mach–Zehnder interferometer, a device based on lasers and mirrors, sketched below. A laser beam along the input port splits at a half-silvered mirror. Part of the beam continues straight, passes through a glass phase shifter, then reflects downward. The other part of the beam reflects from the first mirror, then turns at another mirror. The two beams meet at a second half-silvered beam splitter. Each output port has a camera to record the results. The two beams show interference characteristic of wave propagation. If the laser intensity is turned down sufficiently low, individual dots appear on the cameras, building up the pattern as in the electron example. The first beam-splitter mirror acts like the double slits, but in the interferometer case we can remove the second beam splitter. Then the beam heading down ends up in output port 1: any photon on this path gets counted in that port. The beam going across the top ends up in output port 2. In either case the counts will track the photon trajectories. However, as soon as the second beam splitter is removed, the interference pattern disappears.
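The role of the second beam splitter can be captured in a few lines of linear algebra. The sketch below is an idealized, lossless model rather than a description of any specific laboratory setup: the two paths are represented by a two-component complex amplitude, a symmetric 50/50 beam splitter and a phase shifter are applied as unitary matrices, and the output-port probabilities are compared with and without the recombining beam splitter:

```python
import numpy as np

def beam_splitter() -> np.ndarray:
    """Symmetric 50/50 beam splitter acting on the two path amplitudes."""
    return np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def phase_shifter(phi: float) -> np.ndarray:
    """Extra phase phi picked up in one arm (e.g. the glass plate)."""
    return np.array([[np.exp(1j * phi), 0], [0, 1]])

def port_probabilities(phi: float, second_splitter: bool) -> np.ndarray:
    state = np.array([1.0 + 0j, 0.0 + 0j])   # photon enters through one input port
    state = beam_splitter() @ state           # first (splitting) beam splitter
    state = phase_shifter(phi) @ state        # relative phase between the arms
    if second_splitter:
        state = beam_splitter() @ state       # recombining beam splitter
    return np.abs(state) ** 2                 # detection probability at each output port

for phi in (0.0, np.pi / 2, np.pi):
    with_bs = port_probabilities(phi, True)
    without_bs = port_probabilities(phi, False)
    print(f"phase {phi:4.2f}: with 2nd splitter {with_bs.round(3)}, "
          f"without {without_bs.round(3)}")
```

In this model, with the second beam splitter in place the port probabilities oscillate with the arm phase (cos² and sin² of half the phase difference), while with it removed each port simply registers half of the photons regardless of phase, which is the loss of interference that accompanies knowing the path.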
Physical sciences
Quantum mechanics
null
18307589
https://en.wikipedia.org/wiki/Woodlouse
Woodlouse
Woodlice are terrestrial isopods in the suborder Oniscidea. Their name is derived from being often found in old wood, and from louse, a parasitic insect, although woodlice are neither parasitic nor insects. Woodlice evolved from marine isopods which are presumed to have colonised land in the Carboniferous, though the oldest known fossils are from the Cretaceous period. This makes them unusual among crustaceans, being one of the few lineages to have transitioned into a fully terrestrial environment. Woodlice have many common names, and although often referred to as terrestrial isopods, some species live semiterrestrially or have recolonised aquatic environments like those of the genus Ligia. Woodlice in the families Armadillidae, Armadillidiidae, Eubelidae, Tylidae and some other genera can roll up into a roughly spherical shape (conglobate) as a defensive mechanism or to conserve moisture; others have partial rolling ability, but most cannot conglobate at all. Woodlice have a basic morphology of a segmented, dorso-ventrally flattened body with seven pairs of jointed legs, and specialised appendages for respiration. Like other peracarids, female woodlice carry fertilised eggs in their marsupium, through which they provide developing embryos with water, oxygen and nutrients. The immature young hatch as mancae and receive further maternal care in some species. Juveniles then go through a series of moults before reaching maturity. Mancae are born with six segments and gain an additional one after their first moult. While the broader phylogeny of the Oniscideans has not been settled, eleven infraorders/sections are agreed on, with 3,937 species validated in the scientific literature in 2004 and 3,710 species in 2014, out of an estimated total of 5,000–7,000 species extant worldwide. Key adaptations to terrestrial life have led to a highly diverse set of animals; from the marine littoral zone and subterranean lakes to arid deserts and desert slopes above sea-level, woodlice have established themselves in most terrestrial biomes and represent the full range of transitional forms and behaviours for living on land. Woodlice are widely studied in the contexts of evolutionary biology, behavioural ecology and nutrient cycling. They are popular as terrarium pets because of their varied colour and texture forms, conglobating ability and ease of care. Recent research has shown that the grouping as traditionally defined may not be monophyletic, with some taxa like Ligia and possibly Tylidae more closely related to other marine isopod groups, though the majority of woodlice probably do constitute a clade. Common names Common names for woodlice vary throughout the English-speaking world. A number of common names make reference to the fact that some species of woodlice can roll up into a ball. Other names compare the woodlouse to a pig. The collective noun is a quabble of woodlice.
Common names include:
- armadillo bug
- boat-builder (Newfoundland, Canada)
- butcher boy or butchy boy (Australia, mostly around Melbourne)
- carpenter or cafner (Newfoundland and Labrador, Canada)
- cheeselog (Reading, England)
- cheesy bobs (Guildford, England)
- cheesy bug (North West Kent, Gravesend, England)
- chiggy pig (Devon, England)
- chisel pig
- chucky pig (Devon, Gloucestershire, Herefordshire, England)
- doodlebug (also used for the larva of an antlion and for the cockchafer)
- gramersow (Cornwall, England)
- hog-louse
- millipedus
- QuaQua (regional to Beddau and Keppoch Street, Roath)
- granny grey (Wales)
- pill bug (usually applied only to the genus Armadillidium)
- potato bug
- roll up bug
- roly-poly
- slater (Scotland, Ulster, New Zealand and Australia)
- sow bug
- woodbunter
- wood bug (British Columbia, Canada)
- wood pig (mochyn coed, Welsh)
Description and life cycle The woodlouse has a shell-like exoskeleton, which it must progressively shed as it grows. The moult takes place in two stages; the back half is lost first, followed two or three days later by the front. This method of moulting is different from that of most arthropods, which shed their cuticle in a single process. It is theorized that this allows woodlice to maintain partial mobility while moulting. A female woodlouse will keep fertilised eggs in a marsupium on the underside of her body, which covers the under surface of the thorax and is formed by overlapping plates attached to the bases of the first five pairs of legs. They hatch into offspring that look like small white woodlice curled up in balls, although initially without the last pair of legs. The mother then appears to "give birth" to her offspring. A few species are also capable of reproducing asexually. Despite being crustaceans like lobsters or crabs, woodlice are said to have an unpleasant taste similar to "strong urine", due to their high concentration of uric acid, one of the chemicals in urine, though other sources say that they taste like prawn, shrimp, or crawfish. Pillbugs and pill millipedes Pill bugs (woodlice of the families Armadillidiidae and Armadillidae) can be confused with pill millipedes of the order Glomerida. Both of these groups of terrestrial segmented arthropods are about the same size. They live in very similar habitats, share a similar diet, and conglobate as a defense mechanism. Pill millipedes and pillbugs appear superficially similar to the naked eye. This is an example of convergent evolution. These two groups can be distinguished in several ways. Glomeris millipedes have 19 (males) or 17 (females) pairs of legs, while pill bugs have only 7 pairs of legs. Additionally, pill bugs have a thorax consisting of 7 body segments, 5 abdominal segments, and a pleotelson, while Glomeris millipedes lack a visually defined thorax and have 12 body segments total. While the uropods of pillbugs are relatively small, flipping a pill bug over will reveal the small uropod overlapping the pleotelson. Some woodlouse species, like Armadillidium maculatum, seem to display Batesian mimicry of certain pill millipedes such as Glomeris marginata. Ecology Many members of Oniscidea live in terrestrial, non-aquatic environments, breathing through trachea-like lungs in their paddle-shaped hind legs (pleopods), called pleopodal lungs.
Woodlice need moisture because they rapidly lose water by excretion and through their cuticle, and so are usually found in damp, dark places, such as under rocks and logs, although one species, the desert-dwelling Hemilepistus reaumuri, inhabits "the driest habitat conquered by any species of crustacean". They are usually nocturnal and are detritivores, feeding mostly on dead plant matter. A few woodlice have returned to water. Evolutionarily ancient species are amphibious, such as the marine-intertidal sea slater (Ligia oceanica), which belongs to the family Ligiidae. Other examples include some Haloniscus species from Australia (family Scyphacidae), and in the northern hemisphere several species of Trichoniscidae and Thailandoniscus annae (family Styloniscidae). Species for which aquatic life is assumed include Typhlotricholigoides aquaticus (Mexico) and Cantabroniscus primitivus (Spain). Woodlice are eaten by a wide range of insectivores, including spiders of the genus Dysdera, such as the woodlouse spider Dysdera crocata, and land planarians of the genus Luteostriata, such as Luteostriata abundans. Woodlice are sensitive to agricultural pesticides, but can tolerate some toxic heavy metals, which they accumulate in the hepatopancreas. Thus they can be used as bioindicators of heavy metal pollution. Evolutionary history The oldest fossils of woodlice are known from the mid-Cretaceous around 100 million years ago, from amber deposits found in Spain, France and Myanmar. These include a specimen of the living genus Ligia from the Charentese amber of France; the genus Myanmariscus from the Burmese amber of Myanmar, which belongs to the Synocheta and likely the Styloniscidae; Eoligiiscus tarraconensis, which belongs to the family Ligiidae, Autrigoniscus resinicola, which belongs to the family Trichoniscidae, and Heraclitus helenae, which possibly belongs to the Detonidae, all from Spanish amber; and indeterminate specimens from Charentese amber. The widespread distribution and apparent diversification of woodlice in the mid-Cretaceous imply that the origin of woodlice predates the breakup of Pangaea, likely during the Carboniferous. As pests Although woodlice, like earthworms, are generally considered beneficial in gardens for their role in controlling certain pests, producing compost and overturning the soil, some species like those of the genus Armadillidium have also been known to feed on cultivated plants, such as ripening strawberries and tender seedlings. Woodlice can also invade homes in groups searching for moisture, and their presence can indicate dampness problems. They are not generally regarded as a serious household pest as they do not spread disease and do not damage sound wood or structures. They can be easily removed with the help of vacuum cleaners, chemical sprays, insect repellents, and insect killers, or by removing the dampness. As pets Woodlice have become popular household pets for children as well as a hobby for invertebrate and insect enthusiasts or collectors. Porcellionidae (sowbugs) and Armadillidiidae (pillbugs) are seen often as they are the most common terrestrial isopods in Europe and North America. While some isopod species are kept purely as pets, some can also be used as an addition to bioactive terrariums, due to their ability to break down decaying organic materials. Morphs and species in the hobby As isopods are bred in captivity, some hobbyists will discover a new mutation, or they will selectively breed isopods for a specific color/pattern expression.
These populations with unique appearances are referred to as 'morphs'. Morphs are given nicknames, usually by the breeder who discovered or created the morph. The standard appearance of an isopod species is often referred to as 'Wild Type'. Some isopod morphs are characterized by polygenic traits, such as 'Orange Vigor' (Armadillidium vulgare) and 'Pink Rubber Ducky' (Cubaris sp. "Rubber Ducky"), the result of selectively breeding isopods that best match the desired appearance. These genes can vary greatly in their expression, as they are not the result of a specific genetic mutation. Other morphs are the result of dominant or recessive mutations, as seen with 'T+/T− Albino' and 'Whiteout' (several spp.). As an example, T+ albino isopods are the result of an isopod being born without the ability to produce melanin, removing all black pigmentation. However, they are believed to be tyrosinase-positive (hence the T+), and therefore can still create some darker pigments such as brown and purple. T− albino isopods are thought to lack both melanin and tyrosinase, and therefore only express light yellows, oranges, and white. Confusion can often arise due to the rate at which unidentified or undescribed isopod species are introduced to the hobby. This has contributed significantly to the genus Cubaris being considered a wastebasket taxon, as many of the unidentified or undescribed isopod species are incorrectly labelled as "Cubaris sp." even when they do not fit the formal description of the genus. In the British Isles Classification There is general agreement that there are five main lineages in suborder Oniscidea, although the phylogenetic relationships between them are unsettled. There are two main schemes for the classification, which differ in which group is considered sister to the remaining oniscideans. One places Ligiidae in section Diplocheta, with the remaining families divided between four sections in infraorder Holoverticata. The other places Tylidae in infraorder Tylomorpha, with the remaining families placed in three sections in infraorder Ligiamorpha. The former scheme is presented below.
Infraorder/section Diplocheta
- Ligiidae
Infraorder Holoverticata
- Section Tylida: Tylidae
- Section Microcheta: Mesoniscidae
- Section Synocheta: Buddelundiellidae, Schoebliidae, Styloniscidae, Titaniidae, Trichoniscidae, Turanoniscidae
- Section Crinocheta: Agnaridae, Alloniscidae, Armadillidae, Armadillidiidae, Balloniscidae, Bathytropidae, Berytoniscidae, Cylisticidae, Delatorreiidae, Detonidae, Eubelidae, Halophilosciidae, Olibrinidae, Oniscidae, Philosciidae, Platyarthridae, Porcellionidae, Pudeoniscidae, Rhyscotidae, Scleropactidae, Scyphacidae, Spelaeoniscidae, Stenoniscidae, Tendosphaeridae, Trachelipodidae
Biology and health sciences
Crustaceans
null
18310979
https://en.wikipedia.org/wiki/Group%20%28stratigraphy%29
Group (stratigraphy)
In geology, a group is a lithostratigraphic unit consisting of a series of related formations that have been classified together. Formations are the fundamental unit of stratigraphy. Groups may sometimes be combined into supergroups. Groups are useful for showing relationships between formations, and they are also useful for small-scale mapping or for studying the stratigraphy of large regions. Geologists exploring a new area have sometimes defined groups when they believe the strata within the groups can be divided into formations during subsequent investigations of the area. It is possible for only some of the strata making up a group to be divided into formations. An example of a group is the Glen Canyon Group, which includes (in ascending order) the Wingate Sandstone, the Moenave Formation, the Kayenta Formation, and the Navajo Sandstone. Each of the formations can be distinguished from its neighbor by its lithology, but all were deposited in the same vast erg. Not all these formations are present in all areas where the Glen Canyon Group is present. Another example of a group is the Vadito Group of northern New Mexico. Although many of its strata have been divided into formations, such as the Glenwoody Formation, other strata (particularly in the lower part of the group) remain undivided into formations. Some well-known groups of northwestern Europe have in the past also been used as units for chronostratigraphy and geochronology. These are the Rotliegend and Zechstein (both Permian in age); the Buntsandstein, Muschelkalk, and Keuper (Triassic in age); and the Lias, Dogger, and Malm (Jurassic in age) groups. Because of the confusion this causes, the official geologic timescale of the ICS does not contain any of these names. As with other lithostratigraphic ranks, a group must not be defined by fossil taxonomy.
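The group-formation hierarchy described above can be thought of as a simple nested data structure. The sketch below models the Glen Canyon Group example from this article that way; the struct layout and field names are illustrative assumptions for this example only, not any formal stratigraphic data standard.

```c
#include <stdio.h>

/* Illustrative sketch: a lithostratigraphic group as a named list of formations.
   Field names and layout are assumptions made for this example. */
struct strat_group {
    const char *name;
    const char *formations[8]; /* listed in ascending (oldest to youngest) order */
    int n_formations;
};

int main(void) {
    struct strat_group glen_canyon = {
        .name = "Glen Canyon Group",
        .formations = { "Wingate Sandstone", "Moenave Formation",
                        "Kayenta Formation", "Navajo Sandstone" },
        .n_formations = 4
    };

    printf("%s (oldest to youngest):\n", glen_canyon.name);
    for (int i = 0; i < glen_canyon.n_formations; i++)
        printf("  %d. %s\n", i + 1, glen_canyon.formations[i]);
    return 0;
}
```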
Physical sciences
Stratigraphy
Earth science
102847
https://en.wikipedia.org/wiki/Solid-state%20physics
Solid-state physics
Solid-state physics is the study of rigid matter, or solids, through methods such as solid-state chemistry, quantum mechanics, crystallography, electromagnetism, and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties. Thus, solid-state physics forms a theoretical basis of materials science. Along with solid-state chemistry, it also has direct applications in the technology of transistors and semiconductors. Background Solid materials are formed from densely packed atoms, which interact intensely. These interactions produce the mechanical (e.g. hardness and elasticity), thermal, electrical, magnetic and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged in a regular, geometric pattern (crystalline solids, which include metals and ordinary water ice) or irregularly (an amorphous solid such as common window glass). The bulk of solid-state physics, as a general theory, is focused on crystals. Primarily, this is because the periodicity of atoms in a crystal — its defining characteristic — facilitates mathematical modeling. Likewise, crystalline materials often have electrical, magnetic, optical, or mechanical properties that can be exploited for engineering purposes. The forces between the atoms in a crystal can take a variety of forms. For example, in a crystal of sodium chloride (common salt), the crystal is made up of ionic sodium and chlorine, and held together with ionic bonds. In others, the atoms share electrons and form covalent bonds. In metals, electrons are shared amongst the whole crystal in metallic bonding. Finally, the noble gases do not undergo any of these types of bonding. In solid form, the noble gases are held together with van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding. History The physical properties of solids have been common subjects of scientific inquiry for centuries, but a separate field going by the name of solid-state physics did not emerge until the 1940s, in particular with the establishment of the Division of Solid State Physics (DSSP) within the American Physical Society. The DSSP catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids. By the early 1960s, the DSSP was the largest division of the American Physical Society. Large communities of solid state physicists also emerged in Europe after World War II, in particular in England, Germany, and the Soviet Union. In the United States and Europe, solid state became a prominent field through its investigations into semiconductors, superconductivity, nuclear magnetic resonance, and diverse other phenomena. During the early Cold War, research in solid state physics was often not restricted to solids, which led some physicists in the 1970s and 1980s to found the field of condensed matter physics, which organized around common techniques used to investigate solids, liquids, plasmas, and other complex matter. Today, solid-state physics is broadly considered to be the subfield of condensed matter physics, often referred to as hard condensed matter, that focuses on the properties of solids with regular crystal lattices. 
Crystal structure and properties Many properties of materials are affected by their crystal structure. This structure can be investigated using a range of crystallographic techniques, including X-ray crystallography, neutron diffraction and electron diffraction. The sizes of the individual crystals in a crystalline solid material vary depending on the material involved and the conditions when it was formed. Most crystalline materials encountered in everyday life are polycrystalline, with the individual crystals being microscopic in scale, but macroscopic single crystals can be produced either naturally (e.g. diamonds) or artificially. Real crystals feature defects or irregularities in the ideal arrangements, and it is these defects that critically determine many of the electrical and mechanical properties of real materials. Electronic properties Properties of materials such as electrical conduction and heat capacity are investigated by solid state physics. An early model of electrical conduction was the Drude model, which applied kinetic theory to the electrons in a solid. By assuming that the material contains immobile positive ions and an "electron gas" of classical, non-interacting electrons, the Drude model was able to explain electrical and thermal conductivity and the Hall effect in metals, although it greatly overestimated the electronic heat capacity. Arnold Sommerfeld combined the classical Drude model with quantum mechanics in the free electron model (or Drude-Sommerfeld model). Here, the electrons are modelled as a Fermi gas, a gas of particles which obey the quantum mechanical Fermi–Dirac statistics. The free electron model gave improved predictions for the heat capacity of metals, however, it was unable to explain the existence of insulators. The nearly free electron model is a modification of the free electron model which includes a weak periodic perturbation meant to model the interaction between the conduction electrons and the ions in a crystalline solid. By introducing the idea of electronic bands, the theory explains the existence of conductors, semiconductors and insulators. The nearly free electron model rewrites the Schrödinger equation for the case of a periodic potential. The solutions in this case are known as Bloch states. Since Bloch's theorem applies only to periodic potentials, and since unceasing random movements of atoms in a crystal disrupt periodicity, this use of Bloch's theorem is only an approximation, but it has proven to be a tremendously valuable approximation, without which most solid-state physics analysis would be intractable. Deviations from periodicity are treated by quantum mechanical perturbation theory. Modern research Modern research topics in solid-state physics include: High-temperature superconductivity Quasicrystals Spin glass Strongly correlated materials Two-dimensional materials Nanomaterials
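As a rough numerical illustration of the Drude picture described earlier, the snippet below evaluates the model's DC conductivity, sigma = n e^2 tau / m, for copper-like inputs; the carrier density and relaxation time are assumed order-of-magnitude values chosen for the example, not measured data.

```c
#include <stdio.h>

/* Drude model estimate of DC conductivity: sigma = n * e^2 * tau / m.
   The numerical inputs are illustrative assumptions of the right order of magnitude. */
int main(void) {
    const double n   = 8.5e28;    /* assumed conduction electron density, m^-3 (copper-like) */
    const double e   = 1.602e-19; /* elementary charge, C */
    const double m   = 9.109e-31; /* electron mass, kg */
    const double tau = 2.5e-14;   /* assumed mean time between collisions, s */

    double sigma = n * e * e * tau / m;  /* conductivity, S/m */
    double rho   = 1.0 / sigma;          /* resistivity, ohm*m */

    printf("Drude conductivity: %.2e S/m\n", sigma);
    printf("Drude resistivity:  %.2e ohm*m\n", rho);
    return 0;
}
```

With these assumed inputs the estimate lands in the 10^7 S/m range typical of good metals, the kind of order-of-magnitude agreement that made the model useful despite its classical assumptions.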
Physical sciences
Basics_8
null
102871
https://en.wikipedia.org/wiki/Cellular%20respiration
Cellular respiration
Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products. It is a vital process that occurs in the cells of living organisms. Respiration can be either aerobic, requiring oxygen, or anaerobic; some organisms can switch between aerobic and anaerobic respiration. The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions. Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes. Aerobic respiration Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate production in glycolysis, and requires that pyruvate be transported to the mitochondria in order to be oxidized by the citric acid cycle. The products of this process are carbon dioxide and water, and the energy transferred is used to make bonds between ADP and a third phosphate group to form ATP (adenosine triphosphate) by substrate-level phosphorylation, as well as NADH and FADH2. Simplified, the overall reaction is C6H12O6 + 6 O2 → 6 CO2 + 6 H2O, with a free energy change of about −2880 kJ per mole of glucose. The negative ΔG indicates that the reaction is exothermic (exergonic) and can occur spontaneously. The potential of NADH and FADH2 is converted to more ATP through an electron transport chain with oxygen and protons (hydrogen ions) as the "terminal electron acceptors". Most of the ATP produced by aerobic cellular respiration is made by oxidative phosphorylation. The energy released is used to create a chemiosmotic potential by pumping protons across a membrane. This potential is then used to drive ATP synthase and produce ATP from ADP and a phosphate group. Biology textbooks often state that 38 ATP molecules can be made per oxidized glucose molecule during cellular respiration (2 from glycolysis, 2 from the Krebs cycle, and about 34 from the electron transport system). However, this maximum yield is never quite reached because of losses due to leaky membranes as well as the cost of moving pyruvate and ADP into the mitochondrial matrix, and current estimates range around 29 to 30 ATP per glucose. Aerobic metabolism is up to 15 times more efficient than anaerobic metabolism (which yields 2 molecules of ATP per 1 molecule of glucose).
However, some anaerobic organisms, such as methanogens, are able to continue with anaerobic respiration, yielding more ATP by using inorganic molecules other than oxygen as final electron acceptors in the electron transport chain. They share the initial pathway of glycolysis, but aerobic metabolism continues with the Krebs cycle and oxidative phosphorylation. The post-glycolytic reactions take place in the mitochondria in eukaryotic cells, and in the cytoplasm in prokaryotic cells. Although plants are net consumers of carbon dioxide and producers of oxygen via photosynthesis, plant respiration accounts for about half of the CO2 generated annually by terrestrial ecosystems. Glycolysis Glycolysis is a metabolic pathway that takes place in the cytosol of cells in all living organisms. Glycolysis can be literally translated as "sugar splitting", and occurs regardless of oxygen's presence or absence. The process converts one molecule of glucose into two molecules of pyruvate (pyruvic acid), generating energy in the form of two net molecules of ATP. Four molecules of ATP per glucose are actually produced, but two are consumed as part of the preparatory phase. The initial phosphorylation of glucose is required to increase the reactivity (decrease its stability) in order for the molecule to be cleaved by the enzyme aldolase into two three-carbon molecules, which are subsequently converted into pyruvate. During the pay-off phase of glycolysis, four phosphate groups are transferred to four ADP by substrate-level phosphorylation to make four ATP, and two NADH are also produced during the pay-off phase. The overall reaction can be expressed this way: Glucose + 2 NAD+ + 2 Pi + 2 ADP → 2 pyruvate + 2 NADH + 2 ATP + 2 H+ + 2 H2O + energy. Starting with glucose, 1 ATP is used to donate a phosphate to glucose to produce glucose 6-phosphate. Glycogen can be converted into glucose 6-phosphate as well with the help of glycogen phosphorylase. During energy metabolism, glucose 6-phosphate becomes fructose 6-phosphate. An additional ATP is used to phosphorylate fructose 6-phosphate into fructose 1,6-bisphosphate with the help of phosphofructokinase. Fructose 1,6-bisphosphate then splits into two phosphorylated molecules with three-carbon chains, which are later degraded into pyruvate. Oxidative decarboxylation of pyruvate Pyruvate is oxidized to acetyl-CoA and CO2 by the pyruvate dehydrogenase complex (PDC). The PDC contains multiple copies of three enzymes and is located in the mitochondria of eukaryotic cells and in the cytosol of prokaryotes. In the conversion of pyruvate to acetyl-CoA, one molecule of NADH and one molecule of CO2 are formed. Citric acid cycle The citric acid cycle is also called the Krebs cycle or the tricarboxylic acid cycle. When oxygen is present, acetyl-CoA is produced from the pyruvate molecules created from glycolysis. Once acetyl-CoA is formed, aerobic or anaerobic respiration can occur. When oxygen is present, the mitochondria will undergo aerobic respiration, which leads to the Krebs cycle. However, if oxygen is not present, fermentation of the pyruvate molecule will occur. In the presence of oxygen, when acetyl-CoA is produced, the molecule then enters the citric acid cycle (Krebs cycle) inside the mitochondrial matrix, and is oxidized to CO2 while at the same time reducing NAD+ to NADH. NADH can be used by the electron transport chain to create further ATP as part of oxidative phosphorylation. To fully oxidize the equivalent of one glucose molecule, two acetyl-CoA must be metabolized by the Krebs cycle.
Two low-energy waste products, H2O and CO2, are created during this cycle. The citric acid cycle is an 8-step process involving 18 different enzymes and co-enzymes. During the cycle, acetyl-CoA (2 carbons) + oxaloacetate (4 carbons) yields citrate (6 carbons), which is rearranged to a more reactive form called isocitrate (6 carbons). Isocitrate is modified to become α-ketoglutarate (5 carbons), succinyl-CoA, succinate, fumarate, malate and, finally, oxaloacetate. The net gain from one cycle is 3 NADH and 1 FADH2 as hydrogen (proton plus electron) carrying compounds and 1 high-energy GTP, which may subsequently be used to produce ATP. Thus, the total yield from 1 glucose molecule (2 pyruvate molecules) is 6 NADH, 2 FADH2, and 2 ATP. Oxidative phosphorylation In eukaryotes, oxidative phosphorylation occurs in the mitochondrial cristae. It comprises the electron transport chain that establishes a proton gradient (chemiosmotic potential) across the boundary of the inner membrane by oxidizing the NADH produced from the Krebs cycle. ATP is synthesized by the ATP synthase enzyme when the chemiosmotic gradient is used to drive the phosphorylation of ADP. The electrons are finally transferred to exogenous oxygen and, with the addition of two protons, water is formed. Efficiency of ATP production The table below describes the reactions involved when one glucose molecule is fully oxidized into carbon dioxide. It is assumed that all the reduced coenzymes are oxidized by the electron transport chain and used for oxidative phosphorylation. Although there is a theoretical yield of 38 ATP molecules per glucose during cellular respiration, such conditions are generally not realized because of losses such as the cost of moving pyruvate (from glycolysis), phosphate, and ADP (substrates for ATP synthesis) into the mitochondria. All are actively transported using carriers that utilize the stored energy in the proton electrochemical gradient. Pyruvate is taken up by a specific, low Km transporter to bring it into the mitochondrial matrix for oxidation by the pyruvate dehydrogenase complex. The phosphate carrier (PiC) mediates the electroneutral exchange (antiport) of phosphate (Pi) for OH− or the symport of phosphate and protons (H+) across the inner membrane, and the driving force for moving phosphate ions into the mitochondria is the proton motive force. The ATP-ADP translocase (also called adenine nucleotide translocase, ANT) is an antiporter and exchanges ADP and ATP across the inner membrane. The driving force is due to the ATP (−4) having a more negative charge than the ADP (−3), and thus it dissipates some of the electrical component of the proton electrochemical gradient. The outcome of these transport processes using the proton electrochemical gradient is that more than 3 H+ are needed to make 1 ATP. Obviously, this reduces the theoretical efficiency of the whole process and the likely maximum is closer to 28–30 ATP molecules. In practice the efficiency may be even lower because the inner membrane of the mitochondria is slightly leaky to protons. Other factors may also dissipate the proton gradient, creating apparently leaky mitochondria. An uncoupling protein known as thermogenin is expressed in some cell types and is a channel that can transport protons. When this protein is active in the inner membrane it short-circuits the coupling between the electron transport chain and ATP synthesis. The potential energy from the proton gradient is not used to make ATP but generates heat.
This is particularly important in brown fat thermogenesis of newborn and hibernating mammals. According to some newer sources, the ATP yield during aerobic respiration is not 36–38, but only about 30–32 ATP molecules / 1 molecule of glucose, because: ATP : NADH+H+ and ATP : FADH2 ratios during the oxidative phosphorylation appear to be not 3 and 2, but 2.5 and 1.5 respectively. Unlike in the substrate-level phosphorylation, the stoichiometry here is difficult to establish. ATP synthase produces 1 ATP / 3 H+. However, the exchange of matrix ATP for cytosolic ADP and Pi (antiport with OH− or symport with H+) mediated by ATP–ADP translocase and the phosphate carrier consumes 1 H+ / 1 ATP as a result of regeneration of the transmembrane potential changed during this transfer, so the net ratio is 1 ATP : 4 H+. The mitochondrial electron transport chain proton pump transfers across the inner membrane 10 H+ / 1 NADH+H+ (4 + 2 + 4) or 6 H+ / 1 FADH2 (2 + 4). So the final stoichiometry is 1 NADH+H+ : 10 H+ : 10/4 ATP = 1 NADH+H+ : 2.5 ATP; 1 FADH2 : 6 H+ : 6/4 ATP = 1 FADH2 : 1.5 ATP. The ATP : NADH+H+ ratio for NADH coming from glycolysis during the oxidative phosphorylation is 1.5, as for FADH2, if hydrogen atoms (2H++2e−) are transferred from cytosolic NADH+H+ to mitochondrial FAD by the glycerol phosphate shuttle located in the inner mitochondrial membrane, and 2.5 in the case of the malate-aspartate shuttle transferring hydrogen atoms from cytosolic NADH+H+ to mitochondrial NAD+. So finally we have, per molecule of glucose: Substrate-level phosphorylation: 2 ATP from glycolysis + 2 ATP (directly GTP) from the Krebs cycle. Oxidative phosphorylation: 2 NADH+H+ from glycolysis: 2 × 1.5 ATP (if the glycerol phosphate shuttle transfers hydrogen atoms) or 2 × 2.5 ATP (malate-aspartate shuttle); 2 NADH+H+ from the oxidative decarboxylation of pyruvate and 6 from the Krebs cycle: 8 × 2.5 ATP; 2 FADH2 from the Krebs cycle: 2 × 1.5 ATP. Altogether this gives 4 + 3 (or 5) + 20 + 3 = 30 (or 32) ATP per molecule of glucose. These figures may still require further tweaking as new structural details become available. The above value of 3 H+ / ATP for the synthase assumes that the synthase translocates 9 protons, and produces 3 ATP, per rotation. The number of protons depends on the number of c subunits in the Fo c-ring, and it is now known that this is 10 in yeast Fo and 8 for vertebrates. Including one H+ for the transport reactions, this means that synthesis of one ATP requires 1 + 10/3 ≈ 4.33 protons in yeast and 1 + 8/3 ≈ 3.67 in vertebrates. This would imply that in human mitochondria the 10 protons from oxidizing NADH would produce 2.72 ATP (instead of 2.5) and the 6 protons from oxidizing succinate or ubiquinol would produce 1.64 ATP (instead of 1.5). This is consistent with experimental results within the margin of error described in a recent review. The total ATP yield in ethanol or lactic acid fermentation is only 2 molecules coming from glycolysis, because pyruvate is not transferred to the mitochondrion and finally oxidized to the carbon dioxide (CO2), but reduced to ethanol or lactic acid in the cytoplasm. Fermentation Without oxygen, pyruvate (pyruvic acid) is not metabolized by cellular respiration but undergoes a process of fermentation. The pyruvate is not transported into the mitochondrion but remains in the cytoplasm, where it is converted to waste products that may be removed from the cell. This serves the purpose of oxidizing the electron carriers so that they can perform glycolysis again, and of removing the excess pyruvate.
Fermentation oxidizes NADH to NAD+ so it can be re-used in glycolysis. In the absence of oxygen, fermentation prevents the buildup of NADH in the cytoplasm and provides NAD+ for glycolysis. This waste product varies depending on the organism. In skeletal muscles, the waste product is lactic acid. This type of fermentation is called lactic acid fermentation. In strenuous exercise, when energy demands exceed energy supply, the respiratory chain cannot process all of the hydrogen atoms joined by NADH. During anaerobic glycolysis, NAD+ regenerates when pairs of hydrogen combine with pyruvate to form lactate. Lactate formation is catalyzed by lactate dehydrogenase in a reversible reaction. Lactate can also be used as an indirect precursor for liver glycogen. During recovery, when oxygen becomes available, NAD+ attaches to hydrogen from lactate to form ATP. In yeast, the waste products are ethanol and carbon dioxide. This type of fermentation is known as alcoholic or ethanol fermentation. The ATP generated in this process is made by substrate-level phosphorylation, which does not require oxygen. Fermentation is less efficient at using the energy from glucose: only 2 ATP are produced per glucose, compared to the 38 ATP per glucose nominally produced by aerobic respiration. Glycolytic ATP, however, is produced more quickly. For prokaryotes to continue a rapid growth rate when they are shifted from an aerobic environment to an anaerobic environment, they must increase the rate of the glycolytic reactions. For multicellular organisms, during short bursts of strenuous activity, muscle cells use fermentation to supplement the ATP production from the slower aerobic respiration, so fermentation may be used by a cell even before the oxygen levels are depleted, as is the case in sports that do not require athletes to pace themselves, such as sprinting. Anaerobic respiration Cellular respiration is the process by which biological fuels are oxidised in the presence of an inorganic electron acceptor, such as oxygen, to produce large amounts of energy and drive the bulk production of ATP. Anaerobic respiration is used by microorganisms, either bacteria or archaea, in which neither oxygen (aerobic respiration) nor pyruvate derivatives (fermentation) is the final electron acceptor. Rather, an inorganic acceptor such as sulfate (), nitrate (), or sulfur (S) is used. Such organisms could be found in unusual places such as underwater caves or near hydrothermal vents at the bottom of the ocean., as well as in anoxic soils or sediment in wetland ecosystems. In July 2019, a scientific study of Kidd Mine in Canada discovered sulfur-breathing organisms which live below the surface. These organisms are also remarkable because they consume minerals such as pyrite as their food source.
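The ATP bookkeeping discussed above can be checked with a short calculation. The sketch below simply re-tallies the article's own stoichiometry (2.5 ATP per NADH, 1.5 ATP per FADH2, plus substrate-level phosphorylation), with the choice of shuttle as a parameter; it is an arithmetic illustration, not an independent biochemical estimate.

```c
#include <stdio.h>

/* Tally of ATP per glucose using the stoichiometry quoted above:
   2.5 ATP per NADH, 1.5 ATP per FADH2, plus substrate-level phosphorylation. */
int main(void) {
    const double atp_per_nadh  = 2.5;
    const double atp_per_fadh2 = 1.5;

    double substrate_level = 2.0 + 2.0;               /* glycolysis + Krebs cycle (as GTP) */
    double krebs_and_pdh   = 8.0 * atp_per_nadh;      /* 2 NADH from pyruvate oxidation + 6 from Krebs */
    double fadh2           = 2.0 * atp_per_fadh2;     /* 2 FADH2 from the Krebs cycle */

    /* Cytosolic NADH from glycolysis: the yield depends on the shuttle used. */
    double glycolytic_nadh_gp = 2.0 * atp_per_fadh2;  /* glycerol phosphate shuttle */
    double glycolytic_nadh_ma = 2.0 * atp_per_nadh;   /* malate-aspartate shuttle  */

    printf("Glycerol phosphate shuttle: %.0f ATP per glucose\n",
           substrate_level + glycolytic_nadh_gp + krebs_and_pdh + fadh2);
    printf("Malate-aspartate shuttle:   %.0f ATP per glucose\n",
           substrate_level + glycolytic_nadh_ma + krebs_and_pdh + fadh2);
    return 0;
}
```

Running the tally reproduces the 30 (glycerol phosphate shuttle) and 32 (malate-aspartate shuttle) ATP figures quoted in the text.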
Biology and health sciences
Cell processes
null
102908
https://en.wikipedia.org/wiki/System%20call
System call
In computing, a system call (commonly abbreviated to syscall) is the programmatic way in which a computer program requests a service from the operating system on which it is executed. This may include hardware-related services (for example, accessing a hard disk drive or accessing the device's camera), creation and execution of new processes, and communication with integral kernel services such as process scheduling. System calls provide an essential interface between a process and the operating system. In most systems, system calls can only be made from userspace processes, while in some systems, OS/360 and successors for example, privileged system code also issues system calls. For embedded systems, system calls typically do not change the privilege mode of the CPU. Privileges The architecture of most modern processors, with the exception of some embedded systems, involves a security model. For example, the rings model specifies multiple privilege levels under which software may be executed: a program is usually limited to its own address space so that it cannot access or modify other running programs or the operating system itself, and is usually prevented from directly manipulating hardware devices (e.g. the frame buffer or network devices). However, many applications need access to these components, so system calls are made available by the operating system to provide well-defined, safe implementations for such operations. The operating system executes at the highest level of privilege, and allows applications to request services via system calls, which are often initiated via interrupts. An interrupt automatically puts the CPU into some elevated privilege level and then passes control to the kernel, which determines whether the calling program should be granted the requested service. If the service is granted, the kernel executes a specific set of instructions over which the calling program has no direct control, returns the privilege level to that of the calling program, and then returns control to the calling program. The library as an intermediary Generally, systems provide a library or API that sits between normal programs and the operating system. On Unix-like systems, that API is usually part of an implementation of the C library (libc), such as glibc, that provides wrapper functions for the system calls, often named the same as the system calls they invoke. On Windows NT, that API is part of the Native API, in the ntdll.dll library; this is an undocumented API used by implementations of the regular Windows API and directly used by some system programs on Windows. The library's wrapper functions expose an ordinary function calling convention (a subroutine call on the assembly level) for using the system call, as well as making the system call more modular. Here, the primary function of the wrapper is to place all the arguments to be passed to the system call in the appropriate processor registers (and maybe on the call stack as well), and also to set a unique system call number for the kernel to call. In this way the library, which exists between the OS and the application, increases portability. The call to the library function itself does not cause a switch to kernel mode and is usually a normal subroutine call (using, for example, a "CALL" assembly instruction in some instruction set architectures (ISAs)). The actual system call does transfer control to the kernel (and is more implementation-dependent and platform-dependent than the library call abstracting it).
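To make the wrapper-versus-raw-call distinction concrete, here is a minimal sketch, assuming a Linux system with glibc, that writes a message twice: once through the ordinary libc wrapper write() and once through the generic syscall() interface with the SYS_write number. It is not portable beyond Linux.

```c
/* Linux-specific sketch: the same kernel service invoked via the libc
   wrapper and via the generic syscall(2) interface. */
#define _GNU_SOURCE
#include <unistd.h>      /* write() wrapper, STDOUT_FILENO, syscall() */
#include <sys/syscall.h> /* SYS_write system call number */

int main(void) {
    const char msg1[] = "via libc wrapper write()\n";
    const char msg2[] = "via raw syscall(SYS_write, ...)\n";

    /* Ordinary call: libc places the arguments and the syscall number for us. */
    write(STDOUT_FILENO, msg1, sizeof msg1 - 1);

    /* Direct call: we supply the system call number ourselves. */
    syscall(SYS_write, STDOUT_FILENO, msg2, sizeof msg2 - 1);

    return 0;
}
```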
For example, in Unix-like systems, fork and execve are C library functions that in turn execute instructions that invoke the fork and exec system calls. Making the system call directly in the application code is more complicated and may require embedded assembly code to be used (in C and C++), as well as requiring knowledge of the low-level binary interface for the system call operation, which may be subject to change over time and thus not be part of the application binary interface; the library functions are meant to abstract this away. On exokernel based systems, the library is especially important as an intermediary. On exokernels, libraries shield user applications from the very low level kernel API, and provide abstractions and resource management. IBM's OS/360, DOS/360 and TSS/360 implement most system calls through a library of assembly language macros, although there are a few services with a call linkage. This reflects their origin at a time when programming in assembly language was more common than high-level language usage. IBM system calls were therefore not directly executable by high-level language programs, but required a callable assembly language wrapper subroutine. Since then, IBM has added many services that can be called from high level languages in, e.g., z/OS and z/VSE. In more recent release of MVS/SP and in all later MVS versions, some system call macros generate Program Call (PC). Examples and tools On Unix, Unix-like and other POSIX-compliant operating systems, popular system calls are open, read, write, close, wait, exec, fork, exit, and kill. Many modern operating systems have hundreds of system calls. For example, Linux and OpenBSD each have over 300 different calls, NetBSD has close to 500, FreeBSD has over 500, Windows has close to 2000, divided between win32k (graphical) and ntdll (core) system calls while Plan 9 has 54. Tools such as strace, ftrace and truss allow a process to execute from start and report all system calls the process invokes, or can attach to an already running process and intercept any system call made by the said process if the operation does not violate the permissions of the user. This special ability of the program is usually also implemented with system calls such as ptrace or system calls on files in procfs. Typical implementations Implementing system calls requires a transfer of control from user space to kernel space, which involves some sort of architecture-specific feature. A typical way to implement this is to use a software interrupt or trap. Interrupts transfer control to the operating system kernel, so software simply needs to set up some register with the system call number needed, and execute the software interrupt. This is the only technique provided for many RISC processors, but CISC architectures such as x86 support additional techniques. For example, the x86 instruction set contains the instructions SYSCALL/SYSRET and SYSENTER/SYSEXIT (these two mechanisms were independently created by AMD and Intel, respectively, but in essence they do the same thing). These are "fast" control transfer instructions that are designed to quickly transfer control to the kernel for a system call without the overhead of an interrupt. Linux 2.5 began using this on the x86, where available; formerly it used the INT instruction, where the system call number was placed in the EAX register before interrupt 0x80 was executed. An older mechanism is the call gate; originally used in Multics and later, for example, see call gate on the Intel x86. 
It allows a program to call a kernel function directly using a safe control transfer mechanism, which the operating system sets up in advance. This approach has been unpopular on x86, presumably due to the requirement of a far call (a call to a procedure located in a different segment than the current code segment) which uses x86 memory segmentation and the resulting lack of portability it causes, and the existence of the faster instructions mentioned above. For IA-64 architecture, EPC (Enter Privileged Code) instruction is used. The first eight system call arguments are passed in registers, and the rest are passed on the stack. In the IBM System/360 mainframe family, and its successors, a Supervisor Call instruction (), with the number in the instruction rather than in a register, implements a system call for legacy facilities in most of IBM's own operating systems, and for all system calls in Linux. In later versions of MVS, IBM uses the Program Call (PC) instruction for many newer facilities. In particular, PC is used when the caller might be in Service Request Block (SRB) mode. The PDP-11 minicomputer used the , and instructions, which, similar to the IBM System/360 and x86 , put the code in the instruction; they generate interrupts to specific addresses, transferring control to the operating system. The VAX 32-bit successor to the PDP-11 series used the , , and instructions to make system calls to privileged code at various levels; the code is an argument to the instruction. Categories of system calls System calls can be grouped roughly into six major categories: Process control create process (for example, fork on Unix-like systems, or NtCreateProcess in the Windows NT Native API) terminate process load, execute get/set process attributes wait for time, wait event, signal event allocate and free memory File management create file, delete file open, close read, write, reposition get/set file attributes Device management request device, release device read, write, reposition get/set device attributes logically attach or detach devices Information maintenance get/set total system information (including time, date, computer name, enterprise etc.) get/set process, file, or device metadata (including author, opener, creation time and date, etc.) Communication create, delete communication connection send, receive messages transfer status information attach or detach remote devices Protection get/set file permissions Processor mode and context switching System calls in most Unix-like systems are processed in kernel mode, which is accomplished by changing the processor execution mode to a more privileged one, but no process context switch is necessary although a privilege context switch does occur. The hardware sees the world in terms of the execution mode according to the processor status register, and processes are an abstraction provided by the operating system. A system call does not generally require a context switch to another process; instead, it is processed in the context of whichever process invoked it. In a multithreaded process, system calls can be made from multiple threads. The handling of such calls is dependent on the design of the specific operating system kernel and the application runtime environment. The following list shows typical models followed by operating systems: Many-to-one model: All system calls from any user thread in a process are handled by a single kernel-level thread. 
This model has a serious drawback: any blocking system call (like awaiting input from the user) can freeze all the other threads. Also, since only one thread can access the kernel at a time, this model cannot utilize multiple processor cores. One-to-one model: Every user thread gets attached to a distinct kernel-level thread during a system call. This model solves the above problem of blocking system calls. It is found in all major Linux distributions, macOS, iOS, recent Windows and Solaris versions. Many-to-many model: In this model, a pool of user threads is mapped to a pool of kernel threads. All system calls from a user thread pool are handled by the threads in their corresponding kernel thread pool. Hybrid model: This model implements both many-to-many and one-to-one models depending upon the choice made by the kernel. This is found in old versions of IRIX, HP-UX and Solaris.
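As a concrete illustration of the process-control category of system calls listed earlier (create process, execute, wait), here is a minimal POSIX-style sketch using the fork, exec and wait wrappers; it assumes a Unix-like system with /bin/echo present and keeps error handling to a bare minimum.

```c
/* POSIX sketch of the process-control system calls: fork, exec, wait. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* create process */
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: load and execute a new program image. */
        execl("/bin/echo", "echo", "hello from the child process", (char *)NULL);
        perror("execl");                /* only reached if exec fails */
        _exit(EXIT_FAILURE);
    }
    int status = 0;
    waitpid(pid, &status, 0);           /* wait for the child to terminate */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```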
Technology
Operating systems
null
102959
https://en.wikipedia.org/wiki/Substance%20abuse
Substance abuse
Substance misuse, also known as drug misuse or, in older vernacular, substance abuse, is the use of a drug in amounts or by methods that are harmful to the individual or others. It is a form of substance-related disorder. Differing definitions of drug misuse are used in public health, medical, and criminal justice contexts. In some cases, criminal or anti-social behavior occurs when the person is under the influence of a drug, and long-term personality changes in individuals may also occur. In addition to possible physical, social, and psychological harm, the use of some drugs may also lead to criminal penalties, although these vary widely depending on the local jurisdiction. Drugs most often associated with this term include alcohol, amphetamines, barbiturates, benzodiazepines, cannabis, cocaine, hallucinogens, methaqualone, and opioids. The exact cause of substance misuse is not clear, but there are two predominant theories: either a genetic predisposition or a habit learned from others, which, if addiction develops, manifests itself as a chronic debilitating disease. In 2010, about 5% of adults (230 million) used an illicit substance. Of these, 27 million have high-risk drug use—otherwise known as recurrent drug use—causing harm to their health, causing psychological problems, and/or causing social problems that put them at risk of those dangers. In 2015, substance use disorders resulted in 307,400 deaths, up from 165,000 deaths in 1990. Of these, the highest numbers are from alcohol use disorders at 137,500, opioid use disorders at 122,100 deaths, amphetamine use disorders at 12,200 deaths, and cocaine use disorders at 11,100. Classification Public health definitions Public health practitioners have attempted to look at substance use from a broader perspective than the individual, emphasizing the role of society, culture, and availability. Some health professionals choose to avoid the terms alcohol or drug "abuse" in favor of language considered more objective, such as "substance and alcohol type problems" or "harmful/problematic use" of drugs. The Health Officers Council of British Columbia — in their 2005 policy discussion paper, A Public Health Approach to Drug Control in Canada — has adopted a public health model of psychoactive substance use that challenges the simplistic black-and-white construction of the binary (or complementary) antonyms "use" vs. "abuse". This model explicitly recognizes a spectrum of use, ranging from beneficial use to chronic dependence. Medical definitions 'Drug abuse' is no longer a current medical diagnosis in either of the most widely used diagnostic tools in the world: the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM) and the World Health Organization's International Classification of Diseases (ICD). Value judgment Philip Jenkins suggests that there are two issues with the term "drug abuse". First, what constitutes a "drug" is debatable. For instance, GHB, a naturally occurring substance in the central nervous system, is considered a drug and is illegal in many countries, while nicotine is not officially considered a drug in most countries. Second, the word "abuse" implies a recognized standard of use for any substance. Drinking an occasional glass of wine is considered acceptable in most Western countries, while drinking several bottles is seen as abuse. Strict temperance advocates, who may or may not be religiously motivated, would see drinking even one glass as abuse.
Some groups (Mormons, as prescribed in "the Word of Wisdom") even condemn caffeine use in any quantity. Similarly, adopting the view that any (recreational) use of cannabis or substituted amphetamines constitutes drug abuse implies a decision made that the substance is harmful, even in minute quantities. In the U.S., drugs have been legally classified into five categories, schedule I, II, III, IV, or V, in the Controlled Substances Act. The drugs are classified according to their deemed potential for abuse. The usage of some drugs is strongly correlated. For example, the consumption of seven illicit drugs (amphetamines, cannabis, cocaine, ecstasy, legal highs, LSD, and magic mushrooms) is correlated, with a Pearson correlation coefficient r > 0.4 for every pair of them; consumption of cannabis is strongly correlated (r > 0.5) with the usage of nicotine (tobacco), heroin is correlated with cocaine (r > 0.4) and methadone (r > 0.45), and is strongly correlated with crack (r > 0.5). Drug misuse Drug misuse is a term used commonly when prescription medication with sedative, anxiolytic, analgesic, or stimulant properties is used for mood alteration or intoxication, ignoring the fact that overdose of such medicines can sometimes have serious adverse effects. It sometimes involves drug diversion from the individual for whom it was prescribed. Prescription misuse has been defined differently and rather inconsistently based on the status of drug prescription, the uses without a prescription, intentional use to achieve intoxicating effects, route of administration, co-ingestion with alcohol, and the presence or absence of dependence symptoms. Chronic use of certain substances leads to a change in the central nervous system known as a "tolerance" to the medicine, such that more of the substance is needed in order to produce desired effects. With some substances, stopping or reducing use can cause withdrawal symptoms to occur, but this is highly dependent on the specific substance in question. The rate of prescription drug use is fast overtaking illegal drug use in the United States. According to the National Institute on Drug Abuse, 7 million people were taking prescription drugs for nonmedical use in 2010. Among 12th graders, nonmedical prescription drug use is now second only to cannabis. In 2011, "Nearly 1 in 12 high school seniors reported nonmedical use of Vicodin; 1 in 20 reported such use of OxyContin." Both of these drugs contain opioids. Fentanyl is an opioid that is 100 times more potent than morphine, and 50 times more potent than heroin. A 2017 survey of 12th graders in the United States found OxyContin misuse of 2.7 percent, compared to 5.5 percent at its peak in 2005. Misuse of the combination hydrocodone/paracetamol was at its lowest since a peak of 10.5 percent in 2003. This decrease may be related to public health initiatives and decreased availability. Avenues for obtaining prescription drugs for misuse are varied: sharing between family and friends, illegally buying medications at school or work, and often "doctor shopping" to find multiple physicians to prescribe the same medication, without the knowledge of other prescribers. Increasingly, law enforcement is holding physicians responsible for prescribing controlled substances without fully establishing patient controls, such as a patient "drug contract".
Concerned physicians are educating themselves on how to identify medication-seeking behavior in their patients, and are becoming familiar with "red flags" that would alert them to potential prescription drug abuse. Signs and symptoms Depending on the actual compound, drug abuse including alcohol may lead to health problems, social problems, morbidity, injuries, unprotected sex, violence, deaths, motor vehicle accidents, homicides, suicides, physical dependence or psychological addiction. There is a high rate of suicide in alcoholics and other drug abusers. The reasons believed to cause the increased risk of suicide include the long-term abuse of alcohol and other drugs causing physiological distortion of brain chemistry, as well as social isolation. Another factor is that the acute intoxicating effects of the drugs may make suicide more likely to occur. Suicide is also very common in adolescent alcohol abusers, with 1 in 4 suicides in adolescents being related to alcohol abuse. In the US, approximately 30% of suicides are related to alcohol abuse. Alcohol abuse is also associated with increased risks of committing criminal offences including child abuse, domestic violence, rapes, burglaries and assaults. Drug abuse, including alcohol and prescription drugs, can induce symptomatology which resembles mental illness. This can occur both in the intoxicated state and also during withdrawal. In some cases, substance-induced psychiatric disorders can persist long after detoxification, such as prolonged psychosis or depression after amphetamine or cocaine abuse. A protracted withdrawal syndrome can also occur with symptoms persisting for months after cessation of use. Benzodiazepines are the most notable drug for inducing prolonged withdrawal effects, with symptoms sometimes persisting for years after cessation of use. Alcohol, barbiturate and benzodiazepine withdrawal can all potentially be fatal. Abuse of hallucinogens, although extremely unlikely, may in some individuals trigger delusional and other psychotic phenomena long after cessation of use. This is mainly a risk with deliriants, and most unlikely with psychedelics and dissociatives. Cannabis may trigger panic attacks during intoxication, and with continued use it may cause a state similar to dysthymia. Researchers have found that daily cannabis use and the use of high-potency cannabis are independently associated with a higher chance of developing schizophrenia and other psychotic disorders. Severe anxiety and depression are often induced by sustained alcohol abuse. Even sustained moderate alcohol use may increase anxiety and depression levels in some individuals. In most cases, these drug-induced psychiatric disorders fade away with prolonged abstinence. Similarly, although substance abuse induces many changes to the brain, there is evidence that many of these alterations are reversed following periods of prolonged abstinence. Impulsivity Impulsivity is characterized by actions based on sudden desires, whims, or inclinations rather than careful thought. Individuals with substance abuse have higher levels of impulsivity, and individuals who use multiple drugs tend to be more impulsive. A number of studies using the Iowa gambling task as a measure of impulsive behavior found that drug-using populations made more risky choices compared to healthy controls. There is a hypothesis that the loss of impulse control may be due to impaired inhibitory control resulting from drug-induced changes that take place in the frontal cortex.
The neurodevelopmental and hormonal changes that happen during adolescence may modulate impulse control, which could possibly lead to experimentation with drugs and may lead to addiction. Impulsivity is thought to be a facet trait in the neuroticism personality domain (overindulgence/negative urgency) which is prospectively associated with the development of substance abuse. Screening and assessment The screening and assessment process of substance use behavior is important for the diagnosis and treatment of substance use disorders. Screening is the process of identifying individuals who have or may be at risk for a substance use disorder; screeners are usually brief to administer. Assessments are used to clarify the nature of the substance use behavior to help determine appropriate treatment. Assessments usually require specialized skills, and are longer to administer than screeners. Given that addiction manifests in structural changes to the brain, it is possible that non-invasive magnetic resonance imaging could help diagnose addiction in the future. Targeted assessments There are several different screening tools that have been validated for use with adolescents, such as the CRAFFT Screening Test, and in adults, the CAGE questionnaire. Some recommendations for screening tools for substance misuse in pregnancy include that they should take less than 10 minutes, be used routinely, and include an educational component. Tools suitable for pregnant women include, among others, the 4Ps, T-ACE, TWEAK, TQDH (Ten-Question Drinking History), and AUDIT. Treatment Psychological From the applied behavior analysis literature, behavioral psychology, and from randomized clinical trials, several evidence-based interventions have emerged: behavioral marital therapy, motivational interviewing, the community reinforcement approach, exposure therapy, and contingency management. They help suppress cravings and mental anxiety, improve focus on treatment and the learning of new behavioral skills, ease withdrawal symptoms and reduce the chances of relapse. In children and adolescents, cognitive behavioral therapy (CBT) and family therapy currently have the most research evidence for the treatment of substance abuse problems. Well-established treatments also include ecological family-based treatment and group CBT. These treatments can be administered in a variety of different formats, each of which has varying levels of research support. Research has shown that what makes group CBT most effective is that it promotes the development of social skills, developmentally appropriate emotional regulatory skills and other interpersonal skills. A few integrated treatment models, which combine parts from various types of treatment, have also been seen as either well-established or probably effective. A study on maternal alcohol and other drug use has shown that integrated treatment programs have produced significant results, resulting in higher rates of negative toxicology screens. Additionally, brief school-based interventions have been found to be effective in reducing adolescent alcohol and cannabis use and abuse. Motivational interviewing can also be effective in treating substance use disorder in adolescents. Alcoholics Anonymous and Narcotics Anonymous are widely known self-help organizations in which members support each other in abstaining from substances. Social skills are significantly impaired in people with alcoholism due to the neurotoxic effects of alcohol on the brain, especially the prefrontal cortex area of the brain.
It has been suggested that social skills training adjunctive to inpatient treatment of alcohol dependence is probably efficacious, including managing the social environment. Medication A number of medications have been approved for the treatment of substance abuse. These include replacement therapies such as buprenorphine and methadone as well as antagonist medications like disulfiram and naltrexone in either short acting, or the newer long acting form. Several other medications, often ones originally used in other contexts, have also been shown to be effective, including bupropion and modafinil. Methadone and buprenorphine are sometimes used to treat opiate addiction. These drugs are used as substitutes for other opioids and still cause withdrawal symptoms, but they facilitate the tapering-off process in a controlled fashion. When a person goes from using fentanyl every day to not using it at all, they will experience a period during which they need to get used to not using the substance. This is called withdrawal. Antipsychotic medications have not been found to be useful. Acamprosate is a glutamatergic NMDA antagonist, which helps with alcohol withdrawal symptoms because alcohol withdrawal is associated with a hyperglutamatergic system. Heroin-assisted treatment Three countries in Europe have active heroin-assisted treatment (HAT) programs, namely England, the Netherlands and Switzerland. Despite critical voices by conservative think-tanks with regard to these harm-reduction strategies, significant progress in the reduction of drug-related deaths has been achieved in those countries. For example, the US, devoid of such measures, has seen large increases in drug-related deaths since 2000 (mostly related to heroin use), while Switzerland has seen large decreases. In 2018, approximately 60,000 people died of drug overdoses in America, while in the same time period Switzerland's drug deaths stood at 260. Relative to the population of these countries, the US has 10 times more drug-related deaths compared to the Swiss Confederation, which in effect illustrates the efficacy of HAT in reducing fatal outcomes in opiate/opioid addiction. Dual diagnosis It is common for individuals with drug use disorders to have other psychological problems. The terms "dual diagnosis" and "co-occurring disorders" refer to having a mental health and substance use disorder at the same time. According to the British Association for Psychopharmacology (BAP), "symptoms of psychiatric disorders such as depression, anxiety and psychosis are the rule rather than the exception in patients misusing drugs and/or alcohol." Individuals who have a comorbid psychological disorder often have a poor prognosis if either disorder is untreated. Historically, most individuals with dual diagnosis either received treatment only for one of their disorders or did not receive any treatment at all. However, since the 1980s, there has been a push towards integrating mental health and addiction treatment. In this method, neither condition is considered primary and both are treated simultaneously by the same provider. Epidemiology The initiation of drug use including alcohol is most likely to occur during adolescence, and some experimentation with substances by older adolescents is common. For example, results from the 2010 Monitoring the Future survey, a nationwide study on rates of substance use in the United States, show that 48.2% of 12th graders report having used an illicit drug at some point in their lives.
In the 30 days prior to the survey, 41.2% of 12th graders had consumed alcohol and 19.2% of 12th graders had smoked tobacco cigarettes. In 2009 in the United States, about 21% of high school students had taken prescription drugs without a prescription. Earlier, in 2002, the World Health Organization estimated that around 140 million people were alcohol dependent and another 400 million had alcohol-related problems. Studies have shown that the large majority of adolescents will phase out of drug use before it becomes problematic. Thus, although rates of overall use are high, the percentage of adolescents who meet criteria for substance abuse is significantly lower (close to 5%). According to UN estimates, there are "more than 50 million regular users of morphine diacetate (heroin), cocaine and synthetic drugs". More than 70,200 Americans died from drug overdoses in 2017. Among these, the sharpest increase occurred among deaths related to fentanyl and synthetic opioids (28,466 deaths). History APA, AMA, and NCDA In 1966, the American Medical Association's Committee on Alcoholism and Addiction defined abuse of stimulants (amphetamines, primarily) in terms of 'medical supervision'. In 1972, the American Psychiatric Association created a definition that used legality, social acceptability, and cultural familiarity as qualifying factors. In 1973, the National Commission on Marijuana and Drug Abuse stated: ...drug abuse may refer to any type of drug or chemical without regard to its pharmacologic actions. It is an eclectic concept having only one uniform connotation: societal disapproval. ... The Commission believes that the term drug abuse must be deleted from official pronouncements and public policy dialogue. The term has no functional utility and has become no more than an arbitrary codeword for that drug use which is presently considered wrong. DSM The first edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (published in 1952) grouped alcohol and other drug abuse under "sociopathic personality disturbances", which were thought to be symptoms of deeper psychological disorders or moral weakness. The third edition, published in 1980, was the first to recognize substance abuse (including drug abuse) and substance dependence as conditions separate from substance abuse alone, bringing in social and cultural factors. The definition of dependence emphasised tolerance to drugs, and withdrawal from them, as key components to diagnosis, whereas abuse was defined as "problematic use with social or occupational impairment" but without withdrawal or tolerance. In 1987, the DSM-III-R category "psychoactive substance abuse", which includes former concepts of drug abuse, is defined as "a maladaptive pattern of use indicated by...continued use despite knowledge of having a persistent or recurrent social, occupational, psychological or physical problem that is caused or exacerbated by the use (or by) recurrent use in situations in which it is physically hazardous". It is a residual category, with dependence taking precedence when applicable. It was the first definition to give equal weight to behavioural and physiological factors in diagnosis. By 1988, the DSM-IV defined substance dependence as "a syndrome involving compulsive use, with or without tolerance and withdrawal"; whereas substance abuse is "problematic use without compulsive use, significant tolerance, or withdrawal".
Substance abuse can be harmful to health and may even be deadly in certain scenarios. In 1994, the fourth edition of the DSM (DSM-IV), issued by the American Psychiatric Association and later updated as the DSM-IV-TR, stated that "when an individual persists in use of alcohol or other drugs despite problems related to use of the substance, substance dependence may be diagnosed", along with criteria for the diagnosis. The DSM-IV-TR defines substance abuse as:
A. A maladaptive pattern of substance use leading to clinically significant impairment or distress, as manifested by one (or more) of the following, occurring within a 12-month period:
1. Recurrent substance use resulting in a failure to fulfill major role obligations at work, school, or home (e.g., repeated absences or poor work performance related to substance use; substance-related absences, suspensions or expulsions from school; neglect of children or household)
2. Recurrent substance use in situations in which it is physically hazardous (e.g., driving an automobile or operating a machine when impaired by substance use)
3. Recurrent substance-related legal problems (e.g., arrests for substance-related disorderly conduct)
4. Continued substance use despite having persistent or recurrent social or interpersonal problems caused or exacerbated by the effects of the substance (e.g., arguments with spouse about consequences of intoxication, physical fights)
B. The symptoms have never met the criteria for substance dependence for this class of substance.
The fifth edition of the DSM (DSM-5), released in 2013, revisited this terminology. The principal change was a move away from the abuse/dependence terminology. In the DSM-IV era, abuse was seen as an earlier or less hazardous form of the disease characterized by the dependence criteria. However, the APA's dependence term does not mean that physiologic dependence is present but rather means that a disease state is present, one that most would likely refer to as an addicted state. Many involved recognized that the terminology had often led to confusion, both within the medical community and with the general public. The American Psychiatric Association requested input as to how the terminology of this illness should be altered as it moved forward with the DSM-5 discussions. In the DSM-5, substance abuse and substance dependence have been merged into the category of substance use disorders, and they no longer exist as individual concepts. Whereas substance abuse and dependence were either present or not, substance use disorder has three levels of severity: mild, moderate and severe.
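The DSM-5 ties the severity of a substance use disorder to the number of diagnostic criteria met; the commonly cited cut-offs are two to three criteria for mild, four to five for moderate, and six or more for severe. The following is a minimal sketch of that mapping, assuming those cut-offs and an 11-criterion checklist; it is illustrative only, not a clinical tool.

```python
def sud_severity(criteria_met: int) -> str:
    """Map a DSM-5 criteria count (out of 11) to a severity label.

    Assumed cut-offs: 0-1 no diagnosis, 2-3 mild, 4-5 moderate, 6+ severe.
    """
    if not 0 <= criteria_met <= 11:
        raise ValueError("criteria_met must be between 0 and 11")
    if criteria_met <= 1:
        return "no substance use disorder"
    if criteria_met <= 3:
        return "mild"
    if criteria_met <= 5:
        return "moderate"
    return "severe"

# Example: a patient meeting 4 of the 11 criteria
print(sud_severity(4))  # "moderate"
```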
In spite of the huge efforts by the U.S., drug supply and purity have reached an all-time high, with the vast majority of resources spent on interdiction and law enforcement instead of public health. In the United States, the number of nonviolent drug offenders in prison exceeds by 100,000 the total incarcerated population in the EU, despite the fact that the EU has 100 million more citizens. Despite drug legislation (or perhaps because of it), large, organized criminal drug cartels operate worldwide. Advocates of decriminalization argue that drug prohibition makes drug dealing a lucrative business, leading to much of the associated criminal activity. Some U.S. states have recently focused on facilitating safer use rather than eradicating it. For example, in 2022 New Jersey moved to expand needle exchange programs throughout the state, passing a bill through the legislature that gives the state's department of health control over decisions regarding such programs. The bill is significant beyond New Jersey, as it could serve as a model for other states. It was partly a reaction to decisions made by local city governments in the state: the Atlantic City government, for example, was sued after it halted needle exchange operations in the city in July of the previous year. The lawsuit, together with demonstrations in front of Atlantic City's city hall at which residents voiced their support for these programs, reflected public sentiment that helped carry the bill through the legislature. Governor Phil Murphy signed the bill into law just days after it passed. Cost Policymakers try to understand the relative costs of drug-related interventions. An appropriate drug policy relies on the assessment of drug-related public expenditure based on a classification system in which costs are properly identified. Labelled drug-related expenditures are defined as the direct planned spending that reflects the voluntary engagement of the state in the field of illicit drugs. Direct public expenditures explicitly labelled as drug-related can be easily traced back by exhaustively reviewing official accountancy documents such as national budgets and year-end reports. Unlabelled expenditure refers to unplanned spending and is estimated through modeling techniques, based on a top-down budgetary procedure. Starting from overall aggregated expenditures, this procedure estimates the proportion causally attributable to substance abuse (Unlabelled Drug-related Expenditure = Overall Expenditure × Attributable Proportion). For example, to estimate prison drug-related expenditures in a given country, two elements would be necessary: the overall prison expenditures in the country for a given period, and the attributable proportion of inmates due to drug-related issues. The product of the two gives a rough estimate that can be compared across different countries, as shown in the sketch below.
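The top-down estimate described above is a single multiplication; a minimal sketch of it, using hypothetical figures since the text gives none, might look like this:

```python
def unlabelled_drug_expenditure(overall_expenditure: float,
                                attributable_proportion: float) -> float:
    """Top-down estimate: overall spending times the share attributable to drugs."""
    if not 0.0 <= attributable_proportion <= 1.0:
        raise ValueError("attributable_proportion must be between 0 and 1")
    return overall_expenditure * attributable_proportion

# Hypothetical example: a country spends EUR 2.0 billion per year on prisons,
# and 25% of inmates are held for drug-related issues.
prison_budget_eur = 2.0e9
drug_related_share = 0.25
print(unlabelled_drug_expenditure(prison_budget_eur, drug_related_share))  # 500000000.0
```

The same two inputs, collected consistently, are what allow such rough estimates to be compared across countries.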
Europe As part of the reporting exercise corresponding to 2005, the European Monitoring Centre for Drugs and Drug Addiction's network of national focal points, set up in the 27 European Union (EU) member states, Norway, and the candidate countries to the EU, was asked to identify labelled drug-related public expenditure at the national level. Such expenditure was reported by 10 countries, categorized according to the functions of government, and amounted to a total of EUR 2.17 billion. Overall, the highest proportion of this total fell within the government functions of health (66%) (e.g. medical services) and public order and safety (POS) (20%) (e.g. police services, law courts, prisons). By country, the average share of GDP was 0.023% for health and 0.013% for POS. However, these shares varied considerably across countries, ranging from 0.00033% of GDP in Slovakia up to 0.053% in Ireland in the case of health, and from 0.003% in Portugal to 0.02% in the UK in the case of POS; almost a 161-fold difference between the highest and the lowest countries for health, and a six-fold difference for POS. To respond to these findings and to make a comprehensive assessment of drug-related public expenditure across countries, this study compared health and POS spending and GDP in the 10 reporting countries. Results suggest GDP to be a major determinant of a country's health and POS drug-related public expenditures. Labelled drug-related public expenditure showed a positive association with GDP across the countries considered: r = 0.81 in the case of health, and r = 0.91 for POS. The percentage change in health and POS expenditures due to a one percent increase in GDP (the income elasticity of demand) was estimated to be 1.78% and 1.23% respectively. Being highly income elastic, health and POS expenditures can be considered luxury goods; as a nation becomes wealthier it openly spends proportionately more on drug-related health and public order and safety interventions. United Kingdom The UK Home Office estimated that the social and economic cost of drug abuse to the UK economy in terms of crime, absenteeism and sickness is in excess of £20 billion a year. However, the UK Home Office does not estimate what portion of those crimes are unintended consequences of drug prohibition (crimes to sustain expensive drug consumption, risky production and dangerous distribution), nor the cost of enforcement. Those aspects are necessary for a full analysis of the economics of prohibition. United States The overall economic cost of drug abuse in the United States can be divided into three major components: health costs, productivity losses and non-health direct expenditures. Health-related costs were projected to total $16 billion in 2002. Productivity losses were estimated at $128.6 billion. In contrast to the other costs of drug abuse (which involve direct expenditures for goods and services), this value reflects a loss of potential resources: work in the labor market and in household production that was never performed, but could reasonably be expected to have been performed absent the impact of drug abuse. Included are estimated productivity losses due to premature death ($24.6 billion), drug abuse-related illness ($33.4 billion), incarceration ($39.0 billion), crime careers ($27.6 billion) and productivity losses of victims of crime ($1.8 billion).
The non-health direct expenditures primarily concern costs associated with the criminal justice system and crime victim costs, but also include a modest level of expenses for administration of the social welfare system. The total for 2002 was estimated at $36.4 billion. The largest detailed component of these costs is for state and federal corrections at $14.2 billion, which is primarily for the operation of prisons. Another $9.8 billion was spent on state and local police protection, followed by $6.2 billion for federal supply reduction initiatives. According to a report from the Agency for Healthcare Research and Quality (AHRQ), Medicaid was billed for a significantly higher number of hospitals stays for opioid drug overuse than Medicare or private insurance in 1993. By 2012, the differences were diminished. Over the same time, Medicare had the most rapid growth in number of hospital stays. Canada Substance abuse takes a financial toll on Canada's hospitals and the country as a whole. In the year 2011, around $267 million of hospital services were attributed to dealing with substance abuse problems. The majority of these hospital costs in 2011 were related to issues with alcohol. Additionally, in 2014, Canada also allocated almost $45 million towards battling prescription drug abuse, extending into the year 2019. Most of the financial decisions made on substance abuse in Canada can be attributed to the research conducted by the Canadian Centre on Substance Abuse (CCSA) which conduct both extensive and specific reports. In fact, the CCSA is heavily responsible for identifying Canada's heavy issues with substance abuse. Some examples of reports by the CCSA include a 2013 report on drug use during pregnancy and a 2015 report on adolescents' use of cannabis. Special populations Immigrants and refugees Immigrant and refugees have often been under great stress, physical trauma and depression and anxiety due to separation from loved ones often characterize the pre-migration and transit phases, followed by "cultural dissonance", language barriers, racism, discrimination, economic adversity, overcrowding, social isolation, and loss of status and difficulty obtaining work and fears of deportation are common. Refugees frequently experience concerns about the health and safety of loved ones left behind and uncertainty regarding the possibility of returning to their country of origin. For some, substance abuse functions as a coping mechanism to attempt to deal with these stressors. Immigrants and refugees may bring the substance use and abuse patterns and behaviors of their country of origin, or adopt the attitudes, behaviors, and norms regarding substance use and abuse that exist within the dominant culture into which they are entering. Street children Street children in many developing countries are a high-risk group for substance misuse, in particular solvent abuse. Drawing on research in Kenya, Cottrell-Boyce argues that "drug use amongst street children is primarily functional—dulling the senses against the hardships of life on the street—but can also provide a link to the support structure of the 'street family' peer group as a potent symbol of shared experience." Musicians In order to maintain high-quality performance, some musicians take chemical substances. Some musicians take drugs such as alcohol to deal with the stress of performing. As a group they have a higher rate of substance abuse. 
The most common chemical substance which is abused by pop musicians is cocaine, because of its neurological effects. Stimulants like cocaine increase alertness and cause feelings of euphoria, and can therefore make the performer feel as though they in some ways 'own the stage'. One way in which substance abuse is harmful for a performer (musicians especially) is if the substance being abused is aspirated. The lungs are an important organ used by singers, and addiction to cigarettes may seriously harm the quality of their performance. Smoking harms the alveoli, which are responsible for absorbing oxygen. Veterans Substance abuse can be a factor that affects the physical and mental health of veterans. Substance abuse may also harm personal and familial relationships, leading to financial difficulty. There is evidence to suggest that substance abuse disproportionately affects the homeless veteran population. A 2015 Florida study, which compared causes of homelessness between veterans and non-veteran populations in a self-reporting questionnaire, found that 17.8% of the homeless veteran participants attributed their homelessness to alcohol and other drug-related problems compared to just 3.7% of the non-veteran homeless group. A 2003 study found that homelessness was correlated with access to support from family/friends and services. However, this correlation was not true when comparing homeless participants who had a current substance-use disorders. The U.S. Department of Veterans Affairs provides a summary of treatment options for veterans with substance-use disorder. For treatments that do not involve medication, they offer therapeutic options that focus on finding outside support groups and "looking at how substance use problems may relate to other problems such as PTSD and depression". Sex and gender There are many sex differences in substance abuse. Men and women express differences in the short- and long-term effects of substance abuse. These differences can be credited to sexual dimorphisms in the brain, endocrine and metabolic systems. Social and environmental factors that tend to disproportionately affect women, such as child and elder care and the risk of exposure to violence, are also factors in the gender differences in substance abuse. Women report having greater impairment in areas such as employment, family and social functioning when abusing substances but have a similar response to treatment. Co-occurring psychiatric disorders are more common among women than men who abuse substances; women more frequently use substances to reduce the negative effects of these co-occurring disorders. Substance abuse puts both men and women at higher risk for perpetration and victimization of sexual violence. Men tend to take drugs for the first time to be part of a group and fit in more so than women. At first interaction, women may experience more pleasure from drugs than men do. Women tend to progress more rapidly from first experience to addiction than men. Physicians, psychiatrists and social workers have believed for decades that women escalate alcohol use more rapidly once they start. Once the addictive behavior is established for women they stabilize at higher doses of drugs than males do. When withdrawing from smoking women experience greater stress response. Males experience greater symptoms when withdrawing from alcohol. There are gender differences when it comes to rehabilitation and relapse rates. For alcohol, relapse rates were very similar for men and women. 
For women, marriage and marital stress were risk factors for alcohol relapse. For men, being married lowered the risk of relapse. This difference may be a result of gendered differences in excessive drinking. Alcoholic women are much more likely to be married to partners that drink excessively than are alcoholic men. As a result of this, men may be protected from relapse by marriage while women are at higher risk when married. However, women are less likely than men to experience relapse to substance use. When men experience a relapse to substance use, they more than likely had a positive experience prior to the relapse. On the other hand, when women relapse to substance use, they were more than likely affected by negative circumstances or interpersonal problems.
Biology and health sciences
Health and fitness: General
Health
103051
https://en.wikipedia.org/wiki/Iapetus%20%28moon%29
Iapetus (moon)
Iapetus is the outermost of Saturn's large moons. With an estimated diameter of , it is the third-largest moon of Saturn and the eleventh-largest in the Solar System. Named after the Titan Iapetus, the moon was discovered in 1671 by Giovanni Domenico Cassini. A relatively low-density body made up mostly of ice, Iapetus is home to several distinctive and unusual features, such as a striking difference in coloration between its leading hemisphere, which is dark, and its trailing hemisphere, which is bright, as well as a massive equatorial ridge running three-quarters of the way around the moon. History Discovery Iapetus was discovered by Giovanni Domenico Cassini, an Italian-born French astronomer, in October 1671. It was the first moon that Cassini discovered; the second moon of Saturn to be discovered, after Christiaan Huygens spotted Titan 16 years earlier in 1655; and the sixth extraterrestrial moon to be discovered in human history. Cassini discovered Iapetus when the moon was on the western side of Saturn, but when he tried viewing it on the eastern side some months later, he was unsuccessful. This was also the case the following year, when he was again able to observe it on the western side, but not the eastern side. Cassini finally observed Iapetus on the eastern side in 1705 with the help of an improved telescope, finding it two magnitudes dimmer on that side. Cassini correctly surmised that Iapetus has a bright hemisphere and a dark hemisphere, and that it is tidally locked, always keeping the same face towards Saturn. This means that the bright hemisphere is visible from Earth when Iapetus is on the western side of Saturn, and that the dark hemisphere is visible when Iapetus is on the eastern side. Name Iapetus is named after the Titan Iapetus from Greek mythology. The name was suggested by John Herschel (son of William Herschel) in his 1847 publication Results of Astronomical Observations made at the Cape of Good Hope, in which he advocated naming the moons of Saturn after the Titans, brothers and sisters of the Titan Cronus (whom the Romans equated with their god Saturn), and the Giants, the massive but lesser relatives of the Titans who sided with the Titans against Zeus and the Olympian Gods. The name has a largely obsolete variant, Japetus, with an adjectival form Japetian. These variants arose because there was no distinction between the letters I and J in Latin, and authors rendered them differently. When first discovered, Iapetus was among the four Saturnian moons labelled the Sidera Lodoicea by their discoverer Giovanni Cassini after King Louis XIV (the other three were Tethys, Dione and Rhea). However, astronomers fell into the habit of referring to them using Roman numerals, with Iapetus being Saturn V because it was the fifth known Saturnian moon in order of distance from Saturn at that time. Once Mimas and Enceladus were discovered in 1789, the numbering scheme was extended and Iapetus became Saturn VII. With the discovery of Hyperion in 1848, Iapetus became Saturn VIII, which is still its Roman numerical designation today. Geological features on Iapetus are generally named after characters and places from the French epic poem The Song of Roland. Orbit The orbit of Iapetus is somewhat unusual. Although it is Saturn's third-largest moon, it orbits much farther from Saturn than the next closest major moon, Titan. It also has the most inclined orbital plane of the regular satellites; only the irregular outer satellites like Phoebe have more inclined orbits.
Because of this distant, inclined orbit, Iapetus is the only large moon from which the rings of Saturn would be clearly visible; from the other inner moons, the rings would be edge-on and difficult to see. The cause of this highly inclined orbit is unknown; however, the moon is not likely to have been captured. One suggestion for the cause of Iapetus's orbital inclination is an encounter between Saturn and another planet in the distant past. Despite being, on average, 2.4 times further from Saturn than Hyperion, the next moon inward, Iapetus is tidally locked to Saturn while Hyperion is not. Formation The moons of Saturn are typically thought to have formed through co-accretion, a similar process to that believed to have formed the planets in the Solar System. As the young gas giants formed, they were surrounded by discs of material that gradually coalesced into moons. However, a proposed model on the formation of Titan suggests that Titan was instead formed in a series of giant impacts between pre-existing moons. Iapetus and Rhea are thought to have formed from part of the debris of these collisions. More-recent studies, however, suggest that all of Saturn's moons inward of Titan are no more than 100 million years old; thus, Iapetus is unlikely to have formed in the same series of collisions as Rhea and all the other moons inward of Titan, and—along with Titan—may be a primordial satellite. Physical characteristics The low density of Iapetus indicates that it is mostly composed of ice, with only a small (~20%) amount of rocky materials. Unlike most of the large moons, its overall shape is neither spherical nor ellipsoid, but has a bulging waistline and squashed poles. Its unique equatorial ridge (see below) is so high that it visibly distorts Iapetus's shape even when viewed from a distance. These features often lead it to be characterized as walnut-shaped. Iapetus is heavily cratered, and Cassini images have revealed large impact basins, at least five of which are over wide. The largest, Turgis, has a diameter of ; its rim is extremely steep and includes a scarp about high. Iapetus is known to support long-runout landslides or sturzstroms, possibly supported by ice sliding. Two-tone coloration The difference in colouring between the two Iapetian hemispheres is striking. The leading hemisphere and sides are dark (albedo 0.03–0.05) with a slight reddish-brown coloring, while most of the trailing hemisphere and poles are bright (albedo 0.5–0.6, almost as bright as Europa). Thus, the apparent magnitude of the trailing hemisphere is around 10.2, whereas that of the leading hemisphere is around 11.9—beyond the capacity of the best telescopes in the 17th century. The dark region is named Cassini Regio, and the bright region is divided into Roncevaux Terra north of the equator, and Saragossa Terra south of it. The original dark material is believed to have come from outside Iapetus, but now it consists principally of lag from the sublimation (evaporation) of ice from the warmer areas of the moon's surface, further darkened by exposure to sunlight. It contains organic compounds similar to the substances found in primitive meteorites or on the surfaces of comets; Earth-based observations have shown it to be carbonaceous, and it probably includes cyano-compounds such as frozen hydrogen cyanide polymers. Images from the Cassini orbiter, which passed within , show that both Cassini Regio and the Terra's are heavily cratered. 
The color dichotomy of scattered patches of light and dark material in the transition zone between Cassini Regio and the bright areas exists at very small scales, down to the imaging resolution of . There is dark material filling in low-lying regions, and light material on the weakly illuminated pole-facing slopes of craters, but no shades of grey. The dark material is a very thin layer, only a few tens of centimeters (approx. one foot) thick at least in some areas, according to Cassini radar imaging and the fact that very small meteor impacts have punched through to the ice underneath. Because of its slow rotation of 79 days (equal to its revolution and the longest in the Saturnian system), Iapetus would have had the warmest daytime surface temperature and coldest nighttime temperature in the Saturnian system even before the development of the color contrast; near the equator, heat absorption by the dark material results in a daytime temperatures of in the dark Cassini Regio compared to in the bright regions. The difference in temperature means that ice preferentially sublimates from Cassini Regio, and deposits in the bright areas and especially at the even colder poles. Over geologic time scales, this would further darken Cassini Regio and brighten the rest of Iapetus, creating a positive feedback thermal runaway process of ever greater contrast in albedo, ending with all exposed ice being lost from Cassini Regio. It is estimated that over a period of one billion years at current temperatures, dark areas of Iapetus would lose about of ice to sublimation, while the bright regions would lose only , not considering the ice transferred from the dark regions. This model explains the distribution of light and dark areas, the absence of shades of grey, and the thinness of the dark material covering Cassini Regio. The redistribution of ice is facilitated by Iapetus's weak gravity, which means that at ambient temperatures a water molecule can migrate from one hemisphere to the other in just a few hops. However, a separate process of color segregation would be required to get the thermal feedback started. The initial dark material is thought to have been debris blasted by meteors off small outer moons in retrograde orbits and swept up by the leading hemisphere of Iapetus. The core of this model is some 30 years old, and was revived by the September 2007 flyby. Light debris outside of Iapetus's orbit, either knocked free from the surface of a moon by micrometeoroid impacts or created in a collision, would spiral in as its orbit decays. It would have been darkened by exposure to sunlight. A portion of any such material that crossed Iapetus's orbit would have been swept up by its leading hemisphere, coating it; once this process created a modest contrast in albedo, and so a contrast in temperature, the thermal feedback described above would have come into play and exaggerated the contrast. In support of the hypothesis, simple numerical models of the exogenic deposition and thermal water redistribution processes can closely predict the two-toned appearance of Iapetus. A subtle color dichotomy between Iapetus's leading and trailing hemispheres, with the former being more reddish, can in fact be observed in comparisons between both bright and dark areas of the two hemispheres. In contrast to the elliptical shape of Cassini Regio, the color contrast closely follows the hemisphere boundaries; the gradation between the differently colored regions is gradual, on a scale of hundreds of kilometers. 
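The temperature contrast that drives the thermal feedback described above can be illustrated with a back-of-the-envelope radiative-equilibrium estimate. The sketch below assumes a solar flux of about 15 W/m² at Saturn's distance and the albedos quoted earlier, and it ignores rotation, thermal inertia and conduction, so it is only a toy illustration of why the dark terrain runs warmer and loses its ice faster than the bright terrain.

```python
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_FLUX_SATURN = 15.0  # assumed solar flux at Saturn's distance, W m^-2

def subsolar_equilibrium_temp(albedo: float, emissivity: float = 1.0) -> float:
    """Instantaneous radiative-equilibrium temperature at the subsolar point."""
    absorbed = (1.0 - albedo) * SOLAR_FLUX_SATURN
    return (absorbed / (emissivity * SIGMA)) ** 0.25

t_dark = subsolar_equilibrium_temp(albedo=0.04)    # Cassini Regio
t_bright = subsolar_equilibrium_temp(albedo=0.55)  # Roncevaux / Saragossa Terra

print(f"dark terrain ~{t_dark:.0f} K, bright terrain ~{t_bright:.0f} K")
# Water-ice vapour pressure rises roughly exponentially with temperature, so even
# a ~20 K difference makes sublimation from the dark terrain far faster, which is
# the positive feedback on the albedo contrast described in the text.
```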
The next moon inward from Iapetus, chaotically rotating Hyperion, also has an unusual reddish color. The largest reservoir of such infalling material is Phoebe, the largest of the outer moons. Although Phoebe's composition is closer to that of the bright hemisphere of Iapetus than the dark one, dust from Phoebe would only be needed to establish a contrast in albedo, and presumably would have been largely obscured by later sublimation. The discovery of a tenuous disk of material in the plane of and just inside Phoebe's orbit was announced on 6 October 2009, supporting the model. The disk extends from 128 to 207 times the radius of Saturn, while Phoebe orbits at an average distance of 215 Saturn radii. It was detected with the Spitzer Space Telescope. Overall shape Current triaxial measurements of Iapetus give it radial dimensions of , with a mean radius of . However, these measurements may be inaccurate on the kilometer scale as Iapetus's entire surface has not yet been imaged in high enough resolution. The observed oblateness would be consistent with hydrostatic equilibrium if Iapetus had a rotational period of approximately 16 hours, but it does not; its current rotation period is 79 days. A possible explanation for this is that the shape of Iapetus was frozen by formation of a thick crust shortly after its formation, while its rotation continued to slow afterwards due to tidal dissipation, until it became tidally locked. Equatorial ridge A further mystery of Iapetus is the equatorial ridge that runs along the center of Cassini Regio, about long, wide, and high. It was discovered when the Cassini spacecraft imaged Iapetus on December 31, 2004, although its existence had been inferred from the moon's polar images by Voyager 2. Peaks in the ridge rise more than above the surrounding plains, making them some of the tallest mountains in the Solar System. The ridge forms a complex system including isolated peaks, segments of more than and sections with three near parallel ridges. Within the bright regions there is no ridge, but there are a series of isolated peaks along the equator. The ridge system is heavily cratered, indicating that it is ancient. The prominent equatorial bulge gives Iapetus a walnut-like appearance. It is not clear how the ridge formed. One difficulty is to explain why it follows the equator almost perfectly. There are many hypotheses, but none explain why the ridge is confined to Cassini Regio. Theories include that the ridge is a remnant of Iapetus's oblate shape during its early life, that it was created by the collapse of a ring system, that it was formed by icy material welling from Iapetus's interior, or that it is a result of convective overturn. Exploration The first spacecraft to visit Saturn, Pioneer 11, did not provide any images of Iapetus and it came no closer than from the moon. Nonetheless, Pioneer 11 was humanity's first attempt to obtain direct measurements from the objects within the Saturnian system. Voyager 1 arrived at Saturn on November 12, 1980, and it became the first probe to return pictures of Iapetus that clearly show the moon's two-tone appearance from a distance of as it was exiting the Saturnian system. Voyager 2 became the next probe to visit Saturn on August 22, 1981, and made its closest approach to Iapetus at a distance of . It took photos of Iapetus's north pole as it entered the Saturnian system - opposite the approach direction of Voyager 1. 
The latest probe to visit Iapetus was the Cassini orbiter which went into orbit around Saturn starting on July 1, 2004. Iapetus has been imaged many times from moderate distances by Cassini but its great distance from Saturn makes close observation difficult. Cassini made its first targeted flyby of Iapetus on Dec. 31, 2004, at a distance of around the time when the spacecraft was settling in its orbit around Saturn. Cassini did not cross Iapetus's orbit when it flew by and remained inside the moon's orbit. Cassini's subsequent flybys of Titan would make the spacecraft's orbit smaller, preventing Cassini from flying close to Iapetus for months. Cassini made a second flyby of Iapetus on November 12, 2005, at a distance of , also without crossing the moon's orbit. Cassini then made a third and more distant flyby of Iapetus on January 22, 2006, at a distance of . The fourth flyby happened on April 8, 2006, at a distance of approximately , and this time, Cassini crossed Iapetus' orbit. After this, Cassini's orbit was made smaller once again, preventing the probe from approaching Iapetus for more than a year this time. Cassini's closest flyby of Iapetus happened on September 10, 2007, at a minimum range of . It approached Iapetus from its night side. After this encounter, Cassini made no further targeted flybys of Iapetus.
Physical sciences
Solar System
Astronomy
103068
https://en.wikipedia.org/wiki/Kimberlite
Kimberlite
Kimberlite is an igneous rock and a rare variant of peridotite. It is most commonly known to be the main host matrix for diamonds. It is named after the town of Kimberley in South Africa, where the discovery of an 83.5-carat (16.70 g) diamond called the Star of South Africa in 1869 spawned a diamond rush and led to the excavation of the open-pit mine called the Big Hole. Previously, the term kimberlite has been applied to olivine lamproites as Kimberlite II, however this has been in error. Kimberlite occurs in the Earth's crust in vertical structures known as kimberlite pipes, as well as igneous dykes and can also occur as horizontal sills. Kimberlite pipes are the most important source of mined diamonds today. The consensus on kimberlites is that they are formed deep within Earth's mantle. Formation occurs at depths between 150 and 450 kilometres (93 and 280 mi), potentially from anomalously enriched exotic mantle compositions, and they are erupted rapidly and violently, often with considerable carbon dioxide and other volatile components. It is this depth of melting and generation that makes kimberlites prone to hosting diamond xenocrysts. Despite its relative rarity, kimberlite has attracted attention because it serves as a carrier of diamonds and garnet peridotite mantle xenoliths to the Earth's surface. Its probable derivation from depths greater than any other igneous rock type, and the extreme magma composition that it reflects in terms of low silica content and high levels of incompatible trace-element enrichment, make an understanding of kimberlite petrogenesis important. In this regard, the study of kimberlite has the potential to provide information about the composition of the deep mantle and melting processes occurring at or near the interface between the cratonic continental lithosphere and the underlying convecting asthenospheric mantle. Morphology and volcanology Many kimberlite structures are emplaced as carrot-shaped, vertical intrusions termed "pipes". This classic carrot shape is formed due to a complex intrusive process of kimberlitic magma, which inherits a large proportion of CO2 (lower amounts of H2O) in the system, which produces a deep explosive boiling stage that causes a significant amount of vertical flaring. Kimberlite classification is based on the recognition of differing rock facies. These differing facies are associated with a particular style of magmatic activity, namely crater, diatreme and hypabyssal rocks. The morphology of kimberlite pipes and their classical carrot shape is the result of explosive diatreme volcanism from very deep mantle-derived sources. These volcanic explosions produce vertical columns of rock that rise from deep magma reservoirs. The eruptions forming these pipes fracture the surrounding rock as it explodes, bringing up unaltered xenoliths of peridotite to surface. These xenoliths provide valuable information to geologists about mantle conditions and composition. The morphology of kimberlite pipes is varied, but includes a sheeted dyke complex of tabular, vertically dipping feeder dykes in the root of the pipe, which extends down to the mantle. Within of the surface, the highly pressured magma explodes upwards and expands to form a conical to cylindrical diatreme, which erupts to the surface. The surface expression is rarely preserved but is usually similar to a maar volcano. Kimberlite dikes and sills can be thin (1–4 meters), while pipes range in diameter from about 75 meters to 1.5 kilometers. 
Petrology Both the location and origin of kimberlitic magmas are subjects of contention. Their extreme enrichment and geochemistry have led to a large amount of speculation about their origin, with models placing their source within the sub-continental lithospheric mantle (SCLM) or even as deep as the transition zone. The mechanism of enrichment has also been the topic of interest with models including partial melting, assimilation of subducted sediment or derivation from a primary magma source. Historically, kimberlites have been classified into two distinct varieties, termed "basaltic" and "micaceous" based primarily on petrographic observations. This was later revised by C. B. Smith, who renamed these divisions "group I" and "group II" based on the isotopic affinities of these rocks using the Nd, Sr, and Pb systems. Roger Mitchell later proposed that these group I and II kimberlites display such distinct differences, that they may not be as closely related as once thought. He showed that group II kimberlites show closer affinities to lamproites than they do to group I kimberlites. Hence, he reclassified group II kimberlites as orangeites to prevent confusion. Group I kimberlites Group-I kimberlites are of CO2-rich ultramafic potassic igneous rocks dominated by primary forsteritic olivine and carbonate minerals, with a trace-mineral assemblage of magnesian ilmenite, chromium pyrope, almandine-pyrope, chromium diopside (in some cases subcalcic), phlogopite, enstatite and of Ti-poor chromite. Group I kimberlites exhibit a distinctive inequigranular texture caused by macrocrystic () to megacrystic () phenocrysts of olivine, pyrope, chromian diopside, magnesian ilmenite, and phlogopite, in a fine- to medium-grained groundmass. The groundmass mineralogy, which more closely resembles a true composition of the igneous rock, is dominated by carbonate and significant amounts of forsteritic olivine, with lesser amounts of pyrope garnet, Cr-diopside, magnesian ilmenite, and spinel. Olivine lamproites Olivine lamproites were previously called group II kimberlite or orangeite in response to the mistaken belief that they only occurred in South Africa. Their occurrence and petrology, however, are identical globally and should not be erroneously referred to as kimberlite. Olivine lamproites are ultrapotassic, peralkaline rocks rich in volatiles (dominantly H2O). The distinctive characteristic of olivine lamproites is phlogopite macrocrysts and microphenocrysts, together with groundmass micas that vary in composition from phlogopite to "tetraferriphlogopite" (anomalously Al-poor phlogopite requiring Fe to enter the tetrahedral site). Resorbed olivine macrocrysts and euhedral primary crystals of groundmass olivine are common but not essential constituents. Characteristic primary phases in the groundmass include zoned pyroxenes (cores of diopside rimmed by Ti-aegirine), spinel-group minerals (magnesian chromite to titaniferous magnetite), Sr- and REE-rich perovskite, Sr-rich apatite, REE-rich phosphates (monazite, daqingshanite), potassian barian hollandite group minerals, Nb-bearing rutile and Mn-bearing ilmenite. Kimberlitic indicator minerals Kimberlites are peculiar igneous rocks because they contain a variety of mineral species with chemical compositions that indicate they formed under high pressure and temperature within the mantle. 
These minerals, such as chromium diopside (a pyroxene), chromium spinels, magnesian ilmenite, and pyrope garnets rich in chromium, are generally absent from most other igneous rocks, making them particularly useful as indicators for kimberlites. Geochemistry Kimberlites exhibit unique geochemical characteristics that distinguish them from other igneous rocks, reflecting their origin deep within the Earth's mantle. These features provide insights into the mantle's composition and the processes involved in the formation and eruption of kimberlite magmas. Composition Kimberlites are classified as ultramafic rocks due to their high magnesium oxide (MgO) content, which typically exceeds 12%, and often surpasses 15%. This high MgO concentration indicates a mantle-derived origin, rich in olivine and other magnesium-dominant minerals. Additionally, kimberlites are ultrapotassic, with a molar ratio of potassium oxide (K2O) to aluminum oxide (Al2O3) greater than 3, suggesting significant alterations or enrichment processes in their mantle source regions. Elemental abundance Characteristic of kimberlites is their abundance in near-primitive elements such as nickel (Ni), chromium (Cr), and cobalt (Co), with concentrations often exceeding 400 ppm for Ni, 1000 ppm for Cr, and 150 ppm for Co. These high levels reflect the primitive nature of their mantle source, having undergone minimal differentiation. Rare Earth and lithophile elements Kimberlites show enrichment in rare earth elements (REEs), which are pivotal for understanding their genesis and evolution. This enrichment in REEs, along with a moderate to high large-ion lithophile element (LILE) enrichment (more than 1,000 ppm) including potassium, barium, and strontium, points to a significant contribution from metasomatized mantle sources, where the rock composition has been altered by fluids. Volatile content A defining feature of kimberlites is their high volatile content, particularly of water (H2O) and carbon dioxide (CO2). The presence of these volatiles influences the explosivity of kimberlite eruptions and facilitates the transport of diamonds from deep within the mantle to the Earth's surface. The high levels of H2O and CO2 are indicative of a deep mantle origin, where these compounds are more abundant. Exploration techniques Kimberlite exploration techniques encompass a multifaceted approach that integrates geological, geochemical, and geophysical methodologies to locate and evaluate potential diamond-bearing deposits. Indicator minerals sampling Exploration techniques for kimberlites primarily hinge on the identification and analysis of indicator minerals associated with the presence of kimberlite pipes and their potential diamond content. Sediment sampling is a fundamental approach, where kimberlite indicator minerals (KIMs) are dispersed across landscapes due to geological processes like uplift, erosion, and glaciations. Loaming and alluvial sampling are utilized in different terrains to recover KIMs from soils and stream deposits, respectively. Understanding paleodrainage patterns and geological cover layers aids in tracing KIMs back to their source kimberlite pipes. In glaciated regions, techniques such as esker sampling, till sampling, and alluvial sampling are employed to recover KIMs buried beneath thick glacial deposits. Once collected, heavy minerals are separated and sorted by hand to identify these indicators. Chemical analysis confirms their identity and categorizes them. 
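As a rough illustration of how a whole-rock analysis could be screened against the compositional criteria given in the Geochemistry section above (ultramafic MgO content, ultrapotassic K2O/Al2O3 ratio, and elevated Ni, Cr and Co), consider the sketch below. The thresholds are taken from the text; the sample values are hypothetical, and a real classification would rest on petrography and mineral chemistry, not on a simple filter like this.

```python
# Approximate molar masses used to convert oxide wt% to moles
M_K2O = 94.2     # g/mol
M_AL2O3 = 102.0  # g/mol

def looks_kimberlitic(wr: dict) -> bool:
    """Coarse first-pass screen of a whole-rock analysis.

    wr: oxides in wt% ('MgO', 'K2O', 'Al2O3') and trace elements in ppm
        ('Ni', 'Cr', 'Co'). Thresholds follow the criteria quoted in the text.
    """
    ultramafic = wr["MgO"] > 12.0
    ultrapotassic = (wr["K2O"] / M_K2O) / (wr["Al2O3"] / M_AL2O3) > 3.0
    compatible_rich = wr["Ni"] > 400 and wr["Cr"] > 1000 and wr["Co"] > 150
    return ultramafic and ultrapotassic and compatible_rich

# Hypothetical analysis
sample = {"MgO": 27.0, "K2O": 1.0, "Al2O3": 0.3, "Ni": 1100, "Cr": 1500, "Co": 160}
print(looks_kimberlitic(sample))  # True
```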
Techniques like thermobarometry help understand the conditions under which these minerals formed and where they came from in the Earth's mantle. By analyzing these indicators and geological curves, scientists can estimate the likelihood of finding diamonds in a kimberlite pipe. These methods help prioritize where to drill in the search for valuable diamond deposits. Geophysical methods Geophysical methods are particularly useful in areas where direct detection of kimberlites is challenging due to significant overburden or weathering. These methods leverage physical property contrasts between kimberlite bodies and their surrounding host rocks, enabling the detection of subtle anomalies indicative of potential kimberlite deposits. Airborne and ground surveys, including magnetics, electromagnetics, and gravity surveys, are commonly employed to acquire geophysical data over large areas efficiently. Magnetic surveys detect variations in the Earth's magnetic field caused by magnetic minerals within kimberlites, which typically exhibit distinct magnetic signatures compared to surrounding rocks. Electromagnetic surveys measure variations in electrical conductivity, with conductive kimberlite bodies producing anomalous responses. Gravity surveys detect variations in gravitational attraction caused by differences in density between kimberlite and surrounding rocks. By analyzing and interpreting these geophysical anomalies, geologists can delineate potential kimberlite targets for further investigation, such as drilling. However, the interpretation of geophysical data requires careful consideration of geological context and potential masking effects from surrounding geology, highlighting the importance of integrating geophysical results with other exploration techniques for accurate targeting and successful diamond discoveries. 3-D modeling Three-dimensional (3D) modeling offers a comprehensive framework for understanding the internal structure and distribution of key geological features within potential diamond-bearing deposits. This process begins with the collection and integration of various datasets, including drill-hole data, ground geophysical surveys, and geological mapping information. These datasets are then integrated into a cohesive digital platform, often utilizing specialized software packages tailored for geological modeling. Through advanced visualization techniques, geologists can create detailed 3D representations of the subsurface geology, highlighting the distribution and geometry of kimberlite bodies alongside other significant geological features such as faults, fractures, and lithological boundaries. Within the model, efforts are made to accurately depict the internal phases of kimberlite pipes, incorporating different facies, country rock xenoliths, and mantle xenoliths identified through careful interpretation of drill-core data and geophysical surveys. Once validated, the 3D model serves as a valuable decision-making tool, offering insights into potential diamond-bearing potential, identifying high-priority drilling targets, and guiding exploration strategies to maximize the chances of successful diamond discoveries. Historical significance Kimberlites are a valuable source of information about the composition of the Earth's mantle and the dynamic processes that occur within it. 
The study of kimberlites has contributed to our understanding of the Earth’s deep geochemical cycles and the mechanism of mantle plumes, which are upwellings of abnormally hot rock within the Earth's mantle. Moreover, kimberlites are unique in their ability to transport material from the Earth's mantle to its surface. This process, known as xenolith transport, provides geologists with samples of the Earth's mantle, which are otherwise inaccessible. Analyzing these samples has led to significant advances in our knowledge of the Earth's deep interior, including its physical conditions, composition, and the evolutionary history of the planet. The role of kimberlites in diamond exploration cannot be overstated. Diamonds are formed under the high-pressure, high-temperature conditions of the Earth's mantle. Kimberlites act as carriers for these diamonds, transporting them to the Earth's surface. The discovery of diamond-bearing kimberlites in the 1870s in Kimberley sparked a diamond rush, transforming the area into one of the world’s largest diamond-producing regions. Since then, the association between kimberlites and diamonds has been crucial in the search for new diamond deposits around the globe. Kimberlites also serve as a window into the Earth's past, offering clues about the formation of continents and the dynamic processes that shape our planet. Their distribution and age can provide insights into ancient continental movements and the assembly and breakup of supercontinents. Economic importance Kimberlites are the most important source of primary diamonds. Many kimberlite pipes also produce rich alluvial or eluvial diamond placer deposits. about 6,400 kimberlite pipes are known on Earth including about 900 that have been found to contain diamonds, with mining of diamonds occurring at about 30 pipes. The discovery of diamond-rich kimberlite pipes in northern Canada during the early 1990s serves as a prime example of how challenging these deposits can be to locate, as their surface features are often subtle. In this case, the pipes were hidden beneath ice-covered shallow ponds, which filled depressions formed by the softer kimberlite rock eroding slightly faster than the surrounding harder rock. The deposits occurring at Kimberley, South Africa, were the first recognized and the source of the name. The Kimberley diamonds were originally found in weathered kimberlite, which was colored yellow by limonite, and so was called "yellow ground". Deeper workings encountered less altered rock, serpentinized kimberlite, which miners call "blue ground". Yellow ground kimberlite is easy to break apart and was the first source of diamonds to be mined. Blue ground kimberlite needs to be run through rock crushers to extract the diamonds.
Physical sciences
Igneous rocks
Earth science
103077
https://en.wikipedia.org/wiki/Turbofan
Turbofan
A turbofan or fanjet is a type of airbreathing jet engine that is widely used in aircraft propulsion. The word "turbofan" is a combination of references to the preceding generation engine technology of the turbojet and the additional fan stage. It consists of a gas turbine engine which achieves mechanical energy from combustion, and a ducted fan that uses the mechanical energy from the gas turbine to force air rearwards. Thus, whereas all the air taken in by a turbojet passes through the combustion chamber and turbines, in a turbofan some of that air bypasses these components. A turbofan thus can be thought of as a turbojet being used to drive a ducted fan, with both of these contributing to the thrust. The ratio of the mass-flow of air bypassing the engine core to the mass-flow of air passing through the core is referred to as the bypass ratio. The engine produces thrust through a combination of these two portions working together. Engines that use more jet thrust relative to fan thrust are known as low-bypass turbofans; conversely those that have considerably more fan thrust than jet thrust are known as high-bypass. Most commercial aviation jet engines in use are of the high-bypass type, and most modern fighter engines are low-bypass. Afterburners are used on low-bypass turbofan engines with bypass and core mixing before the afterburner. Modern turbofans have either a large single-stage fan or a smaller fan with several stages. An early configuration combined a low-pressure turbine and fan in a single rear-mounted unit. Principles The turbofan was invented to improve the fuel consumption of the turbojet. It achieves this by pushing more air, thus increasing the mass and lowering the speed of the propelling jet compared to that of the turbojet. This is done mechanically by adding a ducted fan rather than using viscous forces. A vacuum ejector is used in conjunction with the fan as first envisaged by inventor Frank Whittle. Whittle envisioned flight speeds of 500 mph in his March 1936 UK patent 471,368 "Improvements relating to the propulsion of aircraft", in which he describes the principles behind the turbofan, although not called as such at that time. While the turbojet uses the gas from its thermodynamic cycle as its propelling jet, for aircraft speeds below 500 mph there are two penalties to this design which are addressed by the turbofan. Firstly, energy is wasted as the propelling jet is going much faster rearwards than the aircraft is going forwards, leaving a very fast wake. This wake contains kinetic energy that reflects the fuel used to produce it, rather than the fuel used to move the aircraft forwards. A turbofan harvests that wasted velocity and uses it to power a ducted fan that blows air in bypass channels around the rest of the turbine. This reduces the speed of the propelling jet while pushing more air, and thus more mass. The other penalty is that combustion is less efficient at lower speeds. Any action to reduce the fuel consumption of the engine by increasing its pressure ratio or turbine temperature to achieve better combustion causes a corresponding increase in pressure and temperature in the exhaust duct which in turn cause a higher gas speed from the propelling nozzle (and higher KE and wasted fuel). Although the engine would use less fuel to produce a pound of thrust, more fuel is wasted in the faster propelling jet. 
In other words, the independence of thermal and propulsive efficiencies, as exists with the piston engine/propeller combination which preceded the turbojet, is lost. In contrast, Roth considers regaining this independence the single most important feature of the turbofan which allows specific thrust to be chosen independently of the gas generator cycle. The working substance of the thermodynamic cycle is the only mass accelerated to produce thrust in a turbojet which is a serious limitation (high fuel consumption) for aircraft speeds below supersonic. For subsonic flight speeds the speed of the propelling jet has to be reduced because there is a price to be paid in producing the thrust. The energy required to accelerate the gas inside the engine (increase in kinetic energy) is expended in two ways, by producing a change in momentum ( i.e. a force), and a wake which is an unavoidable consequence of producing thrust by an airbreathing engine (or propeller). The wake velocity, and fuel burned to produce it, can be reduced and the required thrust still maintained by increasing the mass accelerated. A turbofan does this by transferring energy available inside the engine, from the gas generator, to a ducted fan which produces a second, additional mass of accelerated air. The transfer of energy from the core to bypass air results in lower pressure and temperature gas entering the core nozzle (lower exhaust velocity), and fan-produced higher pressure and temperature bypass-air entering the fan nozzle. The amount of energy transferred depends on how much pressure rise the fan is designed to produce (fan pressure ratio). The best energy exchange (lowest fuel consumption) between the two flows, and how the jet velocities compare, depends on how efficiently the transfer takes place which depends on the losses in the fan-turbine and fan. The fan flow has lower exhaust velocity, giving much more thrust per unit energy (lower specific thrust). Both airstreams contribute to the gross thrust of the engine. The additional air for the bypass stream increases the ram drag in the air intake stream-tube, but there is still a significant increase in net thrust. The overall effective exhaust velocity of the two exhaust jets can be made closer to a normal subsonic aircraft's flight speed and gets closer to the ideal Froude efficiency. A turbofan accelerates a larger mass of air more slowly, compared to a turbojet which accelerates a smaller amount more quickly, which is a less efficient way to generate the same thrust (see the efficiency section below). The ratio of the mass-flow of air bypassing the engine core compared to the mass-flow of air passing through the core is referred to as the bypass ratio. Engines with more jet thrust relative to fan thrust are known as low-bypass turbofans, those that have considerably more fan thrust than jet thrust are known as high-bypass. Most commercial aviation jet engines in use are high-bypass, and most modern fighter engines are low-bypass. Afterburners are used on low-bypass turbofans on combat aircraft. Bypass ratio The bypass ratio (BPR) of a turbofan engine is the ratio between the mass flow rate of the bypass stream to the mass flow rate entering the core. A bypass ratio of 6, for example, means that 6 times more air passes through the bypass duct than the amount that passes through the combustion chamber. 
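As a concrete illustration of this definition, the ratio follows directly from the two mass flows; the numbers below are purely illustrative and do not describe any particular engine.

```python
def bypass_ratio(bypass_mass_flow: float, core_mass_flow: float) -> float:
    """BPR = mass flow through the bypass duct / mass flow through the engine core."""
    return bypass_mass_flow / core_mass_flow

# Illustrative values: 600 kg/s through the bypass duct, 100 kg/s through the core.
print(bypass_ratio(600.0, 100.0))  # 6.0 -> would be described as a high-bypass engine
```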
Turbofan engines are usually described in terms of BPR, which together with overall pressure ratio, turbine inlet temperature and fan pressure ratio are important design parameters. In addition BPR is quoted for turboprop and unducted fan installations because their high propulsive efficiency gives them the overall efficiency characteristics of very high bypass turbofans. This allows them to be shown together with turbofans on plots which show trends of reducing specific fuel consumption (SFC) with increasing BPR. BPR can also be quoted for lift fan installations where the fan airflow is remote from the engine and doesn't flow past the engine core. Considering a constant core (i.e. fixed pressure ratio and turbine inlet temperature), core and bypass jet velocities equal and a particular flight condition (i.e. Mach number and altitude) the fuel consumption per lb of thrust (sfc) decreases with increase in BPR. At the same time gross and net thrusts increase, but by different amounts. There is considerable potential for reducing fuel consumption for the same core cycle by increasing BPR.This is achieved because of the reduction in pounds of thrust per lb/sec of airflow (specific thrust) and the resultant reduction in lost kinetic energy in the jets (increase in propulsive efficiency). If all the gas power from a gas turbine is converted to kinetic energy in a propelling nozzle, the aircraft is best suited to high supersonic speeds. If it is all transferred to a separate big mass of air with low kinetic energy, the aircraft is best suited to zero speed (hovering). For speeds in between, the gas power is shared between a separate airstream and the gas turbine's own nozzle flow in a proportion which gives the aircraft performance required. The trade off between mass flow and velocity is also seen with propellers and helicopter rotors by comparing disc loading and power loading. For example, the same helicopter weight can be supported by a high power engine and small diameter rotor or, for less fuel, a lower power engine and bigger rotor with lower velocity through the rotor. Bypass usually refers to transferring gas power from a gas turbine to a bypass stream of air to reduce fuel consumption and jet noise. Alternatively, there may be a requirement for an afterburning engine where the sole requirement for bypass is to provide cooling air. This sets the lower limit for BPR and these engines have been called "leaky" or continuous bleed turbojets (General Electric YJ-101 BPR 0.25) and low BPR turbojets (Pratt & Whitney PW1120). Low BPR (0.2) has also been used to provide surge margin as well as afterburner cooling for the Pratt & Whitney J58. Efficiency Propeller engines are most efficient for low speeds, turbojet engines for high speeds, and turbofan engines between the two. Turbofans are the most efficient engines in the range of speeds from about , the speed at which most commercial aircraft operate. In a turbojet (zero-bypass) engine, the high temperature and high pressure exhaust gas is accelerated when it undergoes expansion through a propelling nozzle and produces all the thrust. The compressor absorbs the mechanical power produced by the turbine. In a bypass design, extra turbines drive a ducted fan that accelerates air rearward from the front of the engine. In a high-bypass design, the ducted fan and nozzle produce most of the thrust. 
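The trade-off described above, the same thrust produced by a larger, slower jet at the cost of less wasted kinetic energy, can be illustrated with simple momentum arithmetic. The sketch below uses made-up round numbers rather than data for any real engine, and the efficiency expression it evaluates is the Froude (propulsive) efficiency defined in the next paragraph.

```python
def net_thrust(mass_flow: float, v_jet: float, v_flight: float) -> float:
    """Momentum thrust of a single stream: m_dot * (v_jet - v_flight), ignoring pressure terms."""
    return mass_flow * (v_jet - v_flight)

def wake_power(mass_flow: float, v_jet: float, v_flight: float) -> float:
    """Rate of kinetic energy left behind in the wake."""
    return 0.5 * mass_flow * (v_jet - v_flight) ** 2

def froude_efficiency(v_jet: float, v_flight: float) -> float:
    """Propulsive (Froude) efficiency of a single jet: 2 / (1 + v_jet / v_flight)."""
    return 2.0 / (1.0 + v_jet / v_flight)

V_FLIGHT = 250.0  # m/s, roughly airliner cruise speed

# A "turbojet-like" case (small mass flow, fast jet) and a "turbofan-like" case
# (six times the mass flow, much slower jet) sized to give the same thrust.
for name, m_dot, v_jet in [("turbojet-like", 100.0, 850.0),
                           ("turbofan-like", 600.0, 350.0)]:
    print(name,
          f"thrust = {net_thrust(m_dot, v_jet, V_FLIGHT)/1000:.0f} kN,",
          f"wake power = {wake_power(m_dot, v_jet, V_FLIGHT)/1e6:.1f} MW,",
          f"Froude efficiency = {froude_efficiency(v_jet, V_FLIGHT):.2f}")
# Both cases deliver 60 kN, but the slower, larger jet leaves 3 MW in the wake
# instead of 18 MW and has a much higher propulsive efficiency (0.83 vs 0.45).
```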
Turbofans are closely related to turboprops in principle because both transfer some of the gas turbine's gas power, using extra machinery, to a bypass stream, leaving less for the hot nozzle to convert to kinetic energy. Turbofans represent an intermediate stage between turbojets, which derive all their thrust from exhaust gases, and turboprops, which derive minimal thrust from exhaust gases (typically 10% or less). Extracting shaft power and transferring it to a bypass stream introduces extra losses which are more than made up by the improved propulsive efficiency. The turboprop at its best flight speed gives significant fuel savings over a turbojet even though an extra turbine, a gearbox and a propeller are added to the turbojet's low-loss propelling nozzle. The turbofan has additional losses from its greater number of compressor stages/blades, fan and bypass duct. Froude, or propulsive, efficiency can be defined as: η_f = 2 / (1 + v_j / v_a), where: v_j = thrust equivalent jet velocity, v_a = aircraft velocity. Thrust While a turbojet engine uses all of the engine's output to produce thrust in the form of a hot high-velocity exhaust gas jet, a turbofan's cool low-velocity bypass air yields between 30% and 70% of the total thrust produced by a turbofan system. The thrust (F_N) generated by a turbofan depends on the effective exhaust velocity of the total exhaust, as with any jet engine, but because two exhaust jets are present the thrust equation can be expanded as: F_N = ṁ_e v_he − ṁ_o v_o + BPR ṁ_c v_f, where: ṁ_e = the mass rate of hot combustion exhaust flow from the core engine, ṁ_o = the mass rate of total air flow entering the turbofan = ṁ_c + ṁ_f, ṁ_c = the mass rate of intake air that flows to the core engine, ṁ_f = the mass rate of intake air that bypasses the core engine, v_f = the velocity of the air flow bypassed around the core engine, v_he = the velocity of the hot exhaust gas from the core engine, v_o = the velocity of the total air intake = the true airspeed of the aircraft, BPR = bypass ratio. Nozzles The cold duct and core duct's nozzle systems are relatively complex due to the use of two separate exhaust flows. In high bypass engines, the fan is situated in a short duct near the front of the engine and typically has a convergent cold nozzle, with the tail of the duct forming a low pressure ratio nozzle that under normal conditions will choke, creating supersonic flow patterns around the core. The core nozzle is more conventional, but generates less of the thrust, and depending on design choices, such as noise considerations, may conceivably not choke. In low bypass engines the two flows may combine within the ducts, and share a common nozzle, which can be fitted with an afterburner. Noise Most of the air flow through a high-bypass turbofan is lower-velocity bypass flow: even when combined with the much-higher-velocity engine exhaust, the average exhaust velocity is considerably lower than in a pure turbojet. Turbojet engine noise is predominantly jet noise from the high exhaust velocity. Therefore, turbofan engines are significantly quieter than a pure-jet of the same thrust, and jet noise is no longer the predominant source. Turbofan engine noise propagates both upstream via the inlet and downstream via the primary nozzle and the by-pass duct. Other noise sources are the fan, compressor and turbine. Modern commercial aircraft employ high-bypass-ratio (HBPR) engines with separate flow, non-mixing, short-duct exhaust systems. Their noise at takeoff is primarily from the fan and jet. The primary source of jet noise is the turbulent mixing of shear layers in the engine's exhaust.
These shear layers contain instabilities that lead to highly turbulent vortices that generate the pressure fluctuations responsible for sound. To reduce the noise associated with jet flow, the aerospace industry has sought to disrupt shear layer turbulence and reduce the overall noise produced. Fan noise may come from the interaction of the fan-blade wakes with the pressure field of the downstream fan-exit stator vanes. It may be minimized by adequate axial spacing between blade trailing edge and stator entrance. At high engine speeds, as at takeoff, shock waves from the supersonic fan tips, because of their unequal nature, produce noise of a discordant nature known as "buzz saw" noise. All modern turbofan engines have acoustic liners in the nacelle to damp their noise. They extend as much as possible to cover the largest surface area. The acoustic performance of the engine can be experimentally evaluated by means of ground tests or in dedicated experimental test rigs. In the aerospace industry, chevrons are the "saw-tooth" patterns on the trailing edges of some jet engine nozzles that are used for noise reduction. The shaped edges smooth the mixing of hot air from the engine core and cooler air flowing through the engine fan, which reduces noise-creating turbulence. Chevrons were developed by GE under a NASA contract. Some notable examples of such designs are Boeing 787 and Boeing 747-8 on the Rolls-Royce Trent 1000 and General Electric GEnx engines. History Early turbojet engines were not very fuel-efficient because their overall pressure ratio and turbine inlet temperature were severely limited by the technology and materials available at the time. The first turbofan engine, which was only run on a test bed, was the German Daimler-Benz DB 670, designated the 109-007 by the German RLM (Ministry of Aviation), with a first run date of 27 May 1943, after the testing of the turbomachinery using an electric motor, which had been undertaken on 1 April 1943. Development of the engine was abandoned with its problems unsolved, as the war situation worsened for Germany. Later in 1943, the British ground tested the Metrovick F.3 turbofan, which used the Metrovick F.2 turbojet as a gas generator with the exhaust discharging into a close-coupled aft-fan module comprising a contra-rotating LP turbine system driving two co-axial contra-rotating fans. Improved materials, and the introduction of twin compressors, such as in the Bristol Olympus, and Pratt & Whitney JT3C engines, increased the overall pressure ratio and thus the thermodynamic efficiency of engines. They also had poor propulsive efficiency, because pure turbojets have a high specific thrust/high velocity exhaust, which is better suited to supersonic flight. The original low-bypass turbofan engines were designed to improve propulsive efficiency by reducing the exhaust velocity to a value closer to that of the aircraft. The Rolls-Royce Conway, the world's first production turbofan, had a bypass ratio of 0.3, similar to the modern General Electric F404 fighter engine. Civilian turbofan engines of the 1960s, such as the Pratt & Whitney JT8D and the Rolls-Royce Spey, had bypass ratios closer to 1 and were similar to their military equivalents. The first Soviet airliner powered by turbofan engines was the Tupolev Tu-124 introduced in 1962. It used the Soloviev D-20. 164 aircraft were produced between 1960 and 1965 for Aeroflot and other Eastern Bloc airlines, with some operating until the early 1990s. 
The first General Electric turbofan was the aft-fan CJ805-23, based on the CJ805-3 turbojet. It was followed by the aft-fan General Electric CF700 engine, with a 2.0 bypass ratio. This was derived from the General Electric J85/CJ610 turbojet to power the larger Rockwell Sabreliner 75/80 model aircraft, as well as the Dassault Falcon 20, with about a 50% increase in thrust to . The CF700 was the first small turbofan to be certified by the Federal Aviation Administration (FAA). There were at one time over 400 CF700 aircraft in operation around the world, with an experience base of over 10 million service hours. The CF700 turbofan engine was also used to train Moon-bound astronauts in Project Apollo as the powerplant for the Lunar Landing Research Vehicle. Common types Low-bypass turbofan A high-specific-thrust/low-bypass-ratio turbofan normally has a multi-stage fan behind inlet guide vanes, developing a relatively high pressure ratio and, thus, yielding a high (mixed or cold) exhaust velocity. The core airflow needs to be large enough to ensure there is sufficient core power to drive the fan. A smaller core flow/higher bypass ratio cycle can be achieved by raising the inlet temperature of the high-pressure (HP) turbine rotor. To illustrate one aspect of how a turbofan differs from a turbojet, comparisons can be made at the same airflow (to keep a common intake for example) and the same net thrust (i.e. same specific thrust). A bypass flow can be added only if the turbine inlet temperature is not too high to compensate for the smaller core flow. Future improvements in turbine cooling/material technology can allow higher turbine inlet temperature, which is necessary because of increased cooling air temperature, resulting from an overall pressure ratio increase. The resulting turbofan, with reasonable efficiencies and duct loss for the added components, would probably operate at a higher nozzle pressure ratio than the turbojet, but with a lower exhaust temperature to retain net thrust. Since the temperature rise across the whole engine (intake to nozzle) would be lower, the (dry power) fuel flow would also be reduced, resulting in a better specific fuel consumption (SFC). Some low-bypass ratio military turbofans (e.g. F404, JT8D) have variable inlet guide vanes to direct air onto the first fan rotor stage. This improves the fan surge margin (see compressor map). Afterburning turbofan Since the 1970s, most jet fighter engines have been low/medium bypass turbofans with a mixed exhaust, afterburner and variable area exit nozzle. An afterburner is a combustor located downstream of the turbine blades and directly upstream of the nozzle, which burns fuel from afterburner-specific fuel injectors. When lit, large volumes of fuel are burnt in the afterburner, raising the temperature of exhaust gases by a significant degree, resulting in a higher exhaust velocity/engine specific thrust. The variable geometry nozzle must open to a larger throat area to accommodate the extra volume and increased flow rate when the afterburner is lit. Afterburning is often designed to give a significant thrust boost for take off, transonic acceleration and combat maneuvers, but is very fuel intensive. Consequently, afterburning can be used only for short portions of a mission. Unlike in the main engine, where stoichiometric temperatures in the combustor have to be reduced before they reach the turbine, an afterburner at maximum fuelling is designed to produce stoichiometric temperatures at entry to the nozzle, about . 
At a fixed total applied fuel:air ratio, the total fuel flow for a given fan airflow will be the same, regardless of the dry specific thrust of the engine. However, a high specific thrust turbofan will, by definition, have a higher nozzle pressure ratio, resulting in a higher afterburning net thrust and, therefore, a lower afterburning specific fuel consumption (SFC). However, high specific thrust engines have a high dry SFC. The situation is reversed for a medium specific thrust afterburning turbofan: i.e., poor afterburning SFC/good dry SFC. The former engine is suitable for a combat aircraft which must remain in afterburning combat for a fairly long period, but has to fight only fairly close to the airfield (e.g. cross border skirmishes). The latter engine is better for an aircraft that has to fly some distance, or loiter for a long time, before going into combat. However, the pilot can afford to stay in afterburning only for a short period, before aircraft fuel reserves become dangerously low. The first production afterburning turbofan engine was the Pratt & Whitney TF30, which initially powered the F-111 Aardvark and F-14 Tomcat. Low-bypass military turbofans include the Pratt & Whitney F119, the Eurojet EJ200, the General Electric F110, the Klimov RD-33, and the Saturn AL-31, all of which feature a mixed exhaust, afterburner and variable area propelling nozzle. High-bypass turbofan To further improve fuel economy and reduce noise, almost all jet airliners and most military transport aircraft (e.g., the C-17) are powered by low-specific-thrust/high-bypass-ratio turbofans. These engines evolved from the high-specific-thrust/low-bypass-ratio turbofans used in such aircraft in the 1960s. Modern combat aircraft tend to use low-bypass ratio turbofans, and some military transport aircraft use turboprops. Low specific thrust is achieved by replacing the multi-stage fan with a single-stage unit. Unlike some military engines, modern civil turbofans lack stationary inlet guide vanes in front of the fan rotor. The fan is scaled to achieve the desired net thrust. The core (or gas generator) of the engine must generate enough power to drive the fan at its rated mass flow and pressure ratio. Improvements in turbine cooling/material technology allow for a higher (HP) turbine rotor inlet temperature, which allows a smaller (and lighter) core, potentially improving the core thermal efficiency. Reducing the core mass flow tends to increase the load on the LP turbine, so this unit may require additional stages to reduce the average stage loading and to maintain LP turbine efficiency. Reducing core flow also increases bypass ratio. Bypass ratios greater than 5:1 are increasingly common; the Pratt & Whitney PW1000G, which entered commercial service in 2016, attains 12.5:1. Further improvements in core thermal efficiency can be achieved by raising the overall pressure ratio of the core. Improvements in blade aerodynamics can reduce the number of extra compressor stages required, and variable geometry stators enable high-pressure-ratio compressors to work surge-free at all throttle settings. The first (experimental) high-bypass turbofan engine was the AVCO-Lycoming PLF1A-2, a Honeywell T55 turboshaft-derived engine that was first run in February 1962. The PLF1A-2 had a geared fan stage, produced a static thrust of , and had a bypass ratio of 6:1. The General Electric TF39 became the first production model, designed to power the Lockheed C-5 Galaxy military transport aircraft. 
The civil General Electric CF6 engine used a derived design. Other high-bypass turbofans are the Pratt & Whitney JT9D, the three-shaft Rolls-Royce RB211 and the CFM International CFM56; also the smaller TF34. More recent large high-bypass turbofans include the Pratt & Whitney PW4000, the three-shaft Rolls-Royce Trent, the General Electric GE90/GEnx and the GP7000, produced jointly by GE and P&W. The Pratt & Whitney JT9D engine was the first high bypass ratio jet engine to power a wide-body airliner. The lower the specific thrust of a turbofan, the lower the mean jet outlet velocity, which in turn translates into a high thrust lapse rate (i.e. decreasing thrust with increasing flight speed). See technical discussion below, item 2. Consequently, an engine sized to propel an aircraft at high subsonic flight speed (e.g., Mach 0.83) generates a relatively high thrust at low flight speed, thus enhancing runway performance. Low specific thrust engines tend to have a high bypass ratio, but this is also a function of the temperature of the turbine system. The turbofans on twin-engined transport aircraft produce enough take-off thrust to continue a take-off on one engine if the other engine shuts down after a critical point in the take-off run. From that point on the aircraft has less than half the thrust compared to two operating engines because the non-functioning engine is a source of drag. Modern twin engined airliners normally climb very steeply immediately after take-off. If one engine shuts down, the climb-out is much shallower, but sufficient to clear obstacles in the flightpath. The Soviet Union's engine technology was less advanced than the West's, and its first wide-body aircraft, the Ilyushin Il-86, was powered by low-bypass engines. The Yakovlev Yak-42, a medium-range, rear-engined aircraft seating up to 120 passengers, introduced in 1980, was the first Soviet aircraft to use high-bypass engines. Turbofan configurations Turbofan engines come in a variety of engine configurations. For a given engine cycle (i.e., same airflow, bypass ratio, fan pressure ratio, overall pressure ratio and HP turbine rotor inlet temperature), the choice of turbofan configuration has little impact upon the design point performance (e.g., net thrust, SFC), as long as overall component performance is maintained. Off-design performance and stability is, however, affected by engine configuration. The basic element of a turbofan is a spool, a single combination of fan/compressor, turbine and shaft rotating at a single speed. For a given pressure ratio, the surge margin can be increased by two different design paths: Splitting the compressor into two smaller spools rotating at different speeds, as with the Pratt & Whitney J57; or Making the stator vane pitch adjustable, typically in the front stages, as with the J79. Most modern western civil turbofans employ a relatively high-pressure-ratio high-pressure (HP) compressor, with many rows of variable stators to control surge margin at low rpm. In the three-spool RB211/Trent the core compression system is split into two, with the IP compressor, which supercharges the HP compressor, being on a different coaxial shaft and driven by a separate (IP) turbine. As the HP compressor has a modest pressure ratio its speed can be reduced surge-free, without employing variable geometry. However, because a shallow IP compressor working line is inevitable, the IPC has one stage of variable geometry on all variants except the −535, which has none. 
Single-shaft turbofan Although far from common, the single-shaft turbofan is probably the simplest configuration, comprising a fan and high-pressure compressor driven by a single turbine unit, all on the same spool. The Snecma M53, which powers Dassault Mirage 2000 fighter aircraft, is an example of a single-shaft turbofan. Despite the simplicity of the turbomachinery configuration, the M53 requires a variable area mixer to facilitate part-throttle operation. Aft-fan turbofan One of the earliest turbofans was a derivative of the General Electric J79 turbojet, known as the CJ805-23, which featured an integrated aft fan/low-pressure (LP) turbine unit located in the turbojet exhaust jetpipe. Hot gas from the turbojet turbine exhaust expanded through the LP turbine, the fan blades being a radial extension of the turbine blades. This arrangement introduces an additional gas leakage path compared to a front-fan configuration and was a problem with this engine with higher-pressure turbine gas leaking into the fan airflow. An aft-fan configuration was later used for the General Electric GE36 UDF (propfan) demonstrator of the early 1980s. In 1971 a concept was put forward by the NASA Lewis Research Center for a supersonic transport engine which operated as an aft-fan turbofan at take-off and subsonic speeds and a turbojet at higher speeds. This would give the low noise and high thrust characteristics of a turbofan at take-off, together with turbofan high propulsive efficiency at subsonic flight speeds. It would have the high propulsive efficiency of a turbojet at supersonic cruise speeds. Basic two-spool Many turbofans have at least basic two-spool configuration where the fan is on a separate low pressure (LP) spool, running concentrically with the compressor or high pressure (HP) spool; the LP spool runs at a lower angular velocity, while the HP spool turns faster and its compressor further compresses part of the air for combustion. The BR710 is typical of this configuration. At the smaller thrust sizes, instead of all-axial blading, the HP compressor configuration may be axial-centrifugal (e.g., CFE CFE738), double-centrifugal or even diagonal/centrifugal (e.g. Pratt & Whitney Canada PW600). Boosted two-spool Higher overall pressure ratios can be achieved by either raising the HP compressor pressure ratio or adding compressor (non-bypass) stages to the LP spool, between the fan and the HP compressor, to boost the latter. All of the large American turbofans (e.g. General Electric CF6, GE90, GE9X and GEnx plus Pratt & Whitney JT9D and PW4000) use booster stages. The Rolls-Royce BR715 is another example. The high bypass ratios used in modern civil turbofans tend to reduce the relative diameter of the booster stages, reducing their mean tip speed. Consequently, more booster stages are required to develop the necessary pressure rise. Three-spool Rolls-Royce chose a three-spool configuration for their large civil turbofans (i.e. the RB211 and Trent families), where the booster stages of a boosted two-spool configuration are separated into an intermediate pressure (IP) spool, driven by its own turbine. The first three-spool engine was the earlier Rolls-Royce RB.203 Trent of 1967. The Garrett ATF3, powering the Dassault Falcon 20 business jet, has an unusual three spool layout with an aft spool not concentric with the two others. Ivchenko Design Bureau chose the same configuration as Rolls-Royce for their Lotarev D-36 engine, followed by Lotarev/Progress D-18T and Progress D-436. 
The Turbo-Union RB199 military turbofan also has a three-spool configuration, as do the military Kuznetsov NK-25 and NK-321. Geared fan As bypass ratio increases, the fan blade tip speed increases relative to the LPT blade speed. This will reduce the LPT blade speed, requiring more turbine stages to extract enough energy to drive the fan. Introducing a (planetary) reduction gearbox, with a suitable gear ratio, between the LP shaft and the fan enables both the fan and LP turbine to operate at their optimum speeds. Examples of this configuration are the long-established Garrett TFE731, the Honeywell ALF 502/507, and the recent Pratt & Whitney PW1000G. Military turbofans Most of the configurations discussed above are used in civilian turbofans, while modern military turbofans (e.g., Snecma M88) are usually basic two-spool. High-pressure turbine Most civil turbofans use a high-efficiency, 2-stage HP turbine to drive the HP compressor. The CFM International CFM56 uses an alternative approach: a single-stage, high-work unit. While this approach is probably less efficient, there are savings on cooling air, weight and cost. In the RB211 and Trent 3-spool engine series, the HP compressor pressure ratio is modest so only a single HP turbine stage is required. Modern military turbofans also tend to use a single HP turbine stage and a modest HP compressor. Low-pressure turbine Modern civil turbofans have multi-stage LP turbines (anywhere from 3 to 7). The number of stages required depends on the engine cycle bypass ratio and the boost (on boosted two-spools). A geared fan may reduce the number of required LPT stages in some applications. Because of the much lower bypass ratios employed, military turbofans require only one or two LP turbine stages. Overall performance Cycle improvements Consider a mixed turbofan with a fixed bypass ratio and airflow. Increasing the overall pressure ratio of the compression system raises the combustor entry temperature. Therefore, at a fixed fuel flow there is an increase in (HP) turbine rotor inlet temperature. Although the higher temperature rise across the compression system implies a larger temperature drop over the turbine system, the mixed nozzle temperature is unaffected, because the same amount of heat is being added to the system. There is, however, a rise in nozzle pressure, because overall pressure ratio increases faster than the turbine expansion ratio, causing an increase in the hot mixer entry pressure. Consequently, net thrust increases, whilst specific fuel consumption (fuel flow/net thrust) decreases. A similar trend occurs with unmixed turbofans. Turbofan engines can be made more fuel efficient by raising overall pressure ratio and turbine rotor inlet temperature in unison. However, better turbine materials or improved vane/blade cooling are required to cope with increases in both turbine rotor inlet temperature and compressor delivery temperature. Increasing the latter may require better compressor materials. The overall pressure ratio can be increased by improving fan (or) LP compressor pressure ratio or HP compressor pressure ratio. If the latter is held constant, the increase in (HP) compressor delivery temperature (from raising overall pressure ratio) implies an increase in HP mechanical speed. However, stressing considerations might limit this parameter, implying, despite an increase in overall pressure ratio, a reduction in HP compressor pressure ratio. 
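As a rough illustration of the geared-fan point above, the sketch below compares fan tip speed with and without a reduction gearbox. The shaft speed, gear ratio and fan diameter are arbitrary assumed values, not those of any particular engine.

import math

# Assumed values for illustration only.
lp_shaft_rpm = 9000.0      # LP turbine shaft speed
gear_ratio = 3.0           # reduction ratio between LP shaft and fan
fan_diameter = 2.0         # metres

def tip_speed(rpm, diameter):
    # Blade tip speed in m/s for a given rotational speed and diameter.
    return math.pi * diameter * rpm / 60.0

print(f"direct drive: {tip_speed(lp_shaft_rpm, fan_diameter):.0f} m/s")
print(f"geared fan  : {tip_speed(lp_shaft_rpm / gear_ratio, fan_diameter):.0f} m/s")

The gearbox lets the turbine keep spinning fast, so fewer turbine stages are needed, while the large fan's tip speed stays within its aerodynamic limit.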
According to simple theory, if the ratio of turbine rotor inlet temperature/(HP) compressor delivery temperature is maintained, the HP turbine throat area can be retained. However, this assumes that cycle improvements are obtained, while retaining the datum (HP) compressor exit flow function (non-dimensional flow). In practice, changes to the non-dimensional speed of the (HP) compressor and cooling bleed extraction would probably make this assumption invalid, making some adjustment to HP turbine throat area unavoidable. This means the HP turbine nozzle guide vanes would have to be different from the original. In all probability, the downstream LP turbine nozzle guide vanes would have to be changed anyway. Thrust growth Thrust growth is obtained by increasing core power. There are two basic routes available: hot route: increase HP turbine rotor inlet temperature cold route: increase core mass flow Both routes require an increase in the combustor fuel flow and, therefore, the heat energy added to the core stream. The hot route may require changes in turbine blade/vane materials or better blade/vane cooling. The cold route can be obtained by one of the following: adding booster stages to the LP/IP compression adding a zero-stage to the HP compression improving the compression process, without adding stages (e.g. higher fan hub pressure ratio) all of which increase both overall pressure ratio and core airflow. Alternatively, the core size can be increased, to raise core airflow, without changing overall pressure ratio. This route is expensive, since a new (upflowed) turbine system (and possibly a larger IP compressor) is also required. Changes must also be made to the fan to absorb the extra core power. On a civil engine, jet noise considerations mean that any significant increase in take-off thrust must be accompanied by a corresponding increase in fan mass flow (to maintain a T/O specific thrust of about 30 lbf/lb/s). Technical discussion Specific thrust (net thrust/intake airflow) is an important parameter for turbofans and jet engines in general. Imagine a fan (driven by an appropriately sized electric motor) operating within a pipe, which is connected to a propelling nozzle. It is fairly obvious, the higher the fan pressure ratio (fan discharge pressure/fan inlet pressure), the higher the jet velocity and the corresponding specific thrust. Now imagine we replace this set-up with an equivalent turbofan – same airflow and same fan pressure ratio. Obviously, the core of the turbofan must produce sufficient power to drive the fan via the low-pressure (LP) turbine. If we choose a low (HP) turbine inlet temperature for the gas generator, the core airflow needs to be relatively high to compensate. The corresponding bypass ratio is therefore relatively low. If we raise the turbine inlet temperature, the core airflow can be smaller, thus increasing bypass ratio. Raising turbine inlet temperature tends to increase thermal efficiency and, therefore, improve fuel efficiency. Naturally, as altitude increases, there is a decrease in air density and, therefore, the net thrust of an engine. There is also a flight speed effect, termed thrust lapse rate. Consider the approximate equation for net thrust again: With a high specific thrust (e.g., fighter) engine, the jet velocity is relatively high, so intuitively one can see that increases in flight velocity have less of an impact upon net thrust than a medium specific thrust (e.g., trainer) engine, where the jet velocity is lower. 
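A rough numerical sketch of that comparison, using the approximate net-thrust relation F = ṁ (v_jet − v_flight) referred to above; both engines are given the same assumed static thrust so that only the lapse behaviour differs, and every number is invented for illustration.

# Two engines sized for the same static thrust (about 90 kN, assumed):
# a high-specific-thrust engine (small airflow, fast jet) and a
# low-specific-thrust engine (large airflow, slow jet).
def net_thrust(m_dot, v_jet, v_flight):
    return m_dot * (v_jet - v_flight)

engines = {"high specific thrust": (100.0, 900.0),   # kg/s, m/s (assumed)
           "low specific thrust": (300.0, 300.0)}

for v_flight in (0.0, 100.0, 200.0):                  # flight speeds in m/s
    row = ", ".join(f"{name}: {net_thrust(m, vj, v_flight)/1000:.0f} kN"
                    for name, (m, vj) in engines.items())
    print(f"v = {v_flight:3.0f} m/s -> {row}")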
The impact of thrust lapse rate upon a low specific thrust (e.g., civil) engine is even more severe. At high flight speeds, high-specific-thrust engines can pick up net thrust through the ram rise in the intake, but this effect tends to diminish at supersonic speeds because of shock wave losses. Thrust growth on civil turbofans is usually obtained by increasing fan airflow, thus preventing the jet noise becoming too high. However, the larger fan airflow requires more power from the core. This can be achieved by raising the overall pressure ratio (combustor inlet pressure/intake delivery pressure) to induce more airflow into the core and by increasing turbine inlet temperature. Together, these parameters tend to increase core thermal efficiency and improve fuel efficiency. Some high-bypass-ratio civil turbofans use an extremely low area ratio (less than 1.01), convergent-divergent, nozzle on the bypass (or mixed exhaust) stream, to control the fan working line. The nozzle acts as if it has variable geometry. At low flight speeds the nozzle is unchoked (less than a Mach number of unity), so the exhaust gas speeds up as it approaches the throat and then slows down slightly as it reaches the divergent section. Consequently, the nozzle exit area controls the fan match and, being larger than the throat, pulls the fan working line slightly away from surge. At higher flight speeds, the ram rise in the intake increases nozzle pressure ratio to the point where the throat becomes choked (M=1.0). Under these circumstances, the throat area dictates the fan match and, being smaller than the exit, pushes the fan working line slightly towards surge. This is not a problem, since fan surge margin is much better at high flight speeds. The off-design behaviour of turbofans is illustrated under compressor map and turbine map. Because modern civil turbofans operate at low specific thrust, they require only a single fan stage to develop the required fan pressure ratio. The desired overall pressure ratio for the engine cycle is usually achieved by multiple axial stages on the core compression. Rolls-Royce tend to split the core compression into two with an intermediate pressure (IP) supercharging the HP compressor, both units being driven by turbines with a single stage, mounted on separate shafts. Consequently, the HP compressor need develop only a modest pressure ratio (e.g., ~4.5:1). US civil engines use much higher HP compressor pressure ratios (e.g., ~23:1 on the General Electric GE90) and tend to be driven by a two-stage HP turbine. Even so, there are usually a few IP axial stages mounted on the LP shaft, behind the fan, to further supercharge the core compression system. Civil engines have multi-stage LP turbines, the number of stages being determined by the bypass ratio, the amount of IP compression on the LP shaft and the LP turbine blade speed. Because military engines usually have to be able to fly very fast at sea level, the limit on HP compressor delivery temperature is reached at a fairly modest design overall pressure ratio, compared with that of a civil engine. Also the fan pressure ratio is relatively high, to achieve a medium to high specific thrust. Consequently, modern military turbofans usually have only 5 or 6 HP compressor stages and require only a single-stage HP turbine. Low-bypass-ratio military turbofans usually have one LP turbine stage, but higher bypass ratio engines need two stages. 
In theory, by adding IP compressor stages, a modern military turbofan HP compressor could be used in a civil turbofan derivative, but the core would tend to be too small for high thrust applications. Improvements Aerodynamic modelling Aerodynamics is a mix of subsonic, transonic and supersonic airflow on a single fan/gas compressor blade in a modern turbofan. The airflow past the blades must be maintained within close angular limits to keep the air flowing against an increasing pressure. Otherwise air will be rejected back out of the intake. The Full Authority Digital Engine Control (FADEC) needs accurate data for controlling the engine. The critical turbine inlet temperature (TIT) is too harsh an environment, at and , for reliable sensors. Therefore, during development of a new engine type a relation is established between a more easily measured temperature like exhaust gas temperature and the TIT. Monitoring the exhaust gas temperature is then used to make sure the engine does not run too hot. Blade technology A turbine blade with a weight of is subjected to , at and a centrifugal force of , well above the point of plastic deformation and even above the melting point. Exotic alloys, sophisticated air cooling schemes and special mechanical design are needed to keep the physical stresses within the strength of the material. Rotating seals must withstand harsh conditions for 10 years, 20,000 missions and rotating at 10 to 20,000 rpm. Fan blades Fan blades have been growing as jet engines have been getting bigger: each fan blade carries the equivalent of nine double-decker buses and swallows air the equivalent volume of a squash court every second. Advances in computational fluid dynamics (CFD) modelling have permitted complex, 3D curved shapes with very wide chord, keeping the fan capabilities while minimizing the blade count to lower costs. Coincidentally, the bypass ratio grew to achieve higher propulsive efficiency and the fan diameter increased. Rolls-Royce pioneered the hollow, titanium wide-chord fan blade in the 1980s for aerodynamic efficiency and foreign object damage resistance in the RB211 then for the Trent. GE Aviation introduced carbon fiber composite fan blades on the GE90 in 1995, manufactured since 2017 with a carbon-fiber tape-layer process. GE partner Safran developed a 3D woven technology with Albany Composites for the CFM56 and CFM LEAP engines. Future progress Engine cores are shrinking as they operate at higher pressure ratios and become more efficient and smaller compared to the fan as bypass ratios increase. Blade tip clearances are more difficult to maintain at the exit of the high-pressure compressor where blades are high or less; backbone bending further affects clearance control as the core is proportionately longer and thinner and the fan to low-pressure turbine driveshaft space is constrained within the core. Pratt & Whitney VP technology and environment Alan Epstein argued "Over the history of commercial aviation, we have gone from 20% to 40% [cruise efficiency], and there is a consensus among the engine community that we can probably get to 60%". Geared turbofans and further fan pressure ratio reductions may continue to improve propulsive efficiency. The second phase of the FAA's Continuous Lower Energy, Emissions and Noise (CLEEN) program is targeting for the late 2020s reductions of 33% fuel burn, 60% emissions and 32 dB EPNdb noise compared with the 2000s state-of-the-art. 
In summer 2017 at NASA Glenn Research Center in Cleveland, Ohio, Pratt & Whitney finished testing a very-low-pressure-ratio fan on a PW1000G, resembling an open rotor with fewer blades than the PW1000G's 20. The weight and size of the nacelle would be reduced by a short duct inlet, imposing higher aerodynamic turning loads on the blades and leaving less space for soundproofing, but a lower-pressure-ratio fan is slower. UTC Aerospace Systems Aerostructures will have a full-scale ground test in 2019 of its low-drag Integrated Propulsion System with a thrust reverser, improving fuel burn by 1% and with 2.5–3 EPNdB lower noise. Safran expects to deliver another 10–15% in fuel efficiency through the mid-2020s before reaching an asymptote, and next will have to increase the bypass ratio to 35:1 instead of 11:1 for the CFM LEAP. It is demonstrating a counter-rotating open rotor unducted fan (propfan) in Istres, France, under the European Clean Sky technology program. Modeling advances and high specific strength materials may help it succeed where previous attempts failed. When noise levels are within existing standards and similar to the LEAP engine, 15% lower fuel burn will be available, and for that Safran is testing its controls, vibration and operation, while airframe integration is still challenging. For GE Aviation, the energy density of jet fuel still maximises the Breguet range equation and higher pressure ratio cores; lower pressure ratio fans, low-loss inlets and lighter structures can further improve thermal, transfer and propulsive efficiency. Under the U.S. Air Force's Adaptive Engine Transition Program, adaptive thermodynamic cycles will be used for the sixth-generation jet fighter, based on a modified Brayton cycle and constant volume combustion. Additive manufacturing in the advanced turboprop will reduce weight by 5% and fuel burn by 20%. Rotating and static ceramic matrix composite (CMC) parts operate hotter than metal and weigh one-third as much. With $21.9 million from the Air Force Research Laboratory, GE is investing $200 million in a CMC facility in Huntsville, Alabama, in addition to its Asheville, North Carolina site, mass-producing silicon carbide matrix with silicon-carbide fibers in 2018. CMCs will be used ten times more by the mid-2020s: the CFM LEAP requires 18 CMC turbine shrouds per engine and the GE9X will use them in the combustor and for 42 HP turbine nozzles. Rolls-Royce plc aims for a 60:1 pressure ratio core for the 2020s UltraFan and began ground tests of its power gearbox for 15:1 bypass ratios. Nearly stoichiometric turbine entry temperature approaches the theoretical limit and its impact on emissions has to be balanced with environmental performance goals. Open rotors, lower pressure ratio fans and potentially distributed propulsion offer more room for better propulsive efficiency. Exotic cycles, heat exchangers and pressure gain/constant volume combustion may improve thermodynamic efficiency. Additive manufacturing could be an enabler for intercoolers and recuperators. Closer airframe integration and hybrid or electric aircraft can be combined with gas turbines.
Rolls-Royce engines have a 72–82% propulsive efficiency and 42–49% thermal efficiency at Mach 0.8, and aim for theoretical limits of 95% for open rotor propulsive efficiency and 60% for thermal efficiency with stoichiometric turbine entry temperature and an 80:1 overall pressure ratio. As teething troubles may not show up until several thousand hours, the latest turbofans' technical problems disrupt airline operations and manufacturer deliveries while production rates rise sharply. Trent 1000 cracked blades grounded almost 50 Boeing 787s and reduced ETOPS to 2.3 hours, down from 5.5, costing Rolls-Royce plc almost $950 million. PW1000G knife-edge seal fractures have caused Pratt & Whitney to fall behind in deliveries, leaving about 100 engineless A320neos waiting for their powerplants. The CFM LEAP introduction was smoother, but a ceramic composite turbine coating was prematurely lost, necessitating a new design, causing 60 A320neo engine removals for modification and delaying deliveries by up to six weeks. On a widebody, Safran estimates 5–10% of fuel could be saved by reducing power intake for hydraulic systems, while swapping to electrical power could save 30% of weight, as initiated on the Boeing 787, while Rolls-Royce plc hopes for up to 5%. Manufacturers The turbofan engine market is dominated by General Electric, Rolls-Royce plc and Pratt & Whitney, in order of market share. General Electric and Safran of France have a joint venture, CFM International. Pratt & Whitney also has a joint venture, International Aero Engines, with Japanese Aero Engine Corporation and MTU Aero Engines of Germany, specializing in engines for the Airbus A320 family. Pratt & Whitney and General Electric have a joint venture, Engine Alliance, selling a range of engines for aircraft such as the Airbus A380. For airliners and cargo aircraft, the in-service fleet in 2016 was 60,000 engines and should grow to 103,000 in 2035 with 86,500 deliveries, according to Flight Global. A majority will be medium-thrust engines for narrow-body aircraft, with 54,000 deliveries for a fleet growing from 28,500 to 61,000. High-thrust engines for wide-body aircraft, worth 40–45% of the market by value, will grow from 12,700 engines to over 21,000 with 18,500 deliveries. The fleet of regional jet engines below 20,000 lb (89 kN) will grow from 7,500 to 9,000 and the fleet of turboprops for airliners will increase from 9,400 to 10,200. The manufacturers' market share should be led by CFM with 44%, followed by Pratt & Whitney with 29% and then Rolls-Royce and General Electric with 10% each. Commercial turbofans in production Extreme bypass jet engines In the 1970s, Rolls-Royce/SNECMA tested an M45SD-02 turbofan fitted with variable-pitch fan blades to improve handling at ultralow fan pressure ratios and to provide thrust reverse down to zero aircraft speed. The engine was aimed at ultraquiet STOL aircraft operating from city-centre airports. In a bid for increased efficiency with speed, a development of the turbofan and turboprop known as a propfan engine was created that had an unducted fan. The fan blades are situated outside of the duct, so that it appears like a turboprop with wide scimitar-like blades. Both General Electric and Pratt & Whitney/Allison demonstrated propfan engines in the 1980s. Excessive cabin noise and relatively cheap jet fuel prevented the engines being put into service. The Progress D-27 propfan, developed in the U.S.S.R., was the only propfan engine fitted to a production aircraft.
Terminology Afterburner jetpipe equipped for afterburning Augmentor afterburner for turbofan with burning in hot and cold flows Bypass that part of the engine as distinct from the core in terms of components and airflow, eg that part of fan blading (fan outer) and stators which pass bypass air, bypass duct, bypass nozzle Bypass ratio bypass air mass flow /core air mass flow Core that part of the engine as distinct from the bypass in terms of components and airflow, eg core cowl, core nozzle, core airflow and associated machinery, combustor and fuel system Core power also known as "available energy" or "gas horsepower". It is used to measure the theoretical (isentropic expansion) shaft work available from a gas generator or core by expanding hot, high pressure gas to ambient pressure. Since the power depends on the pressure and temperature of the gas (and the ambient pressure) a related figure of merit for thrust-producing engines is one which measures the thrust-producing potential from hot, high pressure gas and known as "stream thrust". It is obtained by calculating the velocity obtained with isentropic expansion to atmospheric pressure. The significance of the thrust obtained appears when multiplied by the aircraft velocity to give the thrust work. The thrust work which is potentially available is far less than the gas horsepower due to the increasing waste in the exhaust kinetic energy with increasing pressure and temperature before expansion to atmospheric pressure. The two are related by the propulsive efficiency, a measure of the energy wasted as a result of producing a force (ie thrust) in a fluid by increasing the speed (ie momentum) of the fluid. Dry engine ratings/ throttle lever positions below afterburning selection EGT exhaust gas temperature EPR engine pressure ratio Fan turbofan LP compressor Fanjet turbofan or aircraft powered by turbofan (colloquial) Fan pressure ratio fan outlet total pressure/fan inlet total pressure Flex temp At reduced take-off weights commercial aircraft can use reduced thrust which increases engine life and reduces maintenance costs. Flex temperature is a higher than actual outside air temperature (OAT) which is input to the engine monitoring computer to achieve the required reduced thrust (also known as "assumed temperature thrust reduction"). Gas generator that part of the engine core which provides the hot, high pressure gas for fan-driving turbines (turbofan), for propelling nozzles (turbojet), for propeller- and rotor-driving turbines (turboprop and turboshaft), for industrial and marine power turbines HP high-pressure Intake ram drag Loss in momentum of engine stream tube from freestream to intake entrance, ie amount of energy imparted to air required to accelerate air from a stationary atmosphere to aircraft speed. IEPR integrated engine pressure ratio IP intermediate pressure LP low-pressure Net thrust nozzle thrust in stationary air (gross thrust) – engine stream tube ram drag (loss in momentum from freestream to intake entrance, ie amount of energy imparted to air required to accelerate air from a stationary atmosphere to aircraft speed). This is the thrust acting on the airframe. 
Overall pressure ratio number of times the pressure increases due to ram compression and work done by the compressor stages Overall efficiency thermal efficiency × propulsive efficiency Propulsive efficiency propulsive power/rate of production of propulsive kinetic energy (maximum propulsive efficiency occurs when jet velocity equals flight velocity, which implies zero net thrust!) Specific fuel consumption (SFC) total fuel flow/net thrust (proportional to flight velocity/overall thermal efficiency) Spooling up increase in RPM (colloquial) Spooling down decrease in RPM (colloquial) Stage loading For a turbine, the purpose of which is to produce power, the loading is an indicator of power developed per lb/sec of gas (specific power). A turbine stage turns the gas from an axial direction and speeds it up (in the nozzle guide vanes) to turn the rotor most effectively (rotor blades must produce high lift), the proviso being that this is done efficiently, ie with acceptable losses. For a compressor stage, the purpose of which is to produce a pressure rise, a diffusion process is used. How much diffusion may be allowed (and pressure rise obtained) before unacceptable flow separation occurs (ie losses) may be regarded as a loading limit. Stagnation pressure also known as total pressure; pressure of the fluid if all the kinetic energy were to be converted into pressure isentropically; sum of static pressure and dynamic pressure Static pressure pressure of the fluid which is associated not with its motion but with its state or, alternatively, pressure due to the random motion of the fluid molecules that would be felt or measured if moving with the flow Specific thrust net thrust/intake airflow Thermal efficiency rate of production of propulsive kinetic energy/fuel power Total fuel flow combustor (plus any afterburner) fuel flow rate (e.g., lb/s or g/s) Total pressure also known as stagnation pressure; sum of static pressure and dynamic pressure; pressure of the fluid if all the kinetic energy were to be converted into pressure isentropically Turbine rotor inlet temperature maximum cycle temperature, ie temperature at which work transfer takes place
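Several of the glossary entries above are simple ratios. The short sketch below evaluates them for one invented set of numbers; every input value is an assumption used only to show how the figures of merit relate to each other.

# Figures of merit from the terminology above, for assumed example values.
net_thrust = 70_000.0      # N
intake_airflow = 700.0     # kg/s
fuel_flow = 0.70           # kg/s (combustor plus any afterburner)
thermal_eff = 0.45
propulsive_eff = 0.80

specific_thrust = net_thrust / intake_airflow          # N per kg/s of air
sfc = fuel_flow / net_thrust                           # kg of fuel per N·s
overall_eff = thermal_eff * propulsive_eff

print(f"specific thrust    : {specific_thrust:.0f} N/(kg/s)")
print(f"SFC                : {sfc*1e6:.0f} mg/(N*s)")
print(f"overall efficiency : {overall_eff:.0%}")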
Technology
Aircraft components
null
103118
https://en.wikipedia.org/wiki/Distributive%20property
Distributive property
In mathematics, the distributive property of binary operations is a generalization of the distributive law, which asserts that the equality x · (y + z) = (x · y) + (x · z) is always true in elementary algebra. For example, in elementary arithmetic, one has 2 · (1 + 3) = (2 · 1) + (2 · 3). Therefore, one would say that multiplication distributes over addition. This basic property of numbers is part of the definition of most algebraic structures that have two operations called addition and multiplication, such as complex numbers, polynomials, matrices, rings, and fields. It is also encountered in Boolean algebra and mathematical logic, where each of the logical and (denoted ∧) and the logical or (denoted ∨) distributes over the other. Definition Given a set S and two binary operators ∗ and + on S, the operation ∗ is left-distributive over + (or with respect to +) if, given any elements x, y and z of S, x ∗ (y + z) = (x ∗ y) + (x ∗ z); the operation ∗ is right-distributive over + if, given any elements x, y and z of S, (y + z) ∗ x = (y ∗ x) + (z ∗ x); and the operation ∗ is distributive over + if it is left- and right-distributive. When ∗ is commutative, the three conditions above are logically equivalent. Meaning The operators used for examples in this section are those of the usual addition (+) and multiplication (·). If the operation denoted · is not commutative, there is a distinction between left-distributivity, a · (b ± c) = a · b ± a · c, and right-distributivity, (a ± b) · c = a · c ± b · c. In either case, the distributive property can be described in words as: To multiply a sum (or difference) by a factor, each summand (or minuend and subtrahend) is multiplied by this factor and the resulting products are added (or subtracted). If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies right-distributivity and vice versa, and one talks simply of distributivity. One example of an operation that is "only" right-distributive is division, which is not commutative: (a ± b) ÷ c = a ÷ c ± b ÷ c. In this case, left-distributivity does not apply: a ÷ (b ± c) does not in general equal a ÷ b ± a ÷ c. The distributive laws are among the axioms for rings (like the ring of integers) and fields (like the field of rational numbers). Here multiplication is distributive over addition, but addition is not distributive over multiplication. Examples of structures with two operations that are each distributive over the other are Boolean algebras such as the algebra of sets or the switching algebra. Multiplying sums can be put into words as follows: When a sum is multiplied by a sum, multiply each summand of a sum with each summand of the other sum (keeping track of signs) then add up all of the resulting products. Examples Real numbers In the following examples, the use of the distributive law on the set of real numbers is illustrated. When multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law. Matrices The distributive law is valid for matrix multiplication. More precisely, (A + B) · C = A · C + B · C for all l×m-matrices A, B and m×n-matrices C, as well as A · (B + C) = A · B + A · C for all l×m-matrices A and m×n-matrices B, C. Because the commutative property does not hold for matrix multiplication, the second law does not follow from the first law. In this case, they are two different laws. Other examples Multiplication of ordinal numbers, in contrast, is only left-distributive, not right-distributive. The cross product is left- and right-distributive over vector addition, though not commutative. The union of sets is distributive over intersection, and intersection is distributive over union. Logical disjunction ("or") is distributive over logical conjunction ("and"), and vice versa.
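A quick numerical check of the last two statements, using Python's built-in sets and booleans; the particular values are arbitrary and verify only the single cases they test, not the general laws.

# Union distributes over intersection, and intersection over union:
A, B, C = {1, 2}, {2, 3}, {3, 4}
print(A | (B & C) == (A | B) & (A | C))   # True
print(A & (B | C) == (A & B) | (A & C))   # True

# Disjunction distributes over conjunction, and vice versa:
p, q, r = True, False, True
print((p or (q and r)) == ((p or q) and (p or r)))   # True
print((p and (q or r)) == ((p and q) or (p and r)))  # True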
For real numbers (and for any totally ordered set), the maximum operation is distributive over the minimum operation, and vice versa: max(a, min(b, c)) = min(max(a, b), max(a, c)) and min(a, max(b, c)) = max(min(a, b), min(a, c)). For integers, the greatest common divisor is distributive over the least common multiple, and vice versa: gcd(a, lcm(b, c)) = lcm(gcd(a, b), gcd(a, c)) and lcm(a, gcd(b, c)) = gcd(lcm(a, b), lcm(a, c)). For real numbers, addition distributes over the maximum operation, and also over the minimum operation: a + max(b, c) = max(a + b, a + c) and a + min(b, c) = min(a + b, a + c). For binomial multiplication, distribution is sometimes referred to as the FOIL method (First terms, Outer, Inner, and Last), such as: (a + b) · (c + d) = ac + ad + bc + bd. In all semirings, including the complex numbers, the quaternions, polynomials, and matrices, multiplication distributes over addition. In all algebras over a field, including the octonions and other non-associative algebras, multiplication distributes over addition. Propositional logic Rule of replacement In standard truth-functional propositional logic, distribution in logical proofs uses two valid rules of replacement to expand individual occurrences of certain logical connectives, within some formula, into separate applications of those connectives across subformulas of the given formula. The rules are P ∧ (Q ∨ R) ⇔ (P ∧ Q) ∨ (P ∧ R) and P ∨ (Q ∧ R) ⇔ (P ∨ Q) ∧ (P ∨ R), where "⇔" is a metalogical symbol representing "can be replaced in a proof with" or "is logically equivalent to". Truth functional connectives Distributivity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that distributivity is a property of particular connectives. The following are truth-functional tautologies. Double distribution Distributivity and rounding In approximate arithmetic, such as floating-point arithmetic, the distributive property of multiplication (and division) over addition may fail because of the limitations of arithmetic precision. For example, the identity 1/3 + 1/3 + 1/3 = (1 + 1 + 1)/3 fails in decimal arithmetic, regardless of the number of significant digits. Methods such as banker's rounding may help in some cases, as may increasing the precision used, but ultimately some calculation errors are inevitable. In rings and other structures Distributivity is most commonly found in semirings, notably the particular cases of rings and distributive lattices. A semiring has two binary operations, commonly denoted + and ⋅, and requires that ⋅ must distribute over +. A ring is a semiring with additive inverses. A lattice is another kind of algebraic structure with two binary operations, ∧ and ∨. If either of these operations distributes over the other (say ∧ distributes over ∨), then the reverse also holds (∨ distributes over ∧), and the lattice is called distributive.
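Two small Python illustrations of points made above: the failure of distributivity under floating-point rounding, and the gcd/lcm distributivity checked for one particular triple of integers (a single check, not a proof).

# Floating-point rounding can break a*(b + c) == a*b + a*c:
a, b, c = 100.0, 0.1, 0.2
print(a * (b + c))                    # 30.000000000000004
print(a * b + a * c)                  # 30.0
print(a * (b + c) == a * b + a * c)   # False

# gcd distributes over lcm (and vice versa); checked for one case:
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

x, y, z = 12, 18, 30
print(gcd(x, lcm(y, z)) == lcm(gcd(x, y), gcd(x, z)))   # True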
Mathematics
Algebra
null
103127
https://en.wikipedia.org/wiki/Brute-force%20search
Brute-force search
In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically checking all possible candidates for whether or not each candidate satisfies the problem's statement. A brute-force algorithm that finds the divisors of a natural number n would enumerate all integers from 1 to n, and check whether each of them divides n without remainder. A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard and for each arrangement, check whether each (queen) piece can attack any other. While a brute-force search is simple to implement and will always find a solution if it exists, implementation costs are proportional to the number of candidate solutions, which in many practical problems tends to grow very quickly as the size of the problem increases (§Combinatorial explosion). Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than processing speed. This is the case, for example, in critical applications where any errors in the algorithm would have very serious consequences or when using a computer to prove a mathematical theorem. Brute-force search is also useful as a baseline method when benchmarking other algorithms or metaheuristics. Indeed, brute-force search can be viewed as the simplest metaheuristic. Brute-force search should not be confused with backtracking, where large sets of solutions can be discarded without being explicitly enumerated (as in the textbook computer solution to the eight queens problem above). The brute-force method for finding an item in a table (namely, check all entries of the latter, sequentially) is called linear search. Implementing the brute-force search Basic algorithm In order to apply brute-force search to a specific class of problems, one must implement four procedures: first, next, valid, and output. These procedures should take as a parameter the data P for the particular instance of the problem that is to be solved, and should do the following: first (P): generate a first candidate solution for P. next (P, c): generate the next candidate for P after the current one c. valid (P, c): check whether candidate c is a solution for P. output (P, c): use the solution c of P as appropriate to the application. The next procedure must also tell when there are no more candidates for the instance P, after the current one c. A convenient way to do that is to return a "null candidate", some conventional data value Λ that is distinct from any real candidate. Likewise the first procedure should return Λ if there are no candidates at all for the instance P. The brute-force method is then expressed by the algorithm c ← first(P) while c ≠ Λ do if valid(P,c) then output(P, c) c ← next(P, c) end while For example, when looking for the divisors of an integer n, the instance data P is the number n. The call first(n) should return the integer 1 if n ≥ 1, or Λ otherwise; the call next(n,c) should return c + 1 if c < n, and Λ otherwise; and valid(n,c) should return true if and only if c is a divisor of n. (In fact, if we choose Λ to be n + 1, the tests n ≥ 1 and c < n are unnecessary.) The brute-force search algorithm above will call output for every candidate that is a solution to the given instance P. The algorithm is easily modified to stop after finding the first solution, or a specified number of solutions; or after testing a specified number of candidates, or after spending a given amount of CPU time.
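A direct Python rendering of the scheme just described, using None to play the role of the null candidate Λ and the divisor-finding example from the text; this is a minimal sketch rather than an optimized implementation.

def brute_force(first, next_, valid, output, P):
    # Generic generate-and-test loop; None stands for the null candidate.
    c = first(P)
    while c is not None:
        if valid(P, c):
            output(P, c)
        c = next_(P, c)

# The four procedures for "find all divisors of n":
def first(n):
    return 1 if n >= 1 else None

def next_(n, c):
    return c + 1 if c < n else None

def valid(n, c):
    return n % c == 0

def output(n, c):
    print(f"{c} divides {n}")

brute_force(first, next_, valid, output, 12)   # prints 1, 2, 3, 4, 6 and 12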
Combinatorial explosion The main disadvantage of the brute-force method is that, for many real-world problems, the number of natural candidates is prohibitively large. For instance, if we look for the divisors of a number as described above, the number of candidates tested will be the given number n. So if n has sixteen decimal digits, say, the search will require executing at least 1015 computer instructions, which will take several days on a typical PC. If n is a random 64-bit natural number, which has about 19 decimal digits on the average, the search will take about 10 years. This steep growth in the number of candidates, as the size of the data increases, occurs in all sorts of problems. For instance, if we are seeking a particular rearrangement of 10 letters, then we have 10! = 3,628,800 candidates to consider, which a typical PC can generate and test in less than one second. However, adding one more letterwhich is only a 10% increase in the data sizewill multiply the number of candidates by 11, a 1000% increase. For 20 letters, the number of candidates is 20!, which is about 2.4×1018 or 2.4 quintillion; and the search will take about 10 years. This unwelcome phenomenon is commonly called the combinatorial explosion, or the curse of dimensionality. One example of a case where combinatorial complexity leads to solvability limit is in solving chess. Chess is not a solved game. In 2005, all chess game endings with six pieces or less were solved, showing the result of each position if played perfectly. It took ten more years to complete the tablebase with one more chess piece added, thus completing a 7-piece tablebase. Adding one more piece to a chess ending (thus making an 8-piece tablebase) is considered intractable due to the added combinatorial complexity. Speeding up brute-force searches One way to speed up a brute-force algorithm is to reduce the search space, that is, the set of candidate solutions, by using heuristics specific to the problem class. For example, in the eight queens problem the challenge is to place eight queens on a standard chessboard so that no queen attacks any other. Since each queen can be placed in any of the 64 squares, in principle there are 648 = 281,474,976,710,656 possibilities to consider. However, because the queens are all alike, and that no two queens can be placed on the same square, the candidates are all possible ways of choosing of a set of 8 squares from the set all 64 squares; which means 64 choose 8 = 64!/(56!*8!) = 4,426,165,368 candidate solutionsabout 1/60,000 of the previous estimate. Further, no arrangement with two queens on the same row or the same column can be a solution. Therefore, we can further restrict the set of candidates to those arrangements. As this example shows, a little bit of analysis will often lead to dramatic reductions in the number of candidate solutions, and may turn an intractable problem into a trivial one. In some cases, the analysis may reduce the candidates to the set of all valid solutions; that is, it may yield an algorithm that directly enumerates all the desired solutions (or finds one solution, as appropriate), without wasting time with tests and the generation of invalid candidates. For example, for the problem "find all integers between 1 and 1,000,000 that are evenly divisible by 417" a naive brute-force solution would generate all integers in the range, testing each of them for divisibility. 
However, that problem can be solved much more efficiently by starting with 417 and repeatedly adding 417 until the number exceeds 1,000,000, which takes only 2398 (= 1,000,000 ÷ 417) steps, and no tests. Reordering the search space In applications that require only one solution, rather than all solutions, the expected running time of a brute force search will often depend on the order in which the candidates are tested. As a general rule, one should test the most promising candidates first. For example, when searching for a proper divisor of a random number n, it is better to enumerate the candidate divisors in increasing order, from 2 to , than the other way around, because the probability that n is divisible by c is 1/c. Moreover, the probability of a candidate being valid is often affected by the previous failed trials. For example, consider the problem of finding a 1 bit in a given 1000-bit string P. In this case, the candidate solutions are the indices 1 to 1000, and a candidate c is valid if P[c] = 1. Now, suppose that the first bit of P is equally likely to be 0 or 1, but each bit thereafter is equal to the previous one with 90% probability. If the candidates are enumerated in increasing order, 1 to 1000, the number t of candidates examined before success will be about 6, on the average. On the other hand, if the candidates are enumerated in the order 1, 11, 21, 31, ..., 991, 2, 12, 22, 32, etc., the expected value of t will be only a little more than 2. More generally, the search space should be enumerated in such a way that the next candidate is most likely to be valid, given that the previous trials were not. So if the valid solutions are likely to be "clustered" in some sense, then each new candidate should be as far as possible from the previous ones, in that same sense. The converse holds, of course, if the solutions are likely to be spread out more uniformly than expected by chance. Alternatives to brute-force search There are many other search methods, or metaheuristics, which are designed to take advantage of various kinds of partial knowledge one may have about the solution. Heuristics can also be used to make an early cutoff of parts of the search. One example of this is the minimax principle for searching game trees, which eliminates many subtrees at an early stage in the search. In certain fields, such as language parsing, techniques such as chart parsing can exploit constraints in the problem to reduce an exponential complexity problem into a polynomial complexity problem. In many cases, such as in constraint satisfaction problems, one can dramatically reduce the search space by means of constraint propagation, which is efficiently implemented in constraint programming languages. The search space for problems can also be reduced by replacing the full problem with a simplified version. For example, in computer chess, rather than computing the full minimax tree of all possible moves for the remainder of the game, a more limited tree of minimax possibilities is computed, with the tree being pruned at a certain number of moves, and the remainder of the tree being approximated by a static evaluation function. In cryptography In cryptography, a brute-force attack involves systematically checking all possible keys until the correct key is found. This strategy can in theory be used against any encrypted data (except a one-time pad) by an attacker who is unable to take advantage of any weakness in an encryption system that would otherwise make his or her task easier.
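To make the idea concrete, the sketch below shows a brute-force key search in miniature. It is a toy example in Python, not an attack on any real cipher: the "cipher" is a plain XOR with a one-byte key, and the key space is deliberately tiny so that exhaustive search finishes instantly. All names here are illustrative and not part of any standard library.

```python
def xor_encrypt(data, key):
    # Toy "cipher": XOR every byte with a single one-byte key (0-255).
    return bytes(b ^ key for b in data)

def brute_force_key(ciphertext, known_plaintext):
    # Systematically try every possible key; return the first one that
    # decrypts the ciphertext back to the known plaintext.
    for key in range(256):
        if xor_encrypt(ciphertext, key) == known_plaintext:
            return key
    return None

secret_key = 0x5A
message = b"attack at dawn"
ciphertext = xor_encrypt(message, secret_key)

recovered = brute_force_key(ciphertext, message)
print(recovered == secret_key)   # True: the whole 8-bit key space was searched
```

The loop has exactly the shape of the generic brute-force algorithm above; only the candidate set (keys) and the validity test (successful decryption) change.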
The key length used in the encryption determines the practical feasibility of performing a brute force attack, with longer keys exponentially more difficult to crack than shorter ones. Brute force attacks can be made less effective by obfuscating the data to be encoded, something that makes it more difficult for an attacker to recognise when he has cracked the code. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute force attack against it.
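The exponential effect of key length can be seen with a back-of-the-envelope calculation. The sketch below assumes, purely for illustration, an attacker who can test one billion keys per second; the point is only how the numbers scale with key length, not the speed of any particular hardware.

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
KEYS_PER_SECOND = 10**9          # assumed testing rate, for illustration only

for bits in (32, 56, 128):
    keys = 2**bits               # size of the key space
    # On average, half the key space must be searched before the key is found.
    years = keys / 2 / KEYS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: {keys:.3e} keys, about {years:.3e} years on average")
```

Under these assumptions a 32-bit key falls in seconds and a 56-bit key in about a year, while a 128-bit key would take on the order of 10^21 years, far beyond any practical attack.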
Mathematics
Algorithms
null
103190
https://en.wikipedia.org/wiki/Wren
Wren
Wrens are a family, Troglodytidae, of small brown passerine birds. The family includes 96 species and is divided into 19 genera. All species are restricted to the New World except for the Eurasian wren that is widely distributed in the Old World. In Anglophone regions, the Eurasian wren is commonly known simply as the "wren", as it is the originator of the name. The name wren has been applied to other, unrelated birds, particularly the New Zealand wrens (Acanthisittidae) and the Australian wrens (Maluridae). Most wrens are visually inconspicuous though they have loud and often complex songs. Exceptions include the relatively large members of the genus Campylorhynchus, which can be quite bold in their behaviour. Wrens have short wings that are barred in most species, and they often hold their tails upright. Wrens are primarily insectivorous, eating insects, spiders and other small invertebrates, but many species also eat vegetable matter and some eat small frogs and lizards. Etymology and usage The English name "wren" derives from and , attested (as ) very early, in an eighth-century gloss. It is cognate to , , and (the latter two including an additional diminutive -ilan suffix). The Icelandic name is attested in Old Icelandic (Eddaic) as . This points to a Common Germanic name , but the further etymology of the name is unknown. The wren was also known as the ('kinglet') in Old High German, a name associated with the fable of the election of the "king of birds". The bird that could fly to the highest altitude would be made king. The eagle outflew all other birds, but he was beaten by a small bird that had hidden in his plumage. This fable was already known to Aristotle (Historia Animalium 9.11) and Pliny (Natural History 10.95), and was taken up by medieval authors such as Johann Geiler von Kaisersberg, but it most likely originally concerned kinglets (, such as the goldcrest) and was apparently motivated by the yellow "crown" sported by these birds (a point noted already by Ludwig Uhland). The confusion stemmed in part from the similarity and consequent interchangeability of the Ancient Greek words for the wren ( , 'king') and the crest ( , 'kinglet'), and the legend's reference to the "smallest of birds" becoming king likely led the title to be transferred to the equally tiny wren. In modern German, the name of the bird is ('king of the fence (or hedge)') and in Dutch, the name is ('king of winter'). The family name Troglodytidae is derived from troglodyte, which means 'cave-dweller'. Wrens get their scientific name from the tendency of some species to forage in dark crevices. The name "wren" is also ascribed to other families of passerine birds throughout the world. In Europe, kinglets are commonly known as "wrens", with the common firecrest and goldcrest known as the "fire-crested wren" and "golden-crested wren", respectively. The 27 Australasian "wren" species in the family Maluridae are unrelated, as are the New Zealand wrens in the family Acanthisittidae, the antbirds in the family Thamnophilidae, and the Old World babblers of the family Timaliidae. Description Wrens are medium-small to very small birds. The Eurasian wren is among the smallest birds in its range, while the smaller species from the Americas are among the smallest passerines in that part of the world. They range in size from the white-bellied wren, which averages under and , to the giant wren, which averages about and weighs almost . 
The dominating colors of their plumage are generally drab, composed of gray, brown, black, and white, and most species show some barring, especially on the tail or wings. No sexual dimorphism is seen in the plumage of wrens, and little difference exists between young birds and adults. All have fairly long, straight to marginally decurved (downward-curving) bills. Wrens have loud and often complex songs, sometimes given in duet by a pair. The songs of members of the genera Cyphorhinus and Microcerculus have been considered especially pleasant to the human ear, leading to common names such as song wren, musician wren, flutist wren, and southern nightingale-wren. Distribution and habitat Wrens are principally a New World family, distributed from Alaska and Canada to southern Argentina, with the greatest species richness in the Neotropics. As suggested by its name, the Eurasian wren is the only species of wren found outside the Americas, as restricted to Europe, Asia, and northern Africa (it was formerly considered conspecific with the winter wren and Pacific wren of North America). The insular species include the Clarión wren and Socorro wren from the Revillagigedo Islands in the Pacific Ocean, and Cobb's wren in the Falkland Islands, but few Caribbean islands have a species of wren, with only the southern house wren in the Lesser Antilles, the Cozumel wren of Cozumel Island, and the highly restricted Zapata wren in a single swamp in Cuba. The various species occur in a wide range of habitats, ranging from dry, sparsely wooded country to rainforests. Most species are mainly found at low levels, but members of the genus Campylorhynchus are frequently found higher, and the two members of Odontorchilus are restricted to the forest canopy. A few species, notably the Eurasian wren and the house wren, are often associated with humans. Most species are resident, remaining in Central and South America all year round, but the few species found in temperate regions of the Northern Hemisphere are partially migratory, spending the winter further south. Behavior and ecology Wrens vary from highly secretive species such as those found in the genus Microcerculus to the highly conspicuous genus Campylorhynchus, the members of which frequently sing from exposed perches. The family as a whole exhibits a great deal of variation in their behavior. Temperate species generally occur in pairs, but some tropical species may occur in parties of up to 20 birds. Wrens build dome-shaped nests, and may be either monogamous or polygamous, depending on species. Though little is known about the feeding habits of many of the Neotropical species, wrens are considered primarily insectivorous, eating insects, spiders, and other small arthropods. Many species also take vegetable matter such as seeds and berries, and some (primarily the larger species) take small frogs and lizards. The Eurasian wren has been recorded wading into shallow water to catch small fish and tadpoles; Sumichrast's wren and the Zapata wren take snails; and the giant wren and marsh wren have been recorded attacking and eating bird eggs (in the latter species, even eggs of conspecifics). A local Spanish name for the giant wren and bicolored wren is ('egg-sucker'), but whether the latter actually eats eggs is unclear. The plain wren and northern house wren sometimes destroy bird eggs, and the rufous-and-white wren has been recorded killing nestlings, but this is apparently to eliminate potential food competitors rather than to feed on the eggs or nestlings. 
Several species of Neotropical wrens sometimes participate in mixed-species flocks or follow army ants, and the Eurasian wren may follow badgers to catch prey items disturbed by them. Taxonomy and systematics Revised following Martínez Gómez et al. (2005) and Mann et al. (2006), the taxonomy of some groups is highly complex, and future species-level splits are likely. Additionally, undescribed taxa are known to exist. The black-capped donacobius is an enigmatic species traditionally placed with the wrens more for lack of a more apparent alternative than as a result of thorough study. It was recently determined to be most likely closer to certain warblers, possibly the newly established Megaluridae, and might constitute a monotypic family. The genus level cladogram of the Troglodytidae shown below is based on a molecular phylogenetic study by Tyler Imfeld and collaborators that was published in 2024. The number of species in each genus is based on the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC). Family Troglodytidae Genus Campylorhynchus White-headed wren (Campylorhynchus albobrunneus) Band-backed wren (Campylorhynchus zonatus) Grey-barred wren (Campylorhynchus megalopterus) Stripe-backed wren (Campylorhynchus nuchalis) Fasciated wren (Campylorhynchus fasciatus) Giant wren (Campylorhynchus chiapensis) Bicolored wren (Campylorhynchus griseus) Veracruz wren (Campylorhynchus rufinucha) Russet-naped wren (Campylorhynchus humilis) Rufous-backed wren (Campylorhynchus capistratus) Spotted wren (Campylorhynchus gularis) Yucatan wren (Campylorhynchus yucatanicus) Boucard's wren (Campylorhynchus jocosus) Cactus wren (Campylorhynchus brunneicapillus) Thrush-like wren (Campylorhynchus turdinus) Genus Odontorchilus Grey-mantled wren (Odontorchilus branickii) Tooth-billed wren (Odontorchilus cinereus) Genus Salpinctes Rock wren (Salpinctes obsoletus) Genus Catherpes Canyon wren (Catherpes mexicanus) Genus Hylorchilus Sumichrast's wren (Hylorchilus sumichrasti) Nava's wren (Hylorchilus navai) Genus Cinnycerthia Rufous wren (Cinnycerthia unirufa) Sepia-brown wren (Cinnycerthia olivascens) Peruvian wren (Cinnycerthia peruana) Fulvous wren (Cinnycerthia fulva) Genus Cistothorus Sedge wren (Cistothorus stellaris) Mérida wren or paramo wren (Cistothorus meridae) Apolinar's wren (Cistothorus apolinari) Grass wren (Cistothorus platensis) Marsh wren (Cistothorus palustris) Genus Thryomanes Bewick's wren (Thryomanes bewickii) Genus Ferminia Zapata wren (Ferminia cerverai) Genus Pheugopedius (formerly included in Thryothorus) Black-throated wren (Pheugopedius atrogularis) Sooty-headed wren (Pheugopedius spadix) Black-bellied wren (Pheugopedius fasciatoventris) Plain-tailed wren (Pheugopedius euophrys) Grey-browed wren (Pheugopedius schulenbergi) Inca wren (Pheugopedius eisenmanni) Moustached wren (Pheugopedius genibarbis) Whiskered wren (Pheugopedius mystacalis) Coraya wren (Pheugopedius coraya) Happy wren (Pheugopedius felix) Spot-breasted wren (Pheugopedius maculipectus) Rufous-breasted wren (Pheugopedius rutilus) Speckle-breasted wren (Pheugopedius sclateri) Genus Thryophilus (formerly included in Thryothorus) Banded wren (Thryophilus pleurostictus) Rufous-and-white wren (Thryophilus rufalbus) Antioquia wren (Thryophilus sernai) Niceforo's wren (Thryophilus nicefori) Sinaloa wren (Thryophilus sinaloa) Genus Cantorchilus (formerly included in Thryothorus) Cabanis's wren (Cantorchilus modestus) Canebrake wren (Cantorchilus 
zeledoni) Isthmian wren (Cantorchilus elutus) Buff-breasted wren (Cantorchilus leucotis) (probably not monophyletic) Superciliated wren (Cantorchilus superciliaris) Fawn-breasted wren (Cantorchilus guarayanus) Long-billed wren (Cantorchilus longirostris) Grey wren (Cantorchilus griseus) Riverside wren (Cantorchilus semibadius) Bay wren (Cantorchilus nigricapillus) Stripe-breasted wren (Cantorchilus thoracicus) Stripe-throated wren (Cantorchilus leucopogon) Genus Thryothorus Carolina wren (Thryothorus ludovicianus) White-browed wren (Thryothorus (ludovicianus) albinucha) Genus Troglodytes (10–15 species, depending on taxonomy; includes species sometimes considered to be in the genus Nannus, which may be distinct) Eurasian wren (Troglodytes troglodytes) Winter wren (Troglodytes hiemalis) Pacific wren (Troglodytes pacificus) Clarión wren (Troglodytes tanneri) House wren (Troglodytes aedon) Cobb's wren (Troglodytes cobbi) Socorro wren (Troglodytes sissonii) Rufous-browed wren (Troglodytes rufociliatus) Ochraceous wren (Troglodytes ochraceus) Mountain wren (Troglodytes solstitialis) Santa Marta wren (Troglodytes monticola) Tepui wren (Troglodytes rufulus) Genus Thryorchilus Timberline wren (Thryorchilus browni) Genus Uropsila White-bellied wren (Uropsila leucogastra) Genus Henicorhina (wood wrens) White-breasted wood wren (Henicorhina leucosticta) Grey-breasted wood wren (Henicorhina leucophrys) Hermit wood wren (Henicorhina anachoreta) – split from H. leucophrys Bar-winged wood wren (Henicorhina leucoptera) Munchique wood wren (Henicorhina negreti) Genus Microcerculus Northern nightingale-wren (Microcerculus philomela) Southern nightingale-wren (Microcerculus marginatus) Flutist wren (Microcerculus ustulatus) Wing-banded wren (Microcerculus bambla) Genus Cyphorhinus Chestnut-breasted wren (Cyphorhinus thoracicus) Musician wren (Cyphorhinus arada) Song wren (Cyphorhinus phaeocephalus) Relationship with humans The wren features prominently in culture. The Eurasian wren has been long considered "the king of birds" in Europe. Killing one or harassing its nest is associated with bad luck, such as broken bones, lightning strikes on homes, or injury to cattle. Wren Day, celebrated in parts of Ireland on St. Stephen's Day (26 December), features a fake wren being paraded around town on a decorative pole; up to the 20th century, real birds were hunted for this purpose. A possible origin for the tradition is revenge for the betrayal of Saint Stephen by a noisy wren when he was trying to hide from enemies in a bush. The Carolina wren (Thryothorus ludovicianus) has been the state bird of South Carolina since 1948, and features on the back of its state quarter. The British farthing featured a wren on the reverse side from 1937 until its demonetisation in 1960. The Cactus wren (Campylorhynchus brunneicapillus) was designated the state bird of Arizona in 1931. The Women's Royal Naval Service (WRNS) were nicknamed Wrens based on the acronym WRNS. After the Women's Royal Navy Service was integrated into the Royal Navy in 1993, the title of Wren was dropped from official usage, however unofficially female sailors are still referred to as Wrens.
Biology and health sciences
Passerida
null
103356
https://en.wikipedia.org/wiki/Automata%20theory
Automata theory
Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science with close connections to mathematical logic. The word automata comes from the Greek word αὐτόματος, which means "self-acting, self-willed, self-moving". An automaton (automata in plural) is an abstract self-propelled computing device which follows a predetermined sequence of operations automatically. An automaton with a finite number of states is called a finite automaton (FA) or finite-state machine (FSM). The figure on the right illustrates a finite-state machine, which is a well-known type of automaton. This automaton consists of states (represented in the figure by circles) and transitions (represented by arrows). As the automaton sees a symbol of input, it makes a transition (or jump) to another state, according to its transition function, which takes the previous state and current input symbol as its arguments. Automata theory is closely related to formal language theory. In this context, automata are used as finite representations of formal languages that may be infinite. Automata are often classified by the class of formal languages they can recognize, as in the Chomsky hierarchy, which describes a nesting relationship between major classes of automata. Automata play a major role in the theory of computation, compiler construction, artificial intelligence, parsing and formal verification. History The theory of abstract automata was developed in the mid-20th century in connection with finite automata. Automata theory was initially considered a branch of mathematical systems theory, studying the behavior of discrete-parameter systems. Early work in automata theory differed from previous work on systems by using abstract algebra to describe information systems rather than differential calculus to describe material systems. The theory of the finite-state transducer was developed under different names by different research communities. The earlier concept of Turing machine was also included in the discipline along with new forms of infinite-state automata, such as pushdown automata. 1956 saw the publication of Automata Studies, which collected work by scientists including Claude Shannon, W. Ross Ashby, John von Neumann, Marvin Minsky, Edward F. Moore, and Stephen Cole Kleene. With the publication of this volume, "automata theory emerged as a relatively autonomous discipline". The book included Kleene's description of the set of regular events, or regular languages, and a relatively stable measure of complexity in Turing machine programs by Shannon. In the same year, Noam Chomsky described the Chomsky hierarchy, a correspondence between automata and formal grammars, and Ross Ashby published An Introduction to Cybernetics, an accessible textbook explaining automata and information using basic set theory. The study of linear bounded automata led to the Myhill–Nerode theorem, which gives a necessary and sufficient condition for a formal language to be regular, and an exact count of the number of states in a minimal machine for the language. The pumping lemma for regular languages, also useful in regularity proofs, was proven in this period by Michael O. Rabin and Dana Scott, along with the computational equivalence of deterministic and nondeterministic finite automata. 
In the 1960s, a body of algebraic results known as "structure theory" or "algebraic decomposition theory" emerged, which dealt with the realization of sequential machines from smaller machines by interconnection. While any finite automaton can be simulated using a universal gate set, this requires that the simulating circuit contain loops of arbitrary complexity. Structure theory deals with the "loop-free" realizability of machines. The theory of computational complexity also took shape in the 1960s. By the end of the decade, automata theory came to be seen as "the pure mathematics of computer science". Automata What follows is a general definition of an automaton, which restricts a broader definition of a system to one viewed as acting in discrete time-steps, with its state behavior and outputs defined at each step by unchanging functions of only its state and input. Informal description An automaton runs when it is given some sequence of inputs in discrete (individual) time steps (or just steps). An automaton processes one input picked from a set of symbols or letters, which is called an input alphabet. The symbols received by the automaton as input at any step are a sequence of symbols called words. An automaton has a set of states. At each moment during a run of the automaton, the automaton is in one of its states. When the automaton receives new input, it moves to another state (or transitions) based on a transition function that takes the previous state and current input symbol as parameters. At the same time, another function called the output function produces symbols from the output alphabet, also according to the previous state and current input symbol. The automaton reads the symbols of the input word and transitions between states until the word is read completely, if it is finite in length, at which point the automaton halts. A state at which the automaton halts is called the final state. To investigate the possible state/input/output sequences in an automaton using formal language theory, a machine can be assigned a starting state and a set of accepting states. Then, depending on whether a run starting from the starting state ends in an accepting state, the automaton can be said to accept or reject an input sequence. The set of all the words accepted by an automaton is called the language recognized by the automaton. A familiar example of a machine recognizing a language is an electronic lock, which accepts or rejects attempts to enter the correct code. Formal definition Automaton An automaton can be represented formally by a quintuple M = ⟨Σ, Γ, Q, δ, λ⟩, where: Σ is a finite set of symbols, called the input alphabet of the automaton, Γ is another finite set of symbols, called the output alphabet of the automaton, Q is a set of states, δ is the next-state function or transition function δ : Q × Σ → Q mapping state-input pairs to successor states, λ is the next-output function λ : Q × Σ → Γ mapping state-input pairs to outputs. If Q is finite, then M is a finite automaton. Input word An automaton reads a finite string of symbols a1 a2 ... an, where ai ∈ Σ, which is called an input word. The set of all words is denoted by Σ*. Run A sequence of states q0, q1, ..., qn, where qi ∈ Q such that qi = δ(qi−1, ai) for 1 ≤ i ≤ n, is a run of the automaton on an input a1 a2 ... an ∈ Σ* starting from state q0. In other words, at first the automaton is at the start state q0, and receives input a1. For a1 and every following ai in the input string, the automaton picks the next state qi according to the transition function δ(qi−1, ai), until the last symbol an has been read, leaving the machine in the final state of the run, qn.
Similarly, at each step, the automaton emits an output symbol λ(qi−1, ai) according to the output function. The transition function δ is extended inductively into δ̄ : Q × Σ* → Q to describe the machine's behavior when fed whole input words. For the empty string ε, δ̄(q, ε) = q for all states q, and for strings wa where a is the last symbol and w is the (possibly empty) rest of the string, δ̄(q, wa) = δ(δ̄(q, w), a). The output function may be extended similarly into λ̄(q, w), which gives the complete output of the machine when run on word w from state q. Acceptor In order to study an automaton with the theory of formal languages, an automaton may be considered as an acceptor, replacing the output alphabet and function Γ and λ with q0, a designated start state, and F, a set of states of Q (i.e. F ⊆ Q) called accept states. This allows the following to be defined: Accepting word A word w ∈ Σ* is an accepting word for the automaton if δ̄(q0, w) ∈ F, that is, if after consuming the whole string w the machine is in an accept state. Recognized language The language recognized by an automaton is the set of all the words that are accepted by the automaton, L = {w ∈ Σ* : δ̄(q0, w) ∈ F}. Recognizable languages The recognizable languages are the set of languages that are recognized by some automaton. For finite automata the recognizable languages are regular languages. For different types of automata, the recognizable languages are different. Variant definitions of automata Automata are defined to study useful machines under mathematical formalism. So the definition of an automaton is open to variations according to the "real world machine" that we want to model using the automaton. People have studied many variations of automata. The following are some popular variations in the definition of different components of automata. Input Finite input: An automaton that accepts only finite sequences of symbols. The above introductory definition only encompasses finite words. Infinite input: An automaton that accepts infinite words (ω-words). Such automata are called ω-automata. Tree input: The input may be a tree of symbols instead of a sequence of symbols. In this case after reading each symbol, the automaton reads all the successor symbols in the input tree. It is said that the automaton makes one copy of itself for each successor and each such copy starts running on one of the successor symbols from the state according to the transition relation of the automaton. Such an automaton is called a tree automaton. Infinite tree input: The two extensions above can be combined, so the automaton reads a tree structure with (in)finite branches. Such an automaton is called an infinite tree automaton. States Single state: An automaton with one state, also called a combinational circuit, performs a transformation which may implement combinational logic. Finite states: An automaton that contains only a finite number of states. Infinite states: An automaton that may not have a finite number of states, or even a countable number of states. Different kinds of abstract memory may be used to give such machines finite descriptions. Stack memory: An automaton may also contain some extra memory in the form of a stack in which symbols can be pushed and popped. This kind of automaton is called a pushdown automaton. Queue memory: An automaton may have memory in the form of a queue. Such a machine is called a queue machine and is Turing-complete. Tape memory: The inputs and outputs of automata are often described as input and output tapes. Some machines have additional working tapes, including the Turing machine, linear bounded automaton, and log-space transducer.
Transition function Deterministic: For a given current state and an input symbol, if an automaton can only jump to one and only one state then it is a deterministic automaton. Nondeterministic: An automaton that, after reading an input symbol, may jump into any of a number of states, as licensed by its transition relation. The term transition function is replaced by transition relation: The automaton non-deterministically decides to jump into one of the allowed choices. Such automata are called nondeterministic automata. Alternation: This idea is quite similar to tree automata but orthogonal. The automaton may run its multiple copies on the same next read symbol. Such automata are called alternating automata. The acceptance condition must be satisfied on all runs of such copies to accept the input. Two-wayness: Automata may read their input from left to right, or they may be allowed to move back-and-forth on the input, in a way similar to a Turing machine. Automata which can move back-and-forth on the input are called two-way finite automata. Acceptance condition Acceptance of finite words: Same as described in the informal definition above. Acceptance of infinite words: an ω-automaton cannot have final states, as infinite words never terminate. Rather, acceptance of the word is decided by looking at the infinite sequence of visited states during the run. Probabilistic acceptance: An automaton need not strictly accept or reject an input. It may accept the input with some probability between zero and one. For example, quantum finite automata, geometric automata and metric automata have probabilistic acceptance. Different combinations of the above variations produce many classes of automata. Automata theory is a subject matter that studies properties of various types of automata. For example, the following questions are studied about a given type of automata. Which class of formal languages is recognizable by some type of automata? (Recognizable languages) Are certain automata closed under union, intersection, or complementation of formal languages? (Closure properties) How expressive is a type of automata in terms of recognizing a class of formal languages? And, their relative expressive power? (Language hierarchy) Automata theory also studies the existence or nonexistence of any effective algorithms to solve problems similar to the following list: Does an automaton accept at least one input word? (Emptiness checking) Is it possible to transform a given non-deterministic automaton into a deterministic automaton without changing the language recognized? (Determinization) For a given formal language, what is the smallest automaton that recognizes it? (Minimization) Types of automata The following is an incomplete list of types of automata. Discrete, continuous, and hybrid automata Normally automata theory describes the states of abstract machines but there are discrete automata, analog automata or continuous automata, or hybrid discrete-continuous automata, which use digital data, analog data or continuous time, or digital and analog data, respectively. Hierarchy in terms of powers The following is an incomplete hierarchy in terms of powers of different types of virtual machines. The hierarchy reflects the nested categories of languages the machines are able to accept. Applications Each model in automata theory plays important roles in several applied areas. Finite automata are used in text processing, compilers, and hardware design. 
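As a small illustration of how the acceptor formalism above is used in practice, the sketch below implements a deterministic finite automaton as a transition table plus a set of accept states, in the spirit of the text-processing applications just mentioned. The example language, binary strings whose value is divisible by three, is chosen here only for illustration; the states simply track the remainder of the number read so far.

```python
# A deterministic finite acceptor (Q, Σ, δ, q0, F) as plain Python data.
# States 0, 1, 2 record the value of the bits read so far, modulo 3.
states = {0, 1, 2}
alphabet = {"0", "1"}
delta = {(q, a): (2 * q + int(a)) % 3 for q in states for a in alphabet}
start = 0
accept = {0}

def accepts(word):
    # Extended transition function: fold δ over the whole input word,
    # then check whether the final state is an accept state.
    q = start
    for symbol in word:
        q = delta[(q, symbol)]
    return q in accept

for w in ["0", "11", "101", "110", "111"]:
    print(w, accepts(w))   # True exactly when int(w, 2) is divisible by 3
```

Because the machine keeps only a bounded amount of information (its current state), the same table-driven loop runs in constant memory regardless of input length, which is what makes finite automata attractive for scanning and lexing.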
Context-free grammar (CFGs) are used in programming languages and artificial intelligence. Originally, CFGs were used in the study of human languages. Cellular automata are used in the field of artificial life, the most famous example being John Conway's Game of Life. Some other examples which could be explained using automata theory in biology include mollusk and pine cone growth and pigmentation patterns. Going further, a theory suggesting that the whole universe is computed by some sort of a discrete automaton, is advocated by some scientists. The idea originated in the work of Konrad Zuse, and was popularized in America by Edward Fredkin. Automata also appear in the theory of finite fields: the set of irreducible polynomials that can be written as composition of degree two polynomials is in fact a regular language. Another problem for which automata can be used is the induction of regular languages. Automata simulators Automata simulators are pedagogical tools used to teach, learn and research automata theory. An automata simulator takes as input the description of an automaton and then simulates its working for an arbitrary input string. The description of the automaton can be entered in several ways. An automaton can be defined in a symbolic language or its specification may be entered in a predesigned form or its transition diagram may be drawn by clicking and dragging the mouse. Well known automata simulators include Turing's World, JFLAP, VAS, TAGS and SimStudio. Category-theoretic models One can define several distinct categories of automata following the automata classification into different types described in the previous section. The mathematical category of deterministic automata, sequential machines or sequential automata, and Turing machines with automata homomorphisms defining the arrows between automata is a Cartesian closed category, it has both categorical limits and colimits. An automata homomorphism maps a quintuple of an automaton Ai onto the quintuple of another automaton Aj. Automata homomorphisms can also be considered as automata transformations or as semigroup homomorphisms, when the state space, S, of the automaton is defined as a semigroup Sg. Monoids are also considered as a suitable setting for automata in monoidal categories. Categories of variable automata One could also define a variable automaton, in the sense of Norbert Wiener in his book on The Human Use of Human Beings via the endomorphisms . Then one can show that such variable automata homomorphisms form a mathematical group. In the case of non-deterministic, or other complex kinds of automata, the latter set of endomorphisms may become, however, a variable automaton groupoid. Therefore, in the most general case, categories of variable automata of any kind are categories of groupoids or groupoid categories. Moreover, the category of reversible automata is then a 2-category, and also a subcategory of the 2-category of groupoids, or the groupoid category.
Mathematics
Automata theory
null
103548
https://en.wikipedia.org/wiki/Jerboa
Jerboa
Jerboas are the members of the family Dipodidae. They are hopping desert rodents found throughout North Africa and Asia. They tend to live in hot deserts. When chased, jerboas can run at up to . Some species are preyed on by little owls (Athene noctua) in central Asia. Most species of jerboas have excellent hearing that they use to avoid becoming the prey of nocturnal predators. The typical lifespan of a jerboa is around 2–3 years. Taxonomy Jerboas, as previously defined, were thought to be paraphyletic, with the jumping mice (Zapodidae) and birch mice (Sminthidae) also being classified in the family Dipodidae. However, phylogenetic analysis split all three into distinct families, leaving just the jerboas in Dipodidae and revealing them to be a monophyletic group. This animal has a body length (including the head) of between 4 and 26 cm (1.6 to 10 in.), with an additional 7 – 30 cm (2.75 to 12 in.) of tail, which is always longer than the full body. Jerboa dental records reveal a slow increase in crown height, which corresponds to a more open and drier ecosystem. Anatomy and body features Jerboas look somewhat like miniature kangaroos, and have some external similarities. Both have long hind legs, short forelegs, and long tails. Jerboas move around in a similar manner to kangaroos, which is by hopping, or saltation. However, their anatomy is more attuned towards erratic hopping locomotion, making use of sharp turns and great vertical leaps to confuse and escape predators, rather than for sustained hopping over long periods of time. Researchers have found that, when jerboas execute their vertical leaps, the primary tendons in the hindlimbs recover and reuse only 4.4% of the energy contributed to the jump on average; this is lower than in many hopping animals. Jerboas have metatarsal bones that are fused into one long bone, called the cannon bone. Their cannon bone is more distinct and defined than in other rodents. This acts as leverage to allow them to reach higher heights while jumping, while also supporting the legs. Their back legs are often up to four times as long as the front legs. This further allows them to sling-shot themselves into the air. Jerboas that live in sandy desert environments develop hairs on the bottom of their feet that allow for better traction and grip so that they don't slip in the sand. Like other bipedal animals, their foramen magnum—the hole at the base of the skull—is forward-shifted, which enhances two-legged locomotion. The tail of a jerboa can be longer than its head and body, and a white cluster of hair is commonly seen at the end of the tail. Jerboas use their tails to balance when hopping, and as a prop when sitting upright. Jerboa fur is fine, and usually the colour of sand. This colour usually matches the jerboa habitat (an example of cryptic colouration). Some species of the jerboa family have long ears like a rabbit, whilst others have ears that are short like those of a mouse or rat. In addition to their large ears, jerboas also have large feet, which are a result of multiple genes overlapping each other in their DNA. Researchers found a gene called the shox2 gene that is expressed in jerboa feet. This gene has the ability to turn other genes on and off and has been seen to cause mutant limbs. Behavior The bipedal locomotion of jerboas involves hopping, skipping, and running gaits, associated with rapid and frequent, difficult-to-predict changes in speed and direction, facilitating predator evasion relative to quadrupedal locomotion.
This may explain why evolution of bipedal locomotion is favored in desert-dwelling rodents that forage in open habitats. Jerboas can hop 10–13 cm normally but if threatened by a predator the Jerboa can jump up to 3m. Jerboas are most active at twilight (crepuscular). During the heat of the day, they shelter in burrows. At night, they leave the burrows due to the cooler temperature of their environment. They dig the entrances to their burrow near plant life, especially along field borders. During the rainy season, they make tunnels in mounds or hills to reduce the risk of flooding. In the summer, jerboas occupying holes plug the entrance to keep out hot air and, some researchers speculate, predators. In most cases, burrows are constructed with an emergency exit that ends just below the surface or opens at the surface but is not strongly obstructed. This allows the jerboa to quickly escape predators. Since Jerboas dig in the sand, they have adapted to that environment by developing skin folds and hair that protects their ears and nose from getting sand inside them. Related jerboas often create four types of burrows. A temporary, summer day burrow is used for cover while hunting during the daylight. They have a second, temporary burrow used for hunting at night. They also have two permanent burrows: one for summer and one for winter. The permanent summer burrow is actively used throughout the summer and the young are raised there. Jerboas hibernate during the winter and use the permanent winter burrow for this. Temporary burrows are shorter in length than permanent burrows. Just like other animals that hibernate, these creatures are heavier pre-hibernation specifically in ungrazed sites (Shuai). Also, more food availability during pre-hibernation contributes to larger jerboa body mass in ungrazed regions, and entices more jerboas to migrate to ungrazed areas during post-hibernation. Grazing negatively impacts the Jerboa pre- and post-hibernation population, but not the survival rate. Jerboas create burrows to function as protection against predators and severe weather conditions. They will naturally respond to winter conditions such as cold temperatures and food deprivation by digging a winter burrow to hibernate in. Winter burrows are most often longer, deeper and have more entrance holes than summer burrows. Additionally, they use these burrows as nesting areas to raise their young. They can also function as feeding sites. Jerboas are solitary creatures. Once they reach adulthood, they usually have their own burrow and search for food on their own. However, occasional "loose colonies" may form, whereby some species of jerboa dig communal burrows that offer extra warmth when it is cold outside. Diet Most jerboas rely on plant material as the main component of their diet, but they cannot eat hard seeds. Some species opportunistically eat other jerboas and other animals they come across. Unlike gerbils, jerboas are not known to store their food. Some species of Jerboa are known to have a diet that consists of insects, plants, and sometimes seeds. They use their two front legs to gather food. Jerboas do not drink water but instead get their water intake from the food they eat. Jerboas like desert plants; they are best when they are wet but when dried out the Jerboas will dig the plants up and eat the roots because that part of the plant holds the most water. Jerboas will also try to minimize water loss by feeding at night when it is cooler in the desert. 
Communication and perception Many species within the family Dipodidae engage in dust bathing, often a way to use chemical communication. Their keen hearing suggests they may use sounds or vibrations to communicate. Reproduction Mating systems of closely related species in the family Dipodidae suggest that they may be polygynous. For some closely related jerboa species, mating usually happens a short time after awaking from winter hibernation. A female breeds twice in the summer, and raises from two to six young. Gestation time is between 25 and 35 days. Little is known about parental investment in long-eared jerboas. Like most mammals, females nurse and care for their young at least until they are weaned. Food conditions become abundant typically in the spring and summer. This is also when reproduction rates in the jerboas increase. Jerboas have cells that produce sex hormones known as the gonadotropin-releasing hormone (GnRH). These cells fire the most in the months of March through July. These cells quit producing GnRH in the autumn, and the jerboa's mating season ends. Classification Family Dipodidae Subfamily Cardiocraniinae Cardiocranius Five-toed pygmy jerboa, Cardiocranius paradoxus Salpingotus Thick-tailed pygmy jerboa, Salpingotus crassicauda Heptner's pygmy jerboa, Salpingotus heptneri Kozlov's pygmy jerboa, Salpingotus kozlovi Pallid pygmy jerboa, Salpingotus pallidus Thomas's pygmy jerboa, Salpingotus thomasi Salpingotulus Baluchistan pygmy jerboa, Salpingotulus michaelis Subfamily Dipodinae Dipus Northern three-toed jerboa, Dipus sagitta Eremodipus Lichtenstein's jerboa, Eremodipus lichensteini Jaculus Blanford's jerboa, Jaculus blanfordi Lesser Egyptian jerboa, Jaculus jaculus Greater Egyptian jerboa, Jaculus orientalis Stylodipus Andrews's three-toed jerboa, Stylodipus andrewsi Mongolian three-toed jerboa, Stylodipus sungorus Thick-tailed three-toed jerboa, Stylodipus telum Subfamily Euchoreutinae Euchoreutes Long-eared jerboa, Euchoreutes naso Subfamily Allactaginae Allactaga Balikun jerboa, Allactaga balikunica Gobi jerboa, Allactaga bullata Iranian jerboa, Allactaga firouzi Hotson's jerboa, Allactaga hotsoni Great jerboa, Allactaga major Severtzov's jerboa, Allactaga severtzovi Mongolian five-toed jerboa, Allactaga sibirica Allactodipus Bobrinski's jerboa, Allactodipus bobrinskii Pygeretmus Lesser fat-tailed jerboa, Pygeretmus platyurus Dwarf fat-tailed jerboa, Pygeretmus pumilio Greater fat-tailed jerboa, Pygeretmus shitkovi Scarturus Small five-toed jerboa, Scarturus elater Euphrates jerboa, Scarturus euphraticus Four-toed jerboa, Scarturus tetradactylus Vinogradov's jerboa, Scarturus vinogradovi Williams's jerboa, Scarturus williamsi Subfamily Paradipodinae Paradipus Comb-toed jerboa, Paradipus ctenodactylus
Biology and health sciences
Rodents
Animals
103915
https://en.wikipedia.org/wiki/Tissue%20%28biology%29
Tissue (biology)
In biology, tissue is an assembly of similar cells and their extracellular matrix from the same embryonic origin that together carry out a specific function. Tissues occupy a biological organizational level between cells and a complete organ. Accordingly, organs are formed by the functional grouping together of multiple tissues. The English word "tissue" derives from the French word "", the past participle of the verb tisser, "to weave". The study of tissues is known as histology or, in connection with disease, as histopathology. Xavier Bichat is considered as the "Father of Histology". Plant histology is studied in both plant anatomy and physiology. The classical tools for studying tissues are the paraffin block in which tissue is embedded and then sectioned, the histological stain, and the optical microscope. Developments in electron microscopy, immunofluorescence, and the use of frozen tissue-sections have enhanced the detail that can be observed in tissues. With these tools, the classical appearances of tissues can be examined in health and disease, enabling considerable refinement of medical diagnosis and prognosis. Plant tissue In plant anatomy, tissues are categorized broadly into three tissue systems: the epidermis, the ground tissue, and the vascular tissue. Epidermis – Cells forming the outer surface of the leaves and of the young plant body. Vascular tissue – The primary components of vascular tissue are the xylem and phloem. These transport fluids and nutrients internally. Ground tissue – Ground tissue is less differentiated than other tissues. Ground tissue manufactures nutrients by photosynthesis and stores reserve nutrients. Plant tissues can also be divided differently into two types: Meristematic tissues Permanent tissues. Meristematic tissue Meristematic tissue consists of actively dividing cells and leads to increase in length and thickness of the plant. The primary growth of a plant occurs only in certain specific regions, such as in the tips of stems or roots. It is in these regions that meristematic tissue is present. Cells of this type of tissue are roughly spherical or polyhedral to rectangular in shape, with thin cell walls. New cells produced by meristem are initially those of meristem itself, but as the new cells grow and mature, their characteristics slowly change and they become differentiated as components of meristematic tissue, being classified as: 1.Primary meristem. Apical meristem : Present at the growing tips of stems and roots, they increase the length of the stem and root. They form growing parts at the apices of roots and stems and are responsible for the increase in length, also called primary growth. This meristem is responsible for the linear growth of an organ. 2.Secondary meristem. Lateral meristem: Cells which mainly divide in one plane and cause the organ to increase in diameter and girth. Lateral meristem usually occurs beneath the bark of the tree as cork cambium and in vascular bundles of dicotyledons as vascular cambium. The activity of this cambium forms secondary growth. Intercalary meristem: Located between permanent tissues, it is usually present at the base of the node, internode, and on leaf base. They are responsible for growth in length of the plant and increasing the size of the internode. They result in branch formation and growth. The cells of meristematic tissue are similar in structure and have a thin and elastic primary cell wall made of cellulose. They are compactly arranged without inter-cellular spaces between them. 
Each cell contains a dense cytoplasm and a prominent cell nucleus. The dense protoplasm of meristematic cells contains very few vacuoles. Normally the meristematic cells are oval, polygonal, or rectangular in shape. Meristematic tissue cells have a large nucleus with small or no vacuoles because they have no need to store anything. Their basic function is to multiply and increase the girth and length of the plant, with no intercellular spaces. Permanent tissues Permanent tissues may be defined as a group of living or dead cells formed by meristematic tissue and have lost their ability to divide and have permanently placed at fixed positions in the plant body. Meristematic tissues that take up a specific role lose the ability to divide. This process of taking up a permanent shape, size and a function is called cellular differentiation. Cells of meristematic tissue differentiate to form different types of permanent tissues. There are 2 types of permanent tissues: simple permanent tissues complex permanent tissues Simple permanent tissue Simple permanent tissue is a group of cells which are similar in origin, structure, and function. They are of three types: Parenchyma Collenchyma Sclerenchyma Parenchyma Parenchyma (Greek, para – 'beside'; enchyma– infusion – 'tissue') is the bulk of a substance. In plants, it consists of relatively unspecialized living cells with thin cell walls that are usually loosely packed so that intercellular spaces are found between cells of this tissue. These are generally isodiametric, in shape. They contain small number of vacuoles or sometimes they even may not contain any vacuole. Even if they do so the vacuole is of much smaller size than of normal animal cells. This tissue provides support to plants and also stores food. Chlorenchyma is a special type of parenchyma that contains chlorophyll and performs photosynthesis. In aquatic plants, aerenchyma tissues, or large air cavities, give support to float on water by making them buoyant. Parenchyma cells called idioblasts have metabolic waste. Spindle shaped fibers are also present in this cell to support them and known as prosenchyma, succulent parenchyma also noted. In xerophytes, parenchyma tissues store water. Collenchyma Collenchyma (Greek, 'Colla' means gum and 'enchyma' means infusion) is a living tissue of primary body like Parenchyma. Cells are thin-walled but possess thickening of cellulose, water and pectin substances (pectocellulose) at the corners where a number of cells join. This tissue gives tensile strength to the plant and the cells are compactly arranged and have very little inter-cellular spaces. It occurs chiefly in hypodermis of stems and leaves. It is absent in monocots and in roots. Collenchymatous tissue acts as a supporting tissue in stems of young plants. It provides mechanical support, elasticity, and tensile strength to the plant body. It helps in manufacturing sugar and storing it as starch. It is present in the margin of leaves and resists tearing effect of the wind. Sclerenchyma Sclerenchyma (Greek, Sclerous means hard and enchyma means infusion) consists of thick-walled, dead cells and protoplasm is negligible. These cells have hard and extremely thick secondary walls due to uniform distribution and high secretion of lignin and have a function of providing mechanical support. They do not have inter-cellular spaces between them. Lignin deposition is so thick that the cell walls become stronger, rigid and impermeable to water, which are also known as a stone cells or sclereids. 
These tissues are mainly of two types: sclerenchyma fibers and sclereids. Sclerenchyma fiber cells have a narrow lumen and are long, narrow and unicellular. Fibers are elongated cells that are strong and flexible, often used in ropes. Sclereids have extremely thick cell walls and are brittle, and are found in nutshells and legumes. Epidermis The entire surface of the plant consists of a single layer of cells called epidermis or surface tissue. The entire surface of the plant has this outer layer of the epidermis. Hence it is also called surface tissue. Most of the epidermal cells are relatively flat. The outer and lateral walls of the cell are often thicker than the inner walls. The cells form a continuous sheet without intercellular spaces. It protects all parts of the plant. The outer epidermis is coated with a waxy thick layer called cutin which prevents loss of water. The epidermis also consists of stomata (singular: stoma), which help in transpiration. Complex permanent tissue The complex permanent tissue consists of more than one type of cell having a common origin, which work together as a unit. Complex tissues are mainly concerned with the transportation of mineral nutrients, organic solutes (food materials), and water. That is why it is also known as conducting and vascular tissue. The common types of complex permanent tissue are: Xylem (or wood) Phloem (or bast). Xylem and phloem together form vascular bundles. Xylem Xylem (Greek, xylos = wood) serves as a chief conducting tissue of vascular plants. It is responsible for the conduction of water and inorganic solutes. Xylem consists of four kinds of cells: Tracheids Vessels (or tracheae) Xylem fibers or xylem sclerenchyma Xylem parenchyma Xylem tissue is organised in a tube-like fashion along the main axes of stems and roots. It consists of a combination of parenchyma cells, fibers, vessels, tracheids, and ray cells. Longer tubes made up of individual cells are vessels (tracheae), while vessel members are open at each end. Internally, there may be bars of wall material extending across the open space. These cells are joined end to end to form long tubes. Vessel members and tracheids are dead at maturity. Tracheids have thick secondary cell walls and are tapered at the ends. They do not have end openings such as the vessels. The ends overlap with each other, with pairs of pits present. The pit pairs allow water to pass from cell to cell. Though most conduction in xylem tissue is vertical, lateral conduction along the diameter of a stem is facilitated via rays. Rays are horizontal rows of long-living parenchyma cells that arise out of the vascular cambium. Phloem Phloem consists of: Sieve tube Companion cell Phloem fiber Phloem parenchyma. Phloem is an equally important plant tissue as it also is part of the 'plumbing system' of a plant. Primarily, phloem carries dissolved food substances throughout the plant. This conduction system is composed of sieve-tube members and companion cells, which are without secondary walls. The parent cells of the vascular cambium produce both xylem and phloem. This usually also includes fibers, parenchyma and ray cells. Sieve tubes are formed from sieve-tube members laid end to end. The end walls, unlike vessel members in xylem, do not have openings. The end walls, however, are full of small pores where cytoplasm extends from cell to cell. These porous connections are called sieve plates.
In spite of the fact that their cytoplasm is actively involved in the conduction of food materials, sieve-tube members do not have nuclei at maturity. It is the companion cells that are nestled between sieve-tube members that function in some manner bringing about the conduction of food. Sieve-tube members that are alive contain a polymer called callose, a carbohydrate polymer, forming the callus pad/callus, the colourless substance that covers the sieve plate. Callose stays in solution as long as the cell contents are under pressure. Phloem transports food and materials in plants upwards and downwards as required. Animal tissue Animal tissues are grouped into four basic types: connective, muscle, nervous, and epithelial. Collections of tissues joined in units to serve a common function compose organs. While most animals can generally be considered to contain the four tissue types, the manifestation of these tissues can differ depending on the type of organism. For example, the origin of the cells comprising a particular tissue type may differ developmentally for different classifications of animals. Tissue appeared for the first time in the diploblasts, but modern forms only appeared in triploblasts. The epithelium in all animals is derived from the ectoderm and endoderm (or their precursor in sponges), with a small contribution from the mesoderm, forming the endothelium, a specialized type of epithelium that composes the vasculature. By contrast, a true epithelial tissue is present only in a single layer of cells held together via occluding junctions called tight junctions, to create a selectively permeable barrier. This tissue covers all organismal surfaces that come in contact with the external environment such as the skin, the airways, and the digestive tract. It serves functions of protection, secretion, and absorption, and is separated from other tissues below by a basal lamina. The connective tissue and the muscular are derived from the mesoderm. The nervous tissue is derived from the ectoderm. Epithelial tissues The epithelial tissues are formed by cells that cover the organ surfaces, such as the surface of skin, the airways, surfaces of soft organs, the reproductive tract, and the inner lining of the digestive tract. The cells comprising an epithelial layer are linked via semi-permeable, tight junctions; hence, this tissue provides a barrier between the external environment and the organ it covers. In addition to this protective function, epithelial tissue may also be specialized to function in secretion, excretion and absorption. Epithelial tissue helps to protect organs from microorganisms, injury, and fluid loss. Functions of epithelial tissue: The principle function of epithelial tissues are covering and lining of free surface The cells of the body's surface form the outer layer of skin. Inside the body, epithelial cells form the lining of the mouth and alimentary canal and protect these organs. Epithelial tissues help in the elimination of waste. Epithelial tissues secrete enzymes and/or hormones in the form of glands. Some epithelial tissue perform secretory functions. They secrete a variety of substances including sweat, saliva, mucus, enzymes. There are many kinds of epithelium, and nomenclature is somewhat variable. Most classification schemes combine a description of the cell-shape in the upper layer of the epithelium with a word denoting the number of layers: either simple (one layer of cells) or stratified (multiple layers of cells). 
However, other cellular features such as cilia may also be described in the classification system. Some common kinds of epithelium are listed below: Simple squamous (pavement) epithelium Simple cuboidal epithelium Simple Columnar epithelium Simple ciliated (pseudostratified) columnar epithelium Simple glandular columnar epithelium Stratified non-keratinized squamous epithelium Stratified keratinized epithelium Stratified transitional epithelium Connective tissue Connective tissues are made up of cells separated by non-living material, which is called an extracellular matrix. This matrix can be liquid or rigid. For example, blood contains plasma as its matrix and bone's matrix is rigid. Connective tissue gives shape to organs and holds them in place. Blood, bone, tendon, ligament, adipose, and areolar tissues are examples of connective tissues. One method of classifying connective tissues is to divide them into three types: fibrous connective tissue, skeletal connective tissue, and fluid connective tissue. Muscle tissue Muscle cells (myocytes) form the active contractile tissue of the body. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs. Muscle is formed of contractile filaments and is separated into three main types; smooth muscle, skeletal muscle and cardiac muscle. Smooth muscle has no striations when examined microscopically. It contracts slowly but maintains contractibility over a wide range of stretch lengths. It is found in such organs as sea anemone tentacles and the body wall of sea cucumbers. Skeletal muscle contracts rapidly but has a limited range of extension. It is found in the movement of appendages and jaws. Obliquely striated muscle is intermediate between the other two. The filaments are staggered and this is the type of muscle found in earthworms that can extend slowly or make rapid contractions. In higher animals striated muscles occur in bundles attached to bone to provide movement and are often arranged in antagonistic sets. Smooth muscle is found in the walls of the uterus, bladder, intestines, stomach, oesophagus, respiratory airways, and blood vessels. Cardiac muscle is found only in the heart, allowing it to contract and pump blood through the body. Nervous tissue Cells comprising the central nervous system and peripheral nervous system are classified as nervous (or neural) tissue. In the central nervous system, neural tissues form the brain and spinal cord. In the peripheral nervous system, neural tissues form the cranial nerves and spinal nerves, inclusive of the motor neurons. Mineralized tissues Mineralized tissues are biological tissues that incorporate minerals into soft matrices. Such tissues may be found in both plants and animals. History Xavier Bichat introduced the word tissue into the study of anatomy by 1801. He was "the first to propose that tissue is a central element in human anatomy, and he considered organs as collections of often disparate tissues, rather than as entities in themselves". Although he worked without a microscope, Bichat distinguished 21 types of elementary tissues from which the organs of the human body are composed, a number later reduced by other authors.
Biology and health sciences
Basics_2
null
104444
https://en.wikipedia.org/wiki/Dietary%20supplement
Dietary supplement
A dietary supplement is a manufactured product intended to supplement a person's diet by taking a pill, capsule, tablet, powder, or liquid. A supplement can provide nutrients either extracted from food sources, or that are synthetic (in order to increase the quantity of their consumption). The classes of nutrient compounds in supplements include vitamins, minerals, fiber, fatty acids, and amino acids. Dietary supplements can also contain substances that have not been confirmed as being essential to life, and so are not nutrients per se, but are marketed as having a beneficial biological effect, such as plant pigments or polyphenols. Animals can also be a source of supplement ingredients, such as collagen from chickens or fish for example. These are also sold individually and in combination, and may be combined with nutrient ingredients. The European Commission has also established harmonized rules to help ensure that food supplements are safe and appropriately labeled. There are more than 50,000 dietary supplement products marketed in the United States, where about 50% of the American adult population consumes dietary supplements, creating an industry estimated to have a value of $151.9 billion in 2021. Multivitamins are the most commonly used product among types of dietary supplements. The United States National Institutes of Health states that supplements "may be of value" for those who are nutrient deficient from their diet and receive approval from their medical provider. In the United States, it is against federal regulations for supplement manufacturers to claim that these products prevent or treat any disease. Companies are allowed to use what is referred to as "Structure/Function" wording if there is substantiation of scientific evidence for a supplement providing a potential health effect. An example would be "_ helps maintain healthy joints", but the label must bear a disclaimer that the Food and Drug Administration (FDA) "has not evaluated the claim" and that the dietary supplement product is not intended to "diagnose, treat, cure or prevent any disease", because only a drug can legally make such a claim. The FDA enforces these regulations and also prohibits the sale of supplements and supplement ingredients that are dangerous, or supplements not made according to standardized good manufacturing practices (GMPs). Definition In the United States, the Dietary Supplement Health and Education Act of 1994 provides this description: "The Dietary Supplement Health and Education Act of 1994 (DSHEA) defines the term "dietary supplement" to mean a product (other than tobacco) intended to supplement the diet that bears or contains one or more of the following dietary ingredients: a vitamin, a mineral, an herb or other botanical, an amino acid, a dietary substance for use by man to supplement the diet by increasing the total dietary intake, or a concentrate, metabolite, constituent, extract, or combination of any of the aforementioned ingredients. Furthermore, a dietary supplement must be labeled as a dietary supplement and be intended for ingestion and must not be represented for use as conventional food or as a sole item of a meal or of the diet. In addition, a dietary supplement cannot be approved or authorized for investigation as a new drug, antibiotic, or biologic, unless it was marketed as a food or a dietary supplement before such approval or authorization. Under DSHEA, dietary supplements are deemed to be food, except for purposes of the drug definition." 
Per DSHEA, dietary supplements are consumed orally, and are mainly defined by what they are not: conventional foods (including meal replacements), medical foods, preservatives or pharmaceutical drugs. Products intended for use as a nasal spray, or topically, as a lotion applied to the skin, do not qualify. FDA-approved drugs cannot be ingredients in dietary supplements. Supplement products are or contain vitamins, nutritionally essential minerals, amino acids, essential fatty acids and non-nutrient substances extracted from plants or animals or fungi or bacteria, or in the instance of probiotics, are live bacteria. Dietary supplement ingredients may also be synthetic copies of naturally occurring substances (for example: melatonin). All products with these ingredients are required to be labeled as dietary supplements. Like foods and unlike drugs, no government approval is required to make or sell dietary supplements; the manufacturer confirms the safety of dietary supplements but the government does not; and rather than requiring risk–benefit analysis to prove that the product can be sold like a drug, such assessment is only used by the FDA to decide that a dietary supplement is unsafe and should be removed from market. Types Vitamins A vitamin is an organic compound required by an organism as a vital nutrient in limited amounts. An organic chemical compound (or related set of compounds) is called a vitamin when it cannot be synthesized in sufficient quantities by an organism and must be obtained from the diet. The term is conditional both on the circumstances and on the particular organism. For example, ascorbic acid (vitamin C) is a vitamin for anthropoid primates, humans, guinea pigs and bats, but not for other mammals. Vitamin D is not an essential nutrient for people who get sufficient exposure to ultraviolet light, either from the sun or an artificial source, as they synthesize vitamin D in skin. Humans require thirteen vitamins in their diet, most of which are actually groups of related molecules, "vitamers", (e.g. vitamin E includes tocopherols and tocotrienols, vitamin K includes vitamin K1 and K2). The list: vitamins A, C, D, E, K, Thiamine (B1), Riboflavin (B2), Niacin (B3), Pantothenic Acid (B5), Vitamin B6, Biotin (B7), Folate (B9) and Vitamin B12. Vitamin intake below recommended amounts can result in signs and symptoms associated with vitamin deficiency. There is little evidence of benefit when vitamins are consumed as a dietary supplement by those who are healthy and have a nutritionally adequate diet. The U.S. Institute of Medicine sets tolerable upper intake levels (ULs) for some of the vitamins. This does not prevent dietary supplement companies from selling products with content per serving higher than the ULs. For example, the UL for vitamin D is 100 μg (4,000 IU), but products are available without prescription at 10,000 IU. Minerals Minerals are the exogenous chemical elements indispensable for life. Four minerals – carbon, hydrogen, oxygen, and nitrogen – are essential for life but are so ubiquitous in food and drink that these are not considered nutrients and there are no recommended intakes for these as minerals. The need for nitrogen is addressed by requirements set for protein, which is composed of nitrogen-containing amino acids. Sulfur is essential, but for humans, not identified as having a recommended intake per se. Instead, recommended intakes are identified for the sulfur-containing amino acids methionine and cysteine. 
There are dietary supplements that provide sulfur, such as taurine and methylsulfonylmethane. The essential nutrient minerals for humans, listed in order by weight needed to be at the Recommended Dietary Allowance or Adequate Intake are potassium, chlorine, sodium, calcium, phosphorus, magnesium, iron, zinc, manganese, copper, iodine, chromium, molybdenum, selenium and cobalt (the last as a component of vitamin B12). There are other minerals which are essential for some plants and animals, but may or may not be essential for humans, such as boron and silicon. Essential and purportedly essential minerals are marketed as dietary supplements, individually and in combination with vitamins and other minerals. Although as a general rule, dietary supplement labeling and marketing are not allowed to make disease prevention or treatment claims, the U.S. FDA has for some foods and dietary supplements reviewed the science, concluded that there is significant scientific agreement, and published specifically worded allowed health claims. An initial ruling allowing a health claim for calcium dietary supplements and osteoporosis was later amended to include calcium supplements with or without vitamin D, effective January 1, 2010. Examples of allowed wording are shown below. In order to qualify for the calcium health claim, a dietary supplement must contain at least 20% of the Reference Dietary Intake, which for calcium means at least 260 mg/serving. "Adequate calcium throughout life, as part of a well-balanced diet, may reduce the risk of osteoporosis." "Adequate calcium as part of a healthful diet, along with physical activity, may reduce the risk of osteoporosis in later life." "Adequate calcium and vitamin D throughout life, as part of a well-balanced diet, may reduce the risk of osteoporosis." "Adequate calcium and vitamin D as part of a healthful diet, along with physical activity, may reduce the risk of osteoporosis in later life." In the same year, the European Food Safety Authority also approved a dietary supplement health claim for calcium and vitamin D and the reduction of the risk of osteoporotic fractures by reducing bone loss. The U.S. FDA also approved Qualified Health Claims (QHCs) for various health conditions for calcium, selenium and chromium picolinate. QHCs are supported by scientific evidence, but do not meet the more rigorous "significant scientific agreement" standard required for an authorized health claim. If dietary supplement companies choose to make such a claim then the FDA stipulates the exact wording of the QHC to be used on labels and in marketing materials. The wording can be onerous: "One study suggests that selenium intake may reduce the risk of bladder cancer in women. However, one smaller study showed no reduction in risk. Based on these studies, FDA concludes that it is highly uncertain that selenium supplements reduce the risk of bladder cancer in women." Proteins and amino acids Protein-containing supplements, either ready-to-drink or as powders to be mixed into water, are marketed as aids to people recovering from illness or injury, those hoping to thwart the sarcopenia of old age, to athletes who believe that strenuous physical activity increases protein requirements, to people hoping to lose weight while minimizing muscle loss, i.e., conducting a protein-sparing modified fast, and to people who want to increase muscle size for performance and appearance. 
Whey protein is a popular ingredient, but products may also incorporate casein, soy, pea, hemp or rice protein. A meta-analysis found a moderate degree of evidence in favor of whey protein supplements use as a safe and effective adjunct to an athlete's training and recovery, including benefits for endurance, average power, muscle mass, and reduced perceived exercise intensity. According to US and Canadian Dietary Reference Intake guidelines, the protein Recommended Dietary Allowance (RDA) for adults is based on 0.8 grams protein per kilogram body weight. The recommendation is for sedentary and lightly active people. Scientific reviews can conclude that a high protein diet, when combined with exercise, will increase muscle mass and strength, or conclude the opposite. The International Olympic Committee recommends protein intake targets for both strength and endurance athletes at about 1.2–1.8 g/kg body mass per day. One review proposed a maximum daily protein intake of approximately 25% of energy requirements, i.e., approximately 2.0 to 2.5 g/kg. The same protein ingredients marketed as dietary supplements can be incorporated into meal replacement and medical food products, but those are regulated and labeled differently from supplements. In the United States, "meal replacement" products are foods and are labeled as such. These typically contain protein, carbohydrates, fats, vitamins and minerals. There may be content claims such as "good source of protein", "low fat" or "lactose free". Medical foods, also nutritionally complete, are designed to be used while a person is under the care of a physician or other licensed healthcare professional. Liquid medical food products – for example, Ensure – are available in regular and high protein versions. Proteins are chains of amino acids. Nine of these proteinogenic amino acids are considered essential for humans because they cannot be produced from other compounds by the human body and so must be taken in as food. Recommended intakes, expressed as milligrams per kilogram of body weight per day, have been established. Other amino acids may be conditionally essential for certain ages or medical conditions. Amino acids, individually and in combinations, are sold as dietary supplements. The claim for supplementing with the branched-chain amino acids leucine, valine and isoleucine is for stimulating muscle protein synthesis. A review of the literature concluded this claim was unwarranted. In elderly people, supplementation with just leucine resulted in a modest (0.99 kg) increase in lean body mass. The non-essential amino acid arginine, consumed in sufficient amounts, is thought to act as a donor for the synthesis of nitric oxide, a vasodilator. A review confirmed blood pressure lowering. Taurine, a popular dietary supplement ingredient with claims made for sports performance, is technically not an amino acid. It is synthesized in the body from the amino acid cysteine. Bodybuilding supplements Essential fatty acids Fish oil is a commonly used fatty acid supplement because it is a source of omega-3 fatty acids. Fatty acids are strings of carbon atoms, having a range of lengths. If links are all single (C−C), then the fatty acid is called saturated; with one double bond (C=C), it is called monounsaturated; if there are two or more double bonds (C=C=C), it is called polyunsaturated. Only two fatty acids, both polyunsaturated, are considered essential to be obtained from the diet, as the others are synthesized in the body. 
The "essential" fatty acids are alpha-linolenic acid (ALA), an omega-3 fatty acid, and linoleic acid (LA), an omega-6 fatty acid. ALA can be elongated in the body to create other omega-3 fatty acids: eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). Plant oils, particularly seed and nut oils, contain ALA. Food sources of EPA and DHA are oceanic fish, whereas dietary supplement sources include fish oil, krill oil and marine algae extracts. The European Food Safety Authority (EFSA) identifies 250 mg/day for a combined total of EPA and DHA as Adequate Intake, with a recommendation that women pregnant or lactating consume an additional 100 to 200 mg/day of DHA. In the United States and Canada are Adequate Intakes for ALA and LA over various stages of life, but there are no intake levels specified for EPA and/or DHA. Supplementation with EPA and/or DHA does not appear to affect the risk of death, cancer or heart disease. Furthermore, studies of fish oil supplements have failed to support claims of preventing heart attacks or strokes. In 2017, the American Heart Association issued a science advisory stating that it could not recommend use of omega-3 fish oil supplements for primary prevention of cardiovascular disease or stroke, although it reaffirmed supplementation for people who have a history of coronary heart disease. Manufacturers have begun to include long chain polyunsaturated fatty acids DHA and arachidonic acid (AA) into their formula milk for newborns, however, a 2017 review found that supplementation with DHA and AA does not appear to be harmful or beneficial to formula-fed infants. Natural products Dietary supplements can be manufactured using intact sources or extracts from plants, animals, algae, fungi or lichens, including such examples as ginkgo biloba, curcumin, cranberry, St. John's wort, ginseng, resveratrol, glucosamine and collagen. Products bearing promotional claims of health benefits are sold without requiring a prescription in pharmacies, supermarkets, specialist shops, military commissaries, buyers clubs, direct selling organizations, and the internet. While most of these products have a long history of use in herbalism and various forms of traditional medicine, concerns exist about their actual efficacy, safety and consistency of quality. Canada has published a manufacturer and consumer guide describing quality, licensing, standards, identities, and common contaminants of natural products. In 2019, sales of herbal supplements just in the United States alone were $9.6 billion, with the market growing at approximately 8.6% per year, with cannabidiol and mushroom product sales as the highest. Italy, Germany, and Eastern European countries were leading consumers of botanical supplements in 2016, with European Union market growth forecast to be $8.7 billion by 2020. Probiotics Claimed benefits of using probiotic supplements are not supported by sufficient clinical evidence. Meta-analysis studies have reported a modest reduction of antibiotic-associated diarrhea and acute diarrhea in children taking probiotics. There is limited evidence in support of adults using mono-strain and multi-strain containing probiotics for the alleviation of symptoms associated with irritable bowel syndrome. Probiotic supplements are generally regarded as safe. 
Fertility A meta-analysis provided preliminary evidence that men treated with supplements containing selenium, zinc, omega-3 fatty acids, coenzyme Q10 or carnitines reported improvements in total sperm count, concentration, motility, and morphology. A review concluded that omega-3 taken through supplements and diet might improve semen quality in infertile men. A 2021 review also supported selenium, zinc, omega-3 fatty acids, coenzyme Q10 or carnitines, but warned that "excessive use of antioxidants may be detrimental to the spermatic function and many of the over-the-counter supplements are not scientifically proven to improve fertility." There is low quality and insufficient evidence for the use of oral antioxidant supplements as a viable treatment for subfertile woman. A review provided evidence that taking dehydroepiandrosterone before starting an in vitro fertilization series may increase pregnancy rates and decrease miscarriage likelihood. Prenatal Prenatal vitamins are dietary supplements commonly given to pregnant women to supply nutrients that may reduce health complications for the mother and fetus. Although prenatal vitamins are not meant to substitute for dietary nutrition, prenatal supplementation may be beneficial for pregnant women at risk of nutrient deficiencies because of diet limitations or restrictions. The most common components in prenatal vitamins include vitamins B6, folate, B12, C, D, E, iron and calcium. Sufficient intake of vitamin B6 can lower the risk of early pregnancy loss and relieve symptoms of morning sickness. Folate is also an essential nutrient for pregnant women to prevent neural tube defects. In 2006, the World Health Organization endorsed the recommendation for women of child-bearing age to consume 400 micrograms of folate through the diet daily if planning a pregnancy. A 2013 review found folic acid supplementation during pregnancy did not affect the mother's health other than a risk reduction on low pre-delivery serum folate and megaloblastic anemia. There is little evidence to suggest that vitamin D supplementation improves prenatal outcomes in hypertensive disorders and gestational diabetes. Evidence does not support the routine use of vitamin E supplementation during pregnancy to prevent adverse events, such as preterm birth, fetal or neonatal death, or maternal hypertensive disorders. Iron supplementation can lower the risk of iron deficiency anemia for pregnant women. In 2020, the World Health Organization updated recommendations for adequate calcium levels during pregnancy to prevent hypertensive disorders. Pharmacotherapy Individuals with hypokalemic sensory overstimulation are sometimes diagnosed as having attention deficit hyperactivity disorder (ADHD), raising the possibility that a subtype of ADHD has a cause that can be understood mechanistically and treated in a novel way. The sensory overload is treatable with oral potassium gluconate. Industry In 2020, the American market for dietary supplements was valued at $140.3 billion, with the economic impact in the United States for 2016 estimated at $122 billion, including employment wages and taxes. A 2020 analysis projected that the global market for vitamins and dietary supplement products would reach $196.6 billion by 2028, where the growth in market size is largely attributed to recent technological advancements in product manufacturing, increased demand for products advertised as healthy, increased product availability, and population aging. 
Adulteration, contamination and mislabeling Over the period 2008 to 2011, the Government Accountability Office (GAO) of the United States received 6,307 reports of health problems (identified as adverse events) from use of dietary supplements containing a combination of ingredients in manufactured vitamins, minerals or other supplement products, with 92% of tested herbal supplements containing lead and 80% containing other chemical contaminants. Using undercover staff, the GAO also found that supplement retailers intentionally engaged in "unequivocal deception" to sell products advertised with baseless health claims, particularly to elderly consumers. Consumer Reports also reported unsafe levels of arsenic, cadmium, lead and mercury in several protein powder products. The Canadian Broadcasting Corporation (CBC) reported that protein spiking, i.e., the addition of amino acids to manipulate protein content analysis, was common. Many of the companies involved challenged CBC's claim. In some botanical products, undeclared ingredients were used to increase the bulk of the product and reduce its cost of manufacturing, while potentially violating certain religious and/or cultural limitations on consuming animal ingredients, such as cow, buffalo or deer. In 2015, the New York Attorney General (NY-AG) identified four major retailers with dietary supplement products that contained fraudulent and potentially dangerous ingredients, requiring the companies to remove the products from retail stores. According to the NY-AG, only about 20% of the herbal supplements tested contained the plants claimed. The methodology used by the NY-AG was disputed. The test involves looking for DNA fragments from the plants named as the dietary supplement ingredients in the products. One scientist said that it was possible that the extraction process used to create the supplements removed or destroyed all DNA. This, however, would not explain the presence of DNA from plants such as rice or wheat, that were not listed as ingredients. A study of dietary supplements sold between 2007 and 2016 identified 776 that contained unlisted pharmaceutical drugs, many of which could interact with other medications and lead to hospitalization. 86% of the adulterated supplements were marketed for weight loss and sexual performance, with many containing prescription erectile dysfunction medication. Muscle building supplements were contaminated with anabolic steroids that can lead to health complications affecting the kidney, the heart, and cause gynecomastia. Multiple bodybuilding products also contained antidepressants and antihistamines. Despite these findings, fewer than half of the adulterated supplements were recalled. Regulatory compliance The European Commission has published harmonized rules on supplement products to assure consumers have minimal health risks from using dietary supplements and are not misled by advertising. In the United States and Canada, dietary supplements are considered a subset of foods, and are regulated accordingly. The U.S. Food and Drug Administration (FDA) monitors supplement products for accuracy in advertising and labeling. Dietary supplements are regulated by the FDA as food products subject to compliance with current Good Manufacturing Practices (CGMP) and labeling with science-based ingredient descriptions and advertising. 
When finding CGMP or advertising violations, FDA warning letters are used to notify manufacturers of impending enforcement action, including search and seizure, injunction, and financial penalties. Examples between 2016 and 2018 of CGMP and advertising violations by dietary supplement manufacturers included several with illegal compositions or advertising of vitamins and minerals. The U.S. Federal Trade Commission, which litigates against deceptive advertising in marketed products, established a consumer center to assist reports of false health claims in product advertising for dietary supplements. In 2017, the FTC successfully sued nine manufacturers for deceptive advertising of dietary supplements. Adverse effects In the United States, manufacturers of dietary supplements are required to demonstrate safety of their products before approval is granted for commerce. Despite this caution, numerous adverse effects have been reported, including muscle cramps, hair loss, joint pain, liver disease, and allergic reactions, with 29% of the adverse effects resulting in hospitalization, and 20% in serious injuries or illnesses. The potential for adverse effects also occurs when individuals consume more than the necessary daily amount of vitamins or minerals that are needed to maintain normal body processes and functions. The incidence of adverse effects reported to the FDA were due to "combination products" that contain multiple ingredients, whereas dietary supplements containing a single vitamin, mineral, lipid product, and herbal product were less likely to cause adverse effects related to excess supplementation. Among general reasons for the possible harmful effects of dietary supplements are: a) absorption in a short time, b) manufacturing quality and contamination, and c) enhancing both positive and negative effects at the same time. The incidence of liver injury from herbal and dietary supplements is about 16–20% of all supplement products causing injury, with the occurrence growing globally over the early 21st century. The most common liver injuries from weight loss and bodybuilding supplements involve hepatocellular damage with resulting jaundice, and the most common supplement ingredients attributed to these injuries are green tea catechins, anabolic steroids, and the herbal extract, aegeline. Weight loss supplements have also had adverse psychiatric effects. Some dietary supplements may also have adverse interactions with prescription medications that may enhance side effects or decrease therapeutic effects of medications. Society and culture Public health Work done by scientists in the early 20th century on identifying individual nutrients in food and developing ways to manufacture them raised hopes that optimal health could be achieved and diseases prevented by adding them to food and providing people with dietary supplements; while there were successes in preventing vitamin deficiencies, and preventing conditions like neural tube defects by supplementation and food fortification with folic acid, no targeted supplementation or fortification strategies to prevent major diseases like cancer or cardiovascular diseases have proved successful. For example, while increased consumption of fruits and vegetables are related to decreases in mortality, cardiovascular diseases and cancers, supplementation with key factors found in fruits and vegetable, like antioxidants, vitamins, or minerals, do not help and some have been found to be harmful in some cases. 
In general, as of 2016, robust clinical data is lacking, that shows that any kind of dietary supplementation does more good than harm for people who are healthy and eating a reasonable diet but there is clear data showing that dietary pattern and lifestyle choices are associated with health outcomes. As a result of the lack of good data for supplementation and the strong data for dietary pattern, public health recommendations for healthy eating urge people to eat a plant-based diet of whole foods, minimizing ultra-processed food, salt and sugar and to get exercise daily, and to abandon Western pattern diets and a sedentary lifestyle. Legal regulation United States The regulation of food and dietary supplements by the U.S. Food and Drug Administration (FDA) is governed by various statutes enacted by the United States Congress. Pursuant to the Federal Food, Drug, and Cosmetic Act and accompanying legislation, the FDA has authority to oversee the quality of substances sold as food in the United States, and to monitor claims made in the labeling about both the composition and the health benefits of foods. Substances which the FDA regulates as food are subdivided into various categories, including foods, food additives, added substances (man-made substances which are not intentionally introduced into food, but nevertheless end up in it), and dietary supplements. The specific standards which the FDA exercises differ from one category to the next. Furthermore, the FDA has been granted a variety of means by which it can address violations of the standards for a given category of substances. Dietary supplement manufacture is required to comply with the good manufacturing practices established in 2007. The FDA can visit manufacturing facilities, send Warning Letters if not in compliance with GMPs, stop production, and if there is a health risk, require that the company conduct a recall. Only after a dietary supplement product is marketed, may the FDA's Center for Food Safety and Applied Nutrition (CFSAN) review the products for safety and effectiveness. European Union The European Union's (EU) Food Supplements Directive of 2002 requires that supplements be demonstrated to be safe, both in dosages and in purity. Only those supplements that have been proven to be safe may be sold in the EU without prescription. As a category of food, food supplements cannot be labeled with drug claims but can bear health claims and nutrition claims. The dietary supplements industry in the United Kingdom (UK), one of the 28 countries in the bloc, strongly opposed the Directive. In addition, a large number of consumers throughout Europe, including over one million in the UK, and various doctors and scientists, had signed petitions by 2005 against what are viewed by the petitioners as unjustified restrictions of consumer choice. In 2004, along with two British trade associations, the Alliance for Natural Health (ANH) had a legal challenge to the Food Supplements Directive referred to the European Court of Justice by the High Court in London. Although the European Court of Justice's Advocate General subsequently said that the bloc's plan to tighten rules on the sale of vitamins and food supplements should be scrapped, he was eventually overruled by the European Court, which decided that the measures in question were necessary and appropriate for the purpose of protecting public health. 
ANH, however, interpreted the ban as applying only to synthetically produced supplements, and not to vitamins and minerals normally found in or consumed as part of the diet. Nevertheless, the European judges acknowledged the Advocate General's concerns, stating that there must be clear procedures to allow substances to be added to the permitted list based on scientific evidence. They also said that any refusal to add the product to the list must be open to challenge in the courts. Fraudulent products during the COVID-19 outbreak During the COVID-19 pandemic in the United States, the FDA and Federal Trade Commission (FTC) warned consumers about marketing scams of fraudulent supplement products, including homeopathic remedies, cannabidiol products, teas, essential oils, tinctures and colloidal silver, among others. By August 2020, the FDA and FTC had issued warning letters to dozens of companies advertising scam products, which were purported "to be drugs, medical devices or vaccines. Products that claim to cure, mitigate, treat, diagnose or prevent disease, but are not proven safe and effective for those purposes, defraud consumers of money and can place consumers at risk for serious harm" Research Examples of ongoing government research organizations to better understand the potential health properties and safety of dietary supplements are the European Food Safety Authority, the Office of Dietary Supplements of the United States National Institutes of Health, the Natural and Non-prescription Health Products Directorate of Canada, and the Therapeutic Goods Administration of Australia. Together with public and private research groups, these agencies construct databases on supplement properties, perform research on quality, safety, and population trends of supplement use, and evaluate the potential clinical efficacy of supplements for maintaining health or lowering disease risk. Databases As continual research on the properties of supplements accumulates, databases or fact sheets for various supplements are updated regularly, including the Dietary Supplement Label Database, Dietary Supplement Ingredient Database, and Dietary Supplement Facts Sheets of the United States. In Canada where a license is issued when a supplement product has been proven by the manufacturer and government to be safe, effective and of sufficient quality for its recommended use, an eight-digit Natural Product Number is assigned and recorded in a Licensed Natural Health Products Database. The European Food Safety Authority maintains a compendium of botanical ingredients used in manufacturing of dietary supplements. In 2015, the Australian Government's Department of Health published the results of a review of herbal supplements to determine if any were suitable for coverage by health insurance. Establishing guidelines to assess safety and efficacy of botanical supplement products, the European Medicines Agency provided criteria for evaluating and grading the quality of clinical research in preparing monographs about herbal supplements. In the United States, the National Center for Complementary and Integrative Health of the National Institutes of Health provides fact sheets evaluating the safety, potential effectiveness and side effects of many botanical products. Quality and safety To assure supplements have sufficient quality, standardization, and safety for public consumption, research efforts have focused on development of reference materials for supplement manufacturing and monitoring. 
High-dose products have received research attention, especially for emergency situations such as vitamin A deficiency in malnutrition of children, and for women taking folate supplements to reduce the risk of breast cancer. Population monitoring In the United States, the National Health and Nutrition Examination Survey (NHANES) has investigated habits of using dietary supplements in context of total nutrient intakes from the diet in adults and children. Over the period of 1999 to 2012, use of multivitamins decreased, and there was wide variability in the use of individual supplements among subgroups by age, sex, race/ethnicity, and educational status. Particular attention has been given to use of folate supplements by young women to reduce the risk of fetal neural tube defects. Clinical studies Limited human research has been conducted on the potential for dietary supplementation to affect disease risk. Examples include vitamin D and acute respiratory tract infections; iron and maternal iron deficiency anemia and adverse effects on the fetus; multiple supplements, with no evidence of benefit to lower the risk of death, cardiovascular diseases or cancer; magnesium supplementation in reducing all-cause and cancer mortality, as well as improving glucose parameters in people with diabetes and insulin-sensitivity parameters in those at high risk of diabetes; and folate, alone or with B vitamins, and stroke. A 2017 academic review indicated a rising incidence of liver injury from use of herbal and dietary supplements, particularly those with steroids, green tea extract, or multiple ingredients. Absence of benefit The potential benefit of using essential nutrient dietary supplements to lower the risk of diseases has been refuted by findings of no effect or weak evidence in numerous clinical reviews, such as for HIV or tuberculosis. Reporting bias A review of clinical trials registered at clinicaltrials.gov, which would include both drugs and supplements, reported that nearly half of completed trials were sponsored wholly or partially by industry. This does not automatically imply bias, but there is evidence that because of selective non-reporting, results in support of a potential drug or supplement ingredient are more likely to be published than results that do not demonstrate a statistically significant benefit. One review reported that fewer than half of the registered clinical trials resulted in publication in peer-reviewed journals. Future Improving public information about use of dietary supplements involves investments in professional training programs, further studies of population and nutrient needs, expanding the database information, enhancing collaborations between governments and universities, and translating dietary supplement research into useful information for consumers, health professionals, scientists, and policymakers. Future demonstration of efficacy from use of dietary supplements requires high-quality clinical research using rigorously qualified products and compliance with established guidelines for reporting of clinical trial results (e.g., CONSORT guidelines).
Biology and health sciences
Health and fitness
null
104790
https://en.wikipedia.org/wiki/Integer%20partition
Integer partition
In number theory and combinatorics, a partition of a non-negative integer n, also called an integer partition, is a way of writing n as a sum of positive integers. Two sums that differ only in the order of their summands are considered the same partition. (If order matters, the sum becomes a composition.) For example, 4 can be partitioned in five distinct ways: 4, 3 + 1, 2 + 2, 2 + 1 + 1, and 1 + 1 + 1 + 1. The only partition of zero is the empty sum, having no parts. The order-dependent composition 1 + 3 is the same partition as 3 + 1, and the two distinct compositions 1 + 2 + 1 and 1 + 1 + 2 represent the same partition as 2 + 1 + 1. An individual summand in a partition is called a part. The number of partitions of n is given by the partition function p(n). So p(4) = 5. The notation λ ⊢ n means that λ is a partition of n. Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials and of the symmetric group and in group representation theory in general. Examples The seven partitions of 5 are: 5, 4 + 1, 3 + 2, 3 + 1 + 1, 2 + 2 + 1, 2 + 1 + 1 + 1, and 1 + 1 + 1 + 1 + 1. Some authors treat a partition as a decreasing sequence of summands, rather than an expression with plus signs. For example, the partition 2 + 2 + 1 might instead be written as the tuple (2, 2, 1) or in the even more compact form 2^2 1, where the superscript indicates the number of repetitions of a part. This multiplicity notation for a partition can be written alternatively as 1^{m_1} 2^{m_2} 3^{m_3} ···, where m_1 is the number of 1's, m_2 is the number of 2's, etc. (Components with m_i = 0 may be omitted.) For example, in this notation, the partitions of 5 are written 5^1, 1^1 4^1, 2^1 3^1, 1^2 3^1, 1^1 2^2, 1^3 2^1, and 1^5. Diagrammatic representations of partitions There are two common diagrammatic methods to represent partitions: as Ferrers diagrams, named after Norman Macleod Ferrers, and as Young diagrams, named after Alfred Young. Both have several possible conventions; here, we use English notation, with diagrams aligned in the upper-left corner. Ferrers diagram The partition 6 + 4 + 3 + 1 of the number 14 can be represented by the following diagram: The 14 circles are lined up in 4 rows, each having the size of a part of the partition. The diagrams for the 5 partitions of the number 4 are shown below: Young diagram An alternative visual representation of an integer partition is its Young diagram (often also called a Ferrers diagram). Rather than representing a partition with dots, as in the Ferrers diagram, the Young diagram uses boxes or squares. Thus, the Young diagram for the partition 5 + 4 + 1 is drawn with rows of boxes, while the Ferrers diagram for the same partition is drawn with rows of dots. While this seemingly trivial variation does not appear worthy of separate mention, Young diagrams turn out to be extremely useful in the study of symmetric functions and group representation theory: filling the boxes of Young diagrams with numbers (or sometimes more complicated objects) obeying various rules leads to a family of objects called Young tableaux, and these tableaux have combinatorial and representation-theoretic significance. As a type of shape made by adjacent squares joined together, Young diagrams are a special kind of polyomino. Partition function The partition function p(n) counts the partitions of a non-negative integer n. For instance, p(4) = 5 because the integer 4 has the five partitions 4, 3 + 1, 2 + 2, 2 + 1 + 1, and 1 + 1 + 1 + 1. 
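To make these definitions concrete, here is a minimal Python sketch that enumerates the partitions of a small integer as non-increasing tuples and counts them with a naive partition function. The helper name partitions and the structure of the code are choices made for this illustration only; they are not part of the article.

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples of positive integers."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()          # the empty sum is the only partition of 0
        return
    # Choose the largest part first, then partition the remainder with parts no larger than it.
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def p(n):
    """Partition function: the number of partitions of n."""
    return sum(1 for _ in partitions(n))

print(list(partitions(4)))        # [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
print(p(4), p(5))                 # 5 7
print([p(n) for n in range(10)])  # 1, 1, 2, 3, 5, 7, 11, 15, 22, 30
```

This brute-force enumeration is only practical for small n; the generating function and recurrence relations described next are how p(n) is computed efficiently for larger arguments.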
The values of this function for n = 0, 1, 2, ... are: 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627, 792, 1002, 1255, 1575, 1958, 2436, 3010, 3718, 4565, 5604, ... . The generating function of p(n) is $\sum_{n=0}^{\infty} p(n)x^n = \prod_{k=1}^{\infty} \frac{1}{1-x^k}$. No closed-form expression for the partition function is known, but it has both asymptotic expansions that accurately approximate it and recurrence relations by which it can be calculated exactly. It grows as an exponential function of the square root of its argument, as follows: $p(n) \sim \frac{1}{4n\sqrt{3}} \exp\left(\pi\sqrt{\frac{2n}{3}}\right)$ as $n \to \infty$. In 1937, Hans Rademacher found a way to represent the partition function by a convergent series whose terms involve Dedekind sums. The multiplicative inverse of its generating function is the Euler function; by Euler's pentagonal number theorem this function is an alternating sum of pentagonal number powers of its argument. Srinivasa Ramanujan discovered that the partition function has nontrivial patterns in modular arithmetic, now known as Ramanujan's congruences. For instance, whenever the decimal representation of n ends in the digit 4 or 9, the number of partitions of n will be divisible by 5. Restricted partitions In both combinatorics and number theory, families of partitions subject to various restrictions are often studied. This section surveys a few such restrictions. Conjugate and self-conjugate partitions If we flip the diagram of the partition 6 + 4 + 3 + 1 along its main diagonal, we obtain another partition of 14: By turning the rows into columns, we obtain the partition 4 + 3 + 3 + 2 + 1 + 1 of the number 14. Such partitions are said to be conjugate of one another. In the case of the number 4, partitions 4 and 1 + 1 + 1 + 1 are conjugate pairs, and partitions 3 + 1 and 2 + 1 + 1 are conjugate of each other. Of particular interest are partitions, such as 2 + 2, which have themselves as conjugate. Such partitions are said to be self-conjugate. Claim: The number of self-conjugate partitions is the same as the number of partitions with distinct odd parts. Proof (outline): The crucial observation is that every odd part can be "folded" in the middle to form a self-conjugate diagram: One can then obtain a bijection between the set of partitions with distinct odd parts and the set of self-conjugate partitions, as illustrated by the following example: Odd parts and distinct parts Among the 22 partitions of the number 8, there are 6 that contain only odd parts: 7 + 1, 5 + 3, 5 + 1 + 1 + 1, 3 + 3 + 1 + 1, 3 + 1 + 1 + 1 + 1 + 1, and 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1. Alternatively, we could count partitions in which no number occurs more than once. Such a partition is called a partition with distinct parts. If we count the partitions of 8 with distinct parts, we also obtain 6: 8, 7 + 1, 6 + 2, 5 + 3, 5 + 2 + 1, and 4 + 3 + 1. This is a general property. For each positive number, the number of partitions with odd parts equals the number of partitions with distinct parts, denoted by q(n). This result was proved by Leonhard Euler in 1748 and later was generalized as Glaisher's theorem. For every type of restricted partition there is a corresponding function for the number of partitions satisfying the given restriction. An important example is q(n) (partitions into distinct parts). The first few values of q(n) are (starting with q(0)=1): 1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10, ... . The generating function for q(n) is given by $\prod_{k=1}^{\infty} (1+x^k)$. The pentagonal number theorem gives a recurrence for q: q(k) = a_k + q(k − 1) + q(k − 2) − q(k − 5) − q(k − 7) + q(k − 12) + q(k − 15) − q(k − 22) − ... 
where a_k is (−1)^m if k = 3m^2 − m for some integer m and is 0 otherwise. Restricted part size or number of parts By taking conjugates, the number of partitions of n into exactly k parts is equal to the number of partitions of n in which the largest part has size k. The function satisfies the recurrence with initial values and if and and are not both zero. One recovers the function p(n) by One possible generating function for such partitions, taking k fixed and n variable, is More generally, if T is a set of positive integers then the number of partitions of n, all of whose parts belong to T, has generating function This can be used to solve change-making problems (where the set T specifies the available coins). As two particular cases, one has that the number of partitions of n in which all parts are 1 or 2 (or, equivalently, the number of partitions of n into 1 or 2 parts) is and the number of partitions of n in which all parts are 1, 2 or 3 (or, equivalently, the number of partitions of n into at most three parts) is the nearest integer to (n + 3)^2 / 12. Partitions in a rectangle and Gaussian binomial coefficients One may also simultaneously limit the number and size of the parts. Let denote the number of partitions of with at most parts, each of size at most . Equivalently, these are the partitions whose Young diagram fits inside an rectangle. There is a recurrence relation obtained by observing that counts the partitions of into exactly parts of size at most , and subtracting 1 from each part of such a partition yields a partition of into at most parts. The Gaussian binomial coefficient is defined as: The Gaussian binomial coefficient is related to the generating function of by the equality Rank and Durfee square The rank of a partition is the largest number k such that the partition contains at least k parts of size at least k. For example, the partition 4 + 3 + 3 + 2 + 1 + 1 has rank 3 because it contains 3 parts that are ≥ 3, but does not contain 4 parts that are ≥ 4. In the Ferrers diagram or Young diagram of a partition of rank r, the r × r square of entries in the upper-left is known as the Durfee square. The Durfee square has applications within combinatorics in the proofs of various partition identities. It also has some practical significance in the form of the h-index. A different statistic is also sometimes called the rank of a partition (or Dyson rank), namely, the difference for a partition of k parts with largest part . This statistic (which is unrelated to the one described above) appears in the study of Ramanujan congruences. Young's lattice There is a natural partial order on partitions given by inclusion of Young diagrams. This partially ordered set is known as Young's lattice. The lattice was originally defined in the context of representation theory, where it is used to describe the irreducible representations of symmetric groups S_n for all n, together with their branching properties, in characteristic zero. It also has received significant study for its purely combinatorial properties; notably, it is the motivating example of a differential poset. Random partitions There is a deep theory of random partitions chosen according to the uniform probability distribution on the symmetric group via the Robinson–Schensted correspondence. 
In 1977, Logan and Shepp, as well as Vershik and Kerov, showed that the Young diagram of a typical large partition becomes asymptotically close to the graph of a certain analytic function minimizing a certain functional. In 1988, Baik, Deift and Johansson extended these results to determine the distribution of the longest increasing subsequence of a random permutation in terms of the Tracy–Widom distribution. Okounkov related these results to the combinatorics of Riemann surfaces and representation theory.
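The facts stated in the sections above about conjugation, the Durfee-square rank, and Euler's odd-parts/distinct-parts theorem are easy to check computationally for small n. The following Python sketch is an illustrative addition: the helper names are hypothetical, and the partition generator from the earlier example is repeated so that the snippet is self-contained.

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples of positive integers."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def conjugate(part):
    """Conjugate partition: transpose the Ferrers diagram, turning rows into columns."""
    return tuple(sum(1 for q in part if q > i) for i in range(part[0])) if part else ()

def rank(part):
    """Durfee-square rank: the largest k with at least k parts of size at least k."""
    return sum(1 for i, q in enumerate(part, start=1) if q >= i)

# Euler's theorem: the number of partitions into odd parts equals the number into distinct parts.
for n in range(1, 15):
    odd = sum(1 for lam in partitions(n) if all(q % 2 == 1 for q in lam))
    distinct = sum(1 for lam in partitions(n) if len(set(lam)) == len(lam))
    assert odd == distinct

print(conjugate((6, 4, 3, 1)))   # (4, 3, 3, 2, 1, 1), as in the conjugation example above
print(rank((4, 3, 3, 2, 1, 1)))  # 3, matching the rank example above
```

For n = 8 both counts are 6, reproducing the two six-element lists given in the section on odd parts and distinct parts.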
Mathematics
Sums and products
null
20491903
https://en.wikipedia.org/wiki/Velocity
Velocity
Velocity is the speed in combination with the direction of motion of an object. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a physical vector quantity: both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called speed, being a coherent derived unit whose quantity is measured in the SI (metric system) as metres per second (m/s or m⋅s⁻¹). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction or both, then the object is said to be undergoing an acceleration. Definition Average velocity The average velocity of an object over a period of time is its change in position, , divided by the duration of the period, , given mathematically as Instantaneous velocity The instantaneous velocity of an object is the limit of the average velocity as the time interval approaches zero. At any particular time , it can be calculated as the derivative of the position with respect to time: From this derivative equation, in the one-dimensional case it can be seen that the area under a velocity vs. time ( vs. graph) is the displacement, . In calculus terms, the integral of the velocity function is the displacement function . In the figure, this corresponds to the yellow area under the curve. Although the concept of an instantaneous velocity might at first seem counter-intuitive, it may be thought of as the velocity that the object would continue to travel at if it stopped accelerating at that moment. Difference between speed and velocity While the terms speed and velocity are often colloquially used interchangeably to connote how fast an object is moving, in scientific terms they are different. Speed, the scalar magnitude of a velocity vector, denotes only how fast an object is moving, while velocity indicates both an object's speed and direction. To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path; thus, a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration. Units Since the derivative of the position with respect to time gives the change in position (in metres) divided by the change in time (in seconds), velocity is measured in metres per second (m/s). Equation of motion Average velocity Velocity is defined as the rate of change of position with respect to time, which may also be referred to as the instantaneous velocity to emphasize the distinction from the average velocity. In some applications the average velocity of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity in the same time interval, , over some time period . Average velocity can be calculated as: The average velocity is always less than or equal to the average speed of an object. This can be seen by realizing that while distance is always strictly increasing, displacement can increase or decrease in magnitude as well as change direction. In terms of a displacement-time ( vs. 
) graph, the instantaneous velocity (or, simply, velocity) can be thought of as the slope of the tangent line to the curve at any point, and the average velocity as the slope of the secant line between two points with coordinates equal to the boundaries of the time period for the average velocity. Special cases When a particle moves with different uniform speeds v1, v2, v3, ..., vn in different time intervals t1, t2, t3, ..., tn respectively, then average speed over the total time of journey is given as If , then average speed is given by the arithmetic mean of the speeds When a particle moves different distances s1, s2, s3,..., sn with speeds v1, v2, v3,..., vn respectively, then the average speed of the particle over the total distance is given as If , then average speed is given by the harmonic mean of the speeds Relationship to acceleration Although velocity is defined as the rate of change of position, it is often common to start with an expression for an object's acceleration. As seen by the three green tangent lines in the figure, an object's instantaneous acceleration at a point in time is the slope of the line tangent to the curve of a graph at that point. In other words, instantaneous acceleration is defined as the derivative of velocity with respect to time: From there, velocity is expressed as the area under an acceleration vs. time graph. As above, this is done using the concept of the integral: Constant acceleration In the special case of constant acceleration, velocity can be studied using the suvat equations. By considering a as being equal to some arbitrary constant vector, this shows with as the velocity at time and as the velocity at time . By combining this equation with the suvat equation , it is possible to relate the displacement and the average velocity by It is also possible to derive an expression for the velocity independent of time, known as the Torricelli equation, as follows: where etc. The above equations are valid for both Newtonian mechanics and special relativity. Where Newtonian mechanics and special relativity differ is in how different observers would describe the same situation. In particular, in Newtonian mechanics, all observers agree on the value of t and the transformation rules for position create a situation in which all non-accelerating observers would describe the acceleration of an object with the same values. Neither is true for special relativity. In other words, only relative velocity can be calculated. Quantities that are dependent on velocity Momentum In classical mechanics, Newton's second law defines momentum, p, as a vector that is the product of an object's mass and velocity, given mathematically aswhere m is the mass of the object. Kinetic energy The kinetic energy of a moving object is dependent on its velocity and is given by the equationwhere Ek is the kinetic energy. Kinetic energy is a scalar quantity as it depends on the square of the velocity. Drag (fluid resistance) In fluid dynamics, drag is a force acting opposite to the relative motion of any object moving with respect to a surrounding fluid. The drag force, , is dependent on the square of velocity and is given aswhere is the density of the fluid, is the speed of the object relative to the fluid, is the cross sectional area, and is the drag coefficient – a dimensionless number. Escape velocity Escape velocity is the minimum speed a ballistic object needs to escape from a massive body such as Earth. 
Escape velocity
Escape velocity is the minimum speed a ballistic object needs to escape from a massive body such as Earth. It represents the kinetic energy that, when added to the object's gravitational potential energy (which is always negative), is equal to zero. The general formula for the escape velocity of an object at a distance r from the center of a planet with mass M is
\[v_e = \sqrt{\frac{2GM}{r}} = \sqrt{2gr},\]
where G is the gravitational constant and g is the gravitational acceleration. The escape velocity from Earth's surface is about 11 200 m/s, and does not depend on the direction in which the object travels. This makes "escape velocity" somewhat of a misnomer, as the more correct term would be "escape speed": any object attaining a velocity of that magnitude, irrespective of atmosphere, will leave the vicinity of the base body as long as it does not intersect with something in its path.

The Lorentz factor of special relativity
In special relativity, the dimensionless Lorentz factor appears frequently, and is given by
\[\gamma = \frac{1}{\sqrt{1 - \dfrac{v^2}{c^2}}},\]
where γ is the Lorentz factor and c is the speed of light.

Relative velocity
Relative velocity is a measurement of velocity between two objects as determined in a single coordinate system. Relative velocity is fundamental in both classical and modern physics, since many systems in physics deal with the relative motion of two or more particles. Consider an object A moving with velocity vector v and an object B with velocity vector w; these absolute velocities are typically expressed in the same inertial reference frame. Then, the velocity of object A relative to object B is defined as the difference of the two velocity vectors:
\[\boldsymbol{v}_{A \text{ relative to } B} = \boldsymbol{v} - \boldsymbol{w}.\]
Similarly, the relative velocity of object B moving with velocity w, relative to object A moving with velocity v, is:
\[\boldsymbol{v}_{B \text{ relative to } A} = \boldsymbol{w} - \boldsymbol{v}.\]
Usually, the inertial frame chosen is that in which the latter of the two mentioned objects is at rest. In Newtonian mechanics, the relative velocity is independent of the chosen inertial reference frame. This is no longer the case in special relativity, in which velocities depend on the choice of reference frame.

Scalar velocities
In the one-dimensional case, the velocities are scalars and the equation is either:
\[v_{rel} = v - (-w) = v + w,\]
if the two objects are moving in opposite directions, or:
\[v_{rel} = v - (+w) = v - w,\]
if the two objects are moving in the same direction.

Coordinate systems

Cartesian coordinates
In multi-dimensional Cartesian coordinate systems, velocity is broken up into components that correspond with each dimensional axis of the coordinate system. In a two-dimensional system, where there is an x-axis and a y-axis, the corresponding velocity components are defined as
\[v_x = \frac{dx}{dt}, \qquad v_y = \frac{dy}{dt}.\]
The two-dimensional velocity vector is then defined as \(\boldsymbol{v} = \langle v_x, v_y \rangle\). The magnitude of this vector represents speed and is found by the distance formula as
\[|\boldsymbol{v}| = \sqrt{v_x^2 + v_y^2}.\]
In three-dimensional systems, where there is an additional z-axis, the corresponding velocity component is defined as
\[v_z = \frac{dz}{dt}.\]
The three-dimensional velocity vector is defined as \(\boldsymbol{v} = \langle v_x, v_y, v_z \rangle\), with its magnitude also representing speed and being determined by
\[|\boldsymbol{v}| = \sqrt{v_x^2 + v_y^2 + v_z^2}.\]
While some textbooks use subscript notation to define Cartesian components of velocity, others use \(u\), \(v\), and \(w\) for the x-, y-, and z-axes respectively.
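A minimal sketch of the Cartesian bookkeeping described above (the component values are arbitrary): speed is the Euclidean magnitude of the velocity vector, and the relative velocity of one object with respect to another is the componentwise difference of their velocity vectors.

```python
import math

# Assumed example velocity vectors (m/s) in 3-D Cartesian components.
v_A = (3.0, 4.0, 12.0)   # object A
v_B = (1.0, 1.0, 1.0)    # object B

def speed(v):
    # magnitude of the velocity vector: sqrt(vx^2 + vy^2 + vz^2)
    return math.sqrt(sum(c * c for c in v))

def relative_velocity(v, w):
    # velocity of the first object relative to the second: v - w
    return tuple(vc - wc for vc, wc in zip(v, w))

print(speed(v_A))                          # 13.0 m/s
print(relative_velocity(v_A, v_B))         # (2.0, 3.0, 11.0)
print(speed(relative_velocity(v_A, v_B)))  # ~11.58 m/s
```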
Polar coordinates
In polar coordinates, a two-dimensional velocity is described by a radial velocity, defined as the component of velocity away from or toward the origin, and a transverse velocity, perpendicular to the radial one. Both arise from angular velocity, which is the rate of rotation about the origin (with positive quantities representing counter-clockwise rotation and negative quantities representing clockwise rotation, in a right-handed coordinate system).

The radial and transverse velocities can be derived from the Cartesian velocity and displacement vectors by decomposing the velocity vector into radial and transverse components. The transverse velocity is the component of velocity along a circle centered at the origin:
\[\boldsymbol{v} = \boldsymbol{v}_T + \boldsymbol{v}_R,\]
where \(\boldsymbol{v}_T\) is the transverse velocity and \(\boldsymbol{v}_R\) is the radial velocity. The radial speed (or magnitude of the radial velocity) is the dot product of the velocity vector and the unit vector in the radial direction:
\[v_R = \frac{\boldsymbol{v} \cdot \boldsymbol{r}}{|\boldsymbol{r}|} = \boldsymbol{v} \cdot \hat{\boldsymbol{r}},\]
where \(\boldsymbol{r}\) is position and \(\hat{\boldsymbol{r}}\) is the unit vector in the radial direction. The transverse speed (or magnitude of the transverse velocity) is the magnitude of the cross product of the unit vector in the radial direction and the velocity vector. It is also the dot product of velocity and transverse direction, or the product of the angular speed \(\omega\) and the radius (the magnitude of the position):
\[v_T = \frac{|\boldsymbol{r} \times \boldsymbol{v}|}{|\boldsymbol{r}|} = \boldsymbol{v} \cdot \hat{\boldsymbol{t}} = \omega |\boldsymbol{r}|,\]
such that
\[\omega = \frac{|\boldsymbol{r} \times \boldsymbol{v}|}{|\boldsymbol{r}|^2}.\]
Angular momentum in scalar form is the mass times the distance to the origin times the transverse velocity, or equivalently, the mass times the distance squared times the angular speed. The sign convention for angular momentum is the same as that for angular velocity:
\[L = m r v_T = m r^2 \omega,\]
where \(m\) is mass and \(r = |\boldsymbol{r}|\). The expression \(m r^2\) is known as the moment of inertia. If forces are in the radial direction only with an inverse square dependence, as in the case of a gravitational orbit, angular momentum is constant, transverse speed is inversely proportional to the distance, angular speed is inversely proportional to the distance squared, and the rate at which area is swept out is constant. These relations are known as Kepler's laws of planetary motion.
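The polar decomposition can be checked numerically. In the hedged sketch below (position, velocity, and mass values are arbitrary), the radial speed is v·r̂, the transverse speed is |r × v|/|r|, and the scalar angular momentum m r v_T equals m r² ω, as stated above.

```python
import math

m = 2.0            # mass in kg (assumed example value)
r = (3.0, 4.0)     # 2-D position in metres (assumed)
v = (-1.0, 2.0)    # 2-D velocity in m/s (assumed)

r_mag = math.hypot(*r)
r_hat = (r[0] / r_mag, r[1] / r_mag)

# Radial speed: dot product of velocity with the radial unit vector.
v_R = v[0] * r_hat[0] + v[1] * r_hat[1]

# Transverse speed: magnitude of the 2-D cross product r x v, divided by |r|.
cross = r[0] * v[1] - r[1] * v[0]
v_T = abs(cross) / r_mag

omega = abs(cross) / r_mag**2      # angular speed
L = m * r_mag * v_T                # angular momentum, also m * r^2 * omega

print(v_R, v_T)                    # 1.0 and 2.0
print(L, m * r_mag**2 * omega)     # both 20.0
```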
Physical sciences
Classical mechanics
null
20494183
https://en.wikipedia.org/wiki/Avulsion%20%28river%29
Avulsion (river)
In sedimentary geology and fluvial geomorphology, avulsion is the rapid abandonment of a river channel and the formation of a new river channel. Avulsions occur as a result of channel slopes that are much less steep than the slope that the river could travel if it took a new course. Deltaic and net-depositional settings Avulsions are common in river deltas, where sediment deposits as the river enters the ocean and channel gradients are typically very small. This process is also known as delta switching. Deposition from the river results in the formation of an individual deltaic lobe that pushes out into the sea. An example of a deltaic lobe is the bird's-foot delta of the Mississippi River, pictured at right with its sediment plumes. As the deltaic lobe advances, the slope of the river channel becomes lower, as the river channel is longer but has the same change in elevation. As the slope of the river channel decreases, it becomes unstable for two reasons. First, water under the force of gravity will tend to flow in the most direct course downslope. If the river could breach its natural levees (i.e., during a flood), it would spill out onto a new course with a shorter route to the ocean, thereby obtaining a more stable steeper slope. Second, as its slope is reduced, the amount of shear stress on the bed will decrease, resulting in deposition of more sediment within the channel and thus raising of the channel bed relative to the floodplain. This will make it easier for the river to breach its levees and cut a new channel that enters the ocean at a steeper slope. When this avulsion occurs, the new channel carries sediment out to the ocean, building a new deltaic lobe. The abandoned delta eventually subsides. This process is also related to the distributary network of river channels that can be observed within a river delta. When the channel does this, some of its flow can remain in the abandoned channel. When these channel switching events happen repeatedly over time, a mature delta will gain a distributary network. Subsidence of the delta and/or sea-level rise can further cause backwater and deposition in the delta. This deposition fills the channels and leaves a geologic record of channel avulsion in sedimentary basins. On average, an avulsion will occur every time the bed of a river channel aggrades enough that the river channel is superelevated above the floodplain by one channel-depth. In this situation, enough hydraulic head is available that any breach of the natural levees will result in an avulsion. Erosional avulsions Rivers can also avulse due to the erosion of a new channel that creates a straighter path through the landscape. This can happen during large floods in situations in which the slope of the new channel is significantly greater than that of the old channel. Where the new channel's slope is about the same as the old channel's slope, a partial avulsion will occur in which both channels are occupied by flow. An example of an erosional avulsion is the 2006 avulsion of the Suncook River in New Hampshire, in which heavy rains caused flow levels to rise. The river level backed up behind an old mill dam, which produced a shallowly-sloping pool that overtopped a sand and gravel quarry, connected with a downstream section of channel, and cut a new shorter channel at 25–50 meters per hour. 
Sediment mobilised by this erosional avulsion produced a depositionally-forced meander cutoff further downstream by superelevating the bed around the meander bend to nearly the level of the floodplain. Another example is the Cheslatta River, once a small tributary of the Nechako River in British Columbia. In the 1950s the Cheslatta River was made to be the spillway of the then new Nechako Reservoir. The discharge of the spills far exceeds the original flow of the Cheslatta River, which has resulted in major erosion in the upper Cheslatta valley, with the scoured sediment being deposited in the lower valley. Large reservoir spills caused the lower Cheslatta River to avulse in 1961 and again in 1972, carving a new route to the Nechako River and depositing a fan of sediment called the Cheslatta Fan in the Nechako River. After 1972 a cofferdam was built to restore the river to its original course. Meander cutoffs An example of a minor avulsion is known as a meander cutoff, when a pronounced meander (hook) in a river is breached by a flow that connects the two closest parts of the hook to form a new channel. This occurs when the ratio between the channel slope and the potential slope after an avulsion is less than about 1/5. Occurrence Avulsion typically occurs during large floods which carry the power necessary to rapidly change the landscape. Dam removal could also lead to avulsion. Avulsions usually occur as a downstream to upstream process via head cutting erosion. If a bank of a current stream is breached a new trench will be cut into the existing floodplain. It either cuts through floodplain deposits or reoccupies an old channel. Avulsions have been investigated in deltas or coastal plain channels as a result of obstructions such as log-jams and possible tectonic influences.
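Purely as an illustration of the two rule-of-thumb criteria mentioned above (a channel-to-potential slope ratio of roughly 1/5 for a meander cutoff, and superelevation of about one channel depth for an avulsion), the sketch below evaluates them for hypothetical numbers; it is not a predictive model, and all inputs are invented.

```python
# Illustrative sketch only: applies the two rule-of-thumb criteria stated
# above to hypothetical numbers; real avulsion prediction is far more complex.

def cutoff_likely(channel_slope, potential_slope, ratio_threshold=0.2):
    # Meander cutoff favoured when existing slope / potential slope < ~1/5.
    return channel_slope / potential_slope < ratio_threshold

def avulsion_ripe(superelevation, channel_depth):
    # Avulsion expected, on average, once the bed is superelevated above
    # the floodplain by about one channel depth.
    return superelevation >= channel_depth

print(cutoff_likely(channel_slope=1e-5, potential_slope=8e-5))   # True
print(avulsion_ripe(superelevation=2.5, channel_depth=3.0))      # False (not yet)
```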
Physical sciences
Hydrology
Earth science
3401973
https://en.wikipedia.org/wiki/Abelisauridae
Abelisauridae
Abelisauridae (meaning "Abel's lizards") is a family (or clade) of ceratosaurian theropod dinosaurs. Abelisaurids thrived during the Cretaceous period, on the ancient southern supercontinent of Gondwana, and today their fossil remains are found on the modern continents of Africa and South America, as well as on the Indian subcontinent and the island of Madagascar. Isolated teeth were found in the Late Jurassic of Portugal, and the Late Cretaceous genera Tarascosaurus, Arcovenator and Caletodraco have been described in France. Abelisaurids possibly first appeared during the Jurassic period based on fossil records, and some genera survived until the end of the Mesozoic era, around . Like most theropods, abelisaurids were carnivorous bipeds. They were characterized by stocky hind limbs and extensive ornamentation of the skull bones, with grooves and pits. In many abelisaurids, such as Carnotaurus, the forelimbs are vestigial, the skull is shorter, and bony crests grow above the eyes. Most of the known abelisaurids would have been between 5 and 9 m (17 to 30 ft) in length, from snout to tip of tail, with a new and as yet unnamed specimen from northwestern Turkana in Kenya, Africa reaching a possible length of 11–12 m (36 to 39 ft). Before becoming well known, fragmentary abelisaurid remains were occasionally misidentified as possible South American tyrannosaurids. Description Abelisaurid hind limbs were more typical of ceratosaurs, with the astragalus and calcaneum (upper ankle bones) fused to each other and to the tibia, forming a tibiotarsus. The tibia was shorter than the femur, giving the hind limb stocky proportions. Three functional digits were on the foot (the second, third, and fourth), while the first digit, or hallux, did not contact the ground. Skull Although skull proportions varied, abelisaurid skulls were generally very tall and very short in length. In Carnotaurus, for example, the skull was nearly as tall as it was long. The premaxilla in abelisaurids was very tall, so the front of the snout was blunt, not tapered as seen in many other theropods. Two skull bones, the lacrimal and postorbital bones, projected into the eye socket from the front and back, nearly dividing it into two compartments. The eye would have been located in the upper compartment, which was tilted slightly outwards in Carnotaurus, perhaps providing some degree of binocular vision. The lacrimal and postorbital also met above the eye socket, to form a ridge or brow above the eye. Sculpturing is seen on many of the skull bones, in the form of long grooves, pits, and protrusions. Like other ceratosaurs, the frontal bones of the skull roof were fused together. Carnotaurines commonly had bony projections from the skull. Carnotaurus had two pronounced horns, projecting outward above the eyes, while its close relative Aucasaurus had smaller projections in the same area. Majungasaurus and Rajasaurus had a single bony horn or dome, projecting upwards from the skull. These projections, like the horns of many modern animals, might have been displayed for species recognition or intimidation. In Arcovenator, the dorsal margin of the postorbital (and probably also the lacrimal) is thickened dorsolaterally, forming a strong and rugose bony brow ridge rising above the level of the skull roof. Possibly, this rugose brow ridge supported a keratinous or scaly structure for displays. 
Fore limbs and hands Data for the abelisaurid fore limbs are known from Eoabelisaurus and the carnotaurines Aucasaurus, Carnotaurus, and Majungasaurus. All had small fore limbs, which seem to have been vestigial. The bones of the forearm (radius and ulna) were extremely short, only 25% of the length of the upper arm (humerus) in Carnotaurus and 33% in Aucasaurus. The entire arm was held straight, and the elbow joint was immobile. As is typical for ceratosaurs, the abelisaurid hand had four basic digits, but any similarity ends there. No wrist bones existed, with the four palm bones (metacarpals) attaching directly to the forearm. No phalanges (finger bones) were on the first or fourth digits, only one on the second digit and two on the third digit. These two external fingers were extremely short and immobile. Manual claws were very small in Eoabelisaurus, and totally absent in carnotaurines. More primitive relatives such as Noasaurus and Ceratosaurus had longer, mobile arms with fingers and claws. Paleobiologist Alexander O. Vargas suggested a major reason for the evolution towards vestigial fore limbs in the group was because of a genetic defect; the loss of function in HOXA11 and HOXD11, two genes that regulate the fore limbs' development. Distribution Abelisaurids are typically regarded as a Cretaceous period group. The earliest possible abelisaurid taxon is Eoabelisaurus mefi from the Jurassic period of Argentina, though other researchers either consider it as a ceratosaurid, an abelisauroid or its sister taxon outside abelisaurids. Indeterminate remains are also known from the Jurassic period of Madagascar and Tanzania. Abelisaurid remains are mainly known in the southern continents, which once made up the supercontinent of Gondwana. When first described in 1985, only Carnotaurus and Abelisaurus were known, both from the Late Cretaceous of South America. Abelisaurids were then located in Late Cretaceous India (Indosuchus and Rajasaurus) and Madagascar (Majungasaurus), which were closely connected for much of the Cretaceous. It was thought that the absence of abelisaurids from continental Africa indicated that the group evolved after the separation of Africa from Gondwana, around 100 million years ago. However, the discovery of Rugops and other abelisaurid material from the middle of the Cretaceous in northern Africa disproved this hypothesis. Mid-Cretaceous abelisaurids are now known from South America as well, showing that the group existed prior to the breakup of Gondwana. In 2014, the description of Arcovenator escotae from southern France provided the first indisputable evidence of the presence of Abelisaurids in Europe. Arcovenator presents strong similarities with the Madagascan Majungasaurus and Indian abelisaurids, but not with the South American forms. Arcovenator, Majungasaurus, and Indian forms are united in the new clade Majungasaurinae. Classification Paleontologists Jose Bonaparte and Fernando Novas coined the name Abelisauridae in 1985 when they described the eponymous Abelisaurus. The name is formed from the family name of Roberto Abel, who discovered Abelisaurus, and from the Greek word () meaning lizard. The very common suffix -idae is usually applied to zoological family names and is derived from the Greek suffix -ιδαι (-) meaning 'descendants'. Abelisauridae is a family in rank-based Linnaean taxonomy, within the infraorder Ceratosauria and the superfamily Abelisauroidea, which also contains the family Noasauridae. 
It has had several definitions in phylogenetic taxonomy. It was originally defined as a node-based taxon including Abelisaurus, Carnotaurus, their common ancestor, and all of its descendants. Later, it was redefined as a stem-based taxon, including all animals more closely related to Abelisaurus (or the more complete Carnotaurus) than to Noasaurus. The node-based definition would not include animals such as Rugops or Ilokelesia, which are thought to be more basal than Abelisaurus and would be included by a stem-based definition. Within the Abelisauridae is the subgroup Carnotaurinae, and among carnotaurines, Aucasaurus and Carnotaurus are united in Carnotaurini. Shared characteristics Complete skeletons have been described only for the most advanced abelisaurids (such as Carnotaurus and Aucasaurus), making establishment of defining features of the skeleton for the family as a whole more difficult. However, most are known from at least some skull bones, so known shared features come mainly from the skull. Many abelisaurid skull features are shared with carcharodontosaurids. These shared features, along with the fact that abelisaurids seem to have replaced carcharodontosaurids in South America, have led to suggestions that the two groups were related. However, no cladistic analysis has ever found such a relationship, and aside from the skull, abelisaurids and carcharodontosaurids are very different, more similar to ceratosaurs and allosauroids, respectively. Phylogeny Below is a cladogram generated by Tortosa et al. (2014) in the description of Arcovenator and creation of a new subfamily Majungasaurinae. Ilokelesia was originally described as a sister group to the Abelisauroidea. However, Sereno tentatively places it closer to Abelisaurus than to noasaurids, a result which agrees with several other recent analyses. If a stem-based definition is used, Ilokelesia and Rugops are therefore basal abelisaurids. However, as they are more basal than Abelisaurus, they are outside of the Abelisauridae if the node-based definition is adopted. Ekrixinatosaurus was also published in 2004, so it was not included in Sereno's analysis. However, an independent analysis, performed by Jorge Calvo and colleagues, shows it to be an abelisaurid. Some scientists include Xenotarsosaurus from Argentina and Compsosuchus from India as basal abelisaurids, while others consider them to be outside the Abelisauroidea. The French Genusaurus and Tarascosaurus have also been called abelisaurids but both are fragmentary and may be more basal ceratosaurians, though Tortosa et al. (2014) considered both to be distinct abelisaurids. Subsequent phylogenetic analyses recover Xenotarsosaurus and Tarascosaurus as an abelisaurid, but Genusaurus as either a noasaurid or an abelisaurid. With the description of Skorpiovenator in 2008, Canale et al. published another phylogenetic analysis focusing on the South American abelisaurids. In their results, they found that all South American forms, including Ilokelesia (except Abelisaurus), grouped together as a subclade of carnotaurines, which they named the Brachyrostra. In the same year Matthew T. Carrano and Scott D. Sampson published new large phylogenetic analysis of ceratosaurian. With the description of Eoabelisaurus, Diego Pol and Oliver W. M. Rauhut (2012) combined these analyses and added 10n new characters. The following cladogram follows their analysis. In the 2021 description of Llukalkan, the following consensus tree was recovered. 
Paleobiology Feeding Fossil teeth found amid the bones of a titanosaur from the Allen Formation of Argentina suggest that abelisaurids preyed upon or at least scavenged titanosaurs. Ontogeny and growth Studies of the abelisaurid Majungasaurus indicate that it was a much slower-growing dinosaur than other theropods, taking nearly 20 years to reach adult size. However, other mature abelisaurid specimens indicate that they generally reached a faster rate of maturation. The holotype of Aucasaurus had a minimum age of 11 years, the holotype of Niebla had a minimum age of 9 years, and MMCh-PV 69 had a minimum age of 14 years.
Biology and health sciences
Theropods
Animals
3406142
https://en.wikipedia.org/wiki/Relativity%20of%20simultaneity
Relativity of simultaneity
In physics, the relativity of simultaneity is the concept that distant simultaneity – whether two spatially separated events occur at the same time – is not absolute, but depends on the observer's reference frame. This possibility was raised by mathematician Henri Poincaré in 1900, and thereafter became a central idea in the special theory of relativity. Description According to the special theory of relativity introduced by Albert Einstein, it is impossible to say in an absolute sense that two distinct events occur at the same time if those events are separated in space. If one reference frame assigns precisely the same time to two events that are at different points in space, a reference frame that is moving relative to the first will generally assign different times to the two events (the only exception being when motion is exactly perpendicular to the line connecting the locations of both events). For example, a car crash in London and another in New York that appear to happen at the same time to an observer on Earth will appear to have occurred at slightly different times to an observer on an airplane flying between London and New York. Furthermore, if the two events cannot be causally connected, depending on the state of motion, the crash in London may appear to occur first in a given frame, and the New York crash may appear to occur first in another. However, if the events can be causally connected, precedence order is preserved in all frames of reference. History In 1892 and 1895, Hendrik Lorentz used a mathematical method called "local time" t' = t – v x/c2 for explaining the negative aether drift experiments. However, Lorentz gave no physical explanation of this effect. This was done by Henri Poincaré who already emphasized in 1898 the conventional nature of simultaneity and who argued that it is convenient to postulate the constancy of the speed of light in all directions. However, this paper did not contain any discussion of Lorentz's theory or the possible difference in defining simultaneity for observers in different states of motion. This was done in 1900, when Poincaré derived local time by assuming that the speed of light is invariant within the aether. Due to the "principle of relative motion", moving observers within the aether also assume that they are at rest and that the speed of light is constant in all directions (only to first order in v/c). Therefore, if they synchronize their clocks by using light signals, they will only consider the transit time for the signals, but not their motion in respect to the aether. So the moving clocks are not synchronous and do not indicate the "true" time. Poincaré calculated that this synchronization error corresponds to Lorentz's local time. In 1904, Poincaré emphasized the connection between the principle of relativity, "local time", and light speed invariance; however, the reasoning in that paper was presented in a qualitative and conjectural manner. Albert Einstein used a similar method in 1905 to derive the time transformation for all orders in v/c, i.e., the complete Lorentz transformation. Poincaré obtained the full transformation earlier in 1905 but in the papers of that year he did not mention his synchronization procedure. This derivation was completely based on light speed invariance and the relativity principle, so Einstein noted that for the electrodynamics of moving bodies the aether is superfluous. 
Thus, the separation into "true" and "local" times of Lorentz and Poincaré vanishes – all times are equally valid and therefore the relativity of length and time is a natural consequence. In 1908, Hermann Minkowski introduced the concept of a world line of a particle in his model of the cosmos called Minkowski space. In Minkowski's view, the naïve notion of velocity is replaced with rapidity, and the ordinary sense of simultaneity becomes dependent on hyperbolic orthogonality of spatial directions to the worldline associated to the rapidity. Then every inertial frame of reference has a rapidity and a simultaneous hyperplane. In 1990, Robert Goldblatt wrote Orthogonality and Spacetime Geometry, directly addressing the structure Minkowski had put in place for simultaneity. In 2006, Max Jammer, through Project MUSE, published Concepts of Simultaneity: from antiquity to Einstein and beyond. The book culminates in chapter 6, "The transition to the relativistic conception of simultaneity". Jammer indicates that Ernst Mach demythologized the absolute time of Newtonian physics. Naturally the mathematical notions preceded physical interpretation. For instance, conjugate diameters of conjugate hyperbolas are related as space and time. The principle of relativity can be expressed as the arbitrariness of which pair are taken to represent space and time in a plane. Thought experiments Einstein's train Einstein's version of the experiment presumed that one observer was sitting midway inside a speeding traincar and another was standing on a platform as the train moved past. As measured by the standing observer, the train is struck by two bolts of lightning simultaneously, but at different positions along the axis of train movement (back and front of the train car). In the inertial frame of the standing observer, there are three events which are spatially dislocated, but simultaneous: standing observer facing the moving observer (i.e., the center of the train), lightning striking the front of the train car, and lightning striking the back of the car. Since the events are placed along the axis of train movement, their time coordinates become projected to different time coordinates in the moving train's inertial frame. Events which occurred at space coordinates in the direction of train movement happen earlier than events at coordinates opposite to the direction of train movement. In the moving train's inertial frame, this means that lightning will strike the front of the train car before the two observers align (face each other). The train-and-platform A popular picture for understanding this idea is provided by a thought experiment similar to those suggested by Daniel Frost Comstock in 1910 and Einstein in 1917. It also consists of one observer midway inside a speeding traincar and another observer standing on a platform as the train moves past. A flash of light is given off at the center of the traincar just as the two observers pass each other. For the observer on board the train, the front and back of the traincar are at fixed distances from the light source and as such, according to this observer, the light will reach the front and back of the traincar at the same time. For the observer standing on the platform, on the other hand, the rear of the traincar is moving (catching up) toward the point at which the flash was given off, and the front of the traincar is moving away from it. 
As the speed of light is, according to the second postulate of special relativity, the same in all directions for all observers, the light headed for the back of the train will have less distance to cover than the light headed for the front. Thus, the flashes of light will strike the ends of the traincar at different times.

Spacetime diagrams
It may be helpful to visualize this situation using spacetime diagrams. For a given observer, the t-axis is defined to be a point traced out in time by the origin of the spatial coordinate x, and is drawn vertically. The x-axis is defined as the set of all points in space at the time t = 0, and is drawn horizontally. The statement that the speed of light is the same for all observers is represented by drawing a light ray as a 45° line, regardless of the speed of the source relative to the speed of the observer. In the first diagram, the two ends of the train are drawn as grey lines. Because the ends of the train are stationary with respect to the observer on the train, these lines are just vertical lines, showing their motion through time but not space. The flash of light is shown as the 45° red lines. The points at which the two light flashes hit the ends of the train are at the same level in the diagram. This means that the events are simultaneous. In the second diagram, the two ends of the train, which is moving to the right, are shown by parallel lines. The flash of light is given off at a point exactly halfway between the two ends of the train, and its rays again form two 45° lines, expressing the constancy of the speed of light. In this picture, however, the points at which the light flashes hit the ends of the train are not at the same level; they are not simultaneous.

Lorentz transformation
The relativity of simultaneity can be demonstrated using the Lorentz transformation, which relates the coordinates used by one observer to coordinates used by another in uniform relative motion with respect to the first. Assume that the first observer uses coordinates labeled t, x, y, and z, while the second observer uses coordinates labeled t′, x′, y′, and z′. Now suppose that the first observer sees the second observer moving in the x-direction at a velocity v, and suppose that the observers' coordinate axes are parallel and that they have the same origin. Then the Lorentz transformation expresses how the coordinates are related:
\[t' = \gamma \left(t - \frac{vx}{c^2}\right), \qquad x' = \gamma (x - vt), \qquad y' = y, \qquad z' = z,\]
where \(\gamma = 1/\sqrt{1 - v^2/c^2}\) and c is the speed of light. If two events happen at the same time in the frame of the first observer, they will have identical values of the t-coordinate. However, if they have different values of the x-coordinate (different positions in the x-direction), they will have different values of the t′ coordinate, so they will happen at different times in that frame. The term that accounts for the failure of absolute simultaneity is the vx/c² term. The equation t′ = constant defines a "line of simultaneity" in the (x′, t′) coordinate system for the second (moving) observer, just as the equation t = constant defines the "line of simultaneity" for the first (stationary) observer in the (x, t) coordinate system. From the above equations for the Lorentz transform it can be seen that t′ is constant if and only if t − vx/c² = constant. Thus the set of points that makes t constant is different from the set of points that makes t′ constant. That is, the set of events which are regarded as simultaneous depends on the frame of reference used to make the comparison.
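As a concrete check of the transformation above, the following sketch (with an assumed relative speed of 0.5c and two arbitrarily chosen events) shows that two events sharing the same t but different x receive different t′ in the moving frame, i.e., they are no longer simultaneous.

```python
import math

c = 299_792_458.0          # speed of light in m/s

def lorentz(t, x, v):
    """Transform (t, x) from the rest frame to a frame moving at speed v
    along the x-axis: t' = gamma (t - v x / c^2), x' = gamma (x - v t)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

v = 0.5 * c                 # assumed relative speed of the second observer

# Two events simultaneous in the first frame (t = 0) but 300 km apart in x.
event_A = (0.0, 0.0)
event_B = (0.0, 300_000.0)

tA, xA = lorentz(*event_A, v)
tB, xB = lorentz(*event_B, v)
print(tA, tB)   # tA = 0, tB is negative (~-5.8e-4 s): no longer simultaneous
```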
Graphically, this can be represented on a spacetime diagram by the fact that a plot of the set of points regarded as simultaneous generates a line which depends on the observer. In the spacetime diagram, the dashed line represents a set of points considered to be simultaneous with the origin by an observer moving with a velocity v of one-quarter of the speed of light. The dotted horizontal line represents the set of points regarded as simultaneous with the origin by a stationary observer. This diagram is drawn using the (x, t) coordinates of the stationary observer, and is scaled so that the speed of light is one, i.e., so that a ray of light would be represented by a line with a 45° angle from the x axis. From our previous analysis, given that v = 0.25 and c = 1, the equation of the dashed line of simultaneity is t − 0.25x = 0 and with v = 0, the equation of the dotted line of simultaneity is t = 0. In general the second observer traces out a worldline in the spacetime of the first observer described by t = x/v, and the set of simultaneous events for the second observer (at the origin) is described by the line t = vx. Note the multiplicative inverse relation of the slopes of the worldline and simultaneous events, in accord with the principle of hyperbolic orthogonality.

Accelerated observers
The Lorentz-transform calculation above uses a definition of extended-simultaneity (i.e. of when and where events occur at which you were not present) that might be referred to as the co-moving or "tangent free-float-frame" definition. This definition is naturally extrapolated to events in gravitationally-curved spacetimes, and to accelerated observers, through use of a radar-time/distance definition that (unlike the tangent free-float-frame definition for accelerated frames) assigns a unique time and position to any event. The radar-time definition of extended-simultaneity further facilitates visualization of the way that acceleration curves spacetime for travelers in the absence of any gravitating objects. This is illustrated in the figure at right, which shows radar time/position isocontours for events in flat spacetime as experienced by a traveler (red trajectory) taking a constant proper-acceleration roundtrip. One caveat of this approach is that the time and place of remote events are not fully defined until light from such an event is able to reach our traveler.
Physical sciences
Theory of relativity
Physics
1791751
https://en.wikipedia.org/wiki/Somali%20plate
Somali plate
The Somali plate is a minor tectonic plate which straddles the Equator in the Eastern Hemisphere. It is currently in the process of separating from the African plate along the East African Rift Valley. It is approximately centered on the island of Madagascar and includes about half of the east coast of Africa, from the Gulf of Aden in the north through the East African Rift Valley. The southern boundary with the Nubian–African plate is a diffuse plate boundary consisting of the Lwandle plate.

Geology
The Arabian plate diverges to the north, forming the Gulf of Aden. The Indian plate, Australian plate, and Antarctic plate all diverge from the Somali plate, forming the eastern Indian Ocean. The Somali-Indian boundary spreading ridge is known as the Carlsberg Ridge. The Somali-Australian boundary spreading ridge is known as the Central Indian Ridge. The Somali-Antarctic boundary spreading ridge is known as the Southwest Indian Ridge. The western boundary with the African plate is diverging to form the East African Rift, which stretches south from the triple junction in the Afar depression. The southern boundary with the Nubian–African plate is a diffuse plate boundary with the Lwandle plate. The Seychelles and the Mascarene Plateau are located northeast of Madagascar.

Tectonic history
During the Kibaran orogeny, the Tanzanian and Congo cratons were fused. From 1000 to 600 Ma the supercontinent Gondwana was formed and the Pan-African orogeny sutured the Tanzanian and Kalahari cratons. The rifting of Gondwana occurred from 190 Ma to 47 Ma, separating Madagascar from the eastern coast of Africa and placing the Seychelles/Mascarene Plateau northeast of Madagascar. The rifting of the Red Sea started around and the first rifting occurred in the northern West African Rift System around .
Physical sciences
Tectonic plates
Earth science
1792150
https://en.wikipedia.org/wiki/Styracosaurus
Styracosaurus
Styracosaurus ( ; meaning "spiked lizard" from the Ancient Greek / "spike at the butt-end of a spear-shaft" and / "lizard") is an extinct genus of herbivorous ceratopsian dinosaur from the Late Cretaceous (Campanian stage) of North America. It had four to six long parietal spikes extending from its neck frill, a smaller jugal horn on each of its cheeks, and a single horn protruding from its nose, which may have been up to long and wide. The function or functions of the horns and frills have been debated for many years. Styracosaurus was a relatively large dinosaur, reaching lengths of and weighing about . It stood about tall. Styracosaurus possessed four short legs and a bulky body. Its tail was rather short. The skull had a beak and shearing cheek teeth arranged in continuous dental batteries, suggesting that the animal sliced up plants. Like other ceratopsians, this dinosaur may have been a herd animal, travelling in large groups, as suggested by bone beds. Named by Lawrence Lambe in 1913, Styracosaurus is a member of the Centrosaurinae. One species, S. albertensis, is currently assigned to Styracosaurus. Another species, S. ovatus, named in 1930 by Charles Gilmore was reassigned to a new genus, Rubeosaurus, by Andrew McDonald and Jack Horner in 2010, but it has been considered either its own genus or a species of Styracosaurus (or even a specimen of S. albertensis) again, since 2020. Discoveries and species The first fossil remains of Styracosaurus were collected in Alberta, Canada by C. M. Sternberg (from an area now known as Dinosaur Provincial Park, in a formation now called the Dinosaur Park Formation) and named by Lawrence Lambe in 1913. This quarry was revisited in 1935 by a Royal Ontario Museum crew who found the missing lower jaws and most of the skeleton. These fossils indicate that S. albertensis was around in length and stood about high at the hips. An unusual feature of this first skull is that the smallest frill spike on the left side is partially overlapped at its base by the next spike. It appears that the frill suffered a break at this point in life and was shortened by about . The normal shape of this area is unknown because the corresponding area of the right side of the frill was not recovered. Barnum Brown and crew, working for the American Museum of Natural History in New York, collected a nearly complete articulated skeleton with a partial skull in 1915. These fossils were also found in the Dinosaur Park Formation, near Steveville, Alberta. Brown and Erich Maren Schlaikjer compared the finds, and, though they allowed that both specimens were from the same general locality and geological formation, they considered the specimen sufficiently distinct from the holotype to warrant erecting a new species, and described the fossils as Styracosaurus parksi, named in honor of William Parks. Among the differences between the specimens cited by Brown and Schlaikjer were a cheekbone quite different from that of S. albertensis, and smaller tail vertebrae. S. parksi also had a more robust jaw, a shorter dentary, and the frill differed in shape from that of the type species. However, much of the skull consisted of plaster reconstruction, and the original 1937 paper did not illustrate the actual skull bones. It is now accepted as a specimen of S. albertensis. In the summer of 2006, Darren Tanke of the Royal Tyrrell Museum of Palaeontology in Drumheller, Alberta relocated the long lost S. parksi site. 
Pieces of the skull, evidently abandoned by the 1915 crew, were found in the quarry. These were collected and it is hoped more pieces will be found, perhaps enough to warrant a redescription of the skull and test whether S. albertensis and S. parksi are the same. The Tyrrell Museum has also collected several partial Styracosaurus skulls. At least one confirmed bone bed (bonebed 42) in Dinosaur Provincial Park has also been explored (other proposed Styracosaurus bone beds instead have fossils from a mix of animals, and nondiagnostic ceratopsian remains). Bonebed 42 is known to contain numerous pieces of skulls such as horncores, jaws and frill pieces. Several other species which were assigned to Styracosaurus have since been assigned to other genera. S. sphenocerus, described by Edward Drinker Cope in 1890 as a species of Monoclonius and based on a nasal bone with a broken Styracosaurus-like straight nose horn, was attributed to Styracosaurus in 1915. "S. makeli", mentioned informally by amateur paleontologists Stephen and Sylvia Czerkas in 1990 in a caption to an illustration, is an early name for Einiosaurus. "S. borealis" is an early informal name for S. parksi. Styracosaurus ovatus A species, Styracosaurus ovatus, from the Two Medicine Formation of Montana, was described by Gilmore in 1930, named for a partial parietal under the accession number USNM 11869. Unlike S. albertensis, the longest parietal spikes converge towards their tips, instead of projecting parallel behind the frill. There also may only have been two sets of spikes on each side of the frill, instead of three. As estimated from the preserved material, the spikes are much shorter than in S. albertensis, with the longest only long. An additional specimen from the Two Medicine Formation was referred to Styracosaurus ovatus in 2010 by Andrew McDonald and John Horner, having been found earlier in 1986 but not described until that year. Known from a premaxilla, the nasal bones and their horncore, a postorbital bone and a parietal, the specimen Museum of the Rockies 492 was considered to share the medially-converging parietal spikes with the only other specimen of S. ovatus, the holotype. Following this additional material, the species was added to a phylogenetic analysis where it was found to group not with Styracosaurus albertensis, but in a clade including Pachyrhinosaurus, Einiosaurus and Achelousaurus, and therefore McDonald and Horner gave the species the new genus name Rubeosaurus. Another specimen, the partial immature skull USNM 14768, which was earlier referred to the undiagnostic genus Brachyceratops, was also referred to Rubeosaurus ovatus by McDonald and colleagues in 2011. While the medial spikes of USNM 14768 were too incomplete to show if it shared the convergence seen in other R. ovatus specimens, it was considered to be the same species as it was also found in the older deposits of the Two Medicine Formation, and had a unique combination of parietal features only shared completely with the other specimens of the species. 
Though it was originally found to nest closer to Einiosaurus and later centrosaurines by McDonald and colleagues in both 2010 and 2011, revisions of phylogenetic analyses in 2013 by Scott Sampson and colleagues, and further expansions and modifications of the same dataset, instead placed Rubeosaurus ovatus as the sister taxon of Styracosaurus albertensis, as had been originally considered when the species was first named, though the two species were not moved into the same genus as originally named. A review of the variability within known Styracosaurus specimens by Robert Holmes and colleagues in 2020 found that USNM 11869, the type specimen of Rubeosaurus ovatus, fell within the variation seen in other specimens from the older deposits of the Dinosaur Park Formation S. albertensis is known from. While no phylogenetic analysis was conducted, previous results of updated analyses showed that Rubeosaurus ovatus and Styracosaurus albertensis were not distantly related, so the justification for naming the genus Rubeosaurus was not present, and the variability in Styracosaurus albertensis specimens also did not support the distinction of Styracosaurus ovatus, with Holmes et al. considering the latter a junior synonym of the former. The conclusion of Holmes and colleagues was supported by a later 2020 study authored by Caleb Brown, Holmes, and Philip J. Currie, who described a new juvenile Styracosaurus specimen and determined that there were several specimens that are otherwise consistent with S. albertensis have been found with inward angled midline frill spikes, though not the same degree as S. ovatus. Though they considered that S. ovatus represented an extreme end of the S. albertensis variation not only in morphology but also as it was stratigraphically younger, they cautioned that at the least the current diagnosis of S. ovatus was inadequate. Later in 2020, the supposed specimen MOR 492 was redescribed by John Wilson and colleagues, who reinterpreted its anatomy in a way that contrasted McDonald and Horner who referred it to Styracosaurus ovatus. While Wilson et al. agreed that the close relationship between S. albertensis and S. ovatus meant that the genus name Rubeosaurus should be abandoned, they cautioned against synonymization. MOR 492 was moved into its own taxon, Stellasaurus ancellae, which nested alongside Einiosaurus, Achelousaurus and Pachyrhinosaurus in a similar result to McDonald and Horner when the specimen was included as part of the S. ovatus hypodigm. Wilson and colleagues also suggested that the new taxon may have been ancestral to the later forms it was found related to, suggesting that gradual evolution through anagenesis could be the reason for the intermediate morphologies of many specimens and species found in the Two Medicine Formation, possibly also including S. ovatus. As the holotype of Styracosaurus ovatus was found in deposits much younger than the remainder of Styracosaurus specimens, and was considered to have the most extreme morphology while still falling within plausible variation as Holmes et al. had concluded, Wilson and colleagues advised that S. ovatus was retained as a separate, probably directly descended from S. albertensis, species of Styracosaurus. The immature specimen USNM 14768, referred to S. ovatus by McDonald et al. in 2011, was considered too immature to be diagnostic, and thus S. ovatus was limited to its holotype USNM 11869. Description Individuals of the genus Styracosaurus were approximately long as adults and weighed about . 
The skull was massive, with a large nostril, a tall straight nose horn, and a parietal squamosal frill (a neck frill) crowned with at least four large spikes. Each of the four longest frill spines was comparable in length to the nose horn, at long. The nasal horn was estimated by Lambe at long in the type specimen, but the tip had not been preserved. Based on other nasal horn cores from Styracosaurus and Centrosaurus, this horn may have come to a more rounded point at around half of that length. Aside from the large nasal horn and four long frill spikes, the cranial ornamentation was variable. Some individuals had small hook-like projections and knobs at the posterior margin of the frill, similar to but smaller than those in Centrosaurus. Others had less prominent tabs. Some, like the type individual, had a third pair of long frill spikes. Others had much smaller projections, and small points are found on the side margins of some but not all specimens. Modest pyramid-shaped brow horns were present in subadults, but were replaced by pits in adults. Like most ceratopsids, Styracosaurus had large fenestrae (skull openings) in its frill. The front of the mouth had a toothless beak. The bulky body of Styracosaurus resembled that of a rhinoceros. It had powerful shoulders which may have been useful in intraspecies combat. Styracosaurus had a relatively short tail. Each toe bore a hooflike ungual which was sheathed in horn. Various limb positions have been proposed for Styracosaurus and ceratopsids in general, including forelegs which were held underneath the body, or, alternatively, held in a sprawling position. The most recent work has put forward an intermediate crouched position as most likely. Classification Styracosaurus is a member of the Centrosaurinae. Other members of the clade include Centrosaurus (from which the group takes its name), Pachyrhinosaurus, Avaceratops, Einiosaurus, Albertaceratops, Achelousaurus, Brachyceratops, and Monoclonius, although these last two are dubious. Because of the variation between species and even individual specimens of centrosaurines, there has been much debate over which genera and species are valid, particularly whether Centrosaurus and/or Monoclonius are valid genera, undiagnosable, or possibly members of the opposite sex. In 1996, Peter Dodson found enough variation between Centrosaurus, Styracosaurus, and Monoclonius to warrant separate genera, and that Styracosaurus resembled Centrosaurus more closely than either resembled Monoclonius. Dodson also believed one species of Monoclonius, M. nasicornis, may actually have been a female Styracosaurus. However, most other researchers have not accepted Monoclonius nasicornis as a female Styracosaurus, instead regarding it as a synonym of Centrosaurus apertus. While sexual dimorphism has been proposed for an earlier ceratopsian, Protoceratops, there is no firm evidence for sexual dimorphism in any ceratopsid. The cladogram depicted below represents a phylogenetic analysis by Chiba et al. (2017): Origins and evolution The evolutionary origins of Styracosaurus were not understood for many years because fossil evidence for early ceratopsians was sparse. The discovery of Protoceratops, in 1922, shed light on early ceratopsid relationships, but several decades passed before additional finds filled in more of the blanks. 
Fresh discoveries in the late 1990s and 2000s, including Zuniceratops, the earliest known ceratopsian with brow horns, and Yinlong, the first-known Jurassic ceratopsian, indicate what the ancestors of Styracosaurus may have looked like. These new discoveries have been important in illuminating the origins of horned dinosaurs in general, and suggest that the group originated during the Jurassic in Asia, with the appearance of true horned ceratopsians occurring by the beginning of the late Cretaceous in North America. Goodwin and colleagues proposed in 1992 that Styracosaurus was part of the lineage leading to Einiosaurus, Achelousaurus and Pachyrhinosaurus. This was based on a series of fossil skulls from the Two Medicine Formation of Montana. The position of Styracosaurus in this lineage is now equivocal, as the remains that were thought to represent Styracosaurus have been transferred to the genus Rubeosaurus. Styracosaurus is known from a higher position in the formation (relating specifically to its own genus) than the closely related Centrosaurus, suggesting that Styracosaurus displaced Centrosaurus as the environment changed over time and/or dimension. It has been suggested that Styracosaurus albertensis is a direct descendant of Centrosaurus (C. apertus or C. nasicornis), and that it in turn evolved directly into the slightly later species Rubeosaurus ovatus. Subtle changes can be traced in the arrangement of the horns through this lineage, leading from Rubeosaurus to Einiosaurus, to Achelousaurus and Pachyrhinosaurus. However, the lineage may not be a simple, straight line, as a pachyrhinosaur-like species has been reported from the same time and place as Styracosaurus albertensis. In 2020, during the description of Stellasaurus, Wilson et al. found Styracosaurus (including S. ovatus) to be the earliest member of a single evolutionary lineage that eventually developed into Stellasaurus, Achelousaurus, and Pachyrhinosaurus. Paleobiology Styracosaurus and other horned dinosaurs are often depicted in popular culture as herd animals. A bonebed composed of Styracosaurus remains is known from the Dinosaur Park Formation of Alberta, about halfway up the formation. This bonebed is associated with different types of river deposits. The mass deaths may have been a result of otherwise non-herding animals congregating around a waterhole in a period of drought, with evidence suggesting the environment may have been seasonal and semi-arid. Paleontologists Gregory Paul and Per Christiansen proposed that large ceratopsians such as Styracosaurus were able to run faster than an elephant, based on possible ceratopsian trackways which did not exhibit signs of sprawling forelimbs. Dentition and diet Styracosaurs were herbivorous dinosaurs; they probably fed mostly on low growth because of the position of the head. They may, however, have been able to knock down taller plants with their horns, beak, and bulk. The jaws were tipped with a deep, narrow beak, believed to have been better at grasping and plucking than biting. Ceratopsid teeth, including those of Styracosaurus, were arranged in groups called batteries. Older teeth on top were continually replaced by the teeth underneath them. Unlike hadrosaurids, which also had dental batteries, ceratopsid teeth sliced but did not grind. Some scientists have suggested that ceratopsids like Styracosaurus ate palms and cycads, while others have suggested ferns. 
Dodson has proposed that Late Cretaceous ceratopsians may have knocked down angiosperm trees and then sheared off leaves and twigs. Horns and frill The large nasal horns and frills of Styracosaurus are among the most distinctive facial adornments of all dinosaurs. Their function has been the subject of debate since the first horned dinosaurs were discovered. Early in the 20th century, paleontologist R. S. Lull proposed that the frills of ceratopsian dinosaurs acted as anchor points for their jaw muscles. He later noted that for Styracosaurus, the spikes would have given it a formidable appearance. In 1996, Dodson supported the idea of muscle attachments in part and created detailed diagrams of possible muscle attachments in the frills of Styracosaurus and Chasmosaurus, but did not subscribe to the idea that they completely filled in the fenestrae. C. A. Forster, however, found no evidence of large muscle attachments on the frill bones. It was long believed that ceratopsians like Styracosaurus used their frills and horns in defence against the large predatory dinosaurs of the time. Although pitting, holes, lesions, and other damage on ceratopsid skulls are often attributed to horn damage in combat, a 2006 study found no evidence for horn thrust injuries causing these forms of damage (for example, there is no evidence of infection or healing). Instead, non-pathological bone resorption, or unknown bone diseases, are suggested as causes. However, a newer study compared incidence rates of skull lesions in Triceratops and Centrosaurus and showed that these were consistent with Triceratops using its horns in combat and the frill being adapted as a protective structure, while lower pathology rates in Centrosaurus may indicate visual rather than physical use of cranial ornamentation, or a form of combat focused on the body rather than the head; as Centrosaurus was more closely related to Styracosaurus and both genera had long nasal horns, the results for this genus would be more applicable for Styracosaurus. The researchers also concluded that the damage found on the specimens in the study was often too localized to be caused by bone disease. The large frill on Styracosaurus and related genera also may have helped to increase body area to regulate body temperature, like the ears of the modern elephant. A similar theory has been proposed regarding the plates of Stegosaurus, although this use alone would not account for the bizarre and extravagant variation seen in different members of the Ceratopsidae. This observation is highly suggestive of what is now believed to be the primary function, display. The theory of frill use in sexual display was first proposed in 1961 by Davitashvili. This theory has gained increasing acceptance. Evidence that visual display was important, either in courtship or in other social behavior, can be seen in the fact that horned dinosaurs differ markedly in their adornments, making each species highly distinctive. Also, modern living creatures with such displays of horns and adornments use them in similar behavior. The use of the exaggerated structures in dinosaurs as species identification has been questioned, as no such function exists in vast majority of modern species of tetrapods (terrestrial vertebrates). A skull discovered in 2015 from a Styracosaurus indicates that individual variation was likely commonplace in the genus. The asymmetrical nature of the horns in the specimen has been compared to deer, which often have asymmetrical antlers in various individuals. 
The study carried out may also indicate that the genus Rubeosaurus may be synonymous with Styracosaurus as a result. Paleoecology Styracosaurus is known from the Dinosaur Park Formation, and was a member of a diverse and well-documented fauna of prehistoric animals that included horned relatives such as Centrosaurus and Chasmosaurus, duckbills such as Prosaurolophus, Lambeosaurus, Gryposaurus, Corythosaurus, and Parasaurolophus, ornithomimids Struthiomimus, tyrannosaurids Gorgosaurus, and Daspletosaurus, and armored Edmontonia and Euoplocephalus. The Dinosaur Park Formation is interpreted as a low-relief setting of rivers and floodplains that became more swampy and influenced by marine conditions over time as the Western Interior Seaway transgressed westward. The climate was warmer than present-day Alberta, without frost, but with wetter and drier seasons. Conifers were apparently the dominant canopy plants, with an understory of ferns, tree ferns, and angiosperms. In the Two Medicine Formation, dinosaurs that lived alongside Styracosaurus ovatus included the basal ornithopod Orodromeus, hadrosaurids (such as Hypacrosaurus, Maiasaura, and Prosaurolophus), the centrosaurines Brachyceratops and Einiosaurus, the leptoceratopsid Cerasinops, the ankylosaurs Edmontonia and Euoplocephalus, the tyrannosaurid Daspletosaurus (which appears to have been a specialist of preying on ceratopsians), as well as the smaller theropods Bambiraptor, Chirostenotes, Troodon, and Avisaurus.
Biology and health sciences
Ornithischians
Animals
1792302
https://en.wikipedia.org/wiki/Pachyrhinosaurus
Pachyrhinosaurus
Pachyrhinosaurus (from Ancient Greek παχύς (pachys), thick; ῥίς (rhis), nose; and σαῦρος (sauros), lizard) is an extinct genus of centrosaurine ceratopsid dinosaur from the Late Cretaceous period of North America. The first examples were discovered by Charles M. Sternberg in Alberta, Canada, in 1946, and named in 1950. Over a dozen partial skulls and a large assortment of other fossils from various species have been found in Alberta and Alaska. A great number were not available for study until the 1980s, resulting in a relatively recent increase in interest in Pachyrhinosaurus. Three species have been identified. P. lakustai, from the Wapiti Formation, the bonebed horizon of which is roughly equivalent in age to the upper Bearpaw and lower Horseshoe Canyon Formations, is known to have existed from about 73.5–72.5 million years ago. P. canadensis is younger, known from the lower Horseshoe Canyon Formation, about 71.5–71 Ma ago, and the St. Mary River Formation. Fossils of the youngest species, P. perotorum, have been recovered from the Prince Creek Formation of Alaska, and date to 70–69 Ma ago. The presence of three known species makes this genus the most speciose among the centrosaurines. Discovery and species Pachyrhinosaurus canadensis was described in 1950 by Charles Mortram Sternberg based on the holotype incomplete skull NMC 8867, and the paratype incomplete skull NMC 8866, which included the anterior part of the skull but was lacking the right lower mandible, and the "beak". These skulls were collected in 1945 and 1946 from the sandy clay of the Horseshoe Canyon Formation in Alberta, Canada. In the years to come, additional material would be recovered at the Scabby Butte locality of the St. Mary River Formation near Lethbridge, Alberta, from terrestrial sediments considered to be between 74 and 66 million years old. These were among the first dinosaur sites found in the province, in the 1880s. The significance of these discoveries was not understood until shortly after World War II when preliminary excavations were conducted. Another Pachyrhinosaurus skull was taken out of the Scabby Butte locality in 1955, and then in 1957 Wann Langston Jr. and a small crew excavated additional pachyrhinosaur remains. The University of Calgary has plans to reopen this important site some day as a field school for university-level paleontology students. Several specimens, NMC 21863, NMC 21864, and NMC 10669, assigned in 1975 by W. Langston Jr. to Pachyrhinosaurus, were also recovered at the Scabby Butte locality. Another Pachyrhinosaurus bonebed, on the Wapiti River south of Wembley in northwestern Alberta, was worked briefly by staff of the Royal Tyrrell Museum in the late 1980s but is now worked each summer (since 2015) by the Philip J. Currie Dinosaur Museum in Wembley, Alberta. Material from this site appears referable to Pachyrhinosaurus canadensis. In 1974, Grande Prairie, Alberta science teacher Al Lakusta found a large bonebed along Pipestone Creek in Alberta. When the area was finally excavated between 1986 and 1989 by staff and volunteers of the Royal Tyrrell Museum of Palaeontology, paleontologists discovered an exceptionally large and dense concentration of bones—up to 100 per square metre, with a total of 3,500 bones and 14 skulls. This was apparently the site of a mass mortality, perhaps a failed attempt to cross a river during a flood. Found amongst the fossils were the skeletons of four distinct age groups ranging from juveniles to full-grown dinosaurs, indicating that Pachyrhinosaurus cared for its young.
The adult skulls had both convex and concave bosses as well as unicorn-style horns on the parietal bone just behind their eyes. The concave boss types might be related to erosion only and not reflect male/female differences. In 2008, a detailed monograph describing the skull of the Pipestone Creek pachyrhinosaur, penned by Philip J. Currie, Wann Langston Jr., and Darren Tanke, classified the specimen as a second species of Pachyrhinosaurus, named P. lakustai after its discoverer. In 2013, Fiorillo et al. described a new specimen, an incomplete nasal bone attributable to Pachyrhinosaurus perotorum, which was collected from the Kikak-Tegoseak Quarry on the Colville River in Alaska. Fiorillo et al. named this unique northern Alaskan species after the Texas oil billionaire and benefactor, Ross Perot. This bone, designated DMNH 21460, belongs to an immature individual. This discovery expands the known age profile of this dinosaur genus from this particular site. The specimen has nasal ornamentation that is dorsally enlarged, representing an intermediate stage of growth. Of note, the authors pointed out that the posterior part of the nasal shows evidence for "a degree of integument complexity not previously recognized in other species" of Pachyrhinosaurus. It was determined that the dorsal surface of the nasal boss bore a thick, cornified pad and sheath. Description Size estimates for the largest Pachyrhinosaurus species, P. canadensis, indicate lengths of and a weight of . The other species, P. lakustai and P. perotorum, have been estimated by Greg Paul at in length and in weight. They were herbivorous and possessed strong cheek teeth to help them chew tough, fibrous plants. Instead of horns, their skulls bore massive, flattened bosses: a large boss over the nose and a smaller one over the eyes. A prominent pair of horns grew from the frill and extended upwards. The skull also bore several smaller horns or ornaments that varied between individuals and between species. In P. canadensis and P. perotorum, the bosses over the nose and eyes nearly grew together, and were separated only by a narrow groove. In P. lakustai, the two bosses were separated by a wide gap. In P. canadensis and P. lakustai, the frill bore two additional small, curved, backward-pointed horns. P. perotorum was thought to have two unique, flattened horns which projected forward and down from the top edge of the frill, but in 2019 it was shown that these had been inaccurately reconstructed and that it instead had the same backward-pointed horns as its sister species. Various ornaments of the nasal boss have also been used to distinguish between different species of Pachyrhinosaurus. Both P. lakustai and P. perotorum bore a jagged, comb-like extension at the tip of the boss which was missing in P. canadensis. P. perotorum was unique in having a narrow dome in the middle of the back portion of the nasal boss, and P. lakustai had a pommel-like structure projecting from the front of the boss (the boss of P. canadensis was mainly flat on top and rounded). P. lakustai bore another comb-like horn arising from the middle of the frill behind the eyes. Classification The cladogram below shows the phylogenetic position of all currently known Pachyrhinosaurus species following Chiba et al. (2017): Paleobiology Growth rates During the first few years of development, P. perotorum shows extremely rapid growth. When the animals were one year old, they had already reached 28% of their adult body size.
By age two, they were almost half the size of a mature adult. However, the rate of growth slows considerably after that, and maximum size is not fully attained until about age twenty. The development of characteristics useful in sexual selection, including competition between males, such as pronounced nasal bosses, occurred at approximately nine years of age. This presumably corresponds to the age of sexual maturity. Unlike other Pachyrhinosaurus species, P. perotorum shows highly conspicuous growth banding in the bones, indicating retarded growth during the winter. This is perhaps not surprising, considering that P. perotorum experienced much harsher winters than southerly species within the genus. P. lakustai does not show growth banding early in ontogeny in the specimens that have been examined. However, growth bands are weakly expressed later in ontogeny. This probably indicates rapid growth in youth, followed by gradually decreasing growth rates as the animal neared adulthood. The growth curve of the animal would therefore be somewhat asymptotic, unlike the linear growth found in many ectothermic animals. The development of characteristics useful in sexual selection, including competition between males, such as pronounced nasal bosses, occurred when the dinosaur was roughly 73% the size of a full-grown adult. The age of sexual maturity is unknown. Due to the lack of conspicuous growth banding, more detailed analyses of P. lakustai growth rates cannot be performed. Palaeopathology P. perotorum had very low rates of palaeopathologies compared to ceratopsians that inhabited lower latitudes, suggesting that the high latitude environment of Alaska did not impose unique hardships on the species. Paleoecology St. Mary River Formation Habitat The St. Mary River Formation has not undergone a definitive radiometric dating, however, the available stratigraphic correlation has shown that this formation was deposited between 74 and 66 million years ago, during the Campanian and the late Maastrichtian, during the final regression of the mid-continental Bearpaw Seaway. It ranges from as far south as Glacier County, Montana to as far north as the Little Bow River in Alberta. The St. Mary River Formation is part of the Western Canadian Sedimentary Basin in southwestern Alberta, which extends from the Rocky Mountains in the west to the Canadian Shield in the east. It is laterally equivalent to the Horseshoe Canyon Formation. The region where dinosaurs lived was bounded by mountains to the west, and included ancient channels, small freshwater ponds, streams, and floodplains. Paleofauna Pachyrhinosaurus shared its paleoenvironment with other dinosaurs, such as the ceratopsians Anchiceratops and Montanoceratops cerorhynchus, the armored nodosaur Edmontonia longiceps, the duckbilled hadrosaur Edmontosaurus regalis, the theropods Saurornitholestes and Troodon, possibly the ornithopod Thescelosaurus, and the tyrannosaurid Albertosaurus, which was likely the apex predator in its ecosystem. Vertebrates present in the St. Mary River Formation at the time of Pachyrhinosaurus included the actinopterygian fishes Amia fragosa, Lepisosteus, Belonostomus, Paralbula casei, and Platacodon nanus, the mosasaur Plioplatecarpus, the turtle Boremys and the diapsid reptile Champsosaurus. A fair number of mammals lived in this region, which included Turgidodon russelli, Cimolestes, Didelphodon, Leptalestes, Cimolodon nitidus, and Paracimexomys propriscus. 
Non-vertebrates in this ecosystem included mollusks, the oyster Crassostrea wyomingensis, the small clam Anomia, and the snail Thiara. Flora of the region include the aquatic angiosperm Trapago angulata, the amphibious heterosporous fern Hydropteris pinnata, rhizomes, and taxodiaceous conifers. Horseshoe Canyon Formation Habitat The Horseshoe Canyon Formation has been radiometrically dated as being between 74 and 67 million years old. It was deposited during the gradual withdrawal of the Western Interior Seaway, during the Campanian and Maastrichtian stage of the Late Cretaceous period. The Horseshoe Canyon Formation is a terrestrial unit which is part of the Edmonton Group that includes the Battle Formation and the Whitemud Member, both in Edmonton. The valley where dinosaurs lived included ancient meandering estuary channels, straight channels, peat swamps, river deltas, floodplains, shorelines and wetlands. Due to the changing sea levels, many different environments are represented in the Horseshoe Canyon Formation, including offshore and near-shore marine habitats and coastal habitats like lagoons, and tidal flats. The area was wet and warm with a temperate to subtropical climate. Just prior to the Campanian–Maastrichtian boundary, the mean annual temperature and precipitation in this region dropped rapidly. The dinosaurs from this formation form part of the Edmontonian land vertebrate age, and are distinct from those in the formations above and below. Modern life at high elevations in lower latitudes resembles life at low elevation in higher latitudes. There may be parallels to this phenomenon in Cretaceous ecosystems, for instance, Pachyrhinosaurus species are found in both Alaska and upland environments in southern Alberta. During the Edmontonian, in North America's northern biome, there is a general trend of reduced centrosaurine diversity, with only Pachyrhinosaurus surviving. Pachyrhinosaurus appears to have been part of a coastal fauna characterized by an association with Edmontosaurus. Paleofauna P. canadensis coexisted with ankylosaurids Anodontosaurus lambei and Edmontonia longiceps, the maniraptorans Atrociraptor marshalli, Epichirostenotes curriei, the troodontid Albertavenator curriei, the alvarezsaurid theropod Albertonykus borealis, the ornithomimids Dromiceiomimus brevitertius, Ornithomimus edmontonicus, and an unnamed species of Struthiomimus, the bone-head pachycephalosaurids Stegoceras, and Sphaerotholus edmontonensis, the ornithopod Parksosaurus warreni, the hadrosaurids Edmontosaurus regalis, Hypacrosaurus altispinus, and Saurolophus osborni, the ceratopsians Anchiceratops ornatus, Arrhinoceratops brachyops, Eotriceratops xerinsularis, Montanoceratops cerorhynchus, and the tyrannosaurid Albertosaurus sarcophagus, which was the apex predator of this paleoenvironment. Of these, the hadrosaurs dominated in terms of sheer number and made up half of all dinosaurs who lived in this region. Vertebrates present in the Horseshoe Canyon Formation at the time of Pachyrhinosaurus included reptiles, and amphibians. Sharks, rays, sturgeons, bowfins, gars and the gar-like Belonostomus made up the fish fauna. Reptiles such as turtles and crocodilians are rare in the Horseshoe Canyon Formation, and this was thought to reflect the relatively cool climate which prevailed at the time. A study by Quinney et al. 
(2013), however, showed that the decline in turtle diversity, which was previously attributed to climate, coincided instead with changes in soil drainage conditions, and was limited by aridity, landscape instability, and migratory barriers. The saltwater plesiosaur Leurospondylus was present and freshwater environments were populated by turtles, Champsosaurus, and crocodilians like Leidyosuchus and Stangerochampsa. Evidence has shown that multituberculates and the early marsupial Didelphodon coyi were present. Vertebrate trace fossils from this region included the tracks of theropods, ceratopsians and ornithopods, which provide evidence that these animals were also present. Non-vertebrates in this ecosystem included both marine and terrestrial invertebrates.
Biology and health sciences
Ornithischians
Animals
1792433
https://en.wikipedia.org/wiki/Maximum%20a%20posteriori%20estimation
Maximum a posteriori estimation
An estimation procedure that is often claimed to be part of Bayesian statistics is the maximum a posteriori (MAP) estimate of an unknown quantity, that equals the mode of the posterior density with respect to some reference measure, typically the Lebesgue measure. The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. It is closely related to the method of maximum likelihood (ML) estimation, but employs an augmented optimization objective which incorporates a prior density over the quantity one wants to estimate. MAP estimation is therefore a regularization of maximum likelihood estimation, so is not a well-defined statistic of the Bayesian posterior distribution. Description Assume that we want to estimate an unobserved population parameter θ on the basis of observations x. Let f be the sampling distribution of x, so that f(x | θ) is the probability of x when the underlying population parameter is θ. Then the function θ ↦ f(x | θ) is known as the likelihood function and the estimate θ̂_MLE(x) = argmax_θ f(x | θ) is the maximum likelihood estimate of θ. Now assume that a prior distribution g over θ exists. This allows us to treat θ as a random variable as in Bayesian statistics. We can calculate the posterior density of θ using Bayes' theorem: f(θ | x) = f(x | θ) g(θ) / ∫ f(x | ϑ) g(ϑ) dϑ, where g is the density function of θ and the integral runs over the domain of g. The method of maximum a posteriori estimation then estimates θ as the mode of the posterior density of this random variable: θ̂_MAP(x) = argmax_θ f(θ | x) = argmax_θ f(x | θ) g(θ). The denominator of the posterior density (the marginal likelihood of the model) is always positive and does not depend on θ and therefore plays no role in the optimization. Observe that the MAP estimate of θ coincides with the ML estimate when the prior g is uniform (i.e., g is a constant function), which occurs whenever the prior distribution is taken as the reference measure, as is typical in function-space applications. When the loss function is of the form L(θ, a) = 0 if |a − θ| < c and 1 otherwise, then as c goes to 0, the Bayes estimator approaches the MAP estimator, provided that the distribution of θ is quasi-concave. But generally a MAP estimator is not a Bayes estimator unless θ is discrete. Computation MAP estimates can be computed in several ways: Analytically, when the mode(s) of the posterior density can be given in closed form. This is the case when conjugate priors are used. Via numerical optimization such as the conjugate gradient method or Newton's method. This usually requires first or second derivatives, which have to be evaluated analytically or numerically. Via a modification of an expectation-maximization algorithm. This does not require derivatives of the posterior density. Via a Monte Carlo method using simulated annealing. Limitations While only mild conditions are required for MAP estimation to be a limiting case of Bayes estimation (under the 0–1 loss function), it is not representative of Bayesian methods in general. This is because MAP estimates are point estimates, and depend on the arbitrary choice of reference measure, whereas Bayesian methods are characterized by the use of distributions to summarize data and draw inferences: thus, Bayesian methods tend to report the posterior mean or median instead, together with credible intervals. This is both because these estimators are optimal under squared-error and linear-error loss respectively—which are more representative of typical loss functions—and because for a continuous posterior distribution there is no loss function which suggests the MAP is the optimal point estimator.
In addition, the posterior density may often not have a simple analytic form: in this case, the distribution can be simulated using Markov chain Monte Carlo techniques, while optimization to find the mode(s) of the density may be difficult or impossible. In many types of models, such as mixture models, the posterior may be multi-modal. In such a case, the usual recommendation is that one should choose the highest mode: this is not always feasible (global optimization is a difficult problem), nor in some cases even possible (such as when identifiability issues arise). Furthermore, the highest mode may be uncharacteristic of the majority of the posterior, especially in many dimensions. Finally, unlike ML estimators, the MAP estimate is not invariant under reparameterization. Switching from one parameterization to another involves introducing a Jacobian that impacts on the location of the maximum. In contrast, Bayesian posterior expectations are invariant under reparameterization. As an example of the difference between Bayes estimators mentioned above (mean and median estimators) and using a MAP estimate, consider the case where there is a need to classify inputs as either positive or negative (for example, loans as risky or safe). Suppose there are just three possible hypotheses about the correct method of classification, h1, h2 and h3, with posteriors 0.4, 0.3 and 0.3 respectively. Suppose that, given a new instance x, h1 classifies it as positive, whereas the other two classify it as negative. Using the MAP estimate for the correct classifier, h1, x is classified as positive, whereas the Bayes estimators would average over all hypotheses and classify x as negative. Example Suppose that we are given a sequence (x_1, ..., x_n) of IID N(μ, σ_v²) random variables and a prior distribution of μ is given by N(μ_0, σ_m²). We wish to find the MAP estimate of μ. Note that the normal distribution is its own conjugate prior, so we will be able to find a closed-form solution analytically. The function to be maximized is then given by f(x | μ) g(μ) = ∏_{j=1}^{n} (1/(√(2π) σ_v)) exp(−(x_j − μ)² / (2σ_v²)) · (1/(√(2π) σ_m)) exp(−(μ − μ_0)² / (2σ_m²)), which is equivalent to minimizing the following function of μ: ∑_{j=1}^{n} (x_j − μ)² / (2σ_v²) + (μ − μ_0)² / (2σ_m²). Thus, we see that the MAP estimator for μ is given by μ̂_MAP = (σ_m² ∑_{j=1}^{n} x_j + σ_v² μ_0) / (n σ_m² + σ_v²) = (n σ_m² / (n σ_m² + σ_v²)) x̄ + (σ_v² / (n σ_m² + σ_v²)) μ_0, which turns out to be a linear interpolation between the prior mean and the sample mean weighted by their respective covariances. The case of σ_m → ∞ is called a non-informative prior and leads to an improper probability distribution; in this case μ̂_MAP → μ̂_MLE.
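A minimal Python sketch of this conjugate-normal example may help make the computation concrete; the observation values and prior hyperparameters below are illustrative assumptions rather than figures from the article, and the numerical step simply checks that maximizing the log-posterior reproduces the closed-form estimate.

import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative data and hyperparameters (assumed values, not from the article).
x = np.array([2.1, 1.7, 2.5, 1.9, 2.3])  # observations, modelled as N(mu, sigma_v^2)
sigma_v = 0.5                             # known sampling standard deviation
mu_0, sigma_m = 0.0, 2.0                  # prior on mu is N(mu_0, sigma_m^2)

# Closed-form MAP estimate for the conjugate normal model.
n = len(x)
mu_map = (sigma_m**2 * x.sum() + sigma_v**2 * mu_0) / (n * sigma_m**2 + sigma_v**2)

# Numerical check: minimize the negative (unnormalized) log-posterior.
def neg_log_posterior(mu):
    return ((x - mu)**2).sum() / (2 * sigma_v**2) + (mu - mu_0)**2 / (2 * sigma_m**2)

mu_numeric = minimize_scalar(neg_log_posterior).x
print(mu_map, mu_numeric)  # the two values should agree to numerical precision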
Mathematics
Statistics
null
1795571
https://en.wikipedia.org/wiki/Calling%20convention
Calling convention
In computer science, a calling convention is an implementation-level (low-level) scheme for how subroutines or functions receive parameters from their caller and how they return a result. When some code calls a function, design choices have been made for where and how parameters are passed to that function, and where and how results are returned from that function, with these transfers typically done via certain registers or within a stack frame on the call stack. There are design choices for how the tasks of preparing for a function call and restoring the environment after the function has completed are divided between the caller and the callee. A calling convention specifies the way every function should be called. The correct calling convention must be used for every function call to allow correct and reliable execution of the whole program using these functions. Introduction Calling conventions are usually considered part of the application binary interface (ABI). They may be considered a contract between the caller and the called function. Related concepts The names or meanings of the parameters and return values are defined in the application programming interface (API, as opposed to ABI), which is a separate though related concept to ABI and calling convention. The names of members within passed structures and objects would also be considered part of the API, and not ABI. Sometimes APIs do include keywords to specify the calling convention for functions. Calling conventions do not typically include information on handling the lifespan of dynamically allocated structures and objects. Other supplementary documentation may state where the responsibility for freeing up allocated memory lies. Calling conventions are unlikely to specify the layout of items within structures and objects, such as byte ordering or structure packing. For some languages, the calling convention includes details of error or exception handling (e.g. Go, Java), and for others, it does not (e.g. C++). For remote procedure calls, there is an analogous concept called marshalling. Calling conventions may be related to a particular programming language's evaluation strategy, but most often are not considered part of it (or vice versa), as the evaluation strategy is usually defined on a higher abstraction level and seen as a part of the language rather than as a low-level implementation detail of a particular language's compiler. Different calling conventions Calling conventions may differ in: Where parameters are placed. Options include registers, on the call stack, a mix of both, or in other memory structures. The order in which parameters are passed. Options include left-to-right order, or right-to-left, or something more complex. How functions that take a variable number of arguments (variadic functions) are handled. Options include passing them in order (presuming the first parameter is in an obvious position) or passing the variable parts in an array. How return values are delivered from the callee back to the caller. Options include on the stack, in a register, or as a reference to something allocated on the heap. How long or complex values are handled, perhaps by splitting across multiple registers, within the stack frame, or with reference to memory. Which registers are guaranteed to have the same value when the callee returns as they did when the callee was called. These registers are said to be saved or preserved, so they are not volatile.
How the task of setting up for and cleaning up after a function call is divided between the caller and the callee. In particular, how the stack frame is restored so the caller may continue after the callee has finished. Whether and how metadata describing the arguments is passed Where the previous value of the frame pointer is stored, which is used to restore the stack frame when the subroutine ends. Options include within the call stack, or in a specific register. Sometimes frame pointers are not used at all. Where any static scope links for the routine's non-local data access are placed (typically at one or more positions in the stack frame, but sometimes in a general register, or, for some architectures, in special-purpose registers) For object-oriented languages, how the function's object is referenced Calling conventions within one platform Sometimes multiple calling conventions appear on a single platform; a given platform and language implementation may offer a choice of calling conventions. Reasons for this include performance, adaptation of conventions of other popular languages, and restrictions or conventions imposed by various "computing platforms". Many architectures only have one widely-used calling convention, often suggested by the architect. For RISCs including SPARC, MIPS, and RISC-V, registers names based on this calling convention are often used. For example, MIPS registers through have "ABI names" through , reflecting their use for parameter passing in the standard calling convention. (RISC CPUs have many equivalent general-purpose registers so there's typically no hardware reason for giving them names other than numbers.) The calling convention of a given program's language may differ from the calling convention of the underlying platform, OS, or of some library being linked to. For example, on 32-bit Windows, operating system calls have the stdcall calling convention, whereas many C programs that run there use the cdecl calling convention. To accommodate these differences in calling convention, compilers often permit keywords that specify the calling convention for a given function. The function declarations will include additional platform-specific keywords that indicate the calling convention to be used. When handled correctly, the compiler will generate code to call functions in the appropriate manner. Some languages allow the calling convention for a function to be explicitly specified with that function; other languages will have some calling convention but it will be hidden from the users of that language, and therefore will not typically be a consideration for the programmer. Architectures x86 (32-bit) The 32-bit version of the x86 architecture is used with many different calling conventions. Due to the small number of architectural registers, and historical focus on simplicity and small code-size, many x86 calling conventions pass arguments on the stack. The return value (or a pointer to it) is returned in a register. Some conventions use registers for the first few parameters which may improve performance, especially for short and simple leaf-routines very frequently invoked (i.e. routines that do not call other routines). Example call: push EAX ; pass some register result push dword [EBP+20] ; pass some memory variable (FASM/TASM syntax) push 3 ; pass some constant call calc ; the returned result is now in EAX Typical callee structure: (some or all (except ret) of the instructions below may be optimized away in simple procedures). 
Some conventions leave the parameter space allocated, using plain instead of . In that case, the caller could in this example, or otherwise deal with the change to ESP. calc: push EBP ; save old frame pointer mov EBP,ESP ; get new frame pointer sub ESP,localsize ; reserve stack space for locals . . ; perform calculations, leave result in EAX . mov ESP,EBP ; free space for locals pop EBP ; restore old frame pointer ret paramsize ; free parameter space and return. x86-64 The 64-bit version of the x86 architecture, known as x86-64, AMD64, and Intel 64, has two calling sequences in common use. One calling sequence, defined by Microsoft, is used on Windows; the other calling sequence, specified in the AMD64 System V ABI, is used by Unix-like systems and, with some changes, by OpenVMS. As x86-64 has more general-purpose registers than does 16-bit x86, both conventions pass some arguments in registers. ARM (A32) The standard 32-bit ARM calling convention allocates the 16 general-purpose registers as: r15: Program counter (as per the instruction set specification). r14: Link register. The BL instruction, used in a subroutine call, stores the return address in this register. r13: Stack pointer. The Push/Pop instructions in "Thumb" operating mode use this register only. r12: Intra-Procedure-call scratch register. r4 to r11: Local variables. r0 to r3: Argument values passed to a subroutine and results returned from a subroutine. If the type of value returned is too large to fit in r0 to r3, or whose size cannot be determined statically at compile time, then the caller must allocate space for that value at run time, and pass a pointer to that space in r0. Subroutines must preserve the contents of r4 to r11 and the stack pointer (perhaps by saving them to the stack in the function prologue, then using them as scratch space, then restoring them from the stack in the function epilogue). In particular, subroutines that call other subroutines must save the return address in the link register r14 to the stack before calling those other subroutines. However, such subroutines do not need to return that value to r14—they merely need to load that value into r15, the program counter, to return. The ARM calling convention mandates using a full-descending stack. In addition, the stack pointer must always be 4-byte aligned, and must always be 8-byte aligned at a function call with a public interface. This calling convention causes a "typical" ARM subroutine to: In the prologue, push r4 to r11 to the stack, and push the return address in r14 to the stack (this can be done with a single STM instruction); Copy any passed arguments (in r0 to r3) to the local scratch registers (r4 to r11); Allocate other local variables to the remaining local scratch registers (r4 to r11); Do calculations and call other subroutines as necessary using BL, assuming r0 to r3, r12 and r14 will not be preserved; Put the result in r0; In the epilogue, pull r4 to r11 from the stack, and pull the return address to the program counter r15. This can be done with a single LDM instruction. ARM (A64) The 64-bit ARM (AArch64) calling convention allocates the 31 general-purpose registers as: x31 (SP): Stack pointer or a zero register, depending on context. x30 (LR): Procedure link register, used to return from subroutines. x29 (FP): Frame pointer. x19 to x28: Callee-saved. x18 (PR): Platform register. Used for some operating-system-specific special purpose, or an additional caller-saved register. 
x16 (IP0) and x17 (IP1): Intra-Procedure-call scratch registers. x9 to x15: Local variables, caller saved. x8 (XR): Indirect return value address. x0 to x7: Argument values passed to and results returned from a subroutine. All registers starting with x have a corresponding 32-bit register prefixed with w. Thus, a 32-bit x0 is called w0. Similarly, the 32 floating-point registers are allocated as: v0 to v7: Argument values passed to and results returned from a subroutine. v8 to v15: callee-saved, but only the bottom 64 bits need to be preserved. v16 to v31: Local variables, caller saved. RISC-V ISA RISC-V has a defined calling convention with two flavors, with or without floating point. It passes arguments in registers whenever possible. POWER, PowerPC, and Power ISA The POWER, PowerPC, and Power ISA architectures have a large number of registers so most functions can pass all arguments in registers for single level calls. Additional arguments are passed on the stack, and space for register-based arguments is also always allocated on the stack as a convenience to the called function in case multi-level calls are used (recursive or otherwise) and the registers must be saved. This is also of use in variadic functions, such as , where the function's arguments need to be accessed as an array. A single calling convention is used for all procedural languages. Branch-and-link instructions store the return address in a special link register separate from the general-purpose registers; a routine returns to its caller with a branch instruction that uses the link register as the destination address. Leaf routines do not need to save or restore the link register; non-leaf routines must save the return address before making a call to another routine and restore it before it returns, saving it by using the Move From Special Purpose Register instruction to move the link register to a general-purpose register and, if necessary, then saving it to the stack, and restoring it by, if it was saved to the stack, loading the saved link register value to a general-purpose register, and then using the Move To Special Purpose Register instruction to move the register containing the saved link-register value to the link register. MIPS The O32 ABI is the most commonly-used ABI, owing to its status as the original System V ABI for MIPS. It is strictly stack-based, with only four registers available to pass arguments. This perceived slowness, along with an antique floating-point model with 16 registers only, has encouraged the proliferation of many other calling conventions. The ABI took shape in 1990 and was never updated since 1994. It is only defined for 32-bit MIPS, but GCC has created a 64-bit variation called O64. For 64-bit, the N64 ABI (not related to Nintendo 64) by Silicon Graphics is most commonly used. The most important improvement is that eight registers are now available for argument passing; It also increases the number of floating-point registers to 32. There is also an ILP32 version called N32, which uses 32-bit pointers for smaller code, analogous to the x32 ABI. Both run under the 64-bit mode of the CPU. A few attempts have been made to replace O32 with a 32-bit ABI that resembles N32 more. A 1995 conference came up with MIPS EABI, for which the 32-bit version was quite similar. EABI inspired MIPS Technologies to propose a more radical "NUBI" ABI that additionally reuses argument registers for the return value. MIPS EABI is supported by GCC but not LLVM; neither supports NUBI. 
For all of O32 and N32/N64, the return address is stored in a register. This is automatically set with the use of the (jump and link) or (jump and link register) instructions. The stack grows downwards. SPARC The SPARC architecture, unlike most RISC architectures, is built on register windows. There are 24 accessible registers in each register window: 8 are the "in" registers (%i0-%i7), 8 are the "local" registers (%l0-%l7), and 8 are the "out" registers (%o0-%o7). The "in" registers are used to pass arguments to the function being called, and any additional arguments need to be pushed onto the stack. However, space is always allocated by the called function to handle a potential register window overflow, local variables, and (on 32-bit SPARC) returning a struct by value. To call a function, one places the arguments for the function to be called in the "out" registers; when the function is called, the "out" registers become the "in" registers and the called function accesses the arguments in its "in" registers. When the called function completes, it places the return value in the first "in" register, which becomes the first "out" register when the called function returns. The System V ABI, which most modern Unix-like systems follow, passes the first six arguments in "in" registers %i0 through %i5, reserving %i6 for the frame pointer and %i7 for the return address. IBM System/360 and successors The IBM System/360 is another architecture without a hardware stack. The examples below illustrate the calling convention used by OS/360 and successors prior to the introduction of 64-bit z/Architecture; other operating systems for System/360 might have different calling conventions. Calling program: LA 1,ARGS Load argument list address L 15,=A(SUB) Load subroutine address BALR 14,15 Branch to called routine1 ... ARGS DC A(FIRST) Address of 1st argument DC A(SECOND) ... DC A(THIRD)+X'80000000' Last argument2 Called program: SUB EQU * This is the entry point of the subprogram Standard entry sequence: USING *,153 STM 14,12,12(13) Save registers4 ST 13,SAVE+4 Save caller's savearea addr LA 12,SAVE Chain saveareas ST 12,8(13) LR 13,12 ... Standard return sequence: L 13,SAVE+45 LM 14,12,12(13) L 15,RETVAL6 BR 14 Return to caller SAVE DS 18F Savearea7
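To show how such convention differences can surface even at the scripting level, the following is a hedged Python sketch using the standard ctypes module, which on 32-bit Windows distinguishes the cdecl and stdcall conventions discussed above; the library name "example.dll" and the function "add_numbers" are hypothetical placeholders rather than a real API.

import ctypes

# On 32-bit Windows, ctypes exposes two loader classes that differ only in
# the calling convention assumed for the library's exported functions:
# CDLL assumes cdecl (the caller cleans up the stack), while WinDLL assumes
# stdcall (the callee cleans up the stack). On 64-bit Windows a single
# convention is used, so the two classes behave identically there.
# "example.dll" and "add_numbers" are hypothetical names for illustration.
cdecl_lib = ctypes.CDLL("example.dll")      # functions called with cdecl
stdcall_lib = ctypes.WinDLL("example.dll")  # functions called with stdcall

add = stdcall_lib.add_numbers
add.argtypes = [ctypes.c_int, ctypes.c_int]  # declared parameter types
add.restype = ctypes.c_int                   # declared return type
print(add(2, 3))  # ctypes marshals the call using the declared convention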
Technology
Software development: General
null
1795597
https://en.wikipedia.org/wiki/Acetone
Acetone
Acetone (2-propanone or dimethyl ketone) is an organic compound with the formula . It is the simplest and smallest ketone (). It is a colorless, highly volatile, and flammable liquid with a characteristic pungent odour, very reminiscent of the smell of pear drops. Acetone is miscible with water and serves as an important organic solvent in industry, home, and laboratory. About 6.7 million tonnes were produced worldwide in 2010, mainly for use as a solvent and for production of methyl methacrylate and bisphenol A, which are precursors to widely used plastics. It is a common building block in organic chemistry. It serves as a solvent in household products such as nail polish remover and paint thinner. It has volatile organic compound (VOC)-exempt status in the United States. Acetone is produced and disposed of in the human body through normal metabolic processes. It is normally present in blood and urine. People with diabetic ketoacidosis produce it in larger amounts. Ketogenic diets that increase ketone bodies (acetone, β-hydroxybutyric acid and acetoacetic acid) in the blood are used to counter epileptic attacks in children who suffer from refractory epilepsy. Name From the 17th century, and before modern developments in organic chemistry nomenclature, acetone was given many different names. They included "spirit of Saturn", which was given when it was thought to be a compound of lead and, later, "pyro-acetic spirit" and "pyro-acetic ester". Prior to the name "acetone" being coined by French chemists (see below), it was named "mesit" (from the Greek μεσίτης, meaning mediator) by Carl Reichenbach, who also claimed that methyl alcohol consisted of mesit and ethyl alcohol. Names derived from mesit include mesitylene and mesityl oxide which were first synthesised from acetone. Unlike many compounds with the acet- prefix which have a 2-carbon chain, acetone has a 3-carbon chain. That has caused confusion because there cannot be a ketone with 2 carbons. The prefix refers to acetone's relation to vinegar (acetum in Latin, also the source of the words "acid" and "acetic"), rather than its chemical structure. History Acetone was first produced by Andreas Libavius in 1606 by distillation of lead(II) acetate. In 1832, French chemist Jean-Baptiste Dumas and German chemist Justus von Liebig determined the empirical formula for acetone. In 1833, French chemists Antoine Bussy and Michel Chevreul decided to name acetone by adding the suffix -one to the stem of the corresponding acid (viz, acetic acid) just as a similarly prepared product of what was then confused with margaric acid was named margarone. By 1852, English chemist Alexander William Williamson realized that acetone was methyl acetyl; the following year, the French chemist Charles Frédéric Gerhardt concurred. In 1865, the German chemist August Kekulé published the modern structural formula for acetone. Johann Josef Loschmidt had presented the structure of acetone in 1861, but his privately published booklet received little attention. During World War I, Chaim Weizmann developed the process for industrial production of acetone (Weizmann Process). Production In 2010, the worldwide production capacity for acetone was estimated at 6.7 million tonnes per year. With 1.56 million tonnes per year, the United States had the highest production capacity, followed by Taiwan and China. The largest producer of acetone is INEOS Phenol, owning 17% of the world's capacity, with also significant capacity (7–8%) by Mitsui, Sunoco and Shell in 2010. 
INEOS Phenol also owns the world's largest production site (420,000 tonnes/annum) in Beveren (Belgium). Spot price of acetone in summer 2011 was 1100–1250 USD/tonne in the United States. Current method Acetone is produced directly or indirectly from propene. Approximately 83% of acetone is produced via the cumene process; as a result, acetone production is tied to phenol production. In the cumene process, benzene is alkylated with propylene to produce cumene, which is oxidized by air to produce phenol and acetone: Other processes involve the direct oxidation of propylene (Wacker-Hoechst process), or the hydration of propylene to give 2-propanol, which is oxidized (dehydrogenated) to acetone. Older methods Previously, acetone was produced by the dry distillation of acetates, for example calcium acetate in ketonic decarboxylation. Ca(CH3COO)2 -> CaO_{(s)}{} + CO2_{(g)}{} + (CH3)2CO v After that time, during World War I, acetone was produced using acetone-butanol-ethanol fermentation with Clostridium acetobutylicum bacteria, which was developed by Chaim Weizmann (later the first president of Israel) in order to help the British war effort, in the preparation of Cordite. This acetone-butanol-ethanol fermentation was eventually abandoned when newer methods with better yields were found. Chemical properties Acetone is reluctant to form a hydrate: K = 10−3 M−1 Like most ketones, acetone exhibits the keto–enol tautomerism in which the nominal keto structure of acetone itself is in equilibrium with the enol isomer (prop-1-en-2-ol). In acetone vapor at ambient temperature, only 2.4% of the molecules are in the enol form. In the presence of suitable catalysts, two acetone molecules also combine to form the compound diacetone alcohol , which on dehydration gives mesityl oxide . This product can further combine with another acetone molecule, with loss of another molecule of water, yielding phorone and other compounds. Acetone is a weak Lewis base that forms adducts with soft acids like I2 and hard acids like phenol. Acetone also forms complexes with divalent metals. Under ultraviolet light, acetone fluoresces.. The flame temperature of pure acetone is 1980 °C. Polymerisation At its melting point (−96 °C) is claimed to polymerize to give a white elastic solid, soluble in acetone, stable for several hours at room temperature. To do so, a vapor of acetone is co-condensed with magnesium as a catalyst onto a very cold surface. Natural occurrence Humans exhale several milligrams of acetone per day. It arises from decarboxylation of acetoacetate. Small amounts of acetone are produced in the body by the decarboxylation of ketone bodies. Certain dietary patterns, including prolonged fasting and high-fat low-carbohydrate dieting, can produce ketosis, in which acetone is formed in body tissue. Certain health conditions, such as alcoholism and diabetes, can produce ketoacidosis, uncontrollable ketosis that leads to a sharp, and potentially fatal, increase in the acidity of the blood. Since it is a byproduct of fermentation, acetone is a byproduct of the distillery industry. Metabolism Acetone can then be metabolized either by CYP2E1 via methylglyoxal to D-lactate and pyruvate, and ultimately glucose/energy, or by a different pathway via propylene glycol to pyruvate, lactate, acetate (usable for energy) and propionaldehyde. Uses About a third of the world's acetone is used as a solvent, and a quarter is consumed as acetone cyanohydrin, a precursor to methyl methacrylate. 
Chemical intermediate Acetone is used to synthesize methyl methacrylate. It begins with the initial conversion of acetone to acetone cyanohydrin via reaction with hydrogen cyanide (HCN): (CH3)2CO + HCN -> (CH3)2C(OH)CN In a subsequent step, the nitrile is hydrolyzed to the unsaturated amide, which is esterified: (CH3)2C(OH)CN + CH3OH -> CH2=C(CH3)CO2CH3 + NH3 The third major use of acetone (about 20%) is synthesizing bisphenol A. Bisphenol A is a component of many polymers such as polycarbonates, polyurethanes, and epoxy resins. The synthesis involves the condensation of acetone with phenol: (CH3)2CO + 2 C6H5OH -> (CH3)2C(C6H4OH)2 + H2O Many millions of kilograms of acetone are consumed in the production of the solvents methyl isobutyl alcohol and methyl isobutyl ketone. These products arise via an initial aldol condensation to give diacetone alcohol. 2 (CH3)2CO -> (CH3)2C(OH)CH2C(O)CH3 Condensation with acetylene gives 2-methylbut-3-yn-2-ol, precursor to synthetic terpenes and terpenoids. Solvent Acetone is a good solvent for many plastics and some synthetic fibers. It is used for thinning polyester resin, cleaning tools used with it, and dissolving two-part epoxies and superglue before they harden. It is used as one of the volatile components of some paints and varnishes. As a heavy-duty degreaser, it is useful in the preparation of metal prior to painting or soldering, and to remove rosin flux after soldering (to prevent adhesion of dirt and electrical leakage and perhaps corrosion or for cosmetic reasons), although it may attack some electronic components, such as polystyrene capacitors. Although itself flammable, acetone is used extensively as a solvent for the safe transportation and storage of acetylene, which cannot be safely pressurized as a pure compound. Vessels containing a porous material are first filled with acetone followed by acetylene, which dissolves into the acetone. One litre of acetone can dissolve around 250 litres of acetylene at a pressure of . Acetone is used as a solvent by the pharmaceutical industry and as a denaturant in denatured alcohol. Acetone is also present as an excipient in some pharmaceutical drugs. Lab and domestic solvent A variety of organic reactions employ acetone as a polar, aprotic solvent, e.g. the Jones oxidation. Because acetone is cheap, volatile, and dissolves or decomposes with most laboratory chemicals, an acetone rinse is the standard technique to remove solid residues from laboratory glassware before a final wash. Despite common desiccatory use, acetone dries only via bulk displacement and dilution. It forms no azeotropes with water (see azeotrope tables). Acetone also removes certain stains from microscope slides. Acetone freezes well below −78 °C. An acetone/dry ice mixture cools many low-temperature reactions. Make-up artists use acetone to remove skin adhesive from the netting of wigs and mustaches by immersing the item in an acetone bath, then removing the softened glue residue with a stiff brush. Acetone is a main ingredient in many nail polish removers because it breaks down nail polish. It is used for all types of nail polish removal, like gel nail polish, dip powder and acrylic nails. Biology Proteins precipitate in acetone. The chemical modifies peptides, both at α- or ε-amino groups, and in a poorly understood but rapid modification of certain glycine residues. In pathology, acetone helps find lymph nodes in fatty tissues (such as the mesentery) for tumor staging. 
The liquid dissolves the fat and hardens the nodes, making them easier to find. Medical Dermatologists use acetone with alcohol for acne treatments to chemically peel dry skin. Common agents used today for chemical peeling are salicylic acid, glycolic acid, azelaic acid, 30% salicylic acid in ethanol, and trichloroacetic acid (TCA). Prior to chemexfoliation, the skin is cleaned and excess fat removed in a process called defatting. Acetone, hexachlorophene, or a combination of these agents was used in this process. Acetone has been shown to have anticonvulsant effects in animal models of epilepsy, in the absence of toxicity, when administered in millimolar concentrations. It has been hypothesized that the high-fat low-carbohydrate ketogenic diet used clinically to control drug-resistant epilepsy in children works by elevating acetone in the brain. Because of their higher energy requirements, children have higher acetone production than most adults – and the younger the child, the higher the expected production. This indicates that children are not uniquely susceptible to acetone exposure. External exposures are small compared to the exposures associated with the ketogenic diet. Safety Acetone's most hazardous property is its extreme flammability. In small amounts, acetone burns with a dull blue flame; in larger amounts, fuel evaporation causes incomplete combustion and a bright yellow flame. At temperatures above acetone's flash point, air mixtures of 2.5–12.8% acetone (by volume) may explode or cause a flash fire. Vapors can flow along surfaces to distant ignition sources and flash back. Static discharge may also ignite acetone vapors, though acetone has a very high ignition initiation energy and accidental ignition is rare. Acetone's auto-ignition temperature is relatively high; moreover, auto-ignition temperature depends upon experimental conditions, such as exposure time, and has been quoted as high as 535 °C. Even pouring or spraying acetone over red-glowing coal will not ignite it, due to the high vapour concentration and the cooling effect of evaporation. Acetone should be stored away from strong oxidizers, such as concentrated nitric and sulfuric acid mixtures. It may also explode when mixed with chloroform in the presence of a base. When oxidized without combustion, for example with hydrogen peroxide, acetone may form acetone peroxide, a highly unstable primary explosive. Acetone peroxide may be formed accidentally, e.g. when waste peroxide is poured into waste solvents. Toxicity Acetone occurs naturally as part of certain metabolic processes in the human body, and has been studied extensively and is believed to exhibit only slight toxicity in normal use. There is no strong evidence of chronic health effects if basic precautions are followed. It is generally recognized to have low acute and chronic toxicity if ingested and/or inhaled. Acetone is not currently regarded as a carcinogen, a mutagen, or a concern for chronic neurotoxicity effects. Acetone can be found as an ingredient in a variety of consumer products ranging from cosmetics to processed and unprocessed foods. Acetone has been rated as a generally recognized as safe (GRAS) substance when present in drinks, baked foods, desserts, and preserves at concentrations ranging from 5 to 8 mg/L. Acetone is, however, an irritant, causing mild skin and moderate-to-severe eye irritation. At high vapor concentrations, it may depress the central nervous system like many other solvents.
Acute toxicity for mice by ingestion (LD50) is 3 g/kg, and by inhalation (LC50) is 44 g/m3 over 4 hours. Environmental effects Although acetone occurs naturally in the environment in plants, trees, volcanic gases, forest fires, and as a product of the breakdown of body fat, the majority of the acetone released into the environment is of industrial origin. Acetone evaporates rapidly, even from water and soil. Once in the atmosphere, it has a 22-day half-life and is degraded by UV light via photolysis (primarily into methane and ethane.) Consumption by microorganisms contributes to the dissipation of acetone in soil, animals, or waterways. EPA classification In 1995, the United States Environmental Protection Agency (EPA) removed acetone from the list of volatile organic compounds. The companies requesting the removal argued that it would "contribute to the achievement of several important environmental goals and would support EPA's pollution prevention efforts", and that acetone could be used as a substitute for several compounds that are listed as hazardous air pollutants (HAP) under section 112 of the Clean Air Act. In making its decision EPA conducted an extensive review of the available toxicity data on acetone, which was continued through the 2000s. It found that the evaluable "data are inadequate for an assessment of the human carcinogenic potential of acetone". Extraterrestrial occurrence On 30 July 2015, scientists reported that upon the first touchdown of the Philae lander on comet 67P surface, measurements by the COSAC and Ptolemy instruments revealed sixteen organic compounds, four of which were seen for the first time on a comet, including acetamide, acetone, methyl isocyanate, and propionaldehyde.
Physical sciences
Carbon–oxygen bond
null
19321517
https://en.wikipedia.org/wiki/Newt
Newt
A newt is a salamander in the subfamily Pleurodelinae. The terrestrial juvenile phase is called an eft. Unlike other members of the family Salamandridae, newts are semiaquatic, alternating between aquatic and terrestrial habitats. Not all aquatic salamanders are considered newts, however. More than 100 known species of newts are found in North America, Europe, North Africa and Asia. Newts metamorphose through three distinct developmental life stages: aquatic larva, terrestrial juvenile (eft), and adult. Adult newts have lizard-like bodies and return to the water every year to breed, otherwise living in humid, cover-rich land habitats. Newts are threatened by habitat loss, fragmentation and pollution. Several species are endangered, and at least one species, the Yunnan lake newt, has become extinct recently. Etymology The Old English name of the animal was , (of unknown origin), resulting in Middle English ; this word was transformed irregularly into , , or . The initial "n" was added from the indefinite article "an" by provection (juncture loss) ("an eft" → "a n'eft" → ...) by the early 15th century. The form "newt" appears to have arisen as a dialectal variant of eft in Staffordshire, but entered Standard English by the Early Modern period (used by Shakespeare in Macbeth iv.1). The regular form eft, now only used for newly metamorphosed specimens, survived alongside newt, especially in composition, the larva being called "water-eft" and the mature form "land-eft" well into the 18th century, but the simplex "eft" as equivalent to "water-eft" has been in use since at least the 17th century. Dialectal English and Scots also has the word ask (also awsk, esk in Scots) used for both newts and wall lizards, from Old English āþexe, from Proto-Germanic *agiþahsijǭ, literally "lizard-badger" or "distaff-like lizard" (compare German Eidechse and Echse, both "lizard;" *agi- is ultimately cognate with Greek "snake," from Proto-Indo-European *h₁ogʷʰis). Latin had the name stellio for a type of spotted newt, now used for species of the genus Stellagama. Ancient Greek had the name , presumably for the water newt (immature newt, eft). German has , from Middle High German mol, :wikt:olm, like the English term of unknown etymology. Newts are also known as Tritones (viz., named for the mythological Triton) in historical literature, and "triton" remains in use as common name in some Romance languages, such as Spanish and Romanian, but as well as in Greek, Russian, and Bulgarian. The systematic name Tritones was introduced alongside Pleurodelinae by Tschudi in 1838, based on the type genus named Triton by Laurenti in 1768. Laurenti's Triton was renamed to Triturus ("Triton-tail") by Rafinesque in 1815. Tschudi's Pleurodelinae is based on the type genus Pleurodeles (ribbed newt) named by Michahelles in 1830 (the name meaning "having prominent ribs," formed from "ribs" and "conspicuous"). Collective nouns for newts are flotilla and armada. Distribution and habitats Newts are found in North America, Europe, North Africa and Asia. The Pacific newts (Taricha) and the Eastern newts (Notophthalmus) with together seven species are the only representatives in North America, while most diversity is found in the Old World: In Europe and the Middle East, the group's likely origin, eight genera with roughly 30 species are found, with the ribbed newts (Pleurodeles) extending to northernmost Africa. Eastern Asia, from Eastern India over Indochina to Japan, is home to five genera with more than 40 species. 
Newts are semiaquatic, spending part of the year in the water for reproduction and the rest of the year on land. While most species prefer stagnant water bodies such as ponds, ditches, or flooded meadows for reproduction, some species such as the Danube crested newt can also occur in slow-flowing rivers. The European brook newts (Calotriton) and European mountain newts (Euproctus) have even adapted to life in cold, oxygen-rich mountain streams. During their terrestrial phase, newts live in humid habitats with abundant cover such as logs, rocks, or earth holes. Characteristics Newts share many of the characteristics of their salamander kin, Caudata, including semipermeable glandular skin, four equal-sized limbs, and a distinct tail. The newt's skin, however, is not as smooth as that of other salamanders. The cells at the site of an injury have the ability to undifferentiate, reproduce rapidly, and differentiate again to create a new limb or organ. One hypothesis is that the undifferentiated cells are related to tumor cells, since chemicals that produce tumors in other animals will produce additional limbs in newts. Development The main breeding season for newts (in the Northern Hemisphere) is in June and July. A single newt female can produce hundreds of eggs. For instance, the warty newt can produce 200–300 eggs (Bradford 2017). After courtship rituals of varying complexity, which take place in ponds or slow-moving streams, the male newt transfers a spermatophore, which is taken up by the female. Fertilized eggs are laid singly and are usually attached to aquatic plants. This distinguishes them from the free-floating eggs of frogs or toads, which are laid in clumps or in strings. Plant leaves are usually folded over and attached to the eggs to protect them. The larvae, which resemble fish fry but are distinguished by their feathery external gills, hatch out in about three weeks. After hatching, they eat algae, small invertebrates, or other amphibian larvae. During the subsequent few months, the larvae undergo metamorphosis, during which they develop legs, and the gills are absorbed and replaced by air-breathing lungs. Some species, such as the North American newts, also become more brightly colored during this phase. Once fully metamorphosed, they leave the water and live a terrestrial life, when they are known as "efts." Only when the eft reaches adulthood will the North American species return to live in water, rarely venturing back onto the land. Conversely, most European species live their adult lives on land and only visit water to breed. Toxicity Many newts produce toxins in their skin secretions as a defence mechanism against predators. Taricha newts of western North America are particularly toxic. The rough-skinned newt Taricha granulosa of the Pacific Northwest produces more than enough tetrodotoxin to kill an adult human, and some Native Americans of the Pacific Northwest used the toxin to poison their enemies. However, the toxins are only dangerous if ingested or otherwise enter the body; for example, through a wound. Newts can safely live in the same ponds or streams as frogs and other amphibians or be kept as pets. The only predators of Taricha newts are garter snakes, some having developed a resistance to the toxin. Most newts can be safely handled, provided the toxins they produce are not ingested or allowed to come in contact with mucous membranes or breaks in the skin. 
Systematics
Newts form one of three subfamilies in the family Salamandridae, alongside Salamandrinae and Salamandrininae. They comprise most extant species in the family, roughly 100, which are classified in sixteen genera:
Calotriton
Cynops (incl. Hypselotriton)
Echinotriton
Euproctus
Ichthyosaura
Laotriton
Lissotriton
Neurergus
Notophthalmus
Ommatotriton
Pachytriton
Paramesotriton
Pleurodeles
Taricha
Triturus
Tylototriton (incl. Liangshantriton)
Hypselotriton and Liangshantriton are regarded as separate genera by some authors, but this is not unanimous. The term "newt" has traditionally been seen as an exclusively functional term for salamanders living in water, and not a clade. Phylogenetic analyses have, however, shown that species in the Salamandridae traditionally called newts do form a monophyletic group. Other, more distantly related salamander families also contain fully or partly aquatic species, such as the mole salamanders, the Proteidae, or the Sirenidae. Classification of all genera of the Pleurodelinae subfamily follows Pyron and Wiens, revised by Mikko Haaramo.
Phylogenetics
Phylogenetic analyses estimated the origin of the newt subfamily in the Late Cretaceous to Eocene. Several fossil salamanders have also been referred to the Pleurodelinae, including:
Archaeotriton
Brachycormus
Carpathotriton
Chelotriton
Koalliella
Palaeopleurodeles
Anatomy and physiology
Circulation
The heart of newts, like that of most amphibians, consists of two atria and one ventricle. Blood flows from the anterior and posterior caval veins into the right atrium; blood that enters the heart through the left atrium is then expelled from the ventricle. Newts do not have a coronary artery on the ventricle; such circulation is found only in the conus arteriosus. Newts have a special circulatory adaptation that allows them to survive ventricular penetration: when a newt's ventricle is punctured, the heart diverts the blood directly into an ascending aorta via a duct located between the ventricle and the conus arteriosus. Newts begin to regenerate the ventricle by a thickening of the epicardial layer that protrudes to allow new vessels to form, and conclude with a regeneration of the entire myocardial wall. In the early stages of amphibian development, ventilatory gas transport and hemoglobin gas transport are independent mechanisms, not yet coupled as they are in adulthood. In juvenile amphibians, there is no cardiovascular response to conditions of hypoxia. When newts are rendered anemic, they are able to respire without the need for blood cells. In T. carnifex, around two weeks after anemia is induced, the newts produce a mass of cells that helps to replenish the circulating red blood cell mass.
Respiration
Adult crested newts (Triturus cristatus) were found to breathe mainly through the skin, but also through the lungs and the buccal cavity. Lung breathing is mainly used when there is a lack of oxygen in the water, or at high activity such as during courtship, breeding, or feeding. A form of compensatory respiration is the ability to release stored erythrocytes when needed, for example under hypoxia. In adults, spleen size can increase as the temperature declines; in larvae, there is no dramatic change in spleen size. During hibernation, an increase in liver pigment cells allows for storage of oxygen, as well as other important ions and free radicals.
Osmoregulation
In experiments, dehydrated eastern newts were prone to a loss of motor control: after only 22% water weight loss, newts in the aquatic phase lost their ability to remain upright and mobile. However, after adaptation to a terrestrial phase, they could lose 30% before a loss of motor control was recorded. Newts in the terrestrial phase were found to dehydrate much more quickly than newts in the aquatic phase; conversely, during rehydration, dehydrated terrestrial animals regain water about five times faster than dehydrated newts in the aquatic phase. In the Italian crested newt, it was shown that during the winter months prolactin is released into the circulatory system, which drives the newts into the aquatic environment and reduces the active transport of sodium ions. In contrast to prolactin, which decreases osmotic permeability, vasotocin increases the permeability and is secreted during the summer months. Arginine vasotocin not only increases cutaneous water permeability, but also promotes increased cutaneous blood flow.
Thermoregulation
Thermoregulation, in combination with seasonal acclimation, describes the major mechanisms by which newts, as ectotherms, cope with the changing temperatures in their environments. This regulation is most often achieved through behavioral thermoregulation. Newts are thermoconformers, meaning their body temperature tracks that of their surroundings. When there is a large range of environmental temperatures, newts are insensitive to a thermal gradient profile. To escape predators, newt larvae have been found to shift their microhabitat to a temperature range that lies outside the predator's preferred temperature range. Larvae in the metamorphosing stage tend to prefer warmer temperatures than those in the stage following metamorphosis; the larvae in this stage therefore undergo a much more precise thermoregulation process than those in the intermediate stage. Reproductive females of the Italian crested newt were shown to regulate their body temperature more precisely, and to prefer higher temperatures, than non-reproductive females and males.
Spermatogenesis
The newt is regarded as an ideal vertebrate model for investigating the mechanism(s) controlling the transition from mitosis to meiosis during spermatogenesis. In the male newt Cynops pyrrhogaster, this transition was shown to involve expression of PCNA, a DNA polymerase delta auxiliary protein involved in DNA replication and DNA repair, as well as the DMC1 protein, a marker of genetic recombination activity.
Susceptibility to pollution
Larvae, with the large number of lamellae in their gills, are more susceptible to pollutants than adults. Cadmium, a heavy metal released into the environment from industrial and consumer waste, has been shown to be detrimental to the Italian crested newt even at concentrations below Italian and European thresholds, by disrupting the activity of the adrenal gland. In experiments exposing Italian crested newts to nonylphenol, an endocrine disruptor commonly found in leakage from sewers, there was a decrease in corticosterone and aldosterone, hormones produced by the adrenal gland and important for the stress response.
Conservation status Although some species, such as the rough-skinned newt (Taricha granulosa) and Eastern newt (Notophthalmus viridescens) in North America or the smooth newt (Lissotriton vulgaris) in Europe, are still relatively common, populations of newts throughout their distribution range suffer from habitat loss, fragmentation, and pollution. This affects especially the aquatic breeding sites they depend on, but also their land habitats. Several species, such as the Edough ribbed newt (Pleurodeles poireti), Kaiser's spotted newt (Neurergus kaiseri), or the Montseny brook newt (Calotriton arnoldi) are considered threatened by the IUCN, and the Yunnan lake newt is an example of a newt species that has gone extinct recently. Some newt populations in Europe have decreased because of pollution or destruction of their breeding sites and terrestrial habitats, and countries such as the UK have taken steps to halt their declines. In the UK, they are protected under the Wildlife and Countryside Act 1981 and the Habitat Regulations Act 1994. It is illegal to catch, possess, or handle great crested newts without a licence, or to cause them harm or death, or to disturb their habitat in any way. The IUCN Red List categorises the species as ‘lower risk’ Although the other UK species, the smooth newt and palmate newt are not listed, the sale of either species is prohibited under the Wildlife and Countryside Act, 1981. In Europe, nine newts are listed as "strictly protected fauna species" under appendix II of the Convention on the Conservation of European Wildlife and Natural Habitats: Calotriton asper Euproctus montanus Euproctus platycephalus Lissotriton italicus Lissotriton montandoni Triturus carnifex Triturus cristatus Triturus dobrogicus Triturus karelinii The remaining European species are listed as "protected fauna species" under appendix III. As bioindicators Newts, as with salamanders in general and other amphibians, serve as bioindicators because of their thin, sensitive skin and evidence of their presence (or absence) can serve as an indicator of the health of the environment. Most species are highly sensitive to subtle changes in the pH level of the streams and lakes where they live. Because their skin is permeable to water, they absorb oxygen and other substances they need through their skin. Scientists study the stability of the amphibian population when studying the water quality of a particular body of water. As pets Chinese warty newts, Chinese fire belly newts, eastern newts, paddletail newts, Japanese fire belly newts, Chuxiong fire-bellied newts, Triturus species, emperor newts, Spanish ribbed newts (leucistic genes exist), and red-tailed knobby newts are some commonly seen newts in the pet trade. Some newts rarely seen in the pet trade are rough-skinned newts, Kaiser's spotted newts, banded newts and yellow-spotted newts.
Biology and health sciences
Amphibians
null
2481401
https://en.wikipedia.org/wiki/Tiangong%20space%20station
Tiangong space station
Tiangong, officially the Tiangong space station, is a permanently crewed space station constructed by China and operated by the China Manned Space Agency. Tiangong is of modular design, with modules docked together in low Earth orbit, between about 340 and 450 km above the surface. It is China's first long-term space station, part of the Tiangong program and the core of the "Third Step" of the China Manned Space Program; it has a pressurised volume of 340 m3 (12,000 cu ft), slightly over one third the size of the International Space Station. The space station aims to provide opportunities for space-based experiments and a platform for building capacity for scientific and technological innovation. The construction of the station is based on the experience gained from its precursors, Tiangong-1 and Tiangong-2. The first module, the Tianhe ("Harmony of the Heavens") core module, was launched on 29 April 2021. This was followed by multiple crewed and uncrewed missions and the addition of two laboratory cabin modules. The first, Wentian ("Quest for the Heavens"), launched on 24 July 2022; the second, Mengtian ("Dreaming of the Heavens"), launched on 31 October 2022.
Nomenclature
The names used in the space program, previously all chosen from the revolutionary history of the People's Republic, have been replaced with mystical-religious ones: thus the space capsule is the Divine Vessel, the spaceplane the Divine Dragon, the land-based high-power laser the Divine Light, and the supercomputer the Divine Might. These poetic names continue as the first, second, third, fourth, fifth and future probes of the Chinese Lunar Exploration Program are called Chang'e, after the Moon goddess. The name "Tiangong" means "heavenly palace". Across China, the launch of Tiangong-1 was reported to have inspired a variety of responses, including love poetry. The rendezvous of the space vehicles has been compared to the reunion of the cowherd and the weaver girl. Wang Wenbao, director of the China Manned Space Agency (CMSA), spoke about the naming of the programme at a news conference in 2011. On 31 October 2013, CMSA announced the new names for the whole space station program:
The precursor space labs would be called Tiangong, code TG. Tiangong-1 and Tiangong-2 were launched in 2011 and 2016, respectively.
The large modular space station would be called Tiangong as well, without a number.
The cargo transport spacecraft would be called Tianzhou, code TZ. The first Tianzhou mission successfully launched and deorbited in 2017. The first mission to the space station, Tianzhou 2, flew on 29 May 2021. Subsequently, Tianzhou 3, Tianzhou 4 and Tianzhou 5 were launched on 20 September 2021, 9 May 2022 and 12 November 2022, respectively.
The Modular Space Station Core Module would be called Tianhe, code TH. Tianhe was successfully launched on 29 April 2021.
The Modular Space Station Experiment Module I would be called Wentian, code WT. Wentian was successfully launched on 24 July 2022.
The Modular Space Station Experiment Module II would be called Mengtian, code MT. Mengtian was successfully launched on 31 October 2022.
The separate space telescope module would be called Xuntian, code XT (telescope), receiving the name previously intended for Experiment Module II. Launch is planned for 2026.
Purpose and mission According to CMSA, which operates the space station, the purpose and mission of Tiangong is to develop and gain experience in spacecraft rendezvous technology, permanent human operations in orbit, long-term autonomous spaceflight of the space station, regenerative life support technology and autonomous cargo and fuel supply technology. It will also serve the platform for the next-generation orbit transportation vehicles, scientific and practical applications at large-scale in orbit, and technology for future deep space exploration. CMSA also encourages commercial activities led by the private sector and hopes their involvement could bring cost-effective aerospace innovations. Space tourism at the space station is also considered. Scientific research The space station will have 23 experimental racks in an enclosed, pressurised environment. There will also be platforms for exposed experiments; 22 and 30 on the Wentian and Mengtian laboratory modules, respectively. Over 1,000 experiments are tentatively approved by CMSA, and scheduled to be conducted on the space station. Agriculture in microgravity was explored with cultivation of rice and Arabidopsis thaliana as sustainable food sources for long-term spaceflight. The programmed experiment equipment racks for the three modules as of June 2016 were: Space life sciences and biotechnology Ecology Science Experiment Rack (ESER) Biotechnology Experiment Rack (BER) Science Glove-box and Refrigerator Rack (SGRR) Microgravity fluid physics and combustion Fluids Physics Experiment Rack (FPER) Two-phase System Experiment Rack (TSER) Combustion Experiment Rack (CER) Material science in space Material Furnace Experiment Rack (MFER) Container-less Material Experiment Rack (CMER) Fundamental Physics in Microgravity Cold Atom Experiment Rack (CAER) High-precision Time-Frequency Rack (HTFR) Multipurpose Facilities High Micro-gravity Level Rack (HMGR) Varying-Gravity Experiment Rack (VGER) Modularized Experiment Rack (RACK) Education and cultural outreach The space station features space lectures and popular science experiments to educate, motivate and inspire the younger Chinese generation and world audience in science and technology. Each lecture is concluded with a question-and-answer session with school children's questions from classrooms across China. The first and second Tiangong space lesson was conducted in December 2021 and March 2022, as a part of the Shenzhou 13 mission. This tradition continued with the Shenzhou 14. The CSSARC is the Amateur Radio payload for the Chinese Space Station, proposed by the Chinese Radio Amateurs Club (CRAC), Aerospace System Engineering Research Institute of Shanghai (ASES) and Harbin Institute of Technology (HIT). The payload will provide resources for radio amateurs worldwide to contact onboard astronauts or communicate with each other, aim to inspire students to take interests and careers in science, technology, engineering, and math, and encourage more people to get interested in amateur radio. The first phase of the payload is capable of providing the following functions utilising the VHF/UHF amateur radio band: V/V or U/U crew voice V/U or U/V FM repeater V/V or U/U 1k2 AFSK digipeater V/V or U/U SSTV or digital image Structure The space station is a third-generation modular space station. First-generation space stations, such as early Salyut, Almaz, and Skylab, were single-piece stations and not designed for resupply. 
Second generation Salyut 6 and 7, and Tiangong 1 and 2 stations, are designed for mid-mission resupply. Third-generation stations, such as Mir and the International Space Station, are modular space stations, assembled in orbit from pieces launched separately. Modular design can greatly improve reliability, reduce costs, shorten development cycles, and meet diversified task requirements. Modules The initial target configuration for the end of 2022 consisted of three modules. Previous plans suggested expanding to six modules by duplicating the initial three, but as of 2023, planning has shifted to adding a single multi-functional module with six docking ports instead. In October 2023, China announced revised plans to expand the station to six modules starting in 2027. The Tianhe Core Cabin Module (CCM) provides life support and living quarters for three crew members and provides guidance, navigation, and orientation control for the station. The module also provides the station's power, propulsion, and life support systems. The module consists of three sections: living quarters, a service section, and a docking hub. The living quarters will contain a kitchen and toilet, fire control equipment, atmospheric processing and control equipment, computers, scientific apparatus, communications equipment to send and receive communications via ground control in Beijing, and other equipment. In 2018 a full-scale mockup of CCM was publicly presented at China International Aviation & Aerospace Exhibition in Zhuhai. The video from CMSA revealed that two of these core modules have been built. Artist impressions have also depicted the two core modules docked together to enlarge the overall station. The first of two Laboratory Cabin Modules (LCM), Wentian, provides additional avionics, propulsion, and life support systems as backup functions for the CCM. The Wentian is also fitted with an independent airlock cabin to serve as the main entry-exit point for extravehicular activities (EVA), replacing the Tianhe docking hub. For the scientific payload, the LCM is equipped with multiple internal science racks and 22 payload adapters on the exterior for various types of experiments. Aside from scientific equipment, the module features three additional living quarters designed for short-term stay, which will be used during crew rotation. Wentian was launched and docked with the Tianhe on 24 July 2022. The second LCM, Mengtian, was launched on 31 October 2022. The Mengtian module is equipped with expanded in-orbit experiment capacity. The module is divided into multiple sections, including the pressurised crew working compartment, the unpressurised cargo section, the cargo airlock/on-orbit release mechanism, as well as the control module section featuring external experiment adapters, a communication antenna, and two solar arrays. In total, it carries 13 experimental racks and 37 external payload adapters. The cargo airlock is specifically designed for conveying payloads from inside the station to the exterior. Both LCMs provide a pressurised environment for researchers to conduct science experiments in freefall or microgravity which could not be conducted on Earth for more than a few minutes. Experiments can also be placed on the outside of the modules for exposure to the space environment, cosmic rays, vacuum, and solar winds. Overall, Wentian prioritises life science, while the Mengtian focus on microgravity experiments. 
The axial port of the LCMs is fitted with rendezvous equipment for docking at the axial port of the CCM. A mechanical arm called the indexing robotic arm, externally resembling the Lyappa arm used on the Mir space station, moved the Wentian LCM to the starboard side, and the Mengtian LCM to a port-side port of the CCM. The indexing robotic arms differ from the Lyappa arm in that they are used when a module must be re-docked within the same plane, whereas the Lyappa arm controls the pitch of the spacecraft to re-dock it in a different plane. The Chinarm on the Tianhe module can be used as a backup for docking relocation.
Systems
Communication
Real-time communications, including live audio and video links, are provided by the Tianlian II series of data relay satellites. A constellation of three satellites was launched into geostationary orbit, providing communication and data support for the station.
Docking
Tiangong is fitted with the Chinese Docking Mechanism used by Shenzhou spacecraft and the previous Tiangong prototypes. The Chinese docking mechanism is based on the Russian APAS-89/APAS-95 system. Despite NASA describing it as a "clone" of APAS, there have been contradictory claims about the compatibility of the Chinese system with both current and future docking mechanisms on the ISS, which are also based on APAS. It has a circular transfer passage about 800 mm in diameter. The androgynous variant has a mass of 310 kg and the non-androgynous variant a mass of 200 kg. The Chinese Docking Mechanism was first used when Shenzhou 8 docked with the Tiangong-1 space laboratory, and it will be used on future Chinese space stations and with future CMSA cargo resupply vehicles.
Power supply
Electrical power is provided by two steerable solar power arrays on each module, which use gallium arsenide photovoltaic cells to convert sunlight into electricity. Energy is stored to power the station when it passes into the Earth's shadow. Resupply spacecraft replenish fuel for the station's propulsion engines for station keeping, to counter the effects of atmospheric drag. The solar arrays are designed to last up to 15 years.
Propulsion
The Tiangong space station is fitted with conventional chemical propulsion and ion thrusters to adjust and maintain the station's orbit. Four Hall-effect thrusters are mounted on the hull of the Tianhe core module. The development of the Hall-effect thrusters is considered a sensitive topic in China, with scientists "working to improve the technology without attracting attention". The Hall-effect thrusters were designed with crewed-mission safety in mind, with efforts to prevent erosion and damage caused by the accelerated ion particles. A magnetic field and a specially designed ceramic shield were created to repel damaging particles and maintain the integrity of the thrusters. According to a report by the Chinese Academy of Sciences, the ion drive used on Tiangong ran continuously for 8,240 hours without a glitch during the testing phase, indicating its suitability for Tiangong's designated 15-year lifespan. These are the world's first Hall thrusters to be used on a human-rated mission.
Robotic arms
The Tiangong station features five robotic arms. The longest is the 10-meter-long Chinarm, mounted on the Tianhe core module and similar in style to the Canadian-built SSRMS on the ISS. The Wentian module features a smaller, 5-meter-long robotic arm that is five times more accurate in positioning than the Chinarm.
The Wentian arm is primarily used to transfer extravehicular experiments and other hardware outside the station during astronaut EVAs. A dual-arm connector is installed on the Chinarm, providing it the capability to link with the Wentian robotic arm, extending its reach and weight-carrying limits. The Mengtian module carries a payload release mechanism, installed to assist in cargo transfer. The robotic arm can retrieve experiments from the cargo airlock, then install them onto the external adapters fitted on the module exterior. It can also be used to launch microsatellites. Two Indexing robotic arms, developed by the Shanghai Academy of Spaceflight Technology, are fitted on top of docking ports for the two laboratory modules to help relocate them during construction. Co-orbit modules Construction Planning In 2011, it was announced that the future space station was planned to be assembled from 2020 to 2022. By 2013, the space station's core module was planned to be launched earlier, in 2018, followed by the first laboratory module in 2020, and a second in 2022. By 2018, it was reported that this had slipped to 2020–2023. In February 2020, a total of 11 launches were planned for the whole construction phase, beginning in 2021. In 2021, it was reported China National Space Administration planned to complete the construction of the space station in 2022. Tiangong modules are self-contained and pre-assembled, in contrast to the US Orbital Segment of the ISS, which required spacewalking to interconnect cables, piping, and structural elements manually. The assembly method of the station can be compared with the Soviet-Russian Mir space station and the Russian orbital segment of the International Space Station, making China the second nation to develop and use automatic rendezvous and docking for modular space station construction. The technologies in the construction are derived from decades of Chinese crewed spaceflight experiences, including those gained from Tiangong-1 and Tiangong-2 prototypes, as well as the purchase of aerospace technology from Russia in the early 1990s. A representative of the Chinese crewed space program stated that around 2000, China and Russia were engaged in technological exchanges regarding the development of a docking mechanism used for space stations. Deputy Chief Designer, Huang Weifen, stated that near the end of 2009, China Manned Space Agency (CMSA) began to train astronauts on how to dock spacecraft. In accordance to the plan, by the end of 2022, the fully assembled Tiangong space station had three 22 metric-ton modules in a basic T-shape. Because of the modular design, the Tiangong space station can be further expanded into six modules possibly enabling more astronaut participation in the future. Assembly The construction of the Chinese Space Station officially began in April 2021. The planned 11 missions include three module launches, four crewed missions, and four autonomous cargo flights. On 29 April 2021, the first component of the station, Tianhe core module, was launched to the orbit aboard the Long March 5B rocket from Wenchang Spacecraft Launch Site. On 29 May 2021, Tianzhou 2 autonomous cargo spacecraft was launched to the Tianhe core module in preparation for the Shenzhou 12 crew, who will be responsible for testing Tianhes various systems and preparing for future operations. On 17 June 2021, Shenzhou 12 team docked with the space station, marking them the first visitors to the Tiangong station. 
The first crewed mission began the examination of the core module and the verification of key technologies. On 4 July 2021, Liu Boming and Tang Hongbo began their first spacewalk in upgraded Chinese Feitian spacesuits, outfitting the space station with extravehicular activity (EVA) equipment, such as foot restraints and the standing platform for the Chinarm. Shenzhou 12 commander Nie Haisheng stayed inside the station and tested the robotic arm movements. Liu Boming and Nie Haisheng completed the second spacewalk on 20 August 2021 and installed various devices outside the station, including a thermal control system, a panoramic camera, and other equipment. On 16 September 2021, the Shenzhou 12 crew entered their return spacecraft and undocked from Tianhe. Before leaving orbit, the crew performed various radial rendezvous (R-Bar) maneuvers to fly around the space station. They tested the guidance system and recorded lighting conditions while approaching the Tianhe from different angles. The crew landed in the Gobi Desert of Inner Mongolia on the same day. The Tianzhou 3 cargo spacecraft, which had arrived at the launch facility a month earlier, was immediately rolled out onto the launch pad for the next supply mission. On 20 September 2021, the Tianzhou 3 autonomous freighter was launched from the Wenchang Spacecraft Launch Site in preparation for the arrival of the Shenzhou 13 crew. Shenzhou 13 was the first six-month mission to the Tiangong station, whereas the preceding Shenzhou 12 mission had lasted only three months. Shenzhou 13 docked with the space station on 15 October 2021. Tasks for the Shenzhou 13 crew included in-orbit experiments, spacewalks, and preparations for the station's future expansion. On 7 November 2021, Shenzhou 13 crew members Zhai Zhigang and Wang Yaping conducted the mission's first spacewalk to test the next-generation EVA suit and the robotic Chinarm, making Wang Yaping China's first female spacewalker. One of the tasks of the 6.5-hour extravehicular activity was to install a dual-arm connector on the 10-meter-long robotic arm. The connector gives the Chinarm the capability to extend in length with the 5-meter-long segment mounted on the Wentian module, which would arrive in 2022. According to Gao Shen of the China Academy of Space Technology (CAST), the combined 15-meter Chinarm will have greater range and weight-carrying capacity. During the spacewalks, various preparations were performed on the robotic arm for the manipulation and construction of future modules. On 26 December 2021, Shenzhou 13 crew members Zhai Zhigang and Ye Guangfu conducted the second spacewalk to install a panoramic camera, which will be used for space station monitoring and robotic arm observation. They also practiced various movements with the help of the Chinarm, controlled by Wang Yaping, who monitored from inside the station. During the construction phase of the station in 2021, according to documents filed by the China Manned Space Agency (CMSA) with the United Nations Office for Outer Space Affairs and reported by Reuters, the station had two "close encounters" with SpaceX's Starlink satellites, on 1 July and 21 October, with the station conducting evasive adjustment maneuvers. On 5 January 2022, the Shenzhou 13 crew used the 10-meter-long Chinarm to relocate the Tianzhou 2 supply ship by 20 degrees before returning it to its original location. This maneuver was conducted to practice the procedures, equipment, and backup operation system needed for future module assembly.
On 13 January, the crew tested the emergency docking system by controlling the cargo spacecraft manually. In March 2022, the Shenzhou 13 crew began preparations to undock from the space station. The crew landed in China on 16 April 2022, after 182 days in low Earth orbit. Soon afterward, China launched the Tianzhou 4 cargo spacecraft in preparation for the next crewed mission in May. The automated freighter docked with the space station on 9 May 2022, carrying vital maintenance equipment and a refrigerator for scientific experiments. Beginning with Shenzhou 14, China officially started the final construction phase of the space station, with three astronauts tasked with overseeing the arrival of two laboratory modules in 2022. On 5 June 2022, the Shenzhou 14 crew arrived at the space station, docking at the Earth-facing nadir port. The crew was to oversee the assembly of both the Wentian and Mengtian modules, which would arrive in the second half of the year. The crew installed a carbon dioxide reduction system for the space station, tested the Feitian spacesuits, and debugged the Tianhe core module. On 19 July 2022, Tianzhou 3 undocked from the station, making way for the arrival of the Wentian module. On 24 July 2022, the Wentian laboratory module was launched from the Wenchang space center and rendezvoused with the Tianhe core module on the same day. Wentian is the second module of the Tiangong space station and the first laboratory cabin module (LCM). The module is equipped with an airlock cabin, which would become the primary entry-exit point for future EVAs. The module also features backup avionics, propulsion, and life support systems, improving the Tiangong space station's operational redundancy. On 2 September 2022, crew members Chen Dong and Liu Yang performed their first spacewalk from the new Wentian airlock, installing and adjusting various external equipment as well as testing emergency return procedures. On 17 September 2022, astronauts Chen Dong and Cai Xuzhe performed the second spacewalk, installing external pumps and verifying emergency rescue capability. On 30 September 2022 at 04:44 UTC, all crew members worked in coordination to move the Wentian module from the forward port to the starboard lateral docking port, its planned permanent location. The relocation process was largely automated, with the assistance of the indexing robotic arm. In October 2022, CMSA prepared to launch the third and final module, Mengtian, to complete the construction of the Tiangong space station. On 31 October 2022, the Mengtian module was launched from the Wenchang space center and docked with the station 13 hours later. The assembly of Mengtian marked the final step in the 1.5-year construction process. According to the China Academy of Space Technology, the rendezvous and docking process for Mengtian was conducted expeditiously, as the then L-shaped Tiangong station consumed a large amount of energy to stay oriented in its asymmetrical arrangement. On 3 November 2022, Mengtian was relocated autonomously from the forward docking port to the port-side lateral docking port by the indexing robotic arm and was berthed at its planned permanent location on the Tianhe module at 01:32 UTC (09:32 BJT), forming a T-shape. Subsequently, CMSA announced that the construction of the Tiangong space station was officially complete. Li Guangxing, a designer of the Mengtian module, explained that the space station was maneuvered to a special position, utilizing the Earth's gravity to help stabilize the docking process.
At 07:12 UTC, the Shenzhou 14 crew entered the Mengtian module. On 10 November 2022, the Tianzhou 4 cargo spacecraft undocked from Tiangong, and Tianzhou 5 was prepared for launch on the same day. Tianzhou 5 was launched on 12 November 2022, carrying supplies, experiments, and microsatellites to the space station. It also contained gifts for China's first crew handover ceremony in orbit. The completed station had extra capacity for expanded crew activities and living space for six, allowing crew rotation. On 29 November 2022, the Shenzhou 15 crew of Fei Junlong, Deng Qingming, and Zhang Lu was launched to the space station. The two crews spent one week together for handover and for verification of sustained six-person operations. With this crew rotation, China commenced its permanent crewed presence in space. On 17 December 2024, Cai Xuzhe and Song Lingdong set the record for the longest spacewalk in human history, at 9 hours and 6 minutes; with the assistance of the space station's robotic arms and ground-based scientific personnel, they completed tasks such as the installation of space debris protection devices and the inspection and maintenance of external equipment and facilities.
Expansion
According to CMSA, the Tiangong space station is expected to be expanded from three to six modules, with improved versions of the Tianhe, Wentian, and Mengtian modules. According to Wang Xiang, commander of the space station system at the China Academy of Space Technology (CAST), the potential next phase would be adding a new core module. "Following our current design, we can continue to launch an extension module to dock with the forward section of the space station, and the extension module can carry a new hub for docking with the subsequent space vehicles," Wang told CCTV. In October 2023, CAST presented a new plan at the 74th International Astronautical Congress to expand Tiangong to a 180-ton, six-module assembly with at least 15 years of operational life. A multi-functional module with six docking ports was planned as the foundation for the expansion. New sections would include 3D printers, robots, improved robotic arms, and space debris observation, detection, and warning systems. The Xuntian space telescope module is planned to launch in 2026.
International co-operation
China's incentive to build its own space station was amplified after the US Congress in 2011 prohibited NASA from any direct engagement and cooperation with CNSA, effectively barring any Chinese participation in the International Space Station (ISS), although China, Russia and Europe mutually vowed to maintain a cooperative and multilateral approach in space. Between 2007 and 2011, the space agencies of Russia, Europe, and China carried out ground-based preparations in the Mars500 project, which complemented the ISS-based preparations for a human mission to Mars. Tiangong has involved cooperation with France, Sweden, and Russia. Cooperation in the field of crewed spaceflight between the China Manned Space Agency (CMSA, formerly known as CMSEO) and the Italian Space Agency (ASI) was examined in 2011; participation in the development of China's crewed space stations and cooperation in fields such as visiting astronauts and scientific research were discussed. In November 2011, the China National Space Administration and the Italian Space Agency signed an initial cooperative agreement, covering areas of collaboration within space transportation, telecommunications, Earth observation, and so on.
On 22 February 2017, the CMSA and the Italian Space Agency (ASI) signed an agreement to cooperate on long-term human spaceflight activities. The agreement holds importance due to Italy's leading position in the field of human spaceflight with regards to the creation and exploitation of the International Space Station (Node 2, Node 3, Columbus, Cupola, Leonardo, Raffaello, Donatello, PMM, etc.) and it signified Italy's increased anticipation in China's developing space station programme. The European Space Agency (ESA) started human spaceflight training with CMSA in 2017, with the ultimate goal of sending ESA astronauts to Tiangong. To prepare for the future missions, selected ESA astronauts lived together with their Chinese counterparts and engaged in training sessions such as splashes-down survival, language learning, and spacecraft operations. However, in January 2023, ESA announced that the agency will not send its astronauts to China's space station due to political and financial reasons. In 2019, an Italian experiment High Energy cosmic-Radiation Detection (HERD) was scheduled on board the Chinese station. In 2019, international experiments were selected by the CMSA and the United Nations Office for Outer Space Affairs (UNOOSA) in a UN session. 42 applications were submitted, and 9 experiments were accepted. Some of the experiments are a continuation to the ones on Tiangong-2 such as POLAR-2, an experiment of researching Gamma-ray burst polarimetry, proposed by Switzerland, Poland, Germany, and China. Canadian Professor Dr. Tricia Larose of the University of Oslo has been developing a cutting-edge cancer research experiment for the station. The 31-day experiment is to research whether weightlessness has a positive effect in stopping cancer growth. The High Energy Cosmic Ray Detector project is conducted by a 200 scientists team from Europe, mainland China, Hong Kong, and Taiwan. Under UNOOSA framework, Tiangong is also expected to host experiments from Belgium, France, Germany, India, Italy, Japan, Mexico, the Netherlands, Peru, Russia, Saudi Arabia, and Spain, involving 23 institutions and 17 countries. Regarding the participation of foreign astronauts, CMSA has repeatedly communicated its support for such proposals. During the press conference of the Shenzhou 12 mission, Zhou Jianping, the chief designer of China Manned Space Program, explained that multiple countries had expressed their wishes to participate. He told journalists that the future participation of foreign astronauts "will be guaranteed". Ji Qiming, an assistant director at CMSA, told reporters that he believes: In October 2022, the station opened its selection process to Hong Kong and Macau, the two special administrative regions of China. Life aboard Crew activities Astronauts on the Tiangong station follow China Standard Time (CST) for their daily schedule. The crew often wakes up around 7:00 and begins their daily conference with Mission Control in Beijing before starting work at 08:00 (00:00UTC). The crew will then follow their planned schedule until 21:00, after which they report their work process to Mission Control. At 13:30, astronauts enter their living quarters to take a nap, which typically takes an hour. The crew also has multiple breaks for eating and resting. The Tiangong station features a lighting scene function to simulate lighting conditions on Earth, including daylight, dusk, and night. 
As the station experiences 16 sunrises and sunsets per day in low Earth orbit, this function helps to avoid disruption to the crew's circadian rhythm. The Tiangong space station is fitted with home automation functions, including remote-controlled appliances and a logistics management system. The crew can use their tablet computers to identify, locate, and organize items inside the station, as all items in the station are marked by QR codes. This will help ensure an orderly environment as more cargo arrives. Inter-device communication inside the station is completely wireless via the Wi-Fi network to avoid cord mess. Food and personal hygiene Meals consisting of 120 different types of food, selected based on astronauts' preferences, are stored aboard. Staples including shredded pork in garlic sauce, kung pao chicken, black pepper beef, yuxiang shredded pork, pickled cabbage, and beverages, including a variety of teas and juices, are resupplied by trips of the Tianzhou-class robotic cargo spacecraft. Fresh fruit and vegetables are stored in coolers. Huang Weifen, the chief astronaut trainer of CMSA, explains that most of the food is prepared to be solid, boneless, and in small pieces. Condiments such as pork sauce and Sichuan pepper sauce are used to compensate for the changes in the sense of taste in microgravity. The station is equipped with a small kitchen table for food preparation, a refrigerator, a water dispenser, and the first-ever microwave oven in spaceflight so that astronauts can "always have hot food whenever they need." Following the astronauts' feedback, larger supplies of vegetables have been included since Tianzhou 4, increasing the variety of vegetable to 32. The station's core module, Tianhe, provides the living quarters for the crew members, consisting of three separate sleeping berths, a space toilet, shower facility, and gym equipment. Each berth features one small circular window, a headphone set, ventilation, and other amenities. Neuromuscular electrical stimulation is used to prevent muscle atrophy. The noise level in the working area is set at 58 decibels, while in the sleeping area, the noise is kept at 49 decibels. The ventilation system provides air circulation to the crew, with 0.08 m/s wind speed for the working areas and 0.05 m/s for the sleeping stations. Three additional living quarters for short-term stay are located in the Wentian laboratory module. Operations Since 5 June 2022, Tiangong has been a permanently crewed station, typically staffed with a crew of three but capable of supporting up to six people. After the completion of the station in November 2022, it housed a crew of 6 for the first time for 5 days during the crew rotation from Shenzhou 14 to Shenzhou 15 in December 2022. Operations are controlled from the Beijing Aerospace Flight Control Center in China. To guarantee the safety of astronauts on board, a Long March 2F/G with a Shenzhou spacecraft will always be on standby for an emergency rescue mission. Crewed missions The first crewed mission to Tiangong, Shenzhou 12, lasted 90 days. Starting with Shenzhou 13, subsequent missions have had a normal duration of about 180 days. CMSA has announced the testing of the Mengzhou spacecraft to eventually replace Shenzhou. It is designed to carry astronauts to Tiangong and offer the capability for lunar exploration. China's next-generation crew carrier is reusable with a detachable heat shield built to handle higher-temperature returns through Earth's atmosphere. 
According to CMSA officials, the new capsule design is larger than the Shenzhou. Mengzhou is capable of carrying astronauts to the Moon and can accommodate six to seven crew members at a time, compared with three on Shenzhou. The new crewed spacecraft has a cargo section that allows astronauts to bring cargo back to Earth, whereas the Tianzhou cargo resupply spacecraft is not designed to bring any cargo back to Earth.
Cargo resupply
Tianzhou ("Heavenly Vessel"), a modified derivative of the Tiangong-1 spacecraft, is used as a robotic cargo spacecraft to resupply the station. The launch mass of Tianzhou is around 13,000 kg, with a payload of around 6,000 kg. Launch, rendezvous, and docking are fully autonomous, with mission control and the crew used in override or monitoring roles.
List of missions
All dates are UTC. Dates are the earliest possible dates and may change. Forward ports are at the front of the station according to its normal direction of travel and orientation (attitude). Aft is at the rear of the station, used by spacecraft to boost the station's orbit. Nadir is closest to the Earth, zenith is on top. Port is to the left if pointing one's feet towards the Earth and looking in the direction of travel; starboard is to the right.
End of mission
Tiangong is designed to be used for 10 years, though this could be extended to 15 years, and it will accommodate three astronauts. CMSA crewed spacecraft use deorbital burns to slow their velocity, resulting in their re-entry into the Earth's atmosphere. Vehicles carrying a crew have a heat shield, which prevents the vehicle's destruction by aerodynamic heating upon contact with the Earth's atmosphere. The station itself has no heat shield; however, small parts of space stations can reach the surface of the Earth, so uninhabited areas will be targeted for de-orbit manoeuvres.
Visibility
Similar to the ISS, the Tiangong space station can be seen from Earth with the naked eye thanks to sunlight reflected off the modules and solar panels. It is visible for a few hours after sunset and before sunrise, reaching an apparent magnitude of at least −2.2.
In popular culture
A predecessor, the Tiangong-1 space laboratory, and the International Space Station are subjects in the 2013 feature film Gravity. Near the end of the Netflix original animated film Over the Moon (2020), a red dragon is depicted playing with the Tiangong space station.
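The station's visibility windows around sunrise and sunset, and the roughly 16 day-night cycles per day mentioned under Life aboard, follow directly from its low Earth orbit. The short Python sketch below is purely illustrative and not part of the article: the assumed altitude of roughly 390 km and the physical constants are editorial assumptions used only to show the arithmetic.

import math

# Illustrative sketch only (not from the article): estimate the orbital period of a
# station in low Earth orbit and the resulting number of orbits per day, which sets
# the number of sunrise/sunset cycles the crew experiences.
MU_EARTH = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6371e3            # mean Earth radius, m
ALTITUDE = 390e3            # assumed orbital altitude, m (roughly 390 km)

semi_major_axis = R_EARTH + ALTITUDE
# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)
period_s = 2 * math.pi * math.sqrt(semi_major_axis ** 3 / MU_EARTH)
orbits_per_day = 86400 / period_s

print(f"Orbital period: {period_s / 60:.1f} minutes")   # about 92 minutes
print(f"Orbits per day: {orbits_per_day:.1f}")          # about 15.6, i.e. roughly 16 day-night cycles

Under these assumptions the period comes out near 92 minutes, consistent with the roughly 16 sunrises and sunsets per day noted above.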
Technology
Crewed vehicles
null
1185479
https://en.wikipedia.org/wiki/Nerium
Nerium
Nerium oleander, commonly known as oleander or rosebay, is a shrub or small tree cultivated worldwide in temperate and subtropical areas as an ornamental and landscaping plant. It is the only species currently classified in the genus Nerium, belonging to subfamily Apocynoideae of the dogbane family Apocynaceae. It is so widely cultivated that no precise region of origin has been identified, though it is usually associated with the Mediterranean Basin. Nerium grows to about 2–6 m tall. It is most commonly grown in its natural shrub form, but can be trained into a small tree with a single trunk. It is tolerant of both drought and inundation, but not of prolonged frost. White, pink or red five-lobed flowers grow in clusters year-round, peaking during the summer. The fruit is a long narrow pair of follicles, which split open at maturity to release numerous downy seeds. Nerium is a poisonous plant, but its bitterness renders it unpalatable to humans and most animals, so poisoning cases are rare and the general risk for human mortality is low. Ingestion of larger amounts may cause nausea, vomiting, excess salivation, abdominal pain, bloody diarrhea and irregular heart rhythm. Prolonged contact with the sap may cause skin irritation, eye inflammation and dermatitis.
Description
Oleander grows to about 2–6 m tall, with erect stems that splay outward as they mature; first-year stems have a glaucous bloom, while mature stems have a grayish bark. The leaves are in pairs or whorls of three; they are thick and leathery, dark green, and narrowly lanceolate, with an entire margin and the minute reticulate venation typical of eudicots. The leaves are light green and very glossy when young, maturing to a dull dark green. The flowers grow in clusters at the end of each branch; they are white, or pink to red, with a deeply 5-lobed, fringed corolla around the central corolla tube. They are often, but not always, sweet-scented. The fruit is a long narrow pair of follicles, which split open at maturity to release numerous downy seeds.
Taxonomy
Nerium oleander is the only species currently classified in the genus Nerium. It belongs to (and gives its name to) the small tribe Nerieae of subfamily Apocynoideae of the dogbane family Apocynaceae. The most closely related genera thus include the equally ornamental (and equally toxic) Adenium G.Don and Strophanthus DC., both of which contain (like oleander) potent cardiac glycosides that have led to their use as arrow poisons in Africa. The three remaining genera, Alafia Thouars, Farquharia Stapf and Isonema R.Br., are less well known in cultivation.
Synonymy
The plant has been described under a wide variety of names that are today considered its synonyms:
Oleander Medik.
Nerion Tourn. ex St.-Lag.
Nerion oleandrum St.-Lag.
Nerium carneum Dum.Cours.
Nerium flavescens Spin
Nerium floridum Salisb.
Nerium grandiflorum Desf.
Nerium indicum Mill.
Nerium japonicum Gentil
Nerium kotschyi Boiss.
Nerium latifolium Mill.
Nerium lauriforme Lam.
Nerium luteum Nois. ex Steud.
Nerium madonii M.Vincent
Nerium mascatense A.DC.
Nerium odoratissimum Wender.
Nerium odoratum Lam.
Nerium odorum Aiton
Nerium splendens Paxton
Nerium thyrsiflorum Paxton
Nerium verecundum Salisb.
Oleander indica (Mill.) Medik.
Oleander vulgaris Medik.
Etymology
The taxonomic name Nerium oleander was first assigned by Linnaeus in 1753.
The genus name Nerium is the Latinized form of the Ancient Greek name for the plant nẽrion (νήριον), which is in turn derived from the Greek for water, nẽros (νηρός), because of the natural habitat of the oleander along rivers and streams. The origins of the species name are disputed. The word oleander appears as far back as the first century AD, when the Greek physician Pedanius Dioscorides cited it as one of the terms used by the Romans for the plant. Merriam-Webster believes the word is a Medieval Latin corruption of Late Latin names for the plant: arodandrum or lorandrum, or more plausibly rhododendron (another Ancient Greek name for the plant), with the addition of olea because of the superficial resemblance to the olive tree (Olea europea) Another theory posited is that oleander is the Latinized form of a Greek compound noun: οllyo (ὀλλύω) 'I kill', and the Greek noun for man, aner, genitive andros (ἀνήρ, ἀνδρός). ascribed to oleander's toxicity to humans. The etymological association of oleander with the bay laurel has continued into the modern day: in France the plant is known as "laurier rose", while the Spanish term, "Adelfa", is the descendant of the original Ancient Greek name for both the bay laurel and the oleander, daphne, which subsequently passed into Arabic usage and thence to Spain. The ancient city of Volubilis in Morocco may have taken its name from the Berber name alili or oualilt for the flower. Distribution and habitat Nerium oleander is either native or naturalized to a broad area spanning from Northwest Africa and Iberian and Italian Peninsula eastward through the Mediterranean region and warmer areas of the Black Sea region, Arabian Peninsula, southern Asia, and as far east as Yunnan in southern parts of China. It typically occurs around stream beds in river valleys, where it can alternatively tolerate long seasons of drought and inundation from winter rains. N. oleander is planted in many subtropical and tropical areas of the world. On the East Coast of the US, it grows as far north as Virginia Beach, while in California and Texas miles of oleander shrubs are planted on median strips. There are estimated to be 25 million oleanders planted along highways and roadsides throughout the state of California. Because of its durability, oleander was planted prolifically on Galveston Island in Texas after the disastrous Hurricane of 1900. They are so prolific that Galveston is known as the 'Oleander City'; an annual oleander festival is hosted every spring. Moody Gardens in Galveston hosts the propagation program for the International Oleander Society, which promotes the cultivation of oleanders. New varieties are hybridized and grown on the Moody Gardens grounds, encompassing every named variety. Beyond the traditional Mediterranean and subtropical range of oleander, the plant can also be cultivated in mild oceanic climates with the appropriate precautions. It is grown without protection in warmer areas in Switzerland, southern and western Germany and southern England and can reach great sizes in London and to a lesser extent in Paris due to the urban heat island effect. This is also the case with North American cities in the Pacific Northwest like Portland, Seattle, and Vancouver. Plants may suffer damage or die back in such marginal climates during severe winter cold but will rebound from the roots. Ecology Some invertebrates are known to be unaffected by oleander toxins, and feed on the plants. 
Caterpillars of the polka-dot wasp moth (Syntomeida epilais) feed specifically on oleanders and survive by eating only the pulp surrounding the leaf veins, avoiding the fibers. Larvae of the common crow butterfly (Euploea core) and the oleander hawk-moth (Daphnis nerii) also feed on oleanders, and they retain or modify the toxins, making them unpalatable to potential predators such as birds, but not to other invertebrates such as spiders and wasps. The flowers require insect visits to set seed, and seem to be pollinated through a deception mechanism. The showy corolla acts as a potent advertisement to attract pollinators from a distance, but the flowers are nectarless and offer no reward to their visitors. They therefore receive very few visits, as is typical of many rewardless flower species. Fears of honey contamination with toxic oleander nectar are therefore unsubstantiated.
Leaf scorch
A bacterial disease known as oleander leaf scorch, caused by Xylella fastidiosa subspecies sandyi, has become a serious threat to the shrub since it was first noticed in Palm Springs, California, in 1992. The disease has since devastated hundreds of thousands of shrubs, mainly in Southern California, but also on a smaller scale in Arizona, Nevada and Texas. The bacterium is spread by insects (primarily the glassy-winged sharpshooter) that feed on the tissue of oleanders. The infection inhibits the circulation of water in the tissue of the plant, causing individual branches to die until the entire plant is consumed. Symptoms of leaf scorch infection may be slow to manifest themselves, but the disease becomes evident when parts of otherwise healthy oleanders begin to yellow and wither, as if scorched by heat or fire. Die-back may cease during winter dormancy, but the disease flares up in summer heat while the shrub is actively growing, which allows the bacteria to spread through the xylem of the plant. As such, it can be difficult to identify at first, because gardeners may mistake the symptoms for those of drought stress or nutrient deficiency. Pruning out affected parts can slow the progression of the disease but not eliminate it. The disease can continue for several years until the plant dies completely; there is no known cure. The best method for preventing further spread is to prune infected oleanders to the ground immediately after the infection is noticed. The responsible pathogen was identified as the subspecies sandyi by Purcell et al., 1999.
Cultivation
History
Nerium oleander has a history of cultivation going back millennia, especially amongst the great ancient civilizations of the Mediterranean Basin. Some scholars believe it to be the rhodon (rose), also called the 'Rose of Jericho', mentioned in apocryphal writings (Ecclesiasticus XXIV, 13) dating back to between 450 and 180 BC. The ancient Greeks had several names for the plant, including rhododaphne, nerion, rhododendron and rhodon. Pliny confirmed that the Romans had no Latin word for the plant, but used the Greek terms instead. Pedanius Dioscorides states in his 1st-century AD pharmacopeia De Materia Medica that the Romans used the Greek rhododendron but also the Latin Oleander and Laurorosa. The Egyptians apparently called it scinphe, the North Africans rhodedaphane, and the Lucanians (a southern Italic people) icmane. Both Pliny and Dioscorides stated that oleander was an effective antidote to venomous snake bites if mixed with rue and drunk.
However, both rue and oleander are poisonous themselves, and consuming them after a venomous snake bite can accelerate the rate of mortality and increase fatalities. A 2014 article in the medical journal Perspectives in Biology and Medicine posited that oleander was the substance used to induce hallucinations in the Pythia, the female priestess of Apollo, also known as the Oracle of Delphi in Ancient Greece. According to this theory, the symptoms of the Pythia's trances (enthusiasmos) correspond to either inhaling the smoke of or chewing small amounts of oleander leaves, often called by the generic term laurel in Ancient Greece, which led to confusion with the bay laurel that ancient authors cite. In his book Enquiries into Plants of circa 300 BC, Theophrastus described (among plants that affect the mind) a shrub he called onotheras, which modern editors render oleander: "the root of onotheras [oleander] administered in wine", he alleges, has a beneficial effect on mood: The root of onotheras [oleander] administered in wine makes the temper gentler and more cheerful. The plant has a leaf like that of the almond, but smaller, and the flower is red like a rose. The plant itself (which loves hilly country) forms a large bush; the root is red and large, and, if this is dried, it gives off a fragrance like wine. In another mention, of "wild bay" (Daphne agria), Theophrastus appears to intend the same shrub. Oleander was a very popular ornamental shrub in Roman peristyle gardens; it is one of the flora most frequently depicted on murals in Pompeii and elsewhere in Italy. These murals include the famous garden scene from the House of Livia at Prima Porta outside Rome, and those from the House of the Wedding of Alexander and the Marine Venus in Pompeii. Carbonized fragments of oleander wood have been identified at the Villa Poppaea in Oplontis, likewise buried by the eruption of Mount Vesuvius in 79 AD. They were found to have been planted in a decorative arrangement with citron trees (Citrus medica) alongside the villa's swimming pool. Herbaria of oleander varieties are compiled and held at the Smithsonian Institution in Washington, D.C., and at Moody Gardens in Galveston, Texas. Ornamental gardening Oleander is a vigorous grower in warm subtropical regions, where it is extensively used as an ornamental plant in parks, along roadsides and in private gardens. It is most commonly grown in its natural shrub form, but can be trained into a small tree with a single trunk. Hardy versions like white, red and pink oleander will tolerate occasional light frost down to , though the leaves may be damaged. The toxicity of oleander renders it deer-resistant and its large size makes for a good windbreak – as such it is frequently planted as a hedge along property lines and in agricultural settings. The plant is tolerant of poor soils, intense heat, salt spray, and sustained drought – although it will flower and grow more vigorously with regular water. Although it does not require pruning to thrive and bloom, oleander can become unruly with age and older branches tend to become gangly, with new growth emerging from the base. For this reason gardeners are advised to prune mature shrubs in the autumn to shape and induce lush new growth and flowering for the following spring. Unless they wish to harvest the seeds, many gardeners choose to prune away the seedpods that form on spent flower clusters, which are a drain on energy. 
Propagation can be made from cuttings, where they can readily root after being placed in water or in rich organic potting material, like compost. In Mediterranean climates oleanders can be expected to bloom from April through October, with the heaviest bloom usually occurring between May and June. Free-flowering varieties like 'Petite Salmon' or 'Mont Blanc' require no period of rest and can flower continuously throughout the year if the weather remains warm. In cold winter climates, oleander is a popular summer potted plant readily available at most nurseries. They require frequent heavy watering and fertilizing as compared to being planted in the ground, but oleander is nonetheless an ideal flowering shrub for patios and other spaces with hot sunshine. During the winter they should be moved indoors, ideally into an unheated greenhouse or basement where they can be allowed to go dormant. Once they are dormant they require little light and only occasional watering. Placing them in a space with central heating and poor air flow can make them susceptible to a variety of pests – aphids, mealybugs, oleander scale, whitefly and spider mites. Colors and varieties Oleander flowers are showy, profuse, and often fragrant, which makes them very attractive in many contexts. Over 400 cultivars have been named, with several additional flower colors not found in wild plants having been selected, including yellow, peach and salmon. Many cultivars, like 'Hawaii' or 'Turner's Carnival', are multi-colored, with brilliant striped corollas. The solid whites, reds and a variety of pinks are the most common. Double flowered cultivars like 'Mrs. Isadore Dyer' (deep pink), 'Mathilde Ferrier' (yellow) or 'Mont Blanc' (white) are enjoyed for their large, rose-like blooms and strong fragrance. There is also a variegated form, 'Variegata', featuring leaves striped in yellow and white. Several dwarf cultivars have also been developed, offering a more compact form and size for small spaces. These include 'Little Red', 'Petite White', 'Petite Pink' and 'Petite Salmon', which grow to about at maturity. Toxicity Oleander is a poisonous plant because of toxic compounds it contains, especially when consumed in large amounts. Among these compounds are oleandrin and oleandrigenin, known as cardiac glycosides, which are known to have a narrow therapeutic index and are toxic when ingested. Side effects after ingestion are weakness, diarrhoea, nausea, vomiting, headache, stomach pain, and death. Toxicity studies of animals concluded that birds and rodents were observed to be relatively insensitive to the administered oleander cardiac glycosides. Other mammals, however, such as dogs and humans, are relatively sensitive to the effects of cardiac glycosides and the clinical manifestations of "glycoside intoxication". It is also hazardous to animals such as sheep, horses, cattle, and other grazing animals, with as little as 100 g being enough to kill an adult horse. Plant clippings are especially dangerous to horses, as they are sweet. In July 2009, several horses were poisoned in this manner from the leaves of the plant. Symptoms of a poisoned horse include severe diarrhea and abnormal heartbeat. This is aptly reflected in the plant's Sanskrit name aśvamāra (अश्वमार), a compound of aśva "horse" and māra "killing". 
In reviewing oleander toxicity cases seen in-hospital, Lanford and Boor concluded that, except for children who might be at greater risk, "the human mortality associated with oleander ingestion is generally very low, even in cases of moderate intentional consumption (suicide attempts)." In 2000, a rare instance of death from oleander poisoning occurred when two toddlers adopted from an orphanage ate the leaves from a neighbor's shrub in El Segundo, California. Because oleander is extremely bitter, officials speculated that the toddlers had developed a condition caused by malnutrition, pica, which causes people to eat otherwise inedible material. Effects of poisoning Ingestion of this plant can affect the gastrointestinal system, the heart, and the central nervous system. The main effect of cardiotoxic glycosides is positive inotropy. Glycosides bind to the sarcolemma transmembrane ATPase of cardiac muscle cells and compete with K+ ions, inactivating the enzyme. This results in an accumulation of Na+ and Ca2+ ions into the cardiac muscle cells, leading to stronger and faster heart contractions. Moreover, the increased amount of extracellular K+ ions may lead to lethal hyperkalemia. Therefore, clinical features of oleander poisoning are similar to digoxin toxicity and include nausea, diarrhea, and vomiting due to stimulation of the area postrema of the medulla oblongata, neuropsychic disorders, and pathological motor manifestations. Cardiotoxic glycosides are also responsible for stimulating the vagus nerve (leading to sinus bradycardia) and the phrenic nerve (leading to hyperventilation), and lethal brady- and tachyarrhythmias, including asystole and ventricular fibrillation. Oleander poisoning can also result in blurred vision, and vision disturbances, including halos appearing around objects. Oleander sap can cause skin irritations, severe eye inflammation and irritation, and allergic reactions characterized by dermatitis. The severity of the intoxication can vary based on the quantity ingested and an individual's physiological response, as well as the time of symptom onset after oleander ingestion: they can rapidly occur after drinking teas prepared with oleander leaves or roots or develop more slowly due to the ingestion of unprepared plant parts. Treatment Poisoning and reactions to oleander plants are evident quickly, requiring immediate medical care in suspected or known poisonings of both humans and animals. Induced vomiting and gastric lavage are protective measures to reduce absorption of the toxic compounds. Activated carbon may also be administered to help absorb any remaining toxins. Further medical attention may be required depending on the severity of the poisoning and symptoms. Temporary cardiac pacing will be required in many cases (usually for a few days) until the toxin is excreted. Digoxin immune fab is the best way to cure an oleander poisoning if inducing vomiting has no or minimal success, although it is usually used only for life-threatening conditions due to side effects. Drying of plant materials does not eliminate the toxins. There is a wide range of toxins and secondary compounds within oleander, and care should be taken around this plant due to its toxic nature. Different names for oleander are used around the world in different locations, so, when encountering a plant with this appearance, regardless of the name used for it, one should exercise great care and caution to avoid ingestion of any part of the plant, including its sap and dried leaves or twigs. 
The dried or fresh branches should not be used for spearing food, for preparing a cooking fire, or as a food skewer. Many of the oleander relatives, such as the desert rose (Adenium obesum) found in East Africa, have similar leaves and flowers and are equally toxic. Research Drugs derived from N. oleander have been investigated as a treatment for cancer, but have failed to demonstrate clinical utility. According to the American Cancer Society, the trials conducted so far have produced no evidence of benefit, while they did cause adverse side effects. Culture Oracle of Delphi In a research study done by Haralampos V. Harissis, he claims that the laurel the Pythia is commonly depicted with is actually an oleander plant, and the poisonous plant and its subsequent hallucinations are the source of the oracle's mystical power and subsequent prophecies. Many of the symptoms that primary sources such as Plutarch and Democritus report align with results of oleander poisoning. Harissis also provides evidence claiming that the word laurel may have been used to describe an oleander leaf. Folklore The toxicity of the plant makes it the center of an urban legend documented on several continents and over more than a century. Often told as a true and local event, typically an entire family, or in other tellings a group of scouts, succumbs after consuming hot dogs or other food roasted over a campfire using oleander sticks. Some variants tell of this happening to Napoleon's or Alexander the Great's soldiers. There is an ancient account mentioned by Pliny the Elder in his Natural History, who described a region in Pontus in Turkey where the honey was poisoned from bees having pollinated poisonous flowers, with the honey left as a poisonous trap for an invading army. The flowers have sometimes been mis-translated as oleander, but oleander flowers are nectarless and therefore cannot transmit any toxins via nectar. The actual flower referenced by Pliny was either Azalea or Rhododendron, which is still used in Turkey to produce a hallucinogenic honey. Oleander is the official flower of the city of Hiroshima, having been the first to bloom following the atomic bombing of the city in 1945. In painting Oleander was part of subject matter of paintings by famous artists including: Gustav Klimt, who painted "Two Girls with an Oleander" between 1890 and 1892. Vincent van Gogh painted his famous "Oleanders" in Arles in 1888. Van Gogh found the flowers "joyous" and "life-affirming" because of their inexhaustible blooms and vigour. Anglo-Dutch artist Sir Lawrence Alma-Tadema incorporated oleanders into his classically inspired paintings, including "An Oleander" (1882), "Courtship", "Under the Roof of Blue Ionian Weather" and "A Roman Flower Market" (1868). "The Terrace at Méric (Oleanders)", an 1867 Impressionist painting by Frédéric Bazille. In literature, film and music Janet Fitch's 1999 novel White Oleander is centered around a young Southern California girl's experiences growing up in foster care after her mother is imprisoned for poisoning an ex-boyfriend with the plant. The book was adapted into a 2002 film of the same name starring Michelle Pfeiffer and Alison Lohman. In the 17th century AD Farsi-language book the Jahangirnama, the Mughal emperor Jahangir passes a stream overgrowing with oleanders along its banks. He orders the nobles in his train to adorn their turbans with oleander blossoms, creating a "field of flowers" on their heads. 
Steely Dan's 1973 song "My Old School" contains the line "Oleanders growing outside her door, soon they're gonna be in bloom up in Annandale" in the second verse. It has been theorized that this reference is either a metaphor for a harmful relationship or for marijuana, which is the subtext of the song.
Biology and health sciences
Gentianales
Plants
1185756
https://en.wikipedia.org/wiki/Eightfold%20way%20%28physics%29
Eightfold way (physics)
In physics, the eightfold way is an organizational scheme for a class of subatomic particles known as hadrons that led to the development of the quark model. Both the American physicist Murray Gell-Mann and the Israeli physicist Yuval Ne'eman independently and simultaneously proposed the idea in 1961. The name comes from Gell-Mann's (1961) paper and is an allusion to the Noble Eightfold Path of Buddhism. Background By 1947, physicists believed that they had a good understanding of what the smallest bits of matter were. There were electrons, protons, neutrons, and photons (the components that make up the vast part of everyday experience such as visible matter and light) along with a handful of unstable (i.e., they undergo radioactive decay) exotic particles needed to explain cosmic rays observations such as pions, muons and the hypothesized neutrinos. In addition, the discovery of the positron suggested there could be anti-particles for each of them. It was known a "strong interaction" must exist to overcome electrostatic repulsion in atomic nuclei. Not all particles are influenced by this strong force; but those that are, are dubbed "hadrons"; these are now further classified as mesons (middle mass) and baryons (heavy weight). But the discovery of the neutral kaon in late 1947 and the subsequent discovery of a positively charged kaon in 1949 extended the meson family in an unexpected way, and in 1950 the lambda particle did the same thing for the baryon family. These particles decay much more slowly than they are produced, a hint that there are two different physical processes involved. This was first suggested by Abraham Pais in 1952. In 1953, Murray Gell-Mann and a collaboration in Japan, Tadao Nakano with Kazuhiko Nishijima, independently suggested a new conserved value now known as "strangeness" during their attempts to understand the growing collection of known particles. The discovery of new mesons and baryons continued through the 1950s; the number of known "elementary" particles ballooned. Physicists were interested in understanding hadron-hadron interactions via the strong interaction. The concept of isospin, introduced in 1932 by Werner Heisenberg shortly after the discovery of the neutron, was used to group some hadrons together into "multiplets" but no successful scientific theory as yet covered the hadrons as a whole. This was the beginning of a chaotic period in particle physics that has become known as the "particle zoo" era. The eightfold way represented a step out of this confusion and towards the quark model, which proved to be the solution. Organization Group representation theory is the mathematical underpinning of the eightfold way, but that rather technical mathematics is not needed to understand how it helps organize particles. Particles are sorted into groups as mesons or baryons. Within each group, they are further separated by their spin angular momentum. Symmetrical patterns appear when these groups of particles have their strangeness plotted against their electric charge. (This is the most common way to make these plots today, but originally physicists used an equivalent pair of properties called hypercharge and isotopic spin, the latter of which is now known as isospin.) The symmetry in these patterns is a hint of the underlying symmetry of the strong interaction between the particles themselves. 
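As a rough illustration of how these charge-versus-strangeness patterns are built, the short sketch below tabulates the spin-1/2 baryon octet by isospin projection and strangeness and recovers each particle's electric charge from the Gell-Mann–Nishijima relation Q = I3 + (B + S)/2. The quantum numbers used are standard textbook values, not figures quoted in this article.

```python
# Minimal sketch: reproduce the charge/strangeness pattern of the spin-1/2
# baryon octet using the Gell-Mann-Nishijima relation Q = I3 + (B + S)/2,
# where B is baryon number, I3 the isospin projection and S the strangeness.
# The quantum numbers below are standard textbook values.

from fractions import Fraction as F

baryon_octet = {
    # name: (I3, S), all with baryon number B = +1
    "p":      (F(1, 2),  0),
    "n":      (F(-1, 2), 0),
    "Sigma+": (F(1),    -1),
    "Sigma0": (F(0),    -1),
    "Sigma-": (F(-1),   -1),
    "Lambda": (F(0),    -1),
    "Xi0":    (F(1, 2), -2),
    "Xi-":    (F(-1, 2), -2),
}

B = 1  # baryon number for every member of the octet
for name, (i3, s) in baryon_octet.items():
    hypercharge = B + s              # Y = B + S
    charge = i3 + F(hypercharge, 2)  # Q = I3 + Y/2
    print(f"{name:7s}  I3={str(i3):>4}  S={s:>2}  Y={hypercharge:>2}  Q={charge}")
```

Plotting S against Q for these rows reproduces the hexagonal octet pattern described in the text.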
In the plots below, points representing particles that lie along the same horizontal line share the same strangeness, S, while those on the same left-leaning diagonals share the same electric charge, Q (given as multiples of the elementary charge). Mesons In the original eightfold way, the mesons were organized into octets and singlets. This is one of the finer points of difference between the eightfold way and the quark model it inspired, which suggests the mesons should be grouped into nonets (groups of nine). Meson octet The eightfold way organizes eight of the lowest spin-0 mesons into an octet. They are the kaons (K+, K0, K̄0 and K−), the pions (π+, π0 and π−), and the eta meson (η). Diametrically opposite particles in the diagram are anti-particles of one another, while particles in the center are their own anti-particle. Meson singlet The chargeless, strangeless eta prime meson (η′) was originally classified by itself as a singlet. Under the quark model later developed, it is better viewed as part of a meson nonet, as previously mentioned. Baryons Baryon octet The eightfold way organizes the spin-1/2 baryons into an octet. They consist of the neutron (n) and the proton (p), the sigma baryons (Σ+, Σ0 and Σ−), the strange lambda baryon (Λ), and the xi baryons (Ξ0 and Ξ−). Baryon decuplet The organizational principles of the eightfold way also apply to the spin-3/2 baryons, forming a decuplet: the delta baryons (Δ++, Δ+, Δ0 and Δ−), the sigma baryons (Σ∗+, Σ∗0 and Σ∗−), the xi baryons (Ξ∗0 and Ξ∗−), and the omega baryon (Ω−). However, one of the particles of this decuplet had never been previously observed when the eightfold way was proposed. Gell-Mann called this particle the Ω− and predicted in 1962 that it would have a strangeness −3, electric charge −1 and a mass near 1680 MeV/c². In 1964, a particle closely matching these predictions was discovered by a particle accelerator group at Brookhaven. Gell-Mann received the 1969 Nobel Prize in Physics for his work on the theory of elementary particles. Historical development Development Historically, quarks were motivated by an understanding of flavour symmetry. First, it was noticed (1961) that groups of particles were related to each other in a way that matched the representation theory of SU(3). From that, it was inferred that there is an approximate symmetry of the universe which is represented by the group SU(3). Finally (1964), this led to the discovery of three light quarks (up, down, and strange) interchanged by these SU(3) transformations. Modern interpretation The eightfold way may be understood in modern terms as a consequence of flavor symmetries between various kinds of quarks. Since the strong nuclear force affects quarks the same way regardless of their flavor, replacing one flavor of quark with another in a hadron should not alter its mass very much, provided the respective quark masses are smaller than the strong interaction scale—which holds for the three light quarks. Mathematically, this replacement may be described by elements of the SU(3) group. The octets and other hadron arrangements are representations of this group. Flavor symmetry SU(3) There is an abstract three-dimensional vector space whose basis states correspond to the up, down and strange quarks, and the laws of physics are approximately invariant under applying a determinant-1 unitary transformation to this space (sometimes called a flavour rotation). Here, SU(3) refers to the Lie group of 3×3 unitary matrices with determinant 1 (special unitary group). For example, the flavour rotation that exchanges the up and down directions of this space is a transformation that simultaneously turns all the up quarks in the universe into down quarks and vice versa. 
More specifically, these flavour rotations are exact symmetries if only strong force interactions are looked at, but they are not truly exact symmetries of the universe because the three quarks have different masses and different electroweak interactions. This approximate symmetry is called flavour symmetry, or more specifically flavour SU(3) symmetry. Connection to representation theory Assume we have a certain particle—for example, a proton—in a quantum state |ψ⟩. If we apply one of the flavour rotations A to our particle, it enters a new quantum state which we can call A|ψ⟩. Depending on A, this new state might be a proton, or a neutron, or a superposition of a proton and a neutron, or various other possibilities. The set of all possible quantum states spans a vector space. Representation theory is a mathematical theory that describes the situation where elements of a group (here, the flavour rotations A in the group SU(3)) are automorphisms of a vector space (here, the set of all possible quantum states that you get from flavour-rotating a proton). Therefore, by studying the representation theory of SU(3), we can learn the possibilities for what the vector space is and how it is affected by flavour symmetry. Since the flavour rotations A are approximate, not exact, symmetries, each orthogonal state in the vector space corresponds to a different particle species. In the example above, when a proton is transformed by every possible flavour rotation A, it turns out that it moves around an 8-dimensional vector space. Those 8 dimensions correspond to the 8 particles in the so-called "baryon octet" (proton, neutron, Σ+, Σ0, Σ−, Λ, Ξ0, Ξ−). This corresponds to an 8-dimensional ("octet") representation of the group SU(3). Since A is an approximate symmetry, all the particles in this octet have similar mass. Every Lie group has a corresponding Lie algebra, and each group representation of the Lie group can be mapped to a corresponding Lie algebra representation on the same vector space. The Lie algebra su(3) can be written as the set of 3×3 traceless Hermitian matrices. Physicists generally discuss the representation theory of the Lie algebra su(3) instead of the Lie group SU(3), since the former is simpler and the two are ultimately equivalent.
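As a concrete check of the statement that su(3) consists of 3×3 traceless Hermitian matrices, the sketch below builds the standard Gell-Mann basis (an assumed choice of basis, not listed in the text), verifies those two properties, and exponentiates a combination of generators into an SU(3) flavour rotation. It assumes NumPy and SciPy are available.

```python
# Minimal sketch: su(3) realised as 3x3 traceless Hermitian matrices, using
# the standard Gell-Mann basis (an assumption; the basis is not given above).

import numpy as np
from scipy.linalg import expm

l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex)
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex)
l8 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3)

gell_mann = [l1, l2, l3, l4, l5, l6, l7, l8]

# Each generator is traceless and Hermitian, as stated in the text.
for l in gell_mann:
    assert abs(np.trace(l)) < 1e-12
    assert np.allclose(l, l.conj().T)

# Exponentiating i times a real combination of generators gives an SU(3)
# element: a unitary 3x3 matrix with determinant 1 (a "flavour rotation").
coeffs = np.random.default_rng(0).normal(size=8)
A = expm(1j * sum(c * l for c, l in zip(coeffs, gell_mann)))
assert np.allclose(A @ A.conj().T, np.eye(3))
assert np.isclose(np.linalg.det(A), 1.0)
print("8 traceless Hermitian generators -> SU(3) flavour rotations")
```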
Physical sciences
Subatomic particles: General
Physics
1186804
https://en.wikipedia.org/wiki/Initial%20condition
Initial condition
In mathematics and particularly in dynamic systems, an initial condition, in some contexts called a seed value, is a value of an evolving variable at some point in time designated as the initial time (typically denoted t = 0). For a system of order k (the number of time lags in discrete time, or the order of the largest derivative in continuous time) and dimension n (that is, with n different evolving variables, which together can be denoted by an n-dimensional coordinate vector), generally nk initial conditions are needed in order to trace the system's variables forward through time. In both differential equations in continuous time and difference equations in discrete time, initial conditions affect the value of the dynamic variables (state variables) at any future time. In continuous time, the problem of finding a closed form solution for the state variables as a function of time and of the initial conditions is called the initial value problem. A corresponding problem exists for discrete time situations. While a closed form solution is not always possible to obtain, future values of a discrete time system can be found by iterating forward one time period per iteration, though rounding error may make this impractical over long horizons. Linear system Discrete time A linear matrix difference equation of the homogeneous (having no constant term) form X_{t+1} = AX_t has closed form solution X_t = A^t X_0 predicated on the vector X_0 of initial conditions on the individual variables that are stacked into the vector X; X_0 is called the vector of initial conditions or simply the initial condition, and contains nk pieces of information, n being the dimension of the vector X and k = 1 being the number of time lags in the system. The initial conditions in this linear system do not affect the qualitative nature of the future behavior of the state variable X; that behavior is stable or unstable based on the eigenvalues of the matrix A but not based on the initial conditions. Alternatively, a dynamic process in a single variable x having multiple time lags is x_t = a_1 x_{t−1} + a_2 x_{t−2} + … + a_k x_{t−k}. Here the dimension is n = 1 and the order is k, so the necessary number of initial conditions to trace the system through time, either iteratively or via closed form solution, is nk = k. Again the initial conditions do not affect the qualitative nature of the variable's long-term evolution. The solution of this equation is found by using its characteristic equation λ^k − a_1 λ^{k−1} − a_2 λ^{k−2} − … − a_{k−1} λ − a_k = 0 to obtain the latter's k solutions, which are the characteristic values λ_1, …, λ_k, for use in the solution equation x_t = c_1 λ_1^t + … + c_k λ_k^t. Here the constants c_1, …, c_k are found by solving a system of k different equations based on this equation, each using one of k different values of t for which the specific initial condition is known. Continuous time A differential equation system of the first order with n variables stacked in a vector X is dX/dt = AX. Its behavior through time can be traced with a closed form solution conditional on an initial condition vector X_0. The number of required initial pieces of information is the dimension n of the system times the order k = 1 of the system, or n. The initial conditions do not affect the qualitative behavior (stable or unstable) of the system. A single kth order linear equation in a single variable x is d^k x/dt^k = a_1 d^{k−1}x/dt^{k−1} + … + a_{k−1} dx/dt + a_k x. Here the number of initial conditions necessary for obtaining a closed form solution is the dimension n = 1 times the order k, or simply k. 
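A minimal numerical sketch of the discrete-time case above, using an arbitrary illustrative 2×2 matrix and initial vector: iterating X_{t+1} = A X_t reproduces the closed form X_t = A^t X_0, and stability is determined by the eigenvalues of A rather than by the initial condition.

```python
# Minimal sketch (illustrative values, not from the article): iterate the
# homogeneous linear system X_{t+1} = A X_t from an initial condition X_0
# and compare with the closed-form solution X_t = A^t X_0.

import numpy as np

A = np.array([[0.5, 0.1],
              [0.2, 0.4]])      # eigenvalues 0.6 and 0.3, so the system is stable
X0 = np.array([3.0, -1.0])      # vector of initial conditions (n = 2, k = 1)

# Iterate forward one period at a time.
X = X0.copy()
for _ in range(10):
    X = A @ X

# Closed form: X_10 = A^10 X_0.
X_closed = np.linalg.matrix_power(A, 10) @ X0
assert np.allclose(X, X_closed)

# Stability is governed by the eigenvalues of A, not by X_0: any initial
# condition decays toward the origin because all |eigenvalues| < 1.
print("eigenvalues:", np.linalg.eigvals(A))
print("X_10 =", X)
```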
In this case the k initial pieces of information will typically not be different values of the variable x at different points in time, but rather the values of x and its first k − 1 derivatives, all at some point in time such as time zero. The initial conditions do not affect the qualitative nature of the system's behavior. The characteristic equation of this dynamic equation is λ^k − a_1 λ^{k−1} − … − a_{k−1} λ − a_k = 0, whose solutions are the characteristic values λ_1, …, λ_k; these are used in the solution equation x(t) = c_1 e^{λ_1 t} + … + c_k e^{λ_k t}. This equation and its first k − 1 derivatives form a system of k equations that can be solved for the k parameters c_1, …, c_k, given the known initial conditions on x and its k − 1 derivatives' values at some time t. Nonlinear systems Nonlinear systems can exhibit a substantially richer variety of behavior than linear systems can. In particular, the initial conditions can affect whether the system diverges to infinity or whether it converges to one or another attractor of the system. Each attractor, a (possibly disconnected) region of values that some dynamic paths approach but never leave, has a (possibly disconnected) basin of attraction such that state variables with initial conditions in that basin (and nowhere else) will evolve toward that attractor. Even nearby initial conditions could be in basins of attraction of different attractors (see for example Newton's method#Basins of attraction). Moreover, in those nonlinear systems showing chaotic behavior, the evolution of the variables exhibits sensitive dependence on initial conditions: the iterated values of any two very nearby points on the same strange attractor, while each remaining on the attractor, will diverge from each other over time. Thus even on a single attractor the precise values of the initial conditions make a substantial difference for the future positions of the iterates. This feature makes accurate simulation of future values difficult, and impossible over long horizons, because stating the initial conditions with exact precision is seldom possible and because rounding error is inevitable after even only a few iterations from an exact initial condition. Empirical laws and initial conditions
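The sensitive dependence on initial conditions described in the nonlinear-systems discussion above can be demonstrated in a few lines. The logistic map used here is a standard chaotic example chosen for illustration; it is not mentioned in the text, and the two seed values are arbitrary.

```python
# Minimal sketch: sensitive dependence on initial conditions in a chaotic map.
# The logistic map x_{t+1} = r*x_t*(1 - x_t) with r = 4 is a standard chaotic
# example; the two nearby seeds differ by only 1e-6.

r = 4.0
x, y = 0.300000, 0.300001   # two initial conditions differing by 1e-6

for t in range(1, 41):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if t % 10 == 0:
        print(f"t={t:2d}  x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.6f}")

# After a few dozen iterations the two trajectories are entirely different,
# even though they started almost at the same point.
```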
Mathematics
Dynamical systems
null
1187691
https://en.wikipedia.org/wiki/Polyvinyl%20alcohol
Polyvinyl alcohol
Polyvinyl alcohol (PVOH, PVA, or PVAl) is a water-soluble synthetic polymer. It has the idealized formula [CH2CH(OH)]n. It is used in papermaking, textile warp sizing, as a thickener and emulsion stabilizer in polyvinyl acetate (PVAc) adhesive formulations, in a variety of coatings, and in 3D printing. It is colourless (white) and odorless. It is commonly supplied as beads or as solutions in water. Without an externally added crosslinking agent, PVA solution can be gelled through repeated freezing and thawing, yielding strong, ultrapure, biocompatible hydrogels which have been used for a variety of applications such as vascular stents, cartilage and contact lenses. Although polyvinyl alcohol is often referred to by the acronym PVA, more generally PVA refers to polyvinyl acetate, which is commonly used as a wood adhesive and sealer. Uses PVA is used in a variety of medical applications because of its biocompatibility, low tendency for protein adhesion, and low toxicity. Specific uses include cartilage replacements, contact lenses, laundry detergent pods and eye drops. Polyvinyl alcohol is used as an aid in suspension polymerizations. Its largest application in China is its use as a protective colloid to make PVAc dispersions. In Japan its major use is the production of Vinylon fiber. This fiber is also manufactured in North Korea for self-sufficiency reasons, because no oil is required to produce it. Another application is photographic film. PVA-based polymers are used widely in additive manufacturing. For example, 3D printed oral dosage forms demonstrate great potential in the pharmaceutical industry. It is possible to create drug-loaded tablets with modified drug-release characteristics where PVA is used as a binder substance. Medically, PVA-based microparticles have received FDA 510(k) approval to be used as embolisation particles for peripheral hypervascular tumors. PVA may also be used as the embolic agent in uterine fibroid embolization (UFE). In biomedical engineering research, PVA has also been studied for cartilage and orthopaedic applications, and as a potential material for vascular grafts. PVA is commonly used in household sponges that absorb more water than polyurethane sponges. PVA may be used as an adhesive during preparation of stool samples for microscopic examination in pathology. Polyvinyl acetals Polyvinyl acetals are prepared by treating PVA with aldehydes. Butyraldehyde and formaldehyde afford polyvinyl butyral (PVB) and polyvinyl formal (PVF), respectively. Preparation of polyvinyl butyral is the largest use for polyvinyl alcohol in the US and Western Europe. Preparation Unlike most vinyl polymers, PVA is not prepared by polymerization of the corresponding monomer, since the monomer, vinyl alcohol, is thermodynamically unstable with respect to its tautomerization to acetaldehyde. Instead, PVA is prepared by hydrolysis of polyvinyl acetate, or sometimes other vinyl ester-derived polymers with formate or chloroacetate groups instead of acetate. The conversion of the polyvinyl esters is usually conducted by base-catalysed transesterification with ethanol: [CH2CH(OAc)]n + C2H5OH → [CH2CH(OH)]n + C2H5OAc The properties of the polymer are affected by the degree of transesterification. Worldwide consumption of polyvinyl alcohol was over one million metric tons in 2006. Structure and properties PVA is an atactic material that exhibits crystallinity. 
In terms of microstructure, it is composed mainly of 1,3-diol linkages [−CH2−CH(OH)−CH2−CH(OH)−], but a few percent of 1,2-diols [−CH2−CH(OH)−CH(OH)−CH2−] occur, depending on the conditions for the polymerization of the vinyl ester precursor. Polyvinyl alcohol has excellent film-forming, emulsifying and adhesive properties. It is also resistant to oil, grease and solvents. It has high tensile strength and flexibility, as well as high oxygen and aroma barrier properties. However, these properties are dependent on humidity: water absorbed at higher humidity levels acts as a plasticiser, which reduces the polymer's tensile strength, but increases its elongation and tear strength. Safety and environmental considerations Polyvinyl alcohol is widely used, thus its toxicity and biodegradation are of interest. Tests showed that fish (guppies) are not harmed, even at a poly(vinyl alcohol) concentration of 500 mg/L of water. The biodegradability of PVA is affected by the molecular weight of the sample. Aqueous solutions of PVA degrade faster, which is why PVA grades that are highly water-soluble tend to have a faster biodegradation. Not all PVA grades are readily biodegradable, but studies show that high water-soluble PVA grades such as the ones used in detergents can be readily biodegradable according to OECD screening test conditions. Orally administered PVA is relatively harmless. The safety of polyvinyl alcohol is based on some of the following observations: The acute oral toxicity of polyvinyl alcohol is very low, with LD(50)s in the range of 15-20 g/kg; Orally administered PVA is very poorly absorbed from the gastrointestinal tract; PVA does not accumulate in the body when administered orally; Polyvinyl alcohol is not mutagenic or clastogenic
Physical sciences
Polymers
Chemistry
1188375
https://en.wikipedia.org/wiki/Turing%20reduction
Turing reduction
In computability theory, a Turing reduction from a decision problem A to a decision problem B is an oracle machine that decides problem A given an oracle for B (Rogers 1967, Soare 1987). It can be understood as an algorithm that could be used to solve A if it had available to it a subroutine for solving B. The concept can be analogously applied to function problems. If a Turing reduction from A to B exists, then every algorithm for B can be used to produce an algorithm for A, by inserting the algorithm for B at each place where the oracle machine computing A queries the oracle for B. However, because the oracle machine may query the oracle a large number of times, the resulting algorithm may require more time asymptotically than either the algorithm for B or the oracle machine computing A. A Turing reduction in which the oracle machine runs in polynomial time is known as a Cook reduction. The first formal definition of relative computability, then called relative reducibility, was given by Alan Turing in 1939 in terms of oracle machines. Later in 1943 and 1952 Stephen Kleene defined an equivalent concept in terms of recursive functions. In 1944 Emil Post used the term "Turing reducibility" to refer to the concept. Definition Given two sets A and B of natural numbers, we say A is Turing reducible to B and write A ≤_T B if and only if there is an oracle machine that computes the characteristic function of A when run with oracle B. In this case, we also say A is B-recursive and B-computable. If there is an oracle machine that, when run with oracle B, computes a partial function with domain A, then A is said to be B-recursively enumerable and B-computably enumerable. We say A is Turing equivalent to B and write A ≡_T B if both A ≤_T B and B ≤_T A. The equivalence classes of Turing equivalent sets are called Turing degrees. The Turing degree of a set A is written deg(A). Given a set C of sets of natural numbers, a set A is called Turing hard for C if X ≤_T A for all X in C. If additionally A is in C then A is called Turing complete for C. Relation of Turing completeness to computational universality Turing completeness, as just defined above, corresponds only partially to Turing completeness in the sense of computational universality. Specifically, a Turing machine is a universal Turing machine if its halting problem (i.e., the set of inputs for which it eventually halts) is many-one complete for the set of recursively enumerable sets. Thus, a necessary but insufficient condition for a machine to be computationally universal is that the machine's halting problem be Turing-complete for the set of recursively enumerable sets. The condition is insufficient because it may still be the case that the language accepted by the machine is not itself recursively enumerable. Example Let W_e denote the set of input values for which the Turing machine with index e halts. Then the sets A = {e : e ∈ W_e} and B = {⟨e, n⟩ : n ∈ W_e} are Turing equivalent (here ⟨e, n⟩ denotes an effective pairing function). A reduction showing A ≤_T B can be constructed using the fact that e ∈ A if and only if ⟨e, e⟩ ∈ B. Given a pair ⟨e, n⟩, a new index i(e, n) can be constructed using the smn theorem such that the program coded by i(e, n) ignores its input and merely simulates the computation of the machine with index e on input n. In particular, the machine with index i(e, n) either halts on every input or halts on no input. Thus i(e, n) ∈ A if and only if ⟨e, n⟩ ∈ B holds for all e and n. Because the function i is computable, this shows B ≤_T A. The reductions presented here are not only Turing reductions but many-one reductions, discussed below. Properties Every set is Turing equivalent to its complement. Every computable set is Turing reducible to every other set. 
Because any computable set can be computed with no oracle, it can be computed by an oracle machine that ignores the given oracle. The relation is transitive: if and then . Moreover, holds for every set A, and thus the relation is a preorder (it is not a partial order because and does not necessarily imply ). There are pairs of sets such that A is not Turing reducible to B and B is not Turing reducible to A. Thus is not a total order. There are infinite decreasing sequences of sets under . Thus this relation is not well-founded. Every set is Turing reducible to its own Turing jump, but the Turing jump of a set is never Turing reducible to the original set. The use of a reduction Since every reduction from a set to a set has to determine whether a single element is in in only finitely many steps, it can only make finitely many queries of membership in the set . When the amount of information about the set used to compute a single bit of is discussed, this is made precise by the use function. Formally, the use of a reduction is the function that sends each natural number to the largest natural number whose membership in the set was queried by the reduction while determining the membership of in . Stronger reductions There are two common ways of producing reductions stronger than Turing reducibility. The first way is to limit the number and manner of oracle queries. Set is many-one reducible to if there is a total computable function such that an element is in if and only if is in . Such a function can be used to generate a Turing reduction (by computing , querying the oracle, and then interpreting the result). A truth table reduction or a weak truth table reduction must present all of its oracle queries at the same time. In a truth table reduction, the reduction also gives a boolean function (a truth table) that, when given the answers to the queries, will produce the final answer of the reduction. In a weak truth table reduction, the reduction uses the oracle answers as a basis for further computation depending on the given answers (but not using the oracle). Equivalently, a weak truth table reduction is one for which the use of the reduction is bounded by a computable function. For this reason, weak truth table reductions are sometimes called "bounded Turing" reductions. The second way to produce a stronger reducibility notion is to limit the computational resources that the program implementing the Turing reduction may use. These limits on the computational complexity of the reduction are important when studying subrecursive classes such as P. A set A is polynomial-time reducible to a set if there is a Turing reduction of to that runs in polynomial time. The concept of log-space reduction is similar. These reductions are stronger in the sense that they provide a finer distinction into equivalence classes, and satisfy more restrictive requirements than Turing reductions. Consequently, such reductions are harder to find. There may be no way to build a many-one reduction from one set to another even when a Turing reduction for the same sets exists. Weaker reductions According to the Church–Turing thesis, a Turing reduction is the most general form of an effectively calculable reduction. Nevertheless, weaker reductions are also considered. Set is said to be arithmetical in if is definable by a formula of Peano arithmetic with as a parameter. The set is hyperarithmetical in if there is a recursive ordinal such that is computable from , the α-iterated Turing jump of . 
The notion of relative constructibility is an important reducibility notion in set theory.
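The following sketch is only an illustration of the definitions above: a Turing reduction is modelled as a function that may query an oracle for B adaptively, and a many-one reduction as a single computable function f with x ∈ A if and only if f(x) ∈ B. The sets involved are decidable toy examples, since the interesting cases involve undecidable sets and cannot actually be run.

```python
# Minimal sketch (illustrative only): a Turing reduction presented as an
# ordinary function that may call an oracle for B any number of times, and a
# many-one reduction presented as a single computable function f with
# x in A  <=>  f(x) in B.  B is a decidable toy set so the code can run.

from typing import Callable

def turing_reduction(x: int, oracle_B: Callable[[int], bool]) -> bool:
    """Decide membership of x in A = {x : x in B and x+1 not in B},
    using two adaptive queries to an oracle for B."""
    if not oracle_B(x):          # first oracle query
        return False
    return not oracle_B(x + 1)   # second query depends on the first answer

def many_one_reduction(x: int) -> int:
    """A computable f with  x in EVENS  <=>  f(x) in B,  for B below."""
    return 3 * x                 # x is even  <=>  3x is divisible by 6

# A decidable stand-in for the oracle set B.
B = lambda n: n % 6 == 0

# Using the reductions:
print([x for x in range(20) if turing_reduction(x, B)])    # members of A
print([x for x in range(10) if B(many_one_reduction(x))])  # the even numbers
```

Note how the Turing reduction may use the answer to one oracle query to decide its next query, whereas the many-one reduction asks a single, fixed question.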
Mathematics
Computability theory
null
15407327
https://en.wikipedia.org/wiki/Absolute%20electrode%20potential
Absolute electrode potential
Absolute electrode potential, in electrochemistry, according to an IUPAC definition, is the electrode potential of a metal measured with respect to a universal reference system (without any additional metal–solution interface). Definition According to a more specific definition presented by Trasatti, the absolute electrode potential is the difference in electronic energy between a point inside the metal (Fermi level) of an electrode and a point outside the electrolyte in which the electrode is submerged (an electron at rest in vacuum). This potential is difficult to determine accurately. For this reason, a standard hydrogen electrode is typically used for reference potential. The absolute potential of the SHE is 4.44 ± 0.02 V at 25 °C. Therefore, for any electrode at 25 °C: where: is electrode potential V is the unit volt M denotes the electrode made of metal M (abs) denotes the absolute potential (SHE) denotes the electrode potential relative to the standard hydrogen electrode. A different definition for the absolute electrode potential (also known as absolute half-cell potential and single electrode potential) has also been discussed in the literature. In this approach, one first defines an isothermal absolute single-electrode process (or absolute half-cell process.) For example, in the case of a generic metal being oxidized to form a solution-phase ion, the process would be M(metal) → M+(solution) + (gas) For the hydrogen electrode, the absolute half-cell process would be H2 (gas) → H+(solution) + (gas) Other types of absolute electrode reactions would be defined analogously. In this approach, all three species taking part in the reaction, including the electron, must be placed in thermodynamically well-defined states. All species, including the electron, are at the same temperature, and appropriate standard states for all species, including the electron, must be fully defined. The absolute electrode potential is then defined as the Gibbs free energy for the absolute electrode process. To express this in volts one divides the Gibbs free energy by the negative of Faraday's constant. Rockwood's approach to absolute-electrode thermodynamics is easily expendable to other thermodynamic functions. For example, the absolute half-cell entropy has been defined as the entropy of the absolute half-cell process defined above. An alternative definition of the absolute half-cell entropy has recently been published by Fang et al. who define it as the entropy of the following reaction (using the hydrogen electrode as an example): H2 (gas) → H+(solution) + (metal) This approach differs from the approach described by Rockwood in the treatment of the electron, i.e. whether it is placed in the gas phase or the metal. The electron can also be in another state, that of a solvated electron in solution, as studied by Alexander Frumkin and B. Damaskin and others. Determination The basis for determination of the absolute electrode potential under the Trasatti definition is given by the equation: where: is the absolute potential of the electrode made of metal M is the electron work function of metal M is the contact (Volta) potential difference at the metal(M)–solution(S) interface. For practical purposes, the value of the absolute electrode potential of the standard hydrogen electrode is best determined with the utility of data for an ideally-polarizable mercury (Hg) electrode: where: is the absolute standard potential of the hydrogen electrode denotes the condition of the point of zero charge at the interface. 
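A minimal sketch of the conversion implied above, assuming only the quoted SHE value of 4.44 V at 25 °C; the copper electrode potential used in the example is a standard textbook figure, not taken from this article.

```python
# Minimal sketch of the conversion stated above: at 25 °C the absolute
# electrode potential is the potential measured against the standard hydrogen
# electrode plus the absolute potential of the SHE itself (4.44 ± 0.02 V).

SHE_ABSOLUTE_V = 4.44  # absolute potential of the SHE at 25 °C, in volts

def absolute_potential(e_vs_she_volts: float) -> float:
    """E(M, abs) = E(M, vs SHE) + 4.44 V at 25 °C."""
    return e_vs_she_volts + SHE_ABSOLUTE_V

# Example: the Cu2+/Cu couple, about +0.34 V vs SHE (standard textbook value).
print(f"Cu2+/Cu absolute potential ≈ {absolute_potential(0.34):.2f} V")  # ≈ 4.78 V
```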
The types of physical measurements required under the Rockwood definition are similar to those required under the Trasatti definition, but they are used in a different way, e.g. in Rockwood's approach they are used to calculate the equilibrium vapour pressure of the electron gas. The numerical value for the absolute potential of the standard hydrogen electrode one would calculate under the Rockwood definition is sometimes fortuitously close to the value one would obtain under the Trasatti definition. This near-agreement in the numerical value depends on the choice of ambient temperature and standard states, and is the result of the near-cancellation of certain terms in the expressions. For example, if a standard state of one atmosphere ideal gas is chosen for the electron gas then the cancellation of terms occurs at a temperature of 296 K, and the two definitions give an equal numerical result. At 298.15 K a near-cancellation of terms would apply and the two approaches would produce nearly the same numerical values. However, there is no fundamental significance to this near agreement because it depends on arbitrary choices, such as temperature and definitions of standard states.
Physical sciences
Electrochemistry
Chemistry
4597674
https://en.wikipedia.org/wiki/Photon%20epoch
Photon epoch
In physical cosmology, the photon epoch was the period in the evolution of the early universe in which photons dominated the energy of the universe. The photon epoch started after most leptons and anti-leptons were annihilated at the end of the lepton epoch, about 10 seconds after the Big Bang. Atomic nuclei were created in the process of nucleosynthesis, which occurred during the first few minutes of the photon epoch. For the remainder of the photon epoch, the universe contained a hot dense plasma of nuclei, electrons and photons. At the start of this period, many photons had sufficient energy to photodissociate deuterium, so those atomic nuclei that formed were quickly separated back into protons and neutrons. By the ten second mark, ever fewer high energy photons were available to photodissociate deuterium, and thus the abundance of these nuclei began to increase. Heavier atoms began to form through nuclear fusion processes: tritium, helium-3, and helium-4. Finally, trace amounts of lithium and beryllium began to appear. Once the thermal energy dropped below 0.03 MeV, nucleosynthesis effectively came to an end. Primordial abundances were now set, with the measured amounts in the modern epoch providing checks on the physical models of this period. 370,000 years after the Big Bang, the temperature of the universe fell to the point where nuclei could combine with electrons to create neutral atoms. As a result, photons no longer interacted frequently with matter, the universe became transparent and the cosmic microwave background radiation was created and then structure formation took place. This is referred to as the surface of last scattering, as it corresponds to a virtual outer surface of the spherical observable universe.
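For orientation, the 0.03 MeV threshold quoted above corresponds to a temperature of a few hundred million kelvin; the short sketch below performs the conversion with E = k_B·T using standard SI constants.

```python
# Minimal sketch (arithmetic only): convert the 0.03 MeV thermal-energy
# threshold quoted above into an equivalent temperature via E = k_B * T.
# The constants are the standard, exactly defined SI values.

K_B = 1.380649e-23      # Boltzmann constant, J/K (exact in the SI)
EV  = 1.602176634e-19   # one electronvolt in joules (exact in the SI)

energy_mev = 0.03
energy_joules = energy_mev * 1e6 * EV
temperature_kelvin = energy_joules / K_B
print(f"0.03 MeV corresponds to about {temperature_kelvin:.2e} K")  # ~3.5e8 K
```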
Physical sciences
Physical cosmology
Astronomy
4598061
https://en.wikipedia.org/wiki/Vachellia%20tortilis
Vachellia tortilis
Vachellia tortilis, widely known as Acacia tortilis but now attributed to the genus Vachellia, is the umbrella thorn acacia, also known as umbrella thorn and Israeli babool, a medium to large canopied tree native to most of Africa, primarily to the savanna and Sahel of Africa (especially the Somali peninsula and Sudan), but also occurring in the Middle East. Distribution and growing conditions Vachellia tortilis is widespread in Africa, being found in countries like Tunisia, Morocco, Uganda, Angola, Zimbabwe, Djibouti, and Botswana. It tends to grow in areas where temperatures vary from and rainfall is anywhere from about per year. Characteristics In extremely arid conditions, it may occur as a small, wiry bush. In more favorable conditions, it grows up to in height. The tree carries leaves that grow to approx. in length with between 4 and 10 pair of pinnae each with up to 15 pairs of leaflets. Flowers are small and white, highly aromatic, and occur in tight clusters. Seeds are produced in pods which are flat and coiled into a springlike structure. The plant is known to tolerate high alkalinity, drought, high temperatures, sandy and stony soils, strongly sloped rooting surfaces to withstand sandblasting too. Also, plants older than two years have shown a degree of frost resistance. Importance Timber from the tree is used for furniture, wagon wheels, fence posts, cages, and pens. Vachellia wood was also used exclusively by the Israelites in the bible in the building of the tabernacle and the tabernacle furniture, including the Ark of the Covenant. The pods and foliage, which grow prolifically on the tree, are used as fodder for desert grazing animals. The bark is often used as a string medium in Tanzania, and is a source for tannin. Gum from the tree is edible and can be used as gum arabic. Parts of the tree including roots, shoots, and pods are also often used by natives for a vast number of purposes including decorations, weapons, tools, and medicines. The Umbrella thorn is also an important species for rehabilitation of degraded arid land; it tolerates drought, wind, salinity and a wide range of soil types, and has the additional benefit of fixing nitrogen, an essential plant nutrient, in the soil via its interaction with symbiotic root bacteria. Religious connotations It is also the tree under which the historic pledge of allegiance of Hudaybiya of Muhammad was held, as God says in the Quran, "Allah's Good Pleasure was on the Believers when they swore Fealty to thee under the Tree: He knew what was in their hearts, and He sent down Tranquillity to them; and He rewarded them with a speedy Victory;" Abu Zubayr said in Sahih Muslim that, "Umar was holding the latter's hand (when he was sitting) under the tree (called) Samura."
Biology and health sciences
Fabales
Plants
4602964
https://en.wikipedia.org/wiki/Vacuum%20permeability
Vacuum permeability
The vacuum magnetic permeability (variously vacuum permeability, permeability of free space, permeability of vacuum, magnetic constant) is the magnetic permeability in a classical vacuum. It is a physical constant, conventionally written as μ0 (pronounced "mu nought" or "mu zero"). It quantifies the strength of the magnetic field induced by an electric current. Expressed in terms of SI base units, it has the unit kg⋅m⋅s−2⋅A−2. It can be also expressed in terms of SI derived units, N⋅A−2. Since the revision of the SI in 2019 (when the values of e and h were fixed as defined quantities), μ0 is an experimentally determined constant, its value being proportional to the dimensionless fine-structure constant, which is known to a relative uncertainty of with no other dependencies with experimental uncertainty. Its value in SI units as recommended by CODATA is: The terminology of permeability and susceptibility was introduced by William Thomson, 1st Baron Kelvin in 1872. The modern notation of permeability as μ and permittivity as ε has been in use since the 1950s. Ampere-defined vacuum permeability Two thin, straight, stationary, parallel wires, a distance r apart in free space, each carrying a current I, will exert a force on each other. Ampère's force law states that the magnetic force Fm per length L is given by From 1948 until 2019 the ampere was defined as "that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross section, and placed 1 metre apart in vacuum, would produce between these conductors a force equal to newton per metre of length". This is equivalent to a definition of of exactly , since The current in this definition needed to be measured with a known weight and known separation of the wires, defined in terms of the international standards of mass, length and time in order to produce a standard for the ampere (and this is what the Kibble balance was designed for). In the 2019 revision of the SI, the ampere is defined exactly in terms of the elementary charge and the second, and the value of is determined experimentally; is the 2022 CODATA value in the new system (and the Kibble balance has become an instrument for measuring weight from a known current, rather than measuring current from a known weight). From 1948 to 2019, μ0 had a defined value (per the former definition of the SI ampere), equal to: The deviation of the recommended measured value from the former defined value is within its uncertainty. Terminology NIST/CODATA refers to μ0 as the vacuum magnetic permeability. Prior to the 2019 revision, it was referred to as the magnetic constant. Historically, the constant μ0 has had different names. In the 1987 IUPAP Red book, for example, this constant was called the permeability of vacuum. Another, now rather rare and obsolete, term is "magnetic permittivity of vacuum". See, for example, Servant et al. Variations thereof, such as "permeability of free space", remain widespread. The name "magnetic constant" was briefly used by standards organizations in order to avoid use of the terms "permeability" and "vacuum", which have physical meanings. The change of name had been made because μ0 was a defined value, and was not the result of experimental measurement (see below). In the new SI system, the permeability of vacuum no longer has a defined value, but is a measured quantity, with an uncertainty related to that of the (measured) dimensionless fine structure constant. 
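The pre-2019 ampere definition described above can be checked numerically. The sketch below uses Ampère's force law in its standard form F/L = μ0·I1·I2/(2π·r) (the equation itself is not rendered in the text) together with the formerly defined value μ0 = 4π × 10⁻⁷ N⋅A⁻².

```python
# Minimal sketch: the parallel-wire force underlying the pre-2019 ampere
# definition.  Ampère's force law for two long, thin, parallel wires is used
# in its standard form F/L = mu0 * I1 * I2 / (2 * pi * r).

import math

MU0 = 4e-7 * math.pi   # the pre-2019 defined value of mu0, in N/A^2

def force_per_length(i1_amps: float, i2_amps: float, r_metres: float) -> float:
    """Force per unit length between two thin parallel current-carrying wires."""
    return MU0 * i1_amps * i2_amps / (2 * math.pi * r_metres)

# Two wires 1 m apart, each carrying 1 A: 2e-7 N per metre of length,
# which is the figure used in the 1948-2019 definition of the ampere.
print(f"{force_per_length(1.0, 1.0, 1.0):.1e} N/m")   # 2.0e-07 N/m
```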
Systems of units and historical origin of value of μ0 In principle, there are several equation systems that could be used to set up a system of electrical quantities and units. Since the late 19th century, the fundamental definitions of current units have been related to the definitions of mass, length, and time units, using Ampère's force law. However, the precise way in which this has "officially" been done has changed many times, as measurement techniques and thinking on the topic developed. The overall history of the unit of electric current, and of the related question of how to define a set of equations for describing electromagnetic phenomena, is very complicated. Briefly, the basic reason why μ0 has the value it does is as follows. Ampère's force law describes the experimentally-derived fact that, for two thin, straight, stationary, parallel wires, a distance r apart, in each of which a current I flows, the force per unit length, Fm/L, that one wire exerts upon the other in the vacuum of free space would be given by Writing the constant of proportionality as km gives The form of km needs to be chosen in order to set up a system of equations, and a value then needs to be allocated in order to define the unit of current. In the old "electromagnetic (emu)" system of units, defined in the late 19th century, km was chosen to be a pure number equal to 2, distance was measured in centimetres, force was measured in the cgs unit dyne, and the currents defined by this equation were measured in the "electromagnetic unit (emu) of current", the "abampere". A practical unit to be used by electricians and engineers, the ampere, was then defined as equal to one tenth of the electromagnetic unit of current. In another system, the "rationalized metre–kilogram–second (rmks) system" (or alternatively the "metre–kilogram–second–ampere (mksa) system"), km is written as μ0/2π, where μ0 is a measurement-system constant called the "magnetic constant". The value of μ0 was chosen such that the rmks unit of current is equal in size to the ampere in the emu system: μ0 was defined to be . Historically, several different systems (including the two described above) were in use simultaneously. In particular, physicists and engineers used different systems, and physicists used three different systems for different parts of physics theory and a fourth different system (the engineers' system) for laboratory experiments. In 1948, international decisions were made by standards organizations to adopt the rmks system, and its related set of electrical quantities and units, as the single main international system for describing electromagnetic phenomena in the International System of Units. Significance in electromagnetism The magnetic constant μ0 appears in Maxwell's equations, which describe the properties of electric and magnetic fields and electromagnetic radiation, and relate them to their sources. In particular, it appears in relationship to quantities such as permeability and magnetization density, such as the relationship that defines the magnetic H-field in terms of the magnetic B-field. In real media, this relationship has the form: where M is the magnetization density. In vacuum, . In the International System of Quantities (ISQ), the speed of light in vacuum, c, is related to the magnetic constant and the electric constant (vacuum permittivity), ε0, by the equation: This relation can be derived using Maxwell's equations of classical electromagnetism in the medium of classical vacuum. 
Between 1948 and 2018, this relation was used by BIPM (International Bureau of Weights and Measures) and NIST (National Institute of Standards and Technology) as a definition of ε0 in terms of the defined numerical value for c and, prior to 2018, the defined numerical value for μ0. During this period of standards definitions, it was not presented as a derived result contingent upon the validity of Maxwell's equations. Conversely, since the permittivity is related to the fine-structure constant (α), the permeability can be derived from the latter using the Planck constant, h, and the elementary charge, e: μ0 = 2αh/(e²c). In the new SI units, only the fine-structure constant is a measured quantity in the expression on the right, since the remaining constants have defined values in SI units.
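The relation just given can be checked numerically. In the sketch below, h, e and c are the exactly defined 2019 SI values, and the value used for α is an approximate CODATA figure, so the result is illustrative rather than authoritative:

```python
import math

# Exactly defined constants in the 2019 SI
h = 6.62607015e-34      # Planck constant, J*s
e = 1.602176634e-19     # elementary charge, C
c = 299792458.0         # speed of light, m/s

alpha = 0.0072973525693  # fine-structure constant (approximate CODATA value)

mu_0 = 2 * alpha * h / (e**2 * c)
print(mu_0)                  # ~1.2566e-6 N/A^2
print(4 * math.pi * 1e-7)    # former exact value, for comparison
```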
Physical sciences
Physical constants
Physics
6042703
https://en.wikipedia.org/wiki/Jack%20%28device%29
Jack (device)
A jack is a mechanical lifting device used to apply great forces or lift heavy loads. A mechanical jack employs a screw thread for lifting heavy equipment. A hydraulic jack uses hydraulic power. The most common form is a car jack, floor jack or garage jack, which lifts vehicles so that maintenance can be performed. Jacks are usually rated for a maximum lifting capacity (for example, 1.5 tons or 3 tons). Industrial jacks can be rated for many tons of load. Etymology The personal name Jack, which came into English usage around the thirteenth century as a nickname form of John, came in the sixteenth century to be used as a colloquial word for 'a man (of low status)' (much as in the modern usage 'jack of all trades, master of none'). From here, the word was 'applied to things which in some way take the place of a lad or man, or save human labour'. The first attestation in the Oxford English Dictionary of jack in the sense 'a machine, usually portable, for lifting heavy weights by force acting from below' is from 1679, referring to 'an Engine used for the removing and commodious placing of great Timber.' Jackscrew Scissor jack A scissor jack uses the mechanical advantage of a leadscrew and 4-bar linkage to allow a human to lift a vehicle by manual force alone. They are inexpensive and are common in manufacturer-supplied breakdown kits. The jack shown at the left is made for a modern vehicle and the notch fits into a jack-up point on a unibody. Earlier versions have a platform to lift on a vehicle's frame or axle. Electrically operated car scissor jacks are powered by 12 volt electricity supplied directly from the car's cigarette lighter receptacle. The electrical energy is used to power these car jacks to raise and lower automatically. Electric jacks require less effort from the motorist for operation. House jack A house jack, also called a screw jack, is a mechanical device primarily used to lift buildings from their foundations for repairs or relocation. A series of jacks is used and then wood cribbing temporarily supports the structure. This process is repeated until the desired height is reached. The house jack can be used for jacking carrying beams that have settled or for installing new structural beams. On the top of the jack is a cast iron circular pad that the jacking post rests on. This pad moves independently of the house jack so that it does not turn as the acme-threaded rod is turned with a metal rod. This piece tilts very slightly, but not enough to render the post dangerously out of plumb. Hydraulic jack In 1838 William Joseph Curtis filed a British patent for a hydraulic jack. In 1851, inventor Richard Dudgeon was granted a patent for a "portable hydraulic press" – the hydraulic jack, a jack which proved to be vastly superior to the screw jacks in use at the time. Hydraulic jacks are typically used for shop work, rather than as an emergency jack to be carried with the vehicle. Use of jacks not designed for a specific vehicle requires more than the usual care in selecting ground conditions, the jacking point on a vehicle, and to ensure stability when the jack is extended. Hydraulic jacks are often used to lift elevators in low and medium rise buildings. A hydraulic jack uses a liquid, which is incompressible, that is forced into a cylinder by a pump plunger. Oil is used since it is self lubricating and stable. When the plunger pulls back, it draws oil out of the reservoir through a suction check valve into the pump chamber. 
When the plunger moves forward, it pushes the oil through a discharge check valve into the cylinder. The suction valve ball is within the chamber and opens with each draw of the plunger. The discharge valve ball is outside the chamber and opens when the oil is pushed into the cylinder. At this point the suction ball within the chamber is forced shut and oil pressure builds in the cylinder. Floor jack In a floor jack (aka 'trolley jack') a horizontal piston pushes on the short end of a bellcrank, with the long arm providing the vertical motion to a lifting pad, kept horizontal with a horizontal linkage. Floor jacks usually include casters and wheels, allowing compensation for the arc taken by the lifting pad. This mechanism provides a low profile when collapsed, for easy maneuvering underneath the vehicle, while allowing considerable extension. Bottle jack A bottle jack or whiskey jack is a jack which resembles a bottle in shape, having a cylindrical body and a neck. Within is a vertical lifting ram with a support pad of some kind fixed to the top. The jack may be hydraulic or work by screw action. In the hydraulic version, the hydraulic ram emerges from the body vertically by hydraulic pressure provided by a pump either on the baseplate or at a remote location via a pressure hose. With a single action piston the lift range is somewhat limited, so its use for lifting vehicles is limited to those with a relatively high clearance. For lifting structures such as houses the hydraulic interconnection of multiple vertical jacks through valves enables the even distribution of forces while enabling close control of the lift. The screw version of the bottle jack works by turning a large nut running on the threaded vertical ram at the neck of the body. The nut has gear teeth, and is generally turned by a bevel gear spigotted to the body, the bevel gear being turned manually by a jack handle fitting into a square socket. The ram may have a second screwed ram within it, which doubles the lifting range telescopically. Bottle jacks have a capacity of up to 50 tons and may be used to lift a variety of objects. Typical uses include the repair of automobiles and house foundations. Larger, heavy-duty models may be known as a barrel jack. This type of jack is best used for short vertical lifts. Blocks may be used to repeat the operation when a greater amount of elevation is required. Pneumatic jack Air hydraulic jack An air hydraulic jack is a hydraulic jack that is actuated by compressed air - for example, air from a compressor - instead of human work. This eliminates the need for the user to actuate the hydraulic mechanism, saving effort and potentially increasing speed. Sometimes, such jacks are also able to be operated by the normal hydraulic actuation method, thereby retaining functionality, even if a source of compressed air is not available. Inflatable jack An inflatable jack, lifting bag, or pneumatic lifting bag is an air bag that is inflated by compressed air (without a hydraulic component) in order to lift objects. The bag can be deflated to be reused later. The objects can be of a smaller load such as an automobile or it can be a larger object such as an airplane. Air bags are also used by rescuers to lift heavy objects up to help victims who are trapped under those objects. There are three main types of lifting bags for rescue: high pressure, medium pressure and low pressure systems. 
Low-pressure bags are operated at 7.25 psi for high vertical lift in a large surface area but lower lifting capacities. Medium-pressure bags are operated at 15 psi. High-pressure bags which have higher lifting capacities are operated at pressure between 90 and 145 psi. Two air bags can be stacked together to provide a higher lift. It is recommended that no more than two bags can be used in a stacked configuration, the bigger bag must be the bottom one, and no other objects are inserted between the stacked bags. Incorrect use of stacked bags may result in a bag (or other objects) shooting out to create a dangerous projectile. Strand jack A strand jack is a specialized hydraulic jack that grips steel cables. Often used in concert, strand jacks can lift hundreds of tons and are used in engineering and construction. Farm jack The farm jack also known as a railroad jack, high lift jack, handyman jack, trail jack or kanga-jack was invented in 1905. It consists of a steel beam with a series of equally spaced holes along its length, and a hand-operated mechanism which can be moved from one end of the beam to the other through the use of a pair of climbing pins. Typical sizes for the farm jack are , and referring to the length of the beam. The jack's versatility stems from its use for such applications as lifting, winching, clamping, pulling and pushing. It is this versatility, along with the long travel it offers and its relative portability, which make the farm jack so popular with off-road drivers. Safety standards National and international standards have been developed to standardize the safety and performance requirements for jacks and other lifting devices. Selection of the standard is an agreement between the purchaser and the manufacturer, and has some significance in the design of the jack. In the United States, ASME has developed the Safety Standard for Portable Automotive Service Equipment, last revised in 2014, including requirements for hydraulic hand jacks, transmission jacks, emergency tire changing jacks, service jacks, fork lift jacks, and other lifting devices.
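Two of the jack types described above lend themselves to a back-of-the-envelope calculation. For a hydraulic jack, Pascal's principle means the pump pressure acts equally on the small plunger and the large ram, so force scales with piston area; for an inflatable lifting bag, capacity is roughly the working pressure times the contact area. The following sketch uses made-up example dimensions, not the ratings of any real product:

```python
def hydraulic_lift_force(pump_force_n, pump_area_cm2, ram_area_cm2):
    """Pascal's principle: the pump pressure acts on both pistons,
    so output force grows with the ratio of ram area to plunger area."""
    pressure = pump_force_n / pump_area_cm2      # N per cm^2
    return pressure * ram_area_cm2               # N

def bag_lift_capacity_lbf(pressure_psi, contact_area_in2):
    """Approximate capacity of an inflatable lifting bag: pressure times contact area."""
    return pressure_psi * contact_area_in2

# 200 N of hand force on a 2 cm^2 plunger driving a 40 cm^2 ram lifts about 4000 N.
print(hydraulic_lift_force(200, 2, 40))
# A 20 in x 20 in bag at 118 psi (high pressure) vs. 7.25 psi (low pressure).
print(bag_lift_capacity_lbf(118, 20 * 20), bag_lift_capacity_lbf(7.25, 20 * 20))
```

The price of the hydraulic force multiplication is stroke: with a 20:1 area ratio the plunger must travel twenty times farther than the ram rises, which is why lifting proceeds in many short pump strokes.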
Technology
Tools
null
6044675
https://en.wikipedia.org/wiki/Faber%E2%80%93Jackson%20relation
Faber–Jackson relation
The Faber–Jackson relation provided the first empirical power-law relation between the luminosity and the central stellar velocity dispersion of elliptical galaxy, and was presented by the astronomers Sandra M. Faber and Robert Earl Jackson in 1976. Their relation can be expressed mathematically as: with the index approximately equal to 4. In 1962, Rudolph Minkowski had discovered and wrote that a "correlation between velocity dispersion and [luminosity] exists, but it is poor" and that "it seems important to extend the observations to more objects, especially at low and medium absolute magnitudes". This was important because the value of depends on the range of galaxy luminosities that is fitted, with a value of 2 for low-luminosity elliptical galaxies discovered by a team led by Roger Davies, and a value of 5 reported by Paul L. Schechter for luminous elliptical galaxies. The Faber–Jackson relation is understood as a projection of the fundamental plane of elliptical galaxies. One of its main uses is as a tool for determining distances to external galaxies. Theory The gravitational potential of a mass distribution of radius and mass is given by the expression: Where α is a constant depending e.g. on the density profile of the system and G is the gravitational constant. For a constant density, The kinetic energy is: (Recall is the 1-dimensional velocity dispersion. Therefore, .) From the virial theorem ( ) it follows If we assume that the mass to light ratio, , is constant, e.g. we can use this and the above expression to obtain a relation between and : Let us introduce the surface brightness, and assume this is a constant (which from a fundamental theoretical point of view, is a totally unjustified assumption) to get Using this and combining it with the relation between and , this results in and by rewriting the above expression, we finally obtain the relation between luminosity and velocity dispersion: that is Given that massive galaxies originate from homologous merging, and the fainter ones from dissipation, the assumption of constant surface brightness can no longer be supported. Empirically, surface brightness exhibits a peak at about . The revised relation then becomes for the less massive galaxies, and for the more massive ones. With these revised formulae, the fundamental plane splits into two planes inclined by about 11 degrees to each other. Even first-ranked cluster galaxies do not have constant surface brightness. A claim supporting constant surface brightness was presented by astronomer Allan R. Sandage in 1972 based on three logical arguments and his own empirical data. In 1975, Donald Gudehus showed that each of the logical arguments was incorrect and that first-ranked cluster galaxies exhibited a standard deviation of about half a magnitude. Estimating distances to galaxies Like the Tully–Fisher relation, the Faber–Jackson relation provides a means of estimating the distance to a galaxy, which is otherwise hard to obtain, by relating it to more easily observable properties of the galaxy. In the case of elliptical galaxies, if one can measure the central stellar velocity dispersion, which can be done relatively easily by using spectroscopy to measure the Doppler shift of light emitted by the stars, then one can obtain an estimate of the true luminosity of the galaxy via the Faber–Jackson relation. This can be compared to the apparent magnitude of the galaxy, which provides an estimate of the distance modulus and, hence, the distance to the galaxy. 
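As a concrete illustration of the procedure just described, the sketch below turns a measured velocity dispersion into a rough distance via an assumed Faber–Jackson calibration. The reference values sigma_ref, M_ref and the slope are placeholders standing in for a real calibration, not measured quantities:

```python
import math

def faber_jackson_distance_mpc(sigma_kms, m_apparent,
                               sigma_ref=200.0, M_ref=-21.0, slope=4.0):
    """Schematic distance estimate from the Faber-Jackson relation.

    sigma_kms   : measured central velocity dispersion (km/s)
    m_apparent  : apparent magnitude of the galaxy
    sigma_ref, M_ref, slope : placeholder calibration (L ~ sigma^slope),
                              expressed as an absolute magnitude at sigma_ref
    """
    # L ~ sigma^slope  =>  M = M_ref - 2.5*slope*log10(sigma/sigma_ref)
    M_absolute = M_ref - 2.5 * slope * math.log10(sigma_kms / sigma_ref)
    distance_modulus = m_apparent - M_absolute
    return 10 ** ((distance_modulus + 5) / 5) / 1e6  # parsecs -> Mpc

print(faber_jackson_distance_mpc(sigma_kms=250.0, m_apparent=13.5))
```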
By combining a galaxy's central velocity dispersion with measurements of its central surface brightness and radius parameter, it is possible to improve the estimate of the galaxy's distance even more. This standard yardstick, or "reduced galaxian radius-parameter", devised by Gudehus in 1991, can yield distances, free of systematic bias, accurate to about 31%.
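For reference, the chain of steps outlined in the Theory section above, under its stated assumptions of a constant mass-to-light ratio and constant surface brightness, can be summarized compactly as (with α the profile-dependent constant and G the gravitational constant):

```latex
U = -\alpha\,\frac{G M^2}{R}, \qquad
K = \tfrac{3}{2} M \sigma^2, \qquad
2K + U = 0 \;\Rightarrow\; M = \frac{3 \sigma^2 R}{\alpha G} \propto \sigma^2 R ,
\\[4pt]
\frac{M}{L} = \text{const} \;\Rightarrow\; L \propto \sigma^2 R, \qquad
B \equiv \frac{L}{R^2} = \text{const} \;\Rightarrow\; R \propto L^{1/2},
\qquad\text{hence}\qquad
L \propto \sigma^2 L^{1/2} \;\Rightarrow\; L \propto \sigma^4 .
```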
Physical sciences
Galaxy classification
Astronomy
731386
https://en.wikipedia.org/wiki/Toy%20dog
Toy dog
Toy dog traditionally refers to a very small dog or a grouping of small and very small breeds of dog. A toy dog may be of any of various dog types. Types of dogs referred to as toy dogs may include spaniels, pinschers and terriers that have been bred down in size. Not all toy dogs are lap dogs. Small dogs Dogs found in the toy group of breed registries may be of the very ancient lapdog type, or they may be small versions of hunting dogs or working dogs, bred down in size for a particular kind of work or to create a pet of convenient size. In the past, very small dogs not used for hunting were kept as symbols of affluence, as watchdogs, and for the health function of attracting fleas away from their owners. Breeds Most major dog clubs in the English-speaking world have a toy group, under one exact name or another, in which they place breeds of dog that the kennel club categorizes as toy, based on size and tradition. The Kennel Club (UK), the Canadian Kennel Club, the American Kennel Club, the Australian National Kennel Council, and the New Zealand Kennel Club all have a group named "Toy", although they may not all categorise the same breeds in this category. The United States has a second major kennel club, the United Kennel Club (UKC, originally formed for hunting and working breeds, though general today), and it does not recognize such a group; instead, small dogs are placed with larger dogs of their type, or into a UKC's "Companion Dog" group. the American Kennel Club began debating whether or not to change the name of their "Toy" group to "Companion", in order to emphasise that dogs are not playthings, but the name change was resisted by traditionalists. The breeds in the "Companion and Toy" category of the Fédération Cynologique Internationale are: Bichon Frisé Bichon Havanais, Havanese Bolognese Boston Terrier Bouledogue Français, French Bulldog Caniche, Poodle Cavalier King Charles Spaniel Chihuahueño, Chihuahua Chin, Japanese Chin Chinese Crested Dog Coton de Tuléar Epagneul Nain Continental, Continental Toy Spaniel: Papillon, Phalène Griffon Belge Griffon Bruxellois, Brussels Griffon King Charles Spaniel Kromfohrländer Lhasa Apso Maltese Pekingese Petit Brabançon, Small Brabant Griffon Petit Chien Lion, Löwchen, Little Lion Dog Pug Russkiy Toy Shih Tzu Tibetan Spaniel Tibetan Terrier Small or toy-sized breeds not classified by the FCI in its toy group include: Affenpinscher Australian Silky Terrier Italian Greyhound Miniature Pinscher Dwarf German Spitz: Pomeranian Volpino Italiano Yorkshire Terrier Xoloitzcuintle Member kennel clubs of the Fédération Cynologique Internationale and non-member clubs may use slightly different nomenclature, depending on the country. The term toy is only used to group dogs for show purposes. Some breeds without FCI recognition are recognised by The Kennel Club of Great Britain (UK), by the Canadian Kennel Club (Can), or by the American Kennel Club: Chihuahua (Long Coat) (UK, Aus, NZ, Can) Chihuahua (Smooth Coat) (UK, Aus, NZ) Chihuahua (Short Coat) (Can) Mi-Ki (US) Toy Fox Terrier (US) Toy Manchester Terrier (Can, US) The major national kennel club for each country will have its own list of breeds that it recognizes as Toy. In addition, some new or newly documented rare breeds may be awaiting approval by a given kennel club. Some new breeds may currently be recognized only by their breed clubs. 
Some rare new breeds have been given breed names, but may only be available from the breeder or breeders who are developing the breed, and may not yet be recognized by any kennel club. In addition to the major registries, there are a plethora of sporting clubs, breed clubs, and internet-based breed registries and businesses in which dogs may be registered in whatever way the owner or seller wishes.
Biology and health sciences
Dogs
Animals
731780
https://en.wikipedia.org/wiki/Geometrical%20optics
Geometrical optics
Geometrical optics, or ray optics, is a model of optics that describes light propagation in terms of rays. The ray in geometrical optics is an abstraction useful for approximating the paths along which light propagates under certain circumstances. The simplifying assumptions of geometrical optics include that light rays: propagate in straight-line paths as they travel in a homogeneous medium bend, and in particular circumstances may split in two, at the interface between two dissimilar media follow curved paths in a medium in which the refractive index changes may be absorbed or reflected. Geometrical optics does not account for certain optical effects such as diffraction and interference, which are considered in physical optics. This simplification is useful in practice; it is an excellent approximation when the wavelength is small compared to the size of structures with which the light interacts. The techniques are particularly useful in describing geometrical aspects of imaging, including optical aberrations. Explanation A light ray is a line or curve that is perpendicular to the light's wavefronts (and is therefore collinear with the wave vector). A slightly more rigorous definition of a light ray follows from Fermat's principle, which states that the path taken between two points by a ray of light is the path that can be traversed in the least time. Geometrical optics is often simplified by making the paraxial approximation, or "small angle approximation". The mathematical behavior then becomes linear, allowing optical components and systems to be described by simple matrices. This leads to the techniques of Gaussian optics and paraxial ray tracing, which are used to find basic properties of optical systems, such as approximate image and object positions and magnifications. Reflection Glossy surfaces such as mirrors reflect light in a simple, predictable way. This allows for production of reflected images that can be associated with an actual (real) or extrapolated (virtual) location in space. With such surfaces, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal, a line perpendicular to the surface at the point where the ray hits. The incident and reflected rays lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal. This is known as the Law of Reflection. For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. (The magnification of a flat mirror is equal to one.) The law also implies that mirror images are parity inverted, which is perceived as a left-right inversion. Mirrors with curved surfaces can be modeled by ray tracing and using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space. In particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with magnification greater than or less than one, and the image can be upright or inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real and can be projected onto a screen. 
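The law of reflection stated above has a compact vector form used in ray tracing: the reflected direction is the incident direction with its component along the surface normal reversed. A minimal sketch in plain Python (the function name is illustrative):

```python
def reflect(d, n):
    """Reflect incident direction d off a surface with unit normal n.
    Implements r = d - 2 (d . n) n, which preserves the angle of incidence."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray heading down and to the right hits a horizontal mirror (normal pointing up):
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # -> (1.0, 1.0, 0.0)
```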
Refraction Refraction occurs when light travels through an area of space that has a changing index of refraction. The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction and another medium with index of refraction . In such situations, Snell's Law describes the resulting deflection of the light ray: where and are the angles between the normal (to the interface) and the incident and refracted waves, respectively. This phenomenon is also associated with a changing speed of light as seen from the definition of index of refraction provided above which implies: where and are the wave velocities through the respective media. Various consequences of Snell's Law include the fact that for light rays traveling from a material with a high index of refraction to a material with a low index of refraction, it is possible for the interaction with the interface to result in zero transmission. This phenomenon is called total internal reflection and allows for fiber optics technology. As light signals travel down a fiber optic cable, they undergo total internal reflection allowing for essentially no light lost over the length of the cable. It is also possible to produce polarized light rays using a combination of reflection and refraction: When a refracted ray and the reflected ray form a right angle, the reflected ray has the property of "plane polarization". The angle of incidence required for such a scenario is known as Brewster's angle. Snell's Law can be used to predict the deflection of light rays as they pass through "linear media" as long as the indexes of refraction and the geometry of the media are known. For example, the propagation of light through a prism results in the light ray being deflected depending on the shape and orientation of the prism. Additionally, since different frequencies of light have slightly different indexes of refraction in most materials, refraction can be used to produce dispersion spectra that appear as rainbows. The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton. Some media have an index of refraction which varies gradually with position and, thus, light rays curve through the medium rather than travel in straight lines. This effect is what is responsible for mirages seen on hot days where the changing index of refraction of the air causes the light rays to bend creating the appearance of specular reflections in the distance (as if on the surface of a pool of water). Material that has a varying index of refraction is called a gradient-index (GRIN) material and has many useful properties used in modern optical scanning technologies including photocopiers and scanners. The phenomenon is studied in the field of gradient-index optics. A device which produces converging or diverging light rays due to refraction is known as a lens. Thin lenses produce focal points on either side that can be modeled using the lensmaker's equation. In general, two types of lenses exist: convex lenses, which cause parallel light rays to converge, and concave lenses, which cause parallel light rays to diverge. The detailed prediction of how images are produced by these lenses can be made using ray-tracing similar to curved mirrors. 
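The refraction rules discussed in this section, Snell's law, the critical angle for total internal reflection, and Brewster's angle, are straightforward to evaluate numerically. A small illustrative sketch (the indices and angles in the example are arbitrary):

```python
import math

def refract_angle(n1, n2, theta1_deg):
    """Snell's law: n1 sin(theta1) = n2 sin(theta2).
    Returns the refracted angle in degrees, or None for total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # no transmitted ray: total internal reflection
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """Smallest incidence angle giving total internal reflection (requires n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

def brewster_angle(n1, n2):
    """Incidence angle at which the reflected ray is completely plane-polarized."""
    return math.degrees(math.atan2(n2, n1))

print(refract_angle(1.0, 1.5, 30.0))   # air -> glass, ~19.5 degrees
print(critical_angle(1.5, 1.0))        # glass -> air, ~41.8 degrees
print(brewster_angle(1.0, 1.5))        # ~56.3 degrees
```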
Similarly to curved mirrors, thin lenses follow a simple equation that determines the location of the images given a particular focal length () and object distance where is the distance associated with the image and is considered by convention to be negative if on the same side of the lens as the object and positive if on the opposite side of the lens. The focal length f is considered negative for concave lenses. Incoming parallel rays are focused by a convex lens into an inverted real image one focal length from the lens, on the far side of the lens. Rays from an object at finite distance are focused further from the lens than the focal distance; the closer the object is to the lens, the further the image is from the lens. With concave lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at an upright virtual image one focal length from the lens, on the same side of the lens that the parallel rays are approaching on. Rays from an object at finite distance are associated with a virtual image that is closer to the lens than the focal length, and on the same side of the lens as the object. The closer the object is to the lens, the closer the virtual image is to the lens. Likewise, the magnification of a lens is given by where the negative sign is given, by convention, to indicate an upright object for positive values and an inverted object for negative values. Similar to mirrors, upright images produced by single lenses are virtual while inverted images are real. Lenses suffer from aberrations that distort images and focal points. These are due to both to geometrical imperfections and due to the changing index of refraction for different wavelengths of light (chromatic aberration). Underlying mathematics As a mathematical study, geometrical optics emerges as a short-wavelength limit for solutions to hyperbolic partial differential equations (Sommerfeld–Runge method) or as a property of propagation of field discontinuities according to Maxwell's equations (Luneburg method). In this short-wavelength limit, it is possible to approximate the solution locally by where satisfy a dispersion relation, and the amplitude varies slowly. More precisely, the leading order solution takes the form The phase can be linearized to recover large wavenumber , and frequency . The amplitude satisfies a transport equation. The small parameter enters the scene due to highly oscillatory initial conditions. Thus, when initial conditions oscillate much faster than the coefficients of the differential equation, solutions will be highly oscillatory, and transported along rays. Assuming coefficients in the differential equation are smooth, the rays will be too. In other words, refraction does not take place. The motivation for this technique comes from studying the typical scenario of light propagation where short wavelength light travels along rays that minimize (more or less) its travel time. Its full application requires tools from microlocal analysis. Sommerfeld–Runge method The method of obtaining equations of geometrical optics by taking the limit of zero wavelength was first described by Arnold Sommerfeld and J. Runge in 1911. Their derivation was based on an oral remark by Peter Debye. Consider a monochromatic scalar field , where could be any of the components of electric or magnetic field and hence the function satisfy the wave equation where with being the speed of light in vacuum. Here, is the refractive index of the medium. 
Without loss of generality, let us introduce to convert the equation to Since the underlying principle of geometrical optics lies in the limit , the following asymptotic series is assumed, For large but finite value of , the series diverges, and one has to be careful in keeping only appropriate first few terms. For each value of , one can find an optimum number of terms to be kept and adding more terms than the optimum number might result in a poorer approximation. Substituting the series into the equation and collecting terms of different orders, one finds in general, The first equation is known as the eikonal equation, which determines the eikonal is a Hamilton–Jacobi equation, written for example in Cartesian coordinates becomes The remaining equations determine the functions . Luneburg method The method of obtaining equations of geometrical optics by analysing surfaces of discontinuities of solutions to Maxwell's equations was first described by Rudolf Karl Luneburg in 1944. It does not restrict the electromagnetic field to have a special form required by the Sommerfeld-Runge method which assumes the amplitude and phase satisfy the equation . This condition is satisfied by e.g. plane waves but is not additive. The main conclusion of Luneburg's approach is the following: Theorem. Suppose the fields and (in a linear isotropic medium described by dielectric constants and ) have finite discontinuities along a (moving) surface in described by the equation Then Maxwell's equations in the integral form imply that satisfies the eikonal equation: where is the index of refraction of the medium (Gaussian units). An example of such surface of discontinuity is the initial wave front emanating from a source that starts radiating at a certain instant of time. The surfaces of field discontinuity thus become geometrical optics wave fronts with the corresponding geometrical optics fields defined as: Those fields obey transport equations consistent with the transport equations of the Sommerfeld-Runge approach. Light rays in Luneburg's theory are defined as trajectories orthogonal to the discontinuity surfaces and can be shown to obey Fermat's principle of least time thus establishing the identity of those rays with light rays of standard optics. The above developments can be generalised to anisotropic media. The proof of Luneburg's theorem is based on investigating how Maxwell's equations govern the propagation of discontinuities of solutions. The basic technical lemma is as follows: A technical lemma. Let be a hypersurface (a 3-dimensional manifold) in spacetime on which one or more of: , , , , have a finite discontinuity. Then at each point of the hypersurface the following formulas hold: where the operator acts in the -space (for every fixed ) and the square brackets denote the difference in values on both sides of the discontinuity surface (set up according to an arbitrary but fixed convention, e.g. the gradient pointing in the direction of the quantities being subtracted from). Sketch of proof. Start with Maxwell's equations away from the sources (Gaussian units): Using Stokes' theorem in one can conclude from the first of the above equations that for any domain in with a piecewise smooth (3-dimensional) boundary the following is true: where is the projection of the outward unit normal of onto the 3D slice , and is the volume 3-form on . 
Similarly, one establishes the following from the remaining Maxwell's equations: Now by considering arbitrary small sub-surfaces of and setting up small neighbourhoods surrounding in , and subtracting the above integrals accordingly, one obtains: where denotes the gradient in the 4D -space. And since is arbitrary, the integrands must be equal to 0 which proves the lemma. It's now easy to show that as they propagate through a continuous medium, the discontinuity surfaces obey the eikonal equation. Specifically, if and are continuous, then the discontinuities of and satisfy: and . In this case the last two equations of the lemma can be written as: Taking the cross product of the second equation with and substituting the first yields: The continuity of and the second equation of the lemma imply: , hence, for points lying on the surface only: (Notice the presence of the discontinuity is essential in this step as we'd be dividing by zero otherwise.) Because of the physical considerations one can assume without loss of generality that is of the following form: , i.e. a 2D surface moving through space, modelled as level surfaces of . (Mathematically exists if by the implicit function theorem.) The above equation written in terms of becomes: i.e., which is the eikonal equation and it holds for all , , , since the variable is absent. Other laws of optics like Snell's law and Fresnel formulae can be similarly obtained by considering discontinuities in and . General equation using four-vector notation In four-vector notation used in special relativity, the wave equation can be written as and the substitution leads to Therefore, the eikonal equation is given by Once eikonal is found by solving the above equation, the wave four-vector can be found from
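Both the Sommerfeld–Runge and Luneburg routes above arrive at the same central result, the eikonal equation. Writing ψ for the eikonal and n for the refractive index, it reads:

```latex
\left(\nabla \psi\right)^2 = n^2 ,
\qquad\text{i.e.}\qquad
\left(\frac{\partial \psi}{\partial x}\right)^2
+ \left(\frac{\partial \psi}{\partial y}\right)^2
+ \left(\frac{\partial \psi}{\partial z}\right)^2
= n^2(x, y, z).
```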
Physical sciences
Optics
Physics
731884
https://en.wikipedia.org/wiki/Electromagnetic%20four-potential
Electromagnetic four-potential
An electromagnetic four-potential is a relativistic vector function from which the electromagnetic field can be derived. It combines an electric scalar potential and a magnetic vector potential into a single four-vector. As measured in a given frame of reference, and for a given gauge, the first component of the electromagnetic four-potential is conventionally taken to be the electric scalar potential, and the other three components make up the magnetic vector potential. While both the scalar and vector potential depend upon the frame, the electromagnetic four-potential is Lorentz covariant. Like other potentials, many different electromagnetic four-potentials correspond to the same electromagnetic field, depending upon the choice of gauge. This article uses tensor index notation and a fixed Minkowski metric sign convention.
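A compact restatement of the definition sketched above, in SI units and in the contravariant form (index placement depends on the chosen metric signature; the field expressions themselves do not):

```latex
A^{\alpha} = \left(\frac{\varphi}{c},\ \mathbf{A}\right),
\qquad
\mathbf{E} = -\nabla \varphi - \frac{\partial \mathbf{A}}{\partial t},
\qquad
\mathbf{B} = \nabla \times \mathbf{A}.
```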
Physical sciences
Electrodynamics
Physics
731893
https://en.wikipedia.org/wiki/Grey
Grey
Grey (more frequent British English) or gray (more frequent American English) is an intermediate color between black and white. It is a neutral or achromatic color, meaning that it has no chroma and therefore no hue. It is the color of a cloud-covered sky, of ash, and of lead. The first recorded use of grey as a color name in the English language was in 700 CE. Grey is the dominant spelling in European and Commonwealth English, while gray is more common in American English; however, both spellings are valid in both varieties of English. In Europe and North America, surveys show that gray is the color most commonly associated with neutrality, conformity, boredom, uncertainty, old age, indifference, and modesty. Only one percent of respondents chose it as their favorite color. Etymology Grey comes from the Middle English or , from the Old English , and is related to the Dutch and German . There are no certain cognates outside Germanic languages; terms such as Spanish and Italian are considered Germanic loanwords from Medieval Latin griseus. The first recorded use of grey as a color name in the English language was in 700 AD. The distinction between grey and gray spellings in usual Commonwealth and American English respectively developed the 20th century. In history and art Antiquity through the Middle Ages In antiquity and the Middle Ages, grey was the color of undyed wool, and thus was the color most commonly worn by peasants and the poor. It was also the color worn by Cistercian monks and friars of the Franciscan and Capuchin orders as a symbol of their vows of humility and poverty. Franciscan friars in England and Scotland were commonly known as the grey friars, and that name is now attached to many places in Great Britain. Renaissance and the Baroque During the Renaissance and the Baroque, grey began to play an important role in fashion and art. Black became the most popular color of the nobility, particularly in Italy, France, and Spain, and grey and white were harmonious with it. Grey was also frequently used for the drawing of oil paintings, a technique called grisaille. The painting would first be composed in grey and white, and then the colors, made with thin transparent glazes, would be added on top. The grisaille beneath would provide the shading, visible through the layers of color. Sometimes, the grisaille was simply left uncovered, giving the appearance of carved stone. Grey was a particularly good background color for gold and for skin tones. It became the most common background for the portraits of Rembrandt van Rijn and for many of the paintings of El Greco, who used it to highlight the faces and costumes of the central figures. The palette of Rembrandt was composed almost entirely of somber colors. He composed his warm greys out of black pigments made from charcoal or burnt animal bones, mixed with lead white or a white made of lime, which he warmed with a little red lake color from cochineal or madder. In one painting, the portrait of Margaretha de Geer (1661), one part of a grey wall in the background is painted with a layer of dark brown over a layer of orange, red, and yellow earths, mixed with ivory black and some lead white. Over this he put an additional layer of glaze made of mixture of blue smalt, red ochre, and yellow lake. Using these ingredients and many others, he made greys which had, according to art historian Philip Ball, "an incredible subtlety of pigmentation". 
The warm, dark and rich greys and browns served to emphasize the golden light on the faces in the paintings. Eighteenth and nineteenth centuries Grey became a highly fashionable color in the 18th century, both for women's dresses and for men's waistcoats and coats. It looked particularly luminous coloring the silk and satin fabrics worn by the nobility and wealthy. Women's fashion in the 19th century was dominated by Paris, while men's fashion was set by London. The grey business suit appeared in the mid-19th century in London; light grey in summer, dark grey in winter; replacing the more colorful palette of men's clothing early in the century. The clothing of women working in the factories and workshops of Paris in the 19th century was usually grey. This gave them the name of grisettes. "Gris" or grey also meant drunk, and the name "grisette" was also given to the lower class of Parisian prostitutes. Grey also became a common color for military uniforms; in an age of rifles with longer range, soldiers in grey were less visible as targets than those in blue or red. Grey was the color of the uniforms of the Confederate Army during the American Civil War, and of the Prussian Army for active service wear from 1910 onwards. Several artists of the mid-19th century used tones of grey to create memorable paintings; Jean-Baptiste-Camille Corot used tones of green-grey and blue grey to give harmony to his landscapes, and James McNeill Whistler created a special grey for the background of the portrait of his mother, and for his own self-portrait. Whistler's arrangement of tones of grey had an effect on the world of music, on the French composer Claude Debussy. In 1894, Debussy wrote to violinist Eugène Ysaÿe describing his Nocturnes as "an experiment in the combinations that can be obtained from one color – what a study in grey would be in painting". Twentieth and twenty-first centuries In the late 1930s, grey became a symbol of industrialization and war. It was the dominant color of Pablo Picasso's celebrated painting about the horrors of the Spanish Civil War, Guernica. After the war, the grey business suit became a metaphor for uniformity of thought, popularized in such books as The Man in the Gray Flannel Suit (1955), which became a successful film in 1956. In the sciences, nature, and technology Storm clouds The whiteness or darkness of clouds is a function of their depth. Small, fluffy white clouds in summer look white because the sunlight is being scattered by the tiny water droplets they contain, and that white light comes to the viewer's eye. However, as clouds become larger and thicker, the white light cannot penetrate through the cloud, and is reflected off the top. Clouds look darkest grey during thunderstorms, when they can be as much as 20,000 to 30,000 feet high. Stratiform clouds are a layer of clouds that covers the entire sky, and which have a depth of between a few hundred to a few thousand feet thick. The thicker the clouds, the darker they appear from below, because little of the sunlight is able to pass through. From above, in an airplane, the same clouds look perfectly white, but from the ground the sky looks gloomy and grey. The greying of hair The color of a person's hair is created by the pigment melanin, found in the core of each hair. Melanin is also responsible for the color of the skin and of the eyes. There are only two types of pigment: dark (eumelanin) or light (phaeomelanin). Combined in various combinations, these pigments create all natural hair colors. 
Melanin itself is the product of a specialized cell, the melanocyte, which is found in each hair follicle, from which the hair grows. As hair grows, the melanocyte injects melanin into the hair cells, which contain the protein keratin and which makes up our hair, skin, and nails. As long as the melanocytes continue injecting melanin into the hair cells, the hair retains its original color. At a certain age, however, which varies from person to person, the amount of melanin injected is reduced and eventually stops. The hair, without pigment, turns grey and eventually white. The reason for this decline of production of melanocytes is uncertain. In the February 2005 issue of Science, a team of Harvard scientists suggested that the cause was the failure of the melanocyte stem cells to maintain the production of the essential pigments, due to age or genetic factors, after a certain period of time. For some people, the breakdown comes in their twenties; for others, many years later. According to the site of the magazine Scientific American, "Generally speaking, among Caucasians 50 percent are 50 percent grey by age 50." Adult male gorillas also develop silver hair, but only on their backs – see Physical characteristics of gorillas. Optics Over the centuries, artists have traditionally created grey by mixing black and white in various proportions. They added a little red to make a warmer grey, or a little blue for a cooler grey. Artists could also make a grey by mixing two complementary colors, such as orange and blue. Today the grey on televisions, computer displays, and telephones is usually created using the RGB color model. Red, green, and blue light combined at full intensity on the black screen makes white; by lowering the intensity, it is possible to create shades of grey. In printing, grey is usually obtained with the CMYK color model, using cyan, magenta, yellow, and black. Grey is produced either by using black and white, or by combining equal amounts of cyan, magenta, and yellow. Most greys have a cool or warm cast to them, as the human eye can detect even a minute amount of color saturation. Yellow, orange, and red create a "warm grey". Green, blue, and violet create a "cool grey". When no color is added, the color is "neutral grey", "achromatic grey", or simply "grey". Images consisting wholly of black, white and greys are called monochrome, black-and-white, or greyscale. RGB model Grey values result when r = g = b, for the color (r, g, b) CMYK model Grey values are produced by c = m = y = 0, for the color (c, m, y, k). Lightness is adjusted by varying k. In theory, any mixture where c = m = y is neutral, but in practice such mixtures are often a muddy brown. HSL and HSV model Achromatic greys have no hue, so the h code is marked as "undefined" using a dash: --; greys also result whenever s is 0 or undefined, as is the case when v is 0 or l is 0 or 1 Web colors There are several tones of grey available for use with HTML and Cascading Style Sheets (CSS) as named colors, while 254 true greys are available by specification of a hex triplet for the RGB value. All are spelled gray, using the spelling grey can cause errors. This spelling was inherited from the X11 color list. Internet Explorer's Trident browser engine does not recognize grey and renders it green. Another anomaly is that gray is in fact much darker than the X11 color marked darkgray; this is because of a conflict with the original HTML gray and the X11 gray, which is closer to HTML's silver. 
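A tiny illustration of the RGB description above: any colour whose red, green and blue components are equal is a neutral grey, and the named web colour gray sits at intensity 128, i.e. #808080. The helper below is illustrative, not part of any CSS or HTML specification:

```python
def grey_hex(intensity):
    """Return the hex triplet for the neutral grey with r = g = b = intensity (0-255)."""
    if not 0 <= intensity <= 255:
        raise ValueError("intensity must be in 0..255")
    return "#{0:02x}{0:02x}{0:02x}".format(intensity)

print(grey_hex(0))    # #000000, black
print(grey_hex(128))  # #808080, the web colour named 'gray'
print(grey_hex(255))  # #ffffff, white
```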
The three slategray colors are not themselves on the greyscale, but are slightly saturated toward cyan (green + blue). Since there are an even (256, including black and white) number of unsaturated tones of grey, there are two grey tones straddling the midpoint in the 8-bit greyscale. The color name gray has been assigned the lighter of the two shades (128, also known as #808080), due to rounding up. Pigments Until the 19th century, artists traditionally created grey by simply combining black and white. Rembrandt Van Rijn, for instance, usually used lead white and either carbon black or ivory black, along with touches of either blues or reds to cool or warm the grey. In the early 19th century, a new grey, Payne's grey, appeared on the market. Payne's grey is a dark blue-grey, a mixture of ultramarine and black or of ultramarine and sienna. It is named after William Payne, a British artist who painted watercolors in the late 18th century. The first recorded use of Payne's grey as a color name in English was in 1835. Animal color Grey is a very common color for animals, birds, and fish, ranging in size from whales to mice. It provides a natural camouflage and allows them to blend with their surroundings. Grey matter of the brain The substance that composes the brain is sometimes referred to as grey matter, or "the little grey cells", so the color grey is associated with things intellectual. However, the living human brain is actually pink in color; it only turns grey when dead. Nanotechnology and grey goo Grey goo is a hypothetical end-of-the-world scenario, also known as ecophagy: out-of-control self-replicating nanobots consume all living matter on Earth while building more of themselves. Grey noise In sound engineering, grey noise is random noise subjected to an equal-loudness contour, such as an inverted A-weighting curve, over a given range of frequencies, giving the listener the perception that it is equally loud at all frequencies. In culture Religion In the Christian religion, grey is the color of ashes, and so a biblical symbol of mourning and repentance, described as sackcloth and ashes. It can be used during Lent or on special days of fasting and prayer. As the color of humility and modesty, grey is worn by friars of the Order of Friars Minor Capuchin and Franciscan order as well as monks of the Cistercian order. Grey cassocks are worn by clergy of the Brazilian Catholic Apostolic Church. Buddhist monks and priests in Japan and Korea will often wear a sleeved grey, brown, or black outer robe. Taoist priests in China also often wear grey. Politics Grey is rarely used as a color by political parties, largely because of its common association with conformity, boredom and indecision. An example of a political party using grey as a color are the German Grey Panthers. The term "grey power" or "the grey vote" is sometimes used to describe the influence of older voters as a voting bloc. In the United States, older people are more likely to vote, and usually vote to protect certain social benefits, such as Social Security. Greys is a term sometimes used pejoratively by environmentalists in the green movement to describe those who oppose environmental measures and supposedly prefer the grey of concrete and cement. Military During the American Civil War, the soldiers of the Confederate Army wore grey uniforms. At the beginning of the war, the armies of the North and of the South had very similar uniforms; some Confederate units wore blue, and some Union units wore grey. 
There naturally was confusion, and sometimes soldiers fired by mistake at soldiers of their own army. On June 6, 1861, the Confederate government issued regulations standardizing the army uniform and establishing cadet grey as the uniform color. This was (and still is) the color of the uniform of cadets at the United States Military Academy at West Point, and cadets at the Virginia Military Institute, which produced many officers for the Confederacy. The new uniforms were designed by Nicola Marschall, a German-American artist, who also designed the original Confederate flag. He closely followed the design of contemporary French and Austrian military uniforms. Grey was not chosen for its camouflage value; this benefit was not appreciated for several more decades. The South lacked a major dye industry, though, and grey dyes were inexpensive and easy to manufacture. While some units had uniforms colored with good-quality dyes, which were a solid bluish-grey, others had uniforms colored with vegetable dyes made from sumac or logwood, which quickly faded in sunshine to the yellowish color of butternut squash. The German Army wore grey uniforms from 1907 until 1945, during both the First World War and Second World War. The color chosen was a grey-green called field grey (). It was chosen because it was less visible at a distance than the previous German uniforms, which were Prussian blue. It was one of the first uniform colors to be chosen for its camouflage value, important in the new age of smokeless powder and more accurate rifles and machine guns. It gave the Germans a distinct advantage at the beginning of the First World War, when the French soldiers were dressed in blue jackets and red trousers. The Finnish Army also began using grey uniforms on the German model. Some of the more recent uniforms of the German Army and East German Army were field grey, as were some uniforms of the Swedish army. The formal dress (M/83) of the Finnish Army is grey. The Army of Chile wears field grey today. The grey suit During the 19th century, women's fashions were largely dictated by Paris, while London set fashions for men. The intent of a business suit was above all to show seriousness, and to show one's position in business and society. Over the course of the century, bright colors disappeared from men's fashion, and were largely replaced by a black or dark charcoal grey frock coat in winter, and lighter greys in summer. In the early 20th century, the frock coat was gradually replaced by the lounge suit, a less formal version of evening dress, which was also usually black or charcoal grey. In the 1930s the English suit style was called the drape suit, with wide shoulders and a nipped waist, usually dark or light grey. After World War II, the style changed to a slimmer fit called the continental cut, but the color remained grey. Sports In baseball, grey is the color typically used for road uniforms. This came about because in the 19th and early 20th century, away teams did not normally have access to laundry facilities on the road, thus stains were not noticeable on the darker grey uniforms as opposed to the white uniforms worn by the home team. The Vegas Golden Knights of the National Hockey League features steel grey as its primary color and its current alternate uniforms are steel grey. New Caledonia national football teams have worn grey home shirts and the color is featured on its football badge. Georgetown University's basketball teams traditionally wear grey uniforms at home. 
Gay culture In gay slang, a grey queen is a gay person who works for the financial services industry. This term originates from the fact that in the 1950s, people who worked in this profession often wore grey flannel suits. Associations and symbolism In America and Europe, grey is one of the least popular colors; In a European survey, only one percent of men said it was their favorite color, and thirteen percent called it their least favorite color; the response from women was almost the same. According to color historian Eva Heller, "grey is too weak to be considered masculine, but too menacing to be considered a feminine color. It is neither warm nor cold, neither material or spiritual. With grey, nothing seems to be decided." It also denotes undefinedness and ambiguity, as in a grey area. Grey is the color most commonly associated in many cultures with the elderly and old age, because of the association with grey hair; it symbolizes the wisdom and dignity that come with experience and age. The New York Times is sometimes called The Grey Lady because of its long history and esteemed position in American journalism. Grey is the color most often associated in Europe and America with modesty.
Physical sciences
Color terms
null
732333
https://en.wikipedia.org/wiki/Leafy%20seadragon
Leafy seadragon
The leafy seadragon (Phycodurus eques) or Glauert's seadragon, is a marine fish. It is the only member of the genus Phycodurus in the family Syngnathidae, which includes seadragons, pipefish, and seahorses. It is found along the southern and western coasts of Australia. The name is derived from their appearance, with long leaf-like protrusions coming from all over the body. These protrusions are not used for propulsion; they serve only as camouflage. The leafy seadragon propels itself utilising a pair of pectoral fins on the sides of its neck and a dorsal fin on its back closer to the tail end. These small fins are almost completely transparent and difficult to see as they undulate minutely to move the creature sedately through the water, completing the illusion of floating seaweed. Popularly known as "leafies", they are the marine emblem of the state of South Australia and a focus for local marine conservation. Taxonomy The generic name is derived from the Greek words phûkos "seaweed" and derma "skin". Description Much like the seahorse, the leafy seadragon's name is derived from its resemblance to another creature (in this case, the mythical dragon). While not large, they are slightly larger than most seahorses, growing to about . They feed on plankton and small crustaceans. The lobes of skin that grow on the leafy seadragon provide camouflage, giving it the appearance of seaweed. It is able to maintain the illusion when swimming, appearing to move through the water like a piece of floating seaweed. It can also change colour to blend in, but this ability depends on the seadragon's diet, age, location, and stress level. The leafy seadragon is related to the pipefish and belongs to the family Syngnathidae, along with the seahorse. It differs from the seahorse in appearance, form of locomotion, and its inability to coil or grasp things with its tail. A related species is the weedy seadragon, which is multicoloured and grows weed-like fins, but is smaller than the leafy seadragon. Another unique feature is the small, circular gill openings covering tufted gills, very unlike the crescent-shaped gill openings and ridged gills of most fish species. Habitat and distribution The leafy seadragon is found only in southern Australian waters, from Wilson’s Promontory in Victoria at the eastern end of its range, westward to Jurien Bay, north of Perth in Western Australia. Individuals were once thought to have very restricted ranges; but further research has discovered that seadragons actually travel several hundred metres from their habitual locations, returning to the same spot using a strong sense of direction. They are mostly found over sand patches in waters up to deep, around kelp-covered rocks and clumps of sea grass. They are commonly sighted by scuba divers near Adelaide in South Australia, especially at Rapid Bay, Edithburgh, and Victor Harbor. Ecology Leafy seadragons usually live a solitary lifestyle. When the time comes, males court the females, they then pair up to breed. From the moment they hatch, leafy seadragons are completely independent. By the age of two, they are typically full grown and ready to breed. The species feeds by sucking up small crustaceans, such as amphipods and mysid shrimp, plankton, and larval fish through its long, pipe-like snout. Reproduction As with seahorses, the male leafy seadragon cares for the eggs. The female produces up to 250 bright pink eggs, then deposits them onto the male's tail with her ovipositor, a long tube. 
The eggs then attach themselves to a brood patch, which supplies them with oxygen. After about nine weeks, depending on water conditions, the eggs begin to hatch. The eggs turn a ripe purple or orange over this period, after which the male pumps his tail until the young emerge, a process which takes place over 24–48 hours. The male aids the hatching of the eggs by shaking his tail, and rubbing it against seaweed and rocks. Once born, the young seadragon is completely independent, eating small zooplankton until large enough to hunt mysids. Only about 5% of the eggs survive. Each newborn fry begins life with a small, externally attached yolk sac. This sac provides them with sustenance for their first few days of life. Despite this initial nutrition source, most fry instinctively learn to hunt and catch prey upon hatching, and become self-reliant before the sac is gone. Movement The leafy seadragon uses the fins along the sides of its head to steer and turn. However, its outer skin is fairly rigid, limiting mobility. Individual leafy seadragons have been observed remaining in one location for extended periods of time (up to 68 hours), but will sometimes move for lengthy periods. The tracking of one individual indicated it moved at up to per hour. Conservation Leafy seadragons are subject to many threats, both natural and man-made. They are caught by collectors and used in alternative medicine. They are vulnerable when first born and are slow swimmers, reducing their chance of escaping from a predator. Seadragons are sometimes washed ashore after storms. The species has become endangered through pollution and industrial runoff, as well as collection for the aquarium trade. In response to these dangers, the species has been totally protected in South Australia since 1987, in Victoria since at least 1995, and in Western Australia since 1991. Additionally, the species' listing in the Australian government's Environment Protection and Biodiversity Conservation Act 1999 means that the welfare of the species has to be considered as a part of any developmental project. In captivity Because the species is protected by law, obtaining seadragons is often an expensive and difficult process: they must come from captive-bred stock, and exporters must prove that their broodstock were caught before collecting restrictions went into effect or that they held a license to collect seadragons. Seadragons have a specific level of protection under federal fisheries legislation as well as in most Australian states where they occur. Seadragons are difficult to maintain in aquaria. Success in keeping them has been largely confined to the public aquarium sector, due to funding and knowledge that would not be available to the average enthusiast. Attempts to breed the leafy seadragon in captivity have so far been unsuccessful. Australia Australian aquaria featuring leafy seadragons include the Sydney Aquarium, the Melbourne Aquarium, and the Aquarium of Western Australia. Canada Ripley's Aquarium of Canada in Toronto displays both leafy and weedy seadragons. South East Asia S.E.A. Aquarium, located in the Marine Life Park of Singapore, displays both leafy and weedy seadragons. United States A number of aquaria in the United States have leafy seadragon research programs and/or displays. 
Among these are the Adventure Aquarium in Camden, New Jersey; Aquarium of the Pacific at Long Beach; Birch Aquarium in San Diego; the Minnesota Zoo; Monterey Bay Aquarium; the Dallas World Aquarium & the Dallas Children's Aquarium, Dallas; the New England Aquarium, Boston; the Point Defiance Zoo & Aquarium in Tacoma, Washington; the Newport Aquarium in Kentucky; the Shedd Aquarium, Chicago; the California Academy of Sciences; the Tennessee Aquarium; Sea World Orlando, Florida; the Pittsburgh Zoo & PPG Aquarium; Ripley's Aquarium of the Smokies, Gatlinburg, Tennessee; The Florida Aquarium in Tampa, Florida; the Mote Aquarium in Sarasota, Florida; and Ripley's Aquarium Broadway at Myrtle Beach, South Carolina. Europe The Lisbon Aquarium (Lisboa Oceanarium) has both leafy and weedy seadragons. Cultural references The leafy seadragon is the official marine emblem of the state of South Australia. It also features in the logos of the following South Australian associations: the Adelaide University Scuba Club Inc. and the Marine Life Society of South Australia Inc. A biennial Leafy Sea Dragon Festival is held within the boundaries of the District Council of Yankalilla in South Australia. It is a festival of the environment, arts and culture of the Fleurieu Peninsula, with the theme of celebrating the leafy seadragon. The inaugural festival in 2005 attracted over 7,000 participants, including 4,000 visitors. In 2006, an animated short film, The Amazing Adventures of Gavin, a Leafy Seadragon, was made on behalf of several South Australian organisations involved in conserving the marine environment, including the Coast Protection Board, the Department of Environment and Heritage and the Marine Discovery Centre. Made through a collaboration of The People's Republic of Animation, Waterline Productions and the SA Film Corporation, the film is an introductory guide to marine conservation and the marine bioregions of South Australia suitable for 8–12 year olds, and copies were distributed on DVD to all primary schools in the State. An educator's resource kit to accompany the film was released in 2008.
Biology and health sciences
Acanthomorpha
Animals
732446
https://en.wikipedia.org/wiki/Lorentz%20factor
Lorentz factor
The Lorentz factor or Lorentz term (also known as the gamma factor) is a dimensionless quantity expressing how much the measurements of time, length, and other physical properties change for an object while it moves. The expression appears in several equations in special relativity, and it arises in derivations of the Lorentz transformations. The name originates from its earlier appearance in Lorentzian electrodynamics – named after the Dutch physicist Hendrik Lorentz. It is generally denoted γ (the Greek lowercase letter gamma). Sometimes (especially in discussion of superluminal motion) the factor is written as Γ (Greek uppercase gamma) rather than γ. Definition The Lorentz factor is defined as γ = 1/√(1 − v²/c²) = 1/√(1 − β²) = dt/dτ, where: v is the relative velocity between inertial reference frames, c is the speed of light in vacuum, β is the ratio of v to c, t is coordinate time, and τ is the proper time for an observer (measuring time intervals in the observer's own frame). This is the most frequently used form in practice, though not the only one (see below for alternative forms). To complement the definition, some authors define the reciprocal α = 1/γ = √(1 − β²); see velocity addition formula. Occurrence Following is a list of formulae from special relativity which use γ as a shorthand: The Lorentz transformation: The simplest case is a boost in the x-direction (more general forms including arbitrary directions and rotations are not listed here), which describes how spacetime coordinates change from one inertial frame using coordinates (t, x, y, z) to another (t′, x′, y′, z′) with relative velocity v: t′ = γ(t − vx/c²), x′ = γ(x − vt), y′ = y, z′ = z. Corollaries of the above transformations are the results: Time dilation: The time (Δt′) between two ticks as measured in the frame in which the clock is moving is longer than the time (Δt) between these ticks as measured in the rest frame of the clock: Δt′ = γΔt. Length contraction: The length (L′) of an object as measured in the frame in which it is moving is shorter than its length (L) in its own rest frame: L′ = L/γ. Applying conservation of momentum and energy leads to these results: Relativistic mass: The mass m of an object in motion is dependent on γ and the rest mass m₀: m = γm₀. Relativistic momentum: The relativistic momentum relation takes the same form as for classical momentum, but using the above relativistic mass: p = mv = γm₀v. Relativistic kinetic energy: The relativistic kinetic energy relation takes the slightly modified form: Ek = (γ − 1)m₀c². As γ is a function of v, the non-relativistic limit gives Ek ≈ (1/2)m₀v², as expected from Newtonian considerations. Numerical values In the table below, the left-hand column shows speeds as different fractions of the speed of light (i.e. in units of c). The middle column shows the corresponding Lorentz factor γ, and the final column shows the reciprocal 1/γ. Values in bold are exact. Alternative representations There are other ways to write the factor. Above, velocity was used, but related variables such as momentum and rapidity may also be convenient. Momentum Solving the previous relativistic momentum equation for γ leads to γ = √(1 + (p/(m₀c))²). This form is rarely used, although it does appear in the Maxwell–Jüttner distribution. Rapidity Applying the definition of rapidity as the hyperbolic angle φ, with tanh φ = β, also leads to (by use of hyperbolic identities): γ = cosh φ and γβ = sinh φ. Using the property of Lorentz transformation, it can be shown that rapidity is additive, a useful property that velocity does not have. Thus the rapidity parameter forms a one-parameter group, a foundation for physical models. 
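The relations above are straightforward to evaluate numerically. The short Python sketch below is illustrative only (it is not part of the original article; the function names and sample speeds are chosen for demonstration). It computes γ directly from a velocity, checks the result against the rapidity form γ = cosh φ, and applies the time-dilation and length-contraction corollaries:

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact by definition)

def lorentz_factor(v: float) -> float:
    """gamma = 1 / sqrt(1 - (v/c)^2), valid for |v| < c."""
    beta = v / C
    return 1.0 / math.sqrt(1.0 - beta * beta)

def gamma_from_rapidity(phi: float) -> float:
    """gamma = cosh(phi), where the rapidity phi satisfies tanh(phi) = beta."""
    return math.cosh(phi)

if __name__ == "__main__":
    for beta in (0.1, 0.5, 0.9, 0.99):
        v = beta * C
        gamma = lorentz_factor(v)
        phi = math.atanh(beta)  # rapidity corresponding to this speed
        print(f"beta = {beta:4.2f}  gamma = {gamma:8.4f}  1/gamma = {1.0 / gamma:6.4f}  "
              f"cosh(rapidity) = {gamma_from_rapidity(phi):8.4f}")

    # Corollaries at 0.9c: a moving clock's 1 s tick and a moving 1 m rod.
    gamma = lorentz_factor(0.9 * C)
    print(f"At 0.9c: 1 s is dilated to {gamma:.4f} s, and 1 m contracts to {1.0 / gamma:.4f} m")
```

For example, at β = 0.5 the sketch prints γ ≈ 1.1547 (the exact value 2/√3), and at 0.9c a 1 s tick is observed to last about 2.294 s.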
Bessel function The Bunney identity represents the Lorentz factor in terms of an infinite series of Bessel functions. Series expansion (velocity) The Lorentz factor has the Maclaurin series: γ = 1 + (1/2)β² + (3/8)β⁴ + (5/16)β⁶ + …, which is a special case of a binomial series. The approximation γ ≈ 1 + (1/2)β² may be used to calculate relativistic effects at low speeds. It holds to within 1% error for β < 0.4 (v < 120,000 km/s), and to within 0.1% error for β < 0.22 (v < 66,000 km/s). The truncated versions of this series also allow physicists to prove that special relativity reduces to Newtonian mechanics at low speeds. For example, in special relativity, the following two equations hold: p = γmv and E = γmc². For γ ≈ 1 and γ ≈ 1 + (1/2)β², respectively, these reduce to their Newtonian equivalents: p = mv and E = mc² + (1/2)mv². The Lorentz factor equation can also be inverted to yield β = √(1 − 1/γ²). This has an asymptotic form β ≈ 1 − 1/(2γ²) − 1/(8γ⁴) − …. The first two terms are occasionally used to quickly calculate velocities from large γ values. The approximation holds to within 1% tolerance for γ > 2 and to within 0.1% tolerance for γ > 3.5. Applications in astronomy The standard model of long-duration gamma-ray bursts (GRBs) holds that these explosions are ultra-relativistic (initial γ greater than approximately 100), which is invoked to explain the so-called "compactness" problem: absent this ultra-relativistic expansion, the ejecta would be optically thick to pair production at typical peak spectral energies of a few 100 keV, whereas the prompt emission is observed to be non-thermal. Muons, a type of subatomic particle, travel at speeds such that they have a relatively high Lorentz factor and therefore experience extreme time dilation. Since muons have a mean lifetime of just 2.2 μs, muons generated from cosmic-ray collisions high in Earth's atmosphere should be nondetectable on the ground due to their decay rate. However, roughly 10% of muons from these collisions are still detectable on the surface, thereby demonstrating the effects of time dilation on their decay rate.
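As a further illustration of the low-speed expansion and the muon example above, the following Python sketch (again illustrative only; the muon Lorentz factor of 20 is an assumed, typical-order value rather than a figure from the article) compares the two-term Maclaurin approximation 1 + β²/2 with the exact γ and estimates the dilated muon lifetime and mean decay length:

```python
import math

C = 299_792_458.0        # speed of light, m/s
MUON_LIFETIME = 2.2e-6   # mean muon lifetime in seconds (figure quoted in the text)

def gamma_exact(beta: float) -> float:
    """Exact Lorentz factor 1 / sqrt(1 - beta^2)."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def gamma_two_term(beta: float) -> float:
    """First two terms of the Maclaurin series: 1 + beta^2 / 2."""
    return 1.0 + 0.5 * beta * beta

if __name__ == "__main__":
    # The text quotes roughly 1% error at beta = 0.4 and roughly 0.1% at beta = 0.22.
    for beta in (0.1, 0.22, 0.4):
        exact = gamma_exact(beta)
        approx = gamma_two_term(beta)
        print(f"beta = {beta:4.2f}  exact = {exact:.5f}  two-term = {approx:.5f}  "
              f"relative error = {abs(approx - exact) / exact:.3%}")

    # Illustrative muon numbers; gamma = 20 is an assumed value, not from the article.
    gamma = 20.0
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    dilated_lifetime = gamma * MUON_LIFETIME
    decay_length_km = beta * C * dilated_lifetime / 1000.0
    print(f"gamma = {gamma:.0f}: lifetime {MUON_LIFETIME * 1e6:.1f} us -> "
          f"{dilated_lifetime * 1e6:.1f} us, mean decay length ~ {decay_length_km:.1f} km")
```

With γ = 20 the dilated lifetime comes out to 44 μs and the mean decay length to roughly 13 km, which is why a sizeable fraction of atmospheric muons can reach the ground.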
Physical sciences
Theory of relativity
Physics
732448
https://en.wikipedia.org/wiki/Amphotericin%20B
Amphotericin B
Amphotericin B is an antifungal medication used for serious fungal infections and leishmaniasis. The fungal infections it is used to treat include mucormycosis, aspergillosis, blastomycosis, candidiasis, coccidioidomycosis, and cryptococcosis. For certain infections it is given with flucytosine. It is typically given intravenously (injection into a vein). Common side effects include a reaction with fever, chills, and headaches soon after the medication is given, as well as kidney problems. Allergic symptoms, including anaphylaxis, may occur. Other serious side effects include low blood potassium and myocarditis (inflammation of the heart). It appears to be relatively safe in pregnancy. There is a lipid formulation that has a lower risk of side effects. It is in the polyene class of medications and works in part by interfering with the cell membrane of the fungus. Amphotericin B was isolated in 1955 at the Squibb Institute for Medical Research from cultures of Streptomyces nodosus, a streptomycete obtained from the river bed of the Orinoco in Venezuela, and came into medical use in 1958. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. Medical uses Antifungal One of the main uses of amphotericin B is treating a wide range of systemic fungal infections. Due to its extensive side effects, it is often reserved for severe infections in critically ill or immunocompromised patients. It is considered first-line therapy for invasive mucormycosis infections, cryptococcal meningitis, and certain aspergillus and candidal infections. It has been a highly effective drug for over fifty years, in large part because it has a low incidence of drug resistance in the pathogens it treats. This is because amphotericin B resistance requires sacrifices on the part of the pathogen that make it susceptible to the host environment and too weak to cause infection. Antiprotozoal Amphotericin B is used for life-threatening protozoan infections such as visceral leishmaniasis and primary amoebic meningoencephalitis. Spectrum of susceptibility The following table shows the amphotericin B susceptibility for a selection of medically important fungi. Available formulations Intravenous Amphotericin B alone is insoluble in normal saline at a pH of 7. Therefore, several formulations have been devised to improve its intravenous bioavailability. Lipid-based formulations of amphotericin B are no more effective than conventional formulations, although there is some evidence that lipid-based formulations may be better tolerated by patients and may have fewer adverse effects. Deoxycholate The original formulation uses sodium deoxycholate to improve solubility. Amphotericin B deoxycholate (ABD) is administered intravenously. As the original formulation of amphotericin, it is often referred to as "conventional" amphotericin. Liposomal In order to improve the tolerability of amphotericin and reduce toxicity, several lipid formulations have been developed. Liposomal formulations have been found to have less renal toxicity than deoxycholate, and fewer infusion-related reactions. They are more expensive than amphotericin B deoxycholate. AmBisome (liposomal amphotericin B; LAMB) is a liposomal formulation of amphotericin B for injection and consists of a mixture of phosphatidylcholine, cholesterol and distearoyl phosphatidylglycerol that in aqueous media spontaneously arranges into unilamellar vesicles that contain amphotericin B. 
It was developed by NeXstar Pharmaceuticals (acquired by Gilead Sciences in 1999). It was approved by the FDA in 1997. It is marketed by Gilead in Europe and licensed to Astellas Pharma (formerly Fujisawa Pharmaceuticals) for marketing in the US, and to Sumitomo Pharmaceuticals in Japan. Lipid complex formulations A number of lipid complex preparations are also available. Abelcet was approved by the FDA in 1995. It consists of amphotericin B and two lipids in a 1:1 ratio that form large ribbon-like structures. Amphotec is a complex of amphotericin and sodium cholesteryl sulfate in a 1:1 ratio. Two molecules of each form a tetramer, and these tetramers aggregate into spiral arms on a disk-like complex. It was approved by the FDA in 1996. By mouth An oral preparation exists but is not widely available. The amphipathic nature of amphotericin, along with its low solubility and permeability, has posed major hurdles for oral administration, given its low bioavailability. In the past it had been used for fungal infections of the surface of the GI tract such as thrush, but has been replaced by other antifungals such as nystatin and fluconazole. However, novel nanoparticulate drug delivery systems such as AmbiOnp, nanosuspensions, lipid-based drug delivery systems including cochleates, self-emulsifying drug delivery systems, solid lipid nanoparticles and polymeric nanoparticles (such as amphotericin B in pegylated polylactide coglycolide copolymer nanoparticles) have recently demonstrated potential for oral formulation of amphotericin B. The oral lipid nanocrystal amphotericin by Matinas Biopharma is furthest along, having completed a successful phase 2 clinical trial in cryptococcal meningitis. Side effects Amphotericin B is well known for its severe and potentially lethal side effects, earning it the nickname "amphoterrible". Very often, it causes a serious reaction soon after infusion (within 1 to 3 hours), consisting of high fever, shaking chills, hypotension, anorexia, nausea, vomiting, headache, dyspnea and tachypnea, drowsiness, and generalized weakness. The violent chills and fevers have caused the drug to be nicknamed "shake and bake". The precise etiology of the reaction is unclear, although it may involve increased prostaglandin synthesis and the release of cytokines from macrophages. Deoxycholate formulations (ABD) may also stimulate the release of histamine from mast cells and basophils. Reactions sometimes subside with later applications of the drug. This nearly universal febrile response necessitates a critical (and diagnostically difficult) professional determination as to whether the onset of high fever is a novel symptom of a fast-progressing disease, or merely the effect of the drug. To decrease the likelihood and severity of the symptoms, initial doses should be low and increased slowly. Paracetamol, pethidine, diphenhydramine, and hydrocortisone have all been used to treat or prevent the syndrome, but the prophylactic use of these drugs is often limited by the patient's condition. Intravenously administered amphotericin B in therapeutic doses has also been associated with multiple organ damage. Kidney damage, including Type I (distal) renal tubular acidosis, is a frequently reported side effect, and can be severe and/or irreversible. Less kidney toxicity has been reported with liposomal formulations (such as AmBisome), and these have become preferred in patients with preexisting renal injury. 
The integrity of the liposome is disrupted when it binds to the fungal cell wall, but is not affected by the mammalian cell membrane, so the association with liposomes decreases the exposure of the kidneys to amphotericin B, which explains its reduced nephrotoxicity. In addition, electrolyte imbalances such as hypokalemia and hypomagnesemia are common. In the liver, increased liver enzymes and hepatotoxicity (up to and including fulminant liver failure) are common. In the circulatory system, several forms of anemia and other blood dyscrasias (leukopenia, thrombopenia), serious cardiac arrhythmias (including ventricular fibrillation), and even frank cardiac failure have been reported. Skin reactions, including serious forms, are also possible. The analogue AM-2-19 has been engineered to be less toxic to the kidneys. Interactions Drug-drug interactions may occur when amphotericin B is coadministered with the following agents: Flucytosine: Toxicity of flucytosine is increased, and the combination allows a lower dose of amphotericin B. Amphotericin B may also facilitate entry of flucytosine into the fungal cell by interfering with the permeability of the fungal cell membrane. Diuretics or cisplatin: Increased renal toxicity and increased risk of hypokalemia. Corticosteroids: Increased risk of hypokalemia. Imidazole antifungals: Amphotericin B may antagonize the activity of ketoconazole and miconazole. The clinical significance of this interaction is unknown. Neuromuscular-blocking agents: Amphotericin B-induced hypokalemia may potentiate the effects of certain paralytic agents. Foscarnet, ganciclovir, tenofovir, adefovir: Risk of hematological and renal side effects of amphotericin B is increased. Zidovudine: Increased risk of renal and hematological toxicity. Other nephrotoxic drugs (such as aminoglycosides): Increased risk of serious renal damage. Cytostatic drugs: Increased risk of kidney damage, hypotension, and bronchospasms. Transfusion of leukocytes: Risk of pulmonary (lung) damage; space the intervals between the administration of amphotericin B and the transfusion, and monitor pulmonary function. Mechanism of action Amphotericin B binds with ergosterol, a component of fungal cell membranes, forming pores that cause rapid leakage of monovalent ions (K+, Na+, H+ and Cl−) and subsequent fungal cell death. This is amphotericin B's primary effect as an antifungal agent. It has been found that the amphotericin B/ergosterol bimolecular complex that maintains these pores is stabilized by van der Waals interactions. Researchers have found evidence that amphotericin B also causes oxidative stress within the fungal cell, but it remains unclear to what extent this oxidative damage contributes to the drug's effectiveness. The addition of free radical scavengers or antioxidants can lead to amphotericin resistance in some species, such as Scedosporium prolificans, without affecting the cell wall. Two amphotericins, amphotericin A and amphotericin B, are known, but only B is used clinically, because it is significantly more active in vivo. Amphotericin A is almost identical to amphotericin B (having a C=C double bond between the 27th and 28th carbons), but has little antifungal activity. Mechanism of toxicity Mammalian and fungal membranes both contain sterols, a primary membrane target for amphotericin B. Because mammalian and fungal membranes are similar in structure and composition, this is one mechanism by which amphotericin B causes cellular toxicity. 
Amphotericin B molecules can form pores in the host membrane as well as the fungal membrane. This impairment in membrane barrier function can have lethal effects. Ergosterol, the fungal sterol, is more sensitive to amphotericin B than cholesterol, the common mammalian sterol. Reactivity with the membrane is also dependent on sterol concentration. Bacteria are not affected, as their cell membranes do not usually contain sterols. Amphotericin B administration is limited by infusion-related toxicity. This is thought to result from innate immune production of proinflammatory cytokines. Biosynthesis The natural route to synthesis includes polyketide synthase components. The carbon chains of amphotericin B are assembled from sixteen 'C2' acetate and three 'C3' propionate units by polyketide synthases (PKSs). Polyketide biosynthesis begins with the decarboxylative condensation of a dicarboxylic acid extender unit with a starter acyl unit to form a β-ketoacyl intermediate. The growing chain is constructed by a series of Claisen reactions. Within each module, the extender units are loaded onto the current ACP domain by an acyltransferase (AT). The ACP-bound elongation group reacts in a Claisen condensation with the KS-bound polyketide chain. Ketoreductase (KR), dehydratase (DH) and enoyl reductase (ER) enzymes may also be present to form alcohol groups, double bonds, or single bonds. After cyclization, the macrolactone core undergoes further modification by hydroxylation, methylation and glycosylation. The order of these three post-cyclization processes is unknown. History It was originally extracted from Streptomyces nodosus, a filamentous bacterium, in 1955, at the Squibb Institute for Medical Research from cultures of an undescribed streptomycete isolated from the soil collected in the Orinoco River region of Venezuela. Two antifungal substances were isolated from the soil culture, amphotericin A and amphotericin B, but B had better antifungal activity. For decades it remained the only effective therapy for invasive fungal disease until the development of the azole antifungals in the early 1980s. Its complete stereochemical structure was determined in 1970 by an X-ray structure of the N-iodoacetyl derivative. The first synthesis of the compound's naturally occurring enantiomeric form was achieved in 1987 by K. C. Nicolaou. Amphotericin B was used to treat a patient with disseminated coccidioidomycosis who was admitted to the U.S. Public Health Service Hospital, Seattle, Washington, on January 16, 1957. "The course was rapidly downhill with a grim prognosis as manifested by positive blood cultures, rising complement fixation titers, and failure of the skin to react to intradermal coccidioidin. Amphotericin B was started eight weeks following the onset of his illness. Following this there was remarkable improvement both objectively and subjectively. A fourteen-month follow-up following discontinuance of the drug revealed stabilization of all laboratory studies except for a re-elevation of the complement fixation titer from 1 to 16 to 1 to 32. The patient was completely asymptomatic except for the production of sputum containing a few spherules. The clinical effect of this drug in this patient has been most encouraging and is in agreement with results obtained by others. The lasting effect of the drug seems suggested by the patient's complete well-being after fourteen months of cessation of treatment. It is reasonable to assume that this drug will play a major part in the specific treatment of this disease." 
Formulations Amphotericin B belongs to a subgroup of the macrolide antibiotics and exhibits similar structural elements. Currently, the drug is available in many forms: "conventionally" complexed with sodium deoxycholate (ABD), as a cholesteryl sulfate complex (ABCD), as a lipid complex (ABLC), or as a liposomal formulation (LAMB). The latter formulations have been developed to improve tolerability and decrease toxicity, but may show considerably different pharmacokinetic characteristics compared to conventional amphotericin B. Names Amphotericin's name originates from the chemical's amphoteric properties. It is commercially known as Fungilin, Fungizone, Abelcet, AmBisome, Fungisome, Amphocil, Amphotec, and Halizon.
Biology and health sciences
Antifungals
Health
733141
https://en.wikipedia.org/wiki/Agricultural%20economics
Agricultural economics
Agricultural economics is an applied field of economics concerned with the application of economic theory in optimizing the production and distribution of food and fiber products. Agricultural economics began as a branch of economics that specifically dealt with land usage. It focused on maximizing the crop yield while maintaining a good soil ecosystem. Throughout the 20th century the discipline expanded and the current scope of the discipline is much broader. Agricultural economics today includes a variety of applied areas, having considerable overlap with conventional economics. Agricultural economists have made substantial contributions to research in economics, econometrics, development economics, and environmental economics. Agricultural economics influences food policy, agricultural policy, and environmental policy. Origins Economics has been defined as the study of resource allocation under scarcity. Agricultural economics, or the application of economic methods to optimize the decisions made by agricultural producers, grew to prominence around the turn of the 20th century. The field of agricultural economics can be traced back to works on land economics. Henry Charles Taylor was the greatest contributor in this period, with the establishment of the Department of Agricultural Economics at the University of Wisconsin in 1909. Another contributor, 1979 Nobel Economics Prize winner Theodore Schultz, was among the first to examine development economics as a problem related directly to agriculture. Schultz was also instrumental in establishing econometrics as a tool for use in analyzing agricultural economics empirically; he noted in his landmark 1956 article that agricultural supply analysis is rooted in "shifting sand", implying that it was and is simply not being done correctly. One scholar in the field, Ford Runge, summarizes the development of agricultural economics as follows: Agricultural economics arose in the late 19th century, combined the theory of the firm with marketing and organization theory, and developed throughout the 20th century largely as an empirical branch of general economics. The discipline was closely linked to empirical applications of mathematical statistics and made early and significant contributions to econometric methods. In the 1960s and afterwards, as agricultural sectors in the OECD countries contracted, agricultural economists were drawn to the development problems of poor countries, to the trade and macroeconomic policy implications of agriculture in rich countries, and to a variety of production, consumption, and environmental and resource problems. Agricultural economists have made many well-known contributions to the economics field with such models as the cobweb model, hedonic regression pricing models, new technology and diffusion models (Zvi Griliches), multifactor productivity and efficiency theory and measurement, and the random coefficients regression. The farm sector is frequently cited as a prime example of the perfect competition economic paradigm. In Asia, the Faculty of Agricultural Economics was established in September 1919 in Hokkaido Imperial University, Japan, as Tokyo Imperial University's School of Agriculture started a faculty on agricultural economics in its second department of agricultural science. In the Philippines, agricultural economics was offered first by the University of the Philippines Los Baños Department of Agricultural Economics in 1919. 
Today, the field of agricultural economics has transformed into a more integrative discipline which covers farm management and production economics, rural finance and institutions, agricultural marketing and prices, agricultural policy and development, food and nutrition economics, and environmental and natural resource economics. Since the 1970s, agricultural economics has primarily focused on seven main topics, according to Ford Runge: agricultural environment and resources; risk and uncertainty; food and consumer economics; prices and incomes; market structures; trade and development; and technical change and human capital. Major topics in agricultural economics Agricultural environment and natural resources In the field of environmental economics, agricultural economists have contributed in three main areas: designing incentives to control environmental externalities (such as water pollution due to agricultural production), estimating the value of non-market benefits from natural resources and environmental amenities (such as an appealing rural landscape), and the complex interrelationship between economic activities and environmental consequences. With regard to natural resources, agricultural economists have developed quantitative tools for improving land management, preventing erosion, managing pests, protecting biodiversity, and preventing livestock diseases. Food and consumer economics While at one time, the field of agricultural economics was focused primarily on farm-level issues, in recent years agricultural economists have studied diverse topics related to the economics of food consumption. In addition to economists' long-standing emphasis on the effects of prices and incomes, researchers in this field have studied how information and quality attributes influence consumer behavior. Agricultural economists have contributed to understanding how households make choices between purchasing food or preparing it at home, how food prices are determined, definitions of poverty thresholds, how consumers respond to price and income changes in a consistent way, and survey and experimental tools for understanding consumer preferences. Production economics and farm management Agricultural economics research has addressed diminishing returns in agricultural production, as well as farmers' costs and supply responses. Much research has applied economic theory to farm-level decisions. Studies of risk and decision-making under uncertainty have real-world applications to crop insurance policies and to understanding how farmers in developing countries make choices about technology adoption. These topics are important for understanding prospects for producing sufficient food for a growing world population, subject to new resource and environmental challenges such as water scarcity and global climate change. Development economics Development economics is broadly concerned with the improvement of living conditions in low-income countries, and the improvement of economic performance in low-income settings. Because agriculture is a large part of most developing economies, both in terms of employment and share of GDP, agricultural economists have been at the forefront of empirical research on development economics, contributing to our understanding of agriculture's role in economic development, economic growth and structural transformation. 
Many agricultural economists are interested in the food systems of developing economies, the linkages between agriculture and nutrition, and the ways in which agriculture interacts with other domains, such as the natural environment. Professional associations The International Association of Agricultural Economists (IAAE) is a worldwide professional association, which holds its major conference every three years. The association publishes the journal Agricultural Economics. There is also a European Association of Agricultural Economists (EAAE), an African Association of Agricultural Economists (AAAE) and an Australian Agricultural and Resource Economics Society. Substantial work in agricultural economics internationally is conducted by the International Food Policy Research Institute. In the United States, the primary professional association is the Agricultural & Applied Economics Association (AAEA), which holds its own annual conference and also co-sponsors the annual meetings of the Allied Social Sciences Association (ASSA). The AAEA publishes the American Journal of Agricultural Economics and Applied Economic Perspectives and Policy. Careers in agricultural economics Graduates from agricultural and applied economics departments find jobs in many sectors of the economy: agricultural management, agribusiness, agricultural marketing, education, the financial sector, government, natural resource and environmental management, real estate, and public relations. Careers in agricultural economics require at least a bachelor's degree, and research careers in the field require graduate-level training; see Masters in Agricultural Economics. A 2011 study by the Georgetown Center on Education and the Workforce rated agricultural economics tied for 8th out of 171 fields in terms of employability. Literature Evenson, Robert E. and Prabhu Pingali (eds.) (2007). Handbook of Agricultural Economics. Amsterdam, NL: Elsevier.
Technology
Academic disciplines
null
733497
https://en.wikipedia.org/wiki/Hydroxyzine
Hydroxyzine
Hydroxyzine, sold under the brand names Atarax and Vistaril among others, is an antihistamine medication. It is used in the treatment of itchiness, anxiety, insomnia, and nausea (including that due to motion sickness). It is given either by mouth or by injection into a muscle. Hydroxyzine works by blocking the effects of histamine. It is a first-generation antihistamine in the piperazine family of chemicals. Common side effects include sleepiness, headache, and dry mouth. Serious side effects may include QT prolongation. It is unclear if use during pregnancy or breastfeeding is safe. It was first made by Union Chimique Belge in 1956 and was approved for sale by Pfizer in the United States later that year. In 2022, it was the 46th most commonly prescribed medication in the United States, with more than 13 million prescriptions. Medical uses Hydroxyzine is used in the treatment of itchiness, anxiety, and nausea due to motion sickness. A systematic review concluded that hydroxyzine outperforms placebo in treating generalized anxiety disorder. Insufficient data were available to compare the drug with benzodiazepines and buspirone. Hydroxyzine can also be used for the treatment of allergic conditions, such as chronic urticaria, atopic or contact dermatoses, and histamine-mediated pruritus. These uses have also been confirmed in both recent and past studies to have no adverse effects on the liver, blood, nervous system, or urinary tract. Use of hydroxyzine for premedication as a sedative has no effect on tropane alkaloids, such as atropine, but may, following general anesthesia, potentiate meperidine and barbiturates, so its use in pre-anesthetic adjunctive therapy should be modified depending upon the state of the individual. Doses of hydroxyzine hydrochloride used for sleep range from 25 to 100 mg. As with other antihistamine sleep aids, hydroxyzine is usually only prescribed for short-term or "as-needed" use, since tolerance to the CNS (central nervous system) effects of hydroxyzine can develop in as little as a few days. A major systematic review and network meta-analysis of medications for the treatment of insomnia published in 2022 found little evidence to inform the use of hydroxyzine for insomnia. A 2023 meta-review concluded that hydroxyzine is effective for inducing sleep onset but less effective for maintaining sleep for eight hours. Contraindications Hydroxyzine is contraindicated for subcutaneous or intra-articular administration. The administration of hydroxyzine in large amounts by ingestion or intramuscular administration during the onset of pregnancy can cause fetal abnormalities. When administered to pregnant rats, mice, and rabbits, hydroxyzine caused abnormalities such as hypogonadism at doses significantly above the human therapeutic range. In humans, a significant dose has not yet been established in studies, and, by default, the Food and Drug Administration (FDA) has introduced contraindication guidelines regarding hydroxyzine. Use by those at risk for or showing previous signs of hypersensitivity is also contraindicated. Other contraindications include the administration of hydroxyzine alongside depressants and other compounds that affect the central nervous system; if necessary, it should only be administered concomitantly in small doses. If administered in small doses with such substances, patients should refrain from operating dangerous machinery or motor vehicles, or from any other activity requiring absolute concentration, in accordance with safety laws. 
Studies have also been conducted which show that long-term prescription of hydroxyzine can lead to tardive dyskinesia after years of use, but effects related to dyskinesia have also anecdotally been reported after periods of 7.5 months, such as continual head rolling, lip licking, and other forms of athetoid movement. In certain cases, elderly patients' previous interactions with phenothiazine derivatives or pre-existing neuroleptic treatment may have contributed to dyskinesia at the administration of hydroxyzine due to hypersensitivity caused by prolonged treatment, and therefore some contraindication is given for short-term administration of hydroxyzine to those with previous phenothiazine use. Side effects Several reactions have been noted in manufacturer guidelines—deep sleep, incoordination, sedation, calmness, and dizziness have been reported in children and adults, as well as others such as hypotension, tinnitus, and headaches. Gastrointestinal effects have also been observed, as well as less serious effects such as dryness of the mouth and constipation caused by the mild antimuscarinic properties of hydroxyzine. Central nervous system effects such as hallucinations or confusion have been observed in rare cases, attributed mostly to overdosage. Such properties have been attributed to hydroxyzine in several cases, particularly in patients treated for neuropsychological disorders, as well as in cases where overdoses have been observed. While there are reports of the "hallucinogenic" or "hypnotic" properties of hydroxyzine, several clinical data trials have not reported such side effects from the sole consumption of hydroxyzine, but rather, have described its overall calming effect described through the stimulation of areas within the reticular formation. The hallucinogenic or hypnotic properties have been described as being an additional effect from overall central nervous system suppression by other CNS agents, such as lithium or ethanol. Hydroxyzine exhibits anxiolytic and sedative properties in many psychiatric patients. One study showed that patients reported very high levels of subjective sedation when first taking the drug, but that levels of reported sedation decreased markedly over 5–7 days, likely due to CNS receptor desensitization. Other studies have suggested that hydroxyzine acts as an acute hypnotic, reducing sleep onset latency and increasing sleep duration — also showing that some drowsiness did occur. This was observed more in female patients, who also had greater hypnotic responses. The use of sedating drugs alongside hydroxyzine can cause oversedation and confusion if administered at high doses—any form of hydroxyzine treatment alongside sedatives should be done under the supervision of a doctor. Because of the potential for more severe side effects, this drug is on the list to avoid in the elderly. Pharmacology Pharmacodynamics Hydroxyzine's predominant mechanism of action is as a potent and selective histamine H1 receptor inverse agonist. This action is responsible for its antihistamine and sedative effects. Unlike many other first-generation antihistamines, hydroxyzine has a lower affinity for the muscarinic acetylcholine receptors, and in accordance, has a lower risk of anticholinergic side effects. In addition to its antihistamine activity, hydroxyzine has also been shown to act more weakly as an antagonist of the serotonin 5-HT2A receptor, the dopamine D2 receptor, and the α1-adrenergic receptor. 
Similarly to the atypical antipsychotics, the comparably weak antiserotonergic effects of hydroxyzine likely underlie its usefulness as an anxiolytic. Other antihistamines without such properties have not been found to be effective in the treatment of anxiety. Hydroxyzine crosses the blood–brain barrier easily and exerts effects in the central nervous system. A positron emission tomography (PET) study found that brain occupancy of the H1 receptor was 67.6% for a single 30 mg dose of hydroxyzine. In addition, subjective sleepiness correlated well with the brain H1 receptor occupancy. PET studies with antihistamines have found that brain H1 receptor occupancy of more than 50% is associated with a high prevalence of somnolence and cognitive decline, whereas brain H1 receptor occupancy of less than 20% is considered to be non-sedative. Hydroxyzine also acts as a functional inhibitor of acid sphingomyelinase. Pharmacokinetics Hydroxyzine can be administered orally or via intramuscular injection. When given orally, hydroxyzine is rapidly absorbed from the gastrointestinal tract. Hydroxyzine is rapidly absorbed and distributed with oral and intramuscular administration, and is metabolized in the liver; the main metabolite (45%), cetirizine, is formed through oxidation of the alcohol moiety to a carboxylic acid by alcohol dehydrogenase, and overall effects are observed within one hour of administration. Higher concentrations are found in the skin than in the plasma. Cetirizine, although less sedating, is non-dialyzable and possesses similar antihistamine properties. The other metabolites identified include an N-dealkylated metabolite and an O-dealkylated 1/16 metabolite with a plasma half-life of 59 hours. These pathways are mediated principally by CYP3A4 and CYP3A5. The N-dealkylated metabolite, norchlorcyclizine, bears some structural similarities to trazodone, but it has not been established whether it is pharmacologically active. In animals, hydroxyzine and its metabolites are excreted in feces primarily through biliary elimination. In rats, less than 2% of the drug is excreted unchanged. The time to reach maximum concentration (Tmax) of hydroxyzine is about 2.0 hours in both adults and children, and its elimination half-life is around 20.0 hours in adults (mean age 29.3 years) and 7.1 hours in children. Its elimination half-life is shorter in children compared to adults. In another study, the elimination half-life of hydroxyzine in elderly adults was 29.3 hours. One study found that the elimination half-life of hydroxyzine in adults was as short as 3 hours, but this may have just been due to methodological limitations. Although hydroxyzine has a long elimination half-life and acts, in vivo, as an antihistamine for as long as 24 hours, the predominant CNS effects of hydroxyzine and other antihistamines with long half-lives seem to diminish after 8 hours. Administration in geriatrics differs from the administration of hydroxyzine in younger patients; according to the FDA, as of 2004 no significant studies had been made that include population groups over 65 and that provide a distinction between elderly patients and younger groups. Hydroxyzine should be administered carefully in the elderly, with consideration given to possible reduced elimination. Chemistry Hydroxyzine is a member of the diphenylmethylpiperazine class of antihistamines. 
Hydroxyzine is supplied mainly as a dihydrochloride salt (hydroxyzine hydrochloride) but also to a lesser extent as an embonate salt (hydroxyzine pamoate). The molecular weights of hydroxyzine, hydroxyzine dihydrochloride, and hydroxyzine pamoate are 374.9 g/mol, 447.8 g/mol, and 763.3 g/mol, respectively. Due to their differences in molecular weight, 1 mg hydroxyzine dihydrochloride is equivalent to about 1.7 mg hydroxyzine pamoate. Analogues Analogues of hydroxyzine include buclizine, cetirizine, cinnarizine, cyclizine, etodroxizine, meclizine, and pipoxizine among others. Society and culture Brand names Hydroxyzine preparations require a doctor's prescription. The drug is available in two formulations, the pamoate and the dihydrochloride or hydrochloride salts. Vistaril, Equipose, Masmoran, and Paxistil are preparations of the pamoate salt, while Atarax, Alamon, Aterax, Durrax, Tran-Q, Orgatrax, Quiess, and Tranquizine are of the hydrochloride salt.
Biology and health sciences
Antihistamines
Health