Trans-Neptunian object
A trans-Neptunian object (TNO), also written transneptunian object, is any minor planet in the Solar System that orbits the Sun at a greater average distance than Neptune, which has an orbital semi-major axis of 30.1 astronomical units (AU).
Typically, TNOs are further divided into the classical and resonant objects of the Kuiper belt, the scattered disc and detached objects with the sednoids being the most distant ones. As of July 2024, the catalog of minor planets contains 901 numbered and more than 3,000 unnumbered TNOs. However, nearly 5000 objects with semimajor axis over 30 AU are present in the MPC catalog, with 1000 being numbered.
The first trans-Neptunian object to be discovered was Pluto in 1930. It took until 1992 to discover a second trans-Neptunian object orbiting the Sun directly, 15760 Albion. The most massive TNO known is Eris, followed by Pluto, Haumea, Makemake, and Gonggong. More than 80 satellites have been discovered in orbit of trans-Neptunian objects. TNOs vary in color and are either grey-blue (BB) or very red (RR). They are thought to be composed of mixtures of rock, amorphous carbon and volatile ices such as water and methane, coated with tholins and other organic compounds.
Twelve minor planets with a semi-major axis greater than 150 AU and perihelion greater than 30 AU are known, which are called extreme trans-Neptunian objects (ETNOs).
History
Discovery of Pluto
The orbit of each of the planets is slightly affected by the gravitational influences of the other planets. Discrepancies in the early 1900s between the observed and expected orbits of Uranus and Neptune suggested that there were one or more additional planets beyond Neptune. The search for these led to the discovery of Pluto in February 1930, which was too small to explain the discrepancies. Revised estimates of Neptune's mass from the Voyager 2 flyby in 1989 showed that the problem was spurious. Pluto was easiest to find because it has the highest apparent magnitude of all known trans-Neptunian objects. It also has a lower inclination to the ecliptic than most other large TNOs.
Subsequent discoveries
After Pluto's discovery, American astronomer Clyde Tombaugh continued searching for some years for similar objects but found none. For a long time, no one searched for other TNOs as it was generally believed that Pluto, which up to August 2006 was classified as a planet, was the only major object beyond Neptune. Only after the 1992 discovery of a second TNO, 15760 Albion, did systematic searches for further such objects begin. A broad strip of the sky around the ecliptic was photographed and digitally evaluated for slowly moving objects. Hundreds of TNOs were found, with diameters in the range of 50 to 2,500 kilometers. Eris, the most massive TNO, was discovered in 2005, reigniting a long-running dispute within the scientific community over the classification of large TNOs, and whether objects like Pluto can be considered planets. Pluto and Eris were eventually classified as dwarf planets by the International Astronomical Union.
Classification
According to their distance from the Sun and their orbital parameters, TNOs are classified in two large groups: the Kuiper belt objects (KBOs) and the scattered disc objects (SDOs). The diagram to the right illustrates the distribution of known trans-Neptunian objects (up to 70 au) in relation to the orbits of the planets and the centaurs for reference. Different classes are represented in different colours. Resonant objects (including Neptune trojans) are plotted in red, classical Kuiper belt objects in blue. The scattered disc extends to the right, far beyond the diagram, with known objects at mean distances beyond 500 au (Sedna) and aphelia beyond 1,000 au.
KBOs
The Edgeworth–Kuiper belt contains objects with an average distance to the Sun of 30 to about 55 au, usually having close-to-circular orbits with a small inclination from the ecliptic. Edgeworth–Kuiper belt objects are further classified into the resonant trans-Neptunian objects, which are locked in an orbital resonance with Neptune, and the classical Kuiper belt objects, also called "cubewanos", that have no such resonance, moving on almost circular orbits, unperturbed by Neptune. There are a large number of resonant subgroups, the largest being the twotinos (1:2 resonance) and the plutinos (2:3 resonance), named after their most prominent member, Pluto. Members of the classical Edgeworth–Kuiper belt include 15760 Albion, Quaoar and Makemake.
Another subclass of Kuiper belt objects is the so-called scattering objects (SO). These are non-resonant objects that come near enough to Neptune to have their orbits changed from time to time (such as causing changes in semi-major axis of at least 1.5 AU in 10 million years) and are thus undergoing gravitational scattering. Scattering objects are easier to detect than other trans-Neptunian objects of the same size because they come nearer to Earth, some having perihelia around 20 AU. Several are known with g-band absolute magnitude below 9, meaning that the estimated diameter is more than 100 km. It is estimated that there are between 240,000 and 830,000 scattering objects bigger than r-band absolute magnitude 12, corresponding to diameters greater than about 18 km. Scattering objects are hypothesized to be the source of the so-called Jupiter-family comets (JFCs), which have periods of less than 20 years.
SDOs
The scattered disc contains objects farther from the Sun, with very eccentric and inclined orbits. These orbits are non-resonant and non-planetary-orbit-crossing. A typical example is the most-massive-known TNO, Eris. Based on the Tisserand parameter relative to Neptune (TN), the objects in the scattered disc can be further divided into the "typical" scattered disc objects (SDOs, Scattered-near) with a TN of less than 3, and into the detached objects (ESDOs, Scattered-extended) with a TN greater than 3. In addition, detached objects have a time-averaged eccentricity greater than 0.2. The sednoids are a further extreme sub-grouping of the detached objects, with perihelia so distant that it is confirmed that their orbits cannot be explained by perturbations from the giant planets, nor by interaction with the galactic tides. However, a passing star could have moved them onto their current orbits.
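Since the split between Scattered-near and Scattered-extended objects hinges on the Tisserand parameter, a minimal Python sketch of the calculation may help; the orbital elements below are illustrative (the second set is only approximately Sedna-like), not catalogue values.

```python
import math

def tisserand_parameter(a, e, i_deg, a_neptune=30.07):
    """Tisserand parameter of an object with respect to Neptune.

    a         -- semi-major axis of the object (au)
    e         -- orbital eccentricity
    i_deg     -- inclination relative to Neptune's orbital plane (degrees)
    a_neptune -- Neptune's semi-major axis (au)
    """
    i = math.radians(i_deg)
    return a_neptune / a + 2.0 * math.cos(i) * math.sqrt(a / a_neptune * (1.0 - e ** 2))

# Illustrative elements only:
print(tisserand_parameter(a=70.0, e=0.55, i_deg=25.0))    # ~2.7  (< 3, "Scattered-near")
print(tisserand_parameter(a=506.0, e=0.85, i_deg=12.0))   # ~4.3  (> 3, "Scattered-extended")
```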
Physical characteristics
Given the apparent magnitude (>20) of all but the biggest trans-Neptunian objects, the physical studies are limited to the following:
thermal emissions for the largest objects (see size determination)
colour indices, i.e. comparisons of the apparent magnitudes using different filters
analysis of spectra, visual and infrared
Studying colours and spectra provides insight into the objects' origin and a potential correlation with other classes of objects, namely centaurs and some satellites of giant planets (Triton, Phoebe), suspected to originate in the Kuiper belt. However, the interpretations are typically ambiguous as the spectra can fit more than one model of the surface composition and depend on the unknown particle size. More significantly, the optical surfaces of small bodies are subject to modification by intense radiation, solar wind and micrometeorites. Consequently, the thin optical surface layer could be quite different from the regolith underneath, and not representative of the bulk composition of the body.
Small TNOs are thought to be low-density mixtures of rock and ice with some organic (carbon-containing) surface material such as tholins, detected in their spectra. On the other hand, the high density of Haumea, 2.6–3.3 g/cm3, suggests a very high non-ice content (compare with Pluto's density: 1.86 g/cm3). The composition of some small TNOs could be similar to that of comets. Indeed, some centaurs undergo seasonal changes when they approach the Sun, making the boundary blurred (see 2060 Chiron and 7968 Elst–Pizarro). However, population comparisons between centaurs and TNOs are still controversial.
Color indices
Colour indices are simple measures of the differences in the apparent magnitude of an object seen through blue (B), visible (V), i.e. green-yellow, and red (R) filters. The diagram illustrates known colour indices for all but the biggest objects (in slightly enhanced colour).
For reference, two moons, Triton and Phoebe, the centaur Pholus and the planet Mars are plotted (yellow labels, size not to scale). Correlations between the colours and the orbital characteristics have been studied, to confirm theories of different origin of the different dynamic classes:
Classical Kuiper belt objects (cubewanos) seem to be composed of two different colour populations: the so-called cold (inclination <5°) population, displaying only red colours, and the so-called hot (higher inclination) population, displaying the whole range of colours from blue to very red. A recent analysis based on data from the Deep Ecliptic Survey confirms this difference in colour between low-inclination (named Core) and high-inclination (named Halo) objects. The red colours of the Core objects, together with their unperturbed orbits, suggest that these objects could be a relic of the original population of the belt.
Scattered disc objects show colour resemblances with hot classical objects pointing to a common origin.
While the relatively dimmer bodies, as well as the population as a whole, are reddish (V−I = 0.3–0.6), the bigger objects are often more neutral in colour (infrared index V−I < 0.2). This distinction leads to the suggestion that the surfaces of the largest bodies are covered with ices, hiding the redder, darker areas underneath.
Spectral type
Among TNOs, as among centaurs, there is a wide range of colors from blue-grey (neutral) to very red, but unlike the centaurs, which are bimodally grouped into grey and red objects, the distribution for TNOs appears to be uniform. The spectra differ in reflectivity in visible red and near infrared. Neutral objects present a flat spectrum, reflecting as much red and infrared as visible light. Very red objects present a steep slope, reflecting much more in red and infrared.
A recent attempt at classification (shared with centaurs) uses a total of four classes, from BB (blue, or neutral colour, average B−V 0.70, V−R 0.39, e.g. Orcus) to RR (very red, B−V 1.08, V−R 0.71, e.g. Sedna), with BR and IR as intermediate classes. BR (intermediate blue-red) and IR (moderately red) differ mostly in the infrared bands I, J and H.
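As a rough illustration of how measured colour indices map onto these classes, the sketch below assigns a class by nearest distance to assumed class-average colours; the BB and RR averages are the values quoted above, while the BR and IR values are illustrative placeholders rather than published figures.

```python
# Hypothetical sketch: assign a TNO to a colour class from its (B-V, V-R) indices.
# BB and RR averages come from the text above; BR and IR are assumed intermediates.
CLASS_COLOURS = {
    "BB": (0.70, 0.39),   # blue / neutral (text values)
    "BR": (0.85, 0.50),   # assumed intermediate value
    "IR": (0.95, 0.60),   # assumed intermediate value
    "RR": (1.08, 0.71),   # very red (text values)
}

def colour_class(b_minus_v, v_minus_r):
    """Return the class whose average colours are nearest (Euclidean distance)."""
    return min(
        CLASS_COLOURS,
        key=lambda c: (CLASS_COLOURS[c][0] - b_minus_v) ** 2
                      + (CLASS_COLOURS[c][1] - v_minus_r) ** 2,
    )

print(colour_class(0.72, 0.40))  # "BB" -- Orcus-like colours
print(colour_class(1.10, 0.70))  # "RR" -- Sedna-like colours
```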
Typical models of the surface include water ice, amorphous carbon, silicates and organic macromolecules, named tholins, created by intense radiation. Four major tholins are used to fit the reddening slope:
Titan tholin, believed to be produced from a mixture of 90% N2 (nitrogen) and 10% CH4 (methane)
Triton tholin, as above but with very low (0.1%) methane content
Ice tholin I, believed to be produced from a mixture of 86% H2O (water) and 14% C2H6 (ethane)
Ice tholin II, believed to be produced from a mixture of 80% H2O (water), 16% CH3OH (methanol) and 3% CO2 (carbon dioxide)
As an illustration of the two extreme classes BB and RR, the following compositions have been suggested:
for Sedna (RR very red): 24% Triton tholin, 7% carbon, 10% N2, 26% methanol, and 33% methane
for Orcus (BB, grey/blue): 85% amorphous carbon, 4% Titan tholin, and 11% H2O ice
Size determination and distribution
Characteristically, big (bright) objects are typically on inclined orbits, whereas orbits near the invariable plane contain mostly small and dim objects.
It is difficult to estimate the diameter of TNOs. For very large objects, with very well known orbital elements (like Pluto), diameters can be precisely measured by occultation of stars. For other large TNOs, diameters can be estimated by thermal measurements. The intensity of light illuminating the object is known (from its distance to the Sun), and one assumes that most of its surface is in thermal equilibrium (usually not a bad assumption for an airless body). For a known albedo, it is possible to estimate the surface temperature, and correspondingly the intensity of heat radiation. Further, if the size of the object is known, it is possible to predict both the amount of visible light and emitted heat radiation reaching Earth. A simplifying factor is that the Sun emits almost all of its energy in visible light and at nearby frequencies, while at the cold temperatures of TNOs, the heat radiation is emitted at completely different wavelengths (the far infrared).
Thus there are two unknowns (albedo and size), which can be determined by two independent measurements (of the amount of reflected light and emitted infrared heat radiation). TNOs are so far from the Sun that they are very cold, hence producing black-body radiation peaking around 60 micrometres in wavelength. This wavelength of light is impossible to observe from the Earth's surface and can only be observed from space using, e.g., the Spitzer Space Telescope. For ground-based observations, astronomers observe the tail of the black-body radiation in the far infrared. This far-infrared radiation is so dim that the thermal method is only applicable to the largest KBOs. For the majority of (small) objects, the diameter is estimated by assuming an albedo. However, the albedos found range from 0.50 down to 0.05, resulting in a size range of 1,200–3,700 km for an object of absolute magnitude 1.0.
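The albedo-assumption step can be made concrete with the commonly used relation between diameter D, geometric albedo p and absolute magnitude H, D ≈ (1329 km/√p)·10^(−H/5); the short sketch below reproduces the 1,200–3,700 km range quoted above.

```python
import math

def diameter_km(abs_magnitude, albedo):
    """Approximate diameter from absolute magnitude H and geometric albedo p,
    using the commonly quoted relation D = (1329 km / sqrt(p)) * 10**(-H / 5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude / 5.0)

# For an object of absolute magnitude 1.0, the assumed albedo dominates the estimate:
print(round(diameter_km(1.0, 0.50)))  # ~1,200 km with a high albedo
print(round(diameter_km(1.0, 0.05)))  # ~3,700 km with a low albedo
```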
Notable objects
Exploration
The only mission to date that primarily targeted a trans-Neptunian object was NASA's New Horizons, which was launched in January 2006 and flew by the Pluto system in July 2015 and 486958 Arrokoth in January 2019.
In 2011, a design study explored a spacecraft survey of Quaoar, Sedna, Makemake, Haumea, and Eris.
In 2019, one mission design study for TNOs included concepts for orbital capture and multi-target scenarios.
Some TNOs that were studied in a design study paper were , , and Lempo.
The existence of planets beyond Neptune, ranging from less than an Earth mass (sub-Earth) up to a brown dwarf, has often been postulated for different theoretical reasons to explain several observed or speculated features of the Kuiper belt and the Oort cloud. It was recently proposed to use ranging data from the New Horizons spacecraft to constrain the position of such a hypothesized body.
NASA has been working in the 21st century towards a dedicated Interstellar Precursor mission, one intentionally designed to reach the interstellar medium, and as part of this, flybys of objects like Sedna have also been considered. Overall, these spacecraft studies have proposed launches in the 2020s, with the craft going somewhat faster than the Voyagers using existing technology. One 2018 design study for an Interstellar Precursor included a visit to the minor planet 50000 Quaoar in the 2030s.
Extreme trans-Neptunian objects
Among the extreme trans-Neptunian objects are three high-perihelion objects classified as sednoids: 90377 Sedna, 2012 VP113, and 541132 Leleākūhonua. They are distant detached objects with perihelia greater than 70 au. Their high perihelia keep them at a sufficient distance to avoid significant gravitational perturbations from Neptune. Previous explanations for the high perihelion of Sedna include a close encounter with an unknown planet on a distant orbit and a distant encounter with a random star or a member of the Sun's birth cluster that passed near the Solar System.
Train
A train (from Old French train, from Latin trahere, "to pull, to draw") is a series of connected vehicles that run along a railway track and transport people or freight. Trains are typically pulled or pushed by locomotives (often known simply as "engines"), though some are self-propelled, such as multiple units or railcars. Passengers and cargo are carried in railroad cars, also known as wagons or carriages. Trains are designed to a certain gauge, or distance between rails. Most trains operate on steel tracks with steel wheels, the low friction of which makes them more efficient than other forms of transport. Many countries use rail transport.
Trains have their roots in wagonways, which used railway tracks and were powered by horses or pulled by cables. Following the invention of the steam locomotive in the United Kingdom in 1802, trains rapidly spread around the world, allowing freight and passengers to move over land faster and cheaper than ever possible before. Rapid transit and trams were first built in the late 1800s to transport large numbers of people in and around cities. Beginning in the 1920s, and accelerating following World War II, diesel and electric locomotives replaced steam as the means of motive power. Following the development of cars, trucks, and extensive networks of highways which offered greater mobility, as well as faster airplanes, trains declined in importance and market share, and many rail lines were abandoned. The spread of buses led to the closure of many rapid transit and tram systems during this time as well.
Since the 1970s, governments, environmentalists, and train advocates have promoted increased use of trains due to their greater fuel efficiency and lower greenhouse gas emissions compared to other modes of land transport. High-speed rail, first built in the 1960s, has proven competitive with cars and planes over short to medium distances. Commuter rail has grown in importance since the 1970s as an alternative to congested highways and a means to promote development, as has light rail in the 21st century. Freight trains remain important for the transport of bulk commodities such as coal and grain, as well as being a means of reducing road traffic congestion by freight trucks.
While conventional trains operate on relatively flat tracks with two rails, a number of specialized trains exist which are significantly different in their mode of operation. Monorails operate on a single rail, while funiculars and rack railways are uniquely designed to traverse steep slopes. Experimental trains such as high speed maglevs, which use magnetic levitation to float above a guideway, are under development in the 2020s and offer higher speeds than even the fastest conventional trains. Trains which use alternative fuels such as natural gas and hydrogen are another 21st-century development.
Types and terminology
Trains can be sorted into types based on whether they haul passengers or freight (though mixed trains which haul both exist), by their weight (heavy rail for regular trains, light rail for lighter transit systems), by their speed, by their distance (short haul, long distance, transcontinental), and by what form of track they use. Conventional trains operate on two rails, but several other types of track systems are also in use around the world, such as monorail.
Terminology
The railway terminology that is used to describe a train varies between countries. The International Union of Railways seeks to provide standardised terminology across languages. The Association of American Railroads provides terminology for North America.
The British Rail Safety and Standards Board defines a train as a "light locomotive, self-propelled rail vehicle or road-rail vehicle in rail mode." A collection of passenger or freight carriages connected together (not necessarily with a locomotive) is referred to as a rake. A collection of rail vehicles may also be called a consist. A set of vehicles that are coupled together (such as the Pioneer Zephyr) is called a trainset. The term rolling stock is used to describe any kind of railway vehicle.
History
Early history
Trains are an evolution of wheeled wagons running on stone wagonways, the earliest of which were built by Babylon circa 2,200 BCE. Starting in the 1500s, wagonways were introduced to haul material from mines; from the 1790s, stronger iron rails were introduced. Following early developments in the second half of the 1700s, in 1804 a steam locomotive built by British inventor Richard Trevithick powered the first ever steam train. Outside of coal mines, where fuel was readily available, steam locomotives remained untried until the opening of the Stockton and Darlington Railway in 1825. British engineer George Stephenson ran a steam locomotive named Locomotion No. 1 on this long line, hauling over 400 passengers at up to . The success of this locomotive, and Stephenson's Rocket in 1829, convinced many of the value in steam locomotives, and within a decade the stock market bubble known as "Railway Mania" started across the United Kingdom.
News of the success of steam locomotives quickly reached the United States, where the first steam railroad opened in 1829. American railroad pioneers soon started manufacturing their own locomotives, designed to handle the sharper curves and rougher track typical of the country's railroads. The other nations of Europe also took note of British railroad developments, and most countries on the continent constructed and opened their first railroads in the 1830s and 1840s, following the first run of a steam train in France in late 1829. In the 1850s, trains continued to expand across Europe, with many influenced by or purchases of American locomotive designs. Other European countries pursued their own distinct designs. Around the world, steam locomotives grew larger and more powerful throughout the rest of the century as technology advanced.
Trains first entered service in South America, Africa, and Asia through construction by imperial powers, which starting in the 1840s built railroads to solidify control of their colonies and transport cargo for export. In Japan, which was never colonized, railroads first arrived in the early 1870s. By 1900, railroads were operating on every continent besides uninhabited Antarctica.
New technologies
Even as steam locomotive technology continued to improve, inventors in Germany started work on alternative methods for powering trains. Werner von Siemens built the first train powered by electricity in 1879, and went on to pioneer electric trams. Another German inventor, Rudolf Diesel, constructed the first diesel engine in the 1890s, though the potential of his invention to power trains was not realized until decades later. Between 1897 and 1903, tests of experimental electric locomotives on the Royal Prussian Military Railway in Germany demonstrated they were viable, setting speed records in excess of 200 km/h (124 mph). Early gas-powered "doodlebug" self-propelled railcars entered service on railroads in the first decade of the 1900s. Experimentation with diesel and gas power continued, culminating in the German "Flying Hamburger" in 1933, and the influential American EMD FT in 1939. These successful diesel locomotives showed that diesel power was superior to steam, due to lower costs, ease of maintenance, and better reliability. Meanwhile, Italy developed an extensive network of electric trains during the first decades of the 20th century, driven by that country's lack of significant coal reserves.
Dieselization and increased competition
World War II brought great destruction to existing railroads across Europe, Asia, and Africa. Following the war's conclusion in 1945, nations which had suffered extensive damage to their railroad networks took the opportunity provided by Marshall Plan funds (or economic assistance from the USSR and Comecon, for nations behind the Iron Curtain) and advances in technology to convert their trains to diesel or electric power. France, Russia, Switzerland, and Japan were leaders in adopting widespread electrified railroads, while other nations focused primarily on dieselization. By 1980, the majority of the world's steam locomotives had been retired, though they continued to be used in parts of Africa and Asia, along with a few holdouts in Europe and South America. China was the last country to fully dieselize, due to its abundant coal reserves; steam locomotives were used to haul mainline trains as late as 2005 in Inner Mongolia.
Trains began to face strong competition from automobiles and freight trucks in the 1930s, which greatly intensified following World War II. After the war, air transport also became a significant competitor for passenger trains. Large amounts of traffic shifted to these new forms of transportation, resulting in a widespread decline in train service, both freight and passenger. A new development in the 1960s was high-speed rail, which runs on dedicated rights of way and travels at speeds of 200 km/h (124 mph) or greater. The first high-speed rail service was the Japanese Shinkansen, which entered service in 1964. In the following decades, high speed rail networks were developed across much of Europe and Eastern Asia, providing fast and reliable service competitive with automobiles and airplanes. The first high-speed train in the Americas was Amtrak's Acela in the United States, which entered service in 2000.
To the present day
Towards the end of the 20th century, increased awareness of the benefits of trains for transport led to a revival in their use and importance. Freight trains are significantly more efficient than trucks, while also emitting far less greenhouse gas per ton-mile; passenger trains are also far more energy efficient than other modes of transport. According to the International Energy Agency, "On average, rail requires 12 times less energy and emits 7–11 times less GHGs per passenger-km travelled than private vehicles and airplanes, making it the most efficient mode of motorised passenger transport. Aside from shipping, freight rail is the most energy-efficient and least carbon-intensive way to transport goods." As such, rail transport is considered an important part of achieving sustainable energy. Intermodal freight trains, carrying double-stack shipping containers, have since the 1970s generated significant business for railroads and gained market share from trucks. Increased use of commuter rail has also been promoted as a means of fighting traffic congestion on highways in urban areas.
Components
Bogies
Bogies, also known in North America as trucks, support the wheels and axles of trains. Trucks range from just one axle to as many as four or more. Two-axle trucks are in the widest use worldwide, as they are better able to handle curves and support heavy loads than single axle trucks.
Couplers
Train vehicles are linked to one another by various systems of coupling. In much of Europe, India, and South America, trains primarily use buffers and chain couplers. In the rest of the world, Janney couplers are the most popular, with a few local variations persisting (such as Wilson couplers in the former Soviet Union). On multiple units all over the world, Scharfenberg couplers are common.
Brakes
Because trains are heavy, powerful brakes are needed to slow or stop trains, and because steel wheels on steel rails have relatively low friction, brakes must be distributed among as many wheels as possible. Early trains could only be stopped by manually applied hand brakes, requiring workers to ride on top of the cars and apply the brakes when the train went downhill. Hand brakes are still used to park cars and locomotives, but the predominant braking system for trains globally is air brakes, invented in 1869 by George Westinghouse. Air brakes are applied at once to the entire train using air hoses.
Warning devices
For safety and communication, trains are equipped with bells, horns, and headlights. Steam locomotives typically use steam whistles rather than horns. Other types of lights may be installed on locomotives and cars, such as classification lights, Mars Lights, and ditch lights.
Cabs
Locomotives are in most cases equipped with cabs, also known as driving compartments, where a train driver controls the train's operation. They may also be installed on unpowered train cars known as cab or control cars, to allow for a train to operate with the locomotive at the rear.
Operations
Scheduling and dispatching
To prevent collisions or other accidents, trains are often scheduled, and almost always are under the control of train dispatchers. Historically, trains operated based on timetables; most trains (including nearly all passenger trains), continue to operate based on fixed schedules, though freight trains may instead run on an as-needed basis, or when enough freight cars are available to justify running a train.
Maintenance
Simple repairs may be done while a train is parked on the tracks, but more extensive repairs will be done at a motive power depot. Similar facilities exist for repairing damaged or defective train cars. Maintenance of way trains are used to build and repair railroad tracks and other equipment.
Crew
Train drivers, also known as engineers, are responsible for operating trains. Conductors are in charge of trains and their cargo, and help passengers on passenger trains. Brakemen, also known as trainmen, were historically responsible for manually applying brakes, though the term is used today to refer to crew members who perform tasks such as operating switches, coupling and uncoupling train cars, and setting handbrakes on equipment. Steam locomotives require a fireman who is responsible for fueling and regulating the locomotive's fire and boiler. On passenger trains, other crew members assist passengers, such as chefs who prepare food and service attendants who provide food and drinks to passengers. Other passenger-train-specific duties include passenger car attendants, who assist passengers with boarding and alighting from trains, answer questions, and keep train cars clean, and sleeping car attendants, who perform similar duties in sleeping cars. Some trains can operate with automatic train operation without a driver directly present.
Gauge
Around the world, various track gauges are in use for trains. In most cases, trains can only operate on tracks that are of the same gauge; where different gauge trains meet, it is known as a break of gauge. Standard gauge, defined as 1,435 mm (4 ft 8 1/2 in) between the rails, is the most common gauge worldwide, though both broad-gauge and narrow-gauge trains are also in use. Trains also need to fit within the loading gauge profile to avoid fouling bridges and lineside infrastructure; this can be a limiting factor on loads such as the types of intermodal containers that may be carried.
Safety
Train accidents sometimes occur, including derailments (when a train leaves the tracks) and train wrecks (collisions between trains). Accidents were more common in the early days of trains, when railway signal systems, centralized traffic control, and failsafe systems to prevent collisions were primitive or did not yet exist. To prevent accidents, systems such as automatic train stop are used; these are failsafe systems that apply the brakes on a train if it passes a red signal and enters an occupied block, or if any of the train's equipment malfunctions. More advanced safety systems, such as positive train control, can also automatically regulate train speed, preventing derailments caused by entering curves or switches too fast.
Modern trains have a very good safety record overall, comparable with air travel. In the United States between 2000 and 2009, train travel averaged 0.43 deaths per billion passenger miles traveled. While this was higher than that of air travel at 0.07 deaths per billion passenger miles, it was also far below the 7.28 deaths per billion passenger miles of car travel. In the 21st century, several derailments of oil trains caused fatalities, most notably the Canadian Lac-Mégantic rail disaster in 2013 which killed 47 people and leveled much of the town of Lac-Mégantic.
The vast majority of train-related fatalities, over 90 percent, are due to trespassing on railroad tracks, or collisions with road vehicles at level crossings. Organizations such as Operation Lifesaver have been formed to improve safety awareness at railroad crossings, and governments have also launched ad campaigns. Trains cannot stop quickly when at speed; even an emergency brake application may still require more than a mile of stopping distance. As such, emphasis is on educating motorists to yield to trains at crossings and avoid trespassing.
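To see why stopping distances are so long, a rough kinematic estimate helps; the speed and deceleration used below are assumed, illustrative figures rather than regulatory or typical values.

```python
# Rough stopping-distance estimate: d = v^2 / (2 * a), ignoring reaction time and grade.
# Both input figures are assumptions chosen for illustration only.
speed_kmh = 90.0     # assumed speed of a loaded freight train
decel_ms2 = 0.15     # assumed average emergency deceleration (m/s^2)

v = speed_kmh / 3.6                      # convert km/h to m/s
distance_m = v ** 2 / (2.0 * decel_ms2)
print(f"{distance_m:.0f} m")             # ~2,080 m, roughly 1.3 miles
```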
Motive power
Before steam
The first trains were rope-hauled, gravity powered or pulled by horses.
Steam
Steam locomotives work by burning coal, wood or oil fuel in a boiler to heat water into steam, which powers the locomotive's pistons which are in turn connected to the wheels. In the mid 20th century, most steam locomotives were replaced by diesel or electric locomotives, which were cheaper, cleaner, and more reliable. Steam locomotives are still used in heritage railways operated in many countries for the leisure and enthusiast market.
Diesel
Diesel locomotives are powered with a diesel engine, which generates electricity to drive traction motors. This is known as a diesel–electric transmission, and is used on most larger diesels. Diesel power replaced steam for a variety of reasons: diesel locomotives were less complex, far more reliable, cheaper, cleaner, easier to maintain, and more fuel efficient.
Electric
Electric trains receive their current via overhead lines or through a third rail electric system, which is then used to power traction motors that drive the wheels. Electric traction offers a lower cost per mile of train operation but at a higher initial cost, which can only be justified on high traffic lines. Even though the cost per mile of construction is much higher, electric traction is cheaper to operate thanks to lower maintenance and purchase costs for locomotives and equipment. Compared to diesel locomotives, electric locomotives produce no direct emissions and accelerate much faster, making them better suited to passenger service, especially underground.
Other types
Various other types of train propulsion have been tried, some more successful than others.
In the mid 1900s, gas turbine locomotives were developed and successfully used, though most were retired due to high fuel costs and poor reliability.
In the 21st century, alternative fuels for locomotives are under development, due to increasing costs for diesel and a desire to reduce greenhouse gas emissions from trains. Examples include hydrail (trains powered by hydrogen fuel cells) and the use of compressed or liquefied natural gas.
Train cars
Train cars, also known as wagons, are unpowered rail vehicles which are typically pulled by locomotives. Many different types exist, specialized to handle various types of cargo. Some common types include boxcars (also known as covered goods wagons) that carry a wide variety of cargo, flatcars (also known as flat wagons) which have flat tops to hold cargo, hopper cars which carry bulk commodities, and tank cars which carry liquids and gases. Examples of more specialized types of train cars include bottle cars which hold molten steel, Schnabel cars which handle very heavy loads, and refrigerator cars which carry perishable goods.
Early train cars were small and light, much like early locomotives, but over time they have become larger as locomotives have become more powerful.
Passenger trains
A passenger train is used to transport people along a railroad line. These trains may consist of unpowered passenger railroad cars (also known as coaches or carriages) hauled by one or more locomotives, or may be self-propelled; self-propelled passenger trains are known as multiple units or railcars. Passenger trains travel between stations or depots, where passengers may board and disembark. In most cases, passenger trains operate on a fixed schedule and have priority over freight trains. In Europe, passenger trains are assigned to different train categories.
Passenger trains can be divided into short and long distance services.
Long distance trains
Long distance passenger trains travel over hundreds or even thousands of miles between cities. The longest passenger train service in the world is Russia's Trans-Siberian Railway between Moscow and Vladivostok, a distance of 9,289 kilometres (5,772 mi). In general, long distance trains may take days to complete their journeys, and stop at dozens of stations along their routes. For many rural communities, they are the only form of public transportation available.
Short distance trains
Short distance or regional passenger trains have travel times measured in hours or even minutes, as opposed to days. They run more frequently than long distance trains, and are often used by commuters. Short distance passenger trains specifically designed for commuters are known as commuter rail.
High speed trains
High speed trains are designed to be much faster than conventional trains, and typically run on their own dedicated tracks, separate from other, slower trains. The first high speed train was the Japanese Shinkansen, which opened in 1964. In the 21st century, services such as the French TGV and German Intercity Express are competitive with airplanes in travel time over short to medium distances.
A subset of high speed trains are higher speed trains, which bridge the gap between conventional and high speed trains, and travel at speeds between the two. Examples include the Northeast Regional in the United States, the Gatimaan Express in India, and the KTM ETS in Malaysia.
Luxury trains
Luxury trains provide premium rail services on their journey, either within a given country or across country borders. Some use refurbished classic rail cars.
Rapid transit trains
A number of types of trains are used to provide rapid transit to urban areas. These are distinct from traditional passenger trains in that they operate more frequently, typically do not share tracks with freight trains, and cover relatively short distances. Many different kinds of systems are in use globally.
Rapid transit trains that operate in tunnels below ground are known as subways, undergrounds, or metros. Elevated railways operate on viaducts or bridges above the ground, often on top of city streets. "Metro" may also refer to rapid transit that operates at ground level. In many systems, two or even all three of these types may exist on different portions of a network.
Trams
Trams, also known in North America as streetcars, typically operate on or parallel to streets in cities, with frequent stops and a high frequency of service.
Light rail
Light rail is a catchall term for a variety of systems, which may include characteristics of trams, heavier passenger trains, and rapid transit systems.
Specialized trains
There are a number of specialized trains which differ from the traditional definition of a train as a set of vehicles which travels on two rails.
Monorail
Monorails were developed to meet medium-demand traffic in urban transit, and consist of a train running on a single rail, typically elevated.
Maglev
Maglev technology uses magnets to levitate the train above the track, reducing friction and allowing higher speeds. The first commercial maglev train was an airport shuttle introduced in 1984 at Birmingham Airport in England.
The Shanghai maglev train, opened in 2002, is the fastest commercial train service of any kind, operating at speeds of up to 431 km/h (268 mph). Japan's L0 Series maglev holds the record for the world's fastest train ever, with a top speed of 603 km/h (375 mph). Maglev has not yet been used for inter-city mass transit routes, with only a few examples in use worldwide.
Mine trains
Mine trains are operated in large mines and carry both workers and goods. They are usually powered by electricity, to prevent emissions which would pose a health risk to workers underground.
Militarized trains
While they have long been important in transporting troops and military equipment, trains have occasionally been used for direct combat. Armored trains have been used in a number of conflicts, as have railroad based artillery systems. Railcar-launched ICBM systems have also been used by nuclear weapon states.
Rack railway
For climbing steep slopes, specialized rack railroads are used. In order to avoid slipping, a rack and pinion system is used, with a toothed rail placed between the two regular rails, which meshes with a drive gear under the locomotive.
Funicular
Funiculars are also used to climb steep slopes, but instead of a rack use a rope, which is attached to two cars and a pulley. The two funicular cars travel up and down the slope on parallel sets of rails when the pulley is rotated. This design makes funiculars an efficient means of moving people and cargo up and down slopes. The earliest funicular railroad, the Reisszug, opened around 1500.
Rubber-tired train
Rubber tire trains, or rubber-tired metro systems, employ rubber tires for traction and guidance, offering advantages like better acceleration and reduced noise. However, they come with disadvantages, including higher costs for installation and maintenance, faster tire wear, and complex tire inflation mechanisms that require regular upkeep. Nonetheless, these systems are utilized in many urban rapid transit networks worldwide, enhancing passenger comfort and urban transportation efficiency.
Freight trains
Freight trains are dedicated to the transport of cargo (also known as goods), rather than people, and are made up of freight cars or wagons. Longer freight trains typically operate between classification yards, while local trains provide freight service between yards and individual loading and unloading points along railroad lines. Major origin or destination points for freight may instead be served by unit trains, which exclusively carry one type of cargo and move directly from the origin to the destination and back without any intermediate stops.
Under the right circumstances, transporting freight by train is less expensive than other modes of transport, and also more energy efficient than transporting freight by road. In the United States, railroads on average moved a ton of freight 436 miles per gallon of fuel as of 2008, an efficiency four times greater than that of trucks. The Environmental and Energy Study Institute estimates that train transportation of freight is between 1.9 and 5.5 times more efficient than by truck, and also generates significantly less pollution. Rail freight is most economic when goods are being carried in bulk and over large distances, but it is less suited to short distances and small loads. With the advent of containerization, freight rail has become part of an intermodal freight network linked with trucking and container ships.
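A back-of-the-envelope comparison using only the efficiency ratios quoted above shows what they imply for fuel use; this is a sketch of the arithmetic, not a measured figure.

```python
# If rail is k times as fuel-efficient as road for the same ton-miles,
# shipping by rail uses 1/k of the fuel, i.e. a saving of (1 - 1/k).
for k in (1.9, 4.0, 5.5):   # EESI range plus the "four times" average figure
    saving = 1.0 - 1.0 / k
    print(f"rail {k:.1f}x as efficient -> about {saving:.0%} less fuel than trucking")
```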
The main disadvantage of rail freight is its lack of flexibility, and for this reason rail has lost much of the freight business to road competition. Many governments are trying to encourage more freight back onto trains because of the community benefits that it would bring.
Cultural impact
From the dawn of railroading, trains have had a significant cultural impact worldwide. Fast train travel made possible in days or hours journeys which previously took months. Transport of both freight and passengers became far cheaper, allowing for networked economies over large areas. Towns and cities along railroad lines grew in importance, while those bypassed declined or even became ghost towns. Major cities such as Chicago became prominent because they were places where multiple train lines met. In the United States, the completion of the first transcontinental railroad played a major role in the settling of the western part of the nation by non-indigenous migrants and its incorporation into the rest of the country. The Russian Trans-Siberian Railway had a similar impact by connecting the vast country from east to west, and making travel across frozen Siberia possible.
Trains have long had a major influence on music, art, and literature. Many films heavily involve or are set on trains. Toy train sets are commonly used by children, traditionally boys. Railfans are found around the world, along with hobbyists who create model train layouts. Train enthusiasts generally have a positive relationship with the railroad industry, though sometimes cause issues by trespassing.
Tetrahedron
In geometry, a tetrahedron (plural: tetrahedra or tetrahedrons), also known as a triangular pyramid, is a polyhedron composed of four triangular faces, six straight edges, and four vertices. The tetrahedron is the simplest of all the ordinary convex polyhedra.
The tetrahedron is the three-dimensional case of the more general concept of a Euclidean simplex, and may thus also be called a 3-simplex.
The tetrahedron is one kind of pyramid, which is a polyhedron with a flat polygon base and triangular faces connecting the base to a common point. In the case of a tetrahedron, the base is a triangle (any of the four faces can be considered the base), so a tetrahedron is also known as a "triangular pyramid".
Like all convex polyhedra, a tetrahedron can be folded from a single sheet of paper. It has two such nets.
For any tetrahedron there exists a sphere (called the circumsphere) on which all four vertices lie, and another sphere (the insphere) tangent to the tetrahedron's faces.
Regular tetrahedron
A regular tetrahedron is a tetrahedron in which all four faces are equilateral triangles. In other words, all of its faces are the same size and shape (congruent) and all edges are the same length. The regular tetrahedron is the simplest convex deltahedron, a polyhedron in which all of its faces are equilateral triangles; there are seven other convex deltahedra.
The regular tetrahedron is also one of the five regular Platonic solids, a set of polyhedra whose faces are all regular polygons. Known since antiquity, the Platonic solids are named after the Greek philosopher Plato, who associated the solids with the classical elements of nature. The regular tetrahedron was considered the classical element of fire, because of his interpretation of its sharpest corner being most penetrating.
The regular tetrahedron is self-dual, meaning its dual is another regular tetrahedron. The compound figure comprising two such dual tetrahedra forms a stellated octahedron, or stella octangula. Its interior is an octahedron, and correspondingly, a regular octahedron is the result of cutting off, from a regular tetrahedron, four regular tetrahedra of half the linear size (i.e., rectifying the tetrahedron).
The tetrahedron is also related to two other solids: by truncation the tetrahedron becomes a truncated tetrahedron. The dual of this solid is the triakis tetrahedron, a regular tetrahedron with four triangular pyramids attached, one to each of its faces, i.e., its kleetope.
Regular tetrahedra alone do not tessellate (fill space), but if alternated with regular octahedra in the ratio of two tetrahedra to one octahedron, they form the alternated cubic honeycomb, which is a tessellation. Some tetrahedra that are not regular, including the Schläfli orthoscheme and the Hill tetrahedron, can tessellate.
Measurement
Consider a regular tetrahedron with edge length a.
The height of a regular tetrahedron, from any face to the opposite vertex, is √(2/3) a ≈ 0.816a.
Its surface area is four times the area of an equilateral triangle: A = √3 a² ≈ 1.732a².
Its volume can be ascertained similarly to that of other pyramids, as one-third of the base area times the height. Because the base is an equilateral triangle, it is V = a³/(6√2) = (√2/12) a³ ≈ 0.118a³.
Its volume can also be obtained by dissecting a cube into a tetrahedron and four triangular pyramids.
Its dihedral angle—the angle formed by two planes in which adjacent faces lie—is arccos(1/3) ≈ 70.53°.
Its vertex–center–vertex angle—the angle between lines from the tetrahedron center to any two vertices—is arccos(−1/3) ≈ 109.47°, denoted the tetrahedral angle. It is the angle between Plateau borders at a vertex. Its value in radians is the length of the circular arc on the unit sphere resulting from centrally projecting one edge of the tetrahedron to the sphere. In chemistry, it is also known as the tetrahedral bond angle.
The radii of its circumsphere R, insphere r, midsphere rM, and exsphere rE are R = √(3/8) a ≈ 0.612a, r = a/(2√6) = R/3 ≈ 0.204a, rM = a/(2√2) ≈ 0.354a, and rE = a/√6 ≈ 0.408a (these values are checked numerically in the sketch after this list).
For a regular tetrahedron with side length a and circumsphere radius R, the distances dᵢ from an arbitrary point in 3-space to its four vertices satisfy the equations (d₁⁴ + d₂⁴ + d₃⁴ + d₄⁴)/4 + 16R⁴/9 = ((d₁² + d₂² + d₃² + d₄²)/4 + 2R²/3)² and 4(a⁴ + d₁⁴ + d₂⁴ + d₃⁴ + d₄⁴) = (a² + d₁² + d₂² + d₃² + d₄²)².
With respect to the base plane the slope of a face (2√2) is twice that of an edge (√2), corresponding to the fact that the horizontal distance covered from the base to the apex along an edge is twice that along the median of a face. In other words, if C is the centroid of the base, the distance from C to a vertex of the base is twice that from C to the midpoint of an edge of the base. This follows from the fact that the medians of a triangle intersect at its centroid, and this point divides each of them in two segments, one of which is twice as long as the other (see proof).
Its solid angle at a vertex subtended by a face is arccos(23/27), approximately 0.55129 steradians, 1809.8 square degrees, or 0.04387 spats.
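These measurements can be checked numerically; the sketch below uses an arbitrarily chosen edge length and recomputes the quantities listed above.

```python
import math

a = 2.0  # arbitrary edge length

height = math.sqrt(2.0 / 3.0) * a                 # face-to-opposite-vertex height
surface_area = math.sqrt(3.0) * a ** 2            # four equilateral-triangle faces
base_area = math.sqrt(3.0) / 4.0 * a ** 2
volume = base_area * height / 3.0                 # one-third base area times height
assert math.isclose(volume, a ** 3 / (6.0 * math.sqrt(2.0)))

dihedral = math.degrees(math.acos(1.0 / 3.0))             # ~70.53 degrees
tetrahedral_angle = math.degrees(math.acos(-1.0 / 3.0))   # ~109.47 degrees

R = math.sqrt(3.0 / 8.0) * a          # circumsphere radius
r = a / (2.0 * math.sqrt(6.0))        # insphere radius (note R = 3 r)
r_mid = a / (2.0 * math.sqrt(2.0))    # midsphere radius
solid_angle = math.acos(23.0 / 27.0)  # ~0.55129 steradians at each vertex

print(height, surface_area, volume)
print(dihedral, tetrahedral_angle, R / r, solid_angle)
```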
Cartesian coordinates
One way to construct a regular tetrahedron is by using the following Cartesian coordinates, defining the four vertices of a tetrahedron with edge length 2, centered at the origin, and two-level edges: (±1, 0, −1/√2) and (0, ±1, 1/√2).
Expressed symmetrically as 4 points on the unit sphere, centroid at the origin, with the lower face parallel to the xy-plane, the vertices are (√(8/9), 0, −1/3), (−√(2/9), ±√(2/3), −1/3), and (0, 0, 1), with edge length √(8/3).
A regular tetrahedron can be embedded inside a cube in two ways such that each vertex is a vertex of the cube, and each edge is a diagonal of one of the cube's faces. For one such embedding, the Cartesian coordinates of the vertices are (1, 1, 1), (1, −1, −1), (−1, 1, −1), and (−1, −1, 1).
This yields a tetrahedron with edge length 2√2, centered at the origin. For the other tetrahedron (which is dual to the first), reverse all the signs. These two tetrahedra's vertices combined are the vertices of a cube, demonstrating that the regular tetrahedron is the 3-demicube, a polyhedron produced by alternating a cube. This form has Schläfli symbol h{4,3} and a corresponding Coxeter diagram.
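A small sketch verifying the cube embedding described above (the vertex list is the one just given):

```python
import itertools
import math

# Alternate vertices of the cube [-1, 1]^3 (an even number of minus signs)
# form one regular tetrahedron; flipping every sign gives the dual tetrahedron.
verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def dist(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

edge_lengths = {round(dist(p, q), 6) for p, q in itertools.combinations(verts, 2)}
print(edge_lengths)                  # one value: all six edges are equal
print(round(2 * math.sqrt(2), 6))    # 2.828427, the face diagonal of the cube
```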
Symmetry
The vertices of a cube can be grouped into two groups of four, each forming a regular tetrahedron, showing one of the two tetrahedra in the cube. The symmetries of a regular tetrahedron correspond to half of those of a cube: those that map the tetrahedra to themselves, and not to each other. The tetrahedron is the only Platonic solid not mapped to itself by point inversion.
The regular tetrahedron has 24 isometries, forming the symmetry group known as full tetrahedral symmetry Td. This symmetry group is isomorphic to the symmetric group S4. They can be categorized as follows:
It has rotational tetrahedral symmetry T. This symmetry is isomorphic to the alternating group A4—the identity and 11 proper rotations—with the following conjugacy classes (in parentheses are given the permutations of the vertices, or correspondingly, the faces, and the unit quaternion representation):
identity (identity; 1)
rotation about an axis through a vertex, perpendicular to the opposite plane, by an angle of ±120°: 4 axes, 2 per axis, together 8 ((1 2 3), etc.; (1 ± i ± j ± k)/2)
rotation by an angle of 180° such that an edge maps to the opposite edge: 3 ((1 2)(3 4), etc.; i, j, k)
reflections in a plane perpendicular to an edge: 6
reflections in a plane combined with 90° rotation about an axis perpendicular to the plane: 3 axes, 2 per axis, together 6; equivalently, they are 90° rotations combined with inversion (x is mapped to −x): the rotations correspond to those of the cube about face-to-face axes
Orthogonal projections of the regular tetrahedron
The regular tetrahedron has two special orthogonal projections, one centered on a vertex or equivalently on a face, and one centered on an edge. The first corresponds to the A2 Coxeter plane.
Cross section of regular tetrahedron
The two skew perpendicular opposite edges of a regular tetrahedron define a set of parallel planes. When one of these planes intersects the tetrahedron the resulting cross section is a rectangle. When the intersecting plane is near one of the edges the rectangle is long and skinny. When halfway between the two edges the intersection is a square. The aspect ratio of the rectangle reverses as you pass this halfway point. For the midpoint square intersection the resulting boundary line traverses every face of the tetrahedron similarly. If the tetrahedron is bisected on this plane, both halves become wedges.
This property also applies for tetragonal disphenoids when applied to the two special edge pairs.
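A minimal sketch of the cross-section dimensions, assuming edge length a and a cutting plane a fraction t of the way from one of the two opposite edges to the other:

```python
# Between two perpendicular opposite edges of a regular tetrahedron with edge a,
# the parallel plane at fraction t cuts a rectangle with sides (1 - t)*a and t*a,
# so the perimeter is constant (2a) and the section is a square at t = 1/2.
a = 1.0
for t in (0.1, 0.25, 0.5, 0.75, 0.9):
    s1, s2 = (1.0 - t) * a, t * a
    print(f"t = {t:.2f}: {s1:.2f} x {s2:.2f} rectangle, perimeter {2 * (s1 + s2):.2f}")
```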
Spherical tiling
The tetrahedron can also be represented as a spherical tiling (of spherical triangles), and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.
Helical stacking
Regular tetrahedra can be stacked face-to-face in a chiral aperiodic chain called the Boerdijk–Coxeter helix.
In four dimensions, all the convex regular 4-polytopes with tetrahedral cells (the 5-cell, 16-cell and 600-cell) can be constructed as tilings of the 3-sphere by these chains, which become periodic in the three-dimensional space of the 4-polytope's boundary surface.
Irregular tetrahedra
Tetrahedra which do not have four equilateral faces are categorized and named by the symmetries they do possess.
If all three pairs of opposite edges of a tetrahedron are perpendicular, then it is called an orthocentric tetrahedron. When only one pair of opposite edges are perpendicular, it is called a semi-orthocentric tetrahedron.
In a trirectangular tetrahedron the three face angles at one vertex are right angles, as at the corner of a cube.
An isodynamic tetrahedron is one in which the cevians that join the vertices to the incenters of the opposite faces are concurrent.
An isogonic tetrahedron has concurrent cevians that join the vertices to the points of contact of the opposite faces with the inscribed sphere of the tetrahedron.
Disphenoid
A disphenoid is a tetrahedron with four congruent triangles as faces; the triangles necessarily have all angles acute. The regular tetrahedron is a special case of a disphenoid. Other names for the same shape include bisphenoid, isosceles tetrahedron and equifacial tetrahedron.
Orthoschemes
A 3-orthoscheme is a tetrahedron where all four faces are right triangles. A 3-orthoscheme is not a disphenoid, because its opposite edges are not of equal length. It is not possible to construct a disphenoid with right triangle or obtuse triangle faces.
An orthoscheme is an irregular simplex that is the convex hull of a tree in which all edges are mutually perpendicular. In a 3-dimensional orthoscheme, the tree consists of three perpendicular edges connecting all four vertices in a linear path that makes two right-angled turns. The 3-orthoscheme is a tetrahedron having two right angles at each of two vertices, so another name for it is birectangular tetrahedron. It is also called a quadrirectangular tetrahedron because it contains four right angles.
Coxeter also calls quadrirectangular tetrahedra "characteristic tetrahedra", because of their integral relationship to the regular polytopes and their symmetry groups. For example, the special case of a 3-orthoscheme with equal-length perpendicular edges is characteristic of the cube, which means that the cube can be subdivided into instances of this orthoscheme. If its three perpendicular edges are of unit length, its remaining edges are two of length √2 and one of length √3, so all its edges are edges or diagonals of the cube. The cube can be dissected into six such 3-orthoschemes four different ways, with all six surrounding the same cube diagonal. The cube can also be dissected into 48 smaller instances of this same characteristic 3-orthoscheme (just one way, by all of its symmetry planes at once). The characteristic tetrahedron of the cube is an example of a Heronian tetrahedron.
Every regular polytope, including the regular tetrahedron, has its characteristic orthoscheme. There is a 3-orthoscheme, which is the "characteristic tetrahedron of the regular tetrahedron". The regular tetrahedron is subdivided into 24 instances of its characteristic tetrahedron by its planes of symmetry. The 24 characteristic tetrahedra of the regular tetrahedron occur in two mirror-image forms, 12 of each.
If the regular tetrahedron has edge length 𝒍 = 2, its characteristic tetrahedron's six edges have lengths 1, 1/√3, 2/√3 around its exterior right-triangle face (whose edges lie opposite the characteristic angles 𝟀, 𝝉, 𝟁), plus √(3/2), 1/√2, 1/√6 (edges that are the characteristic radii of the regular tetrahedron). The 3-edge path along orthogonal edges of the orthoscheme is 1, 1/√3, 1/√6: first from a tetrahedron vertex to a tetrahedron edge center, then turning 90° to a tetrahedron face center, then turning 90° to the tetrahedron center. The orthoscheme has four dissimilar right triangle faces. The exterior face is a 60-90-30 triangle which is one-sixth of a tetrahedron face. The three faces interior to the tetrahedron are: a right triangle with edges 1, 1/√2, √(3/2), a right triangle with edges 1/√3, 1/√6, 1/√2, and a right triangle with edges 2/√3, 1/√6, √(3/2).
Space-filling tetrahedra
A space-filling tetrahedron packs with directly congruent or enantiomorphous (mirror image) copies of itself to tile space. The cube can be dissected into six 3-orthoschemes, three left-handed and three right-handed (one of each at each cube face), and cubes can fill space, so the characteristic 3-orthoscheme of the cube is a space-filling tetrahedron in this sense. (The characteristic orthoscheme of the cube is one of the Hill tetrahedra, a family of space-filling tetrahedra. All space-filling tetrahedra are scissors-congruent to a cube.)
A disphenoid can be a space-filling tetrahedron in the directly congruent sense, as in the disphenoid tetrahedral honeycomb. Regular tetrahedra, however, cannot fill space by themselves (moreover, the regular tetrahedron is not scissors-congruent to any other polyhedron that can fill space; see Hilbert's third problem). The tetrahedral-octahedral honeycomb fills space with alternating regular tetrahedron cells and regular octahedron cells in a ratio of 2:1.
Fundamental domains
An irregular tetrahedron which is the fundamental domain of a symmetry group is an example of a Goursat tetrahedron. The Goursat tetrahedra generate all the regular polyhedra (and many other uniform polyhedra) by mirror reflections, a process referred to as Wythoff's kaleidoscopic construction.
For polyhedra, Wythoff's construction arranges three mirrors at angles to each other, as in a kaleidoscope. Unlike a cylindrical kaleidoscope, Wythoff's mirrors are located at three faces of a Goursat tetrahedron such that all three mirrors intersect at a single point. (The Coxeter-Dynkin diagram of the generated polyhedron contains three nodes representing the three mirrors. The dihedral angle between each pair of mirrors is encoded in the diagram, as well as the location of a single generating point which is multiplied by mirror reflections into the vertices of the polyhedron.)
Among the Goursat tetrahedra which generate 3-dimensional honeycombs we can recognize an orthoscheme (the characteristic tetrahedron of the cube), a double orthoscheme (the characteristic tetrahedron of the cube face-bonded to its mirror image), and the space-filling disphenoid illustrated above. The disphenoid is the double orthoscheme face-bonded to its mirror image (a quadruple orthoscheme). Thus all three of these Goursat tetrahedra, and all the polyhedra they generate by reflections, can be dissected into characteristic tetrahedra of the cube.
Isometries of irregular tetrahedra
The isometries of an irregular (unmarked) tetrahedron depend on the geometry of the tetrahedron, with 7 cases possible. In each case a 3-dimensional point group is formed. Two other isometries (C3, [3]+) and (S4, [2+,4+]) can exist if face or edge markings are included. Tetrahedral diagrams are included for each type below, with edges colored by isometric equivalence and unique edges colored gray.
Subdivision and similarity classes
Tetrahedra subdivision is a process used in computational geometry and 3D modeling to divide a tetrahedron into several smaller tetrahedra. This process enhances the complexity and detail of tetrahedral meshes, which is particularly beneficial in numerical simulations, finite element analysis, and computer graphics. One of the commonly used subdivision methods is the Longest Edge Bisection (LEB), which identifies the longest edge of the tetrahedron and bisects it at its midpoint, generating two new, smaller tetrahedra. When this process is repeated multiple times, bisecting all the tetrahedra generated in each previous iteration, the process is called iterative LEB.
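The following is a minimal sketch of a single Longest Edge Bisection step, written in Python; the function names and the data layout (a tetrahedron as four coordinate tuples) are my own assumptions, not part of the source.

```python
# One LEB step: find the longest of the six edges, bisect it at its midpoint,
# and return the two smaller child tetrahedra that share the new vertex.
from itertools import combinations

def midpoint(p, q):
    return tuple((a + b) / 2.0 for a, b in zip(p, q))

def edge_length_sq(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def bisect_longest_edge(tet):
    """tet is a tuple of four vertices, each an (x, y, z) tuple."""
    i, j = max(combinations(range(4), 2),
               key=lambda e: edge_length_sq(tet[e[0]], tet[e[1]]))
    m = midpoint(tet[i], tet[j])
    others = [tet[k] for k in range(4) if k not in (i, j)]
    # Each child keeps the two vertices off the bisected edge, the midpoint,
    # and one endpoint of the bisected edge.
    return (tet[i], m, others[0], others[1]), (m, tet[j], others[0], others[1])

# Example: one LEB step on a regular tetrahedron (alternate corners of a cube).
reg = ((1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1))
print(bisect_longest_edge(reg))
```

Iterating this step on every tetrahedron produced so far gives the iterative LEB described above.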
A similarity class is the set of tetrahedra with the same geometric shape, regardless of their specific position, orientation, and scale. So, any two tetrahedra belonging to the same similarity class may be transformed into each other by a similarity transformation (a rigid motion combined with uniform scaling). The outcome of having a limited number of similarity classes in iterative subdivision methods is significant for computational modeling and simulation. It reduces the variability in the shapes and sizes of generated tetrahedra, preventing the formation of highly irregular elements that could compromise simulation results.
The iterative LEB of the regular tetrahedron has been shown to produce only 8 similarity classes. Furthermore, in the case of nearly equilateral tetrahedra where their two longest edges are not connected to each other, and the ratio between their longest and their shortest edge is less than or equal to , the iterated LEB produces no more than 37 similarity classes.
General properties
Volume
The volume of a tetrahedron can be obtained in many ways. It can be given by using the formula for the volume of a pyramid: V = (1/3) · A · h, where A is the area of the base and h is the height from the base to the apex. This applies for each of the four choices of the base, so the distances from the apices to the opposite faces are inversely proportional to the areas of these faces. Another way is by dissecting a triangular prism into three pieces.
Given the vertices of a tetrahedron in the following:
The volume of a tetrahedron can be ascertained in terms of a determinant of edge vectors, taken for example from one vertex to the other three, or from any other combination of pairs of vertices that form a simply connected graph. Comparing this formula with that used to compute the volume of a parallelepiped, we conclude that the volume of a tetrahedron is equal to 1/6 of the volume of any parallelepiped that shares three converging edges with it.
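A minimal numerical sketch of this determinant rule follows; the function name and vertex choices are mine, not the article's notation.

```python
# V = |det(a - d, b - d, c - d)| / 6, i.e. one sixth of the volume of the
# parallelepiped spanned by the three edges meeting at vertex d.
import numpy as np

def tetrahedron_volume(a, b, c, d):
    a, b, c, d = map(np.asarray, (a, b, c, d))
    return abs(np.linalg.det(np.column_stack((a - d, b - d, c - d)))) / 6.0

# Unit "corner" tetrahedron with legs along the axes: volume 1/6.
print(tetrahedron_volume((1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)))
```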
The absolute value of the scalar triple product of the three edge vectors can be represented as the absolute value of a determinant whose rows (or columns) are those edge vectors. Hence the volume can also be written in terms of the lengths of the three edges meeting at the vertex d and the plane angles α, β, γ occurring at d: α is the angle between the two edges connecting d to the vertices b and c, β is the corresponding angle for the vertices a and c, and γ is the corresponding angle for the vertices a and b.
If we do not require that d = 0 then
Given the distances between the vertices of a tetrahedron the volume can be computed using the Cayley–Menger determinant:
where the subscripts i, j ∈ {1, 2, 3, 4} represent the vertices and dij is the pairwise distance between them, i.e., the length of the edge connecting the two vertices. A negative value of the determinant means that a tetrahedron cannot be constructed with the given distances. This formula, sometimes called Tartaglia's formula, is essentially due to the painter Piero della Francesca in the 15th century, as a three-dimensional analogue of the 1st-century Heron's formula for the area of a triangle.
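A minimal sketch of the Cayley–Menger computation in Python (my own helper name and matrix layout, assuming the standard bordered form in which 288·V² equals the determinant):

```python
import numpy as np

def volume_from_distances(d):
    """d[i][j] is the distance between vertices i and j (d[i][i] = 0)."""
    d2 = np.asarray(d, dtype=float) ** 2
    cm = np.ones((5, 5))        # bordered matrix: first row/column of ones
    cm[0, 0] = 0.0
    cm[1:, 1:] = d2             # squared pairwise distances
    det = np.linalg.det(cm)
    if det < 0:
        raise ValueError("no tetrahedron exists with these distances")
    return np.sqrt(det / 288.0)

# Regular tetrahedron with unit edges: V = 1/(6*sqrt(2)) ≈ 0.11785.
unit = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(volume_from_distances(unit))
```

As in the text, a negative determinant signals that no tetrahedron realizes the given distances.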
Let , , and be the lengths of three edges that meet at a point, and , , and be those of the opposite edges. The volume of the tetrahedron is:
where
The above formula uses six lengths of edges, and the following formula uses three lengths of edges and three angles.
The volume of a tetrahedron can be ascertained by using the Heron formula. Suppose , , , , , and are the lengths of the tetrahedron's edges as in the following image. Here, the first three form a triangle, with opposite , opposite , and opposite . Then,
where
and
Any plane containing a bimedian (connector of opposite edges' midpoints) of a tetrahedron bisects the volume of the tetrahedron.
For tetrahedra in hyperbolic space or in three-dimensional elliptic geometry, the dihedral angles of the tetrahedron determine its shape and hence its volume. In these cases, the volume is given by the Murakami–Yano formula, after Jun Murakami and Masakazu Yano. However, in Euclidean space, scaling a tetrahedron changes its volume but not its dihedral angles, so no such formula can exist.
Any two opposite edges of a tetrahedron lie on two skew lines, and the distance between the edges is defined as the distance between the two skew lines. Let be the distance between the skew lines formed by opposite edges and as calculated here. Then another formula for the volume of a tetrahedron is given by
Properties analogous to those of a triangle
The tetrahedron has many properties analogous to those of a triangle, including an insphere, circumsphere, medial tetrahedron, and exspheres. It has respective centers such as incenter, circumcenter, excenters, Spieker center and points such as a centroid. However, there is generally no orthocenter in the sense of intersecting altitudes.
Gaspard Monge found a center that exists in every tetrahedron, now known as the Monge point: the point where the six midplanes of a tetrahedron intersect. A midplane is defined as a plane that is orthogonal to an edge joining any two vertices that also contains the centroid of an opposite edge formed by joining the other two vertices. If the tetrahedron's altitudes do intersect, then the Monge point and the orthocenter coincide to give the class of orthocentric tetrahedron.
An orthogonal line dropped from the Monge point to any face meets that face at the midpoint of the line segment between that face's orthocenter and the foot of the altitude dropped from the opposite vertex.
A line segment joining a vertex of a tetrahedron with the centroid of the opposite face is called a median and a line segment joining the midpoints of two opposite edges is called a bimedian of the tetrahedron. Hence there are four medians and three bimedians in a tetrahedron. These seven line segments are all concurrent at a point called the centroid of the tetrahedron. In addition the four medians are divided in a 3:1 ratio by the centroid (see Commandino's theorem). The centroid of a tetrahedron is the midpoint between its Monge point and circumcenter. These points define the Euler line of the tetrahedron that is analogous to the Euler line of a triangle.
The nine-point circle of the general triangle has an analogue in the circumsphere of a tetrahedron's medial tetrahedron. It is the twelve-point sphere and besides the centroids of the four faces of the reference tetrahedron, it passes through four substitute Euler points, one third of the way from the Monge point toward each of the four vertices. Finally it passes through the four base points of orthogonal lines dropped from each Euler point to the face not containing the vertex that generated the Euler point.
The center T of the twelve-point sphere also lies on the Euler line. Unlike its triangular counterpart, this center lies one third of the way from the Monge point M towards the circumcenter. Also, an orthogonal line through T to a chosen face is coplanar with two other orthogonal lines to the same face. The first is an orthogonal line passing through the corresponding Euler point to the chosen face. The second is an orthogonal line passing through the centroid of the chosen face. This orthogonal line through the twelve-point center lies midway between the Euler point orthogonal line and the centroidal orthogonal line. Furthermore, for any face, the twelve-point center lies at the midpoint of the corresponding Euler point and the orthocenter for that face.
The radius of the twelve-point sphere is one third of the circumradius of the reference tetrahedron.
There is a relation among the angles made by the faces of a general tetrahedron given by
where αij is the dihedral angle between the faces i and j.
The geometric median of the vertex position coordinates of a tetrahedron and its isogonic center are associated, under circumstances analogous to those observed for a triangle. Lorenz Lindelöf found that, corresponding to any given tetrahedron is a point now known as an isogonic center, O, at which the solid angles subtended by the faces are equal, having a common value of π sr, and at which the angles subtended by opposite edges are equal. A solid angle of π sr is one quarter of that subtended by all of space. When all the solid angles at the vertices of a tetrahedron are smaller than π sr, O lies inside the tetrahedron, and because the sum of distances from O to the vertices is a minimum, O coincides with the geometric median, M, of the vertices. In the event that the solid angle at one of the vertices, v, measures exactly π sr, then O and M coincide with v. If however, a tetrahedron has a vertex, v, with solid angle greater than π sr, M still corresponds to v, but O lies outside the tetrahedron.
Geometric relations
A tetrahedron is a 3-simplex. Unlike the case of the other Platonic solids, all the vertices of a regular tetrahedron are equidistant from each other (they are the only possible arrangement of four equidistant points in 3-dimensional space, for an example in electromagnetism cf. Thomson problem).
The above embedding divides the cube into five tetrahedra, one of which is regular. In fact, five is the minimum number of tetrahedra required to compose a cube. To see this, starting from a base tetrahedron with 4 vertices, each added tetrahedron adds at most 1 new vertex, so at least 4 more must be added to make a cube, which has 8 vertices.
Inscribing tetrahedra inside the regular compound of five cubes gives two more regular compounds, containing five and ten tetrahedra.
Regular tetrahedra cannot tessellate space by themselves, although this result seems likely enough that Aristotle claimed it was possible. However, two regular tetrahedra can be combined with an octahedron, giving a rhombohedron that can tile space as the tetrahedral-octahedral honeycomb.
On the other hand, several irregular tetrahedra are known, copies of which can tile space, for instance the characteristic orthoscheme of the cube and the disphenoid of the disphenoid tetrahedral honeycomb. The complete list remains an open problem.
If one relaxes the requirement that the tetrahedra be all the same shape, one can tile space using only tetrahedra in many different ways. For example, one can divide an octahedron into four identical tetrahedra and combine them again with two regular ones. (As a side-note: these two kinds of tetrahedron have the same volume.)
The tetrahedron is unique among the uniform polyhedra in possessing no parallel faces.
A law of sines for tetrahedra and the space of all shapes of tetrahedra
A corollary of the usual law of sines is that in a tetrahedron with vertices O, A, B, C, we have
One may view the two sides of this identity as corresponding to clockwise and counterclockwise orientations of the surface.
Putting any of the four vertices in the role of O yields four such identities, but at most three of them are independent: If the "clockwise" sides of three of them are multiplied and the product is inferred to be equal to the product of the "counterclockwise" sides of the same three identities, and then common factors are cancelled from both sides, the result is the fourth identity.
Three angles are the angles of some triangle if and only if their sum is 180° (π radians). What condition on 12 angles is necessary and sufficient for them to be the 12 angles of some tetrahedron? Clearly the sum of the angles of any side of the tetrahedron must be 180°. Since there are four such triangles, there are four such constraints on sums of angles, and the number of degrees of freedom is thereby reduced from 12 to 8. The four relations given by this sine law further reduce the number of degrees of freedom, from 8 down to not 4 but 5, since the fourth constraint is not independent of the first three. Thus the space of all shapes of tetrahedra is 5-dimensional.
Law of cosines for tetrahedra
Let , , , be the points of a tetrahedron. Let be the area of the face opposite vertex and let be the dihedral angle between the two faces of the tetrahedron adjacent to the edge . The law of cosines for a tetrahedron, which relates the areas of the faces of the tetrahedron to the dihedral angles about a vertex, is given by the following relation:
Interior point
Let P be any interior point of a tetrahedron of volume V for which the vertices are A, B, C, and D, and for which the areas of the opposite faces are Fa, Fb, Fc, and Fd. Then
For vertices A, B, C, and D, interior point P, and feet J, K, L, and M of the perpendiculars from P to the faces, and suppose the faces have equal areas, then
Inradius
Denoting the inradius of a tetrahedron as r and the inradii of its triangular faces as ri for i = 1, 2, 3, 4, we have
with equality if and only if the tetrahedron is regular.
If A1, A2, A3 and A4 denote the areas of the four faces, the value of r is given by
r = 3V / (A1 + A2 + A3 + A4).
This formula is obtained by dividing the tetrahedron into four tetrahedra whose vertices are the three vertices of one of the original faces together with the incenter. Since the four subtetrahedra fill the volume, we have V = (1/3) r (A1 + A2 + A3 + A4).
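A minimal numerical sketch of this relation in Python (helper names mine; face areas computed from cross products):

```python
import numpy as np

def face_area(p, q, r):
    p, q, r = map(np.asarray, (p, q, r))
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def inradius(a, b, c, d):
    a, b, c, d = map(np.asarray, (a, b, c, d))
    vol = abs(np.linalg.det(np.column_stack((a - d, b - d, c - d)))) / 6.0
    areas = (face_area(b, c, d), face_area(a, c, d),
             face_area(a, b, d), face_area(a, b, c))
    return 3.0 * vol / sum(areas)      # r = 3V / (A1 + A2 + A3 + A4)

# Regular tetrahedron on alternate cube corners (edge 2*sqrt(2)):
# prints about 0.577, i.e. 1/sqrt(3).
print(inradius((1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)))
```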
Circumradius
Denote the circumradius of a tetrahedron as R. Let a, b, c be the lengths of the three edges that meet at a vertex, and A, B, C the lengths of the opposite edges. Let V be the volume of the tetrahedron. Then
Circumcenter
The circumcenter of a tetrahedron can be found as the intersection of three bisector planes. A bisector plane is defined as the plane that passes through the midpoint of an edge of the tetrahedron and is orthogonal to that edge.
With this definition, the circumcenter of a tetrahedron with vertices ,,, can be formulated as matrix-vector product:
In contrast to the centroid, the circumcenter may not always lie in the interior of a tetrahedron. Analogously to the circumcenter of an obtuse triangle, the circumcenter lies outside the object for an obtuse tetrahedron.
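A minimal numerical sketch of the circumcenter computation (my own formulation rather than the article's matrix–vector product): requiring the center to be equidistant from all four vertices and subtracting the squared-distance equations pairwise leaves a 3×3 linear system.

```python
import numpy as np

def circumcenter(a, b, c, d):
    a, b, c, d = map(lambda v: np.asarray(v, dtype=float), (a, b, c, d))
    A = 2.0 * np.array([b - a, c - a, d - a])                  # coefficient matrix
    rhs = np.array([b @ b - a @ a, c @ c - a @ a, d @ d - a @ a])
    return np.linalg.solve(A, rhs)

# For the unit "corner" tetrahedron the circumcenter is (1/2, 1/2, 1/2),
# which lies outside the tetrahedron, illustrating that it need not be interior.
print(circumcenter((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))
```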
Centroid
The tetrahedron's center of mass can be computed as the arithmetic mean of its four vertices, see Centroid.
Faces
The sum of the areas of any three faces is greater than the area of the fourth face.
Integer tetrahedra
There exist tetrahedra having integer-valued edge lengths, face areas and volume. These are called Heronian tetrahedra. One example has one edge of 896, the opposite edge of 990 and the other four edges of 1073; two faces are isosceles triangles with areas of and the other two are isosceles with areas of , while the volume is .
A tetrahedron can have integer volume and consecutive integers as edges, an example being the one with edges 6, 7, 8, 9, 10, and 11 and volume 48.
Related polyhedra and compounds
A regular tetrahedron can be seen as a triangular pyramid.
A regular tetrahedron can be seen as a degenerate polyhedron, a uniform digonal antiprism, where base polygons are reduced digons.
A regular tetrahedron can be seen as a degenerate polyhedron, a uniform dual digonal trapezohedron, containing 6 vertices, in two sets of collinear edges.
A truncation process applied to the tetrahedron produces a series of uniform polyhedra. Truncating edges down to points produces the octahedron as a rectified tetrahedron. The process completes as a birectification, reducing the original faces down to points, and producing the self-dual tetrahedron once again.
This polyhedron is topologically related as a part of sequence of regular polyhedra with Schläfli symbols {3,n}, continuing into the hyperbolic plane.
The tetrahedron is topologically related to a series of regular polyhedra and tilings with order-3 vertex figures.
An interesting polyhedron can be constructed from five intersecting tetrahedra. This compound of five tetrahedra has been known for hundreds of years. It comes up regularly in the world of origami. Joining the twenty vertices would form a regular dodecahedron. There are both left-handed and right-handed forms, which are mirror images of each other. Superimposing both forms gives a compound of ten tetrahedra, in which the ten tetrahedra are arranged as five pairs of stellae octangulae. A stella octangula is a compound of two tetrahedra in dual position and its eight vertices define a cube as their convex hull.
The square hosohedron is another polyhedron with four faces, but it does not have triangular faces.
The Szilassi polyhedron and the tetrahedron are the only two known polyhedra in which each face shares an edge with each other face. Furthermore, the Császár polyhedron (itself the dual of the Szilassi polyhedron) and the tetrahedron are the only two known polyhedra in which every diagonal lies on the sides.
Applications
Numerical analysis
In numerical analysis, complicated three-dimensional shapes are commonly broken down into, or approximated by, a polygonal mesh of irregular tetrahedra in the process of setting up the equations for finite element analysis, especially in the numerical solution of partial differential equations. These methods have wide practical applications in computational fluid dynamics, aerodynamics, electromagnetic fields, civil engineering, chemical engineering, naval architecture and engineering, and related fields.
Structural engineering
A tetrahedron having stiff edges is inherently rigid. For this reason it is often used to stiffen frame structures such as spaceframes.
Fortification
Tetrahedrons are used in caltrops to provide an area denial weapon, because whichever way a caltrop lands, one sharp corner always points upwards.
Large concrete tetrahedrons have been used as anti-tank measures, or as Tetrapods to break down waves at coastlines.
Aviation
At some airfields, a large frame in the shape of a tetrahedron with two sides covered with a thin material is mounted on a rotating pivot and always points into the wind. It is built big enough to be seen from the air and is sometimes illuminated. Its purpose is to serve as a reference to pilots indicating wind direction.
Chemistry
The tetrahedron shape is seen in nature in covalently bonded molecules. All sp3-hybridized atoms are surrounded by atoms (or lone electron pairs) at the four corners of a tetrahedron. For instance in a methane molecule () or an ammonium ion (), four hydrogen atoms surround a central carbon or nitrogen atom with tetrahedral symmetry. For this reason, one of the leading journals in organic chemistry is called Tetrahedron. The central angle between any two vertices of a perfect tetrahedron is arccos(−1/3), or approximately 109.47°.
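A quick numerical check of that central angle (standard geometry, not taken from the source): place the four vertices at alternate corners of a cube and measure the angle two of them subtend at the center.

```python
import math
import numpy as np

verts = np.array([(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)], dtype=float)
u, v = verts[0], verts[1]                       # two vertices as seen from the center (origin)
cosine = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(cosine, math.degrees(math.acos(cosine)))  # -1/3 and about 109.47 degrees
```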
Water, , also has a tetrahedral structure, with two hydrogen atoms and two lone pairs of electrons around the central oxygen atom. Its tetrahedral symmetry is not perfect, however, because the lone pairs repel more than the single O–H bonds.
Quaternary phase diagrams of mixtures of chemical substances are represented graphically as tetrahedra.
However, quaternary phase diagrams in communication engineering are represented graphically on a two-dimensional plane.
There are molecules whose shape is based on four nearby atoms whose bonds form the edges of a tetrahedral structure, such as the white phosphorus allotrope and tetra-t-butyltetrahedrane, a known derivative of the hypothetical tetrahedrane.
Electricity and electronics
If six equal resistors are soldered together to form a tetrahedron, then the resistance measured between any two vertices is half that of one resistor.
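A minimal sketch verifying that claim (my own construction, using the standard graph-Laplacian method for effective resistance): the six resistors form the edges of the complete graph K4.

```python
import numpy as np
from itertools import combinations

R = 1.0                                  # resistance of each edge (ohms)
n = 4
L = np.zeros((n, n))
for i, j in combinations(range(n), 2):   # all six edges of the tetrahedron
    g = 1.0 / R                          # conductance of one resistor
    L[i, i] += g; L[j, j] += g
    L[i, j] -= g; L[j, i] -= g

Lp = np.linalg.pinv(L)                   # pseudo-inverse of the Laplacian
r_eff = Lp[0, 0] + Lp[1, 1] - 2 * Lp[0, 1]
print(r_eff)                             # 0.5, i.e. half of one resistor
```

By symmetry the same value is obtained for any pair of vertices.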
Since silicon is the most common semiconductor used in solid-state electronics, and silicon has a valence of four, the tetrahedral shape of the four chemical bonds in silicon is a strong influence on how crystals of silicon form and what shapes they assume.
Color space
Tetrahedra are used in color space conversion algorithms specifically for cases in which the luminance axis diagonally segments the color space (e.g. RGB, CMY).
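A minimal sketch (assumptions mine: a single tetrahedral cell of a lookup table, hypothetical vertex colors) of the barycentric step used in such tetrahedral interpolation: a point inside the cell receives the volume-weighted average of the output colors stored at its four vertices.

```python
import numpy as np

def barycentric(p, a, b, c, d):
    p, a, b, c, d = map(lambda v: np.asarray(v, dtype=float), (p, a, b, c, d))
    T = np.column_stack((a - d, b - d, c - d))
    w = np.linalg.solve(T, p - d)              # weights for a, b, c
    return np.append(w, 1.0 - w.sum())         # weight for d

def interpolate(p, verts, colors):
    w = barycentric(p, *verts)
    return w @ np.asarray(colors, dtype=float)

verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]    # one tetrahedron of a cube cell
colors = [(0, 0, 0), (255, 0, 0), (255, 255, 0), (255, 255, 255)]
print(interpolate((0.5, 0.25, 0.1), verts, colors))
```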
Games
The Royal Game of Ur, dating from 2600 BC, was played with a set of tetrahedral dice.
Especially in roleplaying, this solid is known as a 4-sided die, one of the more common polyhedral dice, with the number rolled appearing around the bottom or on the top vertex. Some Rubik's Cube-like puzzles are tetrahedral, such as the Pyraminx and Pyramorphix.
Geology
The tetrahedral hypothesis, originally published by William Lowthian Green to explain the formation of the Earth, was popular through the early 20th century.
Popular culture
Stanley Kubrick originally intended the monolith in 2001: A Space Odyssey to be a tetrahedron, according to Marvin Minsky, a cognitive scientist and expert on artificial intelligence who advised Kubrick on the HAL 9000 computer and other aspects of the movie. Kubrick scrapped the idea of using the tetrahedron as a visitor who saw footage of it did not recognize what it was and he did not want anything in the movie regular people did not understand.
The tetrahedron with regular faces is a solution to an old puzzle asking to form four equilateral triangles using six unbroken matchsticks. The solution places the matchsticks along the edges of a tetrahedron.
Tetrahedral graph
The skeleton of the tetrahedron (comprising the vertices and edges) forms a graph, with 4 vertices, and 6 edges. It is a special case of the complete graph, K4, and wheel graph, W4. It is one of 5 Platonic graphs, each a skeleton of its Platonic solid.
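A small sketch (my own construction) of this graph as the complete graph K4, confirming its counts of vertices, edges, and degrees:

```python
from itertools import combinations

vertices = range(4)
edges = list(combinations(vertices, 2))                  # every pair of vertices is joined
degree = {v: sum(v in e for e in edges) for v in vertices}
print(len(vertices), len(edges), degree)                 # 4 vertices, 6 edges, each vertex of degree 3
```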
| Mathematics | Three-dimensional space | null |
30647 | https://en.wikipedia.org/wiki/Tidal%20acceleration | Tidal acceleration | Tidal acceleration is an effect of the tidal forces between an orbiting natural satellite (e.g. the Moon) and the primary planet that it orbits (e.g. Earth). The acceleration causes a gradual recession of a satellite in a prograde orbit (satellite moving to a higher orbit, away from the primary body), and a corresponding slowdown of the primary's rotation. The process eventually leads to tidal locking, usually of the smaller body first, and later the larger body (e.g. theoretically with Earth in 50 billion years). The Earth–Moon system is the best-studied case.
The similar process of tidal deceleration occurs for satellites that have an orbital period that is shorter than the primary's rotational period, or that orbit in a retrograde direction.
The naming is somewhat confusing, because the average speed of the satellite relative to the body it orbits is decreased as a result of tidal acceleration, and increased as a result of tidal deceleration. This conundrum occurs because a positive acceleration at one instant causes the satellite to loop farther outward during the next half orbit, decreasing its average speed. A continuing positive acceleration causes the satellite to spiral outward with a decreasing speed and angular rate, resulting in a negative acceleration of angle. A continuing negative acceleration has the opposite effect.
Earth–Moon system
Discovery history of the secular acceleration
Edmond Halley was the first to suggest, in 1695, that the mean motion of the Moon was apparently getting faster, by comparison with ancient eclipse observations, but he gave no data. (It was not yet known in Halley's time that what is actually occurring includes a slowing-down of Earth's rate of rotation: see also Ephemeris time – History. When measured as a function of mean solar time rather than uniform time, the effect appears as a positive acceleration.) In 1749 Richard Dunthorne confirmed Halley's suspicion after re-examining ancient records, and produced the first quantitative estimate for the size of this apparent effect: a centurial rate of +10″ (arcseconds) in lunar longitude, which is a surprisingly accurate result for its time, not differing greatly from values assessed later, e.g. in 1786 by de Lalande, and to compare with values from about 10″ to nearly 13″ being derived about a century later.
Pierre-Simon Laplace produced in 1786 a theoretical analysis giving a basis on which the Moon's mean motion should accelerate in response to perturbational changes in the eccentricity of the orbit of Earth around the Sun. Laplace's initial computation accounted for the whole effect, thus seeming to tie up the theory neatly with both modern and ancient observations.
However, in 1854, John Couch Adams caused the question to be re-opened by finding an error in Laplace's computations: it turned out that only about half of the Moon's apparent acceleration could be accounted for on Laplace's basis by the change in Earth's orbital eccentricity. Adams' finding provoked a sharp astronomical controversy that lasted some years, but the correctness of his result, agreed upon by other mathematical astronomers including C. E. Delaunay, was eventually accepted. The question depended on correct analysis of the lunar motions, and received a further complication with another discovery, around the same time, that another significant long-term perturbation that had been calculated for the Moon (supposedly due to the action of Venus) was also in error, was found on re-examination to be almost negligible, and practically had to disappear from the theory. A part of the answer was suggested independently in the 1860s by Delaunay and by William Ferrel: tidal retardation of Earth's rotation rate was lengthening the unit of time and causing a lunar acceleration that was only apparent.
It took some time for the astronomical community to accept the reality and the scale of tidal effects. But eventually it became clear that three effects are involved, when measured in terms of mean solar time. Beside the effects of perturbational changes in Earth's orbital eccentricity, as found by Laplace and corrected by Adams, there are two tidal effects (a combination first suggested by Emmanuel Liais). First there is a real retardation of the Moon's angular rate of orbital motion, due to tidal exchange of angular momentum between Earth and Moon. This increases the Moon's angular momentum around Earth (and moves the Moon to a higher orbit with a lower orbital speed). Secondly, there is an apparent increase in the Moon's angular rate of orbital motion (when measured in terms of mean solar time). This arises from Earth's loss of angular momentum and the consequent increase in length of day.
Effects of Moon's gravity
The plane of the Moon's orbit around Earth lies close to the plane of Earth's orbit around the Sun (the ecliptic), rather than in the plane of the Earth's rotation (the equator) as is usually the case with planetary satellites. The mass of the Moon is sufficiently large, and it is sufficiently close, to raise tides in the matter of Earth. Foremost among such matter, the water of the oceans bulges out both towards and away from the Moon. If the material of the Earth responded immediately, there would be a bulge directly toward and away from the Moon. In the solid Earth tides, there is a delayed response due to the dissipation of tidal energy. The case for the oceans is more complicated, but there is also a delay associated with the dissipation of energy since the Earth rotates at a faster rate than the Moon's orbital angular velocity. This lunitidal interval in the responses causes the tidal bulge to be carried forward. Consequently, the line through the two bulges is tilted with respect to the Earth-Moon direction exerting torque between the Earth and the Moon. This torque boosts the Moon in its orbit and slows the rotation of Earth.
As a result of this process, the mean solar day, which has to be 86,400 equal seconds, is actually getting longer when measured in SI seconds with stable atomic clocks. (The SI second, when adopted, was already a little shorter than the current value of the second of mean solar time.) The small difference accumulates over time, which leads to an increasing difference between our clock time (Universal Time) on the one hand, and International Atomic Time and ephemeris time on the other hand: see ΔT. This led to the introduction of the leap second in 1972 to compensate for differences in the bases for time standardization.
In addition to the effect of the ocean tides, there is also a tidal acceleration due to flexing of Earth's crust, but this accounts for only about 4% of the total effect when expressed in terms of heat dissipation.
If other effects were ignored, tidal acceleration would continue until the rotational period of Earth matched the orbital period of the Moon. At that time, the Moon would always be overhead of a single fixed place on Earth. Such a situation already exists in the Pluto–Charon system. However, the slowdown of Earth's rotation is not occurring fast enough for the rotation to lengthen to a month before other effects make this irrelevant: about 1 to 1.5 billion years from now, the continual increase of the Sun's radiation will likely cause Earth's oceans to vaporize, removing the bulk of the tidal friction and acceleration. Even without this, the slowdown to a month-long day would still not have been completed by 4.5 billion years from now when the Sun will probably evolve into a red giant and likely destroy both Earth and the Moon.
Tidal acceleration is one of the few examples in the dynamics of the Solar System of a so-called secular perturbation of an orbit, i.e. a perturbation that continuously increases with time and is not periodic. Up to a high order of approximation, mutual gravitational perturbations between major or minor planets only cause periodic variations in their orbits, that is, parameters oscillate between maximum and minimum values. The tidal effect gives rise to a quadratic term in the equations, which leads to unbounded growth. In the mathematical theories of the planetary orbits that form the basis of ephemerides, quadratic and higher order secular terms do occur, but these are mostly Taylor expansions of very long time periodic terms. The reason that tidal effects are different is that unlike distant gravitational perturbations, friction is an essential part of tidal acceleration, and leads to permanent loss of energy from the dynamic system in the form of heat. In other words, we do not have a Hamiltonian system here.
Angular momentum and energy
The gravitational torque between the Moon and the tidal bulge of Earth causes the Moon to be constantly promoted to a slightly higher orbit and Earth to be decelerated in its rotation. As in any physical process within an isolated system, total energy and angular momentum are conserved. Effectively, energy and angular momentum are transferred from the rotation of Earth to the orbital motion of the Moon (however, most of the energy lost by Earth (−3.78 TW) is converted to heat by frictional losses in the oceans and their interaction with the solid Earth, and only about 1/30th (+0.121 TW) is transferred to the Moon). The Moon moves farther away from Earth (+38.30±0.08 mm/yr), so its potential energy, which is still negative (in Earth's gravity well), increases, i.e. becomes less negative. It stays in orbit, and from Kepler's 3rd law it follows that its average angular velocity actually decreases, so the tidal action on the Moon actually causes an angular deceleration, i.e. a negative acceleration (−25.97±0.05"/century²) of its revolution around Earth. The actual speed of the Moon also decreases. Although its kinetic energy decreases, its potential energy increases by a larger amount, i.e. Ep = −2Ec (virial theorem).
The rotational angular momentum of Earth decreases and consequently the length of the day increases. The net tide raised on Earth by the Moon is dragged ahead of the Moon by Earth's much faster rotation. Tidal friction is required to drag and maintain the bulge ahead of the Moon, and it dissipates the excess energy of the exchange of rotational and orbital energy between Earth and the Moon as heat. If the friction and heat dissipation were not present, the Moon's gravitational force on the tidal bulge would rapidly (within two days) bring the tide back into synchronization with the Moon, and the Moon would no longer recede. Most of the dissipation occurs in a turbulent bottom boundary layer in shallow seas such as the European Shelf around the British Isles, the Patagonian Shelf off Argentina, and the Bering Sea.
The dissipation of energy by tidal friction averages about 3.64 terawatts of the 3.78 terawatts extracted, of which 2.5 terawatts are from the principal M2 lunar component and the remainder from other components, both lunar and solar.
An equilibrium tidal bulge does not really exist on Earth because the continents do not allow this mathematical solution to take place. Oceanic tides actually rotate around the ocean basins as vast gyres around several amphidromic points where no tide exists. The Moon pulls on each individual undulation as Earth rotates—some undulations are ahead of the Moon, others are behind it, whereas still others are on either side. The "bulges" that actually do exist for the Moon to pull on (and which pull on the Moon) are the net result of integrating the actual undulations over all the world's oceans.
Historical evidence
This mechanism has been working for 4.5 billion years, since oceans first formed on Earth, but less so at times when much or most of the water was ice. There is geological and paleontological evidence that Earth rotated faster and that the Moon was closer to Earth in the remote past. Tidal rhythmites are alternating layers of sand and silt laid down offshore from estuaries having great tidal flows. Daily, monthly and seasonal cycles can be found in the deposits. This geological record is consistent with these conditions 620 million years ago: the day was 21.9±0.4 hours, and there were 13.1±0.1 synodic months/year and 400±7 solar days/year. The average recession rate of the Moon between then and now has been 2.17±0.31 cm/year, which is about half the present rate. The present high rate may be due to near resonance between natural ocean frequencies and tidal frequencies.
Analysis of layering in fossil mollusc shells from 70 million years ago, in the Late Cretaceous period, shows that there were 372 days a year, and thus that the day was about 23.5 hours long then.
Quantitative description of the Earth–Moon case
The motion of the Moon can be followed with an accuracy of a few centimeters by lunar laser ranging (LLR). Laser pulses are bounced off corner-cube prism retroreflectors on the surface of the Moon, emplaced during the Apollo missions of 1969 to 1972 and by Lunokhod 1 in 1970 and Lunokhod 2 in 1973. Measuring the return time of the pulse yields a very accurate measure of the distance. These measurements are fitted to the equations of motion. This yields numerical values for the Moon's secular deceleration, i.e. negative acceleration, in longitude and the rate of change of the semimajor axis of the Earth–Moon ellipse. From the period 1970–2015, the results are:
−25.97 ± 0.05 arcseconds/century² in ecliptic longitude
+38.30 ± 0.08 mm/yr in the mean Earth–Moon distance
This is consistent with results from satellite laser ranging (SLR), a similar technique applied to artificial satellites orbiting Earth, which yields a model for the gravitational field of Earth, including that of the tides. The model accurately predicts the changes in the motion of the Moon.
Finally, ancient observations of solar eclipses give fairly accurate positions for the Moon at those moments. Studies of these observations give results consistent with the value quoted above.
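A back-of-envelope sketch (constants approximate and supplied by me, not from the source) showing that the two LLR numbers above are mutually consistent: Kepler's third law gives dn/dt = −(3/2)(n/a)(da/dt), so the quoted recession rate should reproduce the quoted secular deceleration in longitude.

```python
import math

YEAR = 365.25 * 86400.0                 # seconds per year
CENTURY = 100.0 * YEAR
ARCSEC = math.degrees(1.0) * 3600.0     # arcseconds per radian

a = 384_400e3                           # mean Earth-Moon distance, m (approximate)
n = 2 * math.pi / (27.32 * 86400.0)     # mean motion, rad/s (sidereal month, approximate)
da_dt = 38.30e-3 / YEAR                 # +38.30 mm/yr expressed in m/s

dn_dt = -1.5 * (n / a) * da_dt          # rad/s^2
print(dn_dt * ARCSEC * CENTURY**2)      # roughly -26 arcseconds/century^2
```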
The other consequence of tidal acceleration is the deceleration of the rotation of Earth. The rotation of Earth is somewhat erratic on all time scales (from hours to centuries) due to various causes. The small tidal effect cannot be observed in a short period, but the cumulative effect on Earth's rotation as measured with a stable clock (ephemeris time, International Atomic Time) of a shortfall of even a few milliseconds every day becomes readily noticeable in a few centuries. Since some event in the remote past, more days and hours have passed (as measured in full rotations of Earth) (Universal Time) than would be measured by stable clocks calibrated to the present, longer length of the day (ephemeris time). This is known as ΔT. Recent values can be obtained from the International Earth Rotation and Reference Systems Service (IERS). A table of the actual length of the day in the past few centuries is also available.
From the observed change in the Moon's orbit, the corresponding change in the length of the day can be computed (where "cy" means "century"):
+2.4 ms/d/century or +88 s/cy² or +66 ns/d².
However, from historical records over the past 2700 years the following average value is found:
+1.72 ± 0.03 ms/d/century or +63 s/cy² or +47 ns/d² (i.e. an accelerating cause is responsible for about −0.7 ms/d/century).
By twice integrating over time, the corresponding cumulative value is a parabola having a coefficient of T² (time in centuries squared) of ½ × 63 s/cy²:
ΔT = ½ × 63 s/cy² × T² ≈ +31 s/cy² × T².
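A minimal sketch of this parabolic rule (coefficient as quoted above; the choice of reference epoch is an assumption of the example):

```python
def delta_t_seconds(centuries_from_reference):
    """Approximate cumulative ΔT from the historical mean +63 s/cy^2 rate change."""
    return 0.5 * 63.0 * centuries_from_reference ** 2

for T in (1, 5, 10, 27):                 # e.g. 27 centuries ≈ the span of the historical records
    print(T, delta_t_seconds(T))         # 31.5, 787.5, 3150.0, ~22964 seconds
```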
Opposing the tidal deceleration of Earth is a mechanism that is in fact accelerating the rotation. Earth is not a sphere, but rather an ellipsoid that is flattened at the poles. SLR has shown that this flattening is decreasing. The explanation is that during the ice age large masses of ice collected at the poles, and depressed the underlying rocks. The ice mass started disappearing over 10,000 years ago, but Earth's crust is still not in hydrostatic equilibrium and is still rebounding (the relaxation time is estimated to be about 4,000 years). As a consequence, the polar diameter of Earth increases, and the equatorial diameter decreases (Earth's volume must remain the same). This means that mass moves closer to the rotation axis of Earth, and that Earth's moment of inertia decreases. This process alone leads to an increase of the rotation rate (the phenomenon of a spinning figure skater who spins ever faster as they retract their arms). From the observed change in the moment of inertia the acceleration of rotation can be computed: the average value over the historical period must have been about −0.6 ms/d/century. This largely explains the historical observations.
Other cases of tidal acceleration
Most natural satellites of the planets undergo tidal acceleration to some degree (usually small), except for the two classes of tidally decelerated bodies. In most cases, however, the effect is small enough that even after billions of years most satellites will not actually be lost. The effect is probably most pronounced for Mars's second moon Deimos, which may become an Earth-crossing asteroid after it leaks out of Mars's grip.
The effect also arises between the different components of a binary star system, where tidal interactions influence the stars' rotation and orbital evolution over long timescales.
Tidal deceleration
This comes in two varieties:
Fast satellites: moons that orbit inside the primary's synchronous orbit, so that their orbital period is shorter than the primary's rotation period (Mars's moon Phobos is an example); the tidal bulge raised on the primary lags behind the satellite, and the satellite slowly spirals inward.
Retrograde satellites: moons that orbit in the direction opposite to the primary's rotation (Neptune's moon Triton is an example); here, too, the tidal interaction drains orbital angular momentum from the satellite, which spirals inward.
Mercury and Venus are believed to have no satellites chiefly because any hypothetical satellite would have suffered deceleration long ago and crashed into the planets due to the very slow rotation speeds of both planets; in addition, Venus also has retrograde rotation.
| Physical sciences | Celestial mechanics | Astronomy |
30649 | https://en.wikipedia.org/wiki/Tetracycline | Tetracycline | Tetracycline, sold under various brand names, is an antibiotic in the tetracyclines family of medications, used to treat a number of infections, including acne, cholera, brucellosis, plague, malaria, and syphilis. It is available in oral and topical formulations.
Common side effects include vomiting, diarrhea, rash, and loss of appetite. Other side effects include poor tooth development if used by children less than eight years of age, kidney problems, and sunburning easily. Use during pregnancy may harm the baby. It works by inhibiting protein synthesis in bacteria.
Tetracycline was patented in 1953 and was approved for prescription use in 1954. It is on the World Health Organization's List of Essential Medicines. Tetracycline is available as a generic medication. Tetracycline was originally made from bacteria of the genus Streptomyces.
Medical uses
Spectrum of activity
Tetracyclines have a broad spectrum of antibiotic action. Originally, they possessed some level of bacteriostatic activity against almost all medically relevant aerobic and anaerobic bacterial genera, both Gram-positive and Gram-negative, with a few exceptions, such as Pseudomonas aeruginosa and Proteus spp., which display intrinsic resistance. However, acquired (as opposed to inherent) resistance has proliferated in many pathogenic organisms and greatly eroded the formerly vast versatility of this group of antibiotics. Resistance amongst Staphylococcus spp., Streptococcus spp., Neisseria gonorrhoeae, anaerobes, members of the Enterobacteriaceae, and several other previously sensitive organisms is now quite common. Tetracyclines remain especially useful in the management of infections by certain obligately intracellular bacterial pathogens such as Chlamydia, Mycoplasma, and Rickettsia. They are also of value in spirochaetal infections, such as syphilis, and Lyme disease. Certain rare or exotic infections, including anthrax, plague, and brucellosis, are also susceptible to tetracyclines. Tetracycline tablets were used in the plague outbreak in India in 1994. Tetracycline is first-line therapy for Rocky Mountain spotted fever (Rickettsia), Lyme disease (B. burgdorferi), Q fever (Coxiella), psittacosis, Mycoplasma pneumoniae, and nasal carriage of meningococci.
It is also one of a group of antibiotics which together may be used to treat peptic ulcers caused by bacterial infections. The mechanism of action for the antibacterial effect of tetracyclines relies on disrupting protein translation in bacteria, thereby damaging the ability of microbes to grow and repair; however, protein translation is also disrupted in eukaryotic mitochondria leading to effects that may confound experimental results.
The following list presents MIC susceptibility data for some medically significant microorganisms:
Escherichia coli: 1 μg/mL to >128 μg/mL
Shigella: 1 μg/mL to 128 μg/mL
Anti-eukaryote use
The tetracyclines also have activity against certain eukaryotic parasites, including those responsible for diseases such as dysentery caused by an amoeba, malaria (a plasmodium), and balantidiasis (a ciliate).
Use as a biomarker
Since tetracycline is absorbed into bone, it is used as a marker of bone growth for biopsies in humans. Tetracycline labeling is used to determine the amount of bone growth within a certain period of time, usually a period around 21 days. Tetracycline is incorporated into mineralizing bone and can be detected by its fluorescence. In "double tetracycline labeling", a second dose is given 11–14 days after the first dose, and the amount of bone formed during that interval can be calculated by measuring the distance between the two fluorescent labels.
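A minimal worked example (numbers hypothetical, chosen only for illustration) of how a double label is read out: the distance between the two fluorescent lines divided by the time between doses gives the rate at which new bone was laid down.

```python
label_separation_um = 12.0      # measured distance between the two fluorescent labels (micrometres)
label_interval_days = 12.0      # days between the first and second tetracycline dose
mineral_apposition_rate = label_separation_um / label_interval_days
print(mineral_apposition_rate)  # 1.0 micrometre of new bone per day
```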
Tetracycline is also used as a biomarker in wildlife to detect consumption of medicine- or vaccine-containing baits.
Side effects
Use of tetracycline antibiotics can:
Discolor permanent teeth (yellow-gray-brown), from prenatal period through childhood and adulthood. Children receiving long- or short-term therapy with a tetracycline or glycylcycline may develop permanent brown discoloration of the teeth.
Be inactivated by calcium ions, so are not to be taken with milk, yogurt, and other dairy products
Be inactivated by aluminium, iron, and zinc ions, not to be taken at the same time as indigestion remedies (some common antacids and over-the-counter heartburn medicines)
Cause skin photosensitivity, so exposure to the sun or intense light is not recommended
Cause drug-induced lupus, and hepatitis
Cause microvesicular fatty liver
Cause tinnitus
Cause epigastric pain
Interfere with methotrexate by displacing it from the various protein-binding sites
Cause breathing complications, as well as anaphylactic shock, in some individuals
Affect bone growth of the fetus, so should be avoided during pregnancy
Fanconi syndrome may result from ingesting expired tetracyclines.
Caution should be exercised in long-term use when breastfeeding. Short-term use is safe; bioavailability in milk is low to nil. According to the U.S. Food and Drug Administration (FDA), cases of Stevens–Johnson syndrome, toxic epidermal necrolysis, and erythema multiforme associated with doxycycline use have been reported, but a causative role has not been established.
Pharmacology
Mechanism of action
Tetracycline inhibits protein synthesis by blocking the attachment of charged aminoacyl-tRNA to the A site of the ribosome, so that no new amino acid can be added to the growing peptide chain. Tetracycline binds primarily to the 30S subunit of microbial ribosomes and thereby prevents elongation of the peptide chain. The action is bacteriostatic rather than bactericidal and is generally reversible upon withdrawal of the drug. Mammalian cells are far less vulnerable to the effect of tetracycline because their cytoplasmic ribosomes are built from 40S and 60S subunits rather than a 30S subunit, and because mammalian cells do not actively accumulate the drug. This accounts for the relatively small off-target effect of tetracycline on human cells.
Mechanisms of resistance
Bacteria usually acquire resistance to tetracycline from horizontal transfer of a gene that either encodes an efflux pump or a ribosomal protection protein. Efflux pumps actively eject tetracycline from the cell, preventing the build up of an inhibitory concentration of tetracycline in the cytoplasm. Ribosomal protection proteins interact with the ribosome and dislodge tetracycline from the ribosome, allowing for translation to continue.
History
Discovery
The tetracyclines, a large family of antibiotics, were discovered as natural products and first prescribed in 1948. Benjamin Minge Duggar, working under Yellapragada Subbarow at Lederle Laboratories, discovered the first tetracycline antibiotic, chlortetracycline (Aureomycin), in 1945. The structure of Aureomycin was elucidated in 1952 and published in 1954 by the Pfizer-Woodward group. After the discovery of the structure, researchers at Pfizer began chemically modifying aureomycin by treating it with hydrogen in the presence of a palladized carbon catalyst. This chemical reaction replaced a chlorine moiety with a hydrogen via hydrogenolysis, creating a compound named tetracycline. Tetracycline displayed higher potency, better solubility, and more favorable pharmacology than the other antibiotics in its class, leading to its FDA approval in 1954. The new compound was one of the first commercially successful semi-synthetic antibiotics, and it laid the foundation for the development of sancycline, minocycline, and later the glycylcyclines.
Evidence in antiquity
Tetracycline has a high affinity for calcium and is incorporated into bones during the active mineralization of hydroxyapatite. When incorporated into bones, tetracycline can be identified using ultraviolet light.
There is evidence that early inhabitants of Northeastern Africa consumed tetracycline antibiotics. Nubian mummies from between 350 and 550 A.D. were found to exhibit patterns of fluorescence identical with that of modern tetracycline labelled bone.
It is conjectured that the beer brewed by the Nubians was the source of the tetracycline found in these bones.
Society and culture
Economics
According to data from EvaluatePharma published in the Boston Globe, in the USA the price of tetracycline rose from $0.06 per 250-mg pill in 2013 to $4.06 a pill in 2015. The Globe described the "big price hikes of some generic drugs" as a "relatively new phenomenon" which has left most pharmacists grappling with large upswings in the costs of generics, with "overnight" price changes sometimes exceeding 1,000%.
Brand names
It is marketed under the brand names Sumycin, Tetracyn, and Panmycin, among others. Actisite is a thread-like fiber formulation used in dental applications.
It is also used to produce several semisynthetic derivatives, which together are known as the tetracycline antibiotics. The term "tetracycline" is also used to denote the four-ring system of this compound; "tetracyclines" are related substances that contain the same four-ring system.
Media
Due to the drug's association with fighting infections, it serves as the main "commodity" in the science fiction series Aftermath, with the search for tetracycline becoming a major preoccupation in later episodes.
Tetracycline is also represented in Bohemia Interactive's survival sandbox DayZ. In the game, players may find the antibiotic and use it to treat the common cold, influenza, cholera and infected wounds, but the game does not portray any side effects associated with tetracycline.
Research
Genetic engineering
In genetic engineering, tetracycline is used in transcriptional activation. It has been used as an engineered "control switch" in chronic myelogenous leukemia models in mice. Engineers were able to develop a retrovirus that induced a particular type of leukemia in mice, and could then "switch" the cancer on and off through tetracycline administration. This could be used to grow the cancer in mice and then halt it at a particular stage to allow for further experimentation or study.
A technique being developed for the control of the mosquito species Aedes aegypti (the infection vector for yellow fever, dengue fever, Zika fever, and several other diseases) uses a strain that is genetically modified to require tetracycline to develop beyond the larval stage. Modified males raised in a laboratory develop normally as they are supplied with this chemical and can be released into the wild. Their subsequent offspring inherit this trait, but find no tetracycline in their environments, so never develop into adults.
| Biology and health sciences | Antibiotics | Health |
30651 | https://en.wikipedia.org/wiki/Transposable%20element | Transposable element | A transposable element (TE), also transposon, or jumping gene, is a type of mobile genetic element, a nucleic acid sequence in DNA that can change its position within a genome, sometimes creating or reversing mutations and altering the cell's genetic identity and genome size.
Transposition often results in duplication of the same genetic material. The discovery of mobile genetic elements earned Barbara McClintock a Nobel Prize in 1983. Further research into transposons has potential for use in gene therapy, and the finding of new drug targets in personalized medicine. The vast number of variables in the transposon makes data analytics difficult but combined with other sequencing technologies significant advances may be made in the understanding and treatment of disease.
Transposable elements make up about half of the genome in a eukaryotic cell, accounting for much of human genetic diversity. Although TEs are selfish genetic elements, many are important in genome function and evolution. Transposons are also very useful to researchers as a means to alter DNA inside a living organism.
There are at least two classes of TEs: Class I TEs or retrotransposons generally function via reverse transcription, while Class II TEs or DNA transposons encode the protein transposase, which they require for insertion and excision, and some of these TEs also encode other proteins.
Discovery by Barbara McClintock
Barbara McClintock discovered the first TEs in maize (Zea mays) at the Cold Spring Harbor Laboratory in New York. McClintock was experimenting with maize plants that had broken chromosomes.
In the winter of 1944–1945, McClintock planted corn kernels that were self-pollinated, meaning that the silk (style) of the flower received pollen from its own anther. These kernels came from a long line of plants that had been self-pollinated, causing broken arms on the end of their ninth chromosomes. As the maize plants began to grow, McClintock noted unusual color patterns on the leaves. For example, one leaf had two albino patches of almost identical size, located side by side on the leaf. McClintock hypothesized that during cell division certain cells lost genetic material, while others gained what they had lost. However, when comparing the chromosomes of the current generation of plants with the parent generation, she found certain parts of the chromosome had switched position. This refuted the popular genetic theory of the time that genes were fixed in their position on a chromosome. McClintock found that genes could not only move but they could also be turned on or off due to certain environmental conditions or during different stages of cell development.
McClintock also showed that gene mutations could be reversed. She presented her report on her findings in 1951, and published an article on her discoveries in Genetics in November 1953 entitled "Induction of Instability at Selected Loci in Maize".
At the 1951 Cold Spring Harbor Symposium where she first publicized her findings, her talk was met with silence. Her work was largely dismissed and ignored until the late 1960s–1970s when, after TEs were found in bacteria, it was rediscovered. She was awarded a Nobel Prize in Physiology or Medicine in 1983 for her discovery of TEs, more than thirty years after her initial research.
Classification
Transposable elements represent one of several types of mobile genetic elements. TEs are assigned to one of two classes according to their mechanism of transposition, which can be described as either copy and paste (Class I TEs) or cut and paste (Class II TEs).
Retrotransposon
Class I TEs are copied in two stages: first, they are transcribed from DNA to RNA, and the RNA produced is then reverse transcribed to DNA. This copied DNA is then inserted back into the genome at a new position. The reverse transcription step is catalyzed by a reverse transcriptase, which is often encoded by the TE itself. The characteristics of retrotransposons are similar to those of retroviruses, such as HIV.
Despite the potential negative effects of retrotransposons, such as insertion into the middle of a necessary DNA sequence, which can render important genes unusable, they are still essential for keeping a species' ribosomal DNA intact over the generations, preventing infertility.
Retrotransposons are commonly grouped into three main orders:
Retrotransposons with long terminal repeats (LTRs), which encode reverse transcriptase, similar to retroviruses
Retroposons, long interspersed nuclear elements (LINEs, LINE-1s, or L1s), which encode reverse transcriptase but lack LTRs, and are transcribed by RNA polymerase II
Short interspersed nuclear elements (SINEs) do not encode reverse transcriptase and are transcribed by RNA polymerase III
Retroviruses can also be considered TEs. For example, after the conversion of retroviral RNA into DNA inside a host cell, the newly produced retroviral DNA is integrated into the genome of the host cell. These integrated DNAs are termed proviruses. The provirus is a specialized form of eukaryotic retrotransposon, which can produce RNA intermediates that may leave the host cell and infect other cells. The transposition cycle of retroviruses has similarities to that of prokaryotic TEs, suggesting a distant relationship between the two.
DNA transposons
The cut-and-paste transposition mechanism of class II TEs does not involve an RNA intermediate. The transpositions are catalyzed by several transposase enzymes. Some transposases non-specifically bind to any target site in DNA, whereas others bind to specific target sequences. The transposase makes a staggered cut at the target site producing sticky ends, cuts out the DNA transposon and ligates it into the target site. A DNA polymerase fills in the resulting gaps from the sticky ends and DNA ligase closes the sugar-phosphate backbone. This results in target site duplication and the insertion sites of DNA transposons may be identified by short direct repeats (a staggered cut in the target DNA filled by DNA polymerase) followed by inverted repeats (which are important for the TE excision by transposase).
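To make the geometry of the staggered cut concrete, the following Python sketch (hypothetical sequences and function names, not any particular transposase's biochemistry) shows how the bases spanned by the staggered cut end up as short direct repeats flanking the inserted element, with terminal inverted repeats at the element's ends.

```python
def revcomp(seq: str) -> str:
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]


def cut_and_paste(target: str, cut_pos: int, overhang: int, element: str) -> str:
    """Toy model of class II insertion: the transposase makes a staggered cut
    spanning `overhang` bases starting at `cut_pos`; after the element is
    ligated in and the gaps are filled, that span appears as a short direct
    repeat (the target site duplication) on both sides of the element."""
    tsd = target[cut_pos:cut_pos + overhang]  # sequence that becomes the direct repeat
    return target[:cut_pos] + tsd + element + tsd + target[cut_pos + overhang:]


# Hypothetical element: terminal inverted repeats (TIRs) flanking an internal region.
tir = "CAGTGA"
element = tir + "ATGAAACCCGGGTGA" + revcomp(tir)

site = "AAACCCGGGTTTAAACCC"
print(cut_and_paste(site, cut_pos=6, overhang=4, element=element))
# The 4-bp slice "GGGT" now flanks the inserted element on both sides.
```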
Cut-and-paste TEs may be duplicated if their transposition takes place during S phase of the cell cycle, when a donor site has already been replicated but a target site has not yet been replicated. Such duplications at the target site can result in gene duplication, which plays an important role in genomic evolution.
Not all DNA transposons transpose through the cut-and-paste mechanism. In some cases, a replicative transposition is observed in which a transposon replicates itself to a new target site (e.g. helitron).
Class II TEs comprise less than 2% of the human genome, making the rest Class I.
Autonomous and non-autonomous
Transposition can be classified as either "autonomous" or "non-autonomous" in both Class I and Class II TEs. Autonomous TEs can move by themselves, whereas non-autonomous TEs require the presence of another TE to move. This is often because dependent TEs lack transposase (for Class II) or reverse transcriptase (for Class I).
The Activator element (Ac) is an example of an autonomous TE, and the Dissociation element (Ds) is an example of a non-autonomous TE. Without Ac, Ds is not able to transpose.
Class III
Some researchers also identify a third class of transposable elements, which has been described as "a grab-bag consisting of transposons that don't clearly fit into the other two categories". Examples of such TEs are the Foldback (FB) elements of Drosophila melanogaster, the TU elements of Strongylocentrotus purpuratus, and Miniature Inverted-repeat Transposable Elements.
Distribution
Approximately 64% of the maize genome is made up of TEs, as is 44% of the human genome, and almost half of murine genomes.
Recent studies have mapped the distribution of TEs with respect to transcription start sites (TSSs) and enhancers. One study found that about 25% of promoter regions harbor TEs. Older TEs are rarely found near TSS locations; TE frequency increases as a function of distance from the TSS. A possible explanation is that TEs close to a TSS might interfere with transcriptional pausing or first-intron splicing. As mentioned before, the presence of TEs near TSS locations is correlated with their evolutionary age (the number of mutations a TE has accumulated over time).
Examples
The first TEs were discovered in maize (Zea mays) by Barbara McClintock in 1948, for which she was later awarded a Nobel Prize. She noticed chromosomal insertions, deletions, and translocations caused by these elements. These changes in the genome could, for example, lead to a change in the color of corn kernels. About 64% of the maize genome consists of TEs. The Ac and Ds elements described by McClintock are Class II TEs. Transposition of Ac in tobacco has been demonstrated by B. Baker.
In the pond microorganism, Oxytricha, TEs play such a critical role that when removed, the organism fails to develop.
One family of TEs in the fruit fly Drosophila melanogaster is called P elements. They seem to have first appeared in the species only in the middle of the twentieth century; within the last 50 years, they spread through every population of the species. Gerald M. Rubin and Allan C. Spradling pioneered technology to use artificial P elements to insert genes into Drosophila by injecting the embryo.
In bacteria, TEs usually carry an additional gene for functions other than transposition, often for antibiotic resistance. In bacteria, transposons can jump from chromosomal DNA to plasmid DNA and back, allowing for the transfer and permanent addition of genes such as those encoding antibiotic resistance (multi-antibiotic resistant bacterial strains can be generated in this way). Bacterial transposons of this type belong to the Tn family. When the transposable elements lack additional genes, they are known as insertion sequences.
In humans, the most common TE is the Alu sequence. It is approximately 300 bases long and can be found between 300,000 and one million times in the human genome. Alu alone is estimated to make up 15–17% of the human genome.
Mariner-like elements are another prominent class of transposons found in multiple species, including humans. The Mariner transposon was first discovered by Jacobson and Hartl in Drosophila. This Class II transposable element is known for its uncanny ability to be transmitted horizontally in many species. There are an estimated 14,000 copies of Mariner in the human genome comprising 2.6 million base pairs. The first mariner-element transposons outside of animals were found in Trichomonas vaginalis.
Mu phage transposition is the best-known example of replicative transposition.
In Yeast genomes (Saccharomyces cerevisiae) there are five distinct retrotransposon families: Ty1, Ty2, Ty3, Ty4 and Ty5.
A helitron is a TE found in eukaryotes that is thought to replicate by a rolling-circle mechanism.
In human embryos, two types of transposons combined to form noncoding RNA that catalyzes the development of stem cells. During the early stages of a fetus's growth, the embryo's inner cell mass expands as these stem cells proliferate. The increase in this type of cell is crucial, since stem cells later differentiate and give rise to all the cells in the body.
In peppered moths, a transposon in a gene called cortex caused the moths' wings to turn completely black. This change in coloration helped moths to blend in with ash and soot-covered areas during the Industrial Revolution.
The mosquito Aedes aegypti carries a large and diverse set of TEs; an analysis by Matthews et al. (2018) suggests this is common to all mosquitoes.
Negative effects
Transposons have coexisted with eukaryotes throughout their evolutionary history and, through this coexistence, have become integrated into many organisms' genomes. Colloquially known as 'jumping genes', transposons can move within and between genomes, allowing for this integration.
While there are many positive effects of transposons in their host eukaryotic genomes, there are some instances of mutagenic effects that TEs have on genomes leading to disease and malignant genetic alterations.
Mechanisms of mutagenesis
TEs are mutagens: they contribute to the formation of new cis-regulatory DNA elements that interact with many of the transcription factors found in living cells, and they can undergo many evolutionary mutations and alterations. These are often causes of genetic disease and of the potentially lethal effects of ectopic expression.
TEs can damage the genome of their host cell in different ways:
A transposon or a retrotransposon that inserts itself into a functional gene can disable that gene.
After a DNA transposon leaves a gene, the resulting gap may not be repaired correctly.
Multiple copies of the same sequence, such as Alu sequences, can hinder precise chromosomal pairing during mitosis and meiosis, resulting in unequal crossovers, one of the main reasons for chromosome duplication.
TEs use a number of different mechanisms to cause genetic instability and disease in their host genomes.
Expression of disease-causing, damaging proteins that inhibit normal cellular function.
Many TEs contain promoters which drive transcription of their own transposase. These promoters can cause aberrant expression of linked genes, causing disease or mutant phenotypes.
Diseases
Diseases often caused by TEs include
Hemophilia A and B
LINE1 (L1) TEs that insert into the human factor VIII gene have been shown to cause hemophilia
Severe combined immunodeficiency
Insertion of L1 into the APC gene causes colon cancer, confirming that TEs play an important role in disease development.
Porphyria
Insertion of an Alu element into the PBGD gene interferes with the coding region and leads to acute intermittent porphyria (AIP).
Predisposition to cancer
LINE1 (L1) TEs and other retrotransposons have been linked to cancer because they cause genomic instability.
Duchenne muscular dystrophy
Caused by an SVA transposable element insertion in the fukutin (FKTN) gene, which renders the gene inactive.
Alzheimer's Disease and other Tauopathies
Transposable element dysregulation can cause neuronal death, leading to neurodegenerative disorders
Rate of transposition, induction and defense
One study estimated the rate of transposition of a particular retrotransposon, the Ty1 element, in Saccharomyces cerevisiae. Using several assumptions, the rate of successful transposition per single Ty1 element came out to about once every few months to once every few years. Some TEs contain heat-shock-like promoters, and their rate of transposition increases if the cell is subjected to stress, thus increasing the mutation rate under these conditions, which might be beneficial to the cell.
Cells defend against the proliferation of TEs in a number of ways. These include piRNAs and siRNAs, which silence TEs after they have been transcribed.
Given that TEs make up such a large fraction of many genomes, one might expect disease caused by misplaced TEs to be very common, but in most cases TEs are silenced through epigenetic mechanisms such as DNA methylation, chromatin remodeling and piRNAs, so that little or no phenotypic effect or movement of TEs occurs, as with some wild-type plant TEs. Certain mutant plants have been found to have defects in methylation-related enzymes (methyltransferases), which cause the transcription of TEs and thus affect the phenotype.
One hypothesis suggests that only approximately 100 LINE1-related sequences are active, despite their sequences making up 17% of the human genome. In human cells, silencing of LINE1 sequences is triggered by an RNA interference (RNAi) mechanism. Surprisingly, the RNAi sequences are derived from the 5′ untranslated region (UTR) of LINE1 itself. The 5′ LINE1 UTR, which contains the sense promoter for LINE1 transcription, is thought also to encode the antisense promoter for the miRNA that becomes the substrate for siRNA production. Inhibition of the RNAi silencing mechanism in this region showed an increase in LINE1 transcription.
Evolution
TEs are found in almost all life forms, and the scientific community is still exploring their evolution and their effect on genome evolution. It is unclear whether TEs originated in the last universal common ancestor, arose independently multiple times, or arose once and then spread to other kingdoms by horizontal gene transfer. While some TEs confer benefits on their hosts, most are regarded as selfish DNA parasites. In this way, they are similar to viruses. Various viruses and TEs also share features in their genome structures and biochemical abilities, leading to speculation that they share a common ancestor.
Because excessive TE activity can damage exons, many organisms have acquired mechanisms to inhibit their activity. Bacteria may undergo high rates of gene deletion as part of a mechanism to remove TEs and viruses from their genomes, while eukaryotic organisms typically use RNA interference to inhibit TE activity. Nevertheless, some TEs generate large families often associated with speciation events. Evolution often deactivates DNA transposons, leaving them as introns (inactive gene sequences). In vertebrate animal cells, nearly all 100,000+ DNA transposons per genome have genes that encode inactive transposase polypeptides. The first synthetic transposon designed for use in vertebrate (including human) cells, the Sleeping Beauty transposon system, is a Tc1/mariner-like transposon. Its dead ("fossil") versions are spread widely in the salmonid genome and a functional version was engineered by comparing those versions. Human Tc1-like transposons are divided into Hsmar1 and Hsmar2 subfamilies. Although both types are inactive, one copy of Hsmar1 found in the SETMAR gene is under selection as it provides DNA-binding for the histone-modifying protein. Many other human genes are similarly derived from transposons. Hsmar2 has been reconstructed multiple times from the fossil sequences.
The frequency and location of TE integrations influence genomic structure and evolution and affect gene and protein regulatory networks during development and in differentiated cell types. Large quantities of TEs within genomes may still present evolutionary advantages, however. Interspersed repeats within genomes are created by transposition events accumulating over evolutionary time. Because interspersed repeats block gene conversion, they protect novel gene sequences from being overwritten by similar gene sequences and thereby facilitate the development of new genes. TEs may also have been co-opted by the vertebrate immune system as a means of producing antibody diversity. The V(D)J recombination system operates by a mechanism similar to that of some TEs. TEs also serve to generate repeating sequences that can form dsRNA to act as a substrate for the action of ADAR in RNA editing.
TEs can contain many types of genes, including those conferring antibiotic resistance and the ability to transpose to conjugative plasmids. Some TEs also contain integrons, genetic elements that can capture and express genes from other sources. These contain integrase, which can integrate gene cassettes. There are over 40 antibiotic resistance genes identified on cassettes, as well as virulence genes.
Transposons do not always excise their elements precisely, sometimes removing the adjacent base pairs; this phenomenon is called exon shuffling. Shuffling two unrelated exons can create a novel gene product or, more likely, an intron.
Some non-autonomous DNA TEs found in plants can capture coding DNA from genes and shuffle them across the genome. This process can duplicate genes in the genome (a phenomenon called transduplication), and can contribute to generate novel genes by exon shuffling.
Evolutionary drive for TEs on the genomic context
One hypothesis states that TEs might provide a ready source of DNA that can be co-opted by the cell to help regulate gene expression. Research has revealed many diverse modes of co-evolution between TEs and the host genome, with some transcription factors targeting TE-associated genomic elements and chromatin features that have evolved from TE sequences. Most of the time, these modes do not follow a simple model of TEs directly regulating host gene expression.
Applications
Transposable elements can be harnessed in laboratory and research settings to study genomes of organisms and even engineer genetic sequences. The use of transposable elements can be split into two categories: for genetic engineering and as a genetic tool.
Genetic engineering
Insertional mutagenesis uses the features of a TE to insert a sequence. In most cases, this is used to remove a DNA sequence or cause a frameshift mutation.
In some cases the insertion of a TE into a gene can disrupt that gene's function in a reversible manner where transposase-mediated excision of the DNA transposon restores gene function.
Because excision may occur in some cells but not others, this produces plants in which neighboring cells have different genotypes.
This feature allows researchers to distinguish between genes that must be present inside of a cell in order to function (cell-autonomous) and genes that produce observable effects in cells other than those where the gene is expressed.
Genetic tool
In addition to the qualities mentioned under genetic engineering, a genetic tool is also:
Used for analysis of gene expression and protein function in signature-tagged mutagenesis.
This analytical tool allows researchers to determine the phenotypic expression of gene sequences. The technique also mutates the desired locus of interest so that the phenotypes of the original and the mutated gene can be compared.
Specific applications
TEs are also a widely used tool for mutagenesis of most experimentally tractable organisms. The Sleeping Beauty transposon system has been used extensively as an insertional tag for identifying cancer genes.
The Sleeping Beauty transposon system, a Tc1/mariner-class TE that was awarded Molecule of the Year in 2009, is active in mammalian cells and is being investigated for use in human gene therapy.
TEs are used for the reconstruction of phylogenies by means of presence/absence analyses. Transposons can also act as a biological mutagen in bacteria.
Common organisms in which the use of transposons is well developed include:
Drosophila
Arabidopsis thaliana
Escherichia coli
De novo repeat identification
De novo repeat identification is an initial scan of sequence data that seeks to find the repetitive regions of the genome, and to classify these repeats. Many computer programs exist to perform de novo repeat identification, all operating under the same general principles. As short tandem repeats are generally 1–6 base pairs in length and are often consecutive, their identification is relatively simple. Dispersed repetitive elements, on the other hand, are more challenging to identify, due to the fact that they are longer and have often acquired mutations. However, it is important to identify these repeats as they are often found to be transposable elements (TEs).
De novo identification of transposons involves three steps: 1) find all repeats within the genome, 2) build a consensus of each family of sequences, and 3) classify these repeats. There are three groups of algorithms for the first step. One group is referred to as the k-mer approach, where a k-mer is a sequence of length k. In this approach, the genome is scanned for overrepresented k-mers; that is, k-mers that occur more often than is likely based on probability alone. The length k is determined by the type of transposon being searched for. The k-mer approach also allows mismatches, the number of which is determined by the analyst. Some k-mer approach programs use the k-mer as a base, and extend both ends of each repeated k-mer until there is no more similarity between them, indicating the ends of the repeats. Another group of algorithms employs a method called sequence self-comparison. Sequence self-comparison programs use databases such as AB-BLAST to conduct an initial sequence alignment. As these programs find groups of elements that partially overlap, they are useful for finding highly diverged transposons, or transposons with only a small region copied into other parts of the genome. Another group of algorithms follows the periodicity approach. These algorithms perform a Fourier transformation on the sequence data, identifying periodicities, regions that are repeated periodically, and are able to use peaks in the resultant spectrum to find candidate repetitive elements. This method works best for tandem repeats, but can be used for dispersed repeats as well. However, it is a slow process, making it an unlikely choice for genome-scale analysis.
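The k-mer step described above can be sketched in a few lines of Python. This is a minimal illustration only, using a toy sequence and a hand-picked threshold (real tools derive the cutoff from a background model and then extend seed k-mers outward to delimit full repeats); the function name and parameters are invented for the example.

```python
from collections import Counter

def overrepresented_kmers(seq: str, k: int, min_count: int) -> dict[str, int]:
    """Count every k-mer in `seq` and keep those seen at least `min_count` times.
    Repeat-finding tools would then extend such seeds until the copies stop
    agreeing, which marks the ends of the candidate repeat."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return {kmer: n for kmer, n in counts.items() if n >= min_count}

# Toy genome with three copies of the repeat "ACGTTGCA" embedded in other sequence.
genome = "TTACGTTGCAGGCAACGTTGCATTTACGTTGCAGG"
print(overrepresented_kmers(genome, k=8, min_count=3))
# {'ACGTTGCA': 3}
```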
The second step of de novo repeat identification involves building a consensus of each family of sequences. A consensus sequence is a sequence that is created based on the repeats that comprise a TE family. A base pair in a consensus is the one that occurred most often in the sequences being compared to make the consensus. For example, in a family of 50 repeats where 42 have a T base pair in the same position, the consensus sequence would have a T at this position as well, as the base pair is representative of the family as a whole at that particular position, and is most likely the base pair found in the family's ancestor at that position. Once a consensus sequence has been made for each family, it is then possible to move on to further analysis, such as TE classification and genome masking in order to quantify the overall TE content of the genome.
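The majority-rule consensus described here can likewise be sketched briefly in Python. The sketch assumes toy, equal-length, pre-aligned copies; real pipelines first align the family members and must handle gaps and ties.

```python
from collections import Counter

def consensus(copies: list[str]) -> str:
    """Column-wise majority vote over equal-length, pre-aligned repeat copies."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*copies))

# Five copies of a repeat family, each diverged at one position; the majority base wins.
family = ["ACGTACGT", "ACGTACGA", "ACCTACGT", "ACGTACGT", "ACGTTCGT"]
print(consensus(family))  # ACGTACGT
```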
Adaptive TEs
Transposable elements have been recognized as good candidates for stimulating gene adaptation, through their ability to regulate the expression levels of nearby genes. Combined with their "mobility", transposable elements can be relocated adjacent to their targeted genes, and control the expression levels of the gene, dependent upon the circumstances.
A study conducted in 2008, "High Rate of Recent Transposable Element–Induced Adaptation in Drosophila melanogaster", used D. melanogaster that had recently migrated from Africa to other parts of the world as a basis for studying adaptations caused by transposable elements. Although most of the TEs were located in introns, the experiment showed a significant difference in gene expression between the populations in Africa and in other parts of the world. The four TEs that caused the selective sweep were more prevalent in D. melanogaster from temperate climates, leading the researchers to conclude that the selective pressures of the climate prompted genetic adaptation. From this experiment, the authors concluded that adaptive TEs are prevalent in nature, enabling organisms to adapt gene expression in response to new selective pressures.
However, not all effects of adaptive TEs are beneficial to the population. In research conducted in 2009, "A Recent Adaptive Transposable Element Insertion Near Highly Conserved Developmental Loci in Drosophila melanogaster", a TE inserted between Jheh 2 and Jheh 3 was found to reduce the expression level of both genes. Downregulation of these genes caused Drosophila to exhibit extended developmental time and reduced egg-to-adult viability. Although this insertion was observed at high frequency in all non-African populations, it was not fixed in any of them. This is unsurprising, since a population would be expected to favor higher egg-to-adult viability and therefore to purge the trait caused by this specific TE insertion.
At the same time, there have been several reports showing the advantageous adaptation caused by TEs. In the research done with silkworms, "An Adaptive Transposable Element insertion in the Regulatory Region of the EO Gene in the Domesticated Silkworm", a TE insertion was observed in the cis-regulatory region of the EO gene, which regulates molting hormone 20E, and enhanced expression was recorded. While populations without the TE insert are often unable to effectively regulate hormone 20E under starvation conditions, those with the insert had a more stable development, which resulted in higher developmental uniformity.
These three experiments all demonstrated different ways in which TE insertions can be advantageous or disadvantageous, through means of regulating the expression level of adjacent genes. The field of adaptive TE research is still under development and more findings can be expected in the future.
TEs participate in gene control networks
Recent studies have confirmed that TEs can contribute to the generation of transcription factors; however, how this contribution affects gene control networks is not yet fully understood. TEs are common in many regions of the genome, making up about 45% of total human DNA, and they have contributed about 16% of transcription factor binding sites. A larger number of binding motifs is nevertheless found in non-TE-derived DNA than in TE-derived DNA. All of these factors point to the direct participation of TEs in gene control networks.
| Biology and health sciences | Molecular biology | Biology |
30653 | https://en.wikipedia.org/wiki/Tuberculosis | Tuberculosis | Tuberculosis (TB), also known colloquially as the "white death", or historically as consumption, is a contagious disease usually caused by Mycobacterium tuberculosis (MTB) bacteria. Tuberculosis generally affects the lungs, but it can also affect other parts of the body. Most infections show no symptoms, in which case it is known as latent tuberculosis. Around 10% of latent infections progress to active disease that, if left untreated, kills about half of those affected. Typical symptoms of active TB are chronic cough with blood-containing mucus, fever, night sweats, and weight loss. Infection of other organs can cause a wide range of symptoms.
Tuberculosis is spread from one person to the next through the air when people who have active TB in their lungs cough, spit, speak, or sneeze. People with latent TB do not spread the disease. Active infection occurs more often in people with HIV/AIDS and in those who smoke. Diagnosis of active TB is based on chest X-rays, as well as microscopic examination and culture of bodily fluids. Diagnosis of latent TB relies on the tuberculin skin test (TST) or blood tests.
Prevention of TB involves screening those at high risk, early detection and treatment of cases, and vaccination with the bacillus Calmette-Guérin (BCG) vaccine. Those at high risk include household, workplace, and social contacts of people with active TB. Treatment requires the use of multiple antibiotics over a long period of time. Antibiotic resistance is a growing problem, with increasing rates of multiple drug-resistant tuberculosis (MDR-TB).
In 2018, one quarter of the world's population was thought to have a latent infection of TB. New infections occur in about 1% of the population each year. In 2022, an estimated 10.6 million people developed active TB, resulting in 1.3 million deaths, making it the second leading cause of death from an infectious disease after COVID-19. As of 2018, most TB cases occurred in the WHO regions of South-East Asia (44%), Africa (24%), and the Western Pacific (18%), with more than 50% of cases being diagnosed in seven countries: India (27%), China (9%), Indonesia (8%), the Philippines (6%), Pakistan (6%), Nigeria (4%), and Bangladesh (4%). By 2021, the number of new cases each year was decreasing by around 2% annually. About 80% of people in many Asian and African countries test positive, while 5–10% of people in the United States test positive via the tuberculin test. Tuberculosis has been present in humans since ancient times.
History
Tuberculosis has existed since antiquity. The oldest unambiguously detected M. tuberculosis gives evidence of the disease in the remains of bison in Wyoming dated to around 17,000 years ago. However, whether tuberculosis originated in bovines, then transferred to humans, or whether both bovine and human tuberculosis diverged from a common ancestor, remains unclear. A comparison of the genes of M. tuberculosis complex (MTBC) in humans to MTBC in animals suggests humans did not acquire MTBC from animals during animal domestication, as researchers previously believed. Both strains of the tuberculosis bacteria share a common ancestor, which could have infected humans even before the Neolithic Revolution. Skeletal remains show some prehistoric humans (4000 BC) had TB, and researchers have found tubercular decay in the spines of Egyptian mummies dating from 3000 to 2400 BC. Genetic studies suggest the presence of TB in the Americas from about AD 100.
Before the Industrial Revolution, folklore often associated tuberculosis with vampires. When one member of a family died from the disease, the other infected members would lose their health slowly. People believed this was caused by the original person with TB draining the life from the other family members.
Identification
Although Richard Morton established the pulmonary form associated with tubercles as a pathology in 1689, due to the variety of its symptoms, TB was not identified as a single disease until the 1820s. Benjamin Marten conjectured in 1720 that consumptions were caused by microbes which were spread by people living close to each other. In 1819, René Laennec claimed that tubercles were the cause of pulmonary tuberculosis. J. L. Schönlein first published the name "tuberculosis" (German: Tuberkulose) in 1832.
Between 1838 and 1845, John Croghan, the owner of Mammoth Cave in Kentucky from 1839 onwards, brought a number of people with tuberculosis into the cave in the hope of curing the disease with the constant temperature and purity of the cave air; each died within a year. Hermann Brehmer opened the first TB sanatorium in 1859 in Görbersdorf (now Sokołowsko) in Silesia. In 1865, Jean Antoine Villemin demonstrated that tuberculosis could be transmitted, via inoculation, from humans to animals and among animals. (Villemin's findings were confirmed in 1867 and 1868 by John Burdon-Sanderson.)
Robert Koch identified and described the bacillus causing tuberculosis, M. tuberculosis, on 24 March 1882. In 1905, he was awarded the Nobel Prize in Physiology or Medicine for this discovery.
Development of treatments
In Europe, rates of tuberculosis began to rise in the early 1600s to a peak level in the 1800s, when it caused nearly 25% of all deaths. In the 18th and 19th century, tuberculosis had become epidemic in Europe, showing a seasonal pattern. Tuberculosis caused widespread public concern in the 19th and early 20th centuries as the disease became common among the urban poor. In 1815, one in four deaths in England was due to "consumption". By 1918, TB still caused one in six deaths in France.
After TB was determined to be contagious, in the 1880s, it was put on a notifiable-disease list in Britain. Campaigns started to stop people from spitting in public places, and the infected poor were "encouraged" to enter sanatoria that resembled prisons. The sanatoria for the middle and upper classes offered excellent care and constant medical attention. What later became known as the Alexandra Hospital for Children with Hip Disease (tuberculous arthritis) was opened in London in 1867. Whatever the benefits of the "fresh air" and labor in the sanatoria, even under the best conditions, 50% of those who entered died within five years (c. 1916).
Robert Koch did not believe the cattle and human tuberculosis diseases were similar, which delayed the recognition of infected milk as a source of infection. During the first half of the 1900s, the risk of transmission from this source was dramatically reduced after the application of the pasteurization process. Koch announced a glycerine extract of the tubercle bacilli as a "remedy" for tuberculosis in 1890, calling it "tuberculin". Although it was not effective, it was later successfully adapted as a screening test for the presence of pre-symptomatic tuberculosis. World Tuberculosis Day is marked on 24 March each year, the anniversary of Koch's original scientific announcement. When the Medical Research Council formed in Britain in 1913, it initially focused on tuberculosis research.
Albert Calmette and Camille Guérin achieved the first genuine success in immunization against tuberculosis in 1906, using attenuated bovine-strain tuberculosis. It was called bacille Calmette–Guérin (BCG). The BCG vaccine was first used on humans in 1921 in France, but achieved widespread acceptance in the US, Great Britain, and Germany only after World War II.
By the 1950s mortality in Europe had decreased about 90%. Improvements in sanitation, vaccination, and other public-health measures began significantly reducing rates of tuberculosis even before the arrival of streptomycin and other antibiotics, although the disease remained a significant threat. In 1946, the development of the antibiotic streptomycin made effective treatment and cure of TB a reality. Prior to the introduction of this medication, the only treatment was surgical intervention, including the "pneumothorax technique", which involved collapsing an infected lung to "rest" it and to allow tuberculous lesions to heal.
Current reemergence
Because of the emergence of multidrug-resistant tuberculosis (MDR-TB), surgery has been re-introduced for certain cases of TB infections. It involves the removal of infected chest cavities ("bullae") in the lungs to reduce the number of bacteria and to increase exposure of the remaining bacteria to antibiotics in the bloodstream. Hopes of eliminating TB ended with the rise of drug-resistant strains in the 1980s. The subsequent resurgence of tuberculosis resulted in the declaration of a global health emergency by the World Health Organization (WHO) in 1993.
Signs and symptoms
There is a popular misconception that tuberculosis is purely a disease of the lungs that manifests as coughing. Tuberculosis may infect many organs, even though it most commonly occurs in the lungs (known as pulmonary tuberculosis). Extrapulmonary TB occurs when tuberculosis develops outside of the lungs, although extrapulmonary TB may coexist with pulmonary TB.
General signs and symptoms include fever, chills, night sweats, loss of appetite, weight loss, and fatigue. Significant nail clubbing may also occur.
Pulmonary
If a tuberculosis infection does become active, it most commonly involves the lungs (in about 90% of cases). Symptoms may include chest pain and a prolonged cough producing sputum. About 25% of people may not have any symptoms (i.e., they remain asymptomatic). Occasionally, people may cough up blood in small amounts, and in very rare cases, the infection may erode into the pulmonary artery or a Rasmussen aneurysm, resulting in massive bleeding. Tuberculosis may become a chronic illness and cause extensive scarring in the upper lobes of the lungs. The upper lung lobes are more frequently affected by tuberculosis than the lower ones. The reason for this difference is not clear. It may be due to either better air flow, or poor lymph drainage within the upper lungs.
Extrapulmonary
In 15–20% of active cases, the infection spreads outside the lungs, causing other kinds of TB. These are collectively denoted as extrapulmonary tuberculosis. Extrapulmonary TB occurs more commonly in people with a weakened immune system and young children. In those with HIV, this occurs in more than 50% of cases. Notable extrapulmonary infection sites include the pleura (in tuberculous pleurisy), the central nervous system (in tuberculous meningitis), the lymphatic system (in scrofula of the neck), the genitourinary system (in urogenital tuberculosis), and the bones and joints (in Pott disease of the spine), among others. A potentially more serious, widespread form of TB is called "disseminated tuberculosis"; it is also known as miliary tuberculosis. Miliary TB currently makes up about 10% of extrapulmonary cases.
Causes
Mycobacteria
The main cause of TB is Mycobacterium tuberculosis (MTB), a small, aerobic, nonmotile bacillus. The high lipid content of this pathogen accounts for many of its unique clinical characteristics. It divides every 16 to 20 hours, which is an extremely slow rate compared with other bacteria, which usually divide in less than an hour. Mycobacteria have an outer membrane lipid bilayer. If a Gram stain is performed, MTB either stains very weakly "Gram-positive" or does not retain dye as a result of the high lipid and mycolic acid content of its cell wall. MTB can withstand weak disinfectants and survive in a dry state for weeks. In nature, the bacterium can grow only within the cells of a host organism, but M. tuberculosis can be cultured in the laboratory.
Using histological stains on expectorated samples from phlegm (also called sputum), scientists can identify MTB under a microscope. Since MTB retains certain stains even after being treated with acidic solution, it is classified as an acid-fast bacillus. The most common acid-fast staining techniques are the Ziehl–Neelsen stain and the Kinyoun stain, which dye acid-fast bacilli a bright red that stands out against a blue background. Auramine-rhodamine staining and fluorescence microscopy are also used.
The M. tuberculosis complex (MTBC) includes four other TB-causing mycobacteria: M. bovis, M. africanum, M. canettii, and M. microti. M. africanum is not widespread, but it is a significant cause of tuberculosis in parts of Africa. M. bovis was once a common cause of tuberculosis, but the introduction of pasteurized milk has almost eliminated this as a public health problem in developed countries. M. canettii is rare and seems to be limited to the Horn of Africa, although a few cases have been seen in African emigrants. M. microti is also rare and is seen almost only in immunodeficient people, although its prevalence may be significantly underestimated.
Other known pathogenic mycobacteria include M. leprae, M. avium, and M. kansasii. The latter two species are classified as "nontuberculous mycobacteria" (NTM) or atypical mycobacteria. NTM cause neither TB nor leprosy, but they do cause lung diseases that resemble TB.
Transmission
When people with active pulmonary TB cough, sneeze, speak, sing, or spit, they expel infectious aerosol droplets 0.5 to 5.0 μm in diameter. A single sneeze can release up to 40,000 droplets. Each one of these droplets may transmit the disease, since the infectious dose of tuberculosis is very small (the inhalation of fewer than 10 bacteria may cause an infection).
Risk of transmission
People with prolonged, frequent, or close contact with people with TB are at particularly high risk of becoming infected, with an estimated 22% infection rate. A person with active but untreated tuberculosis may infect 10–15 (or more) other people per year. Transmission can occur only from people with active TB – those with latent infection are not thought to be contagious. The probability of transmission from one person to another depends upon several factors, including the number of infectious droplets expelled by the carrier, the effectiveness of ventilation, the duration of exposure, the virulence of the M. tuberculosis strain, the level of immunity in the uninfected person, and others.
The cascade of person-to-person spread can be circumvented by segregating those with active ("overt") TB and putting them on anti-TB drug regimens. After about two weeks of effective treatment, subjects with nonresistant active infections generally do not remain contagious to others. If someone does become infected, it typically takes three to four weeks before the newly infected person becomes infectious enough to transmit the disease to others.
Risk factors
A number of factors make individuals more susceptible to TB infection or disease.
Active disease risk
The most important risk factor globally for developing active TB is concurrent HIV infection; 13% of those with TB are also infected with HIV. This is a particular problem in sub-Saharan Africa, where HIV infection rates are high. Of those without HIV infection who are infected with tuberculosis, about 5–10% develop active disease during their lifetimes; in contrast, 30% of those co-infected with HIV develop the active disease.
Use of certain medications, such as corticosteroids and infliximab (an anti-αTNF monoclonal antibody), is another important risk factor, especially in the developed world.
Other risk factors include: alcoholism, diabetes mellitus (3-fold increased risk), silicosis (30-fold increased risk), tobacco smoking (2-fold increased risk), indoor air pollution, malnutrition, young age, recently acquired TB infection, recreational drug use, severe kidney disease, low body weight, organ transplant, head and neck cancer, and genetic susceptibility (the overall importance of genetic risk factors remains undefined).
Infection susceptibility
Tobacco smoking increases the risk of infections (in addition to increasing the risk of active disease and death). Additional factors increasing infection susceptibility include young age.
Pathogenesis
About 90% of those infected with M. tuberculosis have asymptomatic, latent TB infections (sometimes called LTBI), with only a 10% lifetime chance that the latent infection will progress to overt, active tuberculous disease. In those with HIV, the risk of developing active TB increases to nearly 10% a year. If effective treatment is not given, the death rate for active TB cases is up to 66%.
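To illustrate how an annual risk of roughly 10% compounds over time compared with a 10% lifetime risk, the short calculation below assumes, purely for the sake of the example, a constant and independent annual risk; it is a back-of-the-envelope sketch, not an epidemiological model.

```python
# Cumulative probability of progression after n years, assuming a constant
# annual risk p and independence between years (a simplification for illustration).
p = 0.10  # ~10% per year, as cited in the text for HIV co-infection
for years in (1, 5, 10):
    cumulative = 1 - (1 - p) ** years
    print(f"{years:>2} years: {cumulative:.0%}")
# 1 years: 10%,  5 years: 41%,  10 years: 65%
```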
TB infection begins when the mycobacteria reach the alveolar air sacs of the lungs, where they invade and replicate within endosomes of alveolar macrophages. Macrophages identify the bacterium as foreign and attempt to eliminate it by phagocytosis. During this process, the bacterium is enveloped by the macrophage and stored temporarily in a membrane-bound vesicle called a phagosome. The phagosome then combines with a lysosome to create a phagolysosome. In the phagolysosome, the cell attempts to use reactive oxygen species and acid to kill the bacterium. However, M. tuberculosis has a thick, waxy mycolic acid capsule that protects it from these toxic substances. M. tuberculosis is able to reproduce inside the macrophage and will eventually kill the immune cell.
The primary site of infection in the lungs, known as the Ghon focus, is generally located in either the upper part of the lower lobe, or the lower part of the upper lobe. Tuberculosis of the lungs may also occur via infection from the blood stream. This is known as a Simon focus and is typically found in the top of the lung. This hematogenous transmission can also spread infection to more distant sites, such as peripheral lymph nodes, the kidneys, the brain, and the bones. All parts of the body can be affected by the disease, though for unknown reasons it rarely affects the heart, skeletal muscles, pancreas, or thyroid.
Tuberculosis is classified as one of the granulomatous inflammatory diseases. Macrophages, epithelioid cells, T lymphocytes, B lymphocytes, and fibroblasts aggregate to form granulomas, with lymphocytes surrounding the infected macrophages. When other macrophages attack the infected macrophage, they fuse together to form a giant multinucleated cell in the alveolar lumen. The granuloma may prevent dissemination of the mycobacteria and provide a local environment for interaction of cells of the immune system.
However, more recent evidence suggests that the bacteria use the granulomas to avoid destruction by the host's immune system. Macrophages and dendritic cells in the granulomas are unable to present antigen to lymphocytes; thus the immune response is suppressed. Bacteria inside the granuloma can become dormant, resulting in latent infection. Another feature of the granulomas is the development of abnormal cell death (necrosis) in the center of tubercles. To the naked eye, this has the texture of soft, white cheese and is termed caseous necrosis.
If TB bacteria gain entry to the blood stream from an area of damaged tissue, they can spread throughout the body and set up many foci of infection, all appearing as tiny, white tubercles in the tissues. This severe form of TB disease, most common in young children and those with HIV, is called miliary tuberculosis. People with this disseminated TB have a high fatality rate even with treatment (about 30%).
In many people, the infection waxes and wanes. Tissue destruction and necrosis are often balanced by healing and fibrosis. Affected tissue is replaced by scarring and cavities filled with caseous necrotic material. During active disease, some of these cavities are joined to the air passages (bronchi) and this material can be coughed up. It contains living bacteria and thus can spread the infection. Treatment with appropriate antibiotics kills bacteria and allows healing to take place. Upon cure, affected areas are eventually replaced by scar tissue.
Diagnosis
Active tuberculosis
Diagnosing active tuberculosis based only on signs and symptoms is difficult, as is diagnosing the disease in those who have a weakened immune system. A diagnosis of TB should, however, be considered in those with signs of lung disease or constitutional symptoms lasting longer than two weeks. A chest X-ray and multiple sputum cultures for acid-fast bacilli are typically part of the initial evaluation. Interferon-γ release assays (IGRA) and tuberculin skin tests are of little use in most of the developing world. IGRA have similar limitations in those with HIV.
A definitive diagnosis of TB is made by identifying M. tuberculosis in a clinical sample (e.g., sputum, pus, or a tissue biopsy). However, the difficult culture process for this slow-growing organism can take two to six weeks for blood or sputum culture. Thus, treatment is often begun before cultures are confirmed.
Nucleic acid amplification tests and adenosine deaminase testing may allow rapid diagnosis of TB. Blood tests to detect antibodies are not specific or sensitive, so they are not recommended.
Latent tuberculosis
The Mantoux tuberculin skin test is often used to screen people at high risk for TB. Those who have been previously immunized with the Bacille Calmette-Guerin vaccine may have a false-positive test result. The test may be falsely negative in those with sarcoidosis, Hodgkin's lymphoma, malnutrition, and most notably, active tuberculosis. Interferon gamma release assays, on a blood sample, are recommended in those who are positive to the Mantoux test. These are not affected by immunization or most environmental mycobacteria, so they generate fewer false-positive results. However, they are affected by M. szulgai, M. marinum, and M. kansasii. IGRAs may increase sensitivity when used in addition to the skin test, but may be less sensitive than the skin test when used alone.
The US Preventive Services Task Force (USPSTF) has recommended screening people who are at high risk for latent tuberculosis with either tuberculin skin tests or interferon-gamma release assays. While some have recommended testing health care workers, evidence of benefit for this is poor. The Centers for Disease Control and Prevention (CDC) stopped recommending yearly testing of health care workers without known exposure in 2019.
Prevention
Tuberculosis prevention and control efforts rely primarily on the vaccination of infants and the detection and appropriate treatment of active cases. The World Health Organization (WHO) has achieved some success with improved treatment regimens, and a small decrease in case numbers. Some countries have legislation to involuntarily detain or examine those suspected to have tuberculosis, or involuntarily treat them if infected.
Vaccines
The only available vaccine is bacillus Calmette-Guérin (BCG). In children it decreases the risk of getting the infection by 20% and the risk of infection turning into active disease by nearly 60%.
It is the most widely used vaccine worldwide, with more than 90% of all children being vaccinated. The immunity it induces decreases after about ten years. As tuberculosis is uncommon in most of Canada, Western Europe, and the United States, BCG is administered to only those people at high risk. Part of the reasoning against the use of the vaccine is that it makes the tuberculin skin test falsely positive, reducing the test's usefulness as a screening tool. Several vaccines are being developed.
Intradermal MVA85A vaccine in addition to BCG injection is not effective in preventing tuberculosis.
Public health
Public health campaigns that focused on overcrowding, public spitting, and regular sanitation (including hand washing) during the 1800s helped either to interrupt or to slow spread. Combined with contact tracing, isolation, and treatment, these measures dramatically curbed the transmission of both tuberculosis and other airborne diseases, leading to the elimination of tuberculosis as a major public health issue in most developed economies. Other risk factors that worsened TB spread, such as malnutrition, were also ameliorated, but since the emergence of HIV a new population of immunocompromised individuals has been available for TB to infect.
Source control in the US
During the HIV/AIDS epidemic in the US, up to 35% of those affected by TB were also infected by HIV. Handling of TB-infected patients in US hospitals was known to create airborne TB that could infect others, especially in unventilated spaces.
Multiple US agencies rolled out new public health rules as a result of the TB spread: the CDC brought in new guidelines mandating HEPA filters and HEPA respirators, NIOSH pushed through new 42 CFR 84 respirator regulations in 1995 (like the N95), and OSHA created a proposed rule for TB in 1997, a result of pressure from groups like the Labor Coalition to Fight TB in the Workplace.
However, in 2003, OSHA dropped their proposed TB rules, citing a decline of TB in the US, and public disapproval.
Worldwide campaigns
The World Health Organization (WHO) declared TB a "global health emergency" in 1993, and in 2006, the Stop TB Partnership developed a Global Plan to Stop Tuberculosis that aimed to save 14 million lives between its launch and 2015. A number of targets they set were not achieved by 2015, mostly due to the increase in HIV-associated tuberculosis and the emergence of multiple drug-resistant tuberculosis. A tuberculosis classification system developed by the American Thoracic Society is used primarily in public health programs. In 2015, the WHO launched the End TB Strategy to reduce deaths by 95% and incidence by 90% before 2035. The goal of tuberculosis elimination is being hampered by the lack of rapid testing, short and effective treatment courses, and completely effective vaccines.
The benefits and risks of giving anti-tubercular drugs to those exposed to MDR-TB are unclear. Making HAART therapy available to HIV-positive individuals significantly reduces the risk of progression to an active TB infection by up to 90% and can mitigate the spread through this population.
Management
Treatment of TB uses antibiotics to kill the bacteria. Effective TB treatment is difficult, due to the unusual structure and chemical composition of the mycobacterial cell wall, which hinders the entry of drugs and makes many antibiotics ineffective.
Active TB is best treated with combinations of several antibiotics to reduce the risk of the bacteria developing antibiotic resistance. The routine use of rifabutin instead of rifampicin in HIV-positive people with tuberculosis is of unclear benefit.
Acetylsalicylic acid (aspirin) at a dose of 100 mg per day has been shown to improve clinical signs and symptoms, reduce cavitary lesions, lower inflammatory markers, and increase the rate of sputum-negative conversion in patients with pulmonary tuberculosis.
Latent TB
Latent TB is treated with either isoniazid or rifampin alone, or a combination of isoniazid with either rifampicin or rifapentine.
The treatment takes three to nine months depending on the medications used. People with latent infections are treated to prevent them from progressing to active TB disease later in life.
Education or counselling may improve the latent tuberculosis treatment completion rates.
New onset
The recommended treatment of new-onset pulmonary tuberculosis is six months of a combination of antibiotics containing rifampicin, isoniazid, pyrazinamide, and ethambutol for the first two months, and only rifampicin and isoniazid for the last four months. Where resistance to isoniazid is high, ethambutol may be added for the last four months as an alternative. Treatment with anti-TB drugs for at least six months results in higher success rates than treatment for less than six months, even though the difference is small. A shorter treatment regimen may be recommended for those with compliance issues. There is otherwise no evidence to support shorter anti-tuberculosis treatment regimens when compared to a six-month treatment regimen. However, results presented in 2020 from an international, randomized, controlled clinical trial indicate that a four-month daily treatment regimen containing high-dose, or "optimized", rifapentine with moxifloxacin (2PHZM/2PHM) is as safe and effective as the existing standard six-month daily regimen at curing drug-susceptible tuberculosis (TB) disease.
Recurrent disease
If tuberculosis recurs, testing to determine which antibiotics it is sensitive to is important before determining treatment. If multiple drug-resistant TB (MDR-TB) is detected, treatment with at least four effective antibiotics for 18 to 24 months is recommended.
Medication administration
Directly observed therapy, i.e., having a health care provider watch the person take their medications, is recommended by the World Health Organization (WHO) in an effort to reduce the number of people not appropriately taking antibiotics. The evidence to support this practice over people simply taking their medications independently is of poor quality. There is no strong evidence indicating that directly observed therapy improves the number of people who were cured or the number of people who complete their medicine. Moderate quality evidence suggests that there is also no difference if people are observed at home versus at a clinic, or by a family member versus a health care worker.
Methods to remind people of the importance of treatment and appointments may result in a small but important improvement. There is not enough evidence to show that intermittent rifampicin-containing therapy given two to three times a week is as effective as a daily dose regimen in improving cure rates and reducing relapse rates. There is also not enough evidence on the effectiveness of intermittent twice- or thrice-weekly short-course regimens compared with daily dosing in treating children with tuberculosis.
Medication resistance
Primary resistance occurs when a person becomes infected with a resistant strain of TB. A person with fully susceptible MTB may develop secondary (acquired) resistance during therapy because of inadequate treatment, not taking the prescribed regimen appropriately (lack of compliance), or using low-quality medication. Drug-resistant TB is a serious public health issue in many developing countries, as its treatment is longer and requires more expensive drugs. MDR-TB is defined as resistance to the two most effective first-line TB drugs: rifampicin and isoniazid. Extensively drug-resistant TB is also resistant to three or more of the six classes of second-line drugs. Totally drug-resistant TB is resistant to all currently used drugs. It was first observed in 2003 in Italy, but not widely reported until 2012, and has also been found in Iran and India. Linezolid has shown some efficacy in treating XDR-TB, but side effects and discontinuation of the medication were common. Bedaquiline is tentatively supported for use in multiple drug-resistant TB.
XDR-TB is a term sometimes used to define extensively resistant TB, and constitutes one in ten cases of MDR-TB. Cases of XDR TB have been identified in more than 90% of countries.
For those with known rifampicin or MDR-TB, molecular tests such as the Genotype MTBDRsl Assay (performed on culture isolates or smear positive specimens) may be useful to detect second-line anti-tubercular drug resistance.
Prognosis
Progression from TB infection to overt TB disease occurs when the bacilli overcome the immune system defenses and begin to multiply. In primary TB disease (some 1–5% of cases), this occurs soon after the initial infection. However, in the majority of cases, a latent infection occurs with no obvious symptoms. These dormant bacilli produce active tuberculosis in 5–10% of these latent cases, often many years after infection.
The risk of reactivation increases with immunosuppression, such as that caused by infection with HIV. In people coinfected with M. tuberculosis and HIV, the risk of reactivation increases to 10% per year. Studies using DNA fingerprinting of M. tuberculosis strains have shown reinfection contributes more substantially to recurrent TB than previously thought, with estimates that it might account for more than 50% of reactivated cases in areas where TB is common. The chance of death from a case of tuberculosis is about 4%, down from 8% in 1995.
In people with smear-positive pulmonary TB (without HIV co-infection), after 5 years without treatment, 50–60% die while 20–25% achieve spontaneous resolution (cure). TB is almost always fatal in those with untreated HIV co-infection and death rates are increased even with antiretroviral treatment of HIV.
Epidemiology
Roughly one-quarter of the world's population has been infected with M. tuberculosis, with new infections occurring in about 1% of the population each year. However, most infections with M. tuberculosis do not cause disease, and 90–95% of infections remain asymptomatic. In 2012, an estimated 8.6 million chronic cases were active. In 2010, 8.8 million new cases of tuberculosis were diagnosed, and 1.20–1.45 million deaths occurred (most of these occurring in developing countries). Of these, about 0.35 million occur in those also infected with HIV. In 2018, tuberculosis was the leading cause of death worldwide from a single infectious agent. The total number of tuberculosis cases has been decreasing since 2005, while new cases have decreased since 2002.
Tuberculosis incidence is seasonal, with peaks occurring every spring and summer. The reasons for this are unclear, but may be related to vitamin D deficiency during the winter. There are also studies linking tuberculosis to different weather conditions like low temperature, low humidity and low rainfall. It has been suggested that tuberculosis incidence rates may be connected to climate change.
At-risk groups
Tuberculosis is closely linked to both overcrowding and malnutrition, making it one of the principal diseases of poverty. Those at high risk thus include: people who inject illicit drugs, inhabitants and employees of locales where vulnerable people gather (e.g., prisons and homeless shelters), medically underprivileged and resource-poor communities, high-risk ethnic minorities, children in close contact with high-risk category patients, and health-care providers serving these patients.
The rate of tuberculosis varies with age. In Africa, it primarily affects adolescents and young adults. However, in countries where incidence rates have declined dramatically (such as the United States), tuberculosis is mainly a disease of the elderly and immunocompromised (risk factors are listed above). Worldwide, 22 "high-burden" states or countries together experience 80% of cases as well as 83% of deaths.
In Canada and Australia, tuberculosis is many times more common among the Indigenous peoples, especially in remote areas. Factors contributing to this include higher prevalence of predisposing health conditions and behaviours, and overcrowding and poverty. In some Canadian Indigenous groups, genetic susceptibility may play a role.
Socioeconomic status (SES) strongly affects TB risk. People of low SES are both more likely to contract TB and to be more severely affected by the disease. Those with low SES are more likely to be affected by risk factors for developing TB (e.g., malnutrition, indoor air pollution, HIV co-infection, etc.), and are additionally more likely to be exposed to crowded and poorly ventilated spaces. Inadequate healthcare also means that people with active disease who facilitate spread are not diagnosed and treated promptly; sick people thus remain in the infectious state and (continue to) spread the infection.
Geographical epidemiology
The distribution of tuberculosis is not uniform across the globe; about 80% of the population in many African, Caribbean, South Asian, and eastern European countries test positive in tuberculin tests, while only 5–10% of the U.S. population test positive. Hopes of totally controlling the disease have been dramatically dampened because of many factors, including the difficulty of developing an effective vaccine, the expensive and time-consuming diagnostic process, the necessity of many months of treatment, the increase in HIV-associated tuberculosis, and the emergence of drug-resistant cases in the 1980s.
In developed countries, tuberculosis is less common and is found mainly in urban areas. In Europe, deaths from TB fell from 500 out of 100,000 in 1850 to 50 out of 100,000 by 1950. Improvements in public health were reducing tuberculosis even before the arrival of antibiotics, although the disease remained a significant threat to public health, such that when the Medical Research Council was formed in Britain in 1913 its initial focus was tuberculosis research.
In 2010, rates per 100,000 people in different areas of the world were: globally 178, Africa 332, the Americas 36, Eastern Mediterranean 173, Europe 63, Southeast Asia 278, and Western Pacific 139.
In 2023, tuberculosis overtook COVID-19 as the leading cause of infectious disease-related deaths globally, according to a World Health Organization report. Around 8.2 million people were newly diagnosed with TB in 2023, allowing them access to treatment, a record high since WHO's tracking began in 1995 and an increase from 7.5 million cases in 2022. The report highlights ongoing obstacles in combating TB, including severe funding shortages that hinder efforts toward eradication. Although TB-related deaths decreased slightly to 1.25 million in 2023 from 1.32 million in 2022, the overall number of new cases rose marginally to an estimated 10.8 million.
Russia
Russia has achieved particularly dramatic progress, with a decline in its TB mortality rate from 61.9 per 100,000 in 1965 to 2.7 per 100,000 in 1993; however, the mortality rate increased to 24 per 100,000 in 2005 and then fell back to 11 per 100,000 by 2015.
China
China has achieved particularly dramatic progress, with about an 80% reduction in its TB mortality rate between 1990 and 2010. The number of new cases has declined by 17% between 2004 and 2014.
Africa
In 2007, the country with the highest estimated incidence rate of TB was Eswatini, with 1,200 cases per 100,000 people. In 2017, the country with the highest estimated incidence rate relative to its population was Lesotho, with 665 cases per 100,000 people.
In South Africa, 54,200 people died in 2022 from TB. The incidence rate was 468 per 100,000 people; in 2015, this was 988 per 100,000. The total incidence was 280,000 in 2022; in 2015, this was 552,000.
India
As of 2017, India had the largest total incidence, with an estimated 2,740,000 cases. According to the World Health Organization (WHO), between 2000 and 2015, India's estimated mortality rate dropped from 55 to 36 per 100,000 population per year, with an estimated 480,000 people dying of TB in 2015. In India, a major proportion of tuberculosis patients are treated by private partners and private hospitals. Evidence indicates that the national tuberculosis survey does not represent the number of cases that are diagnosed and recorded by private clinics and hospitals in India.
North America
In Canada, tuberculosis was endemic in some rural areas as of 1998. The tuberculosis case rate in Canada in 2021 was 4.8 per 100,000 persons. The rates were highest among Inuit (135.1 per 100,000), First Nations (16.1 per 100,000) and people born outside of Canada (12.3 per 100,000).
In the United States, Native Americans have a fivefold greater mortality from TB, and racial and ethnic minorities accounted for 88% of all reported TB cases. The overall tuberculosis case rate in the United States was 2.9 per 100,000 persons in 2023, representing a 16% increase in cases compared to 2022.
In 2024, Long Beach, California declared a public health emergency in response to a local outbreak of TB.
Western Europe
In 2017, in the United Kingdom, the national average was 9 per 100,000 and the highest incidence rates in Western Europe were 20 per 100,000 in Portugal.
Society and culture
Names
Tuberculosis has been known by many names, from the technical to the familiar. Phthisis (φθίσις) is the Greek word for consumption, an old term for pulmonary tuberculosis; around 460 BCE, Hippocrates described phthisis as a disease of dry seasons. The abbreviation TB is short for tubercle bacillus. Consumption was the most common nineteenth-century English word for the disease, and was also in use well into the twentieth century. The Latin root con- meaning 'completely' is linked to sumere meaning 'to take up from under'. In The Life and Death of Mr Badman by John Bunyan, the author calls consumption "the captain of all these men of death." "Great white plague" has also been used.
Art and literature
Tuberculosis was for centuries associated with poetic and artistic qualities among those infected, and was also known as "the romantic disease". Major artistic figures such as the poets John Keats, Percy Bysshe Shelley, and Edgar Allan Poe, the composer Frédéric Chopin, the playwright Anton Chekhov, the novelists Franz Kafka, Katherine Mansfield, Charlotte Brontë, Fyodor Dostoevsky, Thomas Mann, W. Somerset Maugham, George Orwell, and Robert Louis Stevenson, and the artists Alice Neel, Jean-Antoine Watteau, Elizabeth Siddal, Marie Bashkirtseff, Edvard Munch, Aubrey Beardsley and Amedeo Modigliani either had the disease or were surrounded by people who did. A widespread belief was that tuberculosis assisted artistic talent. Physical mechanisms proposed for this effect included the slight fever and toxaemia that it caused, allegedly helping them to see life more clearly and to act decisively.
Tuberculosis formed an often-reused theme in literature, as in Thomas Mann's The Magic Mountain, set in a sanatorium; in music, as in Van Morrison's song "T.B. Sheets"; in opera, as in Puccini's La bohème and Verdi's La Traviata; in art, as in Munch's painting of his ill sister; and in film, such as the 1945 The Bells of St. Mary's starring Ingrid Bergman as a nun with tuberculosis.
Public health efforts
In 2014, the WHO adopted the "End TB" strategy, which aims to reduce TB incidence by 80% and TB deaths by 90% by 2030. The strategy contains a milestone to reduce TB incidence by 20% and TB deaths by 35% by 2020. However, by 2020 only a 9% reduction in incidence per population was achieved globally, with the European region achieving a 19% and the African region a 16% reduction. Similarly, the number of deaths only fell by 14%, missing the 2020 milestone of a 35% reduction, with some regions making better progress (a 31% reduction in Europe and 19% in Africa). Treatment, prevention, and funding milestones were likewise missed in 2020; for example, only 6.3 million people were started on TB prevention, short of the target of 30 million.
The World Health Organization (WHO), the Bill and Melinda Gates Foundation, and the U.S. government are subsidizing a fast-acting diagnostic tuberculosis test for use in low- and middle-income countries as of 2012. In addition to being fast-acting, the test can determine if there is resistance to the antibiotic rifampicin which may indicate multi-drug resistant tuberculosis and is accurate in those who are also infected with HIV. Many resource-poor places have access to only sputum microscopy.
India had the highest total number of TB cases worldwide in 2010, in part due to poor disease management within the private and public health care sector. Programs such as the Revised National Tuberculosis Control Program are working to reduce TB levels among people receiving public health care.
A 2014 EIU-healthcare report found a need to address apathy and urged increased funding. The report cites, among others, Lucica Ditui: "[TB] is like an orphan. It has been neglected even in countries with a high burden and often forgotten by donors and those investing in health interventions."
Slow progress has led to frustration, expressed by the executive director of the Global Fund to Fight AIDS, Tuberculosis and Malaria – Mark Dybul: "we have the tools to end TB as a pandemic and public health threat on the planet, but we are not doing it." Several international organizations are pushing for more transparency in treatment, and more countries are implementing mandatory reporting of cases to the government as of 2014, although adherence is often variable. Commercial treatment providers may at times overprescribe second-line drugs as well as supplementary treatment, promoting demands for further regulations.
The government of Brazil provides universal TB care, which reduces this problem. Conversely, falling rates of TB infection may not relate to the number of programs directed at reducing infection rates but may be tied to an increased level of education, income, and health of the population. Costs of the disease, as calculated by the World Bank in 2009 may exceed US$150 billion per year in "high burden" countries. Lack of progress eradicating the disease may also be due to lack of patient follow-up – as among the 250 million rural migrants in China.
There is insufficient data to show that active contact tracing helps to improve case detection rates for tuberculosis. Interventions such as house-to-house visits, educational leaflets, mass media strategies, and educational sessions may increase tuberculosis detection rates in the short term. There is no study that compares new methods of contact tracing, such as social network analysis, with existing contact tracing methods.
Stigma
Slow progress in preventing the disease may in part be due to stigma associated with TB. Stigma may be due to the fear of transmission from affected individuals. This stigma may additionally arise due to links between TB and poverty, and in Africa, AIDS. Such stigmatization may be both real and perceived; for example, in Ghana, individuals with TB are banned from attending public gatherings.
Stigma towards TB may result in delays in seeking treatment, lower treatment compliance, and family members keeping cause of death secret – allowing the disease to spread further. In contrast, in Russia stigma was associated with increased treatment compliance. TB stigma also affects socially marginalized individuals to a greater degree and varies between regions.
One way to decrease stigma may be through the promotion of "TB clubs", where those infected may share experiences and offer support, or through counseling. Some studies have shown TB education programs to be effective in decreasing stigma, and may thus be effective in increasing treatment adherence. Despite this, studies on the relationship between reduced stigma and mortality are lacking, and similar efforts to decrease stigma surrounding AIDS have been minimally effective. Some have claimed the stigma to be worse than the disease, and healthcare providers may unintentionally reinforce stigma, as those with TB are often perceived as difficult or otherwise undesirable. A greater understanding of the social and cultural dimensions of tuberculosis may also help with stigma reduction.
Research
The BCG vaccine has limitations, and research to develop new TB vaccines is ongoing. A number of potential candidates are currently in phase I and II clinical trials. Two main approaches are used to attempt to improve the efficacy of available vaccines. One approach involves adding a subunit vaccine to BCG, while the other strategy is attempting to create new and better live vaccines. MVA85A, an example of a subunit vaccine in trials in South Africa as of 2006, is based on a genetically modified vaccinia virus. Vaccines are hoped to play a significant role in treatment of both latent and active disease.
To encourage further discovery, researchers and policymakers are promoting new economic models of vaccine development as of 2006, including prizes, tax incentives, and advance market commitments. A number of groups, including the Stop TB Partnership, the South African Tuberculosis Vaccine Initiative, and the Aeras Global TB Vaccine Foundation, are involved with research. Among these, the Aeras Global TB Vaccine Foundation received a gift of more than $280 million (US) from the Bill and Melinda Gates Foundation to develop and license an improved vaccine against tuberculosis for use in high burden countries.
In 2012 a new medication regimen was approved in the US for multidrug-resistant tuberculosis, using bedaquiline as well as existing drugs. There were initial concerns about the safety of this drug, but later research on larger groups found that this regimen improved health outcomes. By 2017 the drug was used in at least 89 countries. Another new drug is delamanid, which was first approved by the European Medicines Agency in 2013 to be used in multidrug-resistant tuberculosis patients, and by 2017 was used in at least 54 countries.
Steroids add-on therapy has not shown any benefits for active pulmonary tuberculosis infection.
Other animals
Mycobacteria infect many different animals, including birds, fish, rodents, and reptiles. The subspecies Mycobacterium tuberculosis, though, is rarely present in wild animals. An effort to eradicate bovine tuberculosis caused by Mycobacterium bovis from the cattle and deer herds of New Zealand has been relatively successful. Efforts in Great Britain have been less successful.
Tuberculosis appears to be widespread among captive elephants in the US. It is believed that the animals originally acquired the disease from humans, a process called reverse zoonosis. Because the disease can spread through the air to infect both humans and other animals, it is a public health concern affecting circuses and zoos.
| Biology and health sciences | Illness and injury | null |
30654 | https://en.wikipedia.org/wiki/Triangle | Triangle | A triangle is a polygon with three corners and three sides, one of the basic shapes in geometry. The corners, also called vertices, are zero-dimensional points while the sides connecting them, also called edges, are one-dimensional line segments. A triangle has three internal angles, each one bounded by a pair of adjacent edges; the sum of angles of a triangle always equals a straight angle (180 degrees or π radians). The triangle is a plane figure and its interior is a planar region. Sometimes an arbitrary edge is chosen to be the base, in which case the opposite vertex is called the apex; the shortest segment between the base and apex is the height. The area of a triangle equals one-half the product of height and base length.
In Euclidean geometry, any two points determine a unique line segment situated within a unique straight line, and any three points that do not all lie on the same straight line determine a unique triangle situated within a unique flat plane. More generally, four points in three-dimensional Euclidean space determine a solid figure called tetrahedron.
In non-Euclidean geometries, three "straight" segments (having zero curvature) also determine a "triangle", for instance, a spherical triangle or hyperbolic triangle. A geodesic triangle is a region of a general two-dimensional surface enclosed by three sides that are straight relative to the surface (geodesics). A curvilinear triangle is a shape with three curved sides, for instance, a circular triangle with circular-arc sides. (This article is about straight-sided triangles in Euclidean geometry, except where otherwise noted.)
Triangles are classified into different types based on their angles and the lengths of their sides. Relations between angles and side lengths are a major focus of trigonometry. In particular, the sine, cosine, and tangent functions relate side lengths and angles in right triangles.
Definition, terminology, and types
A triangle is a figure consisting of three line segments, each of whose endpoints are connected. This forms a polygon with three sides and three angles. The terminology for categorizing triangles is more than two thousand years old, having been defined in Book One of Euclid's Elements. The names used for modern classification are either a direct transliteration of Euclid's Greek or their Latin translations.
Triangles have many types based on the length of the sides and the angles. A triangle whose sides are all the same length is an equilateral triangle, a triangle with two sides having the same length is an isosceles triangle, and a triangle with three different-length sides is a scalene triangle. A triangle in which one of the angles is a right angle is a right triangle, a triangle in which all of its angles are less than a right angle is an acute triangle, and a triangle in which one of its angles is greater than a right angle is an obtuse triangle. These definitions date back at least to Euclid.
Appearances
All types of triangles are commonly found in real life. In man-made construction, isosceles triangles may be found in the shape of gables and pediments, and the equilateral triangle can be found in the yield sign. The faces of the Great Pyramid of Giza are sometimes considered to be equilateral, but more accurate measurements show they are isosceles instead. Other appearances are in heraldic symbols as in the flag of Saint Lucia and flag of the Philippines.
Triangles also appear in three-dimensional objects. A polyhedron is a solid whose boundary is covered by flat polygons known as faces, sharp corners known as vertices, and line segments known as edges. Polyhedra in some cases can be classified by the shape of their faces. For example, when polyhedra have all equilateral triangles as their faces, they are known as deltahedra. Antiprisms have alternating triangles on their sides. Pyramids and bipyramids are polyhedra with polygonal bases and triangles for lateral faces; the triangles are isosceles whenever they are right pyramids and bipyramids. The Kleetope of a polyhedron is a new polyhedron made by replacing each face of the original with a pyramid, and so the faces of a Kleetope will be triangles. More generally, triangles can be found in higher dimensions, as in the generalized notion of triangles known as the simplex, and the polytopes with triangular facets known as the simplicial polytopes.
Properties
Points, lines, and circles associated with a triangle
Each triangle has many special points inside it, on its edges, or otherwise associated with it. They are constructed by finding three lines associated symmetrically with the three sides (or vertices) and then proving that the three lines meet in a single point. An important tool for proving the existence of these points is Ceva's theorem, which gives a criterion for determining when three such lines are concurrent. Similarly, lines associated with a triangle are often constructed by proving that three symmetrically constructed points are collinear; here Menelaus' theorem gives a useful general criterion. In this section, just a few of the most commonly encountered constructions are explained.
A perpendicular bisector of a side of a triangle is a straight line passing through the midpoint of the side and being perpendicular to it, forming a right angle with it. The three perpendicular bisectors meet in a single point, the triangle's circumcenter; this point is the center of the circumcircle, the circle passing through all three vertices. Thales' theorem implies that if the circumcenter is located on the side of the triangle, then the angle opposite that side is a right angle. If the circumcenter is located inside the triangle, then the triangle is acute; if the circumcenter is located outside the triangle, then the triangle is obtuse.
An altitude of a triangle is a straight line through a vertex and perpendicular to the opposite side. This opposite side is called the base of the altitude, and the point where the altitude intersects the base (or its extension) is called the foot of the altitude. The length of the altitude is the distance between the base and the vertex. The three altitudes intersect in a single point, called the orthocenter of the triangle. The orthocenter lies inside the triangle if and only if the triangle is acute.
An angle bisector of a triangle is a straight line through a vertex that cuts the corresponding angle in half. The three angle bisectors intersect in a single point, the incenter, which is the center of the triangle's incircle. The incircle is the circle that lies inside the triangle and touches all three sides. Its radius is called the inradius. There are three other important circles, the excircles; they lie outside the triangle and touch one side, as well as the extensions of the other two. The centers of the incircle and excircles form an orthocentric system. The midpoints of the three sides and the feet of the three altitudes all lie on a single circle, the triangle's nine-point circle. The remaining three points for which it is named are the midpoints of the portion of altitude between the vertices and the orthocenter. The radius of the nine-point circle is half that of the circumcircle. It touches the incircle (at the Feuerbach point) and the three excircles. The orthocenter, the center of the nine-point circle, the centroid, and the circumcenter all lie on a single line, known as Euler's line. The center of the nine-point circle lies at the midpoint between the orthocenter and the circumcenter, and the distance between the centroid and the circumcenter is half that between the centroid and the orthocenter. Generally, the incircle's center is not located on Euler's line.
A median of a triangle is a straight line through a vertex and the midpoint of the opposite side, and divides the triangle into two equal areas. The three medians intersect in a single point, the triangle's centroid or geometric barycenter. The centroid of a rigid triangular object (cut out of a thin sheet of uniform density) is also its center of mass: the object can be balanced on its centroid in a uniform gravitational field. The centroid cuts every median in the ratio 2:1, i.e. the distance between a vertex and the centroid is twice the distance between the centroid and the midpoint of the opposite side. If one reflects a median in the angle bisector that passes through the same vertex, one obtains a symmedian. The three symmedians intersect in a single point, the symmedian point of the triangle.
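The point constructions above reduce to short coordinate formulas. The following Python sketch (the function names are our own and purely illustrative, not from any particular library) computes the centroid and the circumcenter of a triangle from Cartesian vertex coordinates; the circumcenter formula is the standard intersection of perpendicular bisectors written out explicitly.

def centroid(A, B, C):
    # The centroid is the average of the three vertices.
    return ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

def circumcenter(A, B, C):
    # Intersection of the perpendicular bisectors (standard coordinate formula).
    ax, ay = A
    bx, by = B
    cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Example: a 3-4-5 right triangle; its circumcenter is the midpoint of the hypotenuse.
print(centroid((0, 0), (4, 0), (0, 3)))      # (1.333..., 1.0)
print(circumcenter((0, 0), (4, 0), (0, 3)))  # (2.0, 1.5)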
Angles
The sum of the measures of the interior angles of a triangle in Euclidean space is always 180 degrees. This fact is equivalent to Euclid's parallel postulate. This allows the determination of the measure of the third angle of any triangle, given the measure of two angles. An exterior angle of a triangle is an angle that is a linear pair (and hence supplementary) to an interior angle. The measure of an exterior angle of a triangle is equal to the sum of the measures of the two interior angles that are not adjacent to it; this is the exterior angle theorem. The sum of the measures of the three exterior angles (one for each vertex) of any triangle is 360 degrees, and indeed, this is true for any convex polygon, no matter how many sides it has.
The relations between the internal angles and the side lengths of a triangle give rise to trigonometric functions. The primary trigonometric functions are sine and cosine, as well as the other functions derived from them. They can be defined as ratios between two sides of a right triangle. In a scalene triangle, the trigonometric functions can be used to find the unknown measure of either a side or an internal angle; methods for doing so use the law of sines and the law of cosines.
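As an illustration of the preceding paragraph, the short Python sketch below (an illustrative helper of our own, not a reference implementation) solves the side-angle-side case: the law of cosines gives the third side and a second angle, and the angle sum gives the rest. The law of sines could equally be used once one side-angle pair is known.

import math

def solve_sas(a, b, gamma_deg):
    # Law of cosines: third side from two sides and the included angle.
    gamma = math.radians(gamma_deg)
    c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))
    # Law of cosines again for a remaining angle; this avoids the
    # acute/obtuse ambiguity that the law of sines alone would leave.
    alpha = math.degrees(math.acos((b * b + c * c - a * a) / (2 * b * c)))
    beta = 180.0 - gamma_deg - alpha  # the angles of a triangle sum to 180 degrees
    return c, alpha, beta

print(solve_sas(3, 4, 90))  # (5.0, 36.87..., 53.13...) for the 3-4-5 right triangle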
Any three angles that add to 180° can be the internal angles of a triangle. Infinitely many triangles have the same angles, since specifying the angles of a triangle does not determine its size. (A degenerate triangle, whose vertices are collinear, has internal angles of 0° and 180°; whether such a shape counts as a triangle is a matter of convention.) The conditions for three angles , , and , each of them between 0° and 180°, to be the angles of a triangle can also be stated using trigonometric functions. For example, a triangle with angles , , and exists if and only if
Similarity and congruence
Two triangles are said to be similar, if every angle of one triangle has the same measure as the corresponding angle in the other triangle. The corresponding sides of similar triangles have lengths that are in the same proportion, and this property is also sufficient to establish similarity.
Some basic theorems about similar triangles are:
If and only if one pair of internal angles of two triangles have the same measure as each other, and another pair also have the same measure as each other, the triangles are similar.
If and only if one pair of corresponding sides of two triangles are in the same proportion as another pair of corresponding sides, and their included angles have the same measure, then the triangles are similar. (The included angle for any two sides of a polygon is the internal angle between those two sides.)
If and only if three pairs of corresponding sides of two triangles are all in the same proportion, then the triangles are similar.
Two triangles that are congruent have exactly the same size and shape. All pairs of congruent triangles are also similar, but not all pairs of similar triangles are congruent. Given two congruent triangles, all pairs of corresponding interior angles are equal in measure, and all pairs of corresponding sides have the same length. This is a total of six equalities, but three are often sufficient to prove congruence.
Some individually necessary and sufficient conditions for a pair of triangles to be congruent are:
SAS Postulate: Two sides in a triangle have the same length as two sides in the other triangle, and the included angles have the same measure.
ASA: Two interior angles and the side between them in a triangle have the same measure and length, respectively, as those in the other triangle. (This is the basis of surveying by triangulation.)
SSS: Each side of a triangle has the same length as the corresponding side of the other triangle (a computational sketch follows this list).
AAS: Two angles and a corresponding (non-included) side in a triangle have the same measure and length, respectively, as those in the other triangle. (This is sometimes referred to as AAcorrS and then includes ASA above.)
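To make the SSS criteria concrete, here is a minimal Python sketch (the function names and the numerical tolerance are our own choices) that tests congruence and similarity from side lengths alone; it assumes the inputs are valid positive side lengths.

def sss_congruent(sides1, sides2, tol=1e-9):
    # Congruent by SSS: the sorted side lengths match.
    return all(abs(p - q) <= tol for p, q in zip(sorted(sides1), sorted(sides2)))

def sss_similar(sides1, sides2, tol=1e-9):
    # Similar by SSS: the sorted side lengths are in a common proportion.
    s1, s2 = sorted(sides1), sorted(sides2)
    k = s2[0] / s1[0]
    return all(abs(q - k * p) <= tol for p, q in zip(s1, s2))

print(sss_congruent((3, 4, 5), (5, 3, 4)))  # True
print(sss_similar((3, 4, 5), (6, 8, 10)))   # True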
Area
In the Euclidean plane, area is defined by comparison with a square of side length 1, which has area 1. There are several ways to calculate the area of an arbitrary triangle. One of the oldest and simplest is to take half the product of the length of one side b (the base) times the corresponding altitude h: T = bh/2.
This formula can be proven by cutting up the triangle and an identical copy into pieces and rearranging the pieces into the shape of a rectangle of base b and height h.
If two sides a and b and their included angle γ are known, then the altitude can be calculated using trigonometry, h = a sin γ, so the area of the triangle is: T = (1/2) a b sin γ.
Heron's formula, named after Heron of Alexandria, is a formula for finding the area of a triangle from the lengths of its sides a, b, c. Letting s = (a + b + c)/2 be the semiperimeter, the area is T = √(s(s − a)(s − b)(s − c)).
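A minimal Python sketch of Heron's formula, assuming the three lengths already satisfy the triangle inequality (the function name is our own):

import math

def heron_area(a, b, c):
    s = (a + b + c) / 2  # semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))  # 6.0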
Because the ratios between areas of shapes in the same plane are preserved by affine transformations, the relative areas of triangles in any affine plane can be defined without reference to a notion of distance or squares. In any affine space (including Euclidean planes), every triangle with the same base and oriented area has its apex (the third vertex) on a line parallel to the base, and their common area is half of that of a parallelogram with the same base whose opposite side lies on the parallel line. This affine approach was developed in Book 1 of Euclid's Elements.
Given affine coordinates (such as Cartesian coordinates) (x_A, y_A), (x_B, y_B), (x_C, y_C) for the vertices of a triangle, its relative oriented area can be calculated using the shoelace formula, T = (1/2) det [[x_A − x_C, y_A − y_C], [x_B − x_C, y_B − y_C]] = (1/2) (x_A(y_B − y_C) + x_B(y_C − y_A) + x_C(y_A − y_B)), where det denotes the matrix determinant.
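The shoelace formula translates directly into code. The following Python sketch (our own helper name) returns the oriented area, positive when the vertices are listed counterclockwise and negative when clockwise.

def oriented_area(A, B, C):
    # Shoelace / determinant form: half the cross product of two edge vectors.
    return 0.5 * ((B[0] - A[0]) * (C[1] - A[1]) - (C[0] - A[0]) * (B[1] - A[1]))

print(oriented_area((0, 0), (4, 0), (0, 3)))  # 6.0 (counterclockwise)
print(oriented_area((0, 0), (0, 3), (4, 0)))  # -6.0 (clockwise)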
Possible side lengths
The triangle inequality states that the sum of the lengths of any two sides of a triangle must be greater than or equal to the length of the third side. Conversely, some triangle with three given positive side lengths exists if and only if those side lengths satisfy the triangle inequality. The sum of two side lengths can equal the length of the third side only in the case of a degenerate triangle, one with collinear vertices.
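A direct computational check of the triangle inequality, written as a small Python sketch; treating the degenerate (collinear) case as invalid is our own convention here.

def is_valid_triangle(a, b, c):
    # Each side must be strictly shorter than the sum of the other two.
    return a + b > c and b + c > a and c + a > b

print(is_valid_triangle(3, 4, 5))  # True
print(is_valid_triangle(1, 2, 3))  # False (degenerate: collinear vertices)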
Rigidity
Unlike a rectangle, which may collapse into a parallelogram from pressure to one of its points, triangles are sturdy because specifying the lengths of all three sides determines the angles. Therefore, a triangle will not change shape unless its sides are bent or extended or broken or if its joints break; in essence, each of the three sides supports the other two. A rectangle, in contrast, is more dependent on the strength of its joints in a structural sense.
Triangles are strong in terms of rigidity, but while packed in a tessellating arrangement triangles are not as strong as hexagons under compression (hence the prevalence of hexagonal forms in nature). Tessellated triangles still maintain superior strength for cantilevering, however, which is why engineering makes use of tetrahedral trusses.
Triangulation
Triangulation means the partition of any planar object into a collection of triangles. For example, in polygon triangulation, a polygon is subdivided into multiple triangles that are attached edge-to-edge, with the property that their vertices coincide with the set of vertices of the polygon. In the case of a simple polygon with n sides, there are n − 2 triangles that are separated by n − 3 diagonals. Triangulation of a simple polygon has a relationship to the ear, a vertex connected by two other vertices, the diagonal between which lies entirely within the polygon. The two ears theorem states that every simple polygon that is not itself a triangle has at least two ears.
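For a convex polygon the simplest such triangulation is a fan from one vertex, which produces exactly n − 2 triangles. The Python sketch below illustrates only this convex case; general simple polygons need something like ear clipping, which is not shown.

def fan_triangulation(vertices):
    # Triangulate a CONVEX polygon given as a list of n vertices.
    # Returns the n - 2 triangles as triples of vertex indices.
    n = len(vertices)
    return [(0, i, i + 1) for i in range(1, n - 1)]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(fan_triangulation(square))  # [(0, 1, 2), (0, 2, 3)] -> 2 triangles for n = 4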
Location of a point
One way to identify locations of points in (or outside) a triangle is to place the triangle in an arbitrary location and orientation in the Cartesian plane, and to use Cartesian coordinates. While convenient for many purposes, this approach has the disadvantage of all points' coordinate values being dependent on the arbitrary placement in the plane.
Two systems avoid that feature, so that the coordinates of a point are not affected by moving the triangle, rotating it, or reflecting it as in a mirror, any of which gives a congruent triangle, or even by rescaling it to a similar triangle:
Trilinear coordinates specify the relative distances of a point from the sides, so that coordinates x : y : z indicate that the ratio of the distance of the point from the first side to its distance from the second side is x : y, and so on.
Barycentric coordinates of the form α : β : γ specify the point's location by the relative weights that would have to be put on the three vertices in order to balance the otherwise weightless triangle on the given point (see the sketch below).
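As a small illustration of barycentric coordinates, the Python sketch below (our own helper, not a standard API) converts a weight triple to a Cartesian point; equal weights recover the centroid.

def barycentric_to_cartesian(weights, A, B, C):
    # Weighted average of the vertices, normalized by the total weight.
    wa, wb, wc = weights
    total = wa + wb + wc
    x = (wa * A[0] + wb * B[0] + wc * C[0]) / total
    y = (wa * A[1] + wb * B[1] + wc * C[1]) / total
    return (x, y)

print(barycentric_to_cartesian((1, 1, 1), (0, 0), (4, 0), (0, 3)))  # (1.333..., 1.0), the centroid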
Related figures
Figures inscribed in a triangle
As discussed above, every triangle has a unique inscribed circle (incircle) that is interior to the triangle and tangent to all three sides. Every triangle has a unique Steiner inellipse which is interior to the triangle and tangent at the midpoints of the sides. Marden's theorem shows how to find the foci of this ellipse. This ellipse has the greatest area of any ellipse tangent to all three sides of the triangle. The Mandart inellipse of a triangle is the ellipse inscribed within the triangle tangent to its sides at the contact points of its excircles. For any ellipse inscribed in a triangle ABC, let the foci be P and Q; then: PA·QA/(CA·AB) + PB·QB/(AB·BC) + PC·QC/(BC·CA) = 1.
From an interior point in a reference triangle, the nearest points on the three sides serve as the vertices of the pedal triangle of that point. If the interior point is the circumcenter of the reference triangle, the vertices of the pedal triangle are the midpoints of the reference triangle's sides, and so the pedal triangle is called the midpoint triangle or medial triangle. The midpoint triangle subdivides the reference triangle into four congruent triangles which are similar to the reference triangle.
The intouch triangle of a reference triangle has its vertices at the three points of tangency of the reference triangle's sides with its incircle. The extouch triangle of a reference triangle has its vertices at the points of tangency of the reference triangle's excircles with its sides (not extended).
Every acute triangle has three inscribed squares (squares in its interior such that all four of a square's vertices lie on a side of the triangle, so two of them lie on the same side and hence one side of the square coincides with part of a side of the triangle). In a right triangle two of the squares coincide and have a vertex at the triangle's right angle, so a right triangle has only two distinct inscribed squares. An obtuse triangle has only one inscribed square, with a side coinciding with part of the triangle's longest side. Within a given triangle, a longer common side is associated with a smaller inscribed square. If an inscribed square has a side of length q and the triangle has a side of length a, part of which coincides with a side of the square, then q, a, the altitude h from the side a, and the triangle's area T are related according to q = 2Ta/(a² + 2T) = ah/(a + h). The largest possible ratio of the area of the inscribed square to the area of the triangle is 1/2, which occurs when a² = 2T, q = a/2, and the altitude of the triangle from the base of length a is equal to a. The smallest possible ratio of the side of one inscribed square to the side of another in the same non-obtuse triangle is 2√2/3. Both of these extreme cases occur for the isosceles right triangle.
The Lemoine hexagon is a cyclic hexagon with vertices given by the six intersections of the sides of a triangle with the three lines that are parallel to the sides and that pass through its symmedian point. In either its simple form or its self-intersecting form, the Lemoine hexagon is interior to the triangle with two vertices on each side of the triangle.
Every convex polygon with area T can be inscribed in a triangle of area at most equal to 2T. Equality holds only if the polygon is a parallelogram.
Figures circumscribed about a triangle
The tangential triangle of a reference triangle (other than a right triangle) is the triangle whose sides are on the tangent lines to the reference triangle's circumcircle at its vertices.
As mentioned above, every triangle has a unique circumcircle, a circle passing through all three vertices, whose center is the intersection of the perpendicular bisectors of the triangle's sides. Furthermore, every triangle has a unique Steiner circumellipse, which passes through the triangle's vertices and has its center at the triangle's centroid. Of all ellipses going through the triangle's vertices, it has the smallest area.
The Kiepert hyperbola is the unique conic that passes through the triangle's three vertices, its centroid, and its circumcenter.
Of all triangles contained in a given convex polygon, one with maximal area can be found in linear time; its vertices may be chosen as three of the vertices of the given polygon.
Miscellaneous triangles
Circular triangles
A circular triangle is a triangle with circular arc edges. The edges of a circular triangle may be either convex (bending outward) or concave (bending inward). The intersection of three disks forms a circular triangle whose sides are all convex. An example of a circular triangle with three convex edges is a Reuleaux triangle, which can be made by intersecting three circles of equal size. The construction may be performed with a compass alone without needing a straightedge, by the Mohr–Mascheroni theorem. Alternatively, it can be constructed by rounding the sides of an equilateral triangle.
A special case of concave circular triangle can be seen in a pseudotriangle. A pseudotriangle is a simply-connected subset of the plane lying between three mutually tangent convex regions. Its sides are three smooth curved lines connecting their endpoints, called the cusp points. Any pseudotriangle can be partitioned into many pseudotriangles with the boundaries of convex disks and bitangent lines, a process known as pseudo-triangulation. For disks in a pseudotriangle, the partition gives pseudotriangles and bitangent lines. The convex hull of any pseudotriangle is a triangle.
Triangle in non-planar space
A non-planar triangle is a triangle that is not contained in a flat (Euclidean) plane. Such triangles arise in other spaces, such as hyperbolic space and spherical geometry. A triangle in hyperbolic space is called a hyperbolic triangle, and it can be obtained by drawing on a negatively curved surface, such as a saddle surface. Likewise, a triangle in spherical geometry is called a spherical triangle, and it can be obtained by drawing on a positively curved surface such as a sphere.
The triangles in both spaces have properties different from the triangles in Euclidean space. For example, as mentioned above, the internal angles of a triangle in Euclidean space always add up to 180°. However, the sum of the internal angles of a hyperbolic triangle is less than 180°, and for any spherical triangle, the sum is more than 180°. In particular, it is possible to draw a triangle on a sphere such that the measure of each of its internal angles equals 90°, adding up to a total of 270°. By Girard's theorem, the sum of the angles of a triangle on a sphere is 180° × (1 + 4f), where f is the fraction of the sphere's area enclosed by the triangle.
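Girard's relation is easy to check numerically. The Python sketch below (illustrative only, our own function name) returns the angle sum of a spherical triangle from the fraction of the sphere it covers; the octant triangle with three right angles covers one eighth of the sphere.

def spherical_angle_sum_deg(area_fraction):
    # Girard's theorem: angle sum = 180 degrees * (1 + 4 * f),
    # where f is the fraction of the sphere's surface enclosed by the triangle.
    return 180.0 * (1.0 + 4.0 * area_fraction)

print(spherical_angle_sum_deg(0.0))    # 180.0: tiny triangles approach the planar case
print(spherical_angle_sum_deg(1 / 8))  # 270.0: the octant triangle with three right angles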
In more general spaces, there are comparison theorems relating the properties of a triangle in the space to properties of a corresponding triangle in a model space like hyperbolic or elliptic space. For example, a CAT(k) space is characterized by such comparisons.
Fractal geometry
Fractal shapes based on triangles include the Sierpiński gasket and the Koch snowflake.
| Mathematics | Geometry and topology | null |
30659 | https://en.wikipedia.org/wiki/Triangulum | Triangulum | Triangulum is a small constellation in the northern sky. Its name is Latin for "triangle", derived from its three brightest stars, which form a long and narrow triangle. Known to the ancient Babylonians and Greeks, Triangulum was one of the 48 constellations listed by the 2nd century astronomer Ptolemy. The celestial cartographers Johann Bayer and John Flamsteed catalogued the constellation's stars, giving six of them Bayer designations.
The white stars Beta and Gamma Trianguli, of apparent magnitudes 3.00 and 4.00, respectively, form the base of the triangle and the yellow-white Alpha Trianguli, of magnitude 3.41, the apex. Iota Trianguli is a notable double star system, and there are three star systems with known planets located in Triangulum. The constellation contains several galaxies, the brightest and nearest of which is the Triangulum Galaxy or Messier 33—a member of the Local Group. The first quasar ever observed, 3C 48, also lies within the boundaries of Triangulum.
History and mythology
In the Babylonian star catalogues, Triangulum, together with Gamma Andromedae, formed the constellation known as "The Plough". It is notable as the first constellation presented on (and giving its name to) a pair of tablets containing canonical star lists that were compiled around 1000 BC, the MUL.APIN. The Plough was the first constellation of the "Way of Enlil"—that is, the northernmost quarter of the Sun's path, which corresponds to the 45 days on either side of summer solstice. Its first appearance in the pre-dawn sky (heliacal rising) in February marked the time to begin spring ploughing in Mesopotamia.
The Ancient Greeks called Triangulum Deltoton (Δελτωτόν), as the constellation resembled an upper-case Greek letter delta (Δ). It was transliterated by Roman writers, then later Latinised as Deltotum. Eratosthenes linked it with the Nile Delta, while the Roman writer Hyginus associated it with the triangular island of Sicily, formerly known as Trinacria due to its shape. It was also called Sicilia, because the Romans believed Ceres, patron goddess of Sicily, begged Jupiter to place the island in the heavens. Greek astronomers such as Hipparchos and Ptolemy called it Trigonon (Τρίγωνον), and later, it was Romanized as Trigonum. Other names referring to its shape include Tricuspis and Triquetrum. Alpha and Beta Trianguli were called Al Mīzān, which is Arabic for "The Scale Beam". In Chinese astronomy, Gamma Andromedae and neighbouring stars including Beta, Gamma and Delta Trianguli were called Teen Ta Tseang Keun (天大将军, "Heaven's great general"), representing honour in astrology and a great general in mythology.
Later, the 17th-century German celestial cartographer Johann Bayer called the constellation Triplicitas and Orbis terrarum tripertitus, for the three regions Europe, Asia, and Africa. Triangulus Septentrionalis was a name used to distinguish it from Triangulum Australe, the Southern Triangle. Polish astronomer Johannes Hevelius excised three faint stars—ι, 10 and 12 Trianguli—to form the new constellation of Triangulum Minus in his 1690 Firmamentum Sobiescianum, renaming the original as Triangulum Majus. The smaller constellation was not recognised by the International Astronomical Union (IAU) when the constellations were established in the 1920s.
Characteristics
A small constellation, Triangulum is bordered by Andromeda to the north and west, Pisces to the west and south, Aries to the south, and Perseus to the east. The centre of the constellation lies halfway between Gamma Andromedae and Alpha Arietis. The three-letter abbreviation for the constellation, as adopted by the IAU in 1922, is "Tri". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined as a polygon of 14 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 25.60° and 37.35°. Covering 132 square degrees and 0.320% of the night sky, Triangulum ranks 78th of the 88 constellations in size.
Features
Bayer catalogued five stars in the constellation, giving them the Bayer designations Alpha to Epsilon. John Flamsteed added Eta, Iota and four Roman letters; of these, only Iota is still used, as the others were dropped in subsequent catalogues and star charts. Flamsteed gave 16 stars Flamsteed designations, of which numbers 1 and 16 are not used—1's coordinates were in error, as there was no star present at the location that corresponds to any star in his Catalogus Britannicus; Baily presumed that the coordinates had been mistranscribed by Flamsteed, being 32s in error, and in fact referred to the 7.4-magnitude star HD 10407. Baily also noted that 16 Trianguli was closer to Aries and included it in the latter constellation.
Stars
Three stars make up the long narrow triangle that gives the constellation its name. The brightest member is the white giant star Beta Trianguli of apparent magnitude 3.00, lying 127 light-years distant from Earth. It is actually a spectroscopic binary system; the primary is a white star of spectral type A5IV with 3.5 times the mass of the Sun that is beginning to expand and evolve off the main sequence. The secondary is poorly known, but calculated to be a yellow-white F-type main-sequence star around 1.4 solar masses. The two orbit around a common centre of gravity every 31 days, and are surrounded by a ring of dust that extends from 50 to 400 AU away from the stars.
The second-brightest star, the yellow-white subgiant star Alpha Trianguli (3.41m) with a close dimmer companion, is also known as Caput Trianguli or Ras al Muthallath, and is at the apex of the triangle. It lies around 7 degrees north-northwest of Alpha Arietis. Completing the triangle is Gamma Trianguli, a white main sequence star of spectral type A1Vnn of apparent magnitude 4.00 about 112 light-years from Earth. It is around double the size of the Sun, around 33 times as luminous, and rotates rapidly. Like Beta, it is surrounded by a dusty debris disk, which has a radius 80 times the distance of the Earth from the Sun. Lying near Gamma and forming an optical triple system with it are Delta and 7 Trianguli. Delta is a spectroscopic binary system composed of two yellow main sequence stars of similar dimensions to the Sun that lies 35 light-years from Earth. The two stars orbit each other every ten days and are a mere 0.1 AU apart. This system is the closest in the constellation to the Earth. At only magnitude 5.25, 7 Trianguli is much further away, around 280 light-years distant from Earth.
Iota Trianguli is a double star whose components can be separated by medium-sized telescopes into a strong yellow and a contrasting pale blue star. Both of these are themselves close binaries. X Trianguli is an eclipsing binary system that ranges between magnitudes 8.5 and 11.2 over a period of 0.97 days. RW Trianguli is a cataclysmic variable star system composed of a white dwarf primary and an orange main sequence star of spectral type K7 V. The former is drawing off matter from the latter, forming a prominent accretion disc. The system is around 1075 light-years distant.
R Trianguli is a long period (Mira) variable that ranges from magnitude 6.2 to 11.7 over a period of 267 days. It is a red giant of spectral type M3.5-8e, lying around 960 light-years away. HD 12545, also known as XX Trianguli, is an orange giant of spectral type K0III around 520 light-years distant with a visual magnitude of 8.42. A huge starspot larger than the diameter of the Sun was detected on its surface in 1999 by astronomers using Doppler imaging.
Two star systems appear to have planets. HD 9446 is a Sun-like star around 171 light-years distant that has two planets of masses 0.7 and 1.8 times that of Jupiter, with orbital periods of 30 and 193 days respectively. HD 13189 is an orange giant of spectral type K2II about 2–7 times as massive as the Sun with a planetary or brown dwarf companion between 8 and 20 times as massive as Jupiter, which takes 472 days to complete an orbit. It is one of the largest stars discovered to have a planetary companion.
Deep-sky objects
The Triangulum Galaxy, also known as Messier 33, was discovered by Giovanni Battista Hodierna in the 17th century. A distant member of the Local Group, it is about 2.3 million light-years away, and at magnitude 5.8 it is bright enough to be seen by the naked eye under dark skies. Being a diffuse object, it is challenging to see under light-polluted skies, even with a small telescope or binoculars, and low power is required to view it. It is a spiral galaxy with a diameter of 46,000 light-years and is thus smaller than both the Andromeda Galaxy and the Milky Way. A distance of less than 300 kiloparsecs between it and Andromeda supports the hypothesis that it is a satellite of the larger galaxy. It is believed to have been interacting with it from their velocities. Within the constellation, it lies near the border of Pisces, 3.5 degrees west-northwest of Alpha Trianguli and 7 degrees southwest of Beta Andromedae. Within the galaxy, NGC 604 is an H II region where star formation takes place.
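Using only the figures quoted above, an apparent magnitude of about 5.8 and a distance of roughly 2.3 million light-years, the galaxy's absolute magnitude can be estimated from the standard distance-modulus relation m − M = 5 log10(d / 10 pc). The Python sketch below is a back-of-the-envelope illustration based on those round numbers, not a published value.

import math

m = 5.8               # apparent magnitude of the Triangulum Galaxy (from the text)
d_ly = 2.3e6          # distance in light-years (from the text)
d_pc = d_ly / 3.2616  # one parsec is about 3.2616 light-years
M = m - 5 * math.log10(d_pc / 10)
print(round(M, 1))    # roughly -18.4, i.e. vastly more luminous than any single star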
In addition to M33, there are several NGC galaxies of visual magnitudes 12 to 14. The largest of these include the 10 arcminute long magnitude 12 NGC 925 spiral galaxy and the 5 arcminute long magnitude 11.6 NGC 672 barred spiral galaxy. The latter is close by and appears to be interacting with IC 1727. The two are 88,000 light-years apart and lie around 18 million light-years away. These two plus another four nearby dwarf irregular galaxies constitute the NGC 672 group, and all six appear to have had a burst of star formation in the last ten million years. The group is thought connected to another group of six galaxies known as the NGC 784 group, named for its principal galaxy, the barred spiral NGC 784. Together with two isolated dwarf galaxies, these fourteen appear to be moving in a common direction and constitute a group possibly located on a dark matter filament. 3C 48 was the first quasar ever to be observed, although its true identity was not uncovered until after that of 3C 273 in 1963. It has an apparent magnitude of 16.2 and is located about 5 degrees northwest of Alpha Trianguli.
| Physical sciences | Other | Astronomy |
30660 | https://en.wikipedia.org/wiki/Tucana | Tucana | Tucana (The Toucan) is a constellation in the southern sky, named after the toucan, a South American bird. It is one of twelve constellations conceived in the late sixteenth century by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. Tucana first appeared on a celestial globe published in 1598 in Amsterdam by Plancius and Jodocus Hondius and was depicted in Johann Bayer's star atlas Uranometria of 1603. French explorer and astronomer Nicolas Louis de Lacaille gave its stars Bayer designations in 1756. The constellations Tucana, Grus, Phoenix and Pavo are collectively known as the "Southern Birds".
Tucana is not a prominent constellation as all of its stars are third magnitude or fainter; the brightest is Alpha Tucanae with an apparent visual magnitude of 2.87. Beta Tucanae is a star system with six member stars, while Kappa is a quadruple system. The constellation contains 47 Tucanae, one of the brightest globular clusters in the sky, and most of the Small Magellanic Cloud.
History
Tucana is one of the twelve constellations established by the astronomer Petrus Plancius from the observations of the southern sky by the Dutch explorers Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the Eerste Schipvaart, to the East Indies. It first appeared on a celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in the German cartographer Johann Bayer's Uranometria of 1603. Both Plancius and Bayer depict it as a toucan. De Houtman included it in his southern star catalogue the same year under the Dutch name Den Indiaenschen Exster, op Indies Lang ghenaemt "the Indian magpie, named Lang in the Indies", by this meaning a particular bird with a long beak—a hornbill, a bird native to the East Indies. A 1603 celestial globe by Willem Blaeu depicts it with a casque. It was interpreted on Chinese charts as Niǎohuì "bird's beak", and in England as "Brasilian Pye", while Johannes Kepler and Giovanni Battista Riccioli termed it Anser Americanus "American Goose", and Caesius as Pica Indica. Tucana and the nearby constellations Phoenix, Grus and Pavo are collectively called the "Southern Birds".
Characteristics
Irregular in shape, Tucana is bordered by Hydrus to the east, Grus and Phoenix to the north, Indus to the west and Octans to the south. Covering 295 square degrees, it ranks 48th of the 88 constellations in size. The recommended three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Tuc". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 10 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −56.31° and −75.35°. As one of the deep southern constellations, it remains below the horizon at latitudes north of the 30th parallel in the Northern Hemisphere, and is circumpolar at latitudes south of the 50th parallel in the Southern Hemisphere.
Features
Stars
Although he depicted Tucana on his chart, Bayer did not assign its stars Bayer designations. French explorer and astronomer Nicolas Louis de Lacaille labelled them Alpha to Rho in 1756, but omitted Omicron and Xi, and labelled a pair of stars close together Lambda Tucanae, and a group of three stars Beta Tucanae. In 1879, American astronomer Benjamin Gould designated a star Xi Tucanae—this had not been given a designation by Lacaille who had recognized it as nebulous, and it is now known as the globular cluster 47 Tucanae. Mu Tucanae was dropped by Francis Baily, who felt the star was too faint to warrant a designation, and Kappa's two components came to be known as Kappa1 and Kappa2.
The layout of the brighter stars of Tucana has been likened to a kite. Within the constellation's boundaries are around 80 stars brighter than an apparent magnitude of 7. At an apparent magnitude of 2.86, Alpha Tucanae is the brightest star in the constellation and marks the toucan's head. It is an orange subgiant of spectral type K3III around 199 light-years distant from the Solar System. A cool star with a surface temperature of 4300 K, it is 424 times as luminous as the Sun and 37 times its diameter. It is 2.5 to 3 times as massive. Alpha Tucanae is a spectroscopic binary, which means that the two stars have not been individually resolved using a telescope, but the presence of the companion has been inferred from measuring changes in the spectrum of the primary. The orbital period of the binary system is 4197.7 days (11.5 years). Nothing is known about the companion. Two degrees southeast of Alpha is the red-hued Nu Tucanae, of spectral type M4III and lying around 290 light-years distant. It is classified as a semiregular variable star and its brightness varies from magnitude +4.75 to +4.93. Described by Richard Hinckley Allen as bluish, Gamma Tucanae is a yellow-white main-sequence star of spectral type F4V and an apparent magnitude of 4.00 located around 75 light-years from Earth. It also marks the toucan's beak.
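The quoted temperature, radius and luminosity of Alpha Tucanae are mutually consistent. As an illustrative check (taking the solar effective temperature as 5772 K), the Stefan–Boltzmann relation gives:

L/L_\odot = (R/R_\odot)^2 \, (T_\mathrm{eff}/T_\odot)^4 \approx 37^2 \times (4300/5772)^4 \approx 4.2 \times 10^2,

close to the quoted 424 solar luminosities.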
Beta, Delta and Kappa are multiple star systems containing six, two and four stars respectively. Located near the tail of the toucan, Beta Tucanae's two brightest components, Beta1 and Beta2 are separated by an angle of 27 arcseconds and have apparent magnitudes of 4.4 and 4.5 respectively. They can be separated in small telescopes. A third star, Beta3 Tucanae, is separated by 10 arcminutes from the two, and able to be seen as a separate star with the unaided eye. Each star is itself a binary star, making six in total. Lying in the southwestern corner of the constellation around 251 light-years away from Earth, Delta Tucanae consists of a blue-white primary contrasting with a yellowish companion. Delta Tucanae A is a main sequence star of spectral type B9.5V and an apparent magnitude of 4.49. The companion has an apparent magnitude of 9.3. The Kappa Tucanae system shines with a combined apparent magnitude of 4.25, and is located around 68 light-years from the Solar System. The brighter component is a yellowish star, known as Kappa Tucanae A with an apparent magnitude of 5.33 and spectral type F6V, while the fainter lies 5 arcseconds to the northwest. Known as Kappa Tucanae B, it has an apparent magnitude of 7.58 and spectral type K1V. Five arcminutes to the northwest is a fainter star of apparent magnitude 7.24—actually a pair of orange main sequence stars of spectral types K2V and K3V, which can be seen individually as stars one arcsecond apart with a telescope such as a Dobsonian with high power.
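For a sense of the aperture needed to split the one-arcsecond pair mentioned above, the Dawes limit (a rule of thumb, used here only for illustration) gives:

\theta_{\min} \approx \frac{116''}{D/\mathrm{mm}} \quad\Rightarrow\quad D \gtrsim 116\ \mathrm{mm}\ \text{for}\ \theta_{\min} = 1'',

so a Dobsonian of 150–200 mm aperture used at high power can resolve the pair under good seeing.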
Lambda Tucanae is an optical double—that is, the name is given to two stars (Lambda1 and Lambda2) which appear close together from the Earth, but are in fact far apart in space. Lambda1 is itself a binary star, with two components—a yellow-white star of spectral type F7IV-V and an apparent magnitude of 6.22, and a yellow main sequence star of spectral type G1V and an apparent magnitude of 7.28. The system is 186 light-years distant. Lambda2 is an orange subgiant of spectral type K2III that is expanding and cooling and has left the main sequence. Of apparent magnitude 5.46, it is approximately 220 light-years distant from Earth.
Epsilon Tucanae traditionally marks the toucan's left leg. A B-type subgiant, it has a spectral type B9IV and an apparent magnitude of 4.49. It is approximately 373 light-years from Earth. It is around four times as massive as the Sun.
Theta Tucanae is a white A-type star around 423 light-years distant from Earth, which is actually a close binary system. The main star is classified as a Delta Scuti variable—a class of short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology. It is around double the Sun's mass, having siphoned off one whole solar mass from its companion, now a hydrogen-depleted dwarf star of around only 0.2 solar masses. The system shines with a combined light that varies between magnitudes 6.06 and 6.15 every 70 to 80 minutes.
Zeta Tucanae is a yellow-white main sequence star of spectral type F9.5V and an apparent magnitude of 4.20 located 28 light-years away from the Solar System. Its composition and mass are very similar to the Sun's; despite its slightly lower mass, it is more luminous than the Sun, and it has an estimated age of three billion years. The solar-like qualities make it a target of interest for investigating the possible existence of a life-bearing planet. It appears to have a debris disk orbiting it at a minimum radius of 2.3 astronomical units. As of 2009, no planet has been discovered in orbit around this star.
Five star systems have been found to have planets, four of which have been discovered by the High Accuracy Radial Velocity Planet Searcher (HARPS) in Chile. HD 4308 is a star with around 83% of the Sun's mass located 72 light-years away with a super-Earth planet that has an orbital period of around 15 days. HD 215497 is an orange star of spectral type K3V around 142 light-years distant. It is orbited by a hot super-Earth every 3 days and a second planet around the size of Saturn with a period of around 567 days. HD 221287 has a spectral type of F7V and lies 173 light-years away, and has a super-Jovian planet. HD 7199 has spectral type K0IV/V and is located 117 light-years away. It has a planet with around 30% the mass of Jupiter that has an orbital period of 615 days. HD 219077 has a planet around 10 times as massive as Jupiter in a highly eccentric orbit.
Deep-sky objects
The second-brightest globular cluster in the sky after Omega Centauri, 47 Tucanae (NGC 104) lies just west of the Small Magellanic Cloud. Only 14,700 light-years distant from Earth, it is thought to be around 12 billion years old. Mostly composed of old, yellow stars, it does possess a contingent of blue stragglers, hot stars that are hypothesized to form from binary star mergers. 47 Tucanae has an apparent magnitude of 3.9, meaning that it is visible to the naked eye; it is a Shapley class III cluster, which means that it has a clearly defined nucleus. Near 47 Tucanae in the sky, and often seen in wide-field photographs showing it, are two much more distant globular clusters associated with the SMC: NGC 121, 10 arcminutes away from the bigger cluster's edge, and Lindsay 8.
NGC 362 is another globular cluster in Tucana with an apparent magnitude of 6.4, 27,700 light-years from Earth. Like neighboring 47 Tucanae, NGC 362 is a Shapley class III cluster and among the brightest globular clusters in the sky. Unusually for a globular cluster, its orbit takes it very close to the center of the Milky Way—approximately 3,000 light-years. It was discovered in the 1820s by James Dunlop. Its stars become visible at 180x magnification through a telescope.
Located at the southern end of Tucana, the Small Magellanic Cloud is a dwarf galaxy that is one of the nearest neighbors to the Milky Way galaxy at a distance of 210,000 light-years. Though it probably formed as a disk shape, tidal forces from the Milky Way have distorted it. Along with the Large Magellanic Cloud, it lies within the Magellanic Stream, a cloud of gas that connects the two galaxies. NGC 346 is a star-forming region located in the Small Magellanic Cloud. It has an apparent magnitude of 10.3. Within it lies the triple star system HD 5980, each of its members among the most luminous stars known.
The Tucana Dwarf galaxy, which was discovered in 1990, is a dwarf spheroidal galaxy of type dE5 that is an isolated member of the Local Group. It is located from the Solar System and around from the barycentre of the Local Group—the second most remote of all member galaxies after the Sagittarius Dwarf Irregular Galaxy.
The barred spiral galaxy NGC 7408 is located 3 degrees northwest of Delta Tucanae, and was initially mistaken for a planetary nebula.
In 1998, part of the constellation was the subject of a two-week observation program by the Hubble Space Telescope, which resulted in the Hubble Deep Field South. The potential area to be covered needed to be at the poles of the telescope's orbit for continuous observing, with the final choice resting upon the discovery of a quasar, QSO J2233-606, in the field.
| Physical sciences | Other | Astronomy |
30662 | https://en.wikipedia.org/wiki/Triangulum%20Australe | Triangulum Australe | Triangulum Australe is a small constellation in the far Southern Celestial Hemisphere. Its name is Latin for "the southern triangle", which distinguishes it from Triangulum in the northern sky and is derived from the acute, almost equilateral pattern of its three brightest stars. It was first depicted on a celestial globe as Triangulus Antarcticus by Petrus Plancius in 1589, and later with more accuracy and its current name by Johann Bayer in his 1603 Uranometria. The French explorer and astronomer Nicolas Louis de Lacaille charted and gave the brighter stars their Bayer designations in 1756.
Alpha Trianguli Australis, known as Atria, is a second-magnitude orange giant and the brightest star in the constellation, as well as the 42nd-brightest star in the night sky. Completing the triangle are the two white main sequence stars Beta and Gamma Trianguli Australis. Although the constellation lies in the Milky Way and contains many stars, deep-sky objects are not prominent. Notable features include the open cluster NGC 6025 and planetary nebula NGC 5979.
The Great Attractor, the gravitational center of the Laniakea Supercluster (which includes the Milky Way galaxy), straddles the border between Triangulum Australe and the neighboring constellation Norma.
History
Italian navigator Amerigo Vespucci explored the New World at the beginning of the 16th century. He learnt to recognize the stars in the southern hemisphere and made a catalogue for his patron king Manuel I of Portugal, which is now lost. As well as the catalogue, Vespucci wrote descriptions of the southern stars, including a triangle which may be either Triangulum Australe or Apus. This was sent to his patron in Florence, Lorenzo di Pierfrancesco de' Medici, and published as Mundus Novus in 1504. The first depiction of the constellation was provided in 1589 by Flemish astronomer and clergyman Petrus Plancius on a -cm diameter celestial globe published in Amsterdam by Dutch cartographer Jacob van Langren, where it was called Triangulus Antarcticus and incorrectly portrayed to the south of Argo Navis. His student Petrus Keyzer, along with Dutch explorer Frederick de Houtman, coined the name Den Zuyden Trianghel. Triangulum Australe was more accurately depicted in Johann Bayer's celestial atlas Uranometria in 1603, where it was also given its current name.
Nicolas Louis de Lacaille portrayed the constellations of Norma, Circinus and Triangulum Australe as a set square and ruler, a compass, and a surveyor's level respectively in a set of draughtsman's instruments in his 1756 map of the southern stars. Also depicting it as a surveyor's level, German Johann Bode gave it the alternate name of Libella in his Uranographia.
German poet and author Philippus Caesius saw the three main stars as representing the Three Patriarchs, Abraham, Isaac and Jacob (with Atria as Abraham). The Wardaman people of the Northern Territory in Australia perceived the stars of Triangulum Australe as the tail of the Rainbow Serpent, which stretched out from near Crux across to Scorpius. Overhead in October, the Rainbow Serpent "gives Lightning a nudge" to bring on the wet season rains in November.
Characteristics
Triangulum Australe is a small constellation bordered by Norma to the north, Circinus to the west, Apus to the south and Ara to the east. It lies near the Pointers (Alpha and Beta Centauri), with only Circinus in between. The constellation is located within the Milky Way, and hence has many stars. A roughly equilateral triangle, it is easily identifiable. Triangulum Australe lies too far south in the celestial southern hemisphere to be visible from Europe, yet is circumpolar from most of the southern hemisphere. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "TrA". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 18 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −60.26° and −70.51°. Triangulum Australe culminates each year at 9 p.m. on 23 August.
Notable features
Bright stars
In defining the constellation, Lacaille gave twelve stars Bayer designations of Alpha through to Lambda, with two close stars called Eta (one now known by its Henry Draper catalogue number), while Lambda was later dropped due to its dimness. The three brightest stars, Alpha, Beta and Gamma, make up the triangle. Readily identified by its orange hue, Alpha Trianguli Australis is a bright giant star of spectral type K2 IIb-IIIa with an apparent magnitude of +1.91 that is the 42nd-brightest star in the night sky. It lies away and has an absolute magnitude of −3.68 and is 5,500 times more luminous than the Sun. With a diameter 130 times that of the Sun, it would almost reach the orbit of Venus if placed at the centre of the Solar System. The proper name Atria is a contraction of its Bayer designation. Beta Trianguli Australis is a double star, the primary being an F-type main-sequence star with a stellar classification of F1V, and an apparent magnitude of 2.85. Lying only away, it has an absolute magnitude of 2.38. Its companion, almost 3 arcminutes away, is a 13th-magnitude star which may or may not be in orbit around Beta. The remaining member of the triangle is Gamma Trianguli Australis with an apparent magnitude of 2.87. It is an A-type main sequence star of spectral class A1 V, which lies away.
Located outside the triangle near Beta, Delta Trianguli Australis is the fourth-brightest star at apparent magnitude +3.8. It is a yellow giant of spectral type G2Ib-II and lies away. Lying halfway between Beta and Gamma, Epsilon Trianguli Australis is an optical double. The brighter star, Epsilon Trianguli Australis A, is an orange K-type sub-giant of spectral type K1.5III with an apparent magnitude of +4.11. The optical companion, Epsilon Trianguli Australis B (or HD 138510), is a white main sequence star of spectral type A9IV/V which has an apparent magnitude of +9.32. Zeta Trianguli Australis appears as a star of apparent magnitude +4.91 and spectral class F9V, but is actually a spectroscopic binary with a near companion, probably a red dwarf. The pair orbit each other once every 13 days. A young star, its proper motion indicates it is a member of the Ursa Major moving group. Iota Trianguli Australis shows itself to be a multiple star system composed of a yellow and a white star when seen through a 7.5 cm telescope. The brighter star has a spectral type of F4IV and is a spectroscopic binary whose components are two yellow-white stars which orbit each other every 39.88 days. The primary is a Gamma Doradus variable, pulsating over a period of 1.45 days. The fainter star is not associated with the system, hence the system is an optical double. HD 147018 is a Sun-like star of apparent magnitude 8.3 and spectral type G9V, which was found to have two exoplanets, HD 147018 b and HD 147018 c, in 2009.
Of apparent magnitude 5.11, the yellow bright giant Kappa Trianguli Australis of spectral type G5IIa lies around distant from the Solar System. Eta Trianguli Australis (or Eta1 Trianguli Australis) is a Be star of spectral type B7IVe which is from Earth, with an apparent magnitude of 5.89. Lacaille also named a nearby star Eta; Francis Baily followed this inconsistently, applying the name to the brighter star or to both in two different publications. Despite their faintness, Benjamin Gould upheld their Bayer designation as they were closer than 25 degrees to the south celestial pole. The second Eta is now designated as HD 150550. It is a variable star of average magnitude 6.53 and spectral type A1III.
Variable stars
Triangulum Australe contains several cepheid variables, all of which are too faint to be seen with the naked eye: R Trianguli Australis ranges from apparent magnitude 6.4 to 6.9 over a period of 3.389 days, S Trianguli Australis varies from magnitude 6.1 to 6.8 over 6.323 days, and U Trianguli Australis' brightness changes from 7.5 to 8.3 over 2.568 days. All three are yellow-white giants of spectral type F7Ib/II, F8II, and F8Ib/II respectively. RT Trianguli Australis is an unusual cepheid variable which shows strong absorption bands in the molecular fragments C2, CH and CN, and has been classified as a carbon cepheid of spectral type R. It varies between magnitudes 9.2 and 9.97 over 1.95 days. Lying near Gamma, X Trianguli Australis is a variable carbon star with an average magnitude of 5.63. It has two periods of around 385 and 455 days, and is of spectral type C5, 5(Nb).
EK Trianguli Australis, a dwarf nova of the SU Ursae Majoris type, was first noticed in 1978 and officially described in 1980. It consists of a white dwarf and a donor star which orbit each other every 1.5 hours. The white dwarf sucks matter from the other star onto an accretion disc and periodically erupts, reaching magnitude 11.2 in superoutbursts, 12.1 in normal outbursts and remaining at magnitude 16.7 when quiet. NR Trianguli Australis was a slow nova which peaked at magnitude 8.4 in April 2008, before fading to magnitude 12.4 by September of that year.
Deep-sky objects
Triangulum Australe has few deep-sky objects—one open cluster and a few planetary nebulae and faint galaxies. NGC 6025 is an open cluster with about 30 stars ranging from 7th to 9th magnitude. Located 3 degrees north and 1 degree east of Beta Trianguli Australis, it lies about away and is about in diameter. Its brightest star is MQ Trianguli Australis at apparent magnitude 7.1. NGC 5979, a planetary nebula of apparent magnitude 12.3, has a blue-green hue at higher magnifications, while Henize 2-138 is a smaller planetary nebula of magnitude 11.0. NGC 5938 is a remote spiral galaxy around 300 million light-years (90 megaparsecs) away. It is located 5 degrees south of Epsilon Trianguli Australis. ESO 69-6 is a pair of merging galaxies located about 600 million light-years (185 megaparsecs) away. Their contents have been dragged out in long tails by the interaction.
In culture
Triangulum Australe appears on the flag of Brazil, symbolizing the three states of the South Region.
It also appears as the only constellation used for the flag of secessionist movement The South Is My Country.
| Physical sciences | Other | Astronomy |
30664 | https://en.wikipedia.org/wiki/Telescopium | Telescopium | Telescopium is a minor constellation in the southern celestial hemisphere, one of twelve named in the 18th century by French astronomer Nicolas-Louis de Lacaille and one of several depicting scientific instruments. Its name is a Latinized form of the Greek word for telescope. Telescopium was later much reduced in size by Francis Baily and Benjamin Gould.
The brightest star in the constellation is Alpha Telescopii, a blue-white subgiant with an apparent magnitude of 3.5, followed by the orange giant star Zeta Telescopii at magnitude 4.1. Eta and PZ Telescopii are two young star systems with debris disks and brown dwarf companions. Telescopium hosts two unusual stars with very little hydrogen that are likely to be the result of two merged white dwarfs: PV Telescopii, also known as HD 168476, is a hot blue extreme helium star, while RS Telescopii is an R Coronae Borealis variable. RR Telescopii is a cataclysmic variable that brightened as a nova to magnitude 6 in 1948.
It had been hypothesized in 2020 that Telescopium also hosted the first known visible star system with a black hole, QV Telescopii (HR 6819); however, observations in 2022 indicated that it is instead a binary system of two main-sequence stars without a black hole.
History
Telescopium was introduced in 1751–52 by Nicolas-Louis de Lacaille with the French name le Telescope, depicting an aerial telescope, after he had observed and catalogued 10,000 southern stars during a two-year stay at the Cape of Good Hope. He devised 14 new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. All but one honored instruments that symbolised the Age of Enlightenment. Covering 40 degrees of the night sky, the telescope stretched out northwards between Sagittarius and Scorpius. Lacaille had Latinised its name to Telescopium by 1763.
The constellation was known by other names. It was called Tubus Astronomicus in the eighteenth century, during which time three constellations depicting telescopes were recognised—Tubus Herschelii Major between Gemini and Auriga and Tubus Herschelii Minor between Taurus and Orion, both of which had fallen out of use by the nineteenth century. Johann Bode called it the Astronomische Fernrohr in his 1805 Gestirne and kept its size, but later astronomers Francis Baily and Benjamin Gould subsequently shrank its boundaries. The much-reduced constellation lost several brighter stars to neighbouring constellations: Beta Telescopii became Eta Sagittarii, which it had been before Lacaille placed it in Telescopium, Gamma was placed in Scorpius and renamed G Scorpii by Gould, Theta Telescopii reverted to its old appellation of d Ophiuchi, and Sigma Telescopii was placed in Corona Australis. Initially uncatalogued, the latter is now known as HR 6875. The original object Lacaille had named Eta Telescopii—the open cluster Messier 7—was in what is now Scorpius, and Gould used the Bayer designation for a magnitude 5 star, which he felt warranted a letter.
Characteristics
A small constellation, Telescopium is bordered by Sagittarius and Corona Australis to the north, Ara to the west, Pavo to the south, and Indus to the east, cornering on Microscopium to the northeast. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Tel". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a quadrilateral. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −45.09° and −56.98°. The whole constellation is visible to observers south of latitude 33°N.
Features
Stars
Within the constellation's borders, there are 57 stars brighter than or equal to apparent magnitude 6.5. With a magnitude of 3.5, Alpha Telescopii is the brightest star in the constellation. It is a blue-white subgiant of spectral type B3IV which lies around 250 light-years away. It is radiating nearly 800 times the Sun's luminosity, and is estimated to be 5.2±0.4 times as massive and have 3.3±0.5 times the Sun's radius. Close by Alpha Telescopii are the two blue-white stars sharing the designation of Delta Telescopii. Delta¹ Telescopii is of spectral type B6IV and apparent magnitude 4.9, while Delta² Telescopii is of spectral type B3III and magnitude 5.1. They form an optical double, as the stars are estimated to be around 710 and 1190 light-years away respectively. The faint (magnitude 12.23) Gliese 754, a red dwarf of spectral type M4.5V, is one of the nearest 100 stars to Earth at 19.3 light-years distant. Its eccentric orbit around the Galaxy indicates that it may have originated in the Milky Way's thick disk.
At least four of the fifteen stars visible to the unaided eye are orange giants of spectral class K. The second brightest star in the constellation—at apparent magnitude 4.1—is Zeta Telescopii, an orange subgiant of spectral type K1III-IV. Around 1.53 times as massive as the Sun, it shines with 512 times its luminosity. Located 127 light years away from Earth, it has been described as yellow or reddish in appearance. Epsilon Telescopii is a binary star system: the brighter component, Epsilon Telescopii A, is an orange giant of spectral type K0III with an apparent magnitude of +4.52, while the 13th magnitude companion, Epsilon Telescopii B, is 21 arcseconds away from the primary, and just visible with a 15 cm aperture telescope on a dark night. The system is 417 light-years away. Iota Telescopii and HD 169405—magnitude 5 orange giants of spectral types K0III and K0.5III respectively—make up the quartet. They are around 370 and 497 light-years away from the Sun respectively. Another ageing star, Kappa Telescopii is a yellow giant with a spectral type G9III and apparent magnitude of 5.18. Around 1.87 billion years old, this star of around 1.6 solar masses has swollen to 11 times the Sun's diameter. It is approximately 293 light-years from Earth, and is another optical double.
Xi Telescopii is an irregular variable star that ranges between magnitudes 4.89 and 4.94. Located 1079 light-years distant, it is a red giant of spectral type M2III that has a diameter around 5.6 times the Sun's, and a luminosity around 2973 times that of the Sun. Another irregular variable, RX Telescopii is a red supergiant that varies between magnitudes 6.45 and 7.47, just visible to the unaided eye under good viewing conditions. BL Telescopii is an Algol-like eclipsing binary system that varies between apparent magnitudes 7.09 and 9.08 over a period of just over 778 days (2 years 48 days). The primary is a yellow supergiant that is itself intrinsically variable. Dipping from its baseline magnitude of 9.6 to 16.5, RS Telescopii is a rare R Coronae Borealis variable—an extremely hydrogen-deficient supergiant thought to have arisen as the result of the merger of two white dwarfs; fewer than 100 have been discovered as of 2012. The dimming is thought to be caused by carbon dust expelled by the star. As of 2012, four dimmings have been observed. PV Telescopii is a class B-type (blue) extreme helium star that is the prototype of a class of variables known as PV Telescopii variables. First discovered in 1952, it was found to have a very low level of hydrogen. One theory of its origin is that it is the result of a merger between a helium- and a carbon-oxygen white dwarf. If the combined mass does not exceed the Chandrasekhar limit, the former will accrete onto the latter star and ignite to form a supergiant. Later this will become an extreme helium star before cooling to become a white dwarf.
While RR Telescopii, also designated Nova Telescopii 1948, is often called a slow nova, it is now classified as a symbiotic nova system composed of an M5III pulsating red giant and a white dwarf; between 1944 and 1948 it brightened by about 7 magnitudes before being noticed at apparent magnitude 6.0 in mid-1948. It has since faded slowly to about apparent magnitude 12. QS Telescopii is a binary system composed of a white dwarf and a main sequence donor star; in this case the two are close enough to be tidally locked, facing one another. In such systems, known as polars, material from the donor star does not form an accretion disk around the white dwarf, but rather streams directly onto it. This is due to the presence of the white dwarf's strong magnetic field.
Although no star systems in Telescopium have confirmed planets, several have been found to have brown dwarf companions. A member of the 12-million-year-old Beta Pictoris moving group of stars that share a common proper motion through space, Eta Telescopii is a young white main sequence star of magnitude 5.0 and spectral type A0V. It has a debris disk and brown dwarf companion of spectral type M7V or M8V that is between 20 and 50 times as massive as Jupiter. The system is complex, as it has a common proper motion with (and is gravitationally bound to) the star HD 181327, which has its own debris disk. This latter star is a yellow-white main sequence star of spectral type F6V of magnitude 7.0. PZ Telescopii is another young star with a debris disk and substellar brown dwarf companion, though at 24 million years of age it appears too old to be part of the Beta Pictoris moving group. HD 191760 is a yellow subgiant—a star that is cooling and expanding off the main sequence—of spectral type G3IV/V. Estimated to be just over four billion years old, it is slightly (1.1 to 1.3 times) more massive than the Sun, 2.69 times as luminous, and has around 1.62 times its radius. Using the High Accuracy Radial Velocity Planet Searcher (HARPS) instrument on the ESO 3.6 m Telescope, it was found to have a brown dwarf around 38 times as massive as Jupiter orbiting at an average distance of 1.35 AU with a period of 505 days. This is an unusually close distance from the star, within a range that has been termed the brown-dwarf desert.
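The quoted orbit of the brown dwarf around HD 191760 is consistent with Kepler's third law. As an illustrative check, taking the stated stellar mass range of 1.1–1.3 solar masses:

P = \sqrt{\frac{(a/\mathrm{AU})^3}{M_\ast/M_\odot}}\ \mathrm{yr} = \sqrt{\frac{1.35^3}{1.1\text{ to }1.3}}\ \mathrm{yr} \approx 1.4\ \mathrm{yr} \approx 500\text{ to }550\ \mathrm{days},

bracketing the reported 505-day period.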
Deep sky objects
The Telescopium group is a group of twelve galaxies spanning three degrees in the northeastern part of the constellation, lying around 37 megaparsecs (120 million light-years) from our own galaxy. The brightest member is the elliptical galaxy NGC 6868, and to the west lies the spiral galaxy (or, perhaps, lenticular galaxy) NGC 6861. These are the brightest members of two respective subgroups within the galaxy group, and are heading toward a merger in the future.
The globular cluster NGC 6584 lies near Theta Arae and is 45,000 light-years distant from Earth. It is an Oosterhoff type I cluster, and contains at least 69 variable stars, most of which are RR Lyrae variables. The planetary nebula IC 4699 is of 13th magnitude and lies midway between Alpha and Epsilon Telescopii. IC 4889 is an elliptical galaxy of apparent magnitude 11.3, which can be found 2 degrees north-north-west of 5.3-magnitude Nu Telescopii. Observing it through a 40 cm telescope will reveal its central region and halo.
Occupying an area of around 4' × 2', NGC 6845 is an interacting system of four galaxies—two spiral and two lenticular galaxies—that is estimated to be around 88 megaparsecs (287 million light-years) distant. SN 2008da was a type II supernova observed in one of the spiral galaxies, NGC 6845A, in June 2008. SN 1998bw was a luminous supernova observed in the spiral arm of the galaxy ESO184-G82 in April 1998, and is notable in that it is highly likely to be the source of the gamma-ray burst GRB 980425.
| Physical sciences | Other | Astronomy |
30677 | https://en.wikipedia.org/wiki/Tool | Tool | A tool is an object that can extend an individual's ability to modify features of the surrounding environment or help them accomplish a particular task. Although many animals use simple tools, only human beings, whose use of stone tools dates back hundreds of millennia, have been observed using tools to make other tools.
Early human tools, made of such materials as stone, bone, and wood, were used for the preparation of food, hunting, the manufacture of weapons, and the working of materials to produce clothing and useful artifacts and crafts such as pottery, along with the construction of housing, businesses, infrastructure, and transportation. The development of metalworking made additional types of tools possible. Harnessing energy sources, such as animal power, wind, or steam, allowed increasingly complex tools to produce an even larger range of items, with the Industrial Revolution marking an inflection point in the use of tools. The introduction of widespread automation in the 19th and 20th centuries allowed tools to operate with minimal human supervision, further increasing the productivity of human labor.
By extension, concepts that support systematic or investigative thought are often referred to as "tools" or "toolkits".
Definition
While a common-sense understanding of the meaning of tool is widespread, several formal definitions have been proposed.
In 1981, Benjamin Beck published a widely used definition of tool use, which has since been modified; other, briefer definitions have also been proposed.
History
Anthropologists believe that the use of tools was an important step in the evolution of mankind. Because tools are used extensively by both humans (Homo sapiens) and wild chimpanzees, it is widely assumed that the first routine use of tools took place prior to the divergence between the two ape species. These early tools, however, were likely made of perishable materials such as sticks, or consisted of unmodified stones that cannot be distinguished from other stones as tools.
Stone artifacts date back to about 2.5 million years ago. However, a 2010 study suggests the hominin species Australopithecus afarensis ate meat by carving animal carcasses with stone implements. This finding pushes back the earliest known use of stone tools among hominins to about 3.4 million years ago. Finds of actual tools date back at least 2.6 million years in Ethiopia. One of the earliest distinguishable stone tool forms is the hand axe.
Up until recently, weapons found in digs were the only tools of "early man" that were studied and given importance. Now, more tools are recognized as culturally and historically relevant. As well as hunting, other activities required tools such as preparing food, "...nutting, leatherworking, grain harvesting and woodworking..." Included in this group are "flake stone tools".
Tools are the most important items that the ancient humans used to climb to the top of the food chain; by inventing tools, they were able to accomplish tasks that human bodies could not, such as using a spear or bow to kill prey, since their teeth were not sharp enough to pierce many animals' skins. "Man the hunter" as the catalyst for Hominin change has been questioned. Based on marks on the bones at archaeological sites, it is now more evident that pre-humans were scavenging off of other predators' carcasses rather than killing their own food.
Timeline of ancient tool development
Many tools were made in prehistory or in the early centuries of recorded history, but archaeological evidence can provide dates of development and use.
Olduvai stone technology (Oldowan) 2.5 million years ago (scrapers; to butcher dead animals)
Huts, 2 million years ago.
Acheulean stone technology 1.6 million years ago (hand axe)
Fire creation and manipulation, used since the Paleolithic, possibly by Homo erectus as early as 1.5 million years ago
Boats, 900,000 years ago.
Cooking, 500,000 years ago.
Javelins, 400,000 years ago.
Glue, 200,000 years ago.
Clothing possibly 170,000 years ago.
Stone tools, used by Homo floresiensis, possibly 100,000 years ago.
Harpoons, 90,000 years ago.
Bow and arrows, 70,000–60,000 years ago.
Sewing needles, 60,000 – 50,000 BC
Flutes, 43,000 years ago.
Fishing nets, 43,000 years ago.
Ropes, 40,000 years ago.
Ceramics
Fishing hooks
Domestication of animals
Sling (weapon)
Microliths
Brick used for construction in the Middle East
Agriculture and Plough
Wheel
Gnomon
Writing systems
Copper
Bronze
Salt
Chariot
Iron
Sundial
Glass
Catapult
Cast iron
Horseshoe
Stirrup first few centuries AD
Several of the six classic simple machines (wheel and axle, lever, pulley, inclined plane, wedge, and screw) were invented in Mesopotamia. The wheel and axle mechanism first appeared with the potter's wheel, invented in what is now Iraq during the 5th millennium BC. This led to the invention of the wheeled vehicle in Mesopotamia during the early 4th millennium BC. The lever was used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia, and then in ancient Egyptian technology. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC.
The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911–609 BC). The Assyrian King Sennacherib (704–681 BC) claims to have invented automatic sluices and to have been the first to use water screw pumps, of up to 30 tons weight, which were cast using two-part clay molds rather than by the 'lost wax' process. The Jerwan Aqueduct is made with stone arches and lined with waterproof concrete. The earliest evidence of water wheels and watermills dates back to the ancient Near East in the 4th century BC, specifically in the Persian Empire before 350 BC, in the regions of Mesopotamia (Iraq) and Persia (Iran). This pioneering use of water power constituted perhaps the first use of mechanical energy.
Mechanical devices experienced a major expansion in their use in Ancient Greece and Ancient Rome with the systematic employment of new energy sources, especially waterwheels. Their use expanded through the Dark Ages with the addition of windmills.
Machine tools
Machine tools occasioned a surge in producing new tools in the Industrial Revolution. Pre-industrial machinery was built by various craftsmen: millwrights built water and windmills, carpenters made wooden framing, and smiths and turners made metal parts. Wooden components had the disadvantage of changing dimensions with temperature and humidity, and the various joints tended to rack (work loose) over time. As the Industrial Revolution progressed, machines with metal parts and frames became more common.
Other important uses of metal parts were in firearms and threaded fasteners, such as machine screws, bolts, and nuts. There was also the need for precision in making parts. Precision would allow better working machinery, interchangeability of parts, and standardization of threaded fasteners. The demand for metal parts led to the development of several machine tools. They have their origins in the tools developed in the 18th century by makers of clocks and watches and scientific instrument makers to enable them to batch-produce small mechanisms. Before the advent of machine tools, metal was worked manually using the basic hand tools of hammers, files, scrapers, saws, and chisels. Consequently, the use of metal machine parts was kept to a minimum. Hand methods of production were very laborious and costly and precision was difficult to achieve. With their inherent precision, machine tools enabled the economical production of interchangeable parts.
Examples of machine tools include:
Broaching machine
Drill press
Gear shaper
Hobbing machine
Hone
Lathe
Screw machines
Milling machine
Shear (sheet metal)
Shaper
Bandsaw
Planer
Stewart platform mills
Grinding machines
Advocates of nanotechnology expect a similar surge as tools become microscopic in size.
Types
One can classify tools according to their basic functions:
Cutting and edge tools, such as the knife, sickle, scythe, hatchet, and axe, are wedge-shaped implements that produce a shearing force along a narrow face. Ideally, the edge of the tool needs to be harder than the material being cut or the blade will become dulled with repeated use. But even resilient tools will require periodic sharpening, which is the process of removing deformation wear from the edge. Other examples of cutting tools include gouges and drill bits.
Moving tools move large and tiny items. Many are levers, which give the user a mechanical advantage (a worked example follows this list). Examples of force-concentrating tools include the hammer which moves a nail or the maul which moves a stake. These operate by applying physical compression to a surface. In the case of the screwdriver, the force is rotational and called torque. By contrast, an anvil concentrates force on an object being hammered by preventing it from moving away when struck. Writing implements deliver a fluid to a surface via compression to activate the ink cartridge. Grabbing and twisting nuts and bolts with pliers, a glove, a wrench, etc. likewise move items by applying torque (rotational force).
Tools that enact chemical changes, including temperature and ignition, such as lighters and blowtorches.
Guiding, measuring and perception tools include the ruler, glasses, square, sensors, straightedge, theodolite, microscope, monitor, clock, phone, printer
Shaping tools, such as molds, jigs, trowels.
Fastening tools, such as welders, soldering irons, rivet guns, nail guns, or glue guns.
Information and data manipulation tools, such as computers, IDE, spreadsheets
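As a worked illustration of the mechanical advantage mentioned for moving tools above (the lengths and force used here are hypothetical), a crowbar pivoting 5 cm from the load and pushed 50 cm from the pivot gives:

\mathrm{MA} = \frac{d_{\mathrm{effort}}}{d_{\mathrm{load}}} = \frac{0.50\ \mathrm{m}}{0.05\ \mathrm{m}} = 10, \qquad F_{\mathrm{load}} = \mathrm{MA} \times F_{\mathrm{effort}} = 10 \times 100\ \mathrm{N} = 1000\ \mathrm{N},

so the applied force is multiplied tenfold, ignoring friction and other losses.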
Some tools may be combinations of other tools. An alarm-clock is for example a combination of a measuring tool (the clock) and a perception tool (the alarm). This enables the alarm-clock to be a tool that falls outside of all the categories mentioned above.
There is some debate on whether to consider protective gear items as tools, because they do not directly help perform work, just protect the worker like ordinary clothing. They do meet the general definition of tools and in many cases are necessary for the completion of the work. Personal protective equipment includes such items as gloves, safety glasses, ear defenders and biohazard suits.
Function
Tool substitution
Often, by design or coincidence, a tool may share key functional attributes with one or more other tools. In this case, some tools can substitute for other tools, either as a makeshift solution or as a matter of practical efficiency. "One tool does it all" is a motto of some importance for workers who cannot practically carry every specialized tool to the location of every work task, such as a carpenter who does not necessarily work in a shop all day and needs to do jobs in a customer's house. Tool substitution may be divided broadly into two classes: substitution "by-design", or "multi-purpose", and substitution as makeshift. Substitution "by-design" would be tools that are designed specifically to accomplish multiple tasks using only that one tool.
Substitution is "makeshift" when human ingenuity comes into play and a tool is used for an unintended purpose, such as using a long screwdriver to separate a cars control arm from a ball joint, instead of using a tuning fork. In many cases, the designed secondary functions of tools are not widely known. For example, many wood-cutting hand saws integrate a square by incorporating a specially-shaped handle, that allows 90° and 45° angles to be marked by aligning the appropriate part of the handle with an edge, and scribing along the back edge of the saw. The latter is illustrated by the saying "All tools can be used as hammers". Nearly all tools can be used to function as a hammer, even though few tools are intentionally designed for it and even fewer work as well as the original.
Tools are often used to substitute for many mechanical apparatuses, especially in older mechanical devices. In many cases a cheap tool could be used to occupy the place of a missing mechanical part. A window roller in a car could be replaced with pliers. A transmission shifter or ignition switch would be able to be replaced with a screwdriver. Again, these would be considered tools that are being used for their unintended purposes, substitution as makeshift. Tools such as a rotary tool would be considered the substitution "by-design", or "multi-purpose". This class of tools allows the use of one tool that has at least two different capabilities. "Multi-purpose" tools are basically multiple tools in one device/tool. Tools such as this are often power tools that come with many different attachments like a rotary tool does, so one could say that a power drill is a "multi-purpose" tool.
Multi-use tools
A multi-tool is a hand tool that incorporates several tools into a single, portable device; the Swiss Army knife represents one of the earliest examples. Other tools have a primary purpose but also incorporate other functionality – for example, lineman's pliers incorporate a gripper and cutter and are often used as a hammer; and some hand saws incorporate a square in the right-angle between the blade's dull edge and the saw's handle. This would also be the category of "multi-purpose" tools, since they are also multiple tools in one (multi-use and multi-purpose can be used interchangeably – compare hand axe). These types of tools were specifically made to catch the eye of many different craftsmen who traveled to do their work. To these workers such tools were revolutionary because they were one tool or one device that could do several different things. With this new revolution of tools, traveling craftsmen would not have to carry so many tools with them to job sites, since their space was limited to the vehicle or to the beast of burden they were driving. Multi-use tools solve the problem of having to deal with many different tools.
Use by other animals
Tool use by animals is a phenomenon in which an animal uses any kind of tool in order to achieve a goal such as acquiring food and water, grooming, defense, communication, recreation or construction. Originally thought to be a skill possessed only by humans, some tool use requires a sophisticated level of cognition. There is considerable discussion about the definition of what constitutes a tool and therefore which behaviours can be considered true examples of tool use. Observation has confirmed that a number of species can use tools including monkeys, apes, elephants, several birds, and sea otters. Now the unique relationship of humans with tools is considered to be that we are the only species that uses tools to make other tools.
Primates are well known for using tools for hunting or gathering food and water, cover for rain, and self-defense. Chimpanzees have often been the object of study in regard to their usage of tools, most famously by Jane Goodall; these animals are closely related to humans. Wild tool-use in other primates, especially among apes and monkeys, is considered relatively common, though its full extent remains poorly documented, as many wild primates are observed only distantly or briefly in their natural environments, free of human influence. Some novel tool-use by primates may arise in a localized or isolated manner within certain unique primate cultures, being transmitted and practiced among socially connected primates through cultural learning. Many famous researchers, such as Charles Darwin in his book The Descent of Man, mentioned tool-use in monkeys (such as baboons).
Among other mammals, both wild and captive elephants are known to create tools using their trunks and feet, mainly for swatting flies, scratching, plugging up waterholes that they have dug (to close them up again so the water does not evaporate), and reaching food that is out of reach. Many other social mammals particularly have been observed engaging in tool-use. A group of dolphins in Shark Bay uses sea sponges to protect their beaks while foraging. Sea otters will use rocks or other hard objects to dislodge food (such as abalone) and break open shellfish. Many or most mammals of the order Carnivora have been observed using tools, often to trap or break open the shells of prey, as well as for scratching.
Corvids (such as crows, ravens and rooks) are well known for their large brains (among birds) and tool use. New Caledonian crows are among the only animals that create their own tools. They mainly manufacture probes out of twigs and wood (and sometimes metal wire) to catch or impale larvae. Tool use in some birds may be best exemplified in nest intricacy. Tailorbirds manufacture 'pouches' to make their nests in. Some birds, such as weaver birds, build complex nests utilizing a diverse array of objects and materials, many of which are specifically chosen by certain birds for their unique qualities. Woodpecker finches insert twigs into trees in order to catch or impale larvae. Parrots may use tools to wedge nuts so that they can crack open the outer shell of nuts without launching away the inner contents. Some birds take advantage of human activity, such as carrion crows in Japan, which drop nuts in front of cars to crack them open.
Several species of fish use tools to hunt and crack open shellfish, extract food that is out of reach, or clear an area for nesting. Among cephalopods (and perhaps uniquely or to an extent unobserved among invertebrates), octopuses are known to use tools relatively frequently, such as gathering coconut shells to create a shelter or using rocks to create barriers.
Non-material usage
By extension, concepts which support systematic or investigative thought are often referred to as "tools"; for example, Vanessa Dye refers to "tools of reflection" and "tools to help sharpen your professional practice" for trainee teachers, illustrating the connection between physical and conceptual tools by quoting the French scientist Claude Bernard. Similarly, a decision-making process "developed to help women and their partners make confident and informed decisions when planning where to give birth" is described as a "Birth Choice tool"; and the idea of a "toolkit" is used by the International Labour Organization to describe a set of processes applicable to improving global labour relations.
A telephone is a communication tool that interfaces between two people engaged in conversation at one level. It also interfaces between each user and the communication network at another level. It is in the domain of media and communications technology that a counter-intuitive aspect of our relationships with our tools first began to gain popular recognition. John M. Culkin famously said, "We shape our tools and thereafter our tools shape us". One set of scholars expanded on this to say: "Humans create inspiring and empowering technologies but also are influenced, augmented, manipulated, and even imprisoned by technology".
| Technology | Technology | null |
30684 | https://en.wikipedia.org/wiki/Tundra | Tundra | In physical geography, tundra () is a type of biome where tree growth is hindered by frigid temperatures and short growing seasons. There are three regions and associated types of tundra: Arctic, Alpine, and Antarctic.
Tundra vegetation is composed of dwarf shrubs, sedges, grasses, mosses, and lichens. Scattered trees grow in some tundra regions. The ecotone (or ecological boundary region) between the tundra and the forest is known as the tree line or timberline. The tundra soil is rich in nitrogen and phosphorus. The soil also contains large amounts of biomass and decomposed biomass that has been stored as methane and carbon dioxide in the permafrost, making the tundra soil a carbon sink. As global warming heats the ecosystem and causes soil thawing, the permafrost carbon cycle accelerates and releases much of these soil-contained greenhouse gases into the atmosphere, creating a feedback cycle that changes climate.
Etymology
The term is a Russian word adapted from the Sámi languages.
Arctic
Arctic tundra occurs in the far Northern Hemisphere, north of the taiga belt. The word "tundra" usually refers only to the areas where the subsoil is permafrost, or permanently frozen soil. (It may also refer to the treeless plain in general so that northern Sápmi would be included.) Permafrost tundra includes vast areas of northern Russia and Canada. The polar tundra is home to several peoples who are mostly nomadic reindeer herders, such as the Nganasan and Nenets in the permafrost area (and the Sami in Sápmi).
Arctic tundra contains areas of stark landscape and is frozen for much of the year. The soil there is frozen from down, making it impossible for trees to grow there. Instead, bare and sometimes rocky land can only support certain kinds of Arctic vegetation, low-growing plants such as moss, heath (Ericaceae varieties such as crowberry and black bearberry), and lichen.
There are two main seasons, winter and summer, in the polar tundra areas. During the winter it is very cold, dark, and windy with the average temperature around , sometimes dipping as low as . However, extreme cold temperatures on the tundra do not drop as low as those experienced in taiga areas further south (for example, Russia's, Canada's, and Alaska's lowest temperatures were recorded in locations south of the tree line). During the summer, temperatures rise somewhat, and the top layer of seasonally-frozen soil melts, leaving the ground very soggy. The tundra is covered in marshes, lakes, bogs, and streams during the warm months. Generally daytime temperatures during the summer rise to about but can often drop to or even below freezing. Arctic tundras are sometimes the subject of habitat conservation programs. In Canada and Russia, many of these areas are protected through a national Biodiversity Action Plan.
Tundra tends to be windy, with winds often blowing upwards of . However, it is desert-like, with only about of precipitation falling per year (the summer is typically the season of maximum precipitation). Although precipitation is light, evaporation is also relatively minimal. During the summer, the permafrost thaws just enough to let plants grow and reproduce, but because the ground below this is frozen, the water cannot sink any lower, so the water forms the lakes and marshes found during the summer months. There is a natural pattern of accumulation of fuel and wildfire which varies depending on the nature of vegetation and terrain. Research in Alaska has shown fire-event return intervals (FRIs) that typically vary from 150 to 200 years, with drier lowland areas burning more frequently than wetter highland areas.
The biodiversity of tundra is low: 1,700 species of vascular plants and only 48 species of land mammals can be found, although millions of birds migrate there each year for the marshes. There are also a few fish species. There are few species with large populations. Notable plants in the Arctic tundra include blueberry (Vaccinium uliginosum), crowberry (Empetrum nigrum), reindeer lichen (Cladonia rangiferina), lingonberry (Vaccinium vitis-idaea), and Labrador tea (Rhododendron groenlandicum). Notable animals include reindeer (caribou), musk ox, Arctic hare, Arctic fox, snowy owl, ptarmigan, northern red-backed voles, lemmings, the mosquito, and even polar bears near the ocean. Tundra is largely devoid of poikilotherms such as frogs or lizards.
Due to the harsh climate of Arctic tundra, regions of this kind have seen little human activity, even though they are sometimes rich in natural resources such as petroleum, natural gas, and uranium. In recent times this has begun to change in Alaska, Russia, and some other parts of the world: for example, the Yamalo-Nenets Autonomous Okrug produces 90% of Russia's natural gas.
Relationship to climate change
A severe threat to tundra is global warming, which causes permafrost to thaw. The thawing of the permafrost in a given area on human time scales (decades or centuries) could radically change which species can survive there. It also represents a significant risk to infrastructure built on top of permafrost, such as roads and pipelines.
In locations where dead vegetation and peat have accumulated, there is a risk of wildfire, such as the of tundra which burned in 2007 on the north slope of the Brooks Range in Alaska. Such events may both result from and contribute to global warming.
Carbon emissions from permafrost thaw contribute to the same warming which facilitates the thaw, making it a positive climate change feedback. The warming also intensifies Arctic water cycle, and the increased amounts of warmer rain are another factor which increases permafrost thaw depths.
The IPCC Sixth Assessment Report estimates that carbon dioxide and methane released from permafrost could amount to the equivalent of 14–175 billion tonnes of carbon dioxide per of warming. For comparison, by 2019, annual anthropogenic emission of carbon dioxide alone stood around 40 billion tonnes. A 2018 perspectives article discussing tipping points in the climate system activated around of global warming suggested that at this threshold, permafrost thaw would add a further to global temperatures by 2100, with a range of
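Putting the two figures quoted above side by side (simple arithmetic added for illustration):

\frac{14\ \mathrm{Gt\ CO_2}}{40\ \mathrm{Gt\ CO_2\,yr^{-1}}} \approx 0.35\ \mathrm{yr}, \qquad \frac{175\ \mathrm{Gt\ CO_2}}{40\ \mathrm{Gt\ CO_2\,yr^{-1}}} \approx 4.4\ \mathrm{yr},

so the estimated permafrost release per increment of warming corresponds to roughly a third of a year to about four and a half years of anthropogenic carbon dioxide emissions at the 2019 rate.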
Antarctic
Antarctic tundra occurs on Antarctica and on several Antarctic and subantarctic islands, including South Georgia and the South Sandwich Islands and the Kerguelen Islands. Most of Antarctica is too cold and dry to support vegetation, and most of the continent is covered by ice fields or cold deserts. However, some portions of the continent, particularly the Antarctic Peninsula, have areas of rocky soil that support plant life. The flora presently consists of around 300–400 species of lichens, 100 mosses, 25 liverworts, and around 700 terrestrial and aquatic algae species, which live on the areas of exposed rock and soil around the shore of the continent. Antarctica's two flowering plant species, the Antarctic hair grass (Deschampsia antarctica) and Antarctic pearlwort (Colobanthus quitensis), are found on the northern and western parts of the Antarctic Peninsula.
In contrast with the Arctic tundra, the Antarctic tundra lacks a large mammal fauna, mostly due to its physical isolation from the other continents. Sea mammals and sea birds, including seals and penguins, inhabit areas near the shore, and some small mammals, like rabbits and cats, have been introduced by humans to some of the subantarctic islands. The Antipodes Subantarctic Islands tundra ecoregion includes the Bounty Islands, Auckland Islands, Antipodes Islands, the Campbell Island group, and Macquarie Island. Species endemic to this ecoregion include Corybas dienemus and Corybas sulcatus, the only subantarctic orchids; the royal penguin; and the Antipodean albatross.
There is some ambiguity on whether Magellanic moorland, on the west coast of Patagonia, should be considered tundra or not. Phytogeographer Edmundo Pisano called it tundra, since he considered the low temperatures key to restricting plant growth. More recent approaches have since recognized it as a temperate grassland, restricting southern tundra to coastal Antarctica and its islands.
The flora and fauna of Antarctica and the Antarctic Islands (south of 60° south latitude) are protected by the Antarctic Treaty.
Alpine
Alpine tundra does not contain trees because the climate and soils at high altitude block tree growth. The cold climate of alpine tundra results from air temperatures that decrease with elevation, and is similar to a polar climate. Alpine tundra is generally better drained than Arctic tundra soils. Alpine tundra transitions to subalpine forests below the tree line; stunted forests occurring at the forest–tundra ecotone (the treeline) are known as Krummholz. Alpine tundra can be affected by woody plant encroachment.
Alpine tundra occurs in mountains worldwide. The flora of the alpine tundra is characterized by plants that grow close to the ground, including perennial grasses, sedges, forbs, cushion plants, mosses, and lichens. The flora is adapted to the harsh conditions of the alpine environment, which include low temperatures, dryness, ultraviolet radiation, and a short growing season.
Climatic classification
Tundra climates ordinarily fit the Köppen climate classification ET, signifying a local climate in which at least one month has an average temperature high enough to melt snow (0 °C or 32 °F), but no month with an average temperature in excess of 10 °C (50 °F). The cold limit generally meets the EF climates of permanent ice and snow; the warm-summer limit generally corresponds with the poleward or altitudinal limit of trees, where they grade into the subarctic climates: Dfd, Dwd and Dsd (extreme winters, as in parts of Siberia) and Dfc (cold winters with months of freezing), typical of Alaska, Canada, mountain areas of Scandinavia, European Russia, and Western Siberia.
Despite the potential diversity of climates in the ET category involving precipitation, extreme temperatures, and relative wet and dry seasons, this category is rarely subdivided, although, for example, Wainwright, Alaska can be classified ETw and Provideniya, Russia ETs, with most of the rest of the tundra fitting into the ETf subcategory. Rainfall and snowfall are generally slight due to the low vapor pressure of water in the chilly atmosphere, but as a rule potential evapotranspiration is extremely low, allowing soggy terrain of swamps and bogs even in places that get precipitation typical of deserts of lower and middle latitudes. The amount of native tundra biomass depends more on the local temperature than the amount of precipitation.
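A minimal sketch of the ET/EF rule described above, using a hypothetical function name and illustrative monthly data (not measurements from any particular station):

```python
def koppen_polar_class(monthly_mean_temps_c):
    """Classify a site as 'ET' (tundra), 'EF' (ice cap), or None (not polar).

    Follows the Koeppen rule quoted above: every month must average below
    10 deg C, and ET additionally requires at least one month averaging
    above freezing (0 deg C).
    """
    warmest = max(monthly_mean_temps_c)
    if warmest >= 10:
        return None            # too warm for a polar (E) climate
    return "ET" if warmest > 0 else "EF"

# Hypothetical high-Arctic coastal site: cold winters, a short thawed summer.
print(koppen_polar_class([-25, -26, -24, -15, -5, 2, 6, 4, -1, -10, -18, -23]))  # -> 'ET'
```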
Places featuring a tundra climate
Alpine tundra
Gavia Pass, Italy
Mount Fuji, Japan
Cerro de Pasco, Peru
Apartaderos, Venezuela
Puno, Peru
Kasprowy Wierch, Poland
High Tatras, Slovakia
Murghob, Tajikistan
Mount Wellington, Australia
Cairn Gorm, United Kingdom
Putre, Chile
Coranzuli, Argentina
Yu Shan, Taiwan
Juf, Switzerland
Finse, Norway
Sêrxü, China
Polar tundra
Longyearbyen, Svalbard, Norway
Yamal Peninsula, Russia
Iqaluit, Canada
Utqiagvik, United States
Hooper Bay, United States
Kerguelen Islands, French Southern Lands (France)
Nuuk, Greenland (Denmark)
Grytviken, South Georgia (United Kingdom)
Tiksi, Russia
Mykines, Faroe Islands (Denmark)
Hveravellir, Iceland
Tolhuin, Argentina
Campbell Island, New Zealand
TNT (https://en.wikipedia.org/wiki/TNT)
Trinitrotoluene, more commonly known as TNT (and more specifically 2,4,6-trinitrotoluene, and by its preferred IUPAC name 2-methyl-1,3,5-trinitrobenzene), is a chemical compound with the formula C6H2(NO2)3CH3. TNT is occasionally used as a reagent in chemical synthesis, but it is best known as an explosive material with convenient handling properties. The explosive yield of TNT is considered to be the standard comparative convention of bombs and asteroid impacts. In chemistry, TNT is used to generate charge transfer salts.
History
TNT was first synthesized in 1861 by German chemist Joseph Wilbrand and was originally used as a yellow dye. Its potential as an explosive was not recognized for three decades, mainly because it was so much less sensitive than other explosives known at the time. Its explosive properties were discovered in 1891 by another German chemist, Carl Häussermann. TNT can be safely poured when liquid into shell cases, and is so insensitive that in 1910 it was exempted from the UK's Explosives Act 1875 and was not considered an explosive for the purposes of manufacture and storage.
The German armed forces adopted it as a filling for artillery shells in 1902. TNT-filled armour-piercing shells would explode after they had penetrated the armour of British capital ships, whereas the British Lyddite-filled shells tended to explode upon striking armour, thus expending much of their energy outside the ship. The British started replacing Lyddite with TNT in 1907.
The United States Navy continued filling armour-piercing shells with explosive D after some other nations had switched to TNT, but began filling naval mines, bombs, depth charges, and torpedo warheads with burster charges of crude grade B TNT with the color of brown sugar and requiring an explosive booster charge of granular crystallized grade A TNT for detonation. High-explosive shells were filled with grade A TNT, which became preferred for other uses as industrial chemical capacity became available for removing xylene and similar hydrocarbons from the toluene feedstock and other nitrotoluene isomer byproducts from the nitrating reactions.
Preparation
In industry, TNT is produced in a three-step process. First, toluene is nitrated with a mixture of sulfuric and nitric acid to produce mononitrotoluene (MNT). The MNT is separated and then renitrated to dinitrotoluene (DNT). In the final step, the DNT is nitrated to trinitrotoluene (TNT) using an anhydrous mixture of nitric acid and oleum. Nitric acid is consumed by the manufacturing process, but the diluted sulfuric acid can be reconcentrated and reused.
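Summing the three nitration steps gives the overall stoichiometry below, written in the same plain notation used for other equations in this article; the sulfuric acid does not appear because it acts as a catalyst and dehydrating agent rather than a reactant:

C6H5CH3 + 3 HNO3 → C6H2(NO2)3CH3 + 3 H2O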
After nitration, TNT can either be purified by crystallization from an organic solvent or stabilized by a process called sulfitation, where the crude TNT is treated with aqueous sodium sulfite solution to remove less stable isomers of TNT and other undesired reaction products. The rinse water from sulfitation is known as red water and is a significant pollutant and waste product of TNT manufacture.
Control of nitrogen oxides in feed nitric acid is very important because free nitrogen dioxide can result in oxidation of the methyl group of toluene. This reaction is highly exothermic and carries with it the risk of a runaway reaction leading to an explosion.
In the laboratory, 2,4,6-trinitrotoluene is produced by a two-step process. A nitrating mixture of concentrated nitric and sulfuric acids is used to nitrate toluene to a mixture of mono- and di-nitrotoluene isomers, with careful cooling to maintain temperature. The nitrated toluenes are then separated, washed with dilute sodium bicarbonate to remove oxides of nitrogen, and then carefully nitrated with a mixture of fuming nitric acid and sulfuric acid.
Applications
TNT is one of the most commonly used explosives for military, industrial, and mining applications. TNT has been used in conjunction with hydraulic fracturing (popularly known as fracking), a process used to acquire oil and gas from shale formations. The technique involves displacing and detonating nitroglycerin in hydraulically induced fractures followed by wellbore shots using pelletized TNT.
TNT is valued partly because of its insensitivity to shock and friction, with reduced risk of accidental detonation compared to more sensitive explosives such as nitroglycerin. TNT melts at 80 °C (176 °F), far below the temperature at which it will spontaneously detonate, allowing it to be poured or safely combined with other explosives. TNT neither absorbs nor dissolves in water, which allows it to be used effectively in wet environments. To detonate, TNT must be triggered by a pressure wave from a starter explosive, called an explosive booster.
Although blocks of TNT are available in various sizes (e.g. 250 g, 500 g, 1,000 g), it is more commonly encountered in synergistic explosive blends comprising a variable percentage of TNT plus other ingredients. Examples of explosive blends containing TNT include:
Amatex (ammonium nitrate and RDX)
Amatol (ammonium nitrate)
Baratol (barium nitrate and wax)
Composition B (RDX and paraffin wax)
Composition H6
Cyclotol (RDX)
Ednatol
Hexanite (hexanitrodiphenylamine)
Minol
Octol
Pentolite
Picratol
Tetrytol
Torpex
Tritonal
Explosive character
Upon detonation, TNT undergoes a decomposition equivalent to the reaction
2 C7H5N3O6 → 3 N2 + 5 H2 + 12 CO + 2 C
plus some of the reactions
H2 + CO → H2O + C
and
2 CO → CO2 + C.
The reaction is exothermic but has a high activation energy in the gas phase (~62 kcal/mol). The condensed phases (solid or liquid) show markedly lower activation energies of roughly 35 kcal/mol due to unique bimolecular decomposition routes at elevated densities. Because of the production of carbon, TNT explosions have a sooty appearance. Because TNT has an excess of carbon, explosive mixtures with oxygen-rich compounds can yield more energy per kilogram than TNT alone. During the 20th century amatol, a mixture of TNT with ammonium nitrate, was a widely used military explosive.
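To give a rough sense of what the difference between these activation energies means kinetically, the sketch below compares the Arrhenius factors exp(−Ea/RT) for the two quoted values at one illustrative temperature, assuming (purely for illustration) equal pre-exponential factors:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
KCAL = 4184.0  # joules per kilocalorie

def arrhenius_factor(ea_kcal_per_mol: float, temp_k: float) -> float:
    """Relative Arrhenius rate factor exp(-Ea / RT)."""
    return math.exp(-ea_kcal_per_mol * KCAL / (R * temp_k))

T = 550.0  # K, an arbitrary illustrative temperature
ratio = arrhenius_factor(35, T) / arrhenius_factor(62, T)
print(f"{ratio:.1e}")  # ~5e10: the lower-barrier condensed-phase route dominates
```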
TNT can be detonated with a high-velocity initiator or by efficient concussion. For many years, TNT was the reference point for the Figure of Insensitivity, with a rating of exactly 100 on the "F of I" scale. The reference has since been changed to a more sensitive explosive called RDX, which has an F of I rating of 80.
Energy content
The energy density of TNT is used as a reference point for many other explosives, including nuclear weapons, as their energy content is measured in equivalent tonnes (metric tons, t) of TNT. The energy used by NIST to define the equivalent is 4.184 GJ/t.
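As a minimal illustration of this convention (using the NIST value just quoted; the function name is ours, not a standard API):

```python
TNT_EQUIVALENT_J_PER_TONNE = 4.184e9  # joules per metric tonne of TNT

def tnt_equivalent_tonnes(energy_joules: float) -> float:
    """Convert an explosion energy in joules to tonnes of TNT equivalent."""
    return energy_joules / TNT_EQUIVALENT_J_PER_TONNE

# Example: 4.184e12 J corresponds to one kiloton of TNT equivalent.
print(tnt_equivalent_tonnes(4.184e12))  # -> 1000.0
```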
For safety assessments, it has been stated that the detonation of TNT, depending on circumstances, can release 2.673–6.702 GJ/t.
The heat of combustion, however, is 14.5 GJ/t (14.5 MJ/kg or 4.027 kWh/kg); realizing it requires that the carbon in TNT fully react with atmospheric oxygen, which does not occur in the initial event.
For comparison, gunpowder contains 3 MJ/kg, dynamite contains 7.5 MJ/kg, and gasoline contains 47.2 MJ/kg (though gasoline requires an oxidant, so an optimized gasoline and O2 mixture contains 10.4 MJ/kg).
Detection
Various methods can be used to detect TNT, including optical and electrochemical sensors and explosive-sniffing dogs. In 2013, researchers from the Indian Institutes of Technology using noble-metal quantum clusters could detect TNT at the sub-zeptomolar (10−18 mol/m3) level.
Safety and toxicity
TNT is poisonous, and skin contact can cause skin irritation, causing the skin to turn a bright yellow-orange color. During the First World War, female munition workers who handled the chemical found that their skin turned bright yellow, which resulted in their acquiring the nickname "canary girls" or simply "canaries".
People exposed to TNT over a prolonged period tend to experience anemia and abnormal liver functions. Blood and liver effects, spleen enlargement and other harmful effects on the immune system have also been found in animals that ingested or breathed trinitrotoluene. There is evidence that TNT adversely affects male fertility. TNT is listed as a possible human carcinogen, with carcinogenic effects demonstrated in animal experiments with rats, although effects upon humans so far amount to none (according to IRIS of March 15, 2000). Consumption of TNT produces red urine through the presence of breakdown products and not blood as sometimes believed.
Some military testing grounds are contaminated with wastewater from munitions programs, including contamination of surface and subsurface waters which may be colored pink because of the presence of TNT. Such contamination, called "pink water", may be difficult and expensive to remedy.
TNT is prone to exudation of dinitrotoluenes and other isomers of trinitrotoluene when projectiles containing TNT are stored at higher temperatures in warmer climates. Exudation of impurities leads to formation of pores and cracks (which in turn cause increased shock sensitivity). Migration of the exudated liquid into the fuze screw thread can form fire channels, increasing the risk of accidental detonation. Fuze malfunction can also result from the liquid migrating into the fuze mechanism. Calcium silicate is mixed with TNT to mitigate the tendency towards exudation.
Pink and red water
Pink water and red water are two distinct types of wastewater related to trinitrotoluene. Pink water is produced from equipment washing processes after munitions filling or demilitarization operations, and as such is generally saturated with the maximum amount of TNT that will dissolve in water (about 150 parts per million, ppm). However, it has an indefinite composition that depends on the exact process; in particular, it may also contain cyclotrimethylenetrinitramine (RDX) if the plant uses TNT/RDX mixtures, or HMX if TNT/HMX is used. Red water (also known as "Sellite water") is produced during the process used to purify the crude TNT. It has a complex composition containing more than a dozen aromatic compounds, but the principal components are inorganic salts (sodium sulfate, sodium sulfite, sodium nitrite and sodium nitrate) and sulfonated nitroaromatics.
Pink and red water are colorless at the time of generation; the color is produced by photolytic reactions under the influence of sunlight. Despite the names, red and pink water are not necessarily different shades; the color depends mainly on the duration of solar exposure. If exposed long enough, "pink" water may turn various shades of pink, red, rusty orange, or black.
Because of the toxicity of TNT, the discharge of pink water to the environment has been prohibited in the US and many other countries for decades, but ground contamination may exist in very old plants. However, RDX and tetryl contamination is usually considered more problematic, as TNT has very low soil mobility. Red water is significantly more toxic and as such it has always been considered hazardous waste. It has traditionally been disposed of by evaporation to dryness (as the toxic components are not volatile), followed by incineration. Much research has been conducted to develop better disposal processes.
Ecological impact
Because of its suitability in construction and demolition, TNT has become the most widely used explosive and thus its toxicity is the most characterized and reported. Residual TNT from manufacture, storage, and use can pollute water, soil, the atmosphere, and the biosphere.
The concentration of TNT in contaminated soil can reach 50 g/kg of soil, where the highest concentrations can be found on or near the surface. In September 2001, the United States Environmental Protection Agency (USEPA) declared TNT a pollutant whose removal is a priority. The USEPA maintains that TNT levels in soil should not exceed 17.2 milligrams per kilogram of soil and 0.01 milligrams per litre of water.
Aqueous solubility
Dissolution is a measure of the rate that solid TNT in contact with water is dissolved. The relatively low aqueous solubility of TNT causes solid particles to be continuously released to the environment over extended periods of time. Studies have shown that TNT dissolves more slowly in saline water than in freshwater. However, when salinity is altered, TNT dissolves at the same speed. Because TNT is moderately soluble in water, it can migrate through subsurface soil, and cause groundwater contamination.
Soil adsorption
Adsorption is a measure of the distribution between soluble and sediment-adsorbed contaminants following attainment of equilibrium. TNT and its transformation products are known to adsorb to surface soils and sediments, where they undergo reactive transformation or remain stored. The movement of organic contaminants through soils is a function of their ability to associate with the mobile phase (water) and a stationary phase (soil). Materials that associate strongly with soils move slowly through soil. The association constant for TNT with soil is 2.7 to 11 L/kg of soil, meaning that TNT is roughly one to ten times more likely to adhere to soil particulates than to remain in solution when introduced into the soil. Hydrogen bonding and ion exchange are two suggested mechanisms of adsorption between the nitro functional groups and soil colloids.
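To illustrate what an association constant in this range implies, the sketch below (hypothetical soil and water amounts, not measured data) computes the equilibrium fraction of TNT bound to soil from the definition Kd = Cs/Cw:

```python
def fraction_sorbed(kd_l_per_kg: float, soil_kg: float, water_l: float) -> float:
    """Equilibrium fraction of solute bound to soil, given Kd = Cs/Cw.

    mass_sorbed    = Cs * soil_kg = Kd * Cw * soil_kg
    mass_dissolved = Cw * water_l
    """
    return kd_l_per_kg * soil_kg / (kd_l_per_kg * soil_kg + water_l)

# 1 kg of soil in contact with 1 L of pore water, at the two ends of the
# reported Kd range for TNT (2.7 and 11 L/kg):
for kd in (2.7, 11.0):
    print(kd, round(fraction_sorbed(kd, soil_kg=1.0, water_l=1.0), 2))
# -> about 0.73 and 0.92 of the TNT associated with the soil phase
```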
The number of functional groups on TNT influences the ability to adsorb into soil. Adsorption coefficient values have been shown to increase with an increase in the number of amino groups. Thus, adsorption of the TNT decomposition product 2,4-diamino-6-nitrotoluene (2,4-DANT) was greater than that for 4-amino-2,6-dinitrotoluene (4-ADNT), which was greater than that for TNT. Lower adsorption coefficients for 2,6-DNT compared to 2,4-DNT can be attributed to the steric hindrance of the NO2 group in the ortho position.
Research has shown that in freshwater environments, with high abundances of Ca2+, the adsorption of TNT and its transformation products to soils and sediments may be lower than observed in a saline environment, dominated by K+ and Na+. Therefore, when considering the adsorption of TNT, the type of soil or sediment and the ionic composition and strength of the ground water are important factors.
The association constants for TNT and its degradation products with clays have been determined. Clay minerals have a significant effect on the adsorption of energetic compounds. Soil properties, such as organic carbon content and cation exchange capacity have significant impacts on the adsorption coefficients.
Additional studies have shown that the mobility of TNT degradation products is likely to be lower "than TNT in subsurface environments where specific adsorption to clay minerals dominates the sorption process." Thus, the mobility of TNT and its transformation products are dependent on the characteristics of the sorbent. The mobility of TNT in groundwater and soil has been extrapolated from "sorption and desorption isotherm models determined with humic acids, in aquifer sediments, and soils". From these models, it is predicted that TNT has a low retention and transports readily in the environment.
Compared to other explosives, TNT has a higher association constant with soil, meaning it adheres more with soil than with water. Conversely, other explosives, such as RDX and HMX with low association constants (ranging from 0.06 to 7.3 L/kg and 0 to 1.6 L/kg respectively) can move more rapidly in water.
Chemical breakdown
TNT is a reactive molecule and is particularly prone to reacting with reduced components of sediments and to photodegradation in sunlight. TNT is thermodynamically and kinetically capable of reacting with a wide range of components of many environmental systems, including wholly abiotic reactants such as hydrogen sulfide and Fe2+, microbial communities (both oxic and anoxic), and photochemical degradation.
Soils with high clay contents or small particle sizes and high total organic carbon content have been shown to promote TNT transformation. Possible TNT transformations include reduction of one, two, or three nitro-moieties to amines and coupling of amino transformation products to form dimers. Formation of the two monoamino transformation products, 2-ADNT and 4-ADNT, is energetically favored, and therefore is observed in contaminated soils and ground water. The diamino products are energetically less favorable, and even less likely are the triamino products.
The transformation of TNT is significantly enhanced under anaerobic conditions as well as under highly reducing conditions. TNT transformations in soils can occur both biologically and abiotically.
Photolysis is a major process that impacts the transformation of energetic compounds. The alteration of a molecule in photolysis occurs by direct absorption of light energy or by the transfer of energy from a photosensitized compound. Phototransformation of TNT "results in the formation of nitrobenzenes, benzaldehydes, azodicarboxylic acids, and nitrophenols, as a result of the oxidation of methyl groups, reduction of nitro groups, and dimer formation."
Evidence of the photolysis of TNT has been seen due to the color change to pink of TNT-containing wastewaters when exposed to sunlight. Photolysis is more rapid in river water than in distilled water. Ultimately, photolysis affects the fate of TNT primarily in the aquatic environment but could also affect the fate of TNT in soil when the soil surface is exposed to sunlight.
Biodegradation
The ligninolytic physiological phase and manganese peroxidase system of fungi can cause a very limited amount of mineralization of TNT in a liquid culture, though not in soil. An organism capable of the remediation of large amounts of TNT in soil has yet to be discovered. Both wild and transgenic plants can phytoremediate explosives from soil and water.
Toluene (https://en.wikipedia.org/wiki/Toluene)
Toluene, also known as toluol, is a substituted aromatic hydrocarbon with the chemical formula C6H5CH3, often abbreviated as PhCH3, where Ph stands for the phenyl group. It is a colorless, water-insoluble liquid with the odor associated with paint thinners. It is a mono-substituted benzene derivative, consisting of a methyl group (CH3) attached to a phenyl group by a single bond. As such, its systematic IUPAC name is methylbenzene. Toluene is predominantly used as an industrial feedstock and a solvent.
As the solvent in some types of paint thinner, permanent markers, contact cement and certain types of glue, toluene is sometimes used as a recreational inhalant and has the potential of causing severe neurological harm.
History
The compound was first isolated in 1837 through a distillation of pine oil by Pierre Joseph Pelletier and Filip Neriusz Walter, who named it rétinnaphte. In 1841, Henri Étienne Sainte-Claire Deville isolated a hydrocarbon from balsam of Tolu (an aromatic extract from the tropical Colombian tree Myroxylon balsamum), which Deville recognized as similar to Walter's rétinnaphte and to benzene; hence he called the new hydrocarbon benzoène. In 1843, Jöns Jacob Berzelius recommended the name toluin. In 1850, French chemist Auguste Cahours isolated from a distillate of wood a hydrocarbon which he recognized as similar to Deville's benzoène and which Cahours named toluène.
Chemical properties
The distance between carbon atoms in the toluene ring is 0.1399 nm. The C-CH3 bond is longer at 0.1524 nm, while the average C-H bond length is 0.111 nm.
Ring reactions
Toluene reacts as a normal aromatic hydrocarbon in electrophilic aromatic substitution. Because the methyl group has greater electron-releasing properties than a hydrogen atom in the same position, toluene is more reactive than benzene toward electrophiles. It undergoes sulfonation to give p-toluenesulfonic acid, and chlorination by Cl2 in the presence of FeCl3 to give ortho and para isomers of chlorotoluene.
Nitration of toluene gives mono-, di-, and trinitrotoluene, all of which are widely used. Dinitrotoluene is the precursor to toluene diisocyanate, a precursor to polyurethane foam. Trinitrotoluene (TNT) is an explosive.
Complete hydrogenation of toluene gives methylcyclohexane. The reaction requires a high pressure of hydrogen and a catalyst.
Side chain reactions
The C-H bonds of the methyl group in toluene are benzylic, therefore they are weaker than C-H bonds in simpler alkanes. Reflecting this weakness, the methyl group in toluene undergoes a variety of free radical reactions. For example, when heated with N-bromosuccinimide (NBS) in the presence of AIBN, toluene converts to benzyl bromide. The same conversion can be effected with elemental bromine in the presence of UV light or even sunlight.
Toluene may also be brominated by treating it with HBr and H2O2 in the presence of light.
C6H5CH3 + Br2 → C6H5CH2Br + HBr
Benzoic acid and benzaldehyde are produced commercially by partial oxidation of toluene with oxygen. Typical catalysts include cobalt or manganese naphthenates. Related but laboratory-scale oxidations involve the use of potassium permanganate to yield benzoic acid and chromyl chloride to yield benzaldehyde (Étard reaction).
The methyl group in toluene undergoes deprotonation only with very strong bases; its pKa is estimated from acidity trends to be approximately 43 in dimethyl sulfoxide (DMSO), and its ion-pair acidity is extrapolated to be 41.2 in cyclohexylamine (with cesium cyclohexylamide) using a Brønsted correlation.
Miscibility
Toluene is miscible (soluble in all proportions) with ethanol, benzene, diethyl ether, acetone, chloroform, glacial acetic acid and carbon disulfide, but immiscible with water.
Production
Toluene occurs naturally at low levels in crude oil and is a byproduct in the production of gasoline by a catalytic reformer or ethylene cracker. It is also a byproduct of the production of coke from coal. Final separation and purification is done by any of the distillation or solvent extraction processes used for BTX aromatics (benzene, toluene, and xylene isomers).
Other preparative routes
Toluene can be prepared by a variety of methods. For example, benzene reacts with methanol in presence of a solid acid to give toluene and water:
C6H6 + CH3OH → C6H5CH3 + H2O (on heating)
Uses
Toluene is one of the most abundantly produced chemicals. Its main uses are (1) as a precursor to benzene and xylenes, (2) as a solvent for thinners, paints, lacquers, adhesives, and (3) as an additive for gasoline.
Precursor to benzene and xylenes
Toluene is converted to benzene via hydrodealkylation:
C6H5CH3 + H2 → C6H6 + CH4
Its transalkylation gives a mixture of benzene and xylenes.
Solvent
Toluene is widely used in the paint, dye, rubber, chemical, glue, printing, and pharmaceutical industries as a solvent. Nail polish, paintbrush cleaners, and stain removers may contain toluene. Manufacturing of explosives (TNT) uses it as well. Toluene is also found in cigarette smoke and car exhaust. If not in contact with air, toluene can remain unchanged in soil or water for a long time.
Toluene is a common solvent, e.g. for paints, paint thinners, silicone sealants, many chemical reactants, rubber, printing ink, adhesives (glues), lacquers, leather tanners, and disinfectants.
Fuel
Toluene is an octane booster in gasoline fuels for internal combustion engines as well as jet fuel and turbocharged engines in Formula One.
In Australia in 2003, toluene was found to have been illegally combined with petrol in fuel outlets for sale as standard vehicular fuel. Toluene incurs no fuel excise tax, while other fuels are taxed at more than 40%, providing a greater profit margin for fuel suppliers. The extent of toluene substitution is claimed to be 60%.
Niche applications
In the laboratory, toluene is used as a solvent for carbon nanomaterials, including nanotubes and fullerenes, and it can also be used as a fullerene indicator. The color of the toluene solution of C60 is bright purple. Toluene is used as a cement for fine polystyrene kits (by dissolving and then fusing surfaces) as it can be applied very precisely by brush and contains none of the bulk of an adhesive. Toluene can be used to break open red blood cells in order to extract hemoglobin in biochemistry experiments. Toluene has also been used as a coolant for its good heat transfer capabilities in sodium cold traps used in nuclear reactor system loops. Toluene had also been used in the process of removing the cocaine from coca leaves in the production of Coca-Cola syrup.
Toxicology and metabolism
The environmental and toxicological effects of toluene have been extensively studied.
Toluene is irritating to the eyes, skin, and respiratory tract. It is absorbed slowly through the skin. It can cause systemic toxicity by inhalation or ingestion. Inhalation is the most common route of exposure. Symptoms of toluene poisoning include central nervous system effects (headache, dizziness, drowsiness, ataxia, euphoria, tremors, hallucinations, seizures, and coma), chemical pneumonitis, respiratory depression, ventricular arrhythmias, nausea, vomiting, and electrolyte imbalances.
Inhalation of toluene in low to moderate levels can cause tiredness, confusion, weakness, drunken-type actions, memory loss, nausea, loss of appetite, hearing loss, and colour vision loss. Some of these symptoms usually disappear when exposure is stopped. Inhaling high levels of toluene in a short time may cause light-headedness, nausea, or sleepiness, unconsciousness, and even death. Toluene is, however, much less toxic than benzene, and as a consequence, largely replaced it as an aromatic solvent in chemical preparation. The US Environmental Protection Agency (EPA) states that the carcinogenic potential of toluene cannot be evaluated due to insufficient information. In 2013, worldwide sales of toluene amounted to about 24.5 billion US dollars.
Toluene occurs as an indoor air pollutant in a number of processes including electrosurgery, and can be removed from the air with an activated carbon filter.
Similarly to many other solvents such as 1,1,1-trichloroethane and some alkylbenzenes, toluene has been shown to act as a non-competitive NMDA receptor antagonist and GABAA receptor positive allosteric modulator. Additionally, toluene has been shown to display antidepressant-like effects in rodents in the forced swim test (FST) and the tail suspension test (TST), likely due to its NMDA antagonist properties.
Toluene is sometimes used as a recreational inhalant ("glue sniffing"), likely on account of its euphoric and dissociative effects.
Toluene inhibits excitatory ion channels such as the NMDA receptor, nicotinic acetylcholine receptor, and the serotonin 5-HT3 receptor. It also potentiates the function of inhibitory ion channels, such as the GABAA and glycine receptors. In addition, toluene disrupts voltage-gated calcium channels and ATP-gated ion channels.
Recreational use
Toluene is used as an intoxicative inhalant in a manner unintended by manufacturers. People inhale toluene-containing products (e.g., paint thinner, contact cement, correction pens, model glue, etc.) for its intoxicating effect. The possession and use of toluene and products containing it are regulated in many jurisdictions, for the supposed reason of preventing minors from obtaining these products for recreational drug purposes. As of 2007, 24 US states had laws penalizing use, possession with intent to use, and/or distribution of such inhalants. In 2005 the European Union banned the general sale of products consisting of greater than 0.5% toluene.
Bioremediation
Several types of fungi including Cladophialophora, Exophiala, Leptodontidium (syn. Leptodontium), Pseudeurotium zonatum, and Cladosporium sphaerospermum, and certain species of bacteria can degrade toluene using it as a source of carbon and energy.
Tide (https://en.wikipedia.org/wiki/Tide)
Tides are the rise and fall of sea levels caused by the combined effects of the gravitational forces exerted by the Moon (and to a much lesser extent, the Sun) and are also caused by the Earth and Moon orbiting one another.
Tide tables can be used for any given locale to find the predicted times and amplitude (or "tidal range").
The predictions are influenced by many factors including the alignment of the Sun and Moon, the phase and amplitude of the tide (pattern of tides in the deep ocean), the amphidromic systems of the oceans, and the shape of the coastline and near-shore bathymetry (see Timing). They are however only predictions, the actual time and height of the tide is affected by wind and atmospheric pressure. Many shorelines experience semi-diurnal tides—two nearly equal high and low tides each day. Other locations have a diurnal tide—one high and low tide each day. A "mixed tide"—two uneven magnitude tides a day—is a third regular category.
Tides vary on timescales ranging from hours to years due to a number of factors, which determine the lunitidal interval. To make accurate records, tide gauges at fixed stations measure water level over time. Gauges ignore variations caused by waves with periods shorter than minutes. These data are compared to the reference (or datum) level usually called mean sea level.
While tides are usually the largest source of short-term sea-level fluctuations, sea levels are also subject to change from thermal expansion, wind, and barometric pressure changes, resulting in storm surges, especially in shallow seas and near coasts.
Tidal phenomena are not limited to the oceans, but can occur in other systems whenever a gravitational field that varies in time and space is present. For example, the shape of the solid part of the Earth is affected slightly by Earth tide, though this is not as easily seen as the water tidal movements.
Characteristics
Four stages in the tidal cycle are named:
The water stops falling, reaching a local minimum called low tide.
Sea level rises over several hours, covering the intertidal zone; flood tide.
The water stops rising, reaching a local maximum called high tide.
Sea level falls over several hours, revealing the intertidal zone; ebb tide.
Oscillating currents produced by tides are known as tidal streams or tidal currents. The moment that the tidal current ceases is called slack water or slack tide. The tide then reverses direction and is said to be turning. Slack water usually occurs near high water and low water, but there are locations where the moments of slack tide differ significantly from those of high and low water.
Tides are commonly semi-diurnal (two high waters and two low waters each day), or diurnal (one tidal cycle per day). The two high waters on a given day are typically not the same height (the daily inequality); these are the higher high water and the lower high water in tide tables. Similarly, the two low waters each day are the higher low water and the lower low water. The daily inequality is not consistent and is generally small when the Moon is over the Equator.
Reference levels
The following reference tide levels can be defined, from the highest level to the lowest:
Highest astronomical tide (HAT) – The highest tide which can be predicted to occur. Note that meteorological conditions may add extra height to the HAT.
Mean high water springs (MHWS) – The average of the two high tides on the days of spring tides.
Mean high water neaps (MHWN) – The average of the two high tides on the days of neap tides.
Mean sea level (MSL) – This is the average sea level. The MSL is constant for any location over a long period.
Mean low water neaps (MLWN) – The average of the two low tides on the days of neap tides.
Mean low water springs (MLWS) – The average of the two low tides on the days of spring tides.
Lowest astronomical tide (LAT) – The lowest tide which can be predicted to occur.
Range variation: springs and neaps
The semi-diurnal range (the difference in height between high and low waters over about half a day) varies in a two-week cycle. Approximately twice a month, around new moon and full moon when the Sun, Moon, and Earth form a line (a configuration known as a syzygy), the tidal force due to the Sun reinforces that due to the Moon. The tide's range is then at its maximum; this is called the spring tide. It is not named after the season, but, like that word, derives from the meaning "jump, burst forth, rise", as in a natural spring.
Spring tides are sometimes referred to as syzygy tides.
When the Moon is at first quarter or third quarter, the Sun and Moon are separated by 90° when viewed from the Earth (in quadrature), and the solar tidal force partially cancels the Moon's tidal force. At these points in the lunar cycle, the tide's range is at its minimum; this is called the neap tide, or neaps. "Neap" is an Anglo-Saxon word meaning "without the power", as in forðganges nip (forth-going without-the-power).
Neap tides are sometimes referred to as quadrature tides.
Spring tides result in high waters that are higher than average, low waters that are lower than average, "slack water" time that is shorter than average, and stronger tidal currents than average. Neaps result in less extreme tidal conditions. There is about a seven-day interval between springs and neaps.
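A minimal numerical sketch of why springs and neaps recur on a roughly two-week cycle: superposing just the principal lunar (M2) and principal solar (S2) semidiurnal constituents produces a beat whose period is about 14.8 days, half of which is the roughly seven-day springs-to-neaps interval noted above. The periods are standard values; everything else here is illustrative.

```python
# Beat period of the M2/S2 superposition.
T_M2 = 12.4206  # hours, principal lunar semidiurnal period
T_S2 = 12.0000  # hours, principal solar semidiurnal period

beat_hours = 1.0 / (1.0 / T_S2 - 1.0 / T_M2)
print(beat_hours / 24.0)  # ≈ 14.8 days between successive spring tides
```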
Tidal constituents
Tidal constituents are the net result of multiple influences impacting tidal changes over certain periods of time. Primary constituents include the Earth's rotation, the position of the Moon and Sun relative to the Earth, the Moon's altitude (elevation) above the Earth's Equator, and bathymetry. Variations with periods of less than half a day are called harmonic constituents. Conversely, cycles of days, months, or years are referred to as long period constituents.
Tidal forces affect the entire earth, but the movement of solid Earth occurs by mere centimeters. In contrast, the atmosphere is much more fluid and compressible so its surface moves by kilometers, in the sense of the contour level of a particular low pressure in the outer atmosphere.
Principal lunar semi-diurnal constituent
In most locations, the largest constituent is the principal lunar semi-diurnal, also known as the M2 tidal constituent. Its period is about 12 hours and 25.2 minutes, exactly half a tidal lunar day, which is the average time separating one lunar zenith from the next, and thus is the time required for the Earth to rotate once relative to the Moon. Simple tide clocks track this constituent. The lunar day is longer than the Earth day because the Moon orbits in the same direction the Earth spins.
The Moon orbits the Earth in the same direction as the Earth rotates on its axis, so it takes slightly more than a day—about 24 hours and 50 minutes—for the Moon to return to the same location in the sky. During this time, it has passed overhead (culmination) once and underfoot once (at an hour angle of 00:00 and 12:00 respectively), so in many places the period of strongest tidal forcing is the above-mentioned, about 12 hours and 25 minutes. The moment of highest tide is not necessarily when the Moon is nearest to zenith or nadir, but the period of the forcing still determines the time between high tides.
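The 24-hour-50-minute figure follows directly from the Earth's rotation period and the Moon's orbital period; the sketch below uses rounded standard values to recover it, along with the M2 period quoted above:

```python
# Tidal lunar day: the Moon advances along its orbit while the Earth rotates,
# so the relative (synodic) period satisfies 1/T = 1/T_day - 1/T_month.
T_SIDEREAL_DAY_H = 23.9345           # hours
T_SIDEREAL_MONTH_H = 27.3217 * 24.0  # hours

lunar_day_h = 1.0 / (1.0 / T_SIDEREAL_DAY_H - 1.0 / T_SIDEREAL_MONTH_H)
print(lunar_day_h)        # ≈ 24.84 h, i.e. about 24 hours 50 minutes
print(lunar_day_h / 2.0)  # ≈ 12.42 h, the M2 period of about 12 h 25 min
```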
Because the gravitational field created by the Moon weakens with distance from the Moon, it exerts a slightly stronger than average force on the side of the Earth facing the Moon, and a slightly weaker force on the opposite side. The Moon thus tends to "stretch" the Earth slightly along the line connecting the two bodies. The solid Earth deforms a bit, but ocean water, being fluid, is free to move much more in response to the tidal force, particularly horizontally (see equilibrium tide).
As the Earth rotates, the magnitude and direction of the tidal force at any particular point on the Earth's surface change constantly; although the ocean never reaches equilibrium—there is never time for the fluid to "catch up" to the state it would eventually reach if the tidal force were constant—the changing tidal force nonetheless causes rhythmic changes in sea surface height.
When there are two high tides each day with different heights (and two low tides also of different heights), the pattern is called a mixed semi-diurnal tide.
Lunar distance
The changing distance separating the Moon and Earth also affects tide heights. When the Moon is closest, at perigee, the range increases, and when it is at apogee, the range shrinks. Six or eight times a year perigee coincides with either a new or full moon, causing perigean spring tides with the largest tidal range. The difference between the height of a tide at perigean spring tide and the spring tide when the Moon is at apogee depends on location but can be as much as a foot.
Other constituents
These include solar gravitational effects, the obliquity (tilt) of the Earth's Equator and rotational axis, the inclination of the plane of the lunar orbit and the elliptical shape of the Earth's orbit of the Sun.
A compound tide (or overtide) results from the shallow-water interaction of its two parent waves.
Phase and amplitude
Because the M2 tidal constituent dominates in most locations, the stage or phase of a tide, denoted by the time in hours after high water, is a useful concept. Tidal stage is also measured in degrees, with 360° per tidal cycle. Lines of constant tidal phase are called cotidal lines, which are analogous to contour lines of constant altitude on topographical maps, and when plotted form a cotidal map or cotidal chart. High water is reached simultaneously along the cotidal lines extending from the coast out into the ocean, and cotidal lines (and hence tidal phases) advance along the coast. Semi-diurnal and long phase constituents are measured from high water, diurnal from maximum flood tide. This and the discussion that follows is precisely true only for a single tidal constituent.
For an ocean in the shape of a circular basin enclosed by a coastline, the cotidal lines point radially inward and must eventually meet at a common point, the amphidromic point. The amphidromic point is at once cotidal with high and low waters, which is satisfied by zero tidal motion. (The rare exception occurs when the tide encircles an island, as it does around New Zealand, Iceland and Madagascar.) Tidal motion generally lessens moving away from continental coasts, so that crossing the cotidal lines are contours of constant amplitude (half the distance between high and low water) which decrease to zero at the amphidromic point. For a semi-diurnal tide the amphidromic point can be thought of roughly like the center of a clock face, with the hour hand pointing in the direction of the high water cotidal line, which is directly opposite the low water cotidal line. High water rotates about the amphidromic point once every 12 hours in the direction of rising cotidal lines, and away from ebbing cotidal lines. This rotation, caused by the Coriolis effect, is generally clockwise in the southern hemisphere and counterclockwise in the northern hemisphere. The difference of cotidal phase from the phase of a reference tide is the epoch. The reference tide is the hypothetical constituent "equilibrium tide" on a landless Earth measured at 0° longitude, the Greenwich meridian.
In the North Atlantic, because the cotidal lines circulate counterclockwise around the amphidromic point, the high tide passes New York Harbor approximately an hour ahead of Norfolk Harbor. South of Cape Hatteras the tidal forces are more complex, and cannot be predicted reliably based on the North Atlantic cotidal lines.
History
History of tidal theory
Investigation into tidal physics was important in the early development of celestial mechanics, with the existence of two daily tides being explained by the Moon's gravity. Later the daily tides were explained more precisely by the interaction of the Moon's and the Sun's gravity.
Seleucus of Seleucia theorized around 150 BC that tides were caused by the Moon. The influence of the Moon on bodies of water was also mentioned in Ptolemy's Tetrabiblos.
In De temporum ratione (The Reckoning of Time) of 725, Bede linked semidiurnal tides and the phenomenon of varying tidal heights to the Moon and its phases. Bede starts by noting that the tides rise and fall 4/5 of an hour later each day, just as the Moon rises and sets 4/5 of an hour later. He goes on to emphasise that in two lunar months (59 days) the Moon circles the Earth 57 times and there are 114 tides. Bede then observes that the height of tides varies over the month: increasing tides are called malinae and decreasing tides ledones, and the month is divided into four parts of seven or eight days with alternating malinae and ledones. In the same passage he also notes the effect of winds in holding back tides. Bede also records that the time of tides varies from place to place. To the north of Bede's location (Monkwearmouth) the tides are earlier, to the south later. He explains that the tide "deserts these shores in order to be able all the more to flood other [shores] when it arrives there", noting that "the Moon which signals the rise of tide here, signals its retreat in other regions far from this quarter of the heavens".
Later medieval understanding of the tides was primarily based on works of Muslim astronomers, which became available through Latin translation starting from the 12th century. Abu Ma'shar al-Balkhi (d. circa 886), in his , taught that ebb and flood tides were caused by the Moon. Abu Ma'shar discussed the effects of wind and Moon's phases relative to the Sun on the tides. In the 12th century, al-Bitruji (d. circa 1204) contributed the notion that the tides were caused by the general circulation of the heavens.
Simon Stevin, in his 1608 (The theory of ebb and flood), dismissed a large number of misconceptions that still existed about ebb and flood. Stevin pleaded for the idea that the attraction of the Moon was responsible for the tides and spoke in clear terms about ebb, flood, spring tide and neap tide, stressing that further research needed to be made.
In 1609 Johannes Kepler also correctly suggested that the gravitation of the Moon caused the tides, which he based upon ancient observations and correlations.
Galileo Galilei in his 1632 Dialogue Concerning the Two Chief World Systems, whose working title was Dialogue on the Tides, gave an explanation of the tides. The resulting theory, however, was incorrect as he attributed the tides to the sloshing of water caused by the Earth's movement around the Sun. He hoped to provide mechanical proof of the Earth's movement. The value of his tidal theory is disputed. Galileo rejected Kepler's explanation of the tides.
Isaac Newton (1642–1727) was the first person to explain tides as the product of the gravitational attraction of astronomical masses. His explanation of the tides (and many other phenomena) was published in the Principia (1687) and used his theory of universal gravitation to explain the lunar and solar attractions as the origin of the tide-generating forces.
Newton and others before Pierre-Simon Laplace worked the problem from the perspective of a static system (equilibrium theory), that provided an approximation that described the tides that would occur in a non-inertial ocean evenly covering the whole Earth. The tide-generating force (or its corresponding potential) is still relevant to tidal theory, but as an intermediate quantity (forcing function) rather than as a final result; theory must also consider the Earth's accumulated dynamic tidal response to the applied forces, which response is influenced by ocean depth, the Earth's rotation, and other factors.
In 1740, the Académie Royale des Sciences in Paris offered a prize for the best theoretical essay on tides. Daniel Bernoulli, Leonhard Euler, Colin Maclaurin and Antoine Cavalleri shared the prize.
Maclaurin used Newton's theory to show that a smooth sphere covered by a sufficiently deep ocean under the tidal force of a single deforming body is a prolate spheroid (essentially a three-dimensional oval) with major axis directed toward the deforming body. Maclaurin was the first to write about the Earth's rotational effects on motion. Euler realized that the tidal force's horizontal component (more than the vertical) drives the tide. In 1744 Jean le Rond d'Alembert studied tidal equations for the atmosphere which did not include rotation.
In 1770 James Cook's barque HMS Endeavour grounded on the Great Barrier Reef. Attempts were made to refloat her on the following tide, which failed, but the tide after that lifted her clear with ease. While she was being repaired in the mouth of the Endeavour River, Cook observed the tides over a period of seven weeks. At neap tides both tides in a day were similar, but at springs the morning and evening tides rose to markedly different heights.
Pierre-Simon Laplace formulated a system of partial differential equations relating the ocean's horizontal flow to its surface height, the first major dynamic theory for water tides. The Laplace tidal equations are still in use today. William Thomson, 1st Baron Kelvin, rewrote Laplace's equations in terms of vorticity which allowed for solutions describing tidally driven coastally trapped waves, known as Kelvin waves.
Others including Kelvin and Henri Poincaré further developed Laplace's theory. Based on these developments and the lunar theory of E W Brown describing the motions of the Moon, Arthur Thomas Doodson developed and published in 1921 the first modern development of the tide-generating potential in harmonic form: Doodson distinguished 388 tidal frequencies. Some of his methods remain in use.
History of tidal observation
From ancient times, tidal observation and discussion has increased in sophistication, first marking the daily recurrence, then tides' relationship to the Sun and moon. Pytheas travelled to the British Isles about 325 BC and seems to be the first to have related spring tides to the phase of the moon.
In the 2nd century BC, the Hellenistic astronomer Seleucus of Seleucia correctly described the phenomenon of tides in order to support his heliocentric theory. He correctly theorized that tides were caused by the moon, although he believed that the interaction was mediated by the pneuma. He noted that tides varied in time and strength in different parts of the world. According to Strabo (1.1.9), Seleucus was the first to link tides to the lunar attraction, and that the height of the tides depends on the moon's position relative to the Sun.
The Naturalis Historia of Pliny the Elder collates many tidal observations, e.g., the spring tides are a few days after (or before) new and full moon and are highest around the equinoxes, though Pliny noted many relationships now regarded as fanciful. In his Geography, Strabo described tides in the Persian Gulf having their greatest range when the moon was furthest from the plane of the Equator. All this despite the relatively small amplitude of Mediterranean basin tides. (The strong currents through the Euripus Strait and the Strait of Messina puzzled Aristotle.) Philostratus discussed tides in Book Five of The Life of Apollonius of Tyana. Philostratus mentions the moon, but attributes tides to "spirits". In Europe around 730 AD, the Venerable Bede described how the rising tide on one coast of the British Isles coincided with the fall on the other and described the time progression of high water along the Northumbrian coast.
The first tide table in China was recorded in 1056 AD primarily for visitors wishing to see the famous tidal bore in the Qiantang River. The first known British tide table is thought to be that of John Wallingford, who died Abbot of St. Albans in 1213, based on high water occurring 48 minutes later each day, and three hours earlier at the Thames mouth than upriver at London.
In 1614 Claude d'Abbeville published a work in which he showed that the Tupinambá people already had an understanding of the relation between the Moon and the tides before Europeans did.
William Thomson (Lord Kelvin) led the first systematic harmonic analysis of tidal records starting in 1867. The main result was the building of a tide-predicting machine using a system of pulleys to add together six harmonic time functions. It was "programmed" by resetting gears and chains to adjust phasing and amplitudes. Similar machines were used until the 1960s.
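In the same spirit as Kelvin's machine, a tide curve can be synthesized numerically by summing harmonic constituents, h(t) = Σ Ai·cos(2πt/Ti + φi). The sketch below uses standard constituent periods but invented amplitudes and phases, purely to illustrate the form:

```python
import math

def tide_height(t_hours, constituents):
    """Sum harmonic constituents; each is (amplitude_m, period_h, phase_rad)."""
    return sum(a * math.cos(2.0 * math.pi * t_hours / period + phase)
               for a, period, phase in constituents)

# Illustrative constituent set (amplitudes and phases are invented):
constituents = [
    (1.00, 12.4206, 0.0),  # M2, principal lunar semidiurnal
    (0.46, 12.0000, 0.3),  # S2, principal solar semidiurnal
    (0.20, 25.8193, 1.0),  # O1, lunar diurnal
]
for t in range(0, 25, 6):
    print(t, round(tide_height(t, constituents), 2))
```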
The first known sea-level record of an entire spring–neap cycle was made in 1831 on the Navy Dock in the Thames Estuary. Many large ports had automatic tide gauge stations by 1850.
John Lubbock was one of the first to map co-tidal lines, for Great Britain, Ireland and adjacent coasts, in 1840. William Whewell expanded this work ending with a nearly global chart in 1836. In order to make these maps consistent, he hypothesized the existence of a region with no tidal rise or fall where co-tidal lines meet in the mid-ocean. The existence of such an amphidromic point, as they are now known, was confirmed in 1840 by Captain William Hewett, RN, from careful soundings in the North Sea.
Much later, in the late 20th century, geologists noticed tidal rhythmites, which document the occurrence of ancient tides in the geological record, notably in the Carboniferous.
Physics
Forces
The tidal force produced by a massive object (Moon, hereafter) on a small particle located on or in an extensive body (Earth, hereafter) is the vector difference between the gravitational force exerted by the Moon on the particle, and the gravitational force that would be exerted on the particle if it were located at the Earth's center of mass.
Whereas the gravitational force exerted by a celestial body on Earth varies inversely as the square of its distance to the Earth, the maximal tidal force varies inversely as, approximately, the cube of this distance. If the tidal force caused by each body were instead equal to its full gravitational force (which is not the case due to the free fall of the whole Earth, not only the oceans, towards these bodies) a different pattern of tidal forces would be observed, e.g. with a much stronger influence from the Sun than from the Moon: the solar gravitational force on the Earth is on average 179 times stronger than the lunar, but because the Sun is on average 389 times farther from the Earth, its field gradient is weaker. The overall proportionality is

tidal acceleration ∝ M/d³ ∝ ρr³/d³ ∝ ρ(r/d)³,

where M is the mass of the heavenly body, d is its distance, ρ is its average density, and r is its radius. The ratio r/d is related to the angle subtended by the object in the sky. Since the Sun and the Moon have practically the same angular diameter in the sky, the tidal force of the Sun is less than that of the Moon because its average density is much less, and it is only 46% as large as the lunar; thus during a spring tide, the Moon contributes 69% while the Sun contributes 31%. More precisely, the lunar tidal acceleration (along the Moon–Earth axis, at the Earth's surface) is about 1.1 × 10−7 g, while the solar tidal acceleration (along the Sun–Earth axis, at the Earth's surface) is about 0.52 × 10−7 g, where g is the gravitational acceleration at the Earth's surface. The effects of the other planets vary as their distances from Earth vary. When Venus is closest to Earth, its effect is 0.000113 times the solar effect. At other times, Jupiter or Mars may have the most effect.
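A quick numerical check of the Moon-versus-Sun comparison above, using rounded standard astronomical values in the M/d³ scaling:

```python
# Tidal acceleration scales as M / d^3; compare the Sun with the Moon.
M_SUN, M_MOON = 1.989e30, 7.342e22  # kg
D_SUN, D_MOON = 1.496e11, 3.844e8   # m, mean distances from Earth

sun_over_moon = (M_SUN / D_SUN**3) / (M_MOON / D_MOON**3)
print(round(sun_over_moon, 2))  # ≈ 0.46: the solar tide is about 46% of the lunar
```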
The ocean's surface is approximated by a surface referred to as the geoid, which takes into consideration the gravitational force exerted by the earth as well as centrifugal force due to rotation. Now consider the effect of massive external bodies such as the Moon and Sun. These bodies have strong gravitational fields that diminish with distance and cause the ocean's surface to deviate from the geoid. They establish a new equilibrium ocean surface which bulges toward the moon on one side and away from the moon on the other side. The earth's rotation relative to this shape causes the daily tidal cycle. The ocean surface tends toward this equilibrium shape, which is constantly changing, and never quite attains it. When the ocean surface is not aligned with it, it's as though the surface is sloping, and water accelerates in the down-slope direction.
Equilibrium
The equilibrium tide is the idealized tide assuming a landless Earth.
It would produce a tidal bulge in the ocean, elongated towards the attracting body (Moon or Sun).
It is not caused by the vertical pull nearest or farthest from the body, which is very weak; rather, it is caused by the tangential or tractive tidal force, which is strongest at about 45 degrees from the body, resulting in a horizontal tidal current.
Laplace's tidal equations
Ocean depths are much smaller than their horizontal extent. Thus, the response to tidal forcing can be modelled using the Laplace tidal equations which incorporate the following features:
The vertical (or radial) velocity is negligible, and there is no vertical shear—this is a sheet flow.
The forcing is only horizontal (tangential).
The Coriolis effect appears as an inertial force (fictitious) acting laterally to the direction of flow and proportional to velocity.
The surface height's rate of change is proportional to the negative divergence of velocity multiplied by the depth. As the horizontal velocity stretches or compresses the ocean as a sheet, the volume thins or thickens, respectively.
The boundary conditions dictate no flow across the coastline and free slip at the bottom.
The Coriolis effect (inertial force) steers flows moving towards the Equator to the west and flows moving away from the Equator toward the east, allowing coastally trapped waves. Finally, a dissipation term can be added which is an analog to viscosity.
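For reference, one common textbook form of the Laplace tidal equations in spherical coordinates is sketched below; sign conventions and the treatment of the forcing potential vary between treatments, so this should be read as an illustration of the structure described above rather than a unique statement. Here ζ is the surface elevation, D the undisturbed ocean depth, u and v the eastward and northward velocity components, a the Earth's radius, Ω its rotation rate, φ latitude, λ longitude, and U the tide-generating potential:

\begin{aligned}
\frac{\partial \zeta}{\partial t} &+ \frac{1}{a\cos\varphi}\left[\frac{\partial (uD)}{\partial \lambda} + \frac{\partial (vD\cos\varphi)}{\partial \varphi}\right] = 0,\\
\frac{\partial u}{\partial t} &- 2\Omega \sin\varphi \, v = -\frac{1}{a\cos\varphi}\,\frac{\partial}{\partial \lambda}\left(g\zeta + U\right),\\
\frac{\partial v}{\partial t} &+ 2\Omega \sin\varphi \, u = -\frac{1}{a}\,\frac{\partial}{\partial \varphi}\left(g\zeta + U\right).
\end{aligned}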
Amplitude and cycle time
The theoretical amplitude of oceanic tides caused by the Moon is about 54 centimetres (21 in) at the highest point, which corresponds to the amplitude that would be reached if the ocean possessed a uniform depth, there were no landmasses, and the Earth were rotating in step with the Moon's orbit. The Sun similarly causes tides, of which the theoretical amplitude is about 25 centimetres (10 in) (46% of that of the Moon) with a cycle time of 12 hours. At spring tide the two effects add to each other to a theoretical level of 79 centimetres (31 in), while at neap tide the theoretical level is reduced to 29 centimetres (11 in). Since the orbits of the Earth about the Sun, and the Moon about the Earth, are elliptical, tidal amplitudes change somewhat as a result of the varying Earth–Sun and Earth–Moon distances. This causes a variation in the tidal force and theoretical amplitude of about ±18% for the Moon and ±5% for the Sun. If both the Sun and Moon were at their closest positions and aligned at new moon, the theoretical amplitude would reach 93 centimetres (37 in).
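The way these theoretical figures combine is simple arithmetic; the sketch below uses the amplitudes and percentage variations quoted in the paragraph above, and the small gap between the computed maximum and the quoted 93 cm is consistent with rounding of the underlying amplitudes:

# Combine the quoted equilibrium amplitudes for spring and neap tides.
lunar, solar = 0.54, 0.25            # metres (theoretical equilibrium amplitudes)
spring = lunar + solar               # Sun and Moon aligned        -> ~0.79 m
neap = lunar - solar                 # Sun and Moon at right angle -> ~0.29 m
# Closest approaches: roughly +18% (Moon) and +5% (Sun).
maximum = lunar * 1.18 + solar * 1.05
print(round(spring, 2), round(neap, 2), round(maximum, 2))   # 0.79 0.29 0.9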
Real amplitudes differ considerably, not only because of depth variations and continental obstacles, but also because wave propagation across the ocean has a natural period of the same order of magnitude as the rotation period: if there were no land masses, it would take about 30 hours for a long wavelength surface wave to propagate along the Equator halfway around the Earth (by comparison, the Earth's lithosphere has a natural period of about 57 minutes). Earth tides, which raise and lower the bottom of the ocean, and the tide's own gravitational self attraction are both significant and further complicate the ocean's response to tidal forces.
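The roughly 30-hour figure can be reproduced from the long-wave (shallow-water) speed √(gH); the 4 km mean depth used below is an assumed round number for illustration:

import math

g, depth = 9.81, 4000.0                    # m/s^2, assumed mean ocean depth in m
speed = math.sqrt(g * depth)               # long-wave speed, ~198 m/s
half_equator = math.pi * 6.371e6           # half of Earth's circumference, m
hours = half_equator / speed / 3600
print(f"{hours:.0f} hours")                # ~28 h, of the order of the quoted ~30 h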
Dissipation
Earth's tidal oscillations introduce dissipation at an average rate of about 3.75 terawatts. About 98% of this dissipation is by marine tidal movement. Dissipation arises as basin-scale tidal flows drive smaller-scale flows which experience turbulent dissipation. This tidal drag creates torque on the Moon that gradually transfers angular momentum to its orbit, and a gradual increase in Earth–Moon separation. The equal and opposite torque on the Earth correspondingly decreases its rotational velocity. Thus, over geologic time, the Moon recedes from the Earth, at about 3.8 cm (1.5 in) per year, lengthening the terrestrial day.
Day length has increased by about 2 hours in the last 600 million years. Assuming (as a crude approximation) that the deceleration rate has been constant, this would imply that 70 million years ago, day length was on the order of 1% shorter with about 4 more days per year.
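The 1% and four-days-per-year figures follow from simple proportional arithmetic under the stated constant-deceleration assumption; a minimal check:

# Crude constant-deceleration estimate of past day length.
slowdown_hours = 2.0          # total lengthening over the last 600 Myr
span_myr = 600.0
ago_myr = 70.0

shorter = slowdown_hours * ago_myr / span_myr      # ~0.23 h, about 14 minutes
day_then = 24.0 - shorter
year_hours = 365.25 * 24.0                         # year length assumed unchanged
extra_days = year_hours / day_then - 365.25
print(f"{shorter / 24:.1%} shorter, about {extra_days:.1f} more days per year")
# -> ~1.0% shorter, ~3.6 (about 4) more days per year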
Bathymetry
The shape of the shoreline and the ocean floor changes the way that tides propagate, so there is no simple, general rule that predicts the time of high water from the Moon's position in the sky. Coastal characteristics such as underwater bathymetry and coastline shape mean that individual location characteristics affect tide forecasting; actual high water time and height may differ from model predictions due to the coastal morphology's effects on tidal flow. However, for a given location the relationship between lunar altitude and the time of high or low tide (the lunitidal interval) is relatively constant and predictable, as is the time of high or low tide relative to other points on the same coast. For example, the high tide at Norfolk, Virginia, U.S., predictably occurs approximately two and a half hours before the Moon passes directly overhead.
Land masses and ocean basins act as barriers against water moving freely around the globe, and their varied shapes and sizes affect the amplitude of the tidal response at different frequencies. As a result, tidal patterns vary. For example, in the U.S., the East coast has predominantly semi-diurnal tides, as do Europe's Atlantic coasts, while the West coast predominantly has mixed tides. Human changes to the landscape can also significantly alter local tides.
Observation and prediction
Timing
The tidal forces due to the Moon and Sun generate very long waves which travel all around the ocean following the paths shown in co-tidal charts. The time when the crest of the wave reaches a port then gives the time of high water at the port. The time taken for the wave to travel around the ocean also means that there is a delay between the phases of the Moon and their effect on the tide. Springs and neaps in the North Sea, for example, are two days behind the new/full moon and first/third quarter moon. This is called the tide's age.
The ocean bathymetry greatly influences the tide's exact time and height at a particular coastal point. There are some extreme cases; the Bay of Fundy, on the east coast of Canada, is often stated to have the world's highest tides because of its shape, bathymetry, and its distance from the continental shelf edge. Measurements made in November 1998 at Burntcoat Head in the Bay of Fundy recorded a maximum range of and a highest predicted extreme of . Similar measurements made in March 2002 at Leaf Basin, Ungava Bay in northern Quebec gave similar values (allowing for measurement errors), a maximum range of and a highest predicted extreme of . Ungava Bay and the Bay of Fundy lie similar distances from the continental shelf edge, but Ungava Bay is only free of pack ice for about four months every year while the Bay of Fundy rarely freezes.
Southampton in the United Kingdom has a double high water caused by the interaction between the M2 and M4 tidal constituents (Shallow water overtides of principal lunar). Portland has double low waters for the same reason. The M4 tide is found all along the south coast of the United Kingdom, but its effect is most noticeable between the Isle of Wight and Portland because the M2 tide is lowest in this region.
Because the oscillation modes of the Mediterranean Sea and the Baltic Sea do not coincide with any significant astronomical forcing period, the largest tides are close to their narrow connections with the Atlantic Ocean. Extremely small tides also occur for the same reason in the Gulf of Mexico and Sea of Japan. Elsewhere, as along the southern coast of Australia, low tides can be due to the presence of a nearby amphidrome.
Analysis
Isaac Newton's theory of gravitation first enabled an explanation of why there were generally two tides a day, not one, and offered hope for a detailed understanding of tidal forces and behavior. Although it may seem that tides could be predicted via a sufficiently detailed knowledge of instantaneous astronomical forcings, the actual tide at a given location is determined by astronomical forces accumulated by the body of water over many days. In addition, accurate results would require detailed knowledge of the shape of all the ocean basins—their bathymetry, and coastline shape.
Current procedure for analysing tides follows the method of harmonic analysis introduced in the 1860s by William Thomson. It is based on the principle that the astronomical theories of the motions of Sun and Moon determine a large number of component frequencies, and at each frequency there is a component of force tending to produce tidal motion, but that at each place of interest on the Earth, the tides respond at each frequency with an amplitude and phase peculiar to that locality. At each place of interest, the tide heights are therefore measured for a period of time sufficiently long (usually more than a year in the case of a new port not previously studied) to enable the response at each significant tide-generating frequency to be distinguished by analysis, and to extract the tidal constants for a sufficient number of the strongest known components of the astronomical tidal forces to enable practical tide prediction. The tide heights are expected to follow the tidal force, with a constant amplitude and phase delay for each component. Because astronomical frequencies and phases can be calculated with certainty, the tide height at other times can then be predicted once the response to the harmonic components of the astronomical tide-generating forces has been found.
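In practice the response at each known astronomical frequency is recovered by least-squares fitting of sinusoids to the observed record. The sketch below is a minimal illustration of that idea, not an operational tide-analysis package: it assumes NumPy is available, generates a synthetic hourly record from just two constituents (with the well-known M2 and S2 periods taken as given), and then recovers their amplitudes and phases:

import numpy as np

# Synthetic hourly "observations": two constituents plus noise.
t = np.arange(0, 24 * 365, 1.0)                      # one year of hourly samples
periods = {"M2": 12.4206012, "S2": 12.0}             # hours
true = {"M2": (1.20, 0.50), "S2": (0.40, 1.10)}      # (amplitude m, phase rad)
h = sum(A * np.cos(2 * np.pi * t / periods[k] + p) for k, (A, p) in true.items())
h += 0.05 * np.random.default_rng(0).normal(size=t.size)

# Least-squares fit: one cos(wt) and one sin(wt) column per constituent.
cols = []
for T in periods.values():
    w = 2 * np.pi / T
    cols += [np.cos(w * t), np.sin(w * t)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, h, rcond=None)

for i, name in enumerate(periods):
    a, b = coef[2 * i], coef[2 * i + 1]
    amp, phase = np.hypot(a, b), np.arctan2(-b, a)   # A cos(wt+p) = a cos(wt) + b sin(wt)
    print(f"{name}: amplitude {amp:.2f} m, phase {phase:.2f} rad")

A real analysis uses the same principle with dozens of constituents and a record long enough to separate their closely spaced frequencies.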
The main patterns in the tides are
the twice-daily variation
the difference between the first and second tide of a day
the spring–neap cycle
the annual variation
The Highest Astronomical Tide is the perigean spring tide when both the Sun and Moon are closest to the Earth.
When confronted by a periodically varying function, the standard approach is to employ Fourier series, a form of analysis that uses sinusoidal functions as a basis set, having frequencies that are zero, one, two, three, etc. times the frequency of a particular fundamental cycle. These multiples are called harmonics of the fundamental frequency, and the process is termed harmonic analysis. If the basis set of sinusoidal functions suits the behaviour being modelled, relatively few harmonic terms need to be added. Orbital paths are very nearly circular, so sinusoidal variations are suitable for tides.
For the analysis of tide heights, the Fourier series approach has in practice to be made more elaborate than the use of a single frequency and its harmonics. The tidal patterns are decomposed into many sinusoids having many fundamental frequencies, corresponding (as in the lunar theory) to many different combinations of the motions of the Earth, the Moon, and the angles that define the shape and location of their orbits.
For tides, then, harmonic analysis is not limited to harmonics of a single frequency. In other words, the harmonies are multiples of many fundamental frequencies, not just of the fundamental frequency of the simpler Fourier series approach. Their representation as a Fourier series having only one fundamental frequency and its (integer) multiples would require many terms, and would be severely limited in the time-range for which it would be valid.
The study of tide height by harmonic analysis was begun by Laplace, William Thomson (Lord Kelvin), and George Darwin. A.T. Doodson extended their work, introducing the Doodson Number notation to organise the hundreds of resulting terms. This approach has been the international standard ever since, and the complications arise as follows: the tide-raising force is notionally given by sums of several terms. Each term is of the form
$$A \cos(\omega t + p)$$
where
$A$ is the amplitude,
$\omega$ is the angular frequency, usually given in degrees per hour, corresponding to $t$ measured in hours,
$p$ is the phase offset with regard to the astronomical state at time t = 0.
There is one term for the Moon and a second term for the Sun. The phase of the first harmonic for the Moon term is called the lunitidal interval or high water interval.
The next refinement is to accommodate the harmonic terms due to the elliptical shape of the orbits. To do so, the value of the amplitude is taken to be not a constant, but varying with time, about the average amplitude $A$. To do so, replace $A$ in the above equation with $A(t)$, where $A(t)$ is another sinusoid, similar to the cycles and epicycles of Ptolemaic theory. This gives
$$A(t) = A\left(1 + A_a \cos(\omega_a t + p_a)\right),$$
which is to say an average value $A$ with a sinusoidal variation about it of magnitude $A_a$, with frequency $\omega_a$ and phase $p_a$. Substituting this for $A$ in the original equation gives a product of two cosine factors:
$$A\left[1 + A_a \cos(\omega_a t + p_a)\right]\cos(\omega t + p).$$
Given that for any $x$ and $y$
$$\cos x \cdot \cos y = \tfrac{1}{2}\cos(x + y) + \tfrac{1}{2}\cos(x - y),$$
it is clear that a compound term involving the product of two cosine terms each with their own frequency is the same as three simple cosine terms that are to be added at the original frequency and also at frequencies which are the sum and difference of the two frequencies of the product term. (Three, not two terms, since the whole expression is $A\left(1 + A_a \cos(\omega_a t + p_a)\right)\cos(\omega t + p)$.) Consider further that the tidal force on a location depends also on whether the Moon (or the Sun) is above or below the plane of the Equator, and that these attributes have their own periods also incommensurable with a day and a month, and it is clear that many combinations result. With a careful choice of the basic astronomical frequencies, the Doodson Number annotates the particular additions and differences to form the frequency of each simple cosine term.
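The product-to-sum step can be checked numerically; the amplitudes, frequencies and phases below are arbitrary illustrative values, and NumPy is assumed to be available:

import numpy as np

t = np.linspace(0.0, 500.0, 10001)             # hours
A, Aa = 1.0, 0.3                               # mean amplitude and modulation depth
w, wa = 2 * np.pi / 12.42, 2 * np.pi / 661.3   # carrier and modulation frequencies (rad/h)
p, pa = 0.4, 1.1                               # phases

modulated = A * (1 + Aa * np.cos(wa * t + pa)) * np.cos(w * t + p)
three_terms = (A * np.cos(w * t + p)
               + 0.5 * A * Aa * np.cos((w + wa) * t + (p + pa))
               + 0.5 * A * Aa * np.cos((w - wa) * t + (p - pa)))
print(np.allclose(modulated, three_terms))     # True: the modulated cosine equals three plain cosines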
Remember that astronomical tides do not include weather effects. Also, changes to local conditions (sandbank movement, dredging harbour mouths, etc.) away from those prevailing at the measurement time affect the tide's actual timing and magnitude. Organisations quoting a "highest astronomical tide" for some location may exaggerate the figure as a safety factor against analytical uncertainties, distance from the nearest measurement point, changes since the last observation time, ground subsidence, etc., to avert liability should an engineering work be overtopped. Special care is needed when assessing the size of a "weather surge" by subtracting the astronomical tide from the observed tide.
Careful Fourier data analysis over a nineteen-year period (the National Tidal Datum Epoch in the U.S.) uses frequencies called the tidal harmonic constituents. Nineteen years is preferred because the Earth, Moon and Sun's relative positions repeat almost exactly in the Metonic cycle of 19 years, which is long enough to include the 18.613 year lunar nodal tidal constituent. This analysis can be done using only the knowledge of the forcing period, but without detailed understanding of the mathematical derivation, which means that useful tidal tables have been constructed for centuries. The resulting amplitudes and phases can then be used to predict the expected tides. These are usually dominated by the constituents near 12 hours (the semi-diurnal constituents), but there are major constituents near 24 hours (diurnal) as well. Longer term constituents are 14 day or fortnightly, monthly, and semiannual. Semi-diurnal tides dominate most coastlines, but some areas such as the South China Sea and the Gulf of Mexico are primarily diurnal. In the semi-diurnal areas, the primary constituents M2 (lunar) and S2 (solar) periods differ slightly, so that the relative phases, and thus the amplitude of the combined tide, change fortnightly (14 day period).
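The fortnightly modulation follows directly from the small difference between the M2 and S2 frequencies; a minimal check using the periods quoted in this section:

# Beat (spring-neap) period from the M2 and S2 constituents.
T_M2, T_S2 = 12.4206012, 12.0            # hours
beat_hours = 1.0 / abs(1.0 / T_S2 - 1.0 / T_M2)
print(f"{beat_hours / 24:.1f} days")     # ~14.8 days, the spring-neap cycle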
In the M2 plot above, each cotidal line differs by one hour from its neighbors, and the thicker lines show tides in phase with equilibrium at Greenwich. The lines rotate around the amphidromic points counterclockwise in the northern hemisphere so that from Baja California Peninsula to Alaska and from France to Ireland the M2 tide propagates northward. In the southern hemisphere this direction is clockwise. On the other hand, M2 tide propagates counterclockwise around New Zealand, but this is because the islands act as a dam and permit the tides to have different heights on the islands' opposite sides. (The tides do propagate northward on the east side and southward on the west coast, as predicted by theory.)
The exception is at Cook Strait where the tidal currents periodically link high to low water. This is because cotidal lines 180° around the amphidromes are in opposite phase, for example high water across from low water at each end of Cook Strait. Each tidal constituent has a different pattern of amplitudes, phases, and amphidromic points, so the M2 patterns cannot be used for other tide components.
Example calculation
Because the Moon is moving in its orbit around the Earth and in the same sense as the Earth's rotation, a point on the Earth must rotate slightly further to catch up so that the time between semi-diurnal tides is not twelve but 12.4206 hours—a bit over twenty-five minutes extra. The two peaks are not equal. The two high tides a day alternate in maximum heights: lower high (just under three feet), higher high (just over three feet), and again lower high. Likewise for the low tides.
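The 12.4206-hour interval follows from the Moon's orbital motion relative to the rotating Earth; the sketch below uses the mean solar day and a standard mean value for the synodic month as its only inputs:

# Mean interval between lunar meridian passages (the "lunar day"), then halve it.
solar_day = 24.0                   # hours
synodic_month = 29.530589 * 24.0   # hours, assumed mean value
lunar_day = 1.0 / (1.0 / solar_day - 1.0 / synodic_month)
print(f"lunar day {lunar_day:.4f} h, semi-diurnal interval {lunar_day / 2:.4f} h")
# -> ~24.841 h and ~12.421 h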
When the Earth, Moon, and Sun are in line (Sun–Earth–Moon, or Sun–Moon–Earth) the two main influences combine to produce spring tides; when the two forces are opposing each other as when the angle Moon–Earth–Sun is close to ninety degrees, neap tides result. As the Moon moves around its orbit it changes from north of the Equator to south of the Equator. The alternation in high tide heights becomes smaller, until they are the same (at the lunar equinox, the Moon is above the Equator), then redevelop but with the other polarity, waxing to a maximum difference and then waning again.
Current
The tides' influence on current or flow is much more difficult to analyze, and data is much more difficult to collect. A tidal height is a scalar quantity and varies smoothly over a wide region. A flow is a vector quantity, with magnitude and direction, both of which can vary substantially with depth and over short distances due to local bathymetry. Also, although a water channel's center is the most useful measuring site, mariners object when current-measuring equipment obstructs waterways. A flow proceeding up a curved channel may have similar magnitude, even though its direction varies continuously along the channel. Surprisingly, flood and ebb flows are often not in opposite directions. Flow direction is determined by the upstream channel's shape, not the downstream channel's shape. Likewise, eddies may form in only one flow direction.
Nevertheless, tidal current analysis is similar to tidal heights analysis: in the simple case, at a given location the flood flow is in mostly one direction, and the ebb flow in another direction. Flood velocities are given positive sign, and ebb velocities negative sign. Analysis proceeds as though these are tide heights.
In more complex situations, the main ebb and flood flows do not dominate. Instead, the flow direction and magnitude trace an ellipse over a tidal cycle (on a polar plot) instead of along the ebb and flood lines. In this case, analysis might proceed along pairs of directions, with the primary and secondary directions at right angles. An alternative is to treat the tidal flows as complex numbers, as each value has both a magnitude and a direction.
Tide flow information is most commonly seen on nautical charts, presented as a table of flow speeds and bearings at hourly intervals, with separate tables for spring and neap tides. The timing is relative to high water at some harbour where the tidal behaviour is similar in pattern, though it may be far away.
As with tide height predictions, tide flow predictions based only on astronomical factors do not incorporate weather conditions, which can completely change the outcome.
The tidal flow through Cook Strait between the two main islands of New Zealand is particularly interesting, as the tides on each side of the strait are almost exactly out of phase, so that one side's high water is simultaneous with the other's low water. Strong currents result, with almost zero tidal height change in the strait's center. Yet, although the tidal surge normally flows in one direction for six hours and in the reverse direction for six hours, a particular surge might last eight or ten hours with the reverse surge enfeebled. In especially boisterous weather conditions, the reverse surge might be entirely overcome so that the flow continues in the same direction through three or more surge periods.
A further complication for Cook Strait's flow pattern is that the tide at the south side (e.g. at Nelson) follows the common bi-weekly spring–neap tide cycle (as found along the west side of the country), but the north side's tidal pattern has only one cycle per month, as on the east side: Wellington, and Napier.
The graph of Cook Strait's tides shows separately the high water and low water height and time, through November 2007; these are not measured values but instead are calculated from tidal parameters derived from years-old measurements. Cook Strait's nautical chart offers tidal current information. For instance the January 1979 edition for (northwest of Cape Terawhiti) refers timings to Westport while the January 2004 issue refers to Wellington. Near Cape Terawhiti in the middle of Cook Strait the tidal height variation is almost nil while the tidal current reaches its maximum, especially near the notorious Karori Rip. Aside from weather effects, the actual currents through Cook Strait are influenced by the tidal height differences between the two ends of the strait and as can be seen, only one of the two spring tides at the north west end of the strait near Nelson has a counterpart spring tide at the south east end (Wellington), so the resulting behaviour follows neither reference harbour.
Power generation
Tidal energy can be extracted by two means: inserting a water turbine into a tidal current, or building ponds that release/admit water through a turbine. In the first case, the energy amount is entirely determined by the timing and tidal current magnitude. However, the best currents may be unavailable because the turbines would obstruct ships. In the second, the impoundment dams are expensive to construct, natural water cycles are completely disrupted, and ship navigation is disrupted. However, with multiple ponds, power can be generated at chosen times. So far, there are few installed systems for tidal power generation (most famously, La Rance at Saint Malo, France) which face many difficulties. Aside from environmental issues, simply withstanding corrosion and biological fouling poses engineering challenges.
Tidal power proponents point out that, unlike wind power systems, generation levels can be reliably predicted, save for weather effects. While some generation is possible for most of the tidal cycle, in practice turbines lose efficiency at lower operating rates. Since the power available from a flow is proportional to the cube of the flow speed, the times during which high power generation is possible are brief.
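The cube-law point can be made concrete with the standard kinetic-power expression P = ½ρAv³; the swept area and flow speeds below are purely illustrative assumptions, not data for any real installation:

# Kinetic power available to a turbine of swept area A in a flow of speed v.
rho = 1025.0                       # seawater density, kg/m^3
area = 100.0                       # assumed swept area, m^2
for v in (2.5, 1.25):              # peak flow vs. half of peak
    power_kw = 0.5 * rho * area * v ** 3 / 1000.0
    print(f"v = {v} m/s -> {power_kw:.0f} kW")
# Halving the flow speed cuts the available power by a factor of eight.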
Navigation
Tidal flows are important for navigation, and significant errors in position occur if they are not accommodated. Tidal heights are also important; for example many rivers and harbours have a shallow "bar" at the entrance which prevents boats with significant draft from entering at low tide.
Until the advent of automated navigation, competence in calculating tidal effects was important to naval officers. The certificate of examination for lieutenants in the Royal Navy once declared that the prospective officer was able to "shift his tides".
Tidal flow timings and velocities appear in tide charts or a tidal stream atlas. Tide charts come in sets. Each chart covers a single hour between one high water and another (they ignore the leftover 24 minutes) and show the average tidal flow for that hour. An arrow on the tidal chart indicates the direction and the average flow speed (usually in knots) for spring and neap tides. If a tide chart is not available, most nautical charts have "tidal diamonds" which relate specific points on the chart to a table giving tidal flow direction and speed.
The standard procedure to counteract tidal effects on navigation is to (1) calculate a "dead reckoning" position (or DR) from travel distance and direction, (2) mark the chart (with a vertical cross like a plus sign) and (3) draw a line from the DR in the tide's direction. The distance the tide moves the boat along this line is computed by the tidal speed, and this gives an "estimated position" or EP (traditionally marked with a dot in a triangle).
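The DR-plus-tide construction is just a vector addition; the course, speed and tidal set and drift below are made-up illustrative numbers, and the flat-earth treatment is only a sketch of the chartwork described above:

import math

def move(x, y, bearing_deg, distance_nm):
    """Advance a position by a distance along a true bearing (flat-earth sketch)."""
    b = math.radians(bearing_deg)
    return x + distance_nm * math.sin(b), y + distance_nm * math.cos(b)

# One hour of dead reckoning: course 090 at 6 knots, then a tide setting 180 at 2 knots.
dr = move(0.0, 0.0, 90.0, 6.0)          # dead-reckoning position
ep = move(*dr, 180.0, 2.0)              # estimated position after applying the tide
print("DR:", dr, "EP:", ep)             # roughly DR (6, 0) and EP (6, -2), in nautical miles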
Nautical charts display the water's "charted depth" at specific locations with "soundings" and the use of bathymetric contour lines to depict the submerged surface's shape. These depths are relative to a "chart datum", which is typically the water level at the lowest possible astronomical tide (although other datums are commonly used, especially historically, and tides may be lower or higher for meteorological reasons) and are therefore the minimum possible water depth during the tidal cycle. "Drying heights" may also be shown on the chart, which are the heights of the exposed seabed at the lowest astronomical tide.
Tide tables list each day's high and low water heights and times. To calculate the actual water depth, add the charted depth to the published tide height. Depth for other times can be derived from tidal curves published for major ports. The rule of twelfths can suffice if an accurate curve is not available. This approximation presumes that the increase in depth in the six hours between low and high water is: first hour — 1/12, second — 2/12, third — 3/12, fourth — 3/12, fifth — 2/12, sixth — 1/12.
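The rule of twelfths is easy to tabulate; the low-water height, tidal range and charted depth below are assumed example values chosen only to show the calculation:

# Rule of twelfths: approximate height of tide at each hour after low water.
twelfths = [1, 2, 3, 3, 2, 1]                            # rise per hour, in twelfths of the range
low_water, tidal_range, charted_depth = 0.8, 4.8, 2.0    # metres (illustrative)

height = low_water
for hour, step in enumerate(twelfths, start=1):
    height += tidal_range * step / 12.0
    print(f"{hour} h after LW: tide {height:.2f} m, depth {charted_depth + height:.2f} m")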
Biological aspects
Intertidal ecology
Intertidal ecology is the study of ecosystems between the low- and high-water lines along a shore. At low water, the intertidal zone is exposed (or emersed), whereas at high water, it is underwater (or immersed). Intertidal ecologists therefore study the interactions between intertidal organisms and their environment, as well as among the different species. The most important interactions may vary according to the type of intertidal community. The broadest classifications are based on substrates — rocky shore or soft bottom.
Intertidal organisms experience a highly variable and often hostile environment, and have adapted to cope with and even exploit these conditions. One easily visible feature is vertical zonation, in which the community divides into distinct horizontal bands of specific species at each elevation above low water. A species' ability to cope with desiccation determines its upper limit, while competition with other species sets its lower limit.
Humans use intertidal regions for food and recreation. Overexploitation can damage intertidals directly. Other anthropogenic actions such as introducing invasive species and climate change have large negative effects. Marine Protected Areas are one option communities can apply to protect these areas and aid scientific research.
Biological rhythms
The approximately 12-hour and fortnightly tidal cycles have large effects on intertidal and marine organisms. Hence their biological rhythms tend to occur in rough multiples of these periods. Many other animals, such as the vertebrates, display similar circatidal rhythms. Examples include gestation and egg hatching. In humans, the menstrual cycle lasts roughly a lunar month, an even multiple of the tidal period. Such parallels at least hint at the common descent of all animals from a marine ancestor.
Other tides
When oscillating tidal currents in the stratified ocean flow over uneven bottom topography, they generate internal waves with tidal frequencies. Such waves are called internal tides.
Shallow areas in otherwise open water can experience rotary tidal currents, flowing in directions that continually change and thus the flow direction (not the flow) completes a full rotation in roughly one tidal cycle of about 12½ hours (for example, the Nantucket Shoals).
In addition to oceanic tides, large lakes can experience small tides and even planets can experience atmospheric tides and Earth tides. These are continuum mechanical phenomena. The first two take place in fluids. The third affects the Earth's thin solid crust surrounding its semi-liquid interior (with various modifications).
Lake tides
Large lakes such as Superior and Erie can experience tides of 1 to 4 cm (0.4 to 1.6 in), but these can be masked by meteorologically induced phenomena such as seiche. The tide in Lake Michigan amounts to a few centimetres at most. This is so small that other larger effects completely mask any tide, and as such these lakes are considered non-tidal.
Atmospheric tides
Atmospheric tides are negligible at ground level and aviation altitudes, masked by weather's much more important effects. Atmospheric tides are both gravitational and thermal in origin and are the dominant dynamics from about 80 to 120 kilometres (50 to 75 mi) altitude, above which the molecular density becomes too low to support fluid behavior.
Earth tides
Earth tides or terrestrial tides affect the entire Earth's mass, which acts similarly to a liquid gyroscope with a very thin crust. The Earth's crust shifts (in/out, east/west, north/south) in response to lunar and solar gravitation, ocean tides, and atmospheric loading. While negligible for most human activities, terrestrial tides' semi-diurnal amplitude can reach about 55 centimetres (22 in) at the Equator, roughly 15 centimetres (6 in) of which is due to the Sun, which is important in GPS calibration and VLBI measurements. Precise astronomical angular measurements require knowledge of the Earth's rotation rate and polar motion, both of which are influenced by Earth tides. The semi-diurnal M2 Earth tides are nearly in phase with the Moon with a lag of about two hours.
Galactic tides
Galactic tides are the tidal forces exerted by galaxies on stars within them and satellite galaxies orbiting them. The galactic tide's effects on the Solar System's Oort cloud are believed to cause 90 percent of long-period comets.
Misnomers
Tsunamis, the large waves that occur after earthquakes, are sometimes called tidal waves, but this name is given by their resemblance to the tide, rather than any causal link to the tide. Other phenomena unrelated to tides but using the word tide are rip tide, storm tide, hurricane tide, and black or red tides. Many of these usages are historic and refer to the earlier meaning of tide as "a portion of time, a season" and "a stream, current or flood".
| Physical sciences | Oceanography | null |
30719 | https://en.wikipedia.org/wiki/Tidal%20force | Tidal force | The tidal force or tide-generating force is a gravitational effect that stretches a body along the line towards and away from the center of mass of another body due to spatial variations in strength in gravitational field from the other body. It is responsible for the tides and related phenomena, including solid-earth tides, tidal locking, breaking apart of celestial bodies and formation of ring systems within the Roche limit, and in extreme cases, spaghettification of objects. It arises because the gravitational field exerted on one body by another is not constant across its parts: the nearer side is attracted more strongly than the farther side. The difference is positive in the near side and negative in the far side, which causes a body to get stretched. Thus, the tidal force is also known as the differential force, residual force, or secondary effect of the gravitational field.
In celestial mechanics, the expression tidal force can refer to a situation in which a body or material (for example, tidal water) is mainly under the gravitational influence of a second body (for example, the Earth), but is also perturbed by the gravitational effects of a third body (for example, the Moon). The perturbing force is sometimes in such cases called a tidal force (for example, the perturbing force on the Moon): it is the difference between the force exerted by the third body on the second and the force exerted by the third body on the first.
Tidal forces have also been shown to be fundamentally related to gravitational waves.
Explanation
When a body (body 1) is acted on by the gravity of another body (body 2), the field can vary significantly on body 1 between the side of the body facing body 2 and the side facing away from body 2. Figure 2 shows the differential force of gravity on a spherical body (body 1) exerted by another body (body 2).
These tidal forces cause strains on both bodies and may distort them or even, in extreme cases, break one or the other apart. The Roche limit is the distance from a planet at which tidal effects would cause an object to disintegrate because the differential force of gravity from the planet overcomes the attraction of the parts of the object for one another. These strains would not occur if the gravitational field were uniform, because a uniform field only causes the entire body to accelerate together in the same direction and at the same rate.
Size and distance
The relationship of an astronomical body's size, to its distance from another body, strongly influences the magnitude of tidal force. The tidal force acting on an astronomical body, such as the Earth, is directly proportional to the diameter of the Earth and inversely proportional to the cube of the distance from another body producing a gravitational attraction, such as the Moon or the Sun. Tidal action on bath tubs, swimming pools, lakes, and other small bodies of water is negligible.
Figure 3 is a graph showing how gravitational force declines with distance. In this graph, the attractive force decreases in proportion to the square of the distance, while its slope is inversely proportional to the cube of the distance.
The tidal force corresponds to the difference in Y between two points on the graph, with one point on the near side of the body, and the other point on the far side. The tidal force becomes larger, when the two points are either farther apart, or when they are more to the left on the graph, meaning closer to the attracting body.
For example, even though the Sun has a stronger overall gravitational pull on Earth, the Moon creates a larger tidal bulge because the Moon is closer. This difference is due to the way gravity weakens with distance: the Moon's closer proximity creates a steeper decline in its gravitational pull as you move across Earth (compared to the Sun's very gradual decline from its vast distance). This steeper gradient in the Moon's pull results in a larger difference in force between the near and far sides of Earth, which is what creates the bigger tidal bulge.
Gravitational attraction is inversely proportional to the square of the distance from the source. The attraction will be stronger on the side of a body facing the source, and weaker on the side away from the source. The tidal force is proportional to the difference.
Sun, Earth, and Moon
The Earth is 81 times more massive than the Moon and has roughly 4 times the Moon's radius. As a result, for the same separation, the tidal force of the Earth at the surface of the Moon is about 20 times stronger than that of the Moon at the Earth's surface.
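That factor of roughly 20 can be checked directly, since the tidal acceleration produced by one body at the surface of the other scales as the attracting mass times the affected body's radius divided by the cube of the (shared) separation; the radii below are standard reference values used for illustration:

# Ratio of Earth's tidal effect at the Moon's surface to the Moon's effect at Earth's surface.
mass_ratio = 81.3                  # Earth mass / Moon mass
r_moon, r_earth = 1737.4, 6371.0   # km
ratio = mass_ratio * (r_moon / r_earth)
print(f"{ratio:.0f}")              # ~22, i.e. "about 20 times stronger"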
Effects
In the case of an infinitesimally small elastic sphere, the effect of a tidal force is to distort the shape of the body without any change in volume. The sphere becomes an ellipsoid with two bulges, pointing towards and away from the other body. Larger objects distort into an ovoid, and are slightly compressed, which is what happens to the Earth's oceans under the action of the Moon. All parts of the Earth are subject to the Moon's gravitational forces, causing the water in the oceans to redistribute, forming bulges on the sides near the Moon and far from the Moon.
When a body rotates while subject to tidal forces, internal friction results in the gradual dissipation of its rotational kinetic energy as heat. In the case of the Earth and Earth's Moon, the loss of rotational kinetic energy results in the day lengthening by about 2 milliseconds per century. If the body is close enough to its primary, this can result in a rotation which is tidally locked to the orbital motion, as in the case of the Earth's moon. Tidal heating produces dramatic volcanic effects on Jupiter's moon Io. Stresses caused by tidal forces also cause a regular monthly pattern of moonquakes on Earth's Moon.
Tidal forces contribute to ocean currents, which moderate global temperatures by transporting heat energy toward the poles. It has been suggested that variations in tidal forces correlate with cool periods in the global temperature record at 6- to 10-year intervals, and that harmonic beat variations in tidal forcing may contribute to millennial climate changes. No strong link to millennial climate changes has been found to date.
Tidal effects become particularly pronounced near small bodies of high mass, such as neutron stars or black holes, where they are responsible for the "spaghettification" of infalling matter. Tidal forces create the oceanic tide of Earth's oceans, where the attracting bodies are the Moon and, to a lesser extent, the Sun. Tidal forces are also responsible for tidal locking, tidal acceleration, and tidal heating. Tides may also induce seismicity.
By generating conducting fluids within the interior of the Earth, tidal forces also affect the Earth's magnetic field.
Formulation
For a given (externally generated) gravitational field, the tidal acceleration at a point with respect to a body is obtained by vector subtraction of the gravitational acceleration at the center of the body (due to the given externally generated field) from the gravitational acceleration (due to the same field) at the given point. Correspondingly, the term tidal force is used to describe the forces due to tidal acceleration. Note that for these purposes the only gravitational field considered is the external one; the gravitational field of the body (as shown in the graphic) is not relevant. (In other words, the comparison is with the conditions at the given point as they would be if there were no externally generated field acting unequally at the given point and at the center of the reference body. The externally generated field is usually that produced by a perturbing third body, often the Sun or the Moon in the frequent example-cases of points on or above the Earth's surface in a geocentric reference frame.)
Tidal acceleration does not require rotation or orbiting bodies; for example, the body may be freefalling in a straight line under the influence of a gravitational field while still being influenced by (changing) tidal acceleration.
By Newton's law of universal gravitation and laws of motion, a body of mass m at distance R from the center of a sphere of mass M feels a force
$$\vec F = -\hat r \, \frac{G M m}{R^2},$$
equivalent to an acceleration
$$\vec a = -\hat r \, \frac{G M}{R^2},$$
where $\hat r$ is a unit vector pointing from the body M to the body m (here, acceleration from m towards M has negative sign).
Consider now the acceleration due to the sphere of mass M experienced by a particle in the vicinity of the body of mass m. With R as the distance from the center of M to the center of m, let ∆r be the (relatively small) distance of the particle from the center of the body of mass m. For simplicity, distances are first considered only in the direction pointing towards or away from the sphere of mass M. If the body of mass m is itself a sphere of radius ∆r, then the new particle considered may be located on its surface, at a distance (R ± ∆r) from the centre of the sphere of mass M, and ∆r may be taken as positive where the particle's distance from M is greater than R. Leaving aside whatever gravitational acceleration may be experienced by the particle towards m on account of ms own mass, we have the acceleration on the particle due to gravitational force towards M as:
$$\vec a = -\hat r \, \frac{G M}{(R \pm \Delta r)^2}.$$
Pulling out the $R^2$ term from the denominator gives:
$$\vec a = -\hat r \, \frac{G M}{R^2} \, \frac{1}{\left(1 \pm \Delta r / R\right)^2}.$$
The Maclaurin series of $1/(1 \pm x)^2$ is $1 \mp 2x + 3x^2 \mp \cdots$, which gives a series expansion of:
$$\vec a = -\hat r \, \frac{G M}{R^2} \pm \hat r \, \frac{2 G M}{R^3}\,\Delta r \mp \cdots$$
The first term is the gravitational acceleration due to M at the center of the reference body m, i.e., at the point where $\Delta r$ is zero. This term does not affect the observed acceleration of particles on the surface of m because with respect to M, m (and everything on its surface) is in free fall. When the force on the far particle is subtracted from the force on the near particle, this first term cancels, as do all other even-order terms. The remaining (residual) terms represent the difference mentioned above and are tidal force (acceleration) terms. When ∆r is small compared to R, the terms after the first residual term are very small and can be neglected, giving the approximate tidal acceleration for the distances ∆r considered, along the axis joining the centers of m and M:
$$\vec a_{t,\text{axial}} \approx \pm \hat r \, \frac{2 G M}{R^3}\,\Delta r.$$
When calculated in this way for the case where ∆r is a distance along the axis joining the centers of m and M, $\vec a_{t,\text{axial}}$ is directed outwards from the center of m (where ∆r is zero).
Tidal accelerations can also be calculated away from the axis connecting the bodies m and M, requiring a vector calculation. In the plane perpendicular to that axis, the tidal acceleration is directed inwards (towards the center where ∆r is zero), and its magnitude is $\tfrac{1}{2}\left|\vec a_{t,\text{axial}}\right|$ in linear approximation, as in Figure 2.
The tidal accelerations at the surfaces of planets in the Solar System are generally very small. For example, the lunar tidal acceleration at the Earth's surface along the Moon–Earth axis is about 1.1×10⁻⁷ g, while the solar tidal acceleration at the Earth's surface along the Sun–Earth axis is about 0.52×10⁻⁷ g, where g is the gravitational acceleration at the Earth's surface. Hence the tide-raising force (acceleration) due to the Sun is about 45% of that due to the Moon. The solar tidal acceleration at the Earth's surface was first given by Newton in the Principia.
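Those numbers can be reproduced from the axial formula derived above, 2GM∆r/R³; the masses and mean distances below are standard reference values introduced here only for the check:

# Lunar and solar tidal accelerations at Earth's surface, as a fraction of g.
G, g = 6.674e-11, 9.81                    # SI units
dr = 6.371e6                              # Earth's radius, m
bodies = {"Moon": (7.342e22, 3.844e8),    # mass in kg, mean distance in m
          "Sun": (1.989e30, 1.496e11)}

for name, (M, R) in bodies.items():
    a_tidal = 2 * G * M * dr / R ** 3
    print(f"{name}: {a_tidal / g:.2e} g")
# -> Moon ~1.1e-07 g, Sun ~0.52e-07 g (about 45% of the lunar value)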
| Physical sciences | Classical mechanics | Physics |
30733 | https://en.wikipedia.org/wiki/Tram | Tram | A tram (also known as a streetcar or trolley in Canada and the United States) is an urban rail transit in which vehicles, whether individual railcars or multiple-unit trains, run on tramway tracks on urban public streets; some include segments on segregated right-of-way. The tramlines or tram networks operated as public transport are called tramways or simply trams/streetcars. Because of their close similarities, trams are commonly included in the wider term light rail, which also includes systems separated from other traffic.
Tram vehicles are usually lighter and shorter than main line and rapid transit trains. Most trams use electrical power, usually fed by a pantograph sliding on an overhead line; older systems may use a trolley pole or a bow collector. In some cases, a contact shoe on a third rail is used. If necessary, they may have dual power systems—electricity in city streets and diesel in more rural environments. Occasionally, trams also carry freight. Some trams, known as tram-trains, may have segments that run on mainline railway tracks, similar to interurban systems. The differences between these modes of rail transport are often indistinct, and systems may combine multiple features.
One of the advantages over earlier forms of transit was the low rolling resistance of metal wheels on steel rails, allowing the trams to haul a greater load for a given effort. Another factor which contributed to the rise of trams was the high total cost of ownership of horses. Electric trams largely replaced animal power in the late 19th and early 20th centuries. Improvements in other vehicles such as buses led to decline of trams in early to mid 20th century. However, trams have seen resurgence since the 1980s.
History
Creation
The history of passenger trams, streetcars and trolley systems, began in the early nineteenth century. It can be divided into several distinct periods defined by the principal means of power used. Precursors to the tramway included the wooden or stone wagonways that were used in central Europe to transport mine carts with unflanged wheels since the 1500s, and the paved limestone trackways designed by the Romans for heavy horse and ox-drawn transportation. By the 1700s, paved plateways with cast iron rails were introduced in England for transporting coal, stone or iron ore from the mines to the urban factories and docks.
Horse-drawn
The world's first passenger train or tram was the Swansea and Mumbles Railway, in Wales, UK. The British Parliament passed the Mumbles Railway Act in 1804, and horse-drawn service started in 1807. The service closed in 1827, but was restarted in 1860, again using horses. It was worked by steam from 1877, and then, from 1929, by very large (106-seat) electric tramcars, until closure in 1960. The Swansea and Mumbles Railway was something of a one-off however, and no street tramway appeared in Britain until 1860 when one was built in Birkenhead by the American George Francis Train.
Street railways developed in America before Europe, due to the poor paving of the streets in American cities which made them unsuitable for horsebuses, which were then common on the well-paved streets of European cities. Running the horsecars on rails allowed for a much smoother ride. There are records of a street railway running in Baltimore as early as 1828; however, the first authenticated streetcar in America was the New York and Harlem Railroad, developed by the Irish coach builder John Stephenson in New York City, which began service in 1832. The New York and Harlem Railroad's Fourth Avenue Line ran along the Bowery and Fourth Avenue in New York City. It was followed in 1835 by the New Orleans and Carrollton Railroad in New Orleans, Louisiana, which still operates as the St. Charles Streetcar Line. Other American cities did not follow until the 1850s, after which the "animal railway" became an increasingly common feature in the larger towns.
The first permanent tram line in continental Europe was opened in Paris in 1855 by Alphonse Loubat who had previously worked on American streetcar lines. The tram was developed in numerous cities of Europe (some of the most extensive systems were found in Berlin, Budapest, Birmingham, Saint Petersburg, Lisbon, London, Manchester, Paris, Kyiv).
The first tram in South America opened in 1858 in Santiago, Chile. The first trams in Australia opened in 1860 in Sydney. Africa's first tram service started in Alexandria on 8 January 1863. The first trams in Asia opened in 1869 in Batavia (Jakarta), Netherlands East Indies (Indonesia).
Limitations of horsecars included the fact that any given animal could only work so many hours on a given day, had to be housed, groomed, fed and cared for day in and day out, and produced prodigious amounts of manure, which the streetcar company was charged with storing and then disposing. Since a typical horse pulled a streetcar for about a dozen miles a day and worked for four or five hours, many systems needed ten or more horses in stable for each horsecar. In 1905 the British newspaper Newcastle Daily Chronicle reported that, "A large number of London's discarded horse tramcars have been sent to Lincolnshire where they are used as sleeping rooms for potato pickers".
Horses continued to be used for light shunting well into the 20th century, and many large metropolitan lines lasted into the early 20th century. New York City had a regular horsecar service on the Bleecker Street Line until its closure in 1917. Pittsburgh, Pennsylvania, had its Sarah Street line drawn by horses until 1923. The last regular mule-drawn cars in the US ran in Sulphur Rock, Arkansas, until 1926 and were commemorated by a U.S. postage stamp issued in 1983. The last mule tram service in Mexico City ended in 1932, and a mule tram in Celaya, Mexico, survived until 1954. The last horse-drawn tram to be withdrawn from public service in the UK took passengers from Fintona railway station to Fintona Junction one mile away on the main Omagh to Enniskillen railway in Northern Ireland. The tram made its last journey on 30 September 1957 when the Omagh to Enniskillen line closed. The "van" is preserved at the Ulster Transport Museum.
Horse-drawn trams still operate on the 1876-built Douglas Bay Horse Tramway on the Isle of Man, and at the 1894-built horse tram at Victor Harbor in South Australia. New horse-drawn systems have been established at the Hokkaidō Museum in Japan and also in Disneyland. A horse-tram route in Polish gmina Mrozy, first built in 1902, was reopened in 2012.
Steam
The first mechanical trams were powered by steam. Generally, there were two types of steam tram. The first and most common had a small steam locomotive (called a tram engine in the UK) at the head of a line of one or more carriages, similar to a small train. Systems with such steam trams included Christchurch, New Zealand; Sydney, Australia; other city systems in New South Wales; Munich, Germany (from August 1883 on), British India (from 1885) and the Dublin & Blessington Steam Tramway (from 1888) in Ireland. Steam tramways also were used on the suburban tramway lines around Milan and Padua; the last Gamba de Legn ("Peg-Leg") tramway ran on the Milan-Magenta-Castano Primo route in late 1957.
The other style of steam tram had the steam engine in the body of the tram, referred to as a tram engine (UK) or steam dummy (US). The most notable system to adopt such trams was in Paris. French-designed steam trams also operated in Rockhampton, in the Australian state of Queensland between 1909 and 1939. Stockholm, Sweden, had a steam tram line at the island of Södermalm between 1887 and 1901.
Tram engines usually had modifications to make them suitable for street running in residential areas. The wheels, and other moving parts of the machinery, were usually enclosed for safety reasons and to make the engines quieter. Measures were often taken to prevent the engines from emitting visible smoke or steam. Usually the engines used coke rather than coal as fuel to avoid emitting smoke; condensers or superheating were used to avoid emitting visible steam. A major drawback of this style of tram was the limited space for the engine, so that these trams were usually underpowered. Steam trams faded out around the 1890s to 1900s, being replaced by electric trams.
Cable-hauled
Another motive system for trams was the cable car, which was pulled along a fixed track by a moving steel cable, the cable usually running in a slot below the street level. The power to move the cable was normally provided at a "powerhouse" site a distance away from the actual vehicle. The London and Blackwall Railway, which opened for passengers in east London, England, in 1840 used such a system.
The first practical cable car line was tested in San Francisco, in 1873. Part of its success is attributed to the development of an effective and reliable cable grip mechanism, to grab and release the moving cable without damage. The second city to operate cable trams was Dunedin, from 1881 to 1957.
The most extensive cable system in the US was built in Chicago in stages between 1859 and 1892. New York City developed multiple cable car lines, that operated from 1883 to 1909. Los Angeles also had several cable car lines, including the Second Street Cable Railroad, which operated from 1885 to 1889, and the Temple Street Cable Railway, which operated from 1886 to 1898.
From 1885 to 1940, the city of Melbourne, Victoria, Australia operated one of the largest cable systems in the world, at its peak running 592 trams on about 75 kilometres (47 mi) of track. There were also two isolated cable lines in Sydney, New South Wales, Australia; the North Sydney line from 1886 to 1900, and the King Street line from 1892 to 1905.
In Dresden, Germany, in 1901 an elevated suspended cable car following the Eugen Langen one-railed floating tram system started operating. Cable cars operated on Highgate Hill in North London and Kennington to Brixton Hill in South London. They also worked around "Upper Douglas" in the Isle of Man from 1897 to 1929 (cable car 72/73 is the sole survivor of the fleet).
In Italy, in Trieste, the Trieste–Opicina tramway was opened in 1902, with the steepest section of the route being negotiated with the help of a funicular and its cables.
Cable cars suffered from high infrastructure costs, since an expensive system of cables, pulleys, stationary engines and lengthy underground vault structures beneath the rails had to be provided. They also required physical strength and skill to operate, and alert operators to avoid obstructions and other cable cars. The cable had to be disconnected ("dropped") at designated locations to allow the cars to coast by inertia, for example when crossing another cable line. The cable then had to be "picked up" to resume progress, the whole operation requiring precise timing to avoid damage to the cable and the grip mechanism. Breaks and frays in the cable, which occurred frequently, required the complete cessation of services over a cable route while the cable was repaired. Due to overall wear, the entire length of cable (typically several kilometres) had to be replaced on a regular schedule. After the development of reliable electrically powered trams, the costly high-maintenance cable car systems were rapidly replaced in most locations.
Cable cars remained especially effective in hilly cities, since their nondriven wheels did not lose traction as they climbed or descended a steep hill. The moving cable pulled the car up the hill at a steady pace, unlike a low-powered steam or horse-drawn car. Cable cars do have wheel brakes and track brakes, but the cable also helps restrain the car to going downhill at a constant speed. Performance in steep terrain partially explains the survival of cable cars in San Francisco.
The San Francisco cable cars, though significantly reduced in number, continue to provide regular transportation service, in addition to being a well-known tourist attraction. A single cable line also survives in Wellington (rebuilt in 1979 as a funicular but still called the "Wellington Cable Car"). Another system, with two separate cable lines and a shared power station in the middle, operates from the Welsh town of Llandudno up to the top of the Great Orme hill in North Wales, UK.
Fossil fuels
Hastings and some other tramways, for example Stockholms Spårvägar in Sweden and some lines in Karachi, used petrol trams. Galveston Island Trolley in Texas operated diesel trams due to the city's hurricane-prone location, which would have resulted in frequent damage to an electrical supply system. Although Portland, Victoria promotes its tourist tram as being a cable car it actually operates using a diesel motor. The tram, which runs on a circular route around the town of Portland, uses dummies and salons formerly used on the Melbourne cable tramway system and since restored.
In the late 19th and early 20th centuries a number of systems in various parts of the world employed trams powered by gas, naphtha gas or coal gas in particular. Gas trams are known to have operated between Alphington and Clifton Hill in the northern suburbs of Melbourne, Australia (1886–1888); in Berlin and Dresden, Germany; in Estonia (1921–1951); between Jelenia Góra, Cieplice, and Sobieszów in Poland (from 1897); and in the UK at Lytham St Annes, Trafford Park, Manchester (1897–1908) and Neath, Wales (1896–1920).
Comparatively little has been published about gas trams. However, research on the subject was carried out for an article in the October 2011 edition of "The Times", the historical journal of the Australian Association of Timetable Collectors, later renamed the Australian Timetable Association.
Electric
The world's first electric tram line operated in Sestroretsk near Saint Petersburg invented and tested by inventor Fyodor Pirotsky in 1875. Later, using a similar technology, Pirotsky put into service the first public electric tramway in St. Petersburg, which operated only during September 1880.
The second demonstration tramway was presented by Siemens & Halske at the 1879 Berlin Industrial Exposition.
The first public electric tramway used for permanent service was the Gross-Lichterfelde tramway in Lichterfelde near Berlin in Germany, which opened in 1881. It was built by Werner von Siemens who contacted Pirotsky. This was the world's first commercially successful electric tram. It drew current from the rails at first, with overhead wire being installed in 1883.
In Britain, Volk's Electric Railway was opened in 1883 in Brighton. This two kilometre line along the seafront, re-gauged in 1884, remains in service as the oldest operating electric tramway in the world. Also in 1883, the Mödling and Hinterbrühl Tram was opened near Vienna in Austria. It was the first tram in the world in regular service that was run with electricity served by an overhead line with pantograph current collectors. The Blackpool Tramway was opened in Blackpool, UK on 29 September 1885 using conduit collection along Blackpool Promenade. This system is still in operation in modernised form.
The earliest tram system in Canada was built by John Joseph Wright, brother of the famous mining entrepreneur Whitaker Wright, in Toronto in 1883, introducing electric trams in 1892. In the US, multiple experimental electric trams were exhibited at the 1884 World Cotton Centennial World's Fair in New Orleans, Louisiana, but they were not deemed good enough to replace the Lamm fireless engines then propelling the St. Charles Avenue Streetcar in that city. The first commercial installation of an electric streetcar in the United States was built in 1884 in Cleveland, Ohio, and operated for a period of one year by the East Cleveland Street Railway Company. The first city-wide electric streetcar system was implemented in 1886 in Montgomery, Alabama, by the Capital City Street Railway Company, and ran for 50 years.
In 1888, the Richmond Union Passenger Railway began to operate trams in Richmond, Virginia, that Frank J. Sprague had built. Sprague later developed multiple unit control, first demonstrated in Chicago in 1897, allowing multiple cars to be coupled together and operated by a single motorman. This gave rise to the modern subway train. Following the improvement of an overhead "trolley" system on streetcars for collecting electricity from overhead wires by Sprague, electric tram systems were rapidly adopted across the world.
Earlier electric trains proved difficult or unreliable and experienced limited success until the second half of the 1880s, when new types of current collectors were developed. Siemens' line, for example, provided power through a live rail and a return rail, like a model train, limiting the voltage that could be used, and delivering electric shocks to people and animals crossing the tracks. Siemens later designed his own version of overhead current collection, called the bow collector. One of the first systems to use it was in Thorold, Ontario, opened in 1887, and it was considered quite successful. While this line proved quite versatile as one of the earliest fully functional electric streetcar installations, it required horse-drawn support while climbing the Niagara Escarpment and for two months of the winter when hydroelectricity was not available. It continued in service in its original form into the 1950s.
Sidney Howe Short designed and produced the first electric motor that operated a streetcar without gears. The motor had its armature direct-connected to the streetcar's axle for the driving force. Short pioneered "use of a conduit system of concealed feed" thereby eliminating the necessity of overhead wire and a trolley pole for street cars and railways. While at the University of Denver he conducted experiments which established that multiple unit powered cars were a better way to operate trains and trolleys.
Electric tramways spread to many European cities in the 1890s, such as:
Prague, Bohemia (then in the Austro-Hungarian Empire), in 1891;
Kyiv, Ukraine, in 1892;
Dresden, Germany; Lyon, France; and Milan and Genoa, Italy, Douglas, Isle of Man in 1893;
Rome, Italy; Plauen, Germany; Bucharest, Romania; Lviv, Ukraine; and Belgrade, Serbia in 1894;
Bristol, United Kingdom; and Munich, Germany in 1895;
Bilbao, Spain, in 1896;
Copenhagen, Denmark; and Vienna, Austria, in 1897;
Florence and Turin, Italy, in 1898;
Helsinki, Finland; and Madrid and Barcelona, Spain, in 1899.
Sarajevo built a citywide system of electric trams in 1895. Budapest established its tramway system in 1887, and its ring line has grown to be the busiest tram line in Europe, with a tram running once per minute at rush hour. Bucharest and Belgrade ran a regular service from 1894. Ljubljana introduced its tram system in 1901 – it closed in 1958. Oslo had the first tramway in Scandinavia, starting operation on 2 March 1894.
The first electric tramway in Australia was a Sprague system demonstrated at the 1888 Melbourne Centennial Exhibition in Melbourne; afterwards, this was installed as a commercial venture operating between the outer Melbourne suburb of Box Hill and the then tourist-oriented country town Doncaster from 1889 to 1896. Electric systems were also built in Adelaide, Ballarat, Bendigo, Brisbane, Fremantle, Geelong, Hobart, Kalgoorlie, Launceston, Leonora, Newcastle, Perth, and Sydney.
By the 1970s, the only full tramway system remaining in Australia was the Melbourne tram system. However, there were also a few single lines remaining elsewhere: the Glenelg tram line, connecting Adelaide to the beachside suburb of Glenelg, and tourist trams in the Victorian Goldfields cities of Bendigo and Ballarat. In recent years the Melbourne system, generally recognised as the largest urban tram network in the world, has been considerably modernised and expanded. The Adelaide line has been extended to the Entertainment Centre, and work is progressing on further extensions. Sydney re-introduced trams (or light rail) on 31 August 1997. A completely new system, known as G:link, was introduced on the Gold Coast, Queensland, on 20 July 2014. The Newcastle Light Rail opened in February 2019, while the Canberra light rail opened on 20 April 2019. This is the first time that there have been trams in Canberra, even though Walter Burley Griffin's 1914–1920 plans for the capital, then still in the planning stage, did propose a Canberra tram system.
In Japan, the Kyoto Electric railroad was the first tram system, starting operation in 1895. By 1932, the network had grown to 82 railway companies operating in 65 cities. By the 1960s the tram had generally died out in Japan.
Two rare but significant alternatives were conduit current collection, which was widely used in London, Washington, D.C., and New York City, and the surface contact collection method, used in Wolverhampton (the Lorain system), Torquay and Hastings in the UK (the Dolter stud system), and in Bordeaux, France (the ground-level power supply system).
The convenience and economy of electricity resulted in its rapid adoption once the technical problems of production and transmission of electricity were solved. Electric trams largely replaced animal power and other forms of motive power including cable and steam, in the late 19th and early 20th centuries.
There was one particular hazard associated with trams powered from a trolley pole off an overhead line on the early electrified systems. Since the tram relies on contact with the rails for the current return path, a problem arises if the tram is derailed or (more usually) if it halts on a section of track that has been heavily sanded by a previous tram, and the tram loses electrical contact with the rails. In this event, the underframe of the tram, by virtue of a circuit path through ancillary loads (such as interior lighting), is live at the full supply voltage, typically 600 volts DC. In British terminology, such a tram was said to be 'grounded'—not to be confused with the US English use of the term, which means the exact opposite. Any person stepping off the tram and completing the earth return circuit with their body could receive a serious electric shock. If "grounded", the driver was required to jump off the tram (avoiding simultaneous contact with the tram and the ground) and pull down the trolley pole, before allowing passengers off the tram. Unless derailed, the tram could usually be recovered by running water down the running rails from a point higher than the tram, the water providing a conducting bridge between the tram and the rails. With improved technology, this ceased to be a problem.
In the 2000s, several companies introduced catenary-free designs: Alstom's Citadis line uses a third rail, Bombardier's PRIMOVE LRV is charged by contactless induction plates embedded in the trackway, and the CAF Urbos tram uses supercapacitor ("ultracap") technology.
Battery
As early as 1834, Thomas Davenport, a Vermont blacksmith, had invented a battery-powered electric motor which he later patented. The following year he used it to operate a small model electric car on a short circular track four feet in diameter.
Attempts to use batteries as a source of electricity were made from the 1880s and 1890s, with unsuccessful trials conducted in, among other places, Bendigo and Adelaide in Australia, and for about 14 years as The Hague accutram of HTM in the Netherlands. The first trams in Bendigo, Australia, in 1892, were battery-powered, but within as little as three months they were replaced with horse-drawn trams. In New York City some minor lines also used storage batteries. More recently, during the 1950s, a longer battery-operated tramway line ran from Milan to Bergamo. In China, a battery tram line has been running in Nanjing since 2014. In 2019, the West Midlands Metro in Birmingham, England, adopted battery-powered trams on sections through the city centre close to the Grade I listed Birmingham Town Hall.
Compressed air
Paris and Berne (Switzerland) operated trams that were powered by compressed air using the Mekarski system.
Trials on street tramways in Britain, including by the North Metropolitan Tramway Company between Kings Cross and Holloway, London (1883), achieved acceptable results but were found not to be economic because of the combined coal consumption of the stationary compressor and the onboard steam boiler.
Hybrid system
The Trieste–Opicina tramway in Trieste operates a hybrid funicular tramway system. Conventional electric trams are operated in street running and on reserved track for most of their route. However, on one steep segment of track, they are assisted by cable tractors, which push the trams uphill and act as brakes for the downhill run. For safety, the cable tractors are always deployed on the downhill side of the tram vehicle.
Similar systems were used elsewhere in the past, notably on the Queen Anne Counterbalance in Seattle and the Darling Street wharf line in Sydney.
Modern development
In the mid-20th century many tram systems were disbanded, replaced by buses, trolleybuses, automobiles or rapid transit. The General Motors streetcar conspiracy was a case study of the decline of trams in the United States. In the 21st century, trams have been re-introduced in cities where they had been closed down for decades (such as Tramlink in London), or kept in heritage use (such as Spårväg City in Stockholm). Most trams made since the 1990s (such as the Bombardier Flexity series and Alstom Citadis) are articulated low-floor trams with features such as regenerative braking.
In March 2015, China South Rail Corporation (CSR) demonstrated the world's first hydrogen fuel cell tramcar at an assembly facility in Qingdao. The chief engineer of the CSR subsidiary CSR Sifang Co Ltd., Liang Jianying, said that the company was studying how to reduce the running costs of the tram.
Design
Trams have been used for two main purposes: for carrying passengers and for carrying cargo. There are several types of tram:
Articulated
Cargo trams
Double-decker
Drop-centre (or drop-center)
Double-ended and single-ended
Low-floor
Rubber-tired
Tram-train
Operation
There are two main types of tramways, the classic tramway built in the early 20th century with the tram system operating in mixed traffic, and the later type which is most often associated with the tram system having its own right of way. Tram systems that have their own right of way are often called light rail but this does not always hold true. Though these two systems differ in their operation, their equipment is much the same.
Controls
Trams were traditionally operated with separate levers for applying power and brakes. More modern vehicles use a locomotive-style controller which incorporates a dead man's switch. The success of the PCC streetcar also saw trams adopt automobile-style foot controls, allowing hands-free operation, particularly when the driver was responsible for fare collection.
Power supply
Electric trams use various devices to collect power from overhead lines. The most common device is the pantograph, while some older systems use trolley poles or bow collectors. Ground-level power supply has become a more recent innovation. Another technology uses supercapacitors; when an insulator at a track switch cuts off power from the tram for a short distance along the line, the tram can use energy stored in a large capacitor to drive the tram past the gap in the power feed.
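As a hypothetical back-of-the-envelope illustration (all figures below are assumptions chosen for the example, not data for any real vehicle), the energy stored in such a capacitor bank can be compared with the energy a tram draws while crossing a short dead section:

    # Hypothetical check of the supercapacitor idea: compare the energy
    # stored in an onboard capacitor bank (E = 1/2 * C * V^2) with the
    # energy drawn while coasting across a short unpowered section.
    # All numbers are illustrative assumptions, not real tram data.

    def capacitor_energy_j(capacitance_f, voltage_v):
        return 0.5 * capacitance_f * voltage_v ** 2

    def gap_energy_j(power_kw, speed_m_s, gap_m):
        # Energy needed = power draw * time spent in the gap
        return power_kw * 1000.0 * (gap_m / speed_m_s)

    stored = capacitor_energy_j(capacitance_f=10.0, voltage_v=750.0)   # ~2.8 MJ
    needed = gap_energy_j(power_kw=200.0, speed_m_s=10.0, gap_m=10.0)  # 200 kJ
    print(f"stored {stored/1e6:.2f} MJ vs needed {needed/1e3:.0f} kJ")

With these assumed figures the stored energy exceeds the requirement by more than an order of magnitude, which is why a brief insulated gap at a switch poses no problem.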
The old tram systems in London, Manhattan (New York City), and Washington, D.C., used live rails, like those on third-rail electrified railways, but in a conduit underneath the road, from which they drew power through a plough. It was called conduit current collection. Washington's was the last of these to close, in 1962. No commercial tramway uses this system anymore. More recently, an equivalent to these systems has been developed which allows for the safe installation of a third rail on city streets, known as surface current collection or ground-level power supply; the main example of this is the new tramway in Bordeaux.
Ground-level power supply
A ground-level power supply system, also called surface current collection or APS (from the French alimentation par le sol), is an updated version of the original stud-type system. APS uses a third rail placed between the running rails, divided electrically into eight-metre powered segments with three-metre neutral sections between them. Each tram has two power collection skates, next to which are antennas that send radio signals to energise the power rail segments as the tram passes over them.
Older systems required mechanical switching systems which were susceptible to environmental problems. At any one time no more than two consecutive segments under the tram should be live. Wireless and solid state switching eliminate mechanical problems.
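A minimal sketch of that switching rule, assuming the segment geometry given above (the function and variable names are invented for illustration and do not describe Alstom's actual control software):

    # Illustrative model of APS-style segment energisation: only segments
    # currently under the tram may be live, and never more than two
    # consecutive segments at once. Geometry follows the text
    # (8 m powered sections separated by 3 m neutral sections).

    POWERED_LEN = 8.0   # metres, powered segment
    NEUTRAL_LEN = 3.0   # metres, neutral section between segments
    PITCH = POWERED_LEN + NEUTRAL_LEN

    def live_segments(front_skate_pos, rear_skate_pos):
        """Return indices of segments allowed to be energised for a tram
        whose two collection skates sit at the given track positions (m)."""
        def segment_under(pos):
            index, offset = divmod(pos, PITCH)
            # A skate only requests power while it is over a powered section.
            return int(index) if offset < POWERED_LEN else None

        requested = {s for s in (segment_under(front_skate_pos),
                                 segment_under(rear_skate_pos)) if s is not None}
        # Safety rule from the text: at most two consecutive live segments.
        assert len(requested) <= 2 and (len(requested) < 2 or
                                        max(requested) - min(requested) == 1)
        return requested

    # Example: skates 6 m apart, tram straddling segments 3 and 4.
    print(live_segments(front_skate_pos=44.0, rear_skate_pos=38.0))  # {3, 4}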
Alstom developed the system primarily to avoid intrusive power supply cables in the sensitive area of the old city of Bordeaux.
Routes
Route patterns vary greatly among the world's tram systems, leading to different network topologies.
Most systems start by building up a strongly nucleated radial pattern of routes linking the city centre with residential suburbs and traffic hubs such as railway stations and hospitals, usually following main roads. Some of these, such as those in Hong Kong, Blackpool and Bergen, still essentially comprise a single route. Some suburbs may be served by loop lines connecting two adjacent radial roads. Some modern systems have started by reusing existing radial railway tracks, as in Nottingham and Birmingham, sometimes joining them together by a section of street track through the city centre, as in Manchester. Later developments often include tangential routes linking adjacent suburbs directly, or multiple routes through the town centre to avoid congestion (as in Manchester's Second City Crossing).
Other new systems, particularly those in large cities which already have well-developed metro and suburban railway systems, such as London and Paris, have started by building isolated suburban lines feeding into railway or metro stations. In Paris these have then been linked by ring lines.
A third, weakly nucleated, route pattern may grow up where a number of nearby small settlements are linked, such as in the coal-mining areas served by BOGESTRA or the Silesian Interurbans.
A fourth starting point may be a loop in the city centre, sometimes called a downtown circulator, as in Portland or El Paso.
Occasionally a modern tramway system may grow from a preserved heritage line, as in Stockholm.
The resulting route patterns are very different. Some have a rational structure, covering their catchment area as efficiently as possible, with new suburbs being planned with tramlines integral to their layout – such is the case in Amsterdam. Bordeaux and Montpellier have built comprehensive networks, based on radial routes with numerous interconnections, within the last two decades. Some systems serve only parts of their cities, with Berlin being the prime example, as trams survived the city's political division only in the Eastern part. Other systems have ended up with a rather random route map, for instance when some previous operating companies have ceased operation (as with the tramways vicinaux/buurtspoorwegen in Brussels) or where isolated outlying lines have been preserved (as on the eastern fringe of Berlin). In Rome, the remnant of the system comprises three isolated radial routes, not connecting in the ancient city centre, but linked by a ring route. Some apparently anomalous lines continue in operation where a new line would not on rational grounds be built, because it is much more costly to build a new line than to continue operating an existing one.
In some places, the opportunity is taken when roads are being repaved to lay tramlines (though without erecting overhead cables) even though no service is immediately planned: such is the case in Leipzigerstraße in Berlin, the Haarlemmer Houttuinen in Amsterdam, and Botermarkt in Ghent.
Cross-border routes
Tram systems operate across national borders in Basel (from Switzerland into France and Germany), Geneva (from Switzerland into France) and Strasbourg (from France into Germany). A planned line linking Hasselt (Belgium) with Maastricht (Netherlands) was cancelled in June 2022.
Track
Tramway track can have different rail profiles to accommodate the various operating environments of the vehicle. They may be embedded into concrete for street-running operation, or use standard ballasted track with railroad ties on high-speed sections. A more ecological solution is to embed tracks into grass turf, an approach known as green track.
Tramway tracks use grooved rail, a rail with a groove designed for track laid in pavement or grassed surfaces (also called grassed track or track in a lawn). The rail has the railhead on one side and the guard on the other. The guard provides accommodation for the flange. The guard carries no weight, but may act as a checkrail. Grooved rail was invented in 1852 by Alphonse Loubat, a French inventor who developed improvements in tram and rail equipment, and helped develop tram lines in New York City and Paris. The invention of grooved rail enabled tramways to be laid without causing a nuisance to other road users, except unsuspecting cyclists, who could get their wheels caught in the groove. The grooves may become filled with gravel and dirt (particularly if infrequently used or after a period of idleness) and need clearing from time to time, this being done by a "scrubber" tram. Failure to clear the grooves can lead to a bumpy ride for passengers, damage to either wheel or rail and possibly derailing.
In narrow situations double-track tram lines sometimes reduce to single track, or, to avoid switches, have the tracks interlaced.
Switches
On many tram systems where tracks diverge, the driver chooses the route, usually either by flicking a switch on the dashboard or by use of the power pedal – generally if power is applied the tram goes straight on, whereas if no power is applied the tram turns. Some systems use automatic point-setting systems, where the route for each journey is downloaded from a central computer, and an onboard computer actuates each point as it comes to it via an induction loop. Such is the case at Manchester Metrolink. If the powered system breaks down, most points may be operated manually, by inserting a metal lever ('point iron') into the point machine.
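A toy sketch of this automatic point-setting idea is shown below; the route encoding, junction names and transmission interface are invented for illustration and do not describe Metrolink or any particular supplier's system:

    # Toy model of onboard automatic point setting: the route for the
    # journey is downloaded in advance, and each time the tram passes an
    # induction loop ahead of a junction, the onboard computer looks up
    # and requests the direction for that junction. Names are illustrative.

    ROUTE_PLAN = {          # junction id -> desired direction for this journey
        "J12": "straight",
        "J15": "left",
        "J19": "right",
    }

    def on_induction_loop(junction_id, send_to_point_machine):
        """Called when the tram passes the loop for a junction; returns the
        direction actually requested (default: straight on)."""
        direction = ROUTE_PLAN.get(junction_id, "straight")
        send_to_point_machine(junction_id, direction)
        return direction

    # Example use with a stand-in transmitter that just prints the request.
    on_induction_loop("J15", lambda j, d: print(f"set {j} to {d}"))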
Track gauge
Historically, the track gauge has had considerable variations, with narrow gauge common in many early systems. However, most light rail systems are now standard gauge. An important advantage of standard gauge is that standard railway maintenance equipment can be used on it, rather than custom-built machinery. Using standard gauge also allows light rail vehicles to be delivered and relocated conveniently using freight railways and locomotives.
Another factor favoring standard gauge is that low-floor vehicles are becoming popular, and there is generally insufficient space for wheelchairs to move between the wheels in a narrow gauge layout. Standard gauge also enables – at least in theory – a larger choice of manufacturers and thus lower procurement costs for new vehicles. However, other factors such as electrification or loading gauge for which there is more variation may require costly custom built units regardless.
Tram stop
Tram stops may be similar to bus stops in design and use, particularly in street-running sections, where in some cases other vehicles are legally required to stop clear of the tram doors. Some stops may have railway platforms, particularly in private right-of-way sections and where trams are boarded at standard railway platform height, as opposed to using steps at the doorway or low-floor trams.
Manufacturing
Many independent companies started making trams in the 19th and early 20th century. In the last several decades most of them have merged with or into larger ones. The biggest changes in the period after 2010 were the mergers of AnsaldoBreda into Hitachi Rail in 2015 and Bombardier into Alstom in 2020.
Approximately 5,000 new trams are manufactured each year.
As of February 2017, 4,478 new trams were on order from their makers, with a further 1,092 options open.
Debate
Advantages
Trams (and road public transport in general) can be much more efficient in terms of road usage than cars – one vehicle replaces about 40 cars (which take up a far larger area of road space).
Trams run more efficiently than similar vehicles that use rubber tyres, since the rolling resistance of steel on steel is lower than that of rubber on asphalt.
Trams and light rail transit use sustainable technologies such as electric propulsion and help limit urban sprawl, which in turn lowers the carbon footprint.
There is a well-studied effect whereby the installation of a tram service – even if service frequency, speed and price all remain constant – leads to higher ridership and a mode shift away from cars compared to buses. Conversely, the abandonment of tram service leads to measurable declines in ridership.
Being guided by rails means that even very long tram units can navigate tight, winding city streets that are inaccessible to long buses.
Tram vehicles are very durable, with some being in continuous revenue service for more than fifty years. This is especially true compared to internal combustion buses, which tend to require high amounts of maintenance and break down after less than 20 years, mostly due to the vibrations of the engine.
In many cases tram networks have a higher capacity than similar buses. This has been cited as a reason for the replacement of one of Europe's busiest bus lines (with three-minute headways in peak times) with a tram by Dresdner Verkehrsbetriebe.
Due to the above-mentioned capacity advantage, labor costs (which form the biggest share of operating costs of many public transit systems) per passenger can be significantly lower compared to buses.
Trams and light rail systems can be cheaper to install than subways or other forms of heavy rail. In Berlin the commonly cited figure is that one kilometer of subway costs as much as ten kilometers of tramway.
ULR (ultra light rail) developments in the UK, with prefabricated track and onboard power (no overhead line, or OHL), are aiming for £10m per km as opposed to conventional tram rail and OHL at £20–£30m per km.
Tramways can take advantage of old heavy rail alignments. Some examples include the Manchester Metrolink, of which the Bury Line was part of the East Lancashire Railway, the Altrincham Line was part of the Manchester South Junction and Altrincham Railway, and the Oldham and Rochdale Line was the Oldham Loop Line. Other examples can be found in Paris, London, Boston, Melbourne and Sydney. Such lines can hence sometimes take advantage of higher-speed track while running on former railway alignments.
As tram lines are permanent, local authorities can use them to redevelop and revitalise their towns and cities, provided suitable planning changes are made. Melbourne will allow higher buildings (five to six storeys) along tram routes, leaving the existing suburbs behind them unchanged whilst doubling the city's density.
Trams produce less air pollution than rubber-tyred transport, which produces tyre, asphalt and brake-based pollutants. The use of regenerative electric motor braking in trams lowers mechanical brake use. Steel wheel and rail particulates are produced, but regular wheel alignment and flexible track mounting can reduce these emissions.
Tram networks can link to other operational heavy rail and rapid transit systems, allowing vehicles to move directly from one to the other without passengers needing to alight. Trams that are compatible with heavy rail systems are called tram-trains, while those that can use subway tunnels are called semi-metro, pre-metro or U-Stadtbahn.
Trams can integrate more effectively with pedestrian heavy environments than other forms of transport due to compactness and predictable movement. Passengers can reach surface stations quicker than underground stations. Subjective safety at surface stations is often seen to be higher.
Trams can be tourist attractions in ways buses usually are not.
Many modern tram systems plant low-growing vegetation – mostly grasses – between the tracks, which has a psychological effect on perceived noise levels and provides the benefits of greenspace. This is not possible for buses, as they deviate too much from an "ideal" track in daily operations.
Disadvantages
Installing rails for tram tracks and overhead lines for power means a higher up-front cost than using buses which require no modifications to streets to begin operations.
Tram tracks can be hazardous for cyclists, as bikes, particularly those with narrow tyres, may get their wheels caught in the track grooves. On critical sections it is possible to close the grooves with rubber profiles that are pressed down by the wheel flanges of the passing tram but cannot be lowered by the weight of a cyclist. If not well maintained, however, these lose their effectiveness over time.
When wet, tram tracks tend to become slippery and thus dangerous for bicycles and motorcycles, especially in traffic. In some cases, even cars can be affected.
The opening of new tram and light rail systems has sometimes been accompanied by a marked increase in car accidents, as a result of drivers' unfamiliarity with the physics and geometry of trams. Though such increases may be temporary, long-term conflicts between motorists and light rail operations can be alleviated by segregating their respective rights-of-way and installing appropriate signage and warning systems.
Rail transport can expose neighbouring populations to moderate levels of low-frequency noise. However, transportation planners use noise mitigation strategies to minimise these effects. Above all, because a tram service tends to reduce private motor vehicle traffic along its route, ambient noise levels may end up lower than they would be without it.
The overhead power lines and supporting poles utilized by trams (except for those using a third rail) can be unsightly and contribute to visual pollution.
By region
Trams are in a period of growth, with about 400 tram systems operating around the world, several new systems being opened each year, and many being gradually extended. Some of these systems date from the late 19th or early 20th centuries. In the past 20 years their numbers have been augmented by modern tramway or light rail systems in cities that had abandoned this form of transport. There have also been some new tram systems in cities that never previously had them.
Tramways with trams (British English) or street railways with streetcars (North American English) were common throughout the industrialised world in the late 19th and early 20th centuries but they had disappeared from most British, Canadian, French and US cities by the mid-20th century. After World War II most Australian cities also began to replace their trams with buses, but Melbourne defied the trend, opening new tram lines even in the mid 1950s. By the 1970s Melbourne was the only Australian city with a major tram network.
By contrast, trams in parts of continental Europe continued to be used by many cities, although there were declines in some countries, including the Netherlands.
Since 1980 trams have returned to favour in many places, partly because their tendency to dominate the roadway, formerly seen as a disadvantage, is considered to be a merit since it raises the visibility of public transport (encouraging car users to change their mode of travel), and enables streets to be reconfigured to give more space to pedestrians, making cities more pleasant places to live. New systems have been built in the United States, United Kingdom, Ireland, Italy, France, Australia and many other countries.
In Milan, Italy, the old "Ventotto" trams are considered a "symbol" of the city. The same can be said of trams in Melbourne in general, but particularly the iconic W class. The Toronto streetcar system had similarly become an iconic symbol of the city, operating the largest network in the Americas as well as the only large-scale tram system in Canada (not including light rail systems, or heritage lines).
Major tram and light rail systems
Existing systems
The largest tram (classic tram, streetcar, straßenbahn) and fast tram (light rail, stadtbahn) networks in the world by route length as of 2016 are:
Melbourne ()
Saint Petersburg ()
Cologne ()
Berlin ()
Moscow ()
Milan ()
Budapest ()
Katowice agglomeration ()
Vienna ().
Other large transit networks that operate streetcar and light rail systems include:
DART light rail, modern streetcar and heritage streetcar ()
Sofia ()
Warsaw ()
Leipzig ()
Brussels ()
Łódź ()
Bucharest ()
Prague ()
Dresden ()
Los Angeles ()
Statistics
Tram and light rail systems operate in 403 cities across the world, 210 of which are in Europe;
The longest single tram line and route in the world is the interurban Belgian Coast Tram (Kusttram), which runs along almost the entire length of the Belgian coast. Another fairly long interurban line is operated by Valley Metro Rail in the Phoenix, Arizona agglomeration. The world's longest urban intracity tram line is the pair of counter-running ring routes 5/5a in Kazan (Tatarstan, Russia).
Since 1985, 108 light rail systems have opened;
Since 2000, 78 systems have opened while 13 have closed. The countries that have opened the most systems since 2000 are the US (23), France (20), Spain (16), and Turkey (8);
A substantial length of track is in operation, with more under construction and further extensions planned;
All networks together have 28,593 stops;
They carry 13.5 billion passengers a year, 3% of all public transport passengers. The highest-volume systems are Budapest (396 million passengers a year), Prague (372 m), Bucharest (322 m), Saint Petersburg (312 m), and Vienna (305 m);
The busiest networks (passengers per km, per year) are Istanbul, Hong Kong, Tokyo and Sarajevo.
Some 36,864 trams and light rail vehicles are in operation. The largest fleets are in Prague (788), Vienna (782), Warsaw (756), Saint Petersburg (750) and Moscow (632).
Between 1997 and 2014, 400–450 vehicles were built each year.
As of October 2015, Hong Kong has the world's only exclusively double-decker tramway system.
The busiest junction in any tram network is the Lazarská x Spálená junction in Prague, with approximately 150 vehicles passing through per hour.
The world's longest nine-section articulated tram, the CAF Urbos 3/9, entered service in Budapest in 2016. The Škoda ForCity family of vehicles can be extended in length to carry up to 539 passengers.
Historical
Historically, the Paris tram system was, at its peak, the world's largest, with its track length peaking in 1925 (according to other sources, its route length peaked around 1930). However, it was completely closed in 1938. The next largest system appears to have been that of Buenos Aires before 19 February 1963. The third largest was Chicago, but its network was all converted to trolleybus and bus services by 21 June 1958. Before its decline, the BVG in Berlin operated a very large network. Before its system was converted to trolleybus (and later bus) services in the 1930s (the last tramway closed on 6 July 1952), the first-generation London network was also extensive, peaking in 1931. Trams were still employed in Rio de Janeiro in 1958; the final line, the Santa Teresa route, was closed in 1968. During a period in the 1980s, the world's largest tram system was in Leningrad (now Saint Petersburg), USSR, and it was included as such in the Guinness World Records; however, Saint Petersburg's tram system has declined in size since the fall of the Soviet Union. Vienna also had a very large network in 1960, before the expansion of bus services and the opening of a subway (1976); substituting subway services for tram routes continues. Minneapolis–Saint Paul had a large system in 1947; streetcars there ended on 31 October 1953 in Minneapolis and on 19 June 1954 in St. Paul. The Sydney tram network, before it was closed on 25 February 1961, was the largest in Australia. Since 1961, the Melbourne system (recognised as the world's largest) has assumed Sydney's title as the largest network in Australia.
Tram modelling
Model trams are popular in HO scale (1:87) and O scale (1:48 in the US and generally 1:43.5 or 1:45 in Europe and Asia). They are typically powered and will accept plastic figures inside. Common manufacturers are Roco and Lima, with many custom models being made as well. The German firm Hödl and the Austrian firm Halling specialise in 1:87 scale.
In the US, Bachmann Industries is a mass supplier of HO streetcars and kits. Bowser Manufacturing has produced white metal models for over 50 years. There are many boutique vendors offering limited-run epoxy and wood models. At the high end are highly detailed brass models, which are usually imported from Japan or Korea and can cost in excess of $500. Many of these run on HO-gauge track, which is correct for the representation of standard gauge in HO (1:87) scale as used in the US and Japan, but incorrect in 4 mm (1:76.2) scale, where the same track represents a narrower prototype gauge. This scale/gauge hybrid is called OO scale.
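The mismatch can be checked with simple arithmetic; the 16.5 mm model track gauge and 1435 mm prototype standard gauge used below are the commonly quoted figures rather than values stated in this article:

    # Quick check of the HO/OO scale-versus-gauge mismatch.
    # Assumed figures: model track gauge 16.5 mm, prototype standard
    # gauge 1435 mm, HO scale 1:87, 4 mm scale (OO) 1:76.2.

    MODEL_GAUGE_MM = 16.5
    STANDARD_GAUGE_MM = 1435.0

    for name, ratio in (("HO (1:87)", 87.0), ("OO / 4 mm (1:76.2)", 76.2)):
        represented = MODEL_GAUGE_MM * ratio   # prototype gauge implied by the track
        error_pct = 100 * (represented - STANDARD_GAUGE_MM) / STANDARD_GAUGE_MM
        print(f"{name}: 16.5 mm track represents {represented:.0f} mm "
              f"({error_pct:+.1f}% vs standard gauge)")

    # HO: ~1436 mm, essentially exact; OO: ~1257 mm, noticeably narrow.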
O scale trams are also very popular among tram modellers because the increased size allows for more detail and easier crafting of overhead wiring. In the US these models are usually purchased in epoxy or wood kits and some as brass models. The Saint Petersburg Tram Company produces highly detailed polyurethane non-powered O Scale models from around the world which can easily be powered by trucks from vendors like Q-Car.
Etymology and terminology
The English terms tram and tramway are derived from the Scots word tram, referring respectively to a type of truck (goods wagon or freight railroad car) used in coal mines and to the tracks on which they ran. The word tram probably derives from a Middle Flemish word meaning "beam, handle of a barrow, bar, rung". The identical word with the meaning "crossbeam" is also used in the French language. Etymologists believe that the word tram refers to the wooden beams the railway tracks were initially made of, before the railroad pioneers switched to the much more wear-resistant tracks made of iron and, later, steel. The word tram-car is attested from 1873.
Alternatives
Although the terms tram and tramway have been adopted by many languages, they are not used universally in English; North Americans prefer streetcar, trolley, or trolleycar. The term streetcar is first recorded in 1840, and originally referred to horsecars.
The terms streetcar and trolley are often used interchangeably in the United States, with trolley being the preferred term in the eastern US and streetcar in the western US. Streetcar is preferred in English Canada, while tramway is preferred in Quebec. In parts of the United States, internally powered buses made to resemble a streetcar are often referred to as "trolleys". To avoid further confusion with trolley buses, the American Public Transportation Association (APTA) refers to them as "trolley-replica buses". In the United States, the term tram has sometimes been used for rubber-tired trackless trains, which are unrelated to other kinds of trams.
The word trolley is widely believed to derive from the troller (said to derive from the words traveler and roller), a four-wheeled device that was dragged along dual overhead wires by a cable that connected the troller to the top of the car and collected electrical power from the overhead wires; this portmanteau derivation is, however, most likely folk etymology. "Trolley" and variants refer to the verb troll, meaning "roll" and probably derived from Old French, and cognate uses of the word were well established for handcarts and horse drayage, as well as for nautical uses.
The alternative North American term 'trolley' may strictly speaking be considered incorrect, as the term can also be applied to cable cars, or conduit cars that instead draw power from an underground supply. Conventional diesel tourist buses decorated to look like streetcars are sometimes called trolleys in the US (tourist trolley). Furthering confusion, the term tram has instead been applied to open-sided, low-speed segmented vehicles on rubber tires generally used to ferry tourists short distances, for example on the Universal Studios backlot tour and, in many countries, as tourist transport to major destinations. The term may also apply to an aerial ropeway, e.g. the Roosevelt Island Tramway.
Trolleybus
Although the use of the term trolley for tram was not adopted in Europe, the term was later associated with the trolleybus, a rubber-tired vehicle running on hard pavement, which draws its power from pairs of overhead wires. These electric buses, which use twin trolley poles, are also called trackless trolleys (particularly in the northeastern US), or sometimes simply trolleys (in the UK, as well as the Pacific Northwest, including Seattle, and Vancouver).
In popular culture
A Streetcar Named Desire was written by Tennessee Williams in 1947.
The Rev. W. Awdry wrote about a GER Class C53 tram engine called Toby the Tram Engine, who starred in his The Railway Series with his faithful coach, Henrietta.
"The Trolley Song" in the film Meet Me in St. Louis received an Academy Award nomination.
Trams feature in the opening titles of the world's longest running TV soap opera Coronation Street, set in a fictional suburb of Greater Manchester, and produced by Granada Television. A Blackpool tram killed one of the main characters in 1989 and the most recent faked accident involved a tram (modelled on the Manchester Metrolink) careering off a viaduct into the set in 2009.
The 1986 Australian film Malcolm is centred on an autistic tram enthusiast who builds his own tram and becomes involved with a pair of bank robbers.
Toonerville Folks comic strip (1908–55) by Fontaine Fox featured the "Toonerville Trolley that met all the trains".
The predominance of trams (trolleys) in the borough of Brooklyn in New York City gave rise to the disparaging term trolley dodger for residents of the borough. That term, shortened to "Dodger", became the nickname for the Brooklyn Dodgers (now the Los Angeles Dodgers).
The Red Car Trolley is a transportation attraction at Disney California Adventure at the Disneyland Resort in Anaheim, California.
TRS-80
https://en.wikipedia.org/wiki/TRS-80
The TRS-80 Micro Computer System (TRS-80, later renamed the Model I to distinguish it from successors) is a desktop microcomputer developed by the American company Tandy Corporation and sold through its Radio Shack stores. Launched in 1977, it is one of the earliest mass-produced and mass-marketed retail home computers. The name is an abbreviation of Tandy Radio Shack, Z80 [microprocessor], referring to its Zilog Z80 8-bit microprocessor.
The TRS-80 has a full-stroke QWERTY keyboard, 4 KB of dynamic random-access memory (DRAM) as standard, small size and desk footprint, a floating-point Level I BASIC language interpreter in read-only memory (ROM), a 64-character-per-line video monitor, and a starting price of US$600. A cassette tape drive for program storage was included in the original package. While the software environment was stable, the cassette load/save process combined with keyboard bounce issues and a troublesome Expansion Interface contributed to the Model I's reputation as not well-suited for serious use. Initially (until 1981) it lacked support for lowercase characters, which may have hampered business adoption. An extensive line of upgrades and add-on hardware peripherals for the TRS-80 was developed and marketed by Tandy/Radio Shack. The basic system can be expanded with up to 48 KB of RAM (in 16 KB increments), and up to four floppy disk drives and/or hard disk drives. Tandy/Radio Shack provided full-service support including upgrade, repair, and training services in their thousands of stores worldwide.
By 1979, the TRS-80 had the largest selection of software in the microcomputer market. Until 1982, the TRS-80 was the bestselling PC line, outselling the Apple II by a factor of five according to one analysis. The broadly compatible TRS-80 Model III was released in the middle of 1980. The Model I was discontinued shortly thereafter, primarily due to stricter Federal Communications Commission (FCC) regulations on radio-frequency interference to nearby electronic devices. In April 1983, the Model III was succeeded by the compatible TRS-80 Model 4. Following the original Model I and its compatible descendants, the TRS-80 name became a generic brand used on other unrelated computer lines sold by Tandy, including the TRS-80 Model II, TRS-80 Model 2000, TRS-80 Model 100, TRS-80 Color Computer, and TRS-80 Pocket Computer.
History
Development
In the mid-1970s, Tandy Corporation's Radio Shack division was a successful American chain of more than 3,000 electronics stores. Among the Tandy employees who purchased a MITS Altair kit computer was buyer Don French, who began designing his own computer and showed it to the vice president of manufacturing John V. Roach, Tandy's former electronic data processing manager. Although the design did not impress Roach, the idea of selling a microcomputer did. When the two men visited National Semiconductor in California in mid-1976, Homebrew Computer Club member Steve Leininger's expertise on the SC/MP microprocessor impressed them. National executives refused to provide Leininger's contact information when French and Roach wanted to hire him as a consultant, but they found Leininger working part-time at Byte Shop. Leininger was unhappy at National, his wife wanted a better job, and Texas did not have a state income tax. Hired for his technical and retail experience, Leininger began working with French in June 1976. The company envisioned a kit, but Leininger persuaded the others that because "too many people can't solder", a preassembled computer would be better.
Tandy had 11 million customers that might buy a microcomputer, but it would be much more expensive than the median price of a Radio Shack product, and a great risk for the very conservative company. Executives feared losing money as Sears did with Cartrivision, and many opposed the project; one executive told French, "Don't waste my time—we can't sell computers." As the popularity of CB radio—at one point comprising more than 20% of Radio Shack's sales—declined, however, the company sought new products. In December 1976 French and Leininger received official approval for the project but were told to emphasize cost savings; for example, leaving out lowercase characters saved US$1.50 in components and allowed a lower retail price. The planned retail price required a low manufacturing cost; the first design had a membrane keyboard and no video monitor. Leininger persuaded Roach and French to include a better keyboard, a monitor, datacassette storage, and other features requiring a higher retail price to provide Tandy's typical profit margin. In February 1977 they showed their prototype, running a simple tax-accounting program, to Charles Tandy, head of Tandy Corporation. The program quickly crashed as the computer's implementation of Tiny BASIC could not handle the figure that Tandy typed in as his salary, and the two men added support for floating-point math to its Level I BASIC to prevent a recurrence. The project was formally approved on 2 February 1977; Tandy revealed that he had already leaked the computer's existence to the press. When first inspecting the prototype, he remarked that even if it did not sell, the project could be worthy if only for the publicity it might generate.
MITS sold 1,000 Altairs in February 1975 and was selling 10,000 a year. When Charles Tandy asked who would buy the computer, company president Lewis Kornfeld admitted that they did not know if anyone would, but suggested that small businesses and schools might. Knowing that demand was very strong for the Altair—which cost more than $1,000 with a monitor—Leininger suggested that Radio Shack could sell 50,000 computers, but no one else believed him; Roach called the figure "horseshit", as the company had never sold that many of anything at that price. Roach and Kornfeld suggested 1,000 to 3,000 per year; 3,000 was the quantity the company would have to produce to buy the components in bulk. Roach persuaded Tandy to agree to build 3,500—the number of Radio Shack stores—so that each store could use a computer for inventory purposes if they did not sell. RCA agreed to supply the video monitor—a black-and-white television with the tuner and speakers removed—after others refused because of Tandy's low initial volume of production. Tandy used the black-and-silver colors of the RCA CRT unit's cabinet for the TRS-80 units as well.
Announcement
Having spent comparatively little on development, Radio Shack announced the TRS-80 (Tandy Radio Shack) at a New York City press conference on August 3, 1977. It cost $599 with a 12" monitor and a Radio Shack tape recorder (less for the computer alone); the most expensive product Radio Shack previously sold was a stereo. The company hoped that the new computer would help Radio Shack sell higher-priced products, and improve its "schlocky" image among customers. Small businesses were the primary target market, followed by educators, then consumers and hobbyists; despite its hobbyist customer base, Radio Shack saw them as "not the mainstream of the business" and "never our large market".
Although the press conference did not receive much media attention because of a terrorist bombing elsewhere in the city, the computer received much more publicity at Boston University's Personal Computer Fair two days later. A front-page Associated Press article discussed the novelty of a large consumer-electronics company selling a home computer that could "do a payroll for up to 15 people in a small business, teach children mathematics, store your favorite recipes or keep track of an investment portfolio. It can also play cards." Six sacks of mail arrived at Tandy headquarters asking about the computer, over 15,000 people called to purchase a TRS-80—paralyzing the company switchboard—and 250,000 joined the waiting list with a $100 deposit.
Despite the internal skepticism, Radio Shack aggressively entered the market. The company advertised "The $599 personal computer" as "the most important, useful, exciting, electronic product of our time". Kornfeld stated when announcing the TRS-80, "This device is inevitably in the future of everyone in the civilized world—in some way—now and so far as ahead as one can think", and Tandy's 1977 annual report called the computer "probably the most important product we've ever built in a company factory". Unlike competitor Commodore—which had announced the PET several months earlier but had not yet shipped any—Tandy had its own factories (capable of producing 18,000 computers a month) and distribution network, and even small towns had Radio Shack stores. The company announced plans to be selling by Christmas a range of peripherals and software for the TRS-80, began shipping computers by September, opened its first computer-only store in October, and delivered 5,000 computers to customers by December. Still forecasting 3,000 sales a year, Radio Shack sold over 10,000 TRS-80s in its first one and a half months of sales, 55,000 in its first year, and over 200,000 during the product's lifetime; one entered the Smithsonian's National Museum of American History. By mid-1978 the waits of two months or more for delivery were over, and the company could state in advertisements that TRS-80 was "on demonstration and available from stock now at every Radio Shack store in this community!"
Delivery
The first units, ordered unseen, were delivered in November 1977, and rolled out to the stores the third week of December. The line won popularity with hobbyists, home users, and small businesses. Tandy Corporation's leading position in what Byte magazine called the "1977 Trinity" (Apple Computer, Commodore, and Tandy) had much to do with Tandy's retailing the computer through more than 3,000 of its Radio Shack storefronts in the USA. Tandy claimed it had "7000 [Radio Shack] stores in 40 countries". Pre-release orders for the basic system (CPU/keyboard and video monitor) required a deposit, with a money-back guarantee at time of delivery.
By 1978, Tandy/Radio Shack promoted itself as "The Biggest Name in Little Computers". By 1979, 1,600 employees built computers in six factories. Kilobaud Microcomputing estimated in 1980 that Tandy was selling three times as many computers as Apple Computer, with both companies ahead of Commodore. By 1981, InfoWorld described Radio Shack as "the dominant supplier of small computers". Hundreds of small companies produced TRS-80 software and accessories, and Adam Osborne described Tandy as "the number-one microcomputer manufacturer" despite having "so few roots in microcomputing". That year Leininger left his job as director for advanced research; French had left to found a software company, and the company had rejected his proposal for a Tandy Computer Center to sell non-Tandy computers, while the company's computer success helped Roach become Tandy's CEO. Selling computers did not change the company's "schlocky" image; the Radio Shack name embarrassed business customers, and Tandy executives disliked the "Trash-80" nickname for its products. By 1984, computers accounted for 35% of sales, however, and the company had 500 Tandy Radio Shack Computer Centers.
Model II and III
By 1979, when Radio Shack launched the business-oriented, and incompatible, TRS-80 Model II, the TRS-80 was officially renamed the TRS-80 Model I to distinguish the two product lines.
After some exhibitors at the 1979 Northeast Computer Show were forced to clarify that their products bearing the TRS-80 name were not affiliated with Radio Shack, publications and advertisers briefly began to use "S-80" generically rather than "TRS-80" under threat of legal action, though no such action ever materialized.
Following the Model III launch in mid-1980, Tandy stated that the Model I was still sold, but it was discontinued by the end of the year. Tandy cited as one of the main reasons the prohibitive cost of redesigning it to meet stricter FCC regulations covering the significant levels of radio-frequency interference emitted by the original design. The Model I radiated so much interference that, while playing games, an AM radio placed next to the computer could be used to provide sounds. Radio Shack offered upgrades (double-density floppy controller, LDOS, memory, reliable keyboard with numeric keypad, lowercase, Level II, RS-232C) as late as its 1984 catalog.
Hardware
The Model I combines the mainboard and keyboard into one unit, which became a design trend in the 8-bit microcomputer era, although the Model I has a separate power supply unit. It uses a Zilog Z80 processor clocked at 1.78 MHz (later models shipped with a Z80A). The initial Level I machines shipped in late 1977-early 1978 have only 4 KB of RAM. After the Expansion Interface and Level II BASIC were introduced in mid-1978, RAM configurations of 16 KB and up were offered (the first 16 KB was in the Model I itself and the remaining RAM in the Expansion Interface).
The OS ROMs, I/O area, video memory, and OS work space occupy the first 16 KB of memory space on the Model I. The remaining 48 KB of the 64 KB memory map space is available for program use, subject to the amount of physical RAM installed. Although the Z80 CPU can use port-based I/O, the Model I's I/O is memory-mapped aside from the cassette tape and RS-232 serial ports.
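As a rough illustration, the layout can be summarised as a short table of address ranges; the overall 16 KB / 48 KB split follows the description above, while the finer boundaries (ROM, I/O, keyboard and video areas) are the commonly cited figures for the Model I rather than values taken from this article:

    # Commonly cited TRS-80 Model I address map (hexadecimal addresses).
    # The first 16 KB hold ROM, memory-mapped I/O, the keyboard matrix and
    # video RAM, as described in the text; user RAM begins at 0x4000.

    MODEL_I_MEMORY_MAP = [
        (0x0000, 0x2FFF, "Level II BASIC ROM (12 KB; Level I used only 4 KB)"),
        (0x3000, 0x37DF, "reserved / unused"),
        (0x37E0, 0x37FF, "memory-mapped I/O (disk controller, printer, etc.)"),
        (0x3800, 0x3BFF, "keyboard matrix (memory-mapped)"),
        (0x3C00, 0x3FFF, "video RAM (1 KB, 64 x 16 characters)"),
        (0x4000, 0xFFFF, "user RAM (up to 48 KB, as installed)"),
    ]

    for start, end, what in MODEL_I_MEMORY_MAP:
        print(f"{start:04X}-{end:04X}  {what}")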
Keyboard
The TRS-80 Model I keyboard uses mechanical switches that suffer from "keyboard bounce", resulting in multiple letters being typed per keystroke. The problem was described in Wayne Green's editorial in the first issue of 80 Micro. Dirt, cigarette smoke, or other contamination enters the unsealed key switches, causing electrical noise that the computer detects as multiple presses. The key switches can be cleaned, but the bounce recurs when the keyboard is reexposed to the contaminating environment.
Keyboard bounce only occurs in Model I computers with Level II BASIC firmware; Level I BASIC has a "debounce" delay to the keyboard driver to avoid the noisy switch contacts. Tandy's utility, the Model III, the last Model I firmware, and most third-party operating systems also implement the software fix, and Tandy changed the keyboard during the Model III's lifetime to an Alps Electric design with sealed switches. The Alps keyboard was available as an upgrade for the Model I for $79.
The keyboard is memory-mapped so that certain locations in the processor's memory space correspond to the status of a group of keys.
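A sketch of how such memory-mapped scanning behaves from software is below; the 0x3800 base address and the 8×8 row arrangement are the commonly cited ones for the Model I, and the simulated key_state array simply stands in for the real hardware:

    # Illustration of memory-mapped keyboard scanning on the Model I:
    # reading an address in the keyboard block returns a byte in which each
    # set bit corresponds to a pressed key in the selected row(s); the low
    # address bits choose which rows are selected.

    KEYBOARD_BASE = 0x3800

    def read_keyboard(address, key_state):
        """Simulate a memory read from the keyboard block.
        key_state is 8 bytes, one per matrix row (bit set = key pressed)."""
        row_select = address - KEYBOARD_BASE    # low address bits choose rows
        result = 0
        for row in range(8):
            if row_select & (1 << row):         # a row is selected by its bit
                result |= key_state[row]        # pressed keys OR together
        return result

    # Example: with only the key at row 3, bit 5 pressed, reading the
    # address that selects row 3 returns 0x20.
    state = [0] * 8
    state[3] = 0b0010_0000
    print(hex(read_keyboard(KEYBOARD_BASE + 0b0000_1000, state)))  # 0x20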
Video and audio
The color of the KCS 172 RCA monitor's text is faintly blue (the standard P4 phosphor used in black-and-white televisions). Green and amber filters, or replacement tubes to reduce eye fatigue were popular aftermarket items. Later models came with a green-on-black display.
Complaints about the video display quality were common. As Green wrote, "hells bells, [the monitor] is a cheap black and white television set with a bit of conversion for computer use". (The computer could be purchased without the Radio Shack monitor.) CPU access to the screen memory causes visible flicker. The bus arbitration logic blocks video display refresh (video-RAM reads) during CPU writes to the VRAM, causing a short black line. This has little effect on normal BASIC programs, but fast programs made with assembly language can be affected. Software authors worked to minimize the effect, and many arcade-style games are available for the Tandy TRS-80.
Because of bandwidth problems in the interface card that replaced the TV's tuner, the display loses horizontal sync if large areas of white are displayed. A simple half-hour hardware fix corrects the problem.
The graphics are displayed at a resolution of 64×16 character positions. Each character is composed of a 2×3 matrix of pixel blocks, and corresponds to one byte of the 1 KB video memory used by the TRS-80. In each of those bytes, the first six bits control which pixel blocks are displayed. The seventh bit is ignored, and the eighth toggles graphics mode. The seventh bit is ignored because the company decided to install only seven 2102 static-RAM chips on the computer's motherboard instead of eight, to keep the manufacturing cost low. Thus, there are no lowercase letters in the TRS-80 character set of an unmodified Model I, and the number of both graphics symbols and alphanumeric symbols is 64. This can be worked around by piggybacking an eighth 2102 chip onto another to restore the missing bit. The alphanumeric symbols are displayed in 5×7 matrices of pixels. The 1978 manual for the popular word processor Electric Pencil came with instructions for modifying the computer. Although the modification needs to be disabled for Level II BASIC, its design became the industry standard and was widely sold in kit form, along with an eighth 2102 chip. Later models came with the hardware for the lowercase character set to be displayed with descenders.
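For illustration, the byte layout described above can be written out explicitly; the 0x3C00 video RAM base address and the bit ordering are the usual conventions, and the helper functions are invented for this sketch rather than taken from any TRS-80 software:

    # Build a TRS-80 block-graphics byte from a 2x3 cell of pixel blocks.
    # Bits 0-5 select the six blocks (left/right columns, top to bottom),
    # bit 6 is the unused seventh bit, and bit 7 set marks a graphics
    # character rather than text. Video RAM itself starts at 0x3C00.

    VIDEO_RAM_BASE = 0x3C00
    COLS, ROWS = 64, 16

    def graphics_byte(cell):
        """cell is 3 rows x 2 columns of booleans (True = block lit)."""
        value = 0x80                    # bit 7: graphics mode
        bit = 0
        for row in range(3):
            for col in range(2):
                if cell[row][col]:
                    value |= 1 << bit
                bit += 1
        return value

    def video_address(col, row):
        return VIDEO_RAM_BASE + row * COLS + col

    # Example: light the whole 2x3 cell at screen position (0, 0).
    full_cell = [[True, True]] * 3
    print(hex(graphics_byte(full_cell)), hex(video_address(0, 0)))  # 0xbf 0x3c00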
With higher-density RAM chips and purpose-built monitors, higher-resolution crisp displays are obtainable; 80×24-character displays are available in the Model II, Model 4, and later systems.
The Model I has no built-in speaker. Square-wave tones can be produced by outputting data to the cassette port and plugging an amplifier into the cassette "Mic" line. Most games use this ability for sound effects. An adapter was available to use Atari joysticks.
Peripherals
Cassette tape drive
User data was originally stored on cassette tape. Radio Shack's model CTR-41 cassette recorder was included with the US$599 package. The software-based cassette tape interface is slow and erratic; Green described it as "crummy ... drives users up the wall", and the first issue of 80 Micro has three articles on how to improve cassette performance. It is sensitive to audio volume, and the computer gives only a crude indication as to whether the correct volume was set, via a blinking character on screen while data is loaded. To find the correct volume at first use, a load is started and the volume adjusted until the TRS-80 picks up the data; the load is then halted so the tape can be rewound and the load restarted. Users were instructed to save multiple copies of a software program file, especially if audio tape cassettes instead of certified data tape were used. Automatic gain control or indicator circuits can be constructed to improve the loading process (the owner's manual provides complete circuit diagrams for the whole machine, including the peripheral interfaces, with notes on operation).
An alternative to using tape was data transmission from the BBC's Chip Shop programme in the UK, which broadcast software for several different microcomputers over the radio. A special program was first loaded using the conventional tape interface, and the radio broadcast was then connected to the cassette interface. Tandy eventually replaced the CTR-41 unit with the CTR-80, which has built-in AGC circuitry (and no volume control). This helped, but tape operation remained unreliable.
TRS-80 Model I computers with Level I BASIC read and write tapes at 250 baud (about 30 bytes per second); Level II BASIC doubles this to 500 baud (about 60 bytes per second). Some programmers wrote machine-language programs that increase the speed to up to 2,000 bits per second without a loss of reliability on their tape recorders. With the Model III and improved electronics in the cassette interface, the standard speed increased to 1,500 baud which works reliably on most tape recorders.
For loading and storing data on tape, the CPU generates the signal by switching the output voltage among three states, producing a crude approximation of a sine wave.
The first version of the Model I also has a hardware problem that complicated loading programs from cassette recorders. Tandy offered a small board which was installed at a service center to correct the issue. The ROMs in later models were modified to correct this.
Model I Expansion Interface
Only the Model I uses an Expansion Interface; all later models have everything integrated in the same housing.
The TRS-80 does not use the S-100 bus like other early 8080 and Z80-based computers. A proprietary Expansion Interface (E/I) box, which fits under the video monitor and serves as its base, was offered instead. Standard features of the E/I are a floppy disk controller, a Centronics parallel port for a printer, and an added cassette connector. Optionally, an extra 16 or 32 KB of RAM and a daughterboard with an RS-232 port can be installed. The 40-conductor expansion connector passes through to a card edge connector, which permits the addition of external peripherals such as an outboard hard disk drive, a voice synthesizer, or a VOXBOX voice recognition unit.
Originally, printing with the Model I required the Expansion Interface, but later Tandy made an alternative parallel printer interface available.
The Model I Expansion Interface is the most troublesome part of the TRS-80 Model I system, and it went through several revisions. The E/I connects to the CPU/keyboard with a 6-inch ribbon cable which is unshielded against RF interference, and its card edge connector tends to oxidize because of its base-metal contacts. This demands periodic cleaning with a pencil eraser to avoid spontaneous reboots, which contributes to the computer's "Trash-80" sobriquet. Aftermarket gold-plated connectors solved this problem permanently. Software developers also responded by devising a recovery method which became a standard feature of many commercial programs. They accept an "asterisk parameter", an asterisk (star) character typed after the program name when the program is run from the TRSDOS Ready prompt. When used after a spontaneous reboot (or an accidental reset, program crash, or exit to TRSDOS without saving data to disk), the program loads without initializing its data areas, preserving any program data still present from the pre-reboot session. Thus, for example, if a VisiCalc user suffers a spontaneous reboot, the user re-runs the program with the asterisk parameter at TRSDOS Ready, and VisiCalc restores the previous session intact.
The power button on the E/I is recessed to guard against the user accidentally switching it off while in use, which makes it difficult to operate; a pencil eraser or similar object is needed to depress it. The E/I also has no power LED, making it hard to tell whether it is running.
The expansion unit requires a second power supply, identical to the base unit power supply. An interior recess holds both supplies.
The user is instructed to power on and power off all peripherals in proper order to avoid corrupting data or potentially damaging hardware components. The manuals for the TRS-80 advise turning on the monitor first, then any peripherals attached to the E/I (if multiple disk drives are attached, the last drive on the chain is to be powered on first and work down from there), the E/I, and the computer last. When powering down, the computer is to be turned off first, followed by the monitor, E/I, and peripherals. In addition, users are instructed to remove all disks from the drives during power up or down (or else leave the drive door open to disengage the read/write head from the disk). This is because a transient electrical surge from the drive's read/write head would create a magnetic pulse that could corrupt data. This was a common problem on many early floppy drives.
The E/I displays a screen full of garbage characters on power up, and unless a bootable system disk is present in Drive 0 it hangs there until the user either presses the reset button on the back of the computer, which makes it attempt to boot the disk again, or presses a key that drops the computer into BASIC. Because of the above-mentioned risk of corrupting disks, it is recommended to power up to the garbage screen with the disk drives empty, insert a system disk, and then press reset.
InfoWorld compared the cable spaghetti connecting the TRS-80 Model I's various components to the snakes in Raiders of the Lost Ark. Radio Shack offered a "TRS-80 System Desk" that concealed nearly all the cabling; it can accommodate the complete computer system plus up to four floppy drives and the Quick Printer. Since the cable connecting the Expansion Interface carries the system bus, it is short (about 6 inches), so the user has no choice but to place the E/I directly behind the computer with the monitor on top. This causes problems for non-Tandy monitors whose cases do not fit the mounting holes. Also, the friction fit of the edge connector on the already short interconnect cable makes it possible to disconnect the system bus from the CPU if either unit is bumped during operation.
Floppy disk drives
Radio Shack introduced floppy drives in July 1978, about six months after the Model I went on sale. The Model I disk operating system, TRSDOS, was written by Randy Cook under contract to Radio Shack; Cook claimed to have been paid $3,000 for it. The first version released to the public was a buggy v2.0, quickly replaced by v2.1. Floppy disk operation requires buying the Expansion Interface, which includes a single-density floppy disk interface (with a formatted capacity of 85K) based on the Western Digital 1771 single-density floppy disk controller chip. The industry-standard Shugart Associates SA-400 minifloppy disk drive was used. Four floppy drives can be daisy-chained to the Model I. The last drive in the chain is supposed to have a termination resistor installed, but it is often not needed because termination is integrated into later cables.
Demand for Model I drives greatly exceeded supply at first. The drive is unreliable, partly because the interface lacks an external data separator (buffer). The early versions of TRSDOS were also buggy, and they were not helped by the Western Digital FD1771 chip, which cannot reliably report its status for several instruction cycles after it receives a command. A common way of handling the delay is to issue a command to the 1771, execute several NOP instructions, and only then query the 1771 for the result. Early TRSDOS neglects this required but undocumented wait period, so false status is often returned to the OS, generating random errors and crashes. Once the 1771 delay was implemented, the system was fairly reliable.
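The workaround can be illustrated with a small sketch. In the fragment below, peek() and poke() are hypothetical stand-ins for memory-mapped reads and writes, and the register address (0x37EC) and busy-bit position are assumptions based on commonly published Model I documentation rather than details given in the text above.

    /* Hypothetical sketch of the FD1771 command/status timing workaround. */
    unsigned char peek(unsigned int addr);             /* memory-mapped read  */
    void poke(unsigned int addr, unsigned char val);   /* memory-mapped write */

    #define FDC_CMD_STATUS 0x37EC   /* assumed command/status register address */
    #define FDC_BUSY       0x01     /* status bit 0: controller busy           */

    void fdc_command(unsigned char cmd)
    {
        poke(FDC_CMD_STATUS, cmd);
        /* The chip cannot report valid status for several instruction cycles
         * after accepting a command, so delay briefly (the role the NOPs
         * played in the original Z80 code) before polling. */
        for (volatile int i = 0; i < 16; i++)
            ;
        while (peek(FDC_CMD_STATUS) & FDC_BUSY)
            ;                        /* wait for the command to complete */
    }

Without the delay, the status read immediately after the command may reflect stale state, which is exactly the failure mode that made early TRSDOS appear randomly unreliable.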
In 1981, Steve Ciarcia published in Byte the design for a homemade, improved expansion interface with more RAM and a disk controller for the TRS-80.
A data separator and a double-density disk controller (based on the WD 1791 chip) were made by Percom (a Texas peripheral vendor), LNW, Tandy, and others. The Percom Doubler adds the ability to boot and use double density floppies using a Percom-modified TRSDOS called DoubleDOS. The LNDoubler adds the ability to read and write diskette drives with up to 720 KB of storage, and also the older diskettes with up to 1,155 KB. Near the end of the Model I's lifespan in 1982, upgrades were offered to replace its original controller with a double-density one.
The first disk drives offered for the Model I were Shugart SA-400s, which support 35 tracks and were the sole minifloppy drives on the market in 1977–78. By 1979, other manufacturers began offering drives. The Models III/4/4P use Tandon TM-100 40-track drives. The combination of 40 tracks and double density gives a capacity of 180 kilobytes per single-sided floppy disk. The use of index-sync means that a "flippy disk" requires a second index hole and write-enable notch; factory-made "flippies" could be purchased. Some software publishers formatted one side for Apple systems and the other for the TRS-80.
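As a rough check on the 180-kilobyte figure, and assuming the usual TRS-80 double-density layout of 18 sectors of 256 bytes per track (a layout not stated in the text above), the arithmetic works out as follows:

    40 tracks × 18 sectors/track × 256 bytes/sector = 184,320 bytes ≈ 180 KB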
The usual method of connecting floppy drives involves setting the drive number via jumper blocks on the drive's controller board, but Tandy opted for a slightly more user-friendly technique: all four select pins on each drive are jumpered, and each connector on the ribbon cable has only one drive-select line left intact, so a drive's position on the cable determines its number. Thus, the user does not need to move jumpers around depending on where a drive sits in the chain.
A standard flat floppy ribbon cable can also be used on the Model I, in which case each drive is jumpered to its number on the chain; even an IBM PC "twist" cable works, which requires setting each drive number to 1 but only permits two drives on the chain.
Although third-party DOSes allow the user to define virtually any floppy format wanted, the "lowest common denominator" format for TRS-80s is the baseline single-density, single-sided, 35-40 track format of the Model I.
Third-party vendors like Aerocomp made available double-sided and 80 track -inch and later -inch floppy drives with up to 720 KB of storage each. These new drives are half-height and therefore require different or modified drive housings.
Exatron Stringy Floppy
An alternative to cassette tape and floppy disk storage, sold by Exatron, had sold over 4,000 units by 1981. The device, a continuous-loop tape drive dubbed the stringy floppy or ESF, requires no Expansion Interface and plugs directly into the TRS-80's 40-pin expansion bus. It is much less expensive than a floppy drive, can read and write random-access data like a floppy drive (unlike a cassette tape), and transfers data at up to 14,400 baud. Exatron tape cartridges store over 64 KB of data. The ESF can coexist with the TRS-80 data cassette drive. Exatron also made a complementary RAM expansion board that installs in the TRS-80 keyboard unit to increase memory to 48 KB without the E/I.
Hard drive
Radio Shack introduced a 5 MB external hard disk for the TRS-80 Model III/4 in 1983. It is the same hard disk unit offered for the Model II line, but came with OS software for the Model III/4. An adapter is required to connect it to the Model I's E/I. The unit is about the same size as a modern desktop computer enclosure. Up to four hard disks can be daisy-chained for 20 MB of storage. The bundled LDOS operating system by Logical Systems provides utilities for managing the storage space and flexible backup. The initial retail price for the first (primary) unit (). Later, a 15 MB hard disk was offered in a white case, which can be daisy-chained for up to 60 MB. Like most hard disks used on 8-bit machines, there is no provision for subdirectories, but the DiskDISK utility offers a useful alternative: it creates virtual disk ".DSK" files that can be mounted as another disk drive and used much as a subdirectory would be. To display the contents of an unmounted DiskDISK virtual disk file, a shareware DDIR "Virtual Disk Directory Utility" program was commonly used.
Printers
The "Quick Printer" is an electrostatic rotary printer that scans the video memory through the bus connector, and prints an image of the screen onto aluminum-coated paper in about one second. However, it is incompatible with both the final, buffered version of the Expansion Interface, and with the "heartbeat" interrupt used for the real-time clock under Disk BASIC. This can be overcome by using special cabling, and by doing a "dummy" write to the cassette port while triggering the printer.
Third-party printers available in Germany included one for metal-coated paper, selling for approximately DM 600, and a dot-matrix printer built by Centronics for normal paper, costing DM 3000 at first and later sold at approximately DM 1500 in some stores. The dot-matrix printer has only 7 pins, so letters with descenders such as lowercase "g" do not reach below the baseline but are raised within the normal line.
Radio Shack offered an extensive line of printers for the TRS-80 family, ranging from basic 9-pin dot-matrix units to large wide-carriage line printers for professional use, daisy-wheel printers, inkjet and laser printers, and color plotters. All have a Centronics-standard interface, and after the introduction of the Color Computer in 1980, many also have a connector for the CoCo's serial interface.
The FP-215 is a flatbed plotter.
Software
BASIC
Three versions of the BASIC programming language were produced for the Model I. Level I BASIC fits in 4 KB of ROM, and Level II BASIC fits in 12 KB of ROM. Level I is single precision only and has a smaller set of commands; Level II introduced double-precision floating point support and a much wider set of commands. Level II was further enhanced when a disk system was added, allowing the loading of Disk BASIC.
Level I BASIC is based on Li-Chen Wang's free Tiny BASIC with more functions added by Radio Shack. The accompanying User's Manual for Level 1 by David A. Lien presents lessons on programming with text and cartoons. Lien wrote that it was "written specifically for people who don't know anything about computers ... I want you to have fun with your computer! I don't want you to be afraid of it, because there is nothing to fear". Reviewers praised the manual's quality. Level I BASIC has only two string variables (A$ and B$), 26 numeric variables (A – Z), and one array, A(). Code for functions like SIN(), COS() and TAN() is not included in ROM but printed at the end of the book. The only error messages are "WHAT?" for syntax errors, "HOW?" for arithmetic errors such as division by zero, and "SORRY" for out of memory errors.
Level I BASIC is not tokenized; reserved words are stored literally. To maximize the amount of code that fits into 4 KB of memory, users can enter abbreviations for reserved words. For example, writing "P." instead of "PRINT" saves 3 bytes.
Level II BASIC, introduced in mid-1978, was licensed from Microsoft and is required to use the expansion bus and disk drives. Radio Shack always intended for Level I BASIC to be a stopgap until Level II was ready, and the first brochure for the Model I in January 1978 mentioned that Level II BASIC was "coming soon". It is an abridged version of the 16K Extended BASIC, since the Model I has only 12 KB of ROM space. According to Bill Gates, "It was a sort of intermediate between 8K BASIC and Extended BASIC. Some features from Extended BASIC such as descriptive errors and user-defined functions were not included, but there were double precision variables and the PRINT USING statement that we wanted to get in. The entire development of Level II BASIC took about four weeks from start to finish." The accompanying manual is more terse and technical than the Level I manual. Original Level I BASIC-equipped machines could be retrofitted to Level II through a ROM replacement performed by Radio Shack for a fee (originally $199). Users with Level I BASIC programs stored on cassette have to convert these to the tokenized Level II BASIC before use; a utility for this was provided with the Level II ROMs.
Disk BASIC allows disk I/O, and in some cases (NewDos/80, MultiDOS, DosPlus, LDOS) adds powerful sorting, searching, full-screen editing, and other features. Level II BASIC reserves some of these keywords and issues a "?L3 ERROR", suggesting a behind-the-scenes change of direction intervened between the creation of the Level II ROMs and the introduction of Disk BASIC.
Microsoft also marketed an enhanced BASIC called Level III BASIC written by Bill Gates, on cassette tape. The cassette contains a "Cassette File" version on one side and a "disk file" version on the second side for disk system users (which was to be saved to disk). Level III BASIC adds most of the functions in the full 16 KB version of BASIC plus many other TRS-80 specific enhancements. Many of Level III BASIC's features are included in the TRS-80 Model III's Level II BASIC and disk BASIC.
Level I BASIC was still offered on the Model I in either 4K or 16K configurations after the introduction of Level II BASIC.
Other programming languages
Radio Shack published a combined assembler and program editing package called the Series I Assembler Editor. 80 Micro magazine printed a modification enabling it to run under the Model 4's TRSDOS Version 6. Also from Radio Shack was Tiny Pascal.
Microsoft made its Fortran, COBOL and BASCOM BASIC compiler available through Radio Shack.
In 1982, Scientific Time Sharing Corporation published a version of its APL for the TRS-80 Model III as APL*PLUS/80.
Other applications
Blackjack and backgammon came with the TRS-80, and at its debut, Radio Shack offered four payroll, personal finance, and educational programs on cassette. Its own products' quality was often poor. A critical 1980 80 Micro review of a text adventure described it as "yet another example of Radio Shack's inability to deal with the consumer in a consumer's market". The magazine added, "Sadly, too, as with some other Radio Shack programs, the instructions seem to assume that the reader is either a child or an adult with the mentality of a slightly premature corned beef".
The more than 2,000 Radio Shack franchise stores sold third-party hardware and software, but the more than 4,300 company-owned stores were at first prohibited from reselling or even mentioning products not sold by Radio Shack itself. Green stated in 1980 that although "there are more programs for the 80 than for all other systems combined" because of the computer's large market share, "Radio Shack can't advertise this because they are trying as hard as they can to keep this fact a secret from their customers. They don't want the TRS-80 buyers to know that there is anything more than their handful of mediocre programs available", many of which "are disastrous and, I'm sure, doing tremendous damage to the industry". Broderbund, founded that year, began by publishing TRS-80 software, but by 1983 cofounder Doug Carlston said that the computer "turned out to be a terrible market because most of the distribution networks were closed, even though there were plenty of machines out there". Green wrote in 1982 that Apple had surpassed Tandy in sales and sales outlets despite the thousands of Radio Shack dealers because it supported third-party development, while "we find the Shack seeming to begrudge any sale not made by them and them alone". Dealers not affiliated with Radio Shack preferred to sell software for other computers and not compete with the company; mail-order sales were also difficult, because company-owned stores did not sell third-party publications like 80 Micro.
Charles Tandy reportedly wanted to encourage outside developers but after his death a committee ran the company, which refused to help outside developers, hoping to monopolize the sale of software and peripherals. Leininger reportedly resigned because he disliked the company's bureaucracy after Tandy's death. An author wrote in a 1979 article on the computer's "mystery of machine language graphics control" that "Radio Shack seems to hide the neat little jewels of information a hobbyist needs to make a treasure of the TRS-80". He stated that other than the "excellent" Level I BASIC manual "there has been little information until recently ... TRS-80 owners must be resourceful", reporting that the computer's "keyboard, video, and cassette" functionality were also undocumented. The first book authorized by Tandy with technical information on TRSDOS for the Model I did not appear until after the computer's discontinuation.
By 1982, the company admitted—after no software appeared for the Model 16 after five months—that it should have, like Apple, encouraged third-party developers of products like the killer app VisiCalc. (A lengthy 1980 article in a Tandy publication introducing the TRS-80 version of VisiCalc did not mention that the spreadsheet had been available for the Apple II for a year.) However, in the early 1980s, it was not uncommon for small companies and municipalities to write custom programs for computers such as the TRS-80 to process a variety of data. In one case a small town's vehicle fleet was managed from a single TRS-80.
By 1985, the company's Ed Juge stated that other than Scripsit and DeskMate, "we intend to rely mostly on 'big-name', market-proven software from leading software firms". A full suite of office applications became available from Radio Shack and others, including the VisiCalc and Multiplan spreadsheets, the Lazy Writer and Electric Pencil word processors, and Radio Shack's own Scripsit and SuperScripsit word processors.
Compared to the contemporary Commodore and Apple micros, the TRS-80's block graphics and crude sound were widely considered limited. The extra speed available to the game programmer, who did not have to process color data at high resolution, went a long way toward compensating for this, and TRS-80 arcade games tended to be faster, with effects that emphasized motion. This perceived disadvantage did not deter independent software companies such as Big Five Software from producing unlicensed versions of arcade games like Namco's Galaxian, Atari's Asteroids, Taito's Lunar Rescue, Williams's Make Trax, and Exidy's Targ and Venture. Sega's Frogger and Zaxxon were ported to the computer and marketed by Radio Shack. Namco/Midway's Pac-Man was cloned by Philip Oliver and distributed by Cornsoft Group as Scarfman. Atari's Battlezone was cloned for the Models I/III by Wayne Westmoreland and Terry Gilman and published by Adventure International as Armored Patrol. They also cloned Eliminator (based on Defender) and Donkey Kong; the latter wasn't published until after the TRS-80 was discontinued, because Nintendo refused to license the game.
Some games originally written for other computers were ported to the TRS-80. Microchess has three levels of play and can run in the 4 KB of memory that is standard on the Model I; the classic ELIZA is another TRS-80 port. Both were offered by Radio Shack. Apple Panic, itself a clone of Universal's Space Panic, was written for the TRS-80 by Yves Lempereur and published by Funsoft. Epyx's Temple of Apshai runs slowly on the TRS-80. Infocom ported its series of interactive text-based adventure games to the Models I/III; the first, Zork I, was marketed by Radio Shack.
Adventure International's text adventures began on the TRS-80, as did Sea Dragon by Westmoreland and Gilman, later ported to other home micros. Android Nim by Leo Christopherson was rewritten for the Commodore PET and Apple. Many games are unique to the TRS-80, including Duel-N-Droids, also by Christopherson; an early first-person shooter, 13 Ghosts, by Software Affair (the Orchestra-80, -85 and -90 developers); shooters like Cosmic Fighter and Defence Command; and strange experimental programs such as Christopherson's Dancing Demon, in which the player composes a song for a devil and choreographs his dance steps to the music. Radio Shack offered the simple graphics animation programs Micro Movie and Micro Marquee, and Micro Music.
Radio Shack offered a number of programming utilities, including an advanced debugger, a subroutine package, and a cross-reference builder. Probably the most popular utility package was Super Utility written by Kim Watt of Breeze Computing. Other utility software such as Stewart Software's Toolkit offered the first sorted directory, decoding or reset of passwords, and the ability to eliminate parts of TRSDOS that were not needed in order to free up floppy disk space. They also produced the On-Line 80 BBS, a TRSDOS-based Bulletin Board System. Misosys Inc. was a prolific producer of sophisticated TRS-80 utility and language software for all models of TRS-80 from the very beginning.
Perhaps because of the lack of information on TRSDOS and its bugs, by 1982 more operating systems may have existed for the TRS-80 than for any other computer. TRSDOS is limited in its capabilities, since like Apple DOS 3.3 on the Apple II, it is mainly conceived of as a way of extending BASIC to support disk drives. Numerous alternative DOSes appeared, the most prominent being LDOS, because Radio Shack licensed it from Logical Systems and adopted it as the official DOS for its Model I and III hard disk drive products. Other alternative TRS-80 DOSes included NewDOS from Apparat, Inc., as well as DoubleDOS, DOSPlus, MicroDOS, and UltraDOS (later called MultiDOS). The DOS for the Model 4 line, TRSDOS Version 6, was produced by and licensed from Logical Systems. It is a derivative of LDOS, enhanced to support the new Model 4 hardware, such as its all-RAM architecture (no ROM), external 32 KB memory banks, and bigger screen and keyboard, and it features new utilities such as a RAM disk and a printer spooler.
The memory maps of the Models I and III render them incompatible with the standard CP/M OS for Z80 business computers, which loads at hexadecimal address $0000 with the TPA (Transient Program Area) starting at $0100; the TRS-80 ROM resides in this address space. Omikron Systems' Mappers board remaps the ROM so that unmodified CP/M programs can run on the Model I. A customized version of CP/M is available but loses its portability advantage. 80 Micro magazine published a do-it-yourself CP/M modification for the Model III.
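The clash is easier to see against the Model I's commonly documented memory layout; the boundaries below are given only as an illustration and are assumptions not taken from the text above:

    0x0000-0x2FFF   12 KB Level II BASIC ROM (the region CP/M expects to be RAM)
    0x3000-0x3BFF   reserved and memory-mapped I/O, including the disk
                    controller and printer latch, plus the keyboard matrix
    0x3C00-0x3FFF   1 KB video RAM
    0x4000-0xFFFF   user RAM (up to 48 KB)

Because ROM and memory-mapped devices occupy the low addresses, a stock CP/M, which must start at $0000 with its TPA at $0100, cannot load unless hardware such as the Omikron board remaps the ROM out of the way.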
Reception
Dan Fylstra, among the first owners, wrote in Byte in April 1978 that as an "'appliance' computer ... the TRS-80 brings the personal computer a good deal closer to the average customer", suitable for home and light business use. He concluded that it "is not the only alternative for the aspiring personal computer user, but it is a strong contender." Jerry Pournelle wrote in 1980 that "the basic TRS-80 is a lot of computer for the money". He criticized the quality of Tandy's application and system software and the high cost of peripherals, but reported that with the Omikron board a customer paid less than $5000 for a computer compatible with TRS-80 and CP/M software "all without building a single kit".
Three years later Pournelle was less positive about the computer. He wrote in May 1983, "As to our TRS-80 Model I, we trashed that sucker long ago. It was always unreliable, and repeated trips to the local Radio Shack outlet didn't help. The problem was that Tandy cut corners".
Compatible successors
Tandy replaced the Model I with the broadly compatible Model III in 1980. (The TRS-80 Model II is an entirely different and incompatible design).
Model III
Tandy released the TRS-80 Model III on July 26, 1980. The improvements of the Model III over the Model I include: built-in lowercase, a better keyboard with repeating keys, an enhanced character set, a real-time clock, 1500-baud cassette interface, a faster (2.03 MHz) Z80 processor, and an all-in-one enclosure requiring fewer cables. A Model III with two floppy drives requires the use of only one electrical outlet; a two-drive Model I requires five outlets. The Model III avoids the complicated power on/off sequence of the Model I. Shortly after the Model III's introduction, Model I production was discontinued as it did not comply with new FCC regulations as of January 1, 1981, regarding electromagnetic interference.
Tandy distinguished between the high-end Model II and Model III, describing the former as "an administrative system, good for things like word processing, data management and VisiCalc operations" and suitable for small businesses. The lowest-priced version of the Model III was sold with 4 KB of RAM and cassette storage. The computer's CPU board has three banks of sockets (8 sockets to a bank) which take type 4116 DRAMs, so memory configurations come in 16 KB, 32 KB, or 48 KB RAM memory sizes. Computers with 32 KB or 48 KB RAM can be upgraded with floppy disk drive storage. There is space inside the computer cabinet for two full-height drives. Those offered by Tandy/Radio Shack are single-sided, 40-track, double-density (MFM encoding) for 180K of storage. Third-party suppliers offered double-sided and 80-track drives, though to control them they had to modify the TRSDOS driver code or else furnish an alternative third-party DOS which could (see below). The installation of floppy disk drives also requires the computer's power supply to be upgraded. There is no internal cooling fan in the Model III; it uses passive convection cooling (unless an unusual number of power-hungry expansions were installed internally, such as a hard disk drive, graphics board, speedup kit, RS-232 board, etc.).
Tandy claimed that the Model III was compatible with 80% of Model I software. Many software publishers issued patches to permit their Model I programs to run on the Model III. Marketing director Ed Juge explained that their designers considered changing from the Model I's 64-column by 16-row video screen layout, but that they ultimately decided that maintaining compatibility was most important.
The Model III's memory map and system architecture are mostly the same as the Model I's, but the disk drives and printer port were moved from memory-mapped to port I/O, so Model I software that attempts to manipulate the disk controller directly or output to the printer (in particular Model I DOSes and application packages such as VisiCalc and Scripsit) will not work. Under the supplied TRSDOS 1.3 operating system, Model I disks can be read in the Model III, but not vice versa. The optional LDOS OS (by Logical Systems Inc.) uses a common disk format for both its Model I and Model III versions.
Customers and developers complained of bugs in the Model III's Microsoft BASIC interpreter and TRSDOS. Tandy/Radio Shack (and TRS-80 magazines like 80 Micro) periodically published many software patches to correct these deficiencies and to permit users to customize the software to their preferences.
Differences between the WD1771 and WD1791 floppy controllers created problems reading Model I disks on a Model III (the double-density upgrade for the Model I included both chips, while the Model III has only the WD1791). The WD1771 supports four data address marks while the WD1791 supports only two; some versions of TRSDOS for the Model I use the extra marks, and they are also used by copy-protection schemes. Software was available to allow Model I disks to be read on a Model III. The WD1791 supports the 500 kbit/s data rate needed for high-density floppy drives, but the controller board is not capable of using such drives without extensive modifications.
TRSDOS for the Model III was developed in-house by Radio Shack rather than being contracted out like the Model I's DOS. None of the code base from Model I DOS was reused and the Model III DOS was rewritten from scratch; this also created some compatibility issues since the Model III DOS's API was not entirely identical to the Model I DOS. This was primarily to avoid legal disputes with Randy Cook over ownership of the code as had occurred with Model I DOS and also because Radio Shack originally planned several features for the Model III such as 80-column text support that were not included. Two early versions, 1.1 and 1.2, were replaced by version 1.3 in 1981 which became the standard Model III OS. TRSDOS 1.3 is not format compatible with 1.1 and 1.2; a utility called XFERSYS is provided which converts older format disks to TRSDOS 1.3 format (this change is permanent and the resultant disks cannot be read with the older DOS versions).
The Model III's boot screen was cleaned up compared with the Model I's. Instead of displaying garbage on the screen at power up, it displays a "Diskette?" prompt if a bootable floppy is not detected; the user can insert a disk and press any key to boot. On power-up or reset, holding down the key will boot the computer into ROM-based Level II BASIC. This ability is useful if the disk drive is not functioning and cannot boot a TRSDOS disk (or if a boot disk is not available); it permits an operator familiar with the machine hardware to perform diagnostics using BASIC's PEEK and POKE commands. This works on the Model 4 as well, but not on the 4P.
While Model I DOS is fairly flexible in its capabilities, Model III DOS is hard-coded to only support 180K single-sided floppies, a problem fixed by the many third-party DOSes. To that end, when Radio Shack introduced hard disks for the TRS-80 line in 1982, the company licensed LDOS rather than attempting to modify Model III DOS for hard disk support.
Level II BASIC on the Model III is 16 KB in size and incorporates a few features from Level I Disk BASIC.
TRSDOS 1.3 was given a few more minor updates, the last being in 1984, although the version number was unchanged. This includes at least one update that writes an Easter Egg message "Joe, you rummy buzzard" on an unused disk sector, which is reputedly a joke message left by a programmer in a beta version, but accidentally included in the production master.
The Model III keyboard lacks . Many application programs use , while others use . Often is used in combination with number and alpha keys. The Model III keyboard also lacks ; to caps-lock the alpha keys the user presses . Under LDOS typeahead is supported.
Because TRSDOS 1.3 was found wanting by many users, Tandy offered (at added cost) Logical Systems' LDOS Version 5 as an alternative. As with the Model I, other third-party sources also offered TRSDOS alternatives for the Model III, including NewDOS, Alphabit's MultiDOS, and Micro Systems Software's DOSPlus. These are compatible with TRSDOS 1.3 and run the same application programs, but offer improved command structures, more and better system utilities, and enhancements to the Microsoft BASIC interpreter. After writing the original Model I TRSDOS, Randy Cook began work on his own DOS, titled VTOS, which was superseded by LDOS; VTOS also created some frustration for users as it is the only TRS-80 DOS to be copy-protected.
Although mostly intended as a disk-based computer, the Model III was available in a base cassette configuration with no disk hardware and only 16 KB of RAM with Level II BASIC. Radio Shack also offered a 4K version with Level I BASIC, identical to Model I Level I BASIC, but with the addition of LPRINT and LLIST commands for printer output. Upgrading to a disk machine necessitates installing at least 32 KB of RAM, the disk controller board, and another power supply for the disk drives. Disk upgrades purchased from Radio Shack included TRSDOS 1.3; users upgrading from third-party vendors had to purchase DOS separately (most opted for LDOS or DOSPlus), though a great many Model III applications programs included a licensed copy of TRSDOS 1.3.
As with the Model I's E/I, the RS-232C port on the Model III was an extra cost option and not included in the base price of the computer, though the dual disk Model III for $2495 included the serial port.
Like the Model I, the Model III sold well in the educational market. Many school administrators valued the Model III's all-in-one hardware design because it made it more difficult for students to steal components. InfoWorld approved of the Model III's single-unit design, simplified cable management, and improvements such as lack of keyboard bounce and improved disk reliability. The reviewer, a former Model I owner, stated "I'm impressed" and that "had the Model III been available, it's probable that I wouldn't have sold it". He concluded, "If you're looking for a computer that's not too expensive but that performs well, you would be wise to test the Model III—you might end up buying it."
Don French, who had left Radio Shack to found FMG Software after designing the Model I, expressed his disappointment in the new machine while trying to convert CP/M to run on it. "I've encountered numerous problems with the floppy drive and its interface. Radio Shack will sell a Model III to anyone. They're trying to market it as a business computer when the existing software is woefully inadequate. 48K just isn't enough. You run out of memory before you get going. They're selling a medical package that takes up nine disks. I think the Model III is a very poorly conceived machine".
Aftermarket products
Aftermarket hardware was offered by Tandy/Radio Shack and many third-party manufacturers. The usual selection of add-ons and peripherals available for the Model I was offered: outboard floppy drives (one or two could be plugged into a card-edge connector on the back panel), an outboard hard disk drive (LDOS, rather than TRSDOS, was furnished as Tandy's hard drive OS), an RS-232C serial port on an internal circuit card, and a parallel printer (connected by a card-edge connector). Multiple high-resolution graphics solutions were available: the official Radio Shack Model III high-resolution graphics board has a screen resolution of 640×240 pixels, while the third-party Micro-Labs "Grafyx Solution" board has a resolution of 512×192 pixels.
A popular hardware/software add-on was the Orchestra-90 music synthesizer. It can be programmed to play up to five voices with a range of six octaves stereophonically. A great many Orch-90 (as it was often called) music files were available for download from CompuServe. The Orch-90 was licensed from a company called Software Affair, which also produced the Model I-compatible Orchestra-85 from 1981.
At least three vendors produced CP/M modifications for the Model III: Omikron (which also made a Model I modification), Holmes Engineering, and Memory Merchant. Options were available for upgrading the CRT to the CP/M professional standard of 80 columns and 24 rows, as well as eight-inch floppy drives.
A number of third-party manufacturers specialized in upgrading Model IIIs with high-performance hardware and software, and remarketing them under their own labels. The improvements typically included internal hard disk drives, greater capacity floppy drives, 4 MHz Z80 speedup kits, professional-grade green or amber CRT video displays, better DOS software (typically DOSPlus by Micro Systems Software or LDOS by Logical Systems) including the all-important hard drive backup utilities, and custom menu-driven shell interfaces which insulated non-expert users (business employees) from the DOS command line. These were touted as high productivity turnkey systems for small businesses at less cost than competing business systems from higher-end providers such as IBM and DEC, as well as Radio Shack's own TRS-80 Model II.
Model 4
The successor to the Model III is the TRS-80 Model 4, released in April 1983. It has a faster Z80A CPU, a larger video display of 80 columns by 24 rows with reverse video, a bigger keyboard, an internal speaker, and its 64 KB of RAM can be upgraded to 128 KB of bank-switched RAM. The display can be upgraded with a high-resolution graphics card yielding 640×240 pixels. The Model 4 is fully compatible with Model III and CP/M application software. A diskless Model 4 (with 16 KB RAM and Level II BASIC) cost , with 64 KB RAM and one single-sided 180K disk drive , and two drives with RS-232C ; an upgrade for Model III owners cost and provided a new motherboard and keyboard. Tandy sold 71,000 in 1984.
The Model 4 includes all of the Model III's hardware, port assignments, and operating modes, making it 100% compatible. Model III programs running on a Model 4 can access the Model 4's added hardware features (like 4 MHz clock rate, bigger video screen and keyboard, banked RAM above 64 KB). There were aftermarket software packages that made this ability available to non-programmer users.
The Model 4P is a transportable version introduced in September 1983 and discontinued in early 1985. It is functionally the same as the dual-drive desktop model but lacks the card edge connector for two outboard diskette drives and for a cassette tape interface. It has a slot for an internal modem card and could emulate a Model III.
The Model 4D with bundled Deskmate productivity suite was introduced in early 1985. It has a revised CPU board using faster gate array logic which includes the floppy controller and RS-232C circuitry, all on a single board. The computer has two internal double-sided diskette drives and is the last model descended from the 1977 Model I. It retailed for at its introduction in 1985. During 1987–1988 the retail stores removed the Model 4Ds from display but they were available by special order through 1991.
Model 100
Also in April 1983, Radio Shack released the TRS-80 Model 100, one of the first laptop-portable computers.
Manufactured by Kyocera, the Model 100 features an LCD with 8 lines of 40 characters each, 8 kilobytes of RAM (expandable to 32KB), and is powered by AA cell batteries (or a plug-in adapter). A built-in modem and 25-pin RS-232 serial port provided connectivity.
With the rudimentary operating system held in ROM, the Model 100 (and its improved successor, the Tandy 102) was ready to use immediately after sliding the power switch on, and work was held ready when it was switched off, making it convenient to use for a few moments at a time. This speed kept the Model 100/102 useful well after more powerful but slower-booting laptop computers became common.
The Model 100-series computers also found popularity as field-portable communications terminals due to their light weight and simplicity.
| Technology | Early computers | null |
30786 | https://en.wikipedia.org/wiki/Tuatara | Tuatara | The tuatara (Sphenodon punctatus) is a species of reptile endemic to New Zealand. Despite its close resemblance to lizards, it is part of a distinct lineage, the order Rhynchocephalia. The name is derived from the Māori language and means "peaks on the back".
The single extant species of tuatara is the only surviving member of its order, which was highly diverse during the Mesozoic era. Rhynchocephalians first appeared in the fossil record during the Triassic, around 240 million years ago, and reached worldwide distribution and peak diversity during the Jurassic, when they represented the world's dominant group of small reptiles. Rhynchocephalians declined during the Cretaceous, with their youngest records outside New Zealand dating to the Paleocene. Their closest living relatives are squamates (lizards and snakes). Tuatara are of interest for studying the evolution of reptiles.
Tuatara are greenish brown and grey, and measure up to from head to tail-tip and weigh up to with a spiny crest along the back, especially pronounced in males. They have two rows of teeth in the upper jaw overlapping one row on the lower jaw, which is unique among living species. They are able to hear, although no external ear is present, and have unique features in their skeleton.
Tuatara are sometimes referred to as "living fossils". This term is currently deprecated among paleontologists and evolutionary biologists. Although tuatara have preserved the morphological characteristics of their Mesozoic ancestors (240–230 million years ago), there is no evidence of a continuous fossil record to support the idea that the species has survived unchanged since that time.
The species has between 5 and 6 billion base pairs of DNA sequence, nearly twice that of humans.
The tuatara has been protected by law since 1895. Tuatara, like many of New Zealand's native animals, are threatened by habitat loss and introduced predators, such as the Polynesian rat (Rattus exulans). Tuatara were extinct on the mainland, with the remaining populations confined to 32 offshore islands, until the first North Island release into the heavily fenced and monitored Karori Wildlife Sanctuary (now named "Zealandia") in 2005. During routine maintenance work at Zealandia in late 2008, a tuatara nest was uncovered, with a hatchling found the following autumn. This is thought to be the first case of tuatara successfully breeding in the wild on New Zealand's North Island in over 200 years.
Taxonomy and evolution
Relationships of the tuatara to other living reptiles and birds, after Simões et al. 2022
Tuatara, along with other now-extinct members of the order Rhynchocephalia, belong to the superorder Lepidosauria, as do the order Squamata, which includes lizards and snakes. Squamates and tuatara both show caudal autotomy (loss of the tail-tip when threatened), and have transverse cloacal slits.
Tuatara were originally classified as lizards in 1831 when the British Museum received a skull. John Edward Gray used the name Sphenodon to describe the skull; this remains the current scientific name for the genus. Sphenodon is derived from the Greek for "wedge" (σφήν, σφηνός/sphenos) and "tooth" (ὀδούς, ὀδόντος/odontos). In 1842, Gray described a member of the species as Hatteria punctata, not realising that it and the skull he had received in 1831 were both tuatara.
The genus remained misclassified as a lizard until 1867, when A.C.L.G. Günther of the British Museum noted features similar to birds, turtles, and crocodiles. He proposed the order Rhynchocephalia (meaning "beak head") for the tuatara and its fossil relatives. Since 1869, Sphenodon punctatus (or the variation Sphenodon punctatum in some earlier sources) has been used as the scientific name for the species.
At one point, many disparate species were incorrectly referred to the Rhynchocephalia, resulting in what taxonomists call a "wastebasket taxon". Williston in 1925 proposed the Sphenodontia to include only tuatara and their closest fossil relatives. However, Rhynchocephalia is the older name and in widespread use today. Many scholars use Sphenodontia as a subset of Rhynchocephalia, including almost all members of Rhynchocephalia, apart from the most primitive representatives of the group.
The earliest rhynchocephalian, Wirtembergia, is known from the Middle Triassic of Germany, around 240 million years ago. During the Late Triassic, rhynchocephalians greatly diversified, going on to become the world's dominant group of small reptiles during the Jurassic period, when the group was represented by a diversity of forms, including the aquatic pleurosaurs and the herbivorous eilenodontines. The earliest members of Sphenodontinae, the clade which includes the tuatara, are known from the Early Jurassic of North America. The earliest representatives of this group are already very similar to the modern tuatara. Rhynchocephalians declined during the Cretaceous period, possibly due to competition with mammals and lizards, with their youngest record outside of New Zealand being of Kawasphenodon, known from the Paleocene of Patagonia in South America.
A species of sphenodontine is known from the Miocene Saint Bathans fauna from Otago in the South Island of New Zealand. Whether it is referable to Sphenodon proper is not entirely clear, but it is likely to be closely related to tuatara. The ancestors of the tuatara were likely already present in New Zealand prior to its separation from Antarctica around 82–60 million years ago.
Cladogram of the position of the tuatara within Sphenodontia, after Simões et al., 2022:
Species
While there is currently considered to be only one living species of tuatara, two species were previously identified: Sphenodon punctatus, or northern tuatara, and the much rarer Sphenodon guntheri, or Brothers Island tuatara, which is confined to North Brother Island in the Cook Strait. The specific name punctatus is Latin for "spotted", and guntheri refers to German-born British herpetologist Albert Günther. A 2009 paper re-examined the genetic bases used to distinguish the two supposed species of tuatara, and concluded they represent only geographic variants, and only one species should be recognized. Consequently, the northern tuatara was re-classified as Sphenodon punctatus punctatus and the Brothers Island tuatara as Sphenodon punctatus guntheri. The Brothers Island tuatara has olive brown skin with yellowish patches, while the colour of the northern tuatara ranges from olive green through grey to dark pink or brick red, often mottled, and always with white spots. In addition, the Brothers Island tuatara is considerably smaller. However, individuals from Brothers Island could not be distinguished from other modern and fossil samples on the basis of jaw morphology.
An extinct species of Sphenodon was identified in November 1885 by William Colenso, who was sent an incomplete subfossil specimen from a local coal mine. Colenso named the new species S. diversum. Fawcett and Smith (1970) consider it a synonym of the subspecies, based on a lack of distinction.
Description
Tuatara are the largest reptiles in New Zealand. Adult S. punctatus males measure in length and females . Tuatara are sexually dimorphic, males being larger. The San Diego Zoo even cites a length of up to . Males weigh up to , and females up to . Brothers Island tuatara are slightly smaller, weighing up to 660 g (1.3 lb).
Their lungs have a single chamber with no bronchi.
The tuatara's greenish brown colour matches its environment, and can change over its lifetime. Tuatara shed their skin at least once per year as adults, and three or four times a year as juveniles. Tuatara sexes differ in more than size. The spiny crest on a tuatara's back, made of triangular, soft folds of skin, is larger in males, and can be stiffened for display. The male abdomen is narrower than the female's.
Skull
Unlike the vast majority of lizards, the tuatara has a complete lower temporal bar closing the lower temporal fenestra (an opening of the skull behind the eye socket), formed by the fusion of the quadrate/quadratojugal (which are fused into a single element in adult tuatara) and the jugal bones of the skull. This is similar to the condition found in primitive diapsid reptiles. However, because more primitive rhynchocephalians have an open lower temporal fenestra with an incomplete temporal bar, this is thought to be a derived characteristic of the tuatara and other members of the clade Sphenodontinae, rather than a primitive trait retained from early diapsids. The complete bar is thought to stabilise the skull during biting.
The tip of the upper jaw is chisel- or beak-like and separated from the remainder of the jaw by a notch; this structure is formed from fused premaxillary teeth and is also found in many other advanced rhynchocephalians. The teeth of the tuatara, and of almost all other rhynchocephalians, are described as acrodont, as they are attached to the apex of the jaw bone. This contrasts with the pleurodont condition found in the vast majority of lizards, where the teeth are attached to the inward-facing surface of the jaw. The teeth of the tuatara are extensively fused to the jawbone, making the boundary between tooth and jaw difficult to discern; they lack roots and are not replaced during the lifetime of the animal, unlike those of pleurodont lizards. It is a common misconception that tuatara lack teeth and instead have sharp projections on the jaw bone; histology shows that they have true teeth with enamel and dentine with pulp cavities. As their teeth wear down, older tuatara have to switch to softer prey, such as earthworms, larvae, and slugs, and eventually have to chew their food between smooth jaw bones.
The tuatara possesses palatal dentition (teeth growing from the bones of the roof of the mouth), which is ancestrally present in reptiles (and tetrapods generally). While many of the palatal teeth originally present in reptiles have been lost, as in all other known rhynchocephalians the row of teeth growing from the palatine bones in the tuatara has been enlarged, and as in other members of Sphenodontinae the palatine teeth are orientated parallel to the teeth in the maxilla; during biting, the teeth of the lower jaw slot between the two upper tooth rows. The structure of the jaw joint allows the lower jaw to slide forwards after it has closed between the two upper rows of teeth. This mechanism allows the jaws to shear through chitin and bone.
The brain of Sphenodon fills only half of the volume of its endocranium. This proportion has been used by paleontologists trying to estimate the volume of dinosaur brains based on fossils. However, the proportion of the tuatara endocranium occupied by its brain may not be a very good guide to the same proportion in Mesozoic dinosaurs since modern birds are surviving dinosaurs but have brains which occupy a much greater relative volume in the endocranium.
Sensory organs
Eyes
The eyes can focus independently, and are specialised with three types of photoreceptive cells, all with fine structural characteristics of retinal cone cells used for both day and night vision, and a tapetum lucidum which reflects onto the retina to enhance vision in the dark. There is also a third eyelid on each eye, the nictitating membrane. Five visual opsin genes are present, suggesting good colour vision, possibly even at low light levels.
Parietal eye (third eye)
Like some other living vertebrates, including some lizards, the tuatara has a third eye on the top of its head called the parietal eye (also called a pineal or third eye) formed by the parapineal organ, with an accompanying opening in the skull roof called the pineal or parietal foramen, enclosed by the parietal bones. It has its own lens, a parietal plug which resembles a cornea, retina with rod-like structures, and degenerated nerve connection to the brain. The parietal eye is visible only in hatchlings, which have a translucent patch at the top centre of the skull. After four to six months, it becomes covered with opaque scales and pigment. While capable of detecting light, it is probably not capable of detecting movement or forming an image. It likely serves to regulate the circadian rhythm and possibly detect seasonal changes, and help with thermoregulation.
Of all extant tetrapods, the parietal eye is most pronounced in the tuatara. It is part of the pineal complex, another part of which is the pineal gland, which in tuatara secretes melatonin at night. Some salamanders have been shown to use their pineal bodies to perceive polarised light, and thus determine the position of the sun, even under cloud cover, aiding navigation.
Hearing
Together with turtles, the tuatara has the most primitive hearing organs among the amniotes. There is no tympanum (eardrum) and no earhole, and the middle ear cavity is filled with loose tissue, mostly adipose (fatty) tissue. The stapes comes into contact with the quadrate (which is immovable), as well as the hyoid and squamosal. The hair cells are unspecialised, innervated by both afferent and efferent nerve fibres, and respond only to low frequencies. Though the hearing organs are poorly developed and primitive with no visible external ears, they can still show a frequency response from 100 to 800 Hz, with peak sensitivity of 40 dB at 200 Hz.
Odorant receptors
Animals that depend on the sense of smell to capture prey, escape from predators or simply interact with the environment they inhabit, usually have many odorant receptors. These receptors are expressed in the dendritic membranes of the neurons for the detection of odours. The tuatara has around 472 receptors, a number more similar to what birds have than to the large number of receptors that turtles and crocodiles may have.
Spine and ribs
The tuatara spine is made up of hourglass-shaped amphicoelous vertebrae, concave both before and behind. This is the usual condition of fish vertebrae and some amphibians, but is unique to tuatara within the amniotes. The vertebral bodies have a tiny hole through which a constricted remnant of the notochord passes; this was typical in early fossil reptiles, but lost in most other amniotes.
The tuatara has gastralia, rib-like bones also called gastric or abdominal ribs, the presumed ancestral trait of diapsids. They are found in some lizards, where they are mostly made of cartilage, as well as crocodiles and the tuatara, and are not attached to the spine or thoracic ribs. The true ribs are small projections, with small, hooked bones, called uncinate processes, found on the rear of each rib. This feature is also present in birds. The tuatara is the only living tetrapod with well-developed gastralia and uncinate processes.
In the early tetrapods, the gastralia and ribs with uncinate processes, together with bony elements such as bony plates in the skin (osteoderms) and clavicles (collar bone), would have formed a sort of exoskeleton around the body, protecting the belly and helping to hold in the guts and inner organs. These anatomical details most likely evolved from structures involved in locomotion even before the vertebrates ventured onto land. The gastralia may have been involved in the breathing process in early amphibians and reptiles. The pelvis and shoulder girdles are arranged differently from those of lizards, as is the case with other parts of the internal anatomy and its scales.
Tail and back
The spiny plates on the back and tail of the tuatara resemble those of a crocodile more than a lizard, but the tuatara shares with lizards the ability to break off its tail when caught by a predator, and then regenerate it. The regrowth takes a long time and differs from that of lizards. Well illustrated reports on tail regeneration in tuatara have been published by Alibardi and Meyer-Rochow. The cloacal glands of tuatara have a unique organic compound named tuataric acid.
Age determination
Currently, there are two means of determining the age of tuatara. Using microscopic inspection, hematoxylinophilic rings can be identified and counted in both the phalanges and the femur. Phalangeal hematoxylinophilic rings can be used for tuatara up to ages 12–14 years, as they cease to form around this age. Femoral rings follow a similar trend; however, they are useful for tuatara up to ages 25–35 years, around which age femoral rings cease to form. Further research on age determination methods for tuatara is required, as tuatara have lifespans much longer than 35 years (ages up to 60 are common, and captive tuatara have lived to over 100 years). One possibility could be the examination of tooth wear, as tuatara have fused sets of teeth.
Physiology
Adult tuatara are terrestrial and nocturnal reptiles, though they will often bask in the sun to warm their bodies. Hatchlings hide under logs and stones, and are diurnal, likely because adults are cannibalistic. Juveniles are typically active at night, but can be found active during the day. The juveniles' activity pattern is attributed to genetically hardwired avoidance of cannibalistic conspecifics and to thermal restrictions. Tuatara thrive in temperatures much lower than those tolerated by most reptiles, and hibernate during winter. They remain active at temperatures as low as , while temperatures over are generally fatal. The optimal body temperature for the tuatara is from , the lowest of any reptile. The body temperature of tuatara is lower than that of other reptiles, ranging from over a day, whereas most reptiles have body temperatures around . The low body temperature results in a slower metabolism.
Ecology
Burrowing seabirds such as petrels, prions, and shearwaters share the tuatara's island habitat during the birds' nesting seasons. The tuatara use the birds' burrows for shelter when available, or dig their own. The seabirds' guano helps to maintain invertebrate populations on which tuatara predominantly prey, including beetles, crickets, spiders, wētās, earthworms, and snails. Their diets also consist of frogs, lizards, and bird's eggs and chicks. Young tuatara are also occasionally cannibalized. The diet of the tuatara varies seasonally, and they consume mainly fairy prions and their eggs in the summer. In total darkness no feeding attempt was observed, and the lowest light intensity at which an attempt to snatch a beetle was observed occurred under 0.0125 lux. The eggs and young of seabirds that are seasonally available as food for tuatara may provide beneficial fatty acids. Tuatara of both sexes defend territories, and will threaten and eventually bite intruders. The bite can cause serious injury. Tuatara will bite when approached, and will not let go easily. Female tuatara rarely exhibit parental behaviour by guarding nests on islands with high rodent populations.
Tuataras are parasitised by the tuatara tick (Archaeocroton sphenodonti), a tick that directly depends on tuataras. These ticks tend to be more prevalent on larger males, as they have larger home ranges than smaller and female tuatara and interact with other tuatara more in territorial displays.
Reproduction
Tuatara reproduce very slowly, taking 10 to 20 years to reach sexual maturity. Though their reproduction rate is slow, tuatara sperm swim two to four times faster than that of any other reptile studied to date. Mating occurs in midsummer; females mate and lay eggs once every four years. During courtship, a male makes his skin darker, raises his crests, and parades toward the female. He slowly walks in circles around the female with stiffened legs. The female will either allow the male to mount her, or retreat to her burrow. Males do not have a penis, only rudimentary hemipenes, meaning that no intromittent organ is used to deliver sperm to the female during copulation. Instead, the male lifts the tail of the female and places his vent over hers. This process is sometimes referred to as a "cloacal kiss". The sperm is then transferred into the female, much like the mating process in birds. Along with birds, the tuatara is one of the few members of Amniota to have lost the ancestral penis.
Tuatara eggs have a soft, parchment-like 0.2 mm thick shell that consists of calcite crystals embedded in a matrix of fibrous layers. It takes the females between one and three years to provide eggs with yolk, and up to seven months to form the shell. It then takes between 12 and 15 months from copulation to hatching. This means reproduction occurs at two- to five-year intervals, the slowest in any reptile. Survival of embryos has also been linked to having more success in moist conditions. Wild tuatara are known to be still reproducing at about 60 years of age; "Henry", a male tuatara at Southland Museum in Invercargill, New Zealand, became a father (possibly for the first time) on 23 January 2009, at age 111, with an 80 year-old female.
The sex of a hatchling depends on the temperature of the egg, with warmer eggs tending to produce male tuatara, and cooler eggs producing females. Eggs incubated at have an equal chance of being male or female. However, at , 80% are likely to be males, and at , 80% are likely to be females; at all hatchlings will be females. Some evidence indicates sex determination in tuatara is determined by both genetic and environmental factors.
Tuatara probably have the slowest growth rates of any reptile, continuing to grow larger for the first 35 years of their lives. The average lifespan is about 60 years, but they can live to be well over 100 years old; tuatara may be the reptile with the second-longest lifespan after tortoises. Some experts believe that captive tuatara could live as long as 200 years. This may be related to genes that offer protection against reactive oxygen species. The tuatara genome has 26 genes that encode selenoproteins and 4 selenocysteine-specific tRNA genes. In humans, selenoproteins function in antioxidation, redox regulation and the synthesis of thyroid hormones. It has not been fully demonstrated, but these genes may be related to the longevity of this animal, or they may have emerged as a result of the low levels of selenium and other trace elements in New Zealand terrestrial systems.
Genomic characteristics
The most abundant LINE element in the tuatara genome is L2 (10%). Most copies are interspersed and may remain active. The longest L2 element found is 4 kb long, and 83% of the sequences had ORF2p completely intact. The CR1 element is the second most repeated (4%). Phylogenetic analysis shows that these sequences are very different from those found in related groups such as lizards. Finally, less than 1% of the elements belong to L1, a low percentage given that L1 elements tend to predominate in placental mammals. Usually, in other amniote genomes, the predominant LINE elements are CR1, contrary to what is seen in the tuatara. This suggests that the repeat landscape of early sauropsid genomes may have been very different from that of mammals, birds and lizards.
The genes of the major histocompatibility complex (MHC) are known to play roles in disease resistance, mate choice, and kin recognition in various vertebrate species. Among known vertebrate genomes, MHCs are considered one of the most polymorphic. In the tuatara, 56 MHC genes have been identified, some of which are similar to MHCs of amphibians and mammals. Most MHC genes annotated in the tuatara genome are highly conserved; however, large genomic rearrangements are observed across distantly related lepidosaur lineages.
Many of the SINE elements that have been analysed are present in all amniotes; most are mammalian-wide interspersed repeats (MIRs), and the diversity of MIR subfamilies is the highest studied so far in an amniote. Sixteen families of recently active SINEs have also been identified.
The tuatara has 24 unique families of DNA transposons, and at least 30 subfamilies were recently active. This diversity is greater than that found in other amniotes; in addition, thousands of identical copies of these transposons have been identified, suggesting to researchers that there is recent activity.
The tuatara genome is the second largest known among reptiles; only the Greek tortoise genome is larger. Around 7,500 LTR elements have been identified, including 450 endogenous retroviruses (ERVs). Studies of other sauropsids have recognized similar numbers, but the tuatara genome also contains a very old retroviral clade known as Spumavirus.
More than 8,000 non-coding RNA-related elements have been identified in the tuatara genome, of which the vast majority, about 6,900, are derived from recently active transposable elements. The rest are related to ribosomal, spliceosomal and signal recognition particle RNA.
The mitochondrial genome of the genus Sphenodon is approximately 18,000 bp in size and consists of 13 protein-coding genes, 2 ribosomal RNA and 22 transfer RNA genes.
DNA methylation is a very common modification in animals, and the distribution of CpG sites within genomes affects this methylation. In the tuatara genome, 81% of CpG sites have been found to be methylated. Recent publications propose that this high level of methylation may be due to the amount of repetitive elements in the genome of this animal. This pattern is closer to that of organisms such as the zebrafish (about 78%) than to humans (about 70%).
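The methylation figure above is simply the fraction of assayed CpG sites that carry a methyl mark. As an illustration only, the minimal sketch below shows how such a genome-wide percentage can be computed from a reference sequence and a set of per-site methylation calls; the sequence and the calls here are hypothetical toy data, not output from the tuatara genome project.

```python
# Minimal sketch: computing a genome-wide CpG methylation percentage.
# The sequence and the methylation calls below are hypothetical examples,
# not data from the tuatara genome study.

def cpg_positions(seq: str):
    """Return 0-based positions of the C in every CpG dinucleotide."""
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

def methylation_percent(seq: str, methylated: set) -> float:
    """Percentage of CpG sites whose C position appears in `methylated`."""
    sites = cpg_positions(seq)
    if not sites:
        return 0.0
    return 100.0 * sum(1 for p in sites if p in methylated) / len(sites)

if __name__ == "__main__":
    genome = "ACGTCGGGCGATCGTTACGCG"   # toy sequence
    calls = {1, 4, 8, 17}              # toy positions called methylated
    print(f"{methylation_percent(genome, calls):.1f}% of CpG sites methylated")
```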
Conservation
Tuatara are absolutely protected under New Zealand's Wildlife Act 1953. The species is also listed under Appendix I of the Convention on International Trade in Endangered Species (CITES) meaning commercial international trade in wild sourced specimens is prohibited and all other international trade (including in parts and derivatives) is regulated by the CITES permit system.
Distribution and threats
Tuatara were once widespread on New Zealand's main North and South Islands, where subfossil remains have been found in sand dunes, caves, and Māori middens. Wiped out from the main islands before European settlement, they were long confined to 32 offshore islands free of mammals. The islands are difficult to get to, and are colonised by few animal species, indicating that some animals absent from these islands may have caused tuatara to disappear from the mainland. However, kiore (Polynesian rats) had recently become established on several of the islands, and tuatara were persisting, but not breeding, on these islands. Additionally, tuatara were much rarer on the rat-inhabited islands. Prior to conservation work, 25% of the distinct tuatara populations had become extinct in the past century.
The recent discovery of a tuatara hatchling on the mainland indicates that attempts to re-establish a breeding population on the New Zealand mainland have had some success. The total population of tuatara is estimated to be between 60,000 and 100,000.
Climate change
Tuatara have temperature-dependent sex determination, meaning that the temperature of the egg determines the sex of the animal. For tuatara, lower egg incubation temperatures lead to females while higher temperatures lead to males. Since global temperatures are increasing, climate change may be skewing the male-to-female ratio of tuatara. Current solutions to this potential future threat are the selective removal of adults and the incubation of eggs.
Eradication of rats
Tuatara were removed from Stanley, Red Mercury and Cuvier Islands in 1990 and 1991, and maintained in captivity to allow Polynesian rats to be eradicated on those islands. All three populations bred in captivity, and after successful eradication of the rats, all individuals, including the new juveniles, were returned to their islands of origin. In the 1991–92 season, Little Barrier Island was found to hold only eight tuatara, which were taken into in situ captivity, where females produced 42 eggs, which were incubated at Victoria University. The resulting offspring were subsequently held in an enclosure on the island, then released into the wild in 2006 after rats were eradicated there.
In the Hen and Chicken Islands, Polynesian rats were eradicated on Whatupuke in 1993, Lady Alice Island in 1994, and Coppermine Island in 1997. Following this program, juveniles have once again been seen on the latter three islands. In contrast, rats persist on Hen Island of the same group, and no juvenile tuatara have been seen there as of 2001. In the Alderman Islands, Middle Chain Island holds no tuatara, but it is considered possible for rats to swim between Middle Chain and other islands that do hold tuatara, and the rats were eradicated in 1992 to prevent this. Another rodent eradication was carried out on the Rangitoto Islands east of D'Urville Island, to prepare for the release of 432 Cook Strait tuatara juveniles in 2004, which were being raised at Victoria University as of 2001.
Brothers Island tuatara
Sphenodon punctatus guntheri is present naturally on one small island with a population of approximately 400. In 1995, 50 juvenile and 18 adult Brothers Island tuatara were moved to Titi Island in Cook Strait, and their establishment monitored. Two years later, more than half of the animals had been seen again and of those all but one had gained weight. In 1998, 34 juveniles from captive breeding and 20 wild-caught adults were similarly transferred to Matiu/Somes Island, a more publicly accessible location in Wellington Harbour. The captive juveniles were from induced layings from wild females.
In late October 2007, 50 tuatara collected as eggs from North Brother Island and hatched at Victoria University were being released onto Long Island in the outer Marlborough Sounds. The animals had been cared for at Wellington Zoo for the previous five years and had been kept in secret in a specially built enclosure at the zoo, off display.
There is another out-of-country population of Brothers Island tuatara that was given to the San Diego Zoological Society and is housed off-display at the San Diego Zoo facility in Balboa Park. No successful reproductive efforts have been reported yet.
Northern tuatara
S. punctatus punctatus naturally occurs on 29 islands, and its population is estimated to be over 60,000 individuals. In 1996, 32 adult northern tuatara were moved from Moutoki Island to Moutohora. The carrying capacity of Moutohora is estimated at 8,500 individuals, and the island could allow public viewing of wild tuatara. In 2003, 60 northern tuatara were introduced to Tiritiri Matangi Island from Middle Island in the Mercury group. Visitors to the island occasionally see them basking in the sun.
A mainland release of S.p. punctatus occurred in 2005 in the heavily fenced and monitored Karori Sanctuary. The second mainland release took place in October 2007, when a further 130 were transferred from Stephens Island to the Karori Sanctuary. In early 2009, the first recorded wild-born offspring were observed.
Captive breeding
The first successful breeding of tuatara in captivity is believed to have been achieved by Sir Algernon Thomas at either his University offices or residence in Symonds Street in the late 1880s or his new home, Trewithiel, in Mount Eden in the early 1890s.
Several tuatara breeding programmes are active in New Zealand. Southland Museum and Art Gallery in Invercargill was the first institution to have a tuatara breeding programme; starting in 1986 they bred S. punctatus and have focused on S. guntheri more recently.
Hamilton Zoo, Auckland Zoo and Wellington Zoo also breed tuatara for release into the wild. At Auckland Zoo in the 1990s it was discovered that tuatara have temperature-dependent sex determination.
The Victoria University of Wellington maintains a research programme into the captive breeding of tuatara, and the Pūkaha / Mount Bruce National Wildlife Centre keeps a pair and a juvenile.
The WildNZ Trust has a tuatara breeding enclosure at Ruawai. One notable captive breeding success story took place in January 2009, when all 11 eggs belonging to 110-year-old tuatara Henry and 80-year-old tuatara Mildred hatched. This story is especially remarkable as Henry required surgery to remove a cancerous tumour in order to successfully breed.
In January 2016, Chester Zoo, England, announced that they succeeded in breeding the tuatara in captivity for the first time outside its homeland.
Cultural significance
Tuatara feature in a number of indigenous legends, and are held as ariki (God forms). Tuatara are regarded as the messengers of Whiro, the god of death and disaster, and Māori women are forbidden to eat them. Tuatara also indicate tapu (the borders of what is sacred and restricted), beyond which there is mana, meaning there could be serious consequences if that boundary is crossed. Māori women would sometimes tattoo images of lizards, some of which may represent tuatara, near their genitals. Today, tuatara are regarded as a taonga (special treasure) along with being viewed as the kaitiaki (guardian) of knowledge.
The tuatara was featured on one side of the New Zealand five-cent coin, which was phased out in October 2006. Tuatara was also the name of the Journal of the Biological Society of Victoria University College and subsequently Victoria University of Wellington, published from 1947 until 1993. It has now been digitised by the New Zealand Electronic Text Centre, also at Victoria.
In popular culture
A tuatara named "Tua" is prominently featured in the 2017 novel Turtles All the Way Down by John Green.
There is a brand of New Zealand craft beer named after the Tuatara which particularly references the third eye in its advertising.
In the season one finale of Abbott Elementary an old tuatara named Duster is used to represent themes of ageing and transition.
In the 2023 animated movie Leo, the main character is a tuatara named Leo.
| Biology and health sciences | Reptiles | null |
30791 | https://en.wikipedia.org/wiki/Polytetrafluoroethylene | Polytetrafluoroethylene | Polytetrafluoroethylene (PTFE) is a synthetic fluoropolymer of tetrafluoroethylene, and has numerous applications because it is chemically inert. The commonly known brand name of PTFE-based composition is Teflon by Chemours, a spin-off from DuPont, which originally invented the compound in 1938.
Polytetrafluoroethylene is a fluorocarbon solid, as it is a high-molecular-weight polymer consisting wholly of carbon and fluorine. PTFE is hydrophobic: neither water nor water-containing substances wet PTFE, as fluorocarbons exhibit only small London dispersion forces due to the low electric polarizability of fluorine. PTFE has one of the lowest coefficients of friction of any solid.
Polytetrafluoroethylene is used as a non-stick coating for pans and other cookware. It is non-reactive, partly because of the strength of carbon–fluorine bonds, so it is often used in containers and pipework for reactive and corrosive chemicals. Where used as a lubricant, PTFE reduces friction, wear, and energy consumption of machinery. It is used as a graft material in surgery and as a coating on catheters.
PTFE and chemicals used in its production are some of the best-known and widely applied PFAS, which are persistent organic pollutants. PTFE occupies more than half of all fluoropolymer production, followed by polyvinylidene fluoride (PVdF).
For decades, DuPont used perfluorooctanoic acid (PFOA, or C8) during production of PTFE, later discontinuing its use due to legal actions over ecotoxicological and health effects of exposure to PFOA. DuPont's spin-off Chemours today manufactures PTFE using an alternative chemical it calls GenX, another PFAS. Although GenX was designed to be less persistent in the environment compared to PFOA, it has proven to be a "regrettable substitute". Its effects may be equally harmful or even more detrimental than those of the chemical it was meant to replace.
History
Polytetrafluoroethylene (PTFE) was accidentally discovered in 1938 by Roy J. Plunkett while he was working at the Chambers Works plant (now operated by Chemours) in New Jersey for DuPont. A team of DuPont chemists was attempting to make a new chlorofluorocarbon refrigerant from tetrafluoroethylene. The gas in its pressure bottle stopped flowing before the bottle's weight had dropped to the point signaling "empty". John J. Beall, a chemist, noticed a weight differential in his test cylinder and brought it to the attention of Roy Plunkett. The chemists in the lab sawed the bottle apart and found the bottle's interior coated with a waxy white material that was oddly slippery. Analysis showed that it was polymerized perfluoroethylene, with the iron from the inside of the container having acted as a catalyst at high pressure. Kinetic Chemicals patented the new fluorinated plastic (analogous to the already known polyethylene) in 1941, and registered the Teflon trademark in 1945.
By 1948, DuPont, which founded Kinetic Chemicals in partnership with General Motors, was producing over of Teflon-brand polytetrafluoroethylene per year in Parkersburg, West Virginia. An early use was in the Manhattan Project as a material to coat valves and seals in the pipes holding highly reactive uranium hexafluoride at the vast K-25 uranium enrichment plant in Oak Ridge, Tennessee.
In 1954, Colette Grégoire urged her husband, the French engineer Marc Grégoire, to try the material he had been using on fishing tackle on her cooking pans. He subsequently created the first PTFE-coated, non-stick pans under the brand name Tefal (combining "Tef" from "Teflon" and "al" from aluminium). In the United States, Marion A. Trozzolo, who had been using the substance on scientific utensils, marketed the first US-made PTFE-coated pan, "The Happy Pan", in 1961. Non-stick cookware has since become a common household product, now offered by hundreds of manufacturers across the world.
The brand name Zepel was used for promoting its stain-resistance and water-resistance when applied to fabrics.
In the 1990s, it was found that PTFE could be radiation cross-linked above its melting point in an oxygen-free environment. Electron beam processing is one example of radiation processing. Cross-linked PTFE has improved high-temperature mechanical properties and radiation stability. That was significant because, for many years, irradiation at ambient conditions has been used to break down PTFE for recycling. This radiation-induced chain scission allows it to be more easily reground and reused.
Corona discharge treatment of the surface to increase the energy and improve adhesion has been reported.
Production
PTFE is produced by free-radical polymerization of tetrafluoroethylene. The net equation is as follows:
n F2C=CF2 → −(F2C−CF2)n−
Because tetrafluoroethylene can explosively decompose to tetrafluoromethane (CF4) and carbon, a special apparatus is required for the polymerization to prevent hot spots that might initiate this dangerous side reaction. The process is typically initiated with persulfate, which homolyzes to generate sulfate radicals:
[O3SO−OSO3]2− ⇌ 2 SO4•−
The resulting polymer is terminated with sulfate ester groups, which can be hydrolyzed to give OH end-groups.
Granular PTFE is produced via suspension polymerization, where PTFE is suspended in an aqueous medium primarily via agitation and sometimes with the use of a surfactant. PTFE is also synthesized via emulsion polymerization, where a surfactant is the primary means of keeping PTFE in an aqueous medium. Surfactants in the past have included perfluorooctanoic acid (PFOA) and perfluorooctanesulfonic acid (PFOS). More recently, Perfluoro 3,6 dioxaoctanoic acid (PFO2OA) and FRD-903 (GenX) are being used as alternatives.
Properties
PTFE is a thermoplastic polymer, which is a white solid at room temperature, with a density of about 2200 kg/m3 and a melting point of . It maintains high strength, toughness and self-lubrication at low temperatures down to , and good flexibility at temperatures above . PTFE gains its properties from the aggregate effect of carbon-fluorine bonds, as do all fluorocarbons. The only chemicals known to affect these carbon-fluorine bonds are highly reactive metals like the alkali metals, at higher temperatures such metals as aluminium and magnesium, and fluorinating agents such as xenon difluoride and cobalt(III) fluoride. At temperatures above PTFE undergoes depolymerization. However, it begins to decompose at about through , and pyrolysis occurs at temperatures above .
The coefficient of friction of plastics is usually measured against polished steel. PTFE's coefficient of friction is 0.05 to 0.10, which is the third-lowest of any known solid material (aluminium magnesium boride (BAM) being the lowest, with a coefficient of friction of 0.02; diamond-like carbon being second-lowest at 0.05). PTFE's resistance to van der Waals forces means that it is the only known surface to which a gecko cannot stick. In addition, PTFE can be used to prevent insects from climbing up surfaces painted with the material. For example, PTFE is used to prevent ants from climbing out of formicaria. There are surface treatments for PTFE that alter the surface to allow adhesion to other materials.
Because of its chemical and thermal properties, PTFE is often used as a gasket material within industries that require resistance to aggressive chemicals such as pharmaceuticals or chemical processing. However, until the 1990s, PTFE was not known to crosslink like an elastomer, due to its chemical inertness. Therefore, it has no "memory" and is subject to creep. Because of the propensity to creep, the long-term performance of such seals is worse than for elastomers that exhibit zero, or near-zero, levels of creep. In critical applications, Belleville washers are often used to apply continuous force to PTFE gaskets, thereby ensuring a minimal loss of performance over the lifetime of the gasket.
PTFE is an ultraviolet (UV) transparent polymer. However, when exposed to an excimer laser beam it severely degrades due to heterogeneous photothermal effect.
Processing
Processing PTFE can be difficult and expensive because its high melting temperature, , is above its decomposition temperature. Even when molten, PTFE does not flow due to its exceedingly high melt viscosity. The viscosity and melting point can be decreased by the inclusion of small amounts of comonomers such as perfluoro(propyl vinyl ether) and hexafluoropropylene (HFP). These cause the otherwise perfectly linear PTFE chain to become branched, reducing its crystallinity.
Some PTFE parts are made by cold-moulding, a form of compression molding. Here, fine powdered PTFE is forced into a mould under high pressure (10–100 MPa). After a settling period, lasting from minutes to days, the mould is heated at , allowing the fine particles to fuse (sinter) into a single mass.
Applications and uses
Wire insulation, electronics
The most common use of PTFE, consuming about 50% of production, is for the insulation of wiring in aerospace and computer applications (e.g. hookup wire, coaxial cables). This application exploits the fact that PTFE has excellent dielectric properties, specifically low group velocity dispersion, especially at high radio frequencies, making it suitable for use as an excellent insulator in connector assemblies and cables, and in printed circuit boards used at microwave frequencies. Combined with its high melting temperature, this makes PTFE the material of choice as a high-performance substitute for the weaker, higher dispersion and lower-melting-point polyethylene commonly used in low-cost applications.
Bearings and seals
In industrial applications, owing to its low friction, PTFE is used for plain bearings, gears, slide plates, seals, gaskets, bushings, and more applications with sliding action of parts, where it outperforms acetal and nylon.
Electrets
Its extremely high bulk resistivity makes it an ideal material for fabricating long-life electrets, the electrostatic analogues of permanent magnets.
Composites
PTFE film is also widely used in the production of carbon fiber composites as well as fiberglass composites, notably in the aerospace industry. PTFE film is used as a barrier between the carbon or fiberglass part being built and the breather and bagging materials, and to encapsulate the bondment when debulking (vacuum removal of air from between layers of laid-up plies of material) and when curing the composite, usually in an autoclave. The PTFE, used here as a film, prevents the non-production materials from sticking to the part being built, which is sticky due to the carbon-graphite or fiberglass plies being pre-impregnated with bismaleimide resin. Non-production materials such as Teflon, Airweave Breather, and the bag itself would be considered F.O.D. (foreign object debris/damage) if left in the layup.
Gore-Tex is a brand of expanded PTFE (ePTFE), a material incorporating a fluoropolymer membrane with micropores. The roof of the Hubert H. Humphrey Metrodome in Minneapolis, US, was one of the largest applications of PTFE coatings. of the material was used in the creation of the white double-layered PTFE-coated fiberglass dome.
Chemically inert liners
Because of its extreme non-reactivity and high temperature rating, PTFE is often used as the liner in hose assemblies, expansion joints, and in industrial pipe lines, particularly in applications using acids, alkalis, or other chemicals. Its frictionless qualities allow improved flow of highly viscous liquids and for uses in applications such as brake hoses.
Tensioned membrane structures
PTFE architectural membranes are created by coating a woven glass-fibre base cloth with PTFE, forming one of the strongest and most durable materials used in tensile structures. Some notable structures featuring PTFE-tensioned membranes include The O2 Arena in London, Moses Mabhida Stadium in South Africa, Metropolitano Stadium in Spain and the Sydney Football Stadium Roof in Australia.
Musical instruments
PTFE is often found in musical instrument lubrication products, most commonly valve oil.
Lubricants
PTFE is used in some aerosol lubricant sprays, including in micronized and polarized form. It is notable for its extremely low coefficient of friction, its hydrophobicity (which serves to inhibit rust), and for the dry film it forms after application, which allows it to resist collecting particles that might otherwise form an abrasive paste. Brands include GT85, Tri-Flow and WD-40 Specialist.
Kitchenware
PTFE is best known for its use in coating non-stick frying pans and other cookware, as it is hydrophobic and possesses fairly high heat resistance.
The sole plates of some clothes irons are coated with PTFE.
Others
Other niche applications include:
It is often found in ski bindings as a non-mechanical AFD (anti-friction device)
It can be stretched to contain small pores of varying sizes and is then placed between fabric layers to make a waterproof, breathable fabric in outdoor apparel.
It is used widely as a fabric protector to repel stains on formal school-wear, like uniform blazers.
It is frequently used as a lubricant to prevent captive insects and other arthropods from escaping.
It is used as a coating for medical and healthcare applications formulated to provide strength and heat resistance to surgical devices and other medical equipment.
It is used as a film interface patch for sports and medical applications, featuring a pressure-sensitive adhesive backing, which is installed in strategic high friction areas of footwear, insoles, ankle-foot orthosis, and other medical devices to prevent and relieve friction-induced blisters, calluses and foot ulceration.
Expanded PTFE membranes have been used in trials to assist trabeculectomy surgery to treat glaucoma.
Powdered PTFE is used in pyrotechnic compositions as an oxidizer with powdered metals such as aluminium and magnesium. Upon ignition, these mixtures form carbonaceous soot and the corresponding metal fluoride, and release large amounts of heat. They are used in infrared decoy flares and as igniters for solid-fuel rocket propellants. Aluminium and PTFE is also used in some thermobaric fuel compositions.
Powdered PTFE is used in a suspension with a low-viscosity, azeotropic mixture of siloxane ethers to create a lubricant for use in twisty puzzles.
In optical radiometry, sheets of PTFE are used as measuring heads in spectroradiometers and broadband radiometers (e.g., illuminance meters and UV radiometers) due to PTFE's ability to diffuse transmitted light nearly perfectly. Moreover, the optical properties of PTFE stay constant over a wide range of wavelengths, from UV down to near infrared. In this region, the ratio of its regular transmittance to diffuse transmittance is negligibly small, so light transmitted through a PTFE-sheet diffuser radiates according to Lambert's cosine law. Thus PTFE enables a cosinusoidal angular response for a detector measuring the power of optical radiation at a surface, e.g. in solar irradiance measurements.
Teflon-coated bullets are coated with PTFE to reduce wear on the rifling of firearms that uncoated projectiles would cause. PTFE itself does not give a projectile an armor-piercing property.
Its high corrosion resistance makes PTFE useful in laboratory environments, where it is used for lining containers, as a coating for magnetic stirrers, and as tubing for highly corrosive chemicals such as hydrofluoric acid, which will dissolve glass containers. It is used in containers for storing fluoroantimonic acid, a superacid.
PTFE tubes are used in gas-gas heat exchangers in gas cleaning of waste incinerators. Unit power capacity is typically several megawatts.
PTFE is widely used as a thread seal tape in plumbing applications, largely replacing paste thread dope.
PTFE membrane filters are among the most efficient industrial air filters. PTFE-coated filters are often used in dust collection systems to collect particulate matter from air streams in applications involving high temperatures and high particulate loads such as coal-fired power plants, cement production and steel foundries.
PTFE grafts can be used to bypass stenotic arteries in peripheral vascular disease if a suitable autologous vein graft is not available.
Many bicycle lubricants and greases contain PTFE and are used on chains and other moving parts subjected to frictional forces (such as hub bearings).
PTFE is used for some types of dental floss.
PTFE can also be used when placing dental fillings, to isolate the contacts of the adjacent tooth so the restorative materials will not stick to the adjacent tooth.
PTFE sheets are used in the production of butane hash oil due to its non-stick properties and resistance to non-polar solvents.
PTFE, associated with a slightly textured laminate, makes the plain bearing system of a Dobsonian telescope.
PTFE is widely used as a non-stick coating for food processing equipment: dough hoppers, mixing bowls, conveyor systems, rollers, and chutes. PTFE can also be reinforced where abrasion is present – for equipment processing seeded or grainy dough, for example.
PTFE has been experimented with for electroless nickel plating.
PTFE tubing is used for Bowden tubing in 3D printers because its low friction allows the extruder stepper motor to push filament through it more easily.
PTFE is commonly used in aftermarket add-on mouse feet for gaming mice to reduce friction of the mouse against the mouse pad, resulting in a smoother glide.
PTFE foils are commonly used in laser printers, in the fuser unit, wrapped around the heater element(s) and also on the opposing pressure roller, to prevent the printed paper and waste toner from sticking.
PTFE is also used to make body jewellery, as it is much safer to wear than materials such as acrylic, which can release toxic substances into the body at 26.6 °C, whereas PTFE does not do so until 650–700 °C.
PTFE is used to make bookbinding tools for folding, scoring and separating sheets of paper. These are typically referred to as Teflon bone folders.
PTFE is commonly used for the tip of desoldering pumps due to its high melting temperature.
Safety
While PTFE is stable at lower temperatures, it begins to deteriorate at temperatures of about , it decomposes above , and pyrolysis occurs at temperatures above . The main decomposition products are fluorocarbon gases and a sublimate, including tetrafluoroethylene (TFE) and difluorocarbene radicals (RCF2).
An animal study conducted in 1955 concluded that it is unlikely that these products would be generated in amounts significant to health at temperatures below . Above those temperatures the degradation by-products can be lethal to birds, and can cause flu-like symptoms in humans (polymer fume fever), although in humans those symptoms disappear within a day or two of being moved to fresh air.
Most cases of polymer fume fever in humans occur due to smoking PTFE-contaminated tobacco, although cases have occurred in people who have welded near PTFE components. PTFE-coated cookware is unlikely to reach dangerous temperatures with normal use, as meat is usually fried between , and most cooking oils (except refined safflower and avocado oils) start to smoke before a temperature of is reached. A 1973 study by DuPont's Haskell Laboratory found that a 4-hour exposure to the fumes emitted by PTFE cookware heated to was lethal for parakeets, although that was a higher temperature than the required for fumes from pyrolyzed butter to be lethal to the birds.
Perfluorooctanoic acid (PFOA), a chemical formerly used in the manufacture of PTFE products such as non-stick coated cookware, can be carcinogenic for people who are exposed to it (see Ecotoxicity). Concerning levels of PFOA have been found in the blood of people who work in or live near factories where the chemical is used, and in people regularly exposed to PFOA-containing products such as some ski waxes and stain-resistant fabric coatings, but non-stick cookware was not found to be a major source of exposure, as the PFOA is burned off during the manufacturing process and not present in the finished product. Non-stick coated cookware has not been manufactured using PFOA since 2013, and PFOA is no longer being made in the United States.
Ecotoxicity
Living Building Challenge
PTFE was added to the Living Building Challenge (LBC) Red List in 2016. The Red List bans substances that are prevalent in the building industry and pose serious risks to human health and the environment from construction seeking to meet the LBC criteria.
Trifluoroacetate
Sodium trifluoroacetate and the similar compound sodium chlorodifluoroacetate can both be generated when PTFE undergoes thermolysis, along with longer-chain polyfluoro- and/or polychlorofluoro- (C3-C14) carboxylic acids which may be equally persistent. These products can accumulate in evaporative wetlands and have been found in the roots and seeds of wetland plant species, but have not been observed to have an adverse impact on plant health or germination success.
PFOA
Perfluorooctanoic acid (PFOA, or C8) has been used as a surfactant in the emulsion polymerization of PTFE, although several manufacturers have entirely discontinued its use.
PFOA persists indefinitely in the environment. PFOA has been detected in the blood of many individuals of the general US population in the low and sub-parts per billion range, and levels are higher in chemical plant employees and surrounding subpopulations. PFOA and perfluorooctanesulfonic acid (PFOS) have been estimated to be in every American person's blood stream in the parts per billion range, though those concentrations have decreased by 70% for PFOA and 84% for PFOS between 1999 and 2014, which coincides with the end of the production and phase out of PFOA and PFOS in the US. The general population has been exposed to PFOA through massive dumping of C8 waste into the ocean and near the Ohio River Valley. PFOA has been detected in industrial waste, stain-resistant carpets, carpet cleaning liquids, house dust, microwave popcorn bags, water, food and PTFE cookware.
As a result of a class-action lawsuit and community settlement with DuPont, three epidemiologists conducted studies on the population of Parkersburg, WV surrounding the (former DuPont) Chemours Washington Works chemical plant that was exposed to PFOA at levels greater than in the general population. The studies concluded that there was an association between PFOA exposure and six health outcomes: kidney cancer, testicular cancer, ulcerative colitis, thyroid disease, hypercholesterolemia (high cholesterol), and gestational hypertension (pregnancy-induced high blood pressure).
Overall, PTFE cookware is considered a minor exposure pathway to PFOA.
GenX
As a result of the lawsuits concerning the PFOA class-action lawsuit, DuPont began to use GenX, a similarly fluorinated compound, as a replacement for perfluorooctanoic acid in the manufacture of fluoropolymers, such as Teflon-brand PTFE. However, the EPA has classified GenX as more toxic than PFOA and it has proven to be a "regrettable substitute"; its effects may be equally harmful or even more detrimental than those of the chemical it was meant to replace.
The chemicals are manufactured by Chemours, a corporate spin-off of DuPont, in Fayetteville, North Carolina. Fayetteville Works was the site where DuPont began manufacture of PFOA after the lawsuit in Parkersburg WV halted their production there. When EPA asked companies to voluntarily phase out PFOA production, it was replaced by GenX in Fayetteville Works. In June of 2017, The Wilmington Star-News broke the story that GenX was found in the Cape Fear River – the drinking water supply for 500,000 people. The source of the pollution was determined to be the Fayetteville Works site, which had been run by DuPont since its founding in 1971 and then managed by DuPont spinoff, The Chemours Company, since 2015. The water utility confirmed they had no ability to filter these chemicals from the drinking water.
The North Carolina Department of Environmental Quality (NC DEQ) records indicate that DuPont started releasing PFAS into the area beginning in 1976 with the production of Nafion, and that PFAS including GenX had been released as a byproduct of the production of vinyl ethers since 1980, exposing the Cape Fear Basin for decades. A small nonprofit called Cape Fear River Watch sued NC DEQ for not taking swifter and stronger action, and sued the polluter, Chemours, for violations of the Clean Water Act and the Toxic Substances Control Act. The result was a Consent Order, signed February 25, 2019 by Cape Fear River Watch, NC DEQ, and Chemours. The order has required Chemours to stop wastewater discharge, air emissions and groundwater discharge, to provide sampling and filtration options to well users, and to carry out sampling that proved there were upwards of 300 distinct PFAS compounds being released from Fayetteville Works.
Similar polymers
The Teflon trade name is also used for other polymers with similar compositions:
Perfluoroalkoxy alkane (PFA)
Fluorinated ethylene propylene (FEP)
These retain the useful PTFE properties of low friction and nonreactivity, but are also more easily formable. For example, FEP is softer than PTFE and melts at ; it is also highly transparent and resistant to sunlight.
| Physical sciences | Polymers | Chemistry |
30802 | https://en.wikipedia.org/wiki/Tragedy%20of%20the%20commons | Tragedy of the commons | The tragedy of the commons is a concept which states that if many people enjoy unfettered access to a finite, valuable resource, such as a pasture, they will tend to overuse it and may end up destroying its value altogether. Even if some users exercised voluntary restraint, the other users would merely replace them, the predictable result being a "tragedy" for all. The concept has been widely discussed, and criticised, in economics, ecology and other sciences.
The metaphorical term is the title of a 1968 essay by ecologist Garrett Hardin. The concept itself did not originate with Hardin but rather extends back to classical antiquity, being discussed by Aristotle. The principal concern of Hardin's essay was overpopulation of the planet. To prevent the inevitable tragedy (he argued) it was necessary to reject the principle (supposedly enshrined in the Universal Declaration of Human Rights) according to which every family has a right to choose the number of its offspring, and to replace it by "mutual coercion, mutually agreed upon".
Some scholars have argued that over-exploitation of the common resource is by no means inevitable, since the individuals concerned may be able to achieve mutual restraint by consensus. Others have contended that the metaphor is inapposite because its exemplar – unfettered access to common land – did not exist historically, the right to exploit common land being controlled by law. The work of Elinor Ostrom, who received the Nobel Prize in Economics, is seen by some economists as having refuted Hardin's claims. Hardin's views on over-population have been criticised as simplistic and racist.
Expositions
Classical
The concept of unrestricted-access resources becoming spent, where personal use does not incur personal expense, has been discussed for millennia. Aristotle wrote that "That which is common to the greatest number gets the least amount of care. Men pay most attention to what is their own: they care less for what is common."
Lloyd's pamphlet
In 1833, the English economist William Forster Lloyd published "Two Lectures on the Checks to Population", a pamphlet that included a hypothetical example of over-use of a common resource. This was the situation of cattle herders sharing a common parcel of land on which they were each entitled to let their cows graze.
He postulated that if a herder put more than his allotted number of cattle on the common, overgrazing could result. For each additional animal, a herder could receive additional benefits, while the whole group shared the resulting damage to the commons. If all herders made this individually rational economic decision, the common could be depleted or even destroyed, to the detriment of all.
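Lloyd's reasoning can be made concrete with a toy payoff calculation. The sketch below is only an illustration under assumed numbers (the benefit per animal, the total overgrazing damage, and the number of herders are all hypothetical): each herder keeps the full benefit of an extra animal but bears only a fraction of the shared damage, so adding the animal looks individually rational even though the group as a whole loses.

```python
# Toy illustration of Lloyd's herder dilemma; all numbers are hypothetical.

HERDERS = 10          # herders sharing the common
BENEFIT = 100.0       # private gain to one herder from grazing one extra animal
DAMAGE = 300.0        # total cost of the extra animal's overgrazing, shared by all

private_gain = BENEFIT - DAMAGE / HERDERS   # what the deciding herder experiences
group_change = BENEFIT - DAMAGE             # what the group as a whole experiences

print(f"Herder's own net payoff from adding an animal: {private_gain:+.1f}")
print(f"Net payoff summed over the whole group:        {group_change:+.1f}")
# With these numbers the individual sees +70 while the group loses 200, so every
# herder has an incentive to keep adding animals until the common is degraded.
```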
Lloyd's pamphlet was written after the enclosure movement had eliminated the open field system of common property as the standard model for land exploitation in England (though there remained, and still remain, millions of acres of "common land": see below, Commons in historical reality). Carl Dahlman and others have asserted that his description was historically inaccurate, pointing to the fact that the system endured for hundreds of years without producing the disastrous effects claimed by Lloyd.
Garrett Hardin's article
In 1968, ecologist Garrett Hardin explored this social dilemma in his article "The Tragedy of the Commons", published in the journal Science. The essay derived its title from the pamphlet by Lloyd, which he cites, on the over-grazing of common land.
Hardin discussed problems that cannot be solved by technical means, as distinct from those with solutions that require "a change only in the techniques of the natural sciences, demanding little or nothing in the way of change in human values or ideas of morality". Hardin focused on human population growth, the use of the Earth's natural resources, and the welfare state.
Hardin argued that if individuals relied on themselves alone, and not on the relationship between society and man, then people would treat other people as resources, which would lead the world population to keep growing. Parents breeding excessively would leave fewer descendants because they would be unable to provide for each child adequately. Such negative feedback is found in the animal kingdom. Hardin said that if the children of improvident parents starved to death, if overbreeding was its own punishment, then there would be no public interest in controlling the breeding of families.
Political inferences
Hardin blamed the welfare state for allowing the tragedy of the commons; where the state provides for children and supports overbreeding as a fundamental human right, a Malthusian catastrophe is inevitable. Consequently, in his article, Hardin lamented a proposal to that effect from the United Nations.
In addition, Hardin also pointed out the problem of individuals acting in rational self-interest by claiming that if all members in a group used common resources for their own gain and with no regard for others, all resources would still eventually be depleted. Overall, Hardin argued against relying on conscience as a means of policing commons, suggesting that this favors selfish individuals – often known as free riders – over those who are more altruistic.
In the context of avoiding over-exploitation of common resources, Hardin concluded by restating Hegel's maxim (which was quoted by Engels), "freedom is the recognition of necessity". He suggested that "freedom" completes the tragedy of the commons. By recognizing resources as commons in the first place, and by recognizing that, as such, they require management, Hardin believed that humans "can preserve and nurture other and more precious freedoms".
The "Commons" as a modern resource concept
Hardin's article marked the mainstream acceptance of the term "commons" as used to connote a shared resource. As Frank van Laerhoven and Elinor Ostrom have stated: "Prior to the publication of Hardin’s article on the tragedy of the commons (1968), titles containing the words 'the commons', 'common pool resources,' or 'common property' were very rare in the academic literature." They go on to say: "In 2002, Barrett and Mabry conducted a major survey of biologists to determine which publications in the twentieth century had become classic books or benchmark publications in biology. They report that Hardin’s 1968 article was the one having the greatest career impact on biologists and is the most frequently cited". However, the Ostroms point out that Hardin's analysis was based on crucial misconceptions about the nature of common property systems.
System archetype
In systems theory, the commons problem is one of the ten most common system archetypes. The Tragedy of the Commons archetype can be illustrated using a causal loop diagram.
Application
Metaphoric meaning
Like Lloyd and Thomas Malthus before him, Hardin was primarily interested in the problem of human population growth. But in his essay, he also focused on the use of larger (though finite) resources such as the Earth's atmosphere and oceans, as well as pointing out the "negative commons" of pollution (i.e., instead of dealing with the deliberate privatization of a positive resource, a "negative commons" deals with the deliberate commonization of a negative cost, pollution).
As a metaphor, the tragedy of the commons should not be taken too literally. The "tragedy" is not in the word's conventional or theatric sense, nor a condemnation of the processes that lead to it. Similarly, Hardin's use of "commons" has frequently been misunderstood, leading him to later remark that he should have titled his work "The Tragedy of the Unregulated Commons".
The metaphor illustrates the argument that free access and unrestricted demand for a finite resource ultimately reduces the resource through over-exploitation, temporarily or permanently. This occurs because the benefits of exploitation accrue to individuals or groups, each of whom is motivated to maximize the use of the resource to the point in which they become reliant on it, while the costs of the exploitation are borne by all those to whom the resource is available (which may be a wider class of individuals than those who are exploiting it). This, in turn, causes demand for the resource to increase, which causes the problem to snowball until the resource collapses (even if it retains a capacity to recover). The rate at which depletion of the resource is realized depends primarily on three factors: the number of users wanting to consume the common in question, the consumptive nature of their uses, and the relative robustness of the common.
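A minimal dynamic sketch of this argument is given below. It assumes a logistic-style regrowing resource and a fixed per-user harvest; the regrowth rate, carrying capacity and harvest level are invented parameters, chosen only to show how the stock holds up with few users but collapses once total demand exceeds what the resource can regenerate.

```python
# Minimal sketch of open-access depletion; all parameters are hypothetical.

def simulate(users: int, steps: int = 60, stock: float = 1000.0,
             capacity: float = 1000.0, regrowth: float = 0.25,
             harvest_per_user: float = 20.0) -> float:
    """Return the remaining stock after `steps` periods of open access."""
    for _ in range(steps):
        stock += regrowth * stock * (1.0 - stock / capacity)  # logistic regrowth
        stock -= min(stock, users * harvest_per_user)         # total harvest taken
    return stock

for n in (1, 3, 5, 8):
    print(f"{n} users -> stock after 60 periods: {simulate(n):7.1f}")
# With these numbers a few users are sustainable, but once the combined harvest
# exceeds the maximum regrowth the stock is driven toward zero.
```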
The same concept is sometimes called the "tragedy of the fishers", because fishing too many fish before or during breeding could cause stocks to plummet.
Modern commons
The tragedy of the commons can be considered in relation to environmental issues such as sustainability. The commons dilemma stands as a model for a great variety of resource problems in society today, such as water, forests, fish, and non-renewable energy sources such as oil, gas, and coal.
Hardin's model posits that the tragedy of the commons may emerge if individuals prioritize self-interest.
Another case study involves beavers in Canada, historically crucial for natives who, as stewards, organized to hunt them for food and commerce. Non-native trappers, motivated by fur prices, contributed to resource degradation, wresting control from the indigenous population. Conservation laws enacted in the 1930s in response to declining beaver populations led to the expulsion of trappers, legal acknowledgment of natives, and enforcement of customary laws. This intervention resulted in productive harvests by the 1950s.
Situations exemplifying the "tragedy of the commons" include the overfishing and destruction of the Grand Banks of Newfoundland, the destruction of salmon runs on rivers that have been dammed (most prominently in modern times on the Columbia River in the Northwest United States and historically in North Atlantic rivers), and the devastation of the sturgeon fishery (in modern Russia, but historically in the United States as well). In terms of water supply, another example is the limited water available in arid regions (e.g., the area of the Aral Sea and the Los Angeles water system supply, especially at Mono Lake and Owens Lake).
In economics, an externality is a cost or benefit that affects a party who did not choose to incur that cost or benefit. Negative externalities are a well-known feature of the "tragedy of the commons". For example, driving cars has many negative externalities; these include pollution, carbon emissions, and traffic accidents. Every time Person A gets in a car, it becomes more likely that Person Z will suffer in each of those areas. Economists often urge the government to adopt policies that "internalize" an externality.
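One standard way governments "internalize" such an externality is a corrective (Pigouvian) tax set equal to the marginal external cost. The worked sketch below uses invented linear benefit and cost figures purely for illustration; it compares the number of trips a self-interested driver would choose with the number that is best for society, and shows how a per-unit tax aligns the two.

```python
# Hypothetical worked example of internalizing an externality with a per-unit tax.
# Marginal private benefit of the q-th trip: 10 - q   (invented)
# Marginal private cost per trip:            2        (invented)
# Marginal external cost per trip:           3        (invented: pollution, accidents)

def chosen_trips(tax: float = 0.0) -> int:
    """Trips a self-interested driver takes: drive while benefit covers the cost faced."""
    q = 0
    while (10 - (q + 1)) >= 2 + tax:   # marginal benefit of the next trip vs. cost faced
        q += 1
    return q

private_optimum = chosen_trips(tax=0.0)   # driver ignores the external cost: 8 trips
social_optimum = chosen_trips(tax=3.0)    # tax equal to the external cost: 5 trips
print(f"Trips chosen with no tax:              {private_optimum}")
print(f"Trips chosen with tax = external cost: {social_optimum}")
```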
The tragedy of the commons can also refer to the idea of open data. Anonymised data are crucial for useful social research and therefore represent a public resource, or rather a common good, which is liable to exhaustion. Some feel that the law should provide a safe haven for the dissemination of research data, since it can be argued that current data protection policies overburden valuable research without mitigating realistic risks.
An expansive application of the concept can also be seen in Vyse's analysis of differences between countries in their responses to the COVID-19 pandemic. Vyse argues that those who defy public health recommendations can be thought of as spoiling a set of common goods, "the economy, the healthcare system, and the very air we breathe, for all of us". In a similar vein, it has been argued that higher sickness and mortality rates from COVID-19 in individualistic cultures with less obligatory collectivism are another instance of the "tragedy of the commons".
Tragedy of the digital commons
In the past two decades, scholars have been attempting to apply the concept of the tragedy of the commons to the digital environment. However, between scholars there are differences on some very basic notions inherent to the tragedy of the commons: the idea of finite resources and the extent of pollution. On the other hand, there seems to be some agreement on the role of the digital divide and how to solve a potential tragedy of the digital commons.
Resources
Many digital resources have properties that make them vulnerable to the tragedy of the commons, including data, virtual artifacts and even limited user attention. Closely related are the physical computational resources, such as CPU, RAM, and network bandwidth, that digital communities on shared servers rely upon and govern. Some scholars argue that digital resources are infinite, and therefore immune to the tragedy of the commons, because downloading a file does not constitute the destruction of the file in the digital environment, and because it can be replicated and disseminated throughout the digital environment. However, it can still be considered a finite resource within the context of privacy laws and regulations that limit access to it.
Finite digital resources can thus be digital commons. An example is a database that requires persistent maintenance, such as Wikipedia. As a non-profit, it survives on a network of people contributing to maintain a knowledge base without expectation of direct compensation. This digital resource can deplete, as Wikipedia may only survive if it is contributed to and used as a commons. The motivation for individuals to contribute reflects the theory: if humans act in their own immediate interest and no longer participate, the resource becomes misinformed or depleted. Arguments surrounding the regulation and mitigation requirements for digital resources may come to mirror those for natural resources.
This raises the question whether one can view access itself as a finite resource in the context of a digital environment. Some scholars argue this point, often pointing to a proxy for access that is more concrete and measurable. One such proxy is bandwidth, which can become congested when too many people try to access the digital environment. Alternatively, one can think of the network itself as a common resource which can be exhausted through overuse. Therefore, when talking about resources running out in a digital environment, it could be more useful to think in terms of the access to the digital environment being restricted in some way; this is called information entropy.
Pollution
In terms of pollution, there are some scholars who look only at the pollution that occurs in the digital environment itself. They argue that unrestricted use of digital resources can cause an overproduction of redundant data which causes noise and corrupts communication channels within the digital environment. Others argue that the pollution caused by the overuse of digital resources also causes pollution in the physical environment. They argue that unrestricted use of digital resources causes misinformation, fake news, crime, and terrorism, as well as problems of a different nature such as confusion, manipulation, insecurity, and loss of confidence.
Digital divide and solutions
Scholars disagree on the particularities underlying the tragedy of the digital commons; however, there does seem to be some agreement on the cause and the solution. The cause of the tragedy of the commons occurring in the digital environment is attributed by some scholars to the digital divide. They argue that there is too large a focus on bridging this divide and providing unrestricted access to everyone. Such a focus on increasing access without the necessary restrictions causes the exploitation of digital resources for individual self-interest that is underlying any tragedy of the commons.
In terms of the solution, scholars agree that cooperation rather than regulation is the best way to mitigate a tragedy of the digital commons. The digital world is not a closed system in which a central authority can regulate the users; as such, some scholars argue that voluntary cooperation must be fostered. This could perhaps be done through a digital governance structure that motivates multiple stakeholders to engage and collaborate in the decision-making process. Other scholars argue more in favor of formal or informal sets of rules, like a code of conduct, to promote ethical behaviour in the digital environment and foster trust. As an alternative to managing relations between people, some scholars argue that it is access itself that needs to be properly managed, which includes expansion of network capacity.
Patents and technology
Patents are effectively a limited-time exploitation monopoly given to inventors. Once the period has elapsed, the invention is in principle free to all, and many companies do indeed commercialize such products, now market-proven. However, around 50% of all patent applications do not reach successful commercialization at all, often due to immature levels of components or marketing failures by the innovators. Scholars have suggested that since investment is often connected to patentability, such inactive patents form a rapidly growing category of underprivileged technologies and ideas that, under current market conditions, are effectively unavailable for use.
Thus, "Under the current system, people are encouraged to register new patents, and are discouraged from using publicly available patents." The case might be particularly relevant to technologies that are relatively more environmentally/human damaging but also somewhat costlier than other alternatives developed contemporaneously.
Examples
More general examples (some alluded to by Hardin) of potential and actual tragedies include:
Physical resources
Uncontrolled human population growth leading to overpopulation.
Atmosphere: through the release of pollution that leads to ozone depletion, global warming, ocean acidification (by way of increased atmospheric CO2 being absorbed by the sea), and particulate pollution.
Light pollution: with the loss of the night sky for research and cultural significance, adverse effects on human, flora, and fauna health, nuisance, trespass, and the loss of enjoyment or function of private property.
Water: Water pollution, the water crisis caused by over-extraction of groundwater, and the wasting of water through overirrigation.
Forests: Frontier logging of old growth forest and slash and burn.
Energy resources and climate: Environmental residue of mining and drilling, burning of fossil fuels and consequential global warming.
Animals: Habitat destruction and poaching leading to the Holocene mass extinction.
Oceans: Overfishing
Space debris in Earth's surrounding space leading to limited locations for new satellites and the obstruction of universal observations.
Health
Antibiotics and antibiotic resistance: Misuse of antibiotics anywhere in the world, whether in human or agricultural settings, may eventually result in global antibiotic resistance, which would cause irreparable harm to societal health, seen as a common good. A survey by Kieran S. O'Brien et al. stated that many consider the misuse of antibiotics to be a case of the "tragedy of the commons"; however, the research results in this respect were inconclusive (as of 2014).
Vaccines and herd immunity: Avoiding a vaccine shot and relying on the established herd immunity instead will avoid potential vaccine risks, but if everyone does this, it will diminish herd immunity and bring risk to people who cannot receive vaccines for medical reasons. The analogy with the "tragedy of the commons" is based on the interpretation that the common good here is the pool of vaccinated people, and avoiding vaccination diminishes it.
Other
Knowledge commons encompass immaterial and collectively owned goods in the information age, including, for example:
Source code and software documentation in software projects that can get "polluted" with messy code or inaccurate information.
Skills acquisition and training, when all parties involved pass the buck on implementing it.
Application to evolutionary biology
A parallel was drawn in 2006 between the tragedy of the commons and the competing behaviour of parasites that, through acting selfishly, eventually diminish or destroy their common host. The idea has also been applied to areas such as the evolution of virulence or sexual conflict, where males may fatally harm females when competing for matings.
The idea of evolutionary suicide, where adaptation at the level of the individual causes the whole species or population to be driven extinct, can be seen as an extreme form of an evolutionary tragedy of the commons. From an evolutionary point of view, the creation of the tragedy of the commons in pathogenic microbes may provide us with advanced therapeutic methods.
Microbial ecology studies have also addressed whether resource availability modulates cooperative or competitive behaviour in bacterial populations. When resource availability is high, bacterial populations become competitive and aggressive with each other, but when environmental resources are low, they tend to be cooperative and mutualistic.
Ecological studies have hypothesised that competitive forces between animals are dominant in high-carrying-capacity zones (i.e., near the Equator), where biodiversity is higher because of the abundance of natural resources. This abundance or excess of resources causes animal populations to adopt r-type reproduction strategies (many offspring, short gestation, less parental care, and a short time until sexual maturity), so competition is affordable for populations. Competition could also select for populations whose behaviour is regulated by positive feedback.
Contrarily, in low carrying capacity zones (i.e., far from the equator), where environmental conditions are harsh, K strategies are common (longer life expectancy, produce relatively fewer offspring and tend to be altricial, requiring extensive care by parents when young) and populations tend to have cooperative or mutualistic behaviours. If populations have a competitive behaviour in hostile environmental conditions, they mostly are filtered out (die) by environmental selection; hence, populations in hostile conditions are selected to be cooperative.
Climate change
The effects of climate change have been given as a mass example of the tragedy of the commons. This perspective proposes that the earth, being the commons, has suffered a depletion of natural resources without regard to the externalities, the impact on neighboring and future populations. The collective actions of individuals, organisations, and governments continue to contribute to environmental degradation. Mitigating the long-term impacts and avoiding tipping points require strict controls or other solutions, but these may come as a loss to different industries. The sustainability of population and industry growth is the subject of climate change discussion. The global commons of environmental resource consumption, exploited selfishly as in the fossil fuel industry, has been theorised as not realistically manageable. This is due to the crossing of irreversible thresholds of impact before the costs are entirely realised.
Commons dilemma
The commons dilemma is a specific class of social dilemma in which people's short-term selfish interests are at odds with long-term group interests and the common good. In academia, a range of related terminology has also been used as shorthand for the theory or aspects of it, including resource dilemma, take-some dilemma, and common pool resource.
Commons dilemma researchers have studied conditions under which groups and communities are likely to under- or over-harvest common resources in both the laboratory and field. Research programs have concentrated on a number of motivational, strategic, and structural factors that might be conducive to management of commons.
In game theory, which constructs mathematical models for individuals' behavior in strategic situations, the corresponding "game", developed by Hardin, is known as the Commonize Costs – Privatize Profits Game (CC–PP game).
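To make the payoff structure concrete, the following is a rough sketch in Python, not Hardin's own formalism; the herd size, per-animal gain, and degradation cost are hypothetical parameters chosen only to show how an individually rational addition can still degrade the shared pasture.

```python
# Minimal sketch of a commons-dilemma payoff structure (hypothetical numbers).
# Each herder keeps the full benefit of an extra animal but shares the cost of
# overgrazing with the whole herd, so adding an animal looks individually rational.

def individual_payoff(my_animals: int, total_animals: int, capacity: int = 100) -> float:
    gain_per_animal = 1.0                            # private benefit per animal
    overgrazing = max(0, total_animals - capacity)   # animals beyond carrying capacity
    shared_cost = 3.0 * overgrazing / total_animals  # degradation cost spread over the herd
    return my_animals * (gain_per_animal - shared_cost)

# Ten herders graze 10 animals each, exactly at capacity. One herder adding an
# eleventh animal still gains, even though the commons as a whole is degraded;
# if every herder does the same, everyone ends up worse off than before.
print(individual_payoff(10, 100))   # 10.00 at capacity
print(individual_payoff(11, 101))   # ~10.67 after one herder defects
print(individual_payoff(11, 110))   # ~8.00 after all herders defect
```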
Psychological factors
Kopelman, Weber, & Messick (2002), in a review of the experimental research on cooperation in commons dilemmas, identify nine classes of independent variables that influence cooperation in commons dilemmas: social motives, gender, payoff structure, uncertainty, power and status, group size, communication, causes, and frames. They organize these classes and distinguish between psychological individual differences (stable personality traits) and situational factors (the environment). Situational factors include both the task (social and decision structure) and the perception of the task.
Empirical findings support the theoretical argument that the cultural group is a critical factor that needs to be studied in the context of situational variables. Rather than behaving in line with economic incentives, people are likely to approach the decision to cooperate with an appropriateness framework. An expanded, four-factor model of the logic of appropriateness suggests that cooperation is better explained by the question: "What does a person like me (identity) do (rules) in a situation like this (recognition) given this culture (group)?"
Strategic factors
Strategic factors also matter in commons dilemmas. One often-studied strategic factor is the order in which people take harvests from the resource. In simultaneous play, all people harvest at the same time, whereas in sequential play people harvest from the pool according to a predetermined sequence – first, second, third, etc. There is a clear order effect in the latter games: the harvests of those who come first – the leaders – are higher than the harvest of those coming later – the followers. The interpretation of this effect is that the first players feel entitled to take more. With sequential play, individuals adopt a first come-first served rule, whereas with simultaneous play people may adopt an equality rule. Another strategic factor is the ability to build up reputations. Research found that people take less from the common pool in public situations than in anonymous private situations. Moreover, those who harvest less gain greater prestige and influence within their group.
Structural factors
Hardin stated in his analysis of the tragedy of the commons that "Freedom in a commons brings ruin to all." One of the proposed solutions is to appoint a leader to regulate access to the common. Groups are more likely to endorse a leader when a common resource is being depleted and when managing a common resource is perceived as a difficult task. Groups prefer leaders who are elected, democratic, and prototypical of the group, and these leader types are more successful in enforcing cooperation. A general aversion to autocratic leadership exists, although it may be an effective solution, possibly because of the fear of power abuse and corruption.
The provision of rewards and punishments may also be effective in preserving common resources. Selective punishments for overuse can be effective in promoting domestic water and energy conservation – for example, through installing water and electricity meters in houses. Selective rewards work, provided that they are open to everyone. An experimental carpool lane in the Netherlands failed because car commuters did not feel they were able to organize a carpool. The rewards do not have to be tangible. In Canada, utilities considered putting "smiley faces" on electricity bills of customers below the average consumption of that customer's neighborhood.
Solutions
Articulating solutions to the tragedy of the commons is one of the main problems of political philosophy. In some situations, locals implement (often complex) social schemes that work well. When these fail, there are many possible governmental solutions such as privatization, internalizing the externalities, and regulation.
Non-governmental solution
Robert Axelrod contends that even self-interested individuals will often find ways to cooperate, because collective restraint serves both the collective and individual interests. Anthropologist G. N. Appell criticised those who cited Hardin to "impos[e] their own economic and environmental rationality on other social systems of which they have incomplete understanding and knowledge."
Political scientist Elinor Ostrom, who was awarded 2009's Nobel Memorial Prize in Economic Sciences for her work on the issue, and others revisited Hardin's work in 1999. They found the tragedy of the commons not as prevalent or as difficult to solve as Hardin maintained, since locals have often come up with solutions to the commons problem themselves. For example, another group found that a commons in the Swiss Alps has been run by a collective of farmers there to their mutual and individual benefit since 1517, in spite of the farmers also having access to their own farmland. In general, it is in the interest of the users of a commons to keep them functioning, and so complex social schemes are often invented by the users for maintaining them at optimum efficiency. Another prominent example of this is the deliberative process of granting legal personhood to a part of nature, for example a river, with the aim of preserving its water resources and preventing environmental degradation. This process entails that the river is regarded as its own legal entity that can sue against environmental damage done to it while being represented by an independently appointed guardian advisory group. This has happened as a bottom-up process in New Zealand, where debates initiated by the Whanganui Iwi tribe have resulted in legal personhood for the river. The river is considered a living whole, stretching from mountain to sea, that includes not only its physical but also its metaphysical elements.
Similarly, geographer Douglas L. Johnson remarks that many nomadic pastoralist societies of Africa and the Middle East in fact "balanced local stocking ratios against seasonal rangeland conditions in ways that were ecologically sound", reflecting a desire for lower risk rather than higher profit; in spite of this, it was often the case that "the nomad was blamed for problems that were not of his own making and were a product of alien forces." Independently finding precedent in the opinions of previous scholars such as Ibn Khaldun as well as common currency in antagonistic cultural attitudes towards non-sedentary peoples, governments and international organizations have made use of Hardin's work to help justify restrictions on land access and the eventual sedentarization of pastoral nomads despite its weak empirical basis. Examining relations between historically nomadic Bedouin Arabs and the Syrian state in the 20th century, Dawn Chatty notes that "Hardin's argument was curiously accepted as the fundamental explanation for the degradation of the steppe land" in development schemes for the arid interior of the country, downplaying the larger role of agricultural overexploitation in desertification as it melded with prevailing nationalist ideology which viewed nomads as socially backward and economically harmful.
Elinor Ostrom and her colleagues looked at how real-world communities manage communal resources, such as fisheries, land irrigation systems, and farmlands, and they identified a number of factors conducive to successful resource management. One factor is the resource itself; resources with definable boundaries (e.g. land) can be preserved much more easily. A second factor is resource dependence; there must be a perceptible threat of resource depletion, and it must be difficult to find substitutes. The third is the presence of a community; small and stable populations with a thick social network and social norms promoting conservation do better. A final condition is that there be appropriate community-based rules and procedures in place with built-in incentives for responsible use and punishments for overuse. When the commons is taken over by non-locals, those solutions can no longer be used.
Many of the economic and social structures recommended by Ostrom coincide with the structures recommended by anarchists, particularly green anarchism. The largest contemporary societies that use these organizational strategies are the Rebel Zapatista Autonomous Municipalities and the Autonomous Administration of North and East Syria which have heavily been influenced by anarchism and other versions of libertarian and ecological socialism.
Individuals may act in a deliberate way to avoid consumption habits that deplete natural resources. This consciousness promotes the boycotting of products or brands and seeking alternative, more sustainable options.
Altruistic punishment
Various well-established theories, such as theory of kin selection and direct reciprocity, have limitations in explaining patterns of cooperation emerging between unrelated individuals and in non-repeatable short-term interactions. Studies have shown that punishment is an efficacious motivator for cooperation among humans.
Altruistic punishment entails the presence of individuals that punish defectors from a cooperative agreement, although doing so is costly and provides no material gain. These punishments effectively resolve tragedy of the commons scenarios by addressing both first-order free rider problems (i.e. defectors free riding on cooperators) and second-order free rider problems (i.e. cooperators free riding on work of punishers). Such results can only be witnessed when the punishment levels are high enough.
While defectors are motivated by self-interest and cooperators feel morally obliged to practice self-restraint, punishers pursue this path when their emotions are clouded by annoyance and anger at free riders.
Governmental solutions
Governmental solutions are used when the above conditions are not met (such as a community being larger than the cohesion of its social network). Examples of government regulation include population control, privatization, regulation, and internalizing the externalities.
Population control
In Hardin's essay, he proposed that the solution to the problem of overpopulation must be based on "mutual coercion, mutually agreed upon" and result in "relinquishing the freedom to breed". Hardin discussed this topic further in a 1979 book, Managing the Commons, co-written with John A. Baden. He framed this prescription in terms of needing to restrict the "reproductive right", to safeguard all other rights. Several countries have a variety of population control laws in place.
In the context of United States policy debates, Hardin advocated restrictions on migration, particularly of non-whites. In a 1991 article, he stated
Privatization
One solution for some resources is to convert common good into private property (Coase 1960), giving the new owner an incentive to enforce its sustainability. Libertarians and classical liberals cite the tragedy of the commons as an example of what happens when Lockean property rights to homestead resources are prohibited by a government. They argue that the solution to the tragedy of the commons is to allow individuals to take over the property rights of a resource, that is, to privatize it.
In England, this solution was attempted in the inclosure acts. According to Karl Marx, this solution leads to increasing numbers of people being pushed into smaller and smaller pockets of common land which has yet to be privatised, thereby merely displacing and exacerbating the problem while putting an increasing number of people in precarious situations. Economic historian Bob Allen coined the term "Engels' pause" to describe the period from 1790 to 1840, when British working-class wages stagnated while per-capita gross domestic product expanded rapidly during a technological upheaval.
Regulation
In a typical example, governmental regulations can limit the amount of a common good that is available for use by any individual. Permit systems for extractive economic activities including mining, fishing, hunting, livestock raising, and timber extraction are examples of this approach. Similarly, limits to pollution are examples of governmental intervention on behalf of the commons. This idea is used by the United Nations Moon Treaty, Outer Space Treaty and Law of the Sea Treaty as well as the UNESCO World Heritage Convention (treaty) which involves the international law principle that designates some areas or resources the Common Heritage of Mankind.
German historian Joachim Radkau thought Hardin advocates strict management of common goods via increased government involvement or international regulation bodies. An asserted impending "tragedy of the commons" is frequently warned of as a consequence of the adoption of policies which restrict private property and espouse expansion of public property.
Giving legal rights of personhood to objects in nature is another proposed solution. The idea of giving land a legal personality is intended to enable the democratic system of the rule of law to allow for prosecution, sanction, and reparation for damage to the earth. For example, this has been put into practice in Ecuador in the form of a constitutional principle known as "Pacha Mama" (Mother Earth).
Internalizing externalities
Privatization works when the person who owns the property (or rights of access to that property) pays the full price of its exploitation. As discussed above, negative externalities (negative results, such as air or water pollution, that do not proportionately affect the user of the resource) are often a feature driving the tragedy of the commons. Internalizing the externalities, in other words ensuring that the users of a resource pay for all of the consequences of its use, can provide an alternate solution between privatization and regulation. One example is gasoline taxes, which are intended to include both the cost of road maintenance and of air pollution. This solution can provide the flexibility of privatization while minimizing the amount of government oversight and overhead that is needed.
The mid-way solution
One significant potential solution is to have co-shared communities, with partial ownership held by the government and partial ownership held by the community. Ownership here refers to planning, sharing, using, benefiting from, and supervising the resources, which ensures that power is not held in only one or two hands. Since the involvement of multiple stakeholders is necessary, responsibilities can be shared among them according to their abilities and capacities in terms of human resources, infrastructure development ability, legal aspects, etc.
Criticism
Commons in historical reality
The status of common land in England as mentioned in Lloyd's pamphlet has been widely misunderstood.
Millions of acres were "common land", but this did not mean public land open to everybody, a popular fallacy. There was no such thing as ownerless land. Every parcel of "common" land had a legal owner, who was a private person or corporation. The owner was called the lord of the manor (which, like landlord, was a legal term denoting ownership, not aristocratic status).
It was true that there were local people, called commoners, defined as those who had a legal right to use his land for some purpose of their own, typically grazing their animals. Certainly their rights were strong, because the lord was not entitled to build on his own land, or fence off any part of it, unless he could prove he had left enough pasture for the commoners. But these individuals were not the general public at large: not everyone in the vicinity was a commoner.
Furthermore the commoners' right to graze the lord's land with their animals was restricted by law - precisely in order to prevent overgrazing. If overgrazing did nevertheless occur, which it sometimes did, it was because of incompetent or weak land management, and not because of the pressure of an unlimited right to graze, which did not exist.
Hence Christopher Rodgers said that "Hardin's influential thesis on the 'tragedy of the commons' ... has no application to common land in England and Wales. It is based on a false premise". Rodgers, professor of law at Newcastle University, added:
Every productive unit ("manor") had a manorial court; without it, the manor ceased to exist. Manorial courts could fine commoners, and the lord of the manor for that matter, for breaches of customary law, e.g. grazing too many cattle on the land. Customary law varied locally. It could not be altered without the consent of the whole body of the commoners, except by getting an Act of Parliament.
By the time of Lloyd's pamphlet (1833) the majority of land in England had been enclosed and had ceased to be common land. That which remained may not have been good agricultural land anyway, or the best managed. Lloyd takes for granted that common lands were inferior and argues his over-grazing theory to explain it. He does not examine other possible causes e.g. common land was difficult to drain, to keep disease-free, and to use for improved cattle breeding.
Likewise, Susan Jane Buck Cox argues that the common land example used to argue this economic concept is on very weak historical ground, and misrepresents what was actually, in her terms, the "triumph of the commons": the successful common usage of land for many centuries. She argues that social changes and agricultural innovation, and not the behaviour of the commoners, led to the demise of the commons. In a similar vein, Carl Dahlman argues that commons were effectively managed to prevent overgrazing.
Others
Hardin's work is criticised as historically inaccurate in failing to account for the demographic transition, and for failing to distinguish between common property and open access resources.
Radical environmentalist Derrick Jensen claims the tragedy of the commons is used as propaganda for private ownership. He says it has been used by the political right wing to hasten the final enclosure of the "common resources" of third world and indigenous people worldwide, as a part of the Washington Consensus. He argues that in actual situations, those who abuse the commons would have been warned to desist, and if they failed would have faced punitive sanctions. He says that rather than being called "The Tragedy of the Commons", it should be called "the Tragedy of the Failure of the Commons".
Marxist geographer David Harvey has a similar criticism: "The dispossession of indigenous populations in North America by 'productive' colonists, for instance, was justified because indigenous populations did not produce value", asking: "Why, for instance, do we not focus in Hardin's metaphor on the individual ownership of the cattle rather than on the pasture as a common?"
Some authors, like Yochai Benkler, say that with the rise of the Internet and digitalisation, an economics system based on commons becomes possible again. He wrote in his book The Wealth of Networks in 2006 that cheap computing power plus networks enable people to produce valuable products through non-commercial processes of interaction: "as human beings and as social beings, rather than as market actors through the price system". He uses the term networked information economy to refer to a "system of production, distribution, and consumption of information goods characterized by decentralized individual action carried out through widely distributed, nonmarket means that do not depend on market strategies." He also coined the term commons-based peer production for collaborative efforts based on sharing information. Examples of commons-based peer production are Wikipedia, free and open source software and open-source hardware.
The tragedy of the commons has served as a pretext for powerful private companies and/or governments to impose regulatory agents or outsourcing on less powerful entities or governments, in order to exploit their natural resources. Powerful companies and governments can easily corrupt and bribe less powerful institutions or governments to allow them to exploit or privatize their resources, which causes a further concentration of power and wealth in powerful entities. This phenomenon is known as the resource curse.
Other criticisms have focused on Hardin's racist and eugenicist views, claiming that his arguments are directed towards forcible population control, particularly for people of color.
Comedy of the commons
In certain cases, exploiting a resource more may be a good thing. Carol M. Rose, in a 1986 article, discussed the concept of the "comedy of the commons", where the public property in question exhibits "increasing returns to scale" in usage (hence the phrase, "the more the merrier"), in that the more people use the resource, the higher the benefit to each one. Rose cites as examples commerce and group recreational activities. According to Rose, public resources with the "comedic" characteristic may suffer from under-investment rather than overuse.
A modern example presented by Garrett Richards in environmental studies is that the issue of excessive carbon emissions can be tackled effectively only when efforts directly address the issue and are combined with collective efforts from the world's economies. Additionally, the more that nations are willing to collaborate and contribute resources, the higher the chances are for successful technological developments.
| Physical sciences | Earth science basics: General | Earth science |
30806 | https://en.wikipedia.org/wiki/Tree%20%28abstract%20data%20type%29 | Tree (abstract data type) | In computer science, a tree is a widely used abstract data type that represents a hierarchical tree structure with a set of connected nodes. Each node in the tree can be connected to many children (depending on the type of tree), but must be connected to exactly one parent, except for the root node, which has no parent (i.e., the root node as the top-most node in the tree hierarchy). These constraints mean there are no cycles or "loops" (no node can be its own ancestor), and also that each child can be treated like the root node of its own subtree, making recursion a useful technique for tree traversal. In contrast to linear data structures, many trees cannot be represented by relationships between neighboring nodes (parent and children nodes of a node under consideration, if they exist) in a single straight line (called edge or link between two adjacent nodes).
Binary trees are a commonly used type, which constrain the number of children for each parent to at most two. When the order of the children is specified, this data structure corresponds to an ordered tree in graph theory. A value or pointer to other data may be associated with every node in the tree, or sometimes only with the leaf nodes, which have no children nodes.
The abstract data type (ADT) can be represented in a number of ways, including a list of parents with pointers to children, a list of children with pointers to parents, or a list of nodes and a separate list of parent-child relations (a specific type of adjacency list). Representations might also be more complicated, for example using indexes or ancestor lists for performance.
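As a rough sketch of the pointer-based representation mentioned above (each node holding references to its children), with class and method names chosen here purely for illustration:

```python
# Minimal pointer-based tree: a node stores a value and a list of child nodes,
# and the tree as a whole is identified with its root node.

class TreeNode:
    def __init__(self, value, children=None):
        self.value = value
        self.children = list(children) if children else []

    def add_child(self, child: "TreeNode") -> None:
        self.children.append(child)

# Example: a root with two children, one of which has a child of its own.
root = TreeNode("root")
a, b = TreeNode("a"), TreeNode("b")
root.add_child(a)
root.add_child(b)
a.add_child(TreeNode("a1"))
```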
Trees as used in computing are similar to but can be different from mathematical constructs of trees in graph theory, trees in set theory, and trees in descriptive set theory.
Applications
Trees are commonly used to represent or manipulate hierarchical data in applications such as:
File systems for:
Directory structure used to organize subdirectories and files (symbolic links create non-tree graphs, as do multiple hard links to the same file or directory)
The mechanism used to allocate and link blocks of data on the storage device
Class hierarchy or "inheritance tree" showing the relationships among classes in object-oriented programming; multiple inheritance produces non-tree graphs
Abstract syntax trees for computer languages
Natural language processing:
Parse trees
Modeling utterances in a generative grammar
Dialogue tree for generating conversations
Document Object Models ("DOM tree") of XML and HTML documents
Search trees store data in a way that makes an efficient search algorithm possible via tree traversal
A binary search tree is a type of binary tree
Representing sorted lists of data
Computer-generated imagery:
Space partitioning, including binary space partitioning
Digital compositing
Storing Barnes–Hut trees used to simulate galaxies
Implementing heaps
Nested set collections
Hierarchical taxonomies such as the Dewey Decimal Classification with sections of increasing specificity.
Hierarchical temporal memory
Genetic programming
Hierarchical clustering
Trees can be used to represent and manipulate various mathematical structures, such as:
Paths through an arbitrary node-and-edge graph (including multigraphs), by making multiple nodes in the tree for each graph node used in multiple paths
Any mathematical hierarchy
Tree structures are often used for mapping the relationships between things, such as:
Components and subcomponents which can be visualized in an exploded-view drawing
Subroutine calls used to identify which subroutines in a program call other subroutines non recursively
Inheritance of DNA among species by evolution, of source code by software projects (e.g. Linux distribution timeline), of designs in various types of cars, etc.
The contents of hierarchical namespaces
JSON and YAML documents can be thought of as trees, but are typically represented by nested lists and dictionaries.
Terminology
A node is a structure which may contain data and connections to other nodes, sometimes called edges or links. Each node in a tree has zero or more child nodes, which are below it in the tree (by convention, trees are drawn with descendants going downwards). A node that has a child is called the child's parent node (or superior). All nodes have exactly one parent, except the topmost root node, which has none. A node might have many ancestor nodes, such as the parent's parent. Child nodes with the same parent are sibling nodes. Typically siblings have an order, with the first one conventionally drawn on the left. Some definitions allow a tree to have no nodes at all, in which case it is called empty.
An internal node (also known as an inner node, inode for short, or branch node) is any node of a tree that has child nodes. Similarly, an external node (also known as an outer node, leaf node, or terminal node) is any node that does not have child nodes.
The height of a node is the length of the longest downward path to a leaf from that node. The height of the root is the height of the tree. The depth of a node is the length of the path to its root (i.e., its root path). Thus the root node has depth zero, leaf nodes have height zero, and a tree with only a single node (hence both a root and leaf) has depth and height zero. Conventionally, an empty tree (tree with no nodes, if such are allowed) has height −1.
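A short sketch of these height and depth definitions in code, assuming nodes expose a children list as in the earlier TreeNode sketch (the helper names are illustrative):

```python
def contains(subtree_root, node) -> bool:
    """True if node lies in the subtree rooted at subtree_root."""
    return node is subtree_root or any(contains(c, node) for c in subtree_root.children)

def height(node) -> int:
    """Length of the longest downward path from node to a leaf; a leaf has height 0."""
    if not node.children:
        return 0
    return 1 + max(height(child) for child in node.children)

def depth(node, root) -> int:
    """Length of the path from root down to node; the root itself has depth 0."""
    if node is root:
        return 0
    for child in root.children:
        if contains(child, node):
            return 1 + depth(node, child)
    raise ValueError("node is not in the tree rooted at root")
```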
Each non-root node can be treated as the root node of its own subtree, which includes that node and all its descendants.
Other terms used with trees:
Examples of trees and non-trees
Common operations
Enumerating all the items
Enumerating a section of a tree
Searching for an item
Adding a new item at a certain position on the tree
Deleting an item
Pruning: Removing a whole section of a tree
Grafting: Adding a whole section to a tree
Finding the root for any node
Finding the lowest common ancestor of two nodes
Traversal and search methods
Stepping through the items of a tree, by means of the connections between parents and children, is called walking the tree, and the action is a walk of the tree. Often, an operation might be performed when a pointer arrives at a particular node. A walk in which each parent node is traversed before its children is called a pre-order walk; a walk in which the children are traversed before their respective parents are traversed is called a post-order walk; a walk in which a node's left subtree, then the node itself, and finally its right subtree are traversed is called an in-order traversal. (This last scenario, referring to exactly two subtrees, a left subtree and a right subtree, assumes specifically a binary tree.) A level-order walk effectively performs a breadth-first search over the entirety of a tree; nodes are traversed level by level, where the root node is visited first, followed by its direct child nodes and their siblings, followed by its grandchild nodes and their siblings, etc., until all nodes in the tree have been traversed.
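The four walks described above can be sketched as Python generators; the pre-order, post-order, and level-order versions assume nodes with a children list (as in the earlier sketch), while the in-order version assumes a binary node with left and right attributes:

```python
from collections import deque

def preorder(node):
    """Pre-order walk: visit the parent before its children."""
    yield node.value
    for child in node.children:
        yield from preorder(child)

def postorder(node):
    """Post-order walk: visit the children before their parent."""
    for child in node.children:
        yield from postorder(child)
    yield node.value

def inorder(node):
    """In-order traversal of a binary tree: left subtree, node, right subtree."""
    if node is None:
        return
    yield from inorder(node.left)
    yield node.value
    yield from inorder(node.right)

def level_order(root):
    """Level-order walk (breadth-first search): level by level, using a queue."""
    queue = deque([root])
    while queue:
        node = queue.popleft()
        yield node.value
        queue.extend(node.children)
```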
Representations
There are many different ways to represent trees. In working memory, nodes are typically dynamically allocated records with pointers to their children, their parents, or both, as well as any associated data. If of a fixed size, the nodes might be stored in a list. Nodes and relationships between nodes might be stored in a separate special type of adjacency list. In relational databases, nodes are typically represented as table rows, with indexed row IDs facilitating pointers between parents and children.
Nodes can also be stored as items in an array, with relationships between them determined by their positions in the array (as in a binary heap).
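For the array representation, the index arithmetic alone encodes the parent and child links; a small sketch of the usual 0-based binary-heap layout:

```python
# Implicit array representation of a complete binary tree (0-based, as in a binary heap):
# the children of the node at index i sit at 2*i + 1 and 2*i + 2.

def parent_index(i: int) -> int:
    return (i - 1) // 2

def left_child_index(i: int) -> int:
    return 2 * i + 1

def right_child_index(i: int) -> int:
    return 2 * i + 2

heap = [1, 3, 6, 5, 9, 8]               # index 0 holds the root
assert heap[left_child_index(1)] == 5   # left child of index 1 (value 3) is index 3 (value 5)
assert heap[parent_index(4)] == 3       # parent of index 4 (value 9) is index 1 (value 3)
```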
A binary tree can be implemented as a list of lists: the head of a list (the value of the first term) is the left child (subtree), while the tail (the list of second and subsequent terms) is the right child (subtree). This can be modified to allow values as well, as in Lisp S-expressions, where the head (value of first term) is the value of the node, the head of the tail (value of second term) is the left child, and the tail of the tail (list of third and subsequent terms) is the right child.
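A small sketch of this list-of-lists encoding in the S-expression style, with the node value first and None standing in for an empty subtree (the expression-tree example is chosen only for illustration):

```python
# A binary tree as nested lists: [value, left_subtree, right_subtree].
tree = ["*", ["+", [1, None, None], [2, None, None]], [3, None, None]]   # (1 + 2) * 3

def evaluate(node):
    value, left, right = node
    if left is None and right is None:     # leaf: the value is a number
        return value
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    return ops[value](evaluate(left), evaluate(right))

assert evaluate(tree) == 9
```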
Ordered trees can be naturally encoded by finite sequences, for example with natural numbers.
Type theory
As an abstract data type, the abstract tree type T with values of some type E is defined, using the abstract forest type F (list of trees), by the functions:
value: T → E
children: T → F
nil: () → F
node: E × F → T
with the axioms:
value(node(e, f)) = e
children(node(e, f)) = f
In terms of type theory, a tree is an inductive type defined by the constructors nil (empty forest) and node (tree with root node with given value and children).
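A minimal sketch of these constructors and axioms in Python (the Tree and Forest class names are illustrative, not standard library types):

```python
# Sketch of the abstract tree/forest types: a forest is a list of trees (nil() builds the
# empty forest) and node(value, forest) builds a tree from a value and a forest of children.
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class Tree:
    value: Any
    children: "Forest"

@dataclass
class Forest:
    trees: List[Tree] = field(default_factory=list)

def nil() -> Forest:
    return Forest([])

def node(value: Any, children: Forest) -> Tree:
    return Tree(value, children)

# The two axioms hold by construction:
f = Forest([node("leaf", nil())])
t = node("root", f)
assert t.value == "root"    # value(node(e, f)) = e
assert t.children == f      # children(node(e, f)) = f
```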
Mathematical terminology
Viewed as a whole, a tree data structure is an ordered tree, generally with values attached to each node. Concretely, it is (if required to be non-empty):
A rooted tree with the "away from root" direction (a more narrow term is an "arborescence"), meaning:
A directed graph,
whose underlying undirected graph is a tree (any two vertices are connected by exactly one simple path),
with a distinguished root (one vertex is designated as the root),
which determines the direction on the edges (arrows point away from the root; given an edge, the node that the edge points from is called the parent and the node that the edge points to is called the child), together with:
an ordering on the child nodes of a given node, and
a value (of some data type) at each node.
Often trees have a fixed (more properly, bounded) branching factor (outdegree), particularly always having two child nodes (possibly empty, hence at most two non-empty child nodes), hence a "binary tree".
Allowing empty trees makes some definitions simpler, some more complicated: a rooted tree must be non-empty, hence if empty trees are allowed the above definition instead becomes "an empty tree or a rooted tree such that ...". On the other hand, empty trees simplify defining fixed branching factor: with empty trees allowed, a binary tree is a tree such that every node has exactly two children, each of which is a tree (possibly empty).
| Mathematics | Data structures and types | null |
30844 | https://en.wikipedia.org/wiki/Tensor%20product | Tensor product | In mathematics, the tensor product $V \otimes W$ of two vector spaces $V$ and $W$ (over the same field) is a vector space to which is associated a bilinear map $V \times W \rightarrow V \otimes W$ that maps a pair $(v, w)$, $v \in V$, $w \in W$, to an element of $V \otimes W$ denoted $v \otimes w$.
An element of the form $v \otimes w$ is called the tensor product of $v$ and $w$. An element of $V \otimes W$ is a tensor, and the tensor product of two vectors is sometimes called an elementary tensor or a decomposable tensor. The elementary tensors span $V \otimes W$ in the sense that every element of $V \otimes W$ is a sum of elementary tensors. If bases are given for $V$ and $W$, a basis of $V \otimes W$ is formed by all tensor products of a basis element of $V$ and a basis element of $W$.
The tensor product of two vector spaces captures the properties of all bilinear maps in the sense that a bilinear map from $V \times W$ into another vector space $Z$ factors uniquely through a linear map $V \otimes W \to Z$ (see Universal property).
Tensor products are used in many application areas, including physics and engineering. For example, in general relativity, the gravitational field is described through the metric tensor, which is a tensor field with one tensor at each point of the space-time manifold, and each belonging to the tensor product of the cotangent space at the point with itself.
Definitions and constructions
The tensor product of two vector spaces is a vector space that is defined up to an isomorphism. There are several equivalent ways to define it. Most consist of defining explicitly a vector space that is called a tensor product, and, generally, the equivalence proof results almost immediately from the basic properties of the vector spaces that are so defined.
The tensor product can also be defined through a universal property; see , below. As for every universal property, all objects that satisfy the property are isomorphic through a unique isomorphism that is compatible with the universal property. When this definition is used, the other definitions may be viewed as constructions of objects satisfying the universal property and as proofs that there are objects satisfying the universal property, that is that tensor products exist.
From bases
Let $V$ and $W$ be two vector spaces over a field $F$, with respective bases $B_V$ and $B_W$.
The tensor product $V \otimes W$ of $V$ and $W$ is a vector space that has as a basis the set of all $v \otimes w$ with $v \in B_V$ and $w \in B_W$. This definition can be formalized in the following way (this formalization is rarely used in practice, as the preceding informal definition is generally sufficient): $V \otimes W$ is the set of the functions from the Cartesian product $B_V \times B_W$ to $F$ that have a finite number of nonzero values. The pointwise operations make $V \otimes W$ a vector space. The function that maps $(v, w) \in B_V \times B_W$ to $1$ and the other elements of $B_V \times B_W$ to $0$ is denoted $v \otimes w$.
The set $\{ v \otimes w \mid v \in B_V, w \in B_W \}$ is then straightforwardly a basis of $V \otimes W$, which is called the tensor product of the bases $B_V$ and $B_W$.
We can equivalently define to be the set of bilinear forms on that are nonzero at only a finite number of elements of . To see this, given and a bilinear form , we can decompose and in the bases and as:
where only a finite number of 's and 's are nonzero, and find by the bilinearity of that:
Hence, we see that the value of for any is uniquely and totally determined by the values that it takes on . This lets us extend the maps defined on as before into bilinear maps , by letting:
Then we can express any bilinear form as a (potentially infinite) formal linear combination of the maps according to:
making these maps similar to a Schauder basis for the vector space of all bilinear forms on . To instead have it be a proper Hamel basis, it only remains to add the requirement that is nonzero at an only a finite number of elements of , and consider the subspace of such maps instead.
In either construction, the tensor product of two vectors is defined from their decomposition on the bases. More precisely, taking the basis decompositions of and as before:
This definition is quite clearly derived from the coefficients of in the expansion by bilinearity of using the bases and , as done above. It is then straightforward to verify that with this definition, the map is a bilinear map from to satisfying the universal property that any construction of the tensor product satisfies (see below).
If arranged into a rectangular array, the coordinate vector of $v \otimes w$ is the outer product of the coordinate vectors of $v$ and $w$. Therefore, the tensor product is a generalization of the outer product, that is, an abstraction of it beyond coordinate vectors.
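A quick numerical illustration of this with numpy; the vectors are arbitrary stand-ins for coordinate vectors of $v$ and $w$:

```python
import numpy as np

v = np.array([1.0, 2.0])           # coordinates of v in a basis of V
w = np.array([3.0, 4.0, 5.0])      # coordinates of w in a basis of W

# The coordinate array of v ⊗ w is the outer product: entry (i, j) equals v[i] * w[j].
vw = np.outer(v, w)
assert vw.shape == (2, 3)
assert vw[1, 2] == v[1] * w[2]
```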
A limitation of this definition of the tensor product is that, if one changes bases, a different tensor product is defined. However, the decomposition on one basis of the elements of the other basis defines a canonical isomorphism between the two tensor products of vector spaces, which allows identifying them. Also, contrarily to the two following alternative definitions, this definition cannot be extended into a definition of the tensor product of modules over a ring.
As a quotient space
A construction of the tensor product that is basis independent can be obtained in the following way.
Let $V$ and $W$ be two vector spaces over a field $F$.
One considers first a vector space $L$ that has the Cartesian product $V \times W$ as a basis. That is, the basis elements of $L$ are the pairs $(v, w)$ with $v \in V$ and $w \in W$. To get such a vector space, one can define it as the vector space of the functions $V \times W \to F$ that have a finite number of nonzero values, identifying $(v, w)$ with the function that takes the value $1$ on $(v, w)$ and $0$ otherwise.
Let $R$ be the linear subspace of $L$ that is spanned by the relations that the tensor product must satisfy. More precisely, $R$ is spanned by the elements of one of the forms:
$(v_1 + v_2, w) - (v_1, w) - (v_2, w),$
$(v, w_1 + w_2) - (v, w_1) - (v, w_2),$
$(s v, w) - s (v, w),$
$(v, s w) - s (v, w),$
where $v, v_1, v_2 \in V$, $w, w_1, w_2 \in W$ and $s \in F$.
Then, the tensor product is defined as the quotient space:
$V \otimes W = L / R,$
and the image of $(v, w)$ in this quotient is denoted $v \otimes w$.
It is straightforward to prove that the result of this construction satisfies the universal property considered below. (A very similar construction can be used to define the tensor product of modules.)
Universal property
In this section, the universal property satisfied by the tensor product is described. As for every universal property, two objects that satisfy the property are related by a unique isomorphism. It follows that this is a (non-constructive) way to define the tensor product of two vector spaces. In this context, the preceding constructions of tensor products may be viewed as proofs of existence of the tensor product so defined.
A consequence of this approach is that every property of the tensor product can be deduced from the universal property, and that, in practice, one may forget the method that has been used to prove its existence.
The "universal-property definition" of the tensor product of two vector spaces is the following (recall that a bilinear map is a function that is separately linear in each of its arguments):
The tensor product of two vector spaces $V$ and $W$ is a vector space denoted as $V \otimes W$, together with a bilinear map $\otimes : (v, w) \mapsto v \otimes w$ from $V \times W$ to $V \otimes W$, such that, for every bilinear map $h : V \times W \to Z$, there is a unique linear map $\tilde{h} : V \otimes W \to Z$, such that $h = \tilde{h} \circ \otimes$ (that is, $h(v, w) = \tilde{h}(v \otimes w)$ for every $v \in V$ and $w \in W$).
Linearly disjoint
Like the universal property above, the following characterization may also be used to determine whether or not a given vector space and given bilinear map form a tensor product.
For example, it follows immediately that if and are positive integers then and the bilinear map defined by sending to form a tensor product of and . Often, this map will be denoted by so that denotes this bilinear map's value at .
As another example, suppose that is the vector space of all complex-valued functions on a set with addition and scalar multiplication defined pointwise (meaning that is the map and is the map ). Let and be any sets and for any and , let denote the function defined by .
If and are vector subspaces then the vector subspace of together with the bilinear map:
form a tensor product of and .
Properties
Dimension
If $V$ and $W$ are vector spaces of finite dimension, then $V \otimes W$ is finite-dimensional, and its dimension is the product of the dimensions of $V$ and $W$.
This results from the fact that a basis of $V \otimes W$ is formed by taking all tensor products of a basis element of $V$ and a basis element of $W$.
Associativity
The tensor product is associative in the sense that, given three vector spaces $U, V, W$, there is a canonical isomorphism:
$(U \otimes V) \otimes W \cong U \otimes (V \otimes W),$
that maps $(u \otimes v) \otimes w$ to $u \otimes (v \otimes w)$.
This allows omitting parentheses in the tensor product of more than two vector spaces or vectors.
Commutativity as vector space operation
The tensor product of two vector spaces $V$ and $W$ is commutative in the sense that there is a canonical isomorphism:
$V \otimes W \cong W \otimes V,$
that maps $v \otimes w$ to $w \otimes v$.
On the other hand, even when $V = W$, the tensor product of vectors is not commutative; that is, $v \otimes w \neq w \otimes v$, in general.
The map $x \otimes y \mapsto y \otimes x$ from $V \otimes V$ to itself induces a linear automorphism that is called a braiding map.
More generally and as usual (see tensor algebra), let $V^{\otimes n}$ denote the tensor product of $n$ copies of the vector space $V$. For every permutation $s$ of the first $n$ positive integers, the map:
$x_1 \otimes \cdots \otimes x_n \mapsto x_{s(1)} \otimes \cdots \otimes x_{s(n)}$
induces a linear automorphism of $V^{\otimes n}$, which is called a braiding map.
Tensor product of linear maps
Given a linear map $f : U \to V$, and a vector space $W$, the tensor product:
$f \otimes W : U \otimes W \to V \otimes W$
is the unique linear map such that:
$(f \otimes W)(u \otimes w) = f(u) \otimes w.$
The tensor product $W \otimes f$ is defined similarly.
Given two linear maps $f : U \to V$ and $g : W \to Z$, their tensor product:
$f \otimes g : U \otimes W \to V \otimes Z$
is the unique linear map that satisfies:
$(f \otimes g)(u \otimes w) = f(u) \otimes g(w).$
One has:
$f \otimes g = (f \otimes Z) \circ (U \otimes g) = (V \otimes g) \circ (f \otimes W).$
In terms of category theory, this means that the tensor product is a bifunctor from the category of vector spaces to itself.
If and are both injective or surjective, then the same is true for all above defined linear maps. In particular, the tensor product with a vector space is an exact functor; this means that every exact sequence is mapped to an exact sequence (tensor products of modules do not transform injections into injections, but they are right exact functors).
By choosing bases of all vector spaces involved, the linear maps $f$ and $g$ can be represented by matrices. Then, depending on how the tensor $v \otimes w$ is vectorized, the matrix describing the tensor product $f \otimes g$ is the Kronecker product of the two matrices. For example, if $U$, $V$, $W$ and $Z$ above are all two-dimensional and bases have been fixed for all of them, and $f$ and $g$ are given by the matrices:
$A = \begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{pmatrix}, \qquad B = \begin{pmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \end{pmatrix},$
respectively, then the tensor product of these two matrices is:
$A \otimes B = \begin{pmatrix} a_{1,1} B & a_{1,2} B \\ a_{2,1} B & a_{2,2} B \end{pmatrix} = \begin{pmatrix} a_{1,1} b_{1,1} & a_{1,1} b_{1,2} & a_{1,2} b_{1,1} & a_{1,2} b_{1,2} \\ a_{1,1} b_{2,1} & a_{1,1} b_{2,2} & a_{1,2} b_{2,1} & a_{1,2} b_{2,2} \\ a_{2,1} b_{1,1} & a_{2,1} b_{1,2} & a_{2,2} b_{1,1} & a_{2,2} b_{1,2} \\ a_{2,1} b_{2,1} & a_{2,1} b_{2,2} & a_{2,2} b_{2,1} & a_{2,2} b_{2,2} \end{pmatrix}.$
The resultant rank is at most 4, and thus the resultant dimension is 4. Rank here denotes the tensor rank, i.e. the number of requisite indices (while the matrix rank counts the number of degrees of freedom in the resulting array).
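A quick check of the Kronecker-product description with numpy; the matrices and vectors below are arbitrary illustrations, not data from the text:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])     # matrix of f in fixed bases
B = np.array([[0, 5], [6, 7]])     # matrix of g in fixed bases

K = np.kron(A, B)                  # matrix of f ⊗ g acting on vectorized tensors
assert K.shape == (4, 4)

# Consistency with (f ⊗ g)(u ⊗ w) = f(u) ⊗ g(w): vectorize u ⊗ w as the Kronecker
# product of the coordinate vectors and compare both sides.
u = np.array([1, -1])
w = np.array([2, 3])
assert np.array_equal(K @ np.kron(u, w), np.kron(A @ u, B @ w))
```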
A dyadic product is the special case of the tensor product between two vectors of the same dimension.
General tensors
For non-negative integers $r$ and $s$, a type $(r, s)$ tensor on a vector space $V$ is an element of:
$T^r_s(V) = \underbrace{V \otimes \cdots \otimes V}_{r} \otimes \underbrace{V^* \otimes \cdots \otimes V^*}_{s} = V^{\otimes r} \otimes \left(V^*\right)^{\otimes s}.$
Here $V^*$ is the dual vector space (which consists of all linear maps $f$ from $V$ to the ground field $K$).
There is a product map, called the (tensor) product of tensors:
$T^{r_1}_{s_1}(V) \otimes_K T^{r_2}_{s_2}(V) \to T^{r_1 + r_2}_{s_1 + s_2}(V).$
It is defined by grouping all occurring "factors" $V$ together: writing $v_i$ for an element of $V$ and $f_i$ for an element of the dual space:
$(v_1 \otimes f_1) \otimes (v_2) = v_1 \otimes v_2 \otimes f_1.$
If $V$ is finite dimensional, then picking a basis of $V$ and the corresponding dual basis of $V^*$ naturally induces a basis of $T^r_s(V)$ (this basis is described in the article on Kronecker products). In terms of these bases, the components of a (tensor) product of two (or more) tensors can be computed. For example, if $F$ and $G$ are two covariant tensors of orders $m$ and $n$ respectively (i.e. $F \in T^0_m(V)$ and $G \in T^0_n(V)$), then the components of their tensor product are given by:
$(F \otimes G)_{i_1 i_2 \cdots i_{m+n}} = F_{i_1 i_2 \cdots i_m} \, G_{i_{m+1} i_{m+2} \cdots i_{m+n}}.$
Thus, the components of the tensor product of two tensors are the ordinary product of the components of each tensor. Another example: let $U$ be a tensor of type $(1, 1)$ with components $U^\alpha{}_\beta$, and let $V$ be a tensor of type $(1, 0)$ with components $V^\gamma$. Then:
$\left(U \otimes V\right)^\alpha{}_\beta{}^\gamma = U^\alpha{}_\beta \, V^\gamma$
and:
$(V \otimes U)^{\mu\nu}{}_\sigma = V^\mu \, U^\nu{}_\sigma.$
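The component formulas above can be checked numerically with numpy; the arrays below are arbitrary stand-ins for the tensors' components:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 3))    # components of a covariant tensor of order 2
G = rng.standard_normal((3,))      # components of a covariant tensor of order 1

# Components of the tensor product: (F ⊗ G)_{ijk} = F_{ij} G_k.
FG = np.einsum("ij,k->ijk", F, G)
assert FG.shape == (3, 3, 3)
assert np.isclose(FG[1, 2, 0], F[1, 2] * G[0])
```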
Tensors equipped with their product operation form an algebra, called the tensor algebra.
Evaluation map and tensor contraction
For tensors of type $(1, 1)$ there is a canonical evaluation map:
$V \otimes V^* \to K$
defined by its action on pure tensors:
$v \otimes f \mapsto f(v).$
More generally, for tensors of type , with , there is a map, called tensor contraction:
(The copies of and on which this map is to be applied must be specified.)
On the other hand, if is , there is a canonical map in the other direction (called the coevaluation map):
where is any basis of , and is its dual basis. This map does not depend on the choice of basis.
The interplay of evaluation and coevaluation can be used to characterize finite-dimensional vector spaces without referring to bases.
Adjoint representation
The tensor product may be naturally viewed as a module for the Lie algebra by means of the diagonal action: for simplicity let us assume , then, for each ,
where is the transpose of , that is, in terms of the obvious pairing on ,
There is a canonical isomorphism given by:
Under this isomorphism, every in may be first viewed as an endomorphism of and then viewed as an endomorphism of . In fact it is the adjoint representation of .
Linear maps as tensors
Given two finite dimensional vector spaces , over the same field , denote the dual space of as , and the -vector space of all linear maps from to as . There is an isomorphism:
defined by an action of the pure tensor on an element of ,
Its "inverse" can be defined using a basis and its dual basis as in the section "Evaluation map and tensor contraction" above:
This result implies:
which automatically gives the important fact that forms a basis of where are bases of and .
Furthermore, given three vector spaces , , the tensor product is linked to the vector space of all linear maps, as follows:
This is an example of adjoint functors: the tensor product is "left adjoint" to Hom.
Tensor products of modules over a ring
The tensor product of two modules and over a commutative ring is defined in exactly the same way as the tensor product of vector spaces over a field:
where now is the free -module generated by the cartesian product and is the -module generated by these relations.
More generally, the tensor product can be defined even if the ring is non-commutative. In this case has to be a right--module and is a left--module, and instead of the last two relations above, the relation:
is imposed. If is non-commutative, this is no longer an -module, but just an abelian group.
The universal property also carries over, slightly modified: the map defined by is a middle linear map (referred to as "the canonical middle linear map"); that is, it satisfies:
The first two properties make a bilinear map of the abelian group . For any middle linear map of , a unique group homomorphism of satisfies , and this property determines within group isomorphism. See the main article for details.
Tensor product of modules over a non-commutative ring
Let A be a right R-module and B be a left R-module. Then the tensor product of A and B is an abelian group defined by:
where is a free abelian group over and G is the subgroup of generated by relations:
The universal property can be stated as follows. Let G be an abelian group with a map that is bilinear, in the sense that:
Then there is a unique map such that for all and .
Furthermore, we can give a module structure under some extra conditions:
If A is an (S,R)-bimodule, then is a left S-module, where .
If B is an (R,S)-bimodule, then is a right S-module, where .
If A is an (S,R)-bimodule and B is an (R,T)-bimodule, then is an (S,T)-bimodule, where the left and right actions are defined in the same way as the previous two examples.
If R is a commutative ring, then A and B are (R,R)-bimodules where and . By 3), we can conclude is an (R,R)-bimodule.
Computing the tensor product
For vector spaces, the tensor product is quickly computed since bases of the two factors immediately determine a basis of the tensor product, as was mentioned above. For modules over a general (commutative) ring, not every module is free. For example, is not a free abelian group (-module). The tensor product with is given by:
More generally, given a presentation of some -module , that is, a number of generators together with relations:
the tensor product can be computed as the following cokernel:
Here , and the map is determined by sending some in the th copy of to (in ). Colloquially, this may be rephrased by saying that a presentation of gives rise to a presentation of . This is referred to by saying that the tensor product is a right exact functor. It is not in general left exact, that is, given an injective map of -modules , the tensor product:
is not usually injective. For example, tensoring the (injective) map given by multiplication with , with yields the zero map , which is not injective. Higher Tor functors measure the failure of the tensor product to be left exact. All higher Tor functors are assembled in the derived tensor product.
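A small numeric sketch of the failure of left exactness just described, using the standard example of multiplication by 2 on the integers and tensoring with Z/2Z (which amounts to reducing everything mod 2); the sample range is arbitrary.

```python
# Multiplication by 2 on the integers Z: an injective map.
double = lambda n: 2 * n
sample = list(range(-5, 6))
assert len({double(n) for n in sample}) == len(sample)  # no collisions on the sample

# After tensoring with Z/2Z, i.e. reducing mod 2, the map becomes zero,
# hence it is no longer injective.
double_mod2 = lambda n: (2 * n) % 2
assert all(double_mod2(n) == 0 for n in sample)
```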
Tensor product of algebras
Let be a commutative ring. The tensor product of -modules applies, in particular, if and are -algebras. In this case, the tensor product is an -algebra itself by putting:
For example:
A particular example is when and are fields containing a common subfield . The tensor product of fields is closely related to Galois theory: if, say, , where is some irreducible polynomial with coefficients in , the tensor product can be calculated as:
where now is interpreted as the same polynomial, but with its coefficients regarded as elements of . In the larger field , the polynomial may become reducible, which brings in Galois theory. For example, if is a Galois extension of , then:
is isomorphic (as an -algebra) to the .
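A hedged SymPy sketch of the point above: whether the tensor product of fields splits into a product of fields is governed by how the defining polynomial factors over the larger field. The specific polynomial x² − 2 over the rationals is chosen here only as an illustration and is not taken from the article.

```python
from sympy import symbols, sqrt, factor

x = symbols('x')

# Over Q, x**2 - 2 is irreducible, so Q[x]/(x**2 - 2) is the field Q(sqrt(2)).
print(factor(x**2 - 2))                      # x**2 - 2  (no rational factorization)

# Over Q(sqrt(2)) the same polynomial splits into linear factors, so the
# tensor product Q(sqrt(2)) (x)_Q Q(sqrt(2)) decomposes as a product of two copies
# of Q(sqrt(2)).
print(factor(x**2 - 2, extension=sqrt(2)))   # (x - sqrt(2))*(x + sqrt(2))
```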
Eigenconfigurations of tensors
Square matrices with entries in a field represent linear maps of vector spaces, say , and thus linear maps of projective spaces over . If is nonsingular then is well-defined everywhere, and the eigenvectors of correspond to the fixed points of . The eigenconfiguration of consists of points in , provided is generic and is algebraically closed. The fixed points of nonlinear maps are the eigenvectors of tensors. Let be a -dimensional tensor of format with entries lying in an algebraically closed field of characteristic zero. Such a tensor defines polynomial maps and with coordinates:
Thus each of the coordinates of is a homogeneous polynomial of degree in . The eigenvectors of are the solutions of the constraint:
and the eigenconfiguration is given by the variety of the minors of this matrix.
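A numeric sketch (not the algebro-geometric computation itself) of the eigenvector condition above for an order-3 tensor of format 3×3×3 with arbitrary entries: each coordinate of the map ψ is a quadratic form in x, and x is an eigenvector exactly when the 2×2 minors of the matrix with rows ψ(x) and x all vanish, i.e. when ψ(x) is parallel to x.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3, 3))          # an arbitrary order-3 tensor of format 3x3x3

def psi(x):
    # psi(x)_i = sum_{j,k} A[i, j, k] * x[j] * x[k]: each coordinate is a
    # homogeneous polynomial of degree 2 in x.
    return np.einsum('ijk,j,k->i', A, x, x)

def eigen_residual(x):
    # x is an eigenvector exactly when all 2x2 minors of the 2x3 matrix
    # with rows psi(x) and x vanish, i.e. psi(x) is parallel to x.
    m = np.vstack([psi(x), x])
    minors = [np.linalg.det(m[:, [i, j]]) for i in range(3) for j in range(i + 1, 3)]
    return max(abs(d) for d in minors)

x = rng.standard_normal(3)
print(eigen_residual(x))   # generically nonzero: a random x is not an eigenvector
```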
Other examples of tensor products
Topological tensor products
Hilbert spaces generalize finite-dimensional vector spaces to arbitrary dimensions. There is an analogous operation, also called the "tensor product," that makes Hilbert spaces a symmetric monoidal category. It is essentially constructed as the metric space completion of the algebraic tensor product discussed above. However, it does not satisfy the obvious analogue of the universal property defining tensor products; the morphisms for that property must be restricted to Hilbert–Schmidt operators.
In situations where the imposition of an inner product is inappropriate, one can still attempt to complete the algebraic tensor product, as a topological tensor product. However, such a construction is no longer uniquely specified: in many cases, there are multiple natural topologies on the algebraic tensor product.
Tensor product of graded vector spaces
Some vector spaces can be decomposed into direct sums of subspaces. In such cases, the tensor product of two spaces can be decomposed into sums of products of the subspaces (in analogy to the way that multiplication distributes over addition).
Tensor product of representations
The tensor product of two representations of a group is again a representation of that group. For irreducible representations of the general linear groups, the decomposition of such a tensor product into irreducible summands is described by the Littlewood–Richardson rule.
Tensor product of quadratic forms
Tensor product of multilinear forms
Given two multilinear forms and on a vector space over the field their tensor product is the multilinear form:
This is a special case of the product of tensors if they are seen as multilinear maps (see also tensors as multilinear maps). Thus the components of the tensor product of multilinear forms can be computed by the Kronecker product.
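A NumPy sketch of the statement above for two bilinear forms represented by arbitrary matrices: their tensor product is a 4-linear form whose components are the Kronecker-style products of the components of the factors.

```python
import numpy as np

rng = np.random.default_rng(3)
f = rng.standard_normal((3, 3))   # bilinear form f(v, w) = v^T f w
g = rng.standard_normal((3, 3))   # another bilinear form

# (f (x) g)(v1, v2, v3, v4) = f(v1, v2) * g(v3, v4); components are products.
fg = np.einsum('ij,kl->ijkl', f, g)

v = [rng.standard_normal(3) for _ in range(4)]
lhs = np.einsum('ijkl,i,j,k,l->', fg, *v)
rhs = (v[0] @ f @ v[1]) * (v[2] @ g @ v[3])
assert np.isclose(lhs, rhs)
```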
Tensor product of sheaves of modules
Tensor product of line bundles
Tensor product of fields
Tensor product of graphs
It should be mentioned that, though called the "tensor product", this is not a tensor product in the sense described above; it is actually the category-theoretic product in the category of graphs and graph homomorphisms. However, its adjacency matrix is the Kronecker tensor product of the adjacency matrices of the factor graphs. Compare also the section Tensor product of linear maps above.
Monoidal categories
The most general setting for the tensor product is the monoidal category. It captures the algebraic essence of tensoring, without making any specific reference to what is being tensored. Thus, all tensor products can be expressed as an application of the monoidal category to some particular setting, acting on some particular objects.
Quotient algebras
A number of important subspaces of the tensor algebra can be constructed as quotients: these include the exterior algebra, the symmetric algebra, the Clifford algebra, the Weyl algebra, and the universal enveloping algebra in general.
The exterior algebra is constructed from the exterior product. Given a vector space , the exterior product is defined as:
When the underlying field of does not have characteristic 2, then this definition is equivalent to:
The image of in the exterior product is usually denoted and satisfies, by construction, . Similar constructions are possible for ( factors), giving rise to , the th exterior power of . The latter notion is the basis of differential -forms.
The symmetric algebra is constructed in a similar manner, from the symmetric product:
More generally:
That is, in the symmetric algebra two adjacent vectors (and therefore all of them) can be interchanged. The resulting objects are called symmetric tensors.
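A coordinate sketch of the two constructions for a pair of vectors (assuming characteristic 0; the 1/2 normalization is one common convention and is an assumption here): in components, the exterior product is the antisymmetrized tensor product and the symmetric product is the symmetrized one.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([0.0, 1.0, 5.0])

tensor = np.outer(v, w)                         # components of v (x) w

wedge = (np.outer(v, w) - np.outer(w, v)) / 2   # v ^ w: antisymmetric part
sym = (np.outer(v, w) + np.outer(w, v)) / 2     # symmetric product: symmetric part

assert np.allclose(wedge, -wedge.T)             # so v ^ w = -(w ^ v) and v ^ v = 0
assert np.allclose(sym, sym.T)                  # adjacent vectors can be interchanged
assert np.allclose(wedge + sym, tensor)
```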
Tensor product in programming
Array programming languages
Array programming languages may have this pattern built in. For example, in APL the tensor product is expressed as ○.× (for example A ○.× B or A ○.× B ○.× C). In J the tensor product is the dyadic form of */ (for example a */ b or a */ b */ c).
J's treatment also allows the representation of some tensor fields, as a and b may be functions instead of constants. This product of two functions is a derived function, and if a and b are differentiable, then a */ b is differentiable.
However, these kinds of notation are not universally present in array languages. Other array languages may require explicit treatment of indices (for example, MATLAB), and/or may not support higher-order functions such as the Jacobian derivative (for example, Fortran/APL).
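For comparison with the APL and J forms above, here is the same outer-product pattern in NumPy, chosen only as a familiar array-language substitute (it is not among the languages mentioned in the article):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20])
c = np.array([1, -1])

# Analogue of APL's A o.x B and J's a */ b: all pairwise products.
ab = np.multiply.outer(a, b)        # shape (3, 2)

# Chained form, analogous to a */ b */ c.
abc = np.multiply.outer(ab, c)      # shape (3, 2, 2)
print(ab)
print(abc.shape)
```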
| Mathematics | Linear algebra | null |
30848 | https://en.wikipedia.org/wiki/Turbine | Turbine | A turbine (from the Greek tyrbē or Latin turbo, meaning vortex) is a rotary mechanical device that extracts energy from a fluid flow and converts it into useful work. The work produced can be used for generating electrical power when combined with a generator. A turbine is a turbomachine with at least one moving part called a rotor assembly, which is a shaft or drum with blades attached. Moving fluid acts on the blades so that they move and impart rotational energy to the rotor.
Gas, steam, and water turbines have a casing around the blades that contains and controls the working fluid.
The word "turbine" was coined in 1822 by the French mining engineer Claude Burdin from the Greek , tyrbē, meaning "vortex" or "whirling". Benoit Fourneyron, a former student of Claude Burdin, built the first practical water turbine. Credit for invention of the steam turbine is given both to Anglo-Irish engineer Sir Charles Parsons (1854–1931) for invention of the reaction turbine, and to Swedish engineer Gustaf de Laval (1845–1913) for invention of the impulse turbine. Modern steam turbines frequently employ both reaction and impulse in the same unit, typically varying the degree of reaction and impulse from the blade root to its periphery.
History
Hero of Alexandria demonstrated the turbine principle in an aeolipile in the first century AD and Vitruvius mentioned them around 70 BC.
Early turbine examples are windmills and waterwheels.
The word "turbine" was coined in 1822 by the French mining engineer Claude Burdin from the Greek , tyrbē, meaning "vortex" or "whirling", in a memo, "Des turbines hydrauliques ou machines rotatoires à grande vitesse", which he submitted to the Académie royale des sciences in Paris. However, it was not until 1824 that a committee of the Académie (composed of Prony, Dupin, and Girard) reported favorably on Burdin's memo. Benoit Fourneyron, a former student of Claude Burdin, built the first practical water turbine.
Credit for invention of the steam turbine is given both to Anglo-Irish engineer Sir Charles Parsons (1854–1931) for invention of the reaction turbine, and to Swedish engineer Gustaf de Laval (1845–1913) for invention of the impulse turbine.
Theory of operation
A working fluid contains potential energy (pressure head) and kinetic energy (velocity head). The fluid may be compressible or incompressible. Several physical principles are employed by turbines to collect this energy:
Impulse turbines change the direction of flow of a high velocity fluid or gas jet. The resulting impulse spins the turbine and leaves the fluid flow with diminished kinetic energy. There is no pressure change of the fluid or gas in the turbine blades (the moving blades); as in the case of a steam or gas turbine, all the pressure drop takes place in the stationary blades (the nozzles). Before reaching the turbine, the fluid's pressure head is changed to velocity head by accelerating the fluid with a nozzle. Pelton wheels and de Laval turbines use this process exclusively. Impulse turbines do not require a pressure casement around the rotor since the fluid jet is created by the nozzle prior to reaching the blades on the rotor. Newton's second law describes the transfer of energy for impulse turbines. Impulse turbines are most efficient for use in cases where the flow is low and the inlet pressure is high.
Reaction turbines develop torque by reacting to the gas or fluid's pressure or mass. The pressure of the gas or fluid changes as it passes through the turbine rotor blades. A pressure casement is needed to contain the working fluid as it acts on the turbine stage(s) or the turbine must be fully immersed in the fluid flow (such as with wind turbines). The casing contains and directs the working fluid and, for water turbines, maintains the suction imparted by the draft tube. Francis turbines and most steam turbines use this concept. For compressible working fluids, multiple turbine stages are usually used to harness the expanding gas efficiently. Newton's third law describes the transfer of energy for reaction turbines. Reaction turbines are better suited to higher flow velocities or applications where the fluid head (upstream pressure) is low.
In the case of steam turbines, such as would be used for marine applications or for land-based electricity generation, a Parsons-type reaction turbine would require approximately double the number of blade rows as a de Laval-type impulse turbine, for the same degree of thermal energy conversion. Whilst this makes the Parsons turbine much longer and heavier, the overall efficiency of a reaction turbine is slightly higher than the equivalent impulse turbine for the same thermal energy conversion.
In practice, modern turbine designs use both reaction and impulse concepts to varying degrees whenever possible. Wind turbines use an airfoil to generate a reaction lift from the moving fluid and impart it to the rotor. Wind turbines also gain some energy from the impulse of the wind, by deflecting it at an angle. Turbines with multiple stages may use either reaction or impulse blading at high pressure. Steam turbines were traditionally more impulse but continue to move towards reaction designs similar to those used in gas turbines. At low pressure the operating fluid medium expands in volume for small reductions in pressure. Under these conditions, blading becomes strictly a reaction type design with the base of the blade solely impulse. The reason is due to the effect of the rotation speed for each blade. As the volume increases, the blade height increases, and the base of the blade spins at a slower speed relative to the tip. This change in speed forces a designer to change from impulse at the base, to a high reaction-style tip.
Classical turbine design methods were developed in the mid 19th century. Vector analysis related the fluid flow with turbine shape and rotation. Graphical calculation methods were used at first. Formulae for the basic dimensions of turbine parts are well documented and a highly efficient machine can be reliably designed for any fluid flow condition. Some of the calculations are empirical or 'rule of thumb' formulae, and others are based on classical mechanics. As with most engineering calculations, simplifying assumptions were made.
Velocity triangles can be used to calculate the basic performance of a turbine stage. Gas exits the stationary turbine nozzle guide vanes at absolute velocity Va1. The rotor rotates at velocity U. Relative to the rotor, the velocity of the gas as it impinges on the rotor entrance is Vr1. The gas is turned by the rotor and exits, relative to the rotor, at velocity Vr2. However, in absolute terms the rotor exit velocity is Va2. The velocity triangles are constructed using these various velocity vectors. Velocity triangles can be constructed at any section through the blading (for example: hub, tip, midsection and so on) but are usually shown at the mean stage radius. Mean performance for the stage can be calculated from the velocity triangles, at this radius, using the Euler equation:
Hence:
where:
is the specific enthalpy drop across stage
is the turbine entry total (or stagnation) temperature
is the turbine rotor peripheral velocity
is the change in whirl velocity
The turbine pressure ratio is a function of and the turbine efficiency.
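A hedged numerical sketch of the Euler work equation described above, in the common single-stage form where the specific enthalpy drop equals the blade speed times the change in whirl velocity (Δh₀ = U·ΔVw per unit mass of gas); all velocity values below are arbitrary illustrative numbers, not data from the article.

```python
# Mean-radius velocity-triangle calculation for one turbine stage (SI units).
U = 340.0          # rotor peripheral (blade) speed at the mean radius, m/s
Vw1 = 520.0        # tangential (whirl) component of absolute velocity at rotor inlet, m/s
Vw2 = -80.0        # whirl component at rotor exit (negative: opposite direction), m/s

delta_Vw = Vw1 - Vw2              # change in whirl velocity, m/s
specific_work = U * delta_Vw      # Euler equation: specific enthalpy drop, J/kg

print(f"specific stage work = {specific_work / 1000:.1f} kJ/kg")
```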
Modern turbine design carries the calculations further. Computational fluid dynamics dispenses with many of the simplifying assumptions used to derive classical formulas and computer software facilitates optimization. These tools have led to steady improvements in turbine design over the last forty years.
The primary numerical classification of a turbine is its specific speed. This number describes the speed of the turbine at its maximum efficiency with respect to the power and flow rate. The specific speed is derived to be independent of turbine size. Given the fluid flow conditions and the desired shaft output speed, the specific speed can be calculated and an appropriate turbine design selected.
The specific speed, along with some fundamental formulas can be used to reliably scale an existing design of known performance to a new size with corresponding performance.
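A small sketch of a specific-speed calculation. The dimensionless form Ns = ω·√Q / (gH)^(3/4) used below is one common definition for water turbines; it is stated here as an assumption rather than taken from the article, and the operating numbers are arbitrary.

```python
import math

# Dimensionless specific speed for a hydraulic turbine (one common convention).
def specific_speed(omega_rad_s, flow_m3_s, head_m, g=9.81):
    return omega_rad_s * math.sqrt(flow_m3_s) / (g * head_m) ** 0.75

omega = 2 * math.pi * 500 / 60    # 500 rpm shaft speed converted to rad/s
Q = 2.5                           # volumetric flow rate, m^3/s
H = 40.0                          # net head, m

print(f"specific speed = {specific_speed(omega, Q, H):.3f}")
```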
Off-design performance is normally displayed as a turbine map or characteristic.
The number of blades in the rotor and the number of vanes in the stator are often two different prime numbers in order to reduce the harmonics and maximize the blade-passing frequency.
Types
Steam turbines are used to drive electrical generators in thermal power plants which use coal, fuel oil or nuclear fuel. They were once used to directly drive mechanical devices such as ships' propellers (for example the Turbinia, the first turbine-powered steam launch), but most such applications now use reduction gears or an intermediate electrical step, where the turbine is used to generate electricity, which then powers an electric motor connected to the mechanical load. Turbo electric ship machinery was particularly popular in the period immediately before and during World War II, primarily due to a lack of sufficient gear-cutting facilities in US and UK shipyards.
Aircraft gas turbine engines are sometimes referred to as turbine engines to distinguish them from piston engines.
Transonic turbine. The gas flow in most turbines employed in gas turbine engines remains subsonic throughout the expansion process. In a transonic turbine the gas flow becomes supersonic as it exits the nozzle guide vanes, although the downstream velocities normally become subsonic. Transonic turbines operate at a higher pressure ratio than normal but are usually less efficient and uncommon.
Contra-rotating turbines. With axial turbines, some efficiency advantage can be obtained if a downstream turbine rotates in the opposite direction to an upstream unit. However, the added complication can be counter-productive. A contra-rotating steam turbine, usually known as the Ljungström turbine, was originally invented by Swedish engineer Fredrik Ljungström (1875–1964) in Stockholm; in partnership with his brother Birger Ljungström he obtained a patent in 1894. The design is essentially a multi-stage radial turbine (or pair of 'nested' turbine rotors) offering great efficiency, a heat drop per stage about four times as large as in the reaction (Parsons) turbine, and an extremely compact design; the type met particular success in back-pressure power plants. However, unlike other designs, it handles large steam volumes with difficulty, and only a combination with axial-flow turbines (DUREX) allows the turbine to be built for powers greater than about 50 MW. In marine applications only about 50 turbo-electric units were ordered (of which a considerable number were eventually sold to land plants) during 1917–19, and during 1920–22 a few not very successful turbo-mechanical units were sold. Only a few turbo-electric marine plants were still in use in the late 1960s (ss Ragne, ss Regin), while most land plants remained in use in 2010.
Statorless turbine. Multi-stage turbines have a set of static (meaning stationary) inlet guide vanes that direct the gas flow onto the rotating rotor blades. In a stator-less turbine the gas flow exiting an upstream rotor impinges onto a downstream rotor without an intermediate set of stator vanes (that rearrange the pressure/velocity energy levels of the flow) being encountered.
Ceramic turbine. Conventional high-pressure turbine blades (and vanes) are made from nickel based alloys and often use intricate internal air-cooling passages to prevent the metal from overheating. In recent years, experimental ceramic blades have been manufactured and tested in gas turbines, with a view to increasing rotor inlet temperatures and/or, possibly, eliminating air cooling. Ceramic blades are more brittle than their metallic counterparts, and carry a greater risk of catastrophic blade failure. This has tended to limit their use in jet engines and gas turbines to the stator (stationary) blades.
Ducted fan (shrouded) turbine. Many turbine rotor blades have shrouding at the top, which interlocks with that of adjacent blades, to increase damping and thereby reduce blade flutter. In large land-based electricity generation steam turbines, the shrouding is often complemented, especially in the long blades of a low-pressure turbine, with lacing wires. These wires pass through holes drilled in the blades at suitable distances from the blade root and are usually brazed to the blades at the point where they pass through. Lacing wires reduce blade flutter in the central part of the blades. The introduction of lacing wires substantially reduces the instances of blade failure in large or low-pressure turbines.
Propfan (shroudless turbine). Modern practice is, wherever possible, to eliminate the rotor shrouding, thus reducing the centrifugal load on the blade and the cooling requirements.
Tesla turbine or bladeless turbine uses the boundary layer effect and not a fluid impinging upon the blades as in a conventional turbine.
Water turbines:
Pelton wheel, a type of impulse water turbine.
Francis turbine, a type of widely used water turbine.
Kaplan turbine, a variation of the Francis Turbine.
Turgo turbine, a modified form of the Pelton wheel.
Tyson turbine, a conical water turbine with helical blades emerging partway down from the apex gradually increasing in radial dimension and decreasing in pitch as they spiral towards the base of the cone.
Cross-flow turbine, also known as Banki-Michell turbine, or Ossberger turbine.
Wind turbine. These normally operate as a single stage without nozzle and interstage guide vanes. An exception is the Éolienne Bollée, which has a stator and a rotor.
Velocity compound "Curtis". Curtis combined the de Laval and Parsons turbine by using a set of fixed nozzles on the first stage or stator and then a rank of fixed and rotating blade rows, as in the Parsons or de Laval, typically up to ten compared with up to a hundred stages of a Parsons design. The overall efficiency of a Curtis design is less than that of either the Parsons or de Laval designs, but it can be satisfactorily operated through a much wider range of speeds, including successful operation at low speeds and at lower pressures, which made it ideal for use in ships' powerplant. In a Curtis arrangement, the entire heat drop in the steam takes place in the initial nozzle row and both the subsequent moving blade rows and stationary blade rows merely change the direction of the steam. Use of a small section of a Curtis arrangement, typically one nozzle section and two or three rows of moving blades, is usually termed a Curtis 'Wheel' and in this form, the Curtis found widespread use at sea as a 'governing stage' on many reaction and impulse turbines and turbine sets. This practice is still commonplace today in marine steam plant.
Pressure compound multi-stage impulse, or "Rateau", after its French inventor, Auguste Rateau. The Rateau employs simple impulse rotors separated by a nozzle diaphragm. The diaphragm is essentially a partition wall in the turbine with a series of tunnels cut into it, funnel-shaped with the broad end facing the previous stage and the narrow end the next; they are also angled to direct the steam jets onto the impulse rotor.
Mercury vapour turbines used mercury as the working fluid, to improve the efficiency of fossil-fuelled generating stations. Although a few power plants were built with combined mercury vapour and conventional steam turbines, the toxicity of the metal mercury was quickly apparent.
Screw turbine is a water turbine which uses the principle of the Archimedean screw to convert the potential energy of water on an upstream level into kinetic energy.
Uses
A large proportion of the world's electrical power is generated by turbo generators.
Turbines are used in gas turbine engines on land, sea and air.
Turbochargers are used on piston engines.
Gas turbines have very high power densities (i.e. the ratio of power to mass, or power to volume) because they run at very high speeds. The Space Shuttle main engines used turbopumps (machines consisting of a pump driven by a turbine engine) to feed the propellants (liquid oxygen and liquid hydrogen) into the engine's combustion chamber. The liquid hydrogen turbopump is slightly larger than an automobile engine (weighing approximately 700 lb) with the turbine producing nearly 70,000 hp (52.2 MW).
Turboexpanders are used for refrigeration in industrial processes.
| Technology | Electricity generation and distribution | null |
30890 | https://en.wikipedia.org/wiki/Time%20zone | Time zone | A time zone is an area which observes a uniform standard time for legal, commercial and social purposes. Time zones tend to follow the boundaries between countries and their subdivisions instead of strictly following longitude, because it is convenient for areas in frequent communication to keep the same time.
Each time zone is defined by a standard offset from Coordinated Universal Time (UTC). The offsets range from UTC−12:00 to UTC+14:00, and are usually a whole number of hours, but a few zones are offset by an additional 30 or 45 minutes, such as in India and Nepal. Some areas in a time zone may use a different offset for part of the year, typically one hour ahead during spring and summer, a practice known as daylight saving time (DST).
List of UTC offsets
In the table below, the locations that use daylight saving time (DST) are listed in their UTC offset when DST is not in effect. When DST is in effect, approximately during spring and summer, their UTC offset is increased by one hour (except for Lord Howe Island, where it is increased by 30 minutes). For example, during the DST period California observes UTC−07:00 and the United Kingdom observes UTC+01:00.
History
The apparent position of the Sun in the sky, and thus solar time, varies by location due to the spherical shape of the Earth. This variation corresponds to four minutes of time for every degree of longitude, so for example when it is solar noon in London, it is about 10 minutes before solar noon in Bristol, which is about 2.5 degrees to the west.
The Royal Observatory, Greenwich, founded in 1675, established Greenwich Mean Time (GMT), the mean solar time at that location, as an aid to mariners to determine longitude at sea, providing a standard reference time while each location in England kept a different time.
Railway time
In the 19th century, as transportation and telecommunications improved, it became increasingly inconvenient for each location to observe its own solar time. In November 1840, the British Great Western Railway started using GMT kept by portable chronometers. This practice was soon followed by other railway companies in Great Britain and became known as railway time.
Around August 23, 1852, time signals were first transmitted by telegraph from the Royal Observatory. By 1855, 98% of Great Britain's public clocks were using GMT, but it was not made the island's legal time until August 2, 1880. Some British clocks from this period have two minute hands, one for the local time and one for GMT.
On November 2, 1868, the British Colony of New Zealand officially adopted a standard time to be observed throughout the colony. It was based on longitude east of Greenwich, that is 11 hours 30 minutes ahead of GMT. This standard was known as New Zealand Mean Time.
Timekeeping on North American railroads in the 19th century was complex. Each railroad used its own standard time, usually based on the local time of its headquarters or most important terminus, and the railroad's train schedules were published using its own time. Some junctions served by several railroads had a clock for each railroad, each showing a different time. Because of this a number of accidents occurred when trains from different companies using the same tracks mistimed their passings.
Around 1863, Charles F. Dowd proposed a system of hourly standard time zones for North American railroads, although he published nothing on the matter at that time and did not consult railroad officials until 1869. In 1870 he proposed four ideal time zones having north–south borders, the first centered on Washington, D.C., but by 1872 the first was centered on meridian 75° west of Greenwich, with natural borders such as sections of the Appalachian Mountains. Dowd's system was never accepted by North American railroads.
Chief meteorologist at the United States Weather Bureau Cleveland Abbe divided the United States into four standard time zones for consistency among the weather stations. In 1879, he published a paper titled Report on Standard Time. In 1883, he convinced North American railroad companies to adopt his time-zone system. In 1884, Britain, which had already adopted its own standard time system for England, Scotland, and Wales, helped gather international consent for global time. In time, the American government, influenced in part by Abbe's 1879 paper, adopted the time-zone system.
It was a version proposed by William F. Allen, the editor of the Traveler's Official Railway Guide. The borders of its time zones ran through railroad stations, often in major cities. For example, the border between its Eastern and Central time zones ran through Detroit, Buffalo, Pittsburgh, Atlanta, and Charleston. It was inaugurated on Sunday, November 18, 1883, also called "The Day of Two Noons", when each railroad station clock was reset as standard-time noon was reached within each time zone.
The North American zones were named Intercolonial, Eastern, Central, Mountain, and Pacific. Within a year 85% of all cities with populations over 10,000 (about 200 cities) were using standard time. A notable exception was Detroit (located about halfway between the meridians of Eastern and Central time), which kept local time until 1900, then tried Central Standard Time, local mean time, and Eastern Standard Time (EST) before a May 1915 ordinance settled on EST and was ratified by popular vote in August 1916. The confusion of times came to an end when standard time zones were formally adopted by the U.S. Congress in the Standard Time Act of March 19, 1918.
Worldwide time zones
Italian mathematician Quirico Filopanti introduced the idea of a worldwide system of time zones in his book Miranda!, published in 1858. He proposed 24 hourly time zones, which he called "longitudinal days", the first centred on the meridian of Rome. He also proposed a universal time to be used in astronomy and telegraphy. However, his book attracted no attention until long after his death.
Scottish-born Canadian Sir Sandford Fleming proposed a worldwide system of time zones in 1876. The proposal divided the world into twenty-four time zones labeled A–Y (skipping J), each one covering 15 degrees of longitude. All clocks within each zone would be set to the same time as the others, but would differ by one hour from those in the neighboring zones. He advocated his system at several international conferences, including the International Meridian Conference, where it received some consideration. The system has not been directly adopted, but some maps divide the world into 24 time zones and assign letters to them, similarly to Fleming's system.
By about 1900, almost all inhabited places on Earth had adopted a standard time zone, but only some of them used an hourly offset from GMT. Many applied the time at a local astronomical observatory to an entire country, without any reference to GMT. It took many decades before all time zones were based on some standard offset from GMT or Coordinated Universal Time (UTC). By 1929, the majority of countries had adopted hourly time zones, though some countries such as Iran, India, Myanmar and parts of Australia had time zones with a 30-minute offset. Nepal was the last country to adopt a standard offset, shifting slightly to UTC+05:45 in 1986.
All nations currently use standard time zones for secular purposes, but not all of them apply the concept as originally conceived. Several countries and subdivisions use half-hour or quarter-hour deviations from standard time. Some countries, such as China and India, use a single time zone even though the extent of their territory far exceeds the ideal 15° of longitude for one hour; other countries, such as Spain and Argentina, use standard hour-based offsets, but not necessarily those that would be determined by their geographical location. The consequences, in some areas, can affect the lives of local citizens, and in extreme cases contribute to larger political issues, such as in the western reaches of China. In Russia, which has 11 time zones, two time zones were removed in 2010 and reinstated in 2014.
Notation
ISO 8601
ISO 8601 is a standard established by the International Organization for Standardization defining methods of representing dates and times in textual form, including specifications for representing time zones.
If a time is in Coordinated Universal Time (UTC), a "Z" is added directly after the time without a separating space. "Z" is the zone designator for the zero UTC offset. "09:30 UTC" is therefore represented as "09:30Z" or "0930Z". Likewise, "14:45:15 UTC" is written as "14:45:15Z" or "144515Z". UTC time is also known as "Zulu" time, since "Zulu" is a phonetic alphabet code word for the letter "Z".
Offsets from UTC are written in the format ±hh:mm, ±hhmm, or ±hh (either hours ahead or behind UTC). For example, if the time being described is one hour ahead of UTC (such as the time in Germany during the winter), the zone designator would be "+01:00", "+0100", or simply "+01". This numeric representation of time zones is appended to local times in the same way that alphabetic time zone abbreviations (or "Z", as above) are appended. The offset from UTC changes with daylight saving time, e.g. a time offset in Chicago, which is in the North American Central Time Zone, is "−06:00" for the winter (Central Standard Time) and "−05:00" for the summer (Central Daylight Time).
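A short Python sketch of these notations using only the standard library; the specific offsets (Germany in winter, Chicago in winter) are the ones used as examples above.

```python
from datetime import datetime, timezone, timedelta

# 09:30 UTC, the zero-offset instant written as "09:30Z" in ISO 8601.
t_utc = datetime(2024, 1, 15, 9, 30, tzinfo=timezone.utc)
print(t_utc.isoformat())                       # 2024-01-15T09:30:00+00:00

# The same instant expressed with a +01:00 offset (e.g. Germany in winter).
t_berlin = t_utc.astimezone(timezone(timedelta(hours=1)))
print(t_berlin.isoformat())                    # 2024-01-15T10:30:00+01:00

# And with the wintertime Chicago offset of -06:00 (Central Standard Time).
t_chicago = t_utc.astimezone(timezone(timedelta(hours=-6)))
print(t_chicago.isoformat())                   # 2024-01-15T03:30:00-06:00
```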
Abbreviations
Time zones are often represented by alphabetic abbreviations such as "EST", "WST", and "CST", but these are not part of the international time and date standard ISO 8601. Such designations can be ambiguous; for example, "CST" can mean (North American) Central Standard Time (UTC−06:00), Cuba Standard Time (UTC−05:00) and China Standard Time (UTC+08:00), and it is also a widely used variant of ACST (Australian Central Standard Time, UTC+09:30).
Conversions
Conversion between time zones obeys the relationship
"time in zone A" − "UTC offset for zone A" = "time in zone B" − "UTC offset for zone B",
in which each side of the equation is equivalent to UTC.
The conversion equation can be rearranged to
"time in zone B" = "time in zone A" − "UTC offset for zone A" + "UTC offset for zone B".
For example, the New York Stock Exchange opens at 09:30 (EST, UTC offset= −05:00). In California (PST, UTC offset= −08:00) and India (IST, UTC offset= +05:30), the New York Stock Exchange opens at
time in California = 09:30 − (−05:00) + (−08:00) = 06:30;
time in India = 09:30 − (−05:00) + (+05:30) = 20:00.
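The same worked example in Python, applying the conversion relationship directly with fixed standard offsets (daylight saving time is ignored here, as noted below):

```python
from datetime import datetime, timedelta

def convert(time_in_a, offset_a_hours, offset_b_hours):
    # time in zone B = time in zone A - offset A + offset B
    return time_in_a - timedelta(hours=offset_a_hours) + timedelta(hours=offset_b_hours)

nyse_open_est = datetime(2024, 1, 15, 9, 30)          # 09:30 EST (UTC-05:00)

print(convert(nyse_open_est, -5, -8).time())          # 06:30 in California (PST)
print(convert(nyse_open_est, -5, +5.5).time())        # 20:00 in India (IST)
```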
These calculations become more complicated near the time switch to or from daylight saving time, as the UTC offset for the area becomes a function of UTC time.
The time differences may also result in different dates. For example, when it is 22:00 on Monday in Egypt (UTC+02:00), it is 01:00 on Tuesday in Pakistan (UTC+05:00).
The table "Time of day by zone" gives an overview on the time relations between different zones.
Nautical time zones
Since the 1920s, a nautical standard time system has been in operation for ships on the high seas. As an ideal form of the terrestrial time zone system, nautical time zones consist of gores of 15° offset from GMT by a whole number of hours. A nautical date line follows the 180th meridian, bisecting one 15° gore into two 7.5° gores that differ from GMT by ±12 hours.
However, in practice each ship may choose what time to observe at each location. Ships may decide to adjust their clocks at a convenient time, usually at night, not exactly when they cross a certain longitude. Some ships simply remain on the time of the departing port during the whole trip.
Skewing of time zones
Ideal time zones, such as nautical time zones, are based on the mean solar time of a particular meridian in the middle of that zone with boundaries located 7.5 degrees east and west of the meridian. In practice, however, many time zone boundaries are drawn much farther to the west, and some countries are located entirely outside their ideal time zones.
For example, even though the Prime Meridian (0°) passes through Spain and France, they use the mean solar time of 15 degrees east (Central European Time) rather than 0 degrees (Greenwich Mean Time). France previously used GMT, but was switched to CET (Central European Time) during the German occupation of the country during World War II and did not switch back after the war. Similarly, prior to World War II, the Netherlands observed "Amsterdam Time", which was twenty minutes ahead of Greenwich Mean Time. They were obliged to follow German time during the war, and kept it thereafter. In the mid-1970s the Netherlands, like other European states, began observing daylight saving (summer) time.
One reason to draw time zone boundaries far to the west of their ideal meridians is to allow the more efficient use of afternoon sunlight. Some of these locations also use daylight saving time (DST), further increasing the difference to local solar time. As a result, in summer, solar noon in the Spanish city of Vigo occurs at 14:41 clock time. This westernmost area of continental Spain never experiences sunset before 18:00 clock time, even in winter, despite lying 42 degrees north of the equator. Near the summer solstice, Vigo has sunset times after 22:00, similar to those of Stockholm, which is in the same time zone and 17 degrees farther north. Stockholm has much earlier sunrises, though.
In the United States, the reasons were more historical and business-related. In Midwestern states, like Indiana and Michigan, those living in Indianapolis and Detroit wanted to be on the same time zone as New York to simplify communications and transactions.
A more extreme example is Nome, Alaska, which is at 165°24′W longitude, just west of the center of the idealized Samoa Time Zone (165°W). Nevertheless, Nome observes Alaska Time (135°W) with DST, so it is slightly more than two hours ahead of the sun in winter and over three in summer.
Kotzebue, Alaska, also near the same meridian but north of the Arctic Circle, has two sunsets on the same day in early August, one shortly after midnight at the start of the day, and the other shortly before midnight at the end of the day.
China extends as far west as 73°E, but all parts of it use UTC+08:00 (120°E), so solar "noon" can occur as late as 15:00 in western portions of China such as Xinjiang. The Afghanistan-China border marks the greatest terrestrial time zone difference on Earth, with a 3.5 hour difference between Afghanistan's UTC+4:30 and China's UTC+08:00.
Daylight saving time
Many countries, and sometimes just certain regions of countries, adopt daylight saving time (DST), also known as summer time, during part of the year. This typically involves advancing clocks by an hour near the start of spring and adjusting back in autumn ("spring forward", "fall back"). Modern DST was first proposed in 1907 and was in widespread use in 1916 as a wartime measure aimed at conserving coal. Despite controversy, many countries have used it off and on since then; details vary by location and change occasionally. Countries around the equator usually do not observe daylight saving time, since the seasonal difference in sunlight there is minimal.
Computer systems
Many computer operating systems include the necessary support for working with all (or almost all) possible local times based on the various time zones. Internally, operating systems typically use UTC as their basic time-keeping standard, while providing services for converting local times to and from UTC, and also the ability to automatically change local time conversions at the start and end of daylight saving time in the various time zones. (See the article on daylight saving time for more details on this aspect.)
Web servers presenting web pages primarily for an audience in a single time zone or a limited range of time zones typically show times as a local time, perhaps with UTC time in brackets. More internationally oriented websites may show times in UTC only or using an arbitrary time zone. For example, the international English-language version of CNN includes GMT and Hong Kong Time, whereas the US version shows Eastern Time. US Eastern Time and Pacific Time are also used fairly commonly on many US-based English-language websites with global readership. The format is typically based on the W3C Note "datetime".
Email systems and other messaging systems (IRC chat, etc.) time-stamp messages using UTC, or else include the sender's time zone as part of the message, allowing the receiving program to display the message's date and time of sending in the recipient's local time.
Database records that include a time stamp typically use UTC, especially when the database is part of a system that spans multiple time zones. The use of local time for time-stamping records is not recommended for time zones that implement daylight saving time because once a year there is a one-hour period when local times are ambiguous.
Calendar systems nowadays usually tie their time stamps to UTC, and show them differently on computers that are in different time zones. That works when having telephone or internet meetings. It works less well when travelling, because the calendar events are assumed to take place in the time zone the computer or smartphone was on when creating the event. The event can be shown at the wrong time. For example, if a New Yorker plans to meet someone in Los Angeles at 9 am, and makes a calendar entry at 9 am (which the computer assumes is New York time), the calendar entry will show at 6 am once the computer switches to Los Angeles time. Calendaring software must also deal with daylight saving time (DST). If, for political reasons, the begin and end dates of daylight saving time are changed, calendar entries should stay the same in local time, even though they may shift in UTC time.
Operating systems
Unix
Unix-like systems, including Linux and macOS, keep system time in Unix time format, representing the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC) on Thursday, January 1, 1970, excluding leap seconds. Unix time is usually converted to local time when displayed to the user, and times specified by the user in local time are converted to Unix time. The conversion takes into account the time zone and daylight saving time rules; by default the time zone and daylight saving time rules are set up when the system is configured, though individual processes can specify time zones and daylight saving time rules using the TZ environment variable. This allows users in multiple time zones, or in the same time zone but with different daylight saving time rules, to use the same computer, with their respective local times displayed correctly to each user. Information about time zones and daylight saving time rules most commonly comes from the IANA time zone database. Many systems, including anything using the GNU C Library, a C library based on the BSD C library, or the System V Release 4 C library, can make use of the IANA time zone database.
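A sketch of this conversion using Python's standard library together with the IANA time zone database (the zone names below are illustrative choices):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo          # IANA time zone database access (Python 3.9+)

unix_seconds = 1_700_000_000           # seconds since 1970-01-01T00:00:00Z, excluding leap seconds

utc_time = datetime.fromtimestamp(unix_seconds, tz=timezone.utc)
print(utc_time)                                        # 2023-11-14 22:13:20+00:00

# The same instant rendered in two different local time zones, DST rules included.
print(utc_time.astimezone(ZoneInfo("Europe/London")))
print(utc_time.astimezone(ZoneInfo("America/Los_Angeles")))
```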
Microsoft Windows
Windows-based computer systems prior to Windows 95 and Windows NT used local time, but Windows 95 and later, and Windows NT, base system time on UTC. They allow a program to fetch the system time as UTC, represented as a year, month, day, hour, minute, second, and millisecond; Windows 95 and later, and Windows NT 3.5 and later, also allow the system time to be fetched as a count of 100 ns units since 1601-01-01 00:00:00 UTC. The system registry contains time zone information that includes the offset from UTC and rules that indicate the start and end dates for daylight saving in each zone. Interaction with the user normally uses local time, and application software is able to calculate the time in various zones. Terminal Servers allow remote computers to redirect their time zone settings to the Terminal Server so that users see the correct time for their time zone in their desktop/application sessions. Terminal Services uses the server base time on the Terminal Server and the client time zone information to calculate the time in the session.
Programming languages
Java
While most application software will use the underlying operating system for time zone and daylight saving time rule information, the Java Platform, from version 1.3.1, has maintained its own database of time zone and daylight saving time rule information. This database is updated whenever time zone or daylight saving time rules change. Oracle provides an updater tool for this purpose.
As an alternative to the information bundled with the Java Platform, programmers may choose to use the Joda-Time library. This library includes its own data based on the IANA time zone database.
As of Java 8 there is a new date and time API that can help with converting times.
JavaScript
Traditionally, there was very little in the way of time zone support for JavaScript. Essentially the programmer had to extract the UTC offset by instantiating a time object, getting a GMT time from it, and differencing the two. This does not provide a solution for more complex daylight saving variations, such as divergent DST directions between northern and southern hemispheres.
ECMA-402, the standard on Internationalization API for JavaScript, provides ways of formatting Time Zones. However, due to size constraint, some implementations or distributions do not include it.
Perl
The DateTime object in Perl supports all entries in the IANA time zone database and includes the ability to get, set and convert between time zones.
PHP
The DateTime objects and related functions have been compiled into the PHP core since 5.2. This includes the ability to get and set the default script time zone, and DateTime is aware of its own time zone internally. PHP.net provides extensive documentation on this. As noted there, the most current time zone database can be implemented via the PECL timezonedb.
Python
The datetime module in the Python standard library stores and operates on time zone information using the tzinfo class. The third-party pytz module provides access to the full IANA time zone database. The negated time zone offset in seconds is stored in the time.timezone and time.altzone attributes. Since Python 3.9, the zoneinfo module provides time zone management without the need for a third-party module.
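A brief sketch of the standard-library behaviour described above; the zone name is an illustrative choice, and the time.timezone output depends on the machine's configured local zone.

```python
from datetime import datetime
from zoneinfo import ZoneInfo
import time

# time.timezone and time.altzone hold the negated offsets (in seconds) of the
# local standard and DST times, respectively.
print(time.timezone, time.altzone)

# zoneinfo (Python 3.9+) attaches full IANA rules, including DST transitions.
tz = ZoneInfo("America/New_York")
winter = datetime(2024, 1, 15, 12, 0, tzinfo=tz)
summer = datetime(2024, 7, 15, 12, 0, tzinfo=tz)
print(winter.utcoffset())   # -1 day, 19:00:00  (i.e. UTC-05:00, EST)
print(summer.utcoffset())   # -1 day, 20:00:00  (i.e. UTC-04:00, EDT)
```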
Smalltalk
Each Smalltalk dialect comes with its own built-in classes for dates, times and timestamps, only a few of which implement the DateAndTime and Duration classes as specified by the ANSI Smalltalk Standard. VisualWorks provides a TimeZone class that supports up to two annually recurring offset transitions, which are assumed to apply to all years (same behavior as Windows time zones). Squeak provides a Timezone class that does not support any offset transitions. Dolphin Smalltalk does not support time zones at all.
For full support of the tz database (zoneinfo) in a Smalltalk application (including support for any number of annually recurring offset transitions, and support for different intra-year offset transition rules in different years) the third-party, open-source, ANSI-Smalltalk-compliant Chronos Date/Time Library is available for use with any of the following Smalltalk dialects: VisualWorks, Squeak, Gemstone, or Dolphin.
Time in outer space
Orbiting spacecraft may experience many sunrises and sunsets, or none, in a 24-hour period. Therefore, it is not possible to calibrate the time with respect to the Sun and still respect a 24-hour sleep/wake cycle. A common practice for space exploration is to use the Earth-based time of the launch site or mission control, synchronizing the sleeping cycles of the crew and controllers. The International Space Station normally uses Greenwich Mean Time (GMT).
Timekeeping on Mars can be more complex, since the planet has a solar day of approximately 24 hours and 40 minutes, known as a sol. Earth controllers for some Mars missions have synchronized their sleep/wake cycles with the Martian day, during which solar-powered rover activity specifically occurs.
| Technology | Timekeeping | null |
30906 | https://en.wikipedia.org/wiki/Transformer | Transformer | In electrical engineering, a transformer is a passive component that transfers electrical energy from one electrical circuit to another circuit, or multiple circuits. A varying current in any coil of the transformer produces a varying magnetic flux in the transformer's core, which induces a varying electromotive force (EMF) across any other coils wound around the same core. Electrical energy can be transferred between separate coils without a metallic (conductive) connection between the two circuits. Faraday's law of induction, discovered in 1831, describes the induced voltage effect in any coil due to a changing magnetic flux encircled by the coil.
Transformers are used to change AC voltage levels, such transformers being termed step-up or step-down type to increase or decrease voltage level, respectively. Transformers can also be used to provide galvanic isolation between circuits as well as to couple stages of signal-processing circuits. Since the invention of the first constant-potential transformer in 1885, transformers have become essential for the transmission, distribution, and utilization of alternating current electric power. A wide range of transformer designs is encountered in electronic and electric power applications. Transformers range in size from RF transformers less than a cubic centimeter in volume, to units weighing hundreds of tons used to interconnect the power grid.
Principles
Ideal transformer
An ideal transformer is linear, lossless and perfectly coupled. Perfect coupling implies infinitely high core magnetic permeability and winding inductance and zero net magnetomotive force (i.e. iPnP − iSnS = 0).
A varying current in the transformer's primary winding creates a varying magnetic flux in the transformer core, which is also encircled by the secondary winding. This varying flux at the secondary winding induces a varying electromotive force or voltage in the secondary winding. This electromagnetic induction phenomenon is the basis of transformer action and, in accordance with Lenz's law, the secondary current so produced creates a flux equal and opposite to that produced by the primary winding.
The windings are wound around a core of infinitely high magnetic permeability so that all of the magnetic flux passes through both the primary and secondary windings. With a voltage source connected to the primary winding and a load connected to the secondary winding, the transformer currents flow in the indicated directions and the core magnetomotive force cancels to zero.
According to Faraday's law, since the same magnetic flux passes through both the primary and secondary windings in an ideal transformer, a voltage is induced in each winding proportional to its number of turns. The transformer winding voltage ratio is equal to the winding turns ratio.
An ideal transformer is a reasonable approximation for a typical commercial transformer, with voltage ratio and winding turns ratio both being inversely proportional to the corresponding current ratio.
The load impedance referred to the primary circuit is equal to the turns ratio squared times the secondary circuit load impedance.
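A minimal numeric sketch of these ideal-transformer relationships; all values (turns, voltage, load) are illustrative and not taken from the article.

```python
# Ideal transformer with a 10:1 turns ratio (e.g. 2400 V primary to 240 V secondary).
n_p, n_s = 1000, 100
a = n_p / n_s                      # turns ratio

v_p = 2400.0                       # primary voltage, V
z_load = 12.0                      # load impedance on the secondary, ohms

v_s = v_p / a                      # secondary voltage: 240 V (voltage ratio = turns ratio)
i_s = v_s / z_load                 # secondary current: 20 A
i_p = i_s / a                      # primary current: 2 A (currents scale inversely)

z_referred = a**2 * z_load         # load impedance referred to the primary: 1200 ohms
assert abs(v_p / i_p - z_referred) < 1e-9
print(v_s, i_s, i_p, z_referred)
```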
Real transformer
Deviations from ideal transformer
The ideal transformer model neglects many basic linear aspects of real transformers, including unavoidable losses and inefficiencies.
(a) Core losses, collectively called magnetizing current losses, consisting of
Hysteresis losses due to nonlinear magnetic effects in the transformer core, and
Eddy current losses due to joule heating in the core that are proportional to the square of the transformer's applied voltage.
(b) Unlike the ideal model, the windings in a real transformer have non-zero resistances and inductances associated with:
Joule losses due to resistance in the primary and secondary windings
Leakage flux that escapes from the core and passes through one winding only resulting in primary and secondary reactive impedance.
(c) Similar to an inductor, parasitic capacitance and self-resonance phenomena occur due to the electric field distribution. Three kinds of parasitic capacitance are usually considered and closed-loop equations are provided:
Capacitance between adjacent turns in any one layer;
Capacitance between adjacent layers;
Capacitance between the core and the layer(s) adjacent to the core;
Inclusion of capacitance into the transformer model is complicated, and is rarely attempted; the 'real' transformer model's equivalent circuit shown below does not include parasitic capacitance. However, the capacitance effect can be measured by comparing open-circuit inductance, i.e. the inductance of a primary winding when the secondary circuit is open, to a short-circuit inductance when the secondary winding is shorted.
Leakage flux
The ideal transformer model assumes that all flux generated by the primary winding links all the turns of every winding, including itself. In practice, some flux traverses paths that take it outside the windings. Such flux is termed leakage flux, and results in leakage inductance in series with the mutually coupled transformer windings. Leakage flux results in energy being alternately stored in and discharged from the magnetic fields with each cycle of the power supply. It is not directly a power loss, but results in inferior voltage regulation, causing the secondary voltage not to be directly proportional to the primary voltage, particularly under heavy load. Transformers are therefore normally designed to have very low leakage inductance.
In some applications increased leakage is desired, and long magnetic paths, air gaps, or magnetic bypass shunts may deliberately be introduced in a transformer design to limit the short-circuit current it will supply. Leaky transformers may be used to supply loads that exhibit negative resistance, such as electric arcs, mercury- and sodium- vapor lamps and neon signs or for safely handling loads that become periodically short-circuited such as electric arc welders.
Air gaps are also used to keep a transformer from saturating, especially audio-frequency transformers in circuits that have a DC component flowing in the windings. A saturable reactor exploits saturation of the core to control alternating current.
Knowledge of leakage inductance is also useful when transformers are operated in parallel. It can be shown that if the percent impedance and associated winding leakage reactance-to-resistance (X/R) ratio of two transformers were
the same, the transformers would share the load power in proportion to their respective ratings. However, the impedance tolerances of commercial transformers are significant. Also, the impedance and X/R ratio of different capacity transformers tends to vary.
Equivalent circuit
Referring to the diagram, a practical transformer's physical behavior may be represented by an equivalent circuit model, which can incorporate an ideal transformer.
Winding joule losses and leakage reactance are represented by the following series loop impedances of the model:
Primary winding: RP, XP
Secondary winding: RS, XS.
In normal course of circuit equivalence transformation, RS and XS are in practice usually referred to the primary side by multiplying these impedances by the turns ratio squared, (NP/NS)² = a².
Core loss and reactance are represented by the following shunt leg impedances of the model:
Core or iron losses: RC
Magnetizing reactance: XM.
RC and XM are collectively termed the magnetizing branch of the model.
Core losses are caused mostly by hysteresis and eddy current effects in the core and are proportional to the square of the core flux for operation at a given frequency. The finite permeability core requires a magnetizing current IM to maintain mutual flux in the core. Magnetizing current is in phase with the flux, the relationship between the two being non-linear due to saturation effects. However, all impedances of the equivalent circuit shown are by definition linear and such non-linearity effects are not typically reflected in transformer equivalent circuits. With sinusoidal supply, core flux lags the induced EMF by 90°. With open-circuited secondary winding, magnetizing branch current I0 equals transformer no-load current.
The resulting model, though sometimes termed 'exact' equivalent circuit based on linearity assumptions, retains a number of approximations. Analysis may be simplified by assuming that magnetizing branch impedance is relatively high and relocating the branch to the left of the primary impedances. This introduces error but allows combination of primary and referred secondary resistances and reactance by simple summation as two series impedances.
Transformer equivalent circuit impedance and transformer ratio parameters can be derived from the following tests: open-circuit test, short-circuit test, winding resistance test, and transformer ratio test.
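A minimal sketch of how open-circuit and short-circuit test readings are conventionally reduced to equivalent-circuit parameters, assuming the series drop is negligible during the open-circuit test and the magnetizing branch is negligible during the short-circuit test; the variable names and readings are illustrative assumptions:

```python
import math

def from_open_circuit(v_oc, i_oc, p_oc):
    """Shunt (magnetizing) branch from the open-circuit test."""
    r_c = v_oc ** 2 / p_oc                        # core-loss resistance
    i_core = p_oc / v_oc                          # loss component of the exciting current
    i_mag = math.sqrt(i_oc ** 2 - i_core ** 2)    # magnetizing (reactive) component
    x_m = v_oc / i_mag                            # magnetizing reactance
    return r_c, x_m

def from_short_circuit(v_sc, i_sc, p_sc):
    """Combined series impedance, referred to the energized winding."""
    r_eq = p_sc / i_sc ** 2
    z_eq = v_sc / i_sc
    x_eq = math.sqrt(z_eq ** 2 - r_eq ** 2)
    return r_eq, x_eq

# Illustrative readings: 230 V, 0.8 A, 60 W open-circuit; 12 V, 10 A, 75 W short-circuit
print(from_open_circuit(230.0, 0.8, 60.0))
print(from_short_circuit(12.0, 10.0, 75.0))
```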
Transformer EMF equation
If the flux in the core is purely sinusoidal, the relationship for either winding between its rms voltage Erms, the supply frequency f, number of turns N, core cross-sectional area A in m² and peak magnetic flux density Bpeak in Wb/m² or T (tesla) is given by the universal EMF equation:

Erms = (2π/√2) f N A Bpeak ≈ 4.44 f N A Bpeak
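As a quick numerical illustration of the universal EMF equation (the values below are purely illustrative):

```python
import math

def transformer_emf_rms(f_hz, turns, area_m2, b_peak_t):
    """Universal EMF equation: E_rms = (2*pi/sqrt(2)) * f * N * A * B_peak ≈ 4.44 f N A B_peak."""
    return (2 * math.pi / math.sqrt(2)) * f_hz * turns * area_m2 * b_peak_t

# Example: 50 Hz, 200 turns, 0.01 m² core, 1.5 T peak flux density (illustrative values)
print(f"{transformer_emf_rms(50, 200, 0.01, 1.5):.0f} V")   # ≈ 666 V
```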
Polarity
A dot convention is often used in transformer circuit diagrams, nameplates or terminal markings to define the relative polarity of transformer windings. Positively increasing instantaneous current entering the primary winding's 'dot' end induces positive polarity voltage exiting the secondary winding's 'dot' end. Three-phase transformers used in electric power systems will have a nameplate that indicates the phase relationships between their terminals. This may be in the form of a phasor diagram, or an alphanumeric code to show the type of internal connection (wye or delta) for each winding.
Effect of frequency
The EMF of a transformer at a given flux increases with frequency. By operating at higher frequencies, transformers can be physically more compact because a given core is able to transfer more power without reaching saturation and fewer turns are needed to achieve the same impedance. However, properties such as core loss and conductor skin effect also increase with frequency. Aircraft and military equipment employ 400 Hz power supplies, which reduce core and winding weight. Conversely, frequencies used for some railway electrification systems were much lower (e.g. 16.7 Hz and 25 Hz) than normal utility frequencies (50–60 Hz) for historical reasons concerned mainly with the limitations of early electric traction motors. Consequently, the transformers used to step down the high overhead line voltages were much larger and heavier for the same power rating than those required for the higher frequencies.
Operation of a transformer at its designed voltage but at a higher frequency than intended will lead to reduced magnetizing current. At a lower frequency, the magnetizing current will increase. Operation of a large transformer at other than its design frequency may require assessment of voltages, losses, and cooling to establish if safe operation is practical. Transformers may require protective relays to protect the transformer from overvoltage at higher than rated frequency.
One example is in traction transformers used for electric multiple unit and high-speed train service operating across regions with different electrical standards. The converter equipment and traction transformers have to accommodate different input frequencies and voltage (ranging from as high as 50 Hz down to 16.7 Hz and rated up to 25 kV).
At much higher frequencies the transformer core size required drops dramatically: a physically small transformer can handle power levels that would require a massive iron core at mains frequency. The development of switching power semiconductor devices made switch-mode power supplies viable, to generate a high frequency, then change the voltage level with a small transformer.
Transformers for higher frequency applications such as SMPS typically use core materials with much lower hysteresis and eddy-current losses than those for 50/60 Hz. Primary examples are iron-powder and ferrite cores. The lower frequency-dependent losses of these cores often come at the expense of saturation flux density. For instance, ferrite saturation occurs at a substantially lower flux density than that of laminated iron.
Large power transformers are vulnerable to insulation failure due to transient voltages with high-frequency components, such as those caused by switching or by lightning.
Energy losses
Transformer energy losses are dominated by winding and core losses. Transformers' efficiency tends to improve with increasing transformer capacity. The efficiency of typical distribution transformers is between about 98 and 99 percent.
As transformer losses vary with load, it is often useful to tabulate no-load loss, full-load loss, half-load loss, and so on. Hysteresis and eddy current losses are constant at all load levels and dominate at no load, while winding loss increases as load increases. The no-load loss can be significant, so that even an idle transformer constitutes a drain on the electrical supply. Designing energy efficient transformers for lower loss requires a larger core, good-quality silicon steel, or even amorphous steel for the core and thicker wire, increasing initial cost. The choice of construction represents a trade-off between initial cost and operating cost.
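The load-dependence described above can be illustrated with a simple efficiency calculation in which the core (no-load) loss is constant and the winding (copper) loss scales with the square of the load; the ratings and loss figures below are illustrative assumptions, not data for any particular unit:

```python
def efficiency(load_fraction, s_rated_kva, pf, p_core_kw, p_cu_full_load_kw):
    """Transformer efficiency at a given per-unit load.

    Core (no-load) loss is treated as constant; winding (copper) loss scales
    with the square of the load fraction.
    """
    p_out = load_fraction * s_rated_kva * pf
    losses = p_core_kw + load_fraction**2 * p_cu_full_load_kw
    return p_out / (p_out + losses)

# 1000 kVA unit, 1.2 kW core loss, 8 kW full-load copper loss, 0.9 power factor (illustrative)
for x in (0.25, 0.5, 0.75, 1.0):
    print(f"{x:>4.2f} load: efficiency {efficiency(x, 1000, 0.9, 1.2, 8.0):.4f}")
```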
Transformer losses arise from:
Winding joule losses
Current flowing through a winding's conductor causes joule heating due to the resistance of the wire. As frequency increases, skin effect and proximity effect cause the winding's resistance and, hence, losses to increase.
Core losses
Hysteresis losses
Each time the magnetic field is reversed, a small amount of energy is lost due to hysteresis within the core, caused by motion of the magnetic domains within the steel. According to Steinmetz's formula, the heat energy due to hysteresis is given by

Wh ≈ η βmax^1.6

and hysteresis loss is thus given by

Ph ≈ Wh f ≈ η f βmax^1.6

where f is the frequency, η is the hysteresis coefficient and βmax is the maximum flux density, the empirical exponent of which varies from about 1.4 to 1.8 but is often given as 1.6 for iron. For more detailed analysis, see Magnetic core and Steinmetz's equation.
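A minimal sketch of Steinmetz's formula as given above; η and the resulting units depend entirely on the material data used, so the function and its inputs are illustrative only:

```python
def hysteresis_loss(frequency_hz, b_max, eta, exponent=1.6):
    """Steinmetz's empirical formula: P_h ≈ η · f · B_max^n, with n ≈ 1.4–1.8 (about 1.6 for iron).

    eta is the material's hysteresis coefficient; the units of the result follow
    the units in which eta is specified (illustrative only).
    """
    return eta * frequency_hz * b_max ** exponent

# Doubling the flux density at 50 Hz raises hysteresis loss by about 2**1.6 ≈ 3x
print(hysteresis_loss(50, 1.6, eta=0.01) / hysteresis_loss(50, 0.8, eta=0.01))
```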
Eddy current losses
Eddy currents are induced in the conductive metal transformer core by the changing magnetic field, and this current flowing through the resistance of the iron dissipates energy as heat in the core. The eddy current loss is a complex function of the square of supply frequency and inverse square of the material thickness. Eddy current losses can be reduced by making the core of a stack of laminations (thin plates) electrically insulated from each other, rather than a solid block; all transformers operating at low frequencies use laminated or similar cores.
Magnetostriction related transformer hum
Magnetic flux in a ferromagnetic material, such as the core, causes it to physically expand and contract slightly with each cycle of the magnetic field, an effect known as magnetostriction, the frictional energy of which produces an audible noise known as mains hum or "transformer hum". This transformer hum is especially objectionable in transformers supplied at power frequencies and in high-frequency flyback transformers associated with television CRTs.
Stray losses
Leakage inductance is by itself largely lossless, since energy supplied to its magnetic fields is returned to the supply with the next half-cycle. However, any leakage flux that intercepts nearby conductive materials such as the transformer's support structure will give rise to eddy currents and be converted to heat.
Radiative
There are also radiative losses due to the oscillating magnetic field but these are usually small.
Mechanical vibration and audible noise transmission
In addition to magnetostriction, the alternating magnetic field causes fluctuating forces between the primary and secondary windings. This energy incites vibration transmission in interconnected metalwork, thus amplifying audible transformer hum.
Construction
Cores
Closed-core transformers are constructed in 'core form' or 'shell form'. When windings surround the core, the transformer is core form; when windings are surrounded by the core, the transformer is shell form. Shell form design may be more prevalent than core form design for distribution transformer applications due to the relative ease in stacking the core around winding coils. Core form design tends to, as a general rule, be more economical, and therefore more prevalent, than shell form design for high voltage power transformer applications at the lower end of their voltage and power rating ranges (less than or equal to, nominally, 230 kV or 75 MVA). At higher voltage and power ratings, shell form transformers tend to be more prevalent. Shell form design tends to be preferred for extra-high voltage and higher MVA applications because, though more labor-intensive to manufacture, shell form transformers are characterized as having inherently better kVA-to-weight ratio, better short-circuit strength characteristics and higher immunity to transit damage.
Laminated steel cores
Transformers for use at power or audio frequencies typically have cores made of high permeability silicon steel. The steel has a permeability many times that of free space and the core thus serves to greatly reduce the magnetizing current and confine the flux to a path which closely couples the windings. Early transformer developers soon realized that cores constructed from solid iron resulted in prohibitive eddy current losses, and their designs mitigated this effect with cores consisting of bundles of insulated iron wires. Later designs constructed the core by stacking layers of thin steel laminations, a principle that has remained in use. Each lamination is insulated from its neighbors by a thin non-conducting layer of insulation. The transformer universal EMF equation can be used to calculate the core cross-sectional area for a preferred level of magnetic flux.
The effect of laminations is to confine eddy currents to highly elliptical paths that enclose little flux, and so reduce their magnitude. Thinner laminations reduce losses, but are more laborious and expensive to construct. Thin laminations are generally used in high-frequency transformers, with some types of very thin steel lamination able to operate at up to 10 kHz.
One common design of laminated core is made from interleaved stacks of E-shaped steel sheets capped with I-shaped pieces, leading to its name of E-I transformer. Such a design tends to exhibit more losses, but is very economical to manufacture. The cut-core or C-core type is made by winding a steel strip around a rectangular form and then bonding the layers together. It is then cut in two, forming two C shapes, and the core assembled by binding the two C halves together with a steel strap. They have the advantage that the flux is always oriented parallel to the metal grains, reducing reluctance.
A steel core's remanence means that it retains a static magnetic field when power is removed. When power is then reapplied, the residual field will cause a high inrush current until the effect of the remaining magnetism is reduced, usually after a few cycles of the applied AC waveform. Overcurrent protection devices such as fuses must be selected to allow this harmless inrush to pass.
On transformers connected to long, overhead power transmission lines, induced currents due to geomagnetic disturbances during solar storms can cause saturation of the core and operation of transformer protection devices.
Distribution transformers can achieve low no-load losses by using cores made with low-loss high-permeability silicon steel or amorphous (non-crystalline) metal alloy. The higher initial cost of the core material is offset over the life of the transformer by its lower losses at light load.
Solid cores
Powdered iron cores are used in circuits such as switch-mode power supplies that operate above mains frequencies and up to a few tens of kilohertz. These materials combine high magnetic permeability with high bulk electrical resistivity. For frequencies extending beyond the VHF band, cores made from non-conductive magnetic ceramic materials called ferrites are common. Some radio-frequency transformers also have movable cores (sometimes called 'slugs') which allow adjustment of the coupling coefficient (and bandwidth) of tuned radio-frequency circuits.
Toroidal cores
Toroidal transformers are built around a ring-shaped core, which, depending on operating frequency, is made from a long strip of silicon steel or permalloy wound into a coil, powdered iron, or ferrite. A strip construction ensures that the grain boundaries are optimally aligned, improving the transformer's efficiency by reducing the core's reluctance. The closed ring shape eliminates air gaps inherent in the construction of an E-I core. The cross-section of the ring is usually square or rectangular, but more expensive cores with circular cross-sections are also available. The primary and secondary coils are often wound concentrically to cover the entire surface of the core. This minimizes the length of wire needed and provides screening to minimize the core's magnetic field from generating electromagnetic interference.
Toroidal transformers are more efficient than the cheaper laminated E-I types for a similar power level. Other advantages compared to E-I types include smaller size (about half), lower weight (about half), less mechanical hum (making them superior in audio amplifiers), lower exterior magnetic field (about one tenth), low off-load losses (making them more efficient in standby circuits), single-bolt mounting, and greater choice of shapes. The main disadvantages are higher cost and limited power capacity (see Classification parameters below). Because of the lack of a residual gap in the magnetic path, toroidal transformers also tend to exhibit higher inrush current compared to laminated E-I types.
Ferrite toroidal cores are used at higher frequencies, typically from a few tens of kilohertz to hundreds of megahertz, to reduce losses, physical size, and weight of inductive components. A drawback of toroidal transformer construction is the higher labor cost of winding. This is because it is necessary to pass the entire length of a coil winding through the core aperture each time a single turn is added to the coil. As a consequence, toroidal transformers rated more than a few kVA are uncommon. Relatively few toroids are offered with power ratings above 10 kVA, and practically none above 25 kVA. Small distribution transformers may achieve some of the benefits of a toroidal core by splitting it and forcing it open, then inserting a bobbin containing primary and secondary windings.
Air cores
A transformer can be produced by placing the windings near each other, an arrangement termed an "air-core" transformer. An air-core transformer eliminates loss due to hysteresis in the core material. The magnetizing inductance is drastically reduced by the lack of a magnetic core, resulting in large magnetizing currents and losses if used at low frequencies. Air-core transformers are unsuitable for use in power distribution, but are frequently employed in radio-frequency applications. Air cores are also used for resonant transformers such as Tesla coils, where they can achieve reasonably low loss despite the low magnetizing inductance.
Windings
The electrical conductor used for the windings depends upon the application, but in all cases the individual turns must be electrically insulated from each other to ensure that the current travels throughout every turn. For small transformers, in which currents are low and the potential difference between adjacent turns is small, the coils are often wound from enamelled magnet wire. Larger power transformers may be wound with copper rectangular strip conductors insulated by oil-impregnated paper and blocks of pressboard.
High-frequency transformers operating in the tens to hundreds of kilohertz often have windings made of braided Litz wire to minimize the skin-effect and proximity effect losses. Large power transformers use multiple-stranded conductors as well, since even at low power frequencies non-uniform distribution of current would otherwise exist in high-current windings. Each strand is individually insulated, and the strands are arranged so that at certain points in the winding, or throughout the whole winding, each portion occupies different relative positions in the complete conductor. The transposition equalizes the current flowing in each strand of the conductor, and reduces eddy current losses in the winding itself. The stranded conductor is also more flexible than a solid conductor of similar size, aiding manufacture.
The windings of signal transformers minimize leakage inductance and stray capacitance to improve high-frequency response. Coils are split into sections, and those sections interleaved between the sections of the other winding.
Power-frequency transformers may have taps at intermediate points on the winding, usually on the higher voltage winding side, for voltage adjustment. Taps may be manually reconnected, or a manual or automatic switch may be provided for changing taps. Automatic on-load tap changers are used in electric power transmission or distribution, on equipment such as arc furnace transformers, or for automatic voltage regulators for sensitive loads. Audio-frequency transformers, used for the distribution of audio to public address loudspeakers, have taps to allow adjustment of impedance to each speaker. A center-tapped transformer is often used in the output stage of an audio power amplifier in a push-pull circuit. Modulation transformers in AM transmitters are very similar.
Cooling
It is a rule of thumb that the life expectancy of electrical insulation is halved for about every 7 °C to 10 °C increase in operating temperature (an instance of the application of the Arrhenius equation).
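A back-of-the-envelope sketch of that rule of thumb, assuming an 8 °C halving interval; the actual interval depends on the insulation class, so the function and example are illustrative only:

```python
def relative_insulation_life(delta_t_c, halving_interval_c=8.0):
    """Rule-of-thumb life multiplier: insulation life halves for roughly every
    7–10 °C rise above the rated operating temperature (8 °C assumed here)."""
    return 0.5 ** (delta_t_c / halving_interval_c)

print(relative_insulation_life(16))   # ≈ 0.25, i.e. about a quarter of rated life at +16 °C
```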
Small dry-type and liquid-immersed transformers are often self-cooled by natural convection and radiation heat dissipation. As power ratings increase, transformers are often cooled by forced-air cooling, forced-oil cooling, water-cooling, or combinations of these. Large transformers are filled with transformer oil that both cools and insulates the windings. Transformer oil is often a highly refined mineral oil that cools the windings and insulation by circulating within the transformer tank. The mineral oil and paper insulation system has been extensively studied and used for more than 100 years. It is estimated that 50% of power transformers will survive 50 years of use, that the average age of failure of power transformers is about 10 to 15 years, and that about 30% of power transformer failures are due to insulation and overloading failures. Prolonged operation at elevated temperature degrades insulating properties of winding insulation and dielectric coolant, which not only shortens transformer life but can ultimately lead to catastrophic transformer failure. With a great body of empirical study as a guide, transformer oil testing including dissolved gas analysis provides valuable maintenance information.
Building regulations in many jurisdictions require indoor liquid-filled transformers to either use dielectric fluids that are less flammable than oil, or be installed in fire-resistant rooms. Air-cooled dry transformers can be more economical where they eliminate the cost of a fire-resistant transformer room.
The tank of liquid-filled transformers often has radiators through which the liquid coolant circulates by natural convection or fins. Some large transformers employ electric fans for forced-air cooling, pumps for forced-liquid cooling, or have heat exchangers for water-cooling. An oil-immersed transformer may be equipped with a Buchholz relay, which, depending on severity of gas accumulation due to internal arcing, is used to either trigger an alarm or de-energize the transformer. Oil-immersed transformer installations usually include fire protection measures such as walls, oil containment, and fire-suppression sprinkler systems.
Polychlorinated biphenyls (PCBs) have properties that once favored their use as a dielectric coolant, though concerns over their environmental persistence led to a widespread ban on their use.
Today, non-toxic, stable silicone-based oils, or fluorinated hydrocarbons may be used where the expense of a fire-resistant liquid offsets additional building cost for a transformer vault. However, the long life span of transformers can mean that the potential for exposure can be high long after banning.
Some transformers are gas-insulated. Their windings are enclosed in sealed, pressurized tanks and often cooled by nitrogen or sulfur hexafluoride gas.
Experimental power transformers in the 500–1,000 kVA range have been built with liquid nitrogen or helium cooled superconducting windings, which eliminates winding losses without affecting core losses.
Insulation
Insulation must be provided between the individual turns of the windings, between the windings, between windings and core, and at the terminals of the winding.
Inter-turn insulation of small transformers may be a layer of insulating varnish on the wire. Layers of paper or polymer film may be inserted between layers of windings, and between primary and secondary windings. A transformer may be coated or dipped in a polymer resin to improve the strength of windings and protect them from moisture or corrosion. The resin may be impregnated into the winding insulation using combinations of vacuum and pressure during the coating process, eliminating all air voids in the winding. In the limit, the entire coil may be placed in a mold, and resin cast around it as a solid block, encapsulating the windings.
Large oil-filled power transformers use windings wrapped with insulating paper, which is impregnated with oil during assembly of the transformer. Oil-filled transformers use highly refined mineral oil to insulate and cool the windings and core.
Construction of oil-filled transformers requires that the insulation covering the windings be thoroughly dried of residual moisture before the oil is introduced. Drying may be done by circulating hot air around the core, by circulating externally heated transformer oil, or by vapor-phase drying (VPD) where an evaporated solvent transfers heat by condensation on the coil and core. For small transformers, resistance heating by injection of current into the windings is used.
Bushings
Larger transformers are provided with high-voltage insulated bushings made of polymers or porcelain. A large bushing can be a complex structure since it must provide careful control of the electric field gradient without letting the transformer leak oil.
Classification parameters
Transformers can be classified in many ways, such as the following:
Power rating: From a fraction of a volt-ampere (VA) to over a thousand MVA.
Duty of a transformer: Continuous, short-time, intermittent, periodic, varying.
Frequency range: Power-frequency, audio-frequency, or radio-frequency.
Voltage class: From a few volts to hundreds of kilovolts.
Cooling type: Dry or liquid-immersed; self-cooled, forced air-cooled, forced oil-cooled, water-cooled.
Application: power supply, impedance matching, output voltage and current stabilizer, pulse, circuit isolation, power distribution, rectifier, arc furnace, amplifier output, etc.
Basic magnetic form: Core form, shell form, concentric, sandwich.
Constant-potential transformer descriptor: Step-up, step-down, isolation.
General winding configuration: By IEC vector group, two-winding combinations of the phase designations delta, wye or star, and zigzag; autotransformer, Scott-T
Rectifier phase-shift winding configuration: 2-winding, 6-pulse; 3-winding, 12-pulse; …; n-winding, [n − 1]·6-pulse; polygon; etc.
Applications
Various specific electrical application designs require a variety of transformer types. Although they all share the basic characteristic transformer principles, they are customized in construction or electrical properties for certain installation requirements or circuit conditions.
In electric power transmission, transformers allow transmission of electric power at high voltages, which reduces the loss due to heating of the wires. This allows generating plants to be located economically at a distance from electrical consumers. All but a tiny fraction of the world's electrical power has passed through a series of transformers by the time it reaches the consumer.
In many electronic devices, a transformer is used to convert voltage from the distribution wiring to convenient values for the circuit requirements, either directly at the power line frequency or through a switch mode power supply.
Signal and audio transformers are used to couple stages of amplifiers and to match devices such as microphones and record players to the input of amplifiers. Audio transformers allowed telephone circuits to carry on a two-way conversation over a single pair of wires. A balun transformer converts a signal that is referenced to ground to a signal that has balanced voltages to ground, such as between external cables and internal circuits. Isolation transformers prevent leakage of current into the secondary circuit and are used in medical equipment and at construction sites. Resonant transformers are used for coupling between stages of radio receivers, or in high-voltage Tesla coils.
History
Discovery of induction
Electromagnetic induction, the principle of the operation of the transformer, was discovered independently by Michael Faraday in 1831 and Joseph Henry in 1832. Only Faraday furthered his experiments to the point of working out the equation describing the relationship between EMF and magnetic flux now known as Faraday's law of induction:

|E| = |dΦB/dt|

where |E| is the magnitude of the EMF in volts and ΦB is the magnetic flux through the circuit in webers.
Faraday performed early experiments on induction between coils of wire, including winding a pair of coils around an iron ring, thus creating the first toroidal closed-core transformer. However he only applied individual pulses of current to his transformer, and never discovered the relation between the turns ratio and EMF in the windings.
Induction coils
The first type of transformer to see wide use was the induction coil, invented by Irish-Catholic Rev. Nicholas Callan of Maynooth College, Ireland in 1836. He was one of the first researchers to realize the more turns the secondary winding has in relation to the primary winding, the larger the induced secondary EMF will be. Induction coils evolved from scientists' and inventors' efforts to get higher voltages from batteries. Since batteries produce direct current (DC) rather than AC, induction coils relied upon vibrating electrical contacts that regularly interrupted the current in the primary to create the flux changes necessary for induction. Between the 1830s and the 1870s, efforts to build better induction coils, mostly by trial and error, slowly revealed the basic principles of transformers.
First alternating current transformers
By the 1870s, efficient generators producing alternating current (AC) were available, and it was found AC could power an induction coil directly, without an interrupter.
In 1876, Russian engineer Pavel Yablochkov invented a lighting system based on a set of induction coils where the primary windings were connected to a source of AC. The secondary windings could be connected to several 'electric candles' (arc lamps) of his own design. The coils Yablochkov employed functioned essentially as transformers.
In 1878, the Ganz factory, Budapest, Hungary, began producing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria-Hungary. Their AC systems used arc and incandescent lamps, generators, and other equipment.
In 1882, Lucien Gaulard and John Dixon Gibbs first exhibited a device with an initially widely criticized laminated plate open iron core called a 'secondary generator' in London, then sold the idea to the Westinghouse company in the United States in 1886. They also exhibited the invention in Turin, Italy in 1884, where it was highly successful and adopted for an electric lighting system. Their open-core device used a fixed 1:1 ratio to supply a series circuit for the utilization load (lamps). However, the voltage of their system was controlled by moving the iron core in or out.
Early series circuit transformer distribution
Induction coils with open magnetic circuits are inefficient at transferring power to loads. Until about 1880, the paradigm for AC power transmission from a high voltage supply to a low voltage load was a series circuit. Open-core transformers with a ratio near 1:1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. The inherent flaw in this method was that turning off a single lamp (or other electric device) affected the voltage supplied to all others on the same circuit. Many adjustable transformer designs were introduced to compensate for this problematic characteristic of the series circuit, including those employing methods of adjusting the core or bypassing the magnetic flux around part of a coil.
Efficient, practical transformer designs did not appear until the 1880s, but within a decade, the transformer would be instrumental in the war of the currents, and in seeing AC distribution systems triumph over their DC counterparts, a position in which they have remained dominant ever since.
Closed-core transformers and parallel power distribution
In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three Hungarian engineers associated with the Ganz Works, had determined that open-core devices were impracticable, as they were incapable of reliably regulating voltage. The Ganz factory had also in the autumn of 1884 made delivery of the world's first five high-efficiency AC transformers, the first of these units having been shipped on September 16, 1884. This first unit had been manufactured to the following specifications: 1,400 W, 40 Hz, 120:72 V, 11.6:19.4 A, ratio 1.67:1, one-phase, shell form. In their joint 1885 patent applications for novel transformers (later called ZBD transformers), they described two designs with closed magnetic circuits where copper windings were either wound around an iron wire ring core or surrounded by an iron wire core. The two designs were the first application of the two basic transformer constructions in common use to this day, termed "core form" or "shell form" .
In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see Toroidal cores below). The new transformers were 3.4 times more efficient than the open-core bipolar devices of Gaulard and Gibbs. The ZBD patents included two other major interrelated innovations: one concerning the use of parallel connected, instead of series connected, utilization loads, the other concerning the ability to have high turns ratio transformers such that the supply network voltage could be much higher (initially 1,400 to 2,000 V) than the voltage of utilization loads (100 V initially preferred). When employed in parallel connected electric distribution systems, closed-core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces. Bláthy had suggested the use of closed cores, Zipernowsky had suggested the use of parallel shunt connections, and Déri had performed the experiments; In early 1885, the three engineers also eliminated the problem of eddy current losses with the invention of the lamination of electromagnetic cores.
Transformers today are designed on the principles discovered by the three engineers. They also popularized the word 'transformer' to describe a device for altering the EMF of an electric current although the term had already been in use by 1882. In 1886, the ZBD engineers designed, and the Ganz factory supplied electrical equipment for, the world's first power station that used AC generators to power a parallel connected common electrical network, the steam-powered Rome-Cerchi power plant.
Westinghouse improvements
Building on the advancement of AC technology in Europe, George Westinghouse founded Westinghouse Electric in Pittsburgh, Pennsylvania, on January 8, 1886. The new firm became active in developing alternating current (AC) electric infrastructure throughout the United States.
The Edison Electric Light Company held an option on the US rights for the ZBD transformers, requiring Westinghouse to pursue alternative designs on the same principles. George Westinghouse had bought Gaulard and Gibbs' patents for $50,000 in February 1886. He assigned to William Stanley the task of redesigning the Gaulard and Gibbs transformer for commercial use in the United States. Stanley's first patented design was for induction coils with single cores of soft iron and adjustable gaps to regulate the EMF present in the secondary winding. This design was first used commercially in the US in 1886, but Westinghouse was intent on improving the Stanley design to make it (unlike the ZBD type) easy and cheap to produce.
Westinghouse, Stanley and associates soon developed a core that was easier to manufacture, consisting of a stack of thin 'E‑shaped' iron plates insulated by thin sheets of paper or other insulating material. Pre-wound copper coils could then be slid into place, and straight iron plates laid in to create a closed magnetic circuit. Westinghouse obtained a patent for the new low-cost design in 1887.
Other early transformer designs
In 1889, Russian-born engineer Mikhail Dolivo-Dobrovolsky developed the first three-phase transformer at the Allgemeine Elektricitäts-Gesellschaft ('General Electricity Company') in Germany.
In 1891, Nikola Tesla invented the Tesla coil, an air-cored, dual-tuned resonant transformer for producing very high voltages at high frequency.
Audio frequency transformers ("repeating coils") were used by early experimenters in the development of the telephone.
| Technology | Components | null |
30919 | https://en.wikipedia.org/wiki/Triage | Triage | In medicine, triage (, ) is a process by which care providers such as medical professionals and those with first aid knowledge determine the order of priority for providing treatment to injured individuals and/or inform the rationing of limited supplies so that they go to those who can most benefit from it. Triage is usually relied upon when there are more injured individuals than available care providers (known as a mass casualty incident), or when there are more injured individuals than supplies to treat them.
The methodologies of triage vary by institution, locality, and country but have the same universal underlying concepts. In most cases, the triage process places the most injured and most able to be helped as the first priority, with the most terminally injured the last priority (except in the case of reverse triage). Triage systems vary dramatically based on a variety of factors, and can follow specific, measurable metrics, like trauma scoring systems, or can be based on the medical opinion of the provider. Triage is an imperfect practice, and can be largely subjective, especially when based on general opinion rather than a score. This is because triage needs to balance multiple and sometimes contradictory objectives simultaneously, most of them being fundamental to personhood: likelihood of death, efficacy of treatment, patients' remaining lifespan, ethics, and religion.
Etymology and origin
The term triage comes directly from French, where it means to pick or to sort, itself coming from the Old French verb trier, meaning to separate, sort, shift, or select, which in turn came from the late Latin tritare, to grind. Although the concept existed much earlier, at least as far back as the reign of Maximilian I, it was not until the 1800s that the Old French trier was used to describe the practice of triage. It was then that Baron Dominique-Jean Larrey, the Surgeon in Chief of Napoleon's Imperial Guard, laid the groundwork for what would eventually become modern triage by introducing the concept of "treat[ing] the wounded according to the observed gravity of their injuries and the urgency for medical care, regardless of their rank or nationality".
Concepts in triage
Simple triage
Simple triage is usually used in a scene of an accident or "mass-casualty incident" (MCI), in order to sort patients into those who need critical attention and immediate transport to a secondary or tertiary care facility to survive, those who require low-intensity care to survive, those who are uninjured, and those who are deceased or will be so imminently. In the United States, this most commonly takes the form of the START triage model, in Canada, the CTAS model, and in Australia the ATS model. Assessment often begins with asking anyone who can walk to walk to a designated area, labeling them the lowest priority, and assessing other patients from there. Upon completion of the initial assessment by the care provider, which is based on the so-called ABCDE approach, patients are generally labelled with their available information, including "patient’s name, gender, injuries, interventions, care-provider IDs, casualty triage score, and an easily visible overall triage category".
ABCDE Assessment
An ABCDE assessment (other variations include ABC, ABCD, ABCDEF, and many others, including those localized to languages other than English) is a rapid patient assessment designed to check bodily functions in order of importance.
Tags
A triage tag is a premade label placed on each patient that serves to accomplish several objectives:
identify the patient.
bear record of assessment findings.
identify the priority of the patient's need for medical treatment and transport from the emergency scene.
track the patients' progress through the triage process.
identify additional hazards such as contamination.
Triage tags take a variety of forms. Some countries use a nationally standardized triage tag, while in other countries commercially available triage tags are used, which vary by jurisdictional choice. In some cases, international organizations also have standardized tags, as is the case with NATO. The most commonly used commercial systems include the METTAG, the SMARTTAG, E/T LIGHT and the CRUCIFORM systems. More advanced tagging systems incorporate special markers to indicate whether or not patients have been contaminated by hazardous materials, and also tear off strips for tracking the movement of patients through the process.
Advanced triage
In advanced triage, those with advanced training, such as doctors, nurses and paramedics make further care determinations based on more in-depth assessments, and may make use of advanced diagnostics like CT scans. This can also be a form of secondary triage, where the evaluation occurs at a secondary location like a hospital, or after the arrival of more qualified care providers.
Reverse triage
There are three primary concepts referred to as reverse triage. The first is concerned with the discharge of patients from hospital, often to prepare for an incoming mass casualty event. The second concept of reverse triage is utilized for certain conditions such as lightning injuries, where those appearing to be dead may be treated ahead of other patients, as they can typically be resuscitated successfully. The third is the concept of treating the least injured, often to return them to functional capability. This approach originated in the military, where returning combatants to the theatre of war may lead to overall victory (and survivability).
Undertriage and overtriage
Undertriage is underestimating the severity of an illness or injury. An example of this would be categorizing a Priority 1 (Immediate) patient as a Priority 2 (Delayed) or Priority 3 (Minimal). The rate of undertriage generally varies by the location of the triage, with a 2014 review of triage practices in emergency rooms finding that in-hospital undertriaging occurred 34% of the time in the United States, while reviews of pre-hospital triage found undertriage rates of 14%.
Overtriage is overestimating the severity of an illness or injury. An example of this would be categorizing a Priority 3 (Minimal) patient as a Priority 2 (Delayed) or Priority 1 (Immediate). Acceptable overtriage rates have typically been up to 50% in an effort to avoid undertriage. Some studies suggest that overtriage is less likely to occur when triaging is performed by hospital medical teams rather than by paramedics or EMTs.
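The two rates can be expressed simply by comparing assigned categories with the categories later judged correct. A minimal sketch, using a hypothetical three-level priority scheme and made-up example data:

```python
# Hypothetical priority scheme: lower number means higher (more urgent) priority
PRIORITY = {"Immediate": 1, "Delayed": 2, "Minimal": 3}

def triage_error_rates(assigned, actual):
    """Return the fractions of patients undertriaged (assigned a lower priority
    than warranted) and overtriaged (assigned a higher priority than warranted)."""
    under = sum(PRIORITY[a] > PRIORITY[t] for a, t in zip(assigned, actual))
    over = sum(PRIORITY[a] < PRIORITY[t] for a, t in zip(assigned, actual))
    n = len(assigned)
    return under / n, over / n

# Illustrative data: four patients, one undertriaged and one overtriaged
print(triage_error_rates(["Delayed", "Immediate", "Minimal", "Immediate"],
                         ["Immediate", "Immediate", "Minimal", "Delayed"]))  # -> (0.25, 0.25)
```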
Telephone triage
In telephone triage, care providers like nurses assess symptoms and medical history, and make a care recommendation over the phone. A review of available literature found that these services provide accurate and safe information about 90% of the time.
Palliative care
In triage, palliative care takes on a wider applicability, as some conditions which may be survivable outside of extreme circumstances become unsurvivable due to the nature of a mass casualty incident. For these patients, as well as those who are deemed to be unsavable, palliative care can mean the difference between a painful death and a relatively peaceful one. During the COVID-19 pandemic, issues of palliative care in triage became more obvious as some countries were forced to deny care to large groups of individuals due to a lack of supplies and ventilators.
Evacuation
In the field, evacuation of all casualties is the ultimate goal, so that the site of the incident can ultimately be cleared, if necessary investigated, and eventually rendered safe. Additional considerations must be made to avoid overwhelming local resources, and in some extreme cases, this can mean evacuating some patients to other countries.
Alternative care facilities
Alternative care facilities are places that are set up for the care of large numbers of patients, or are places that could be so set up. Examples include schools, sports stadiums, and large camps that can be prepared and used for the care, feeding, and holding of large numbers of victims of a mass casualty or other type of event. Such improvised facilities are generally developed in cooperation with the local hospital, which sees them as a strategy for creating surge capacity. While hospitals remain the preferred destination for all patients, during a mass casualty event such improvised facilities may be required in order to divert low-acuity patients away from hospitals in order to prevent the hospitals becoming overwhelmed.
Pre-modern history of triage
The Edwin Smith Papyrus
The general concept was first described in a 17th-century BCE Egyptian document, the Edwin Smith Papyrus. Discovered in 1862, outside of modern-day Luxor, Egypt, the Edwin Smith Papyrus contains descriptions of the assessment and treatment of a multitude of medical conditions, and divides injuries into three categories:
"A medical condition I can heal"
"A medical condition I intend to fight with."
"A medical condition that cannot be healed."
The Holy Roman Empire
During the reign of Emperor Maximilian I, a wartime policy was implemented under which soldiers were prioritized over all others in hospitals, and the sickest soldiers received treatment first.
Modern history of triage
Napoleonic triage
Modern triage grew out of the work of Baron Dominique-Jean Larrey and Baron François Percy during the reign of Napoleon. Larrey in particular introduced the concept of a "flying ambulance" (flying in this case meaning rapidly moving), or in its native French, ambulance volante.
World War I
In 1914, Antoine Depage developed the five-tiered Ordre de Triage, a triage system that set specific benchmarks for evacuation and described staged evacuation. French and Belgian doctors began using these concepts to inform the treatment of casualties at aid stations behind the front. Those responsible for the removal of the wounded from a battlefield, or their care afterwards, would divide the victims into three categories:
Those who are likely to live, regardless of what care they receive;
Those who are unlikely to live, regardless of what care they receive;
Those for whom immediate care may make a positive difference in outcome.
From that delineation, aid workers would follow the Ordre de Triage:
First Order of Triage
In the first order of triage, the injured would be evacuated to clearing stations in the night, when darkness offered maximum protection from the German forces.
Second Order of Triage
Once at a casualty clearing station, wounds were dressed, and anyone requiring immediate surgical intervention was placed in a cart and brought immediately to an ambulance pickup area. If the wounded could wait, they would be evacuated by ambulance during the night.
Third Order of Triage
Ambulances, driven by YMCA- and American Red Cross-trained drivers, then removed the casualties to mobile surgical centers, called postes avancés des hôpitaux du front, or outposts of the frontline hospitals.
Fourth Order of Triage
At the mobile surgical hospitals, the most severe cases were treated, specifically those who were likely to die before reaching a permanent, better-equipped hospital. Anyone who could survive the trip was transported to a farther away, often coastal, hospital.
Fifth Order of Triage
Upon reaching a permanent hospital, casualties received appropriate care to treat all of their injuries.
World War II
By the onset of World War II, American and British forces had adopted and adapted triage, with other global powers doing the same. The increased availability of airplanes allowed rapid evacuation to a hospital outside of the warzone to become a part of the triage process. Although the basic practices remained the same as in World War I, with initial evacuation to an aid station, followed by transitions to higher levels of care, and eventual admission to a permanent hospital, more advanced care was provided at each stage, and the mindset of treating only what was absolutely necessary fell away. Although triage almost certainly occurred in the days after the atomic bombings of Hiroshima and Nagasaki, the pandemonium caused by the attacks meant that records of such action are non-existent until after the fifth day, and those that do exist are largely without historical use.
The Texas City disaster (1947)
In 1947, the Texas City Disaster occurred when the SS Grandcamp exploded in Texas City, Texas, killing 600 people and injuring thousands more. The entire fire department was killed in the blast, and what followed was a massive informal triage of the victims. Drug stores, clinics, and homes were opened as makeshift triage stations. As the city had no hospital, casualties had to be evacuated to area facilities, including those in Galveston and Houston, with at least one doctor relying on skills he had learned in World War II to inform care decisions.
The Korean War
The Korean War saw the advent of tiered triage, wherein care providers sorted people into categories defined ahead of time. These categories, immediate, delayed, minimal and expectant, are still the basis for most triage systems today. The period was also marked by improvements in medical understanding, including of shock, which allowed effective interventions to be administered earlier in the triage process, which in turn significantly improved outcomes. At the same time, Mobile Army Surgical Hospitals (MASH) were introduced, along with helicopters for evacuation. These helicopters, however, were used for evacuation only, and care was not provided in the air during the evacuation. These advances reduced fatalities for injured soldiers by up to 30%, and changed the nature of battlefield medicine significantly.
The Vietnam War
The conditions of the Vietnam War drove further development of the concepts created during the Korean War. Advances in helicopters allowed the introduction of the first helicopter medics, who were able to provide fluid resuscitation and other interventions mid-flight. As a result, the average time from injury to definitive care was less than two hours. This evolution also flowed into everyday life, with air ambulances emerging in the civilian world by the mid-1960s. The use of triage in emergency departments and ambulance services also quickly followed.
The World Trade Center bombing (1993)
In 1993, the north tower of the World Trade Center was bombed in a plot with a similar intended outcome to the later September 11th attacks. While the search, rescue and triage operations immediately following were ordinary, the attack itself represented one of the first terrorist attacks affecting the United States directly. The fact that the U.S. was no longer seen as untouchable, along with the later Oklahoma City bombing in 1995 and the September 11th attacks, led to long-term changes in triage practices, with greater focus on operational safety and the risk of secondary attacks designed to kill care providers.
Matsumoto sarin attack (1994)
In June 1994, emergency crews began responding to calls related to symptoms of toxic gas exposure in a neighborhood. Without proper personal protective equipment, more than 253 residents were evacuated and 50 were hospitalized. Twenty vehicles were called to the scene, and a mobile operating center was set up nearby, likely within the zone of contamination. Unaware of the presence of sarin, responders performed triage following the standard system of the time, which ultimately resulted in eight caregivers experiencing mild sarin poisoning and an unknown number of additional staff experiencing general malaise.
At the time, no decontamination procedures or gas masks were available for incidents involving contaminants. In response, the Japan Self-Defense Forces created a decontamination team, which was then instrumental to the response of the Tokyo subway sarin attack which occurred only seven months later.
Triage in the present day
As medical technology has advanced, so have modern approaches to triage, which are increasingly based on scientific models. The categorizations of the victims are frequently the result of triage scores based on specific physiological assessment findings. Some models, such as the START model may be algorithm-based. As triage concepts become more sophisticated, and to improve patient safety and quality of care, several human-in-the-loop decision-support tools have been designed on top of triage systems to standardize and automate the triage process (e.g., eCTAS, NHS 111) in both hospitals and the field. Moreover, the recent development of new machine learning methods offers the possibility to learn optimal triage policies from data and in time could replace or improve upon expert-crafted models.
Specific triage systems and methods
Most simply, the general purpose of triage is to sort patients by level of acuity to inform care decisions so that as many people as possible can be saved. Although a multitude of systems, color codes, codewords, and categories exist to help direct it, in all cases triage follows the same basic process. In all systems, patients are first assessed for injuries and then categorized based on the severity of those injuries. Although the number of categories differs from system to system, all have at least three in common: high severity, low severity, and deceased. Some systems involve features like scoring systems, such as the Revised Trauma Score, the Injury Severity Score, and the Trauma and Injury Severity Score, the last of which has been shown to be most effective at determining outcome.
Triage Systems by Methodology
S.T.A.R.T. model
S.T.A.R.T. (Simple Triage and Rapid Treatment) is a simple triage system that can be performed by lightly trained lay and emergency personnel in emergencies. It was developed at Hoag Hospital in Newport Beach, California for use by emergency services in 1983.
Triage separates the injured into four groups:
The expectant who are beyond help
The injured who can be helped by immediate transportation
The injured whose transport can be delayed
Those with minor injuries who need help less urgently
Triage also sets priorities for evacuation and transport as follows:
The deceased are left where they fell. These include those who are not breathing and for whom attempts to reposition their airway were unsuccessful.
Immediate or Priority 1 (red) evacuation to a medical facility as they need advanced medical care at once or within one hour. These people are in critical condition and would die without immediate assistance.
Delayed or Priority 2 (yellow) can have their medical evacuation delayed until all immediate people have been transported. These people are in stable condition but require medical assistance.
Minor or Priority 3 (green) are not evacuated until all immediate and delayed persons have been evacuated. These will not need advanced medical care for at least several hours. Continue to re-triage in case their condition worsens. These people are able to walk and may only need bandages and antiseptic.
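The decision flow just described lends itself to a simple rule-based sketch. The code below is a greatly simplified illustration of START-style logic, not a clinical tool; the field names, thresholds, and example casualty are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Casualty:
    can_walk: bool
    breathing: bool
    breathing_after_airway_opened: bool
    respiratory_rate: int
    has_radial_pulse: bool
    obeys_commands: bool

def start_triage(c: Casualty) -> str:
    """Greatly simplified sketch of the S.T.A.R.T. decision flow described above.
    Real-world use requires trained personnel; thresholds are illustrative."""
    if c.can_walk:
        return "Minor (green)"
    if not c.breathing:
        return "Immediate (red)" if c.breathing_after_airway_opened else "Expectant/Deceased (black)"
    if c.respiratory_rate > 30:
        return "Immediate (red)"
    if not c.has_radial_pulse:
        return "Immediate (red)"
    if not c.obeys_commands:
        return "Immediate (red)"
    return "Delayed (yellow)"

print(start_triage(Casualty(False, True, True, 24, True, True)))  # -> Delayed (yellow)
```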
JumpSTART triage
The JumpSTART pediatric triage MCI triage tool is a variation of the S.T.A.R.T. model. Both systems are used to sort patients into categories at mass casualty incidents (MCIs). However, JumpSTART was designed specifically for triaging children in disaster settings. Though JumpSTART was developed for use in children from infancy to age 8, where age is not immediately obvious, it is used in any patient who appears to be a child (patients who appear to be young adults are triaged using START).
Triage Systems by Country
Australia and New Zealand
In hospital settings, Australia and New Zealand rely on the Australasian Triage Scale (abbreviated ATS and formerly known as the National Triage Scale). The scale has been in use since 1994 and consists of 5 levels, with 1 being the most critical (resuscitation) and 5 being the least critical (nonurgent).
In field settings, various standardized triage systems are used, and there is no area wide standard.
Canada
In 1995, the CAEP Triage and Acuity Scale was launched in Canada, relying on a simplified version of the Australian National Triage Scale. This scale used three categories: emergent, urgent, and non-urgent. It was deprecated in 1999 with the introduction of the Canadian Triage and Acuity Scale (CTAS), which is used across the country to sort incoming patients. The system categorizes patients by both injury and physiological findings, and ranks them by severity from 1 to 5 (1 being highest). The model is not currently used for mass casualty triage; instead, the START protocol and METTAG triage tags are used.
France
In France, prehospital triage in case of a disaster uses a multi-tier scale:
Décédé (deceased), or urgence dépassée (beyond urgency)
Extrême urgence (extreme urgency): requiring care within a half hour.
Urgence absolue (absolute urgency): requiring care within an hour.
Urgence relative (relative urgency): requiring care, but not immediately.
Blessé léger (slightly injured)
Impliqué (involved, but not directly injured)
This triage is performed by a physician called médecin trieur (sorting medic).
Germany
The German triage system uses four color codes:
Hong Kong
In Hong Kong, triage in Accident & Emergency Departments is performed by experienced registered nurses, and patients are divided into five triage categories: Critical, Emergency, Urgent, Semi-urgent and Non-urgent. In mass casualty incidents, the START triage system is used.
Japan
In Japan, the triage system is mainly used by health professionals. The categories of triage, in corresponding color codes, are:
Red: Used for viable victims with potentially life-threatening conditions.
Yellow: Used for victims with non-life-threatening injuries, but who urgently require treatment.
Green: Used for victims with minor injuries that do not require ambulance transport.
Black: Used for victims who are dead, or whose injuries make survival unlikely.
Singapore
All public hospitals in Singapore use the Patient Acuity Category Scale (PACS) to triage patients in the Emergency Department. PACS is a symptom-based differential diagnosis approach that triages patients according to their presenting complaints and objective assessments such as vital signs and the Glasgow Coma Scale, allowing acute patients to be identified quickly for treatment. PACS classifies patients into four main categories: P1, P2, P3, and P4.
In mass casualty incidents, the START triage system is used.
Spain
In Spain, two triage models are most commonly found in hospitals around the country:
The Sistema Estructurado de Triaje (SET), an adaptation of the Model Andorrà de Triatge (MAT). The system uses 650 reasons for consultation grouped into 32 symptomatic categories, which, together with basic patient information and exploratory data, classify the emergency into 5 levels of urgency.
The "Manchester" system, based on the system of the same name in the UK, uses 51 reasons for consultation. Through a series of yes/no questions arranged in a flowchart, it classifies the emergency into 5 levels of severity.
In mass casualty incidents, the Modelo Extrahospitalario de Triaje Avanzado (META)/Advanced Triage Out of Hospital Model system is used. META is a seven-stage system, classifying patients as: Red 1st, Red 2nd, Red 3rd, Yellow 1st, Yellow 2nd, Green, and Deceased. The system aligns with the ABCDE framework.
United Kingdom
In April 2023, the NHS and ambulance services adopted two new triage tools to be used in major incidents, replacing the NASMeD Triage Sieve. These tools resulted from a multi-stakeholder review led by the NHS, but their implementation became more urgent after the Manchester Arena Inquiry made their adoption a monitored recommendation for the NHS and the National Ambulance Resilience Unit.
Ten Second Triage Tool
The Ten Second Triage Tool (TST) was introduced as a way for all emergency services, including the police and fire service, to assess and prioritise mass casualties and provide lifesaving intervention. The tool allows for rapid assessment by removing the need to measure physiological vital signs, focusing instead on what the emergency responder can see.
– Patients who have catastrophic bleeding, a penetrating injury or those who are unconscious
– Patients who are unable to walk but are conscious
– Patients who are able to walk
– Patients who are not breathing (this replaces the deceased category)
NHS Major Incident Triage Tool
The Major Incident Triage Tool (MITT) serves as the more advanced triage tool for emergency medical responders to triage casualties. The tool, derived from the Modified Physiological Triage Tool, can be used on both adults and children, and also includes the assessment of physiological vital signs.
– Life-threatening injury
– Unconscious but breathing
– Non-life-threatening injury
– No signs of life or non-survivable injury
United States
A multitude of triage systems exist in the United States, and there is no national standard. Among local, regional, state, and interstate systems, the START triage method is most commonly used.
United States Armed Forces
The U.S. armed forces use a four-stage system. In a battlefield situation, care providers rank casualties for precedence, treat those whom they can treat safely, and transport casualties who need it to a higher level of care, either a Forward Surgical Team or a Combat Support Hospital.
The triage categories (with corresponding color codes), in order of priority, are Immediate (red), Delayed (yellow), Minimal (green), and Expectant (black).
Limitations of current practices
Notions of mass casualty triage as an efficient rationing process that determines priority based upon injury severity are not supported by research, evaluation and testing of current triage practices, which lack scientific and methodological bases. START and START-like triage systems that use color-coded categories to prioritize provide poor assessments of injury severity and then leave it to providers to subjectively order and allocate resources within flawed categories. Some of these limitations include:
lacking the clear goal of maximizing the number of lives saved, as well as the focus, design and objective methodology to accomplish that goal (a protocol of taking the worst Immediate – lowest chances for survival – first can be statistically invalid and dangerous)
using trauma measures that are problematic (e.g., capillary refill) and grouping into broad color-coded categories that are not in accordance with injury severities, medical evidence and needs. Categories do not differentiate among injury severities and survival probabilities, and are invalid based on categorical definitions and evacuation priorities
ordering (prioritization) and allocating resources subjectively within Immediate and Delayed categories, which are neither reproducible nor scalable, with little chance of being optimal
not considering/addressing size of incident, resources, and injury severities and prioritization within its categories – e.g., protocol does not change whether 3, 30 or 3,000 casualties require its use, and regardless of available resources to be rationed
not considering differences in injury severities and survival probabilities between types of trauma (blunt versus penetrating, etc.) and ages
resulting in inconsistent tagging and prioritizing/ordering of casualties and substantial overtriage
Research indicates there are wide ranges and overlaps of survival probabilities of the Immediate and Delayed categories, and other START limitations. The same physiologic measures can have markedly different survival probabilities for blunt and penetrating injuries. For example, a START Delayed (second priority) can have a survival probability of 63% for blunt trauma and a survival probability of 32% for penetrating trauma with the same physiological measures – both with expected rapid deterioration, while a START Immediate (first priority) can have survival probabilities that extend to above 95% with expected slow deterioration. Age categories exacerbate this. For example, a geriatric patient with a penetrating injury in the Delayed category can have an 8% survival probability, and a pediatric patient in the Immediate category can have a 98% survival probability. Issues with the other START categories have also been noted. In this context, color-coded tagging accuracy metrics are not scientifically meaningful.
Poor assessments, invalid categories, no objective methodology and tools for prioritizing casualties and allocating resources, and a protocol of worst first triage provide some challenges for emergency and disaster preparedness and response. These are clear obstacles for efficient triage and resource rationing, for maximizing savings of lives, for best practices and National Incident Management System (NIMS) compatibilities, and for effective response planning and training.
Inefficient triage also provides challenges in containing health care costs and waste. Field triage is based upon the notion of up to 50% overtriage as being acceptable. There have been no cost-benefit analyses of the costs and mitigation of triage inefficiencies embedded in the healthcare system. Such analyses are often required for healthcare grants funded by taxpayers, and represent normal engineering and management science practice. These inefficiencies relate to the following cost areas:
tremendous investment in time and money since 9/11 to develop and improve responders' triage skills
cited benefits from standardization of triage methodology, reproducibility and interoperability, and NIMS compatibilities
avoided capital costs for taxpayer investment in additional EMS and trauma infrastructure
wasteful daily resource utilization and increased operating costs from acceptance of substantial levels of overtriage
prescribed values of a statistical life and estimated savings in human lives that could reasonably be expected using evidence-based triage practices
ongoing performance improvements that could reasonably be expected from a more objective optimization-based triage system and practices
Ethical considerations
Because treatment is intentionally delayed or withheld from individuals under this system, triage has ethical implications that complicate the decision-making process. Individuals involved in triage must take a comprehensive view of the process to ensure fidelity, veracity, justice, autonomy, and beneficence are safeguarded.
Ethical implications vary between different settings and the type of triage system employed, culminating in no single gold-standard approach to triage. Emergency departments are advised to preemptively plan strategies in attempts to mitigate the emotional burden on these triage responders. While doing so, standards of care must be maintained to preserve the safety of both patients and providers.
There is widespread agreement among ethicists that, in practice, during the COVID-19 pandemic triage should prioritize "those who have the best chance of surviving" and follow guidelines with strict criteria that consider both short-term and long-term survivability. Likewise, the triage of other health services has been adjusted during the pandemic to limit resource strain on hospitals.
Utilitarian approach and critique
Under the utilitarian model, triage works to maximize the survival outcomes of the most people possible. This approach implies that some individuals may likely suffer or die, in order for the majority to survive. Triage officers must allocate limited resources and weigh an individual's needs along with the needs of the population as a whole.
Some ethicists argue the utilitarian approach to triage is not an impartial mechanism, but rather a partial one that fails to address the social conditions that prevent optimal outcomes in marginalized communities, rendering it a practical but inadequate means of distributing health resources.
Special population groups
There is wide discussion regarding how VIPs and celebrities should be cared for in the emergency department. It is generally argued that giving special considerations or deviating from the standard medical protocol for VIPs or celebrities is unethical due to the cost to others. However, others argue that it may be morally justifiable as long as their treatment does not hinder the needs of others after assessing overall fairness, quality of care, privacy, and other ethical implications.
Proposed frameworks in conflict
A variety of logistical challenges complicate the triage and ultimate provision of care in conflict situations. Humanitarian actors acknowledge challenges like disruptions in food and medical supply chains, lack of suitable facilities, and the existence of policies that prohibit administration of care to certain communities and populations as elements that directly impede the successful delivery of care. The logistical realities of humanitarian emergencies and conflict situations threaten the bioethical principle of beneficence, the obligation to act for the benefit of others.
Technical challenges of triage in conflict settings
To address the ethical concerns that underpin triage in conflict situations and humanitarian crises, new triage frameworks and classification systems have been suggested that aim to uphold human rights. Scholars have argued that new frameworks must prioritize informed consent and rely on established medical criteria only in order to respect the human rights considerations set forth by the Geneva Convention of 1864 and the Universal Declaration of Human Rights, but no comprehensive triage model has been adopted by international bodies.
Veterinary triage
Emergency veterinarian Jessica Fragola wrote in 2022 about the ethics of animal triage. She said that pressures on veterinarians had been exacerbated by staffing shortages resulting from the COVID-19 pandemic, coupled with growth in spending on veterinary care and on pet insurance.
| Biology and health sciences | General concepts | Health |
30942 | https://en.wikipedia.org/wiki/Tobacco | Tobacco | Tobacco is the common name of several plants in the genus Nicotiana of the family Solanaceae, and the general term for any product prepared from the cured leaves of these plants. More than 70 species of tobacco are known, but the chief commercial crop is N. tabacum. The more potent variant N. rustica is also used in some countries.
Dried tobacco leaves are mainly used for smoking in cigarettes and cigars, as well as pipes and shishas. They can also be consumed as snuff, chewing tobacco, dipping tobacco, and snus.
Tobacco contains the highly addictive stimulant alkaloid nicotine as well as harmala alkaloids. Tobacco use is a cause or risk factor for many deadly diseases, especially those affecting the heart, liver, and lungs as well as many cancers. In 2008, the World Health Organization named tobacco use as the world's single greatest preventable cause of death.
Etymology
The English word 'tobacco' originates from the Spanish word tabaco. The precise origin of this word is disputed, but it is generally thought to have derived, at least in part, from Taíno, the Arawakan language of the Caribbean. In Taíno, it was said either to mean a roll of tobacco leaves (according to Bartolomé de las Casas, 1552) or to derive from tabago, a kind of L-shaped pipe used for sniffing tobacco smoke (according to Oviedo, with the leaves themselves being referred to as cohiba).
However, perhaps coincidentally, similar words in Spanish, Portuguese and Italian were used from 1410 for certain medicinal herbs. These probably derived from an Arabic word, reportedly dating to the ninth century, referring to various herbs.
History
Cultural significance
According to Iroquois mythology, tobacco first grew out of Earth Woman's head after she died giving birth to her twin sons, Sapling and Flint.
Traditional use
Tobacco has long been used in the Americas, with some cultivation sites in Mexico dating back to 1400–1000 BC. Many Native American tribes traditionally grow and use tobacco. Historically, people from the Northeast Woodlands cultures have carried tobacco in pouches as a readily accepted trade item. It was smoked both socially and ceremonially, such as to seal a peace treaty or trade agreement. In some Native cultures, tobacco is seen as a gift from the Creator, with the ceremonial tobacco smoke carrying one's thoughts and prayers to the Creator.
Some Native Americans consider tobacco to be a medicine and advocate for its respectful usage, rather than a commercial one.
Popularization
Following the arrival of the Europeans to the Americas, tobacco became increasingly popular as a trade item. Francisco Hernández de Toledo, Spanish chronicler of the Indies, was the first European to bring tobacco seeds to the Old World in 1559 following orders of King Philip II of Spain. These seeds were planted in the outskirts of Toledo, more specifically in an area known as "Los Cigarrales" named after the continuous plagues of cicadas (cigarras in Spanish). Before the development of the lighter Virginia and white burley strains of tobacco, the smoke was too harsh to be inhaled. Small quantities were smoked at a time, using a pipe like the midwakh or kiseru, or newly invented waterpipes such as the bong or the hookah (see thuốc lào for a modern continuance of this practice). Tobacco became so popular that the English colony of Jamestown used it as currency and began exporting it as a cash crop; tobacco is often credited as being the export that saved Virginia from ruin. While a lucrative product, the growing expansion of tobacco demand was intimately tied to the history of slavery in the Caribbean.
The alleged benefits of tobacco also contributed to its success. The astronomer Thomas Harriot, who accompanied Sir Richard Grenville on his 1585 expedition to Roanoke Island, thought that the plant "openeth all the pores and passages of the body" so that the bodies of the natives "are notably preserved in health, and know not many grievous diseases, wherewithal we in England are often times afflicted."
Production of tobacco for smoking, chewing, and snuffing became a major industry in Europe and its colonies by 1700.
Tobacco has been a major cash crop in Cuba and in other parts of the Caribbean since the 18th century. Cuban cigars are world-famous.
In the late 19th century, cigarettes became popular. James Bonsack invented a machine to automate cigarette production. This increase in production allowed tremendous growth in the tobacco industry until the health revelations of the late 20th century.
Contemporary
Following the scientific revelations of the mid-20th century, tobacco was condemned as a health hazard, and eventually became recognized as a cause of cancer, as well as other respiratory and circulatory diseases. In the United States, this led to the adoption of the 1998 Tobacco Master Settlement Agreement, which settled the many lawsuits by the U.S. states in exchange for a combination of yearly payments to the states and voluntary restrictions on advertising and marketing of tobacco products.
In the 1970s, Brown & Williamson cross-bred a strain of tobacco to produce Y1, a strain with an unusually high nicotine content, nearly doubling it from the usual 3.2–3.5% to 6.5%. In the 1990s, this prompted the Food and Drug Administration to allege that tobacco companies were intentionally manipulating the nicotine content of cigarettes.
The desire of many addicted smokers to quit has led to the development of tobacco cessation products.
In 2003, in response to growth of tobacco use in developing countries, the World Health Organization successfully rallied 168 countries to sign the Framework Convention on Tobacco Control. The convention is designed to push for effective legislation and enforcement in all countries to reduce the harmful effects of tobacco. Between 2019 and 2021, concerns about increased COVID-19 health risks due to tobacco consumption facilitated smoking reduction and cessation.
Biology
Nicotiana
Many species of tobacco are in the genus of herbs Nicotiana. It is part of the nightshade family (Solanaceae), indigenous to North and South America, Australia, southwest Africa, and the South Pacific.
Most nightshades contain varying amounts of nicotine, a powerful neurotoxin to insects. However, tobaccos tend to contain a much higher concentration of nicotine than the others. Unlike many other Solanaceae species, they do not contain tropane alkaloids, which are often poisonous to humans and other animals.
Despite containing enough nicotine and other compounds such as germacrene and anabasine and other piperidine alkaloids (varying between species) to deter most herbivores, a number of such animals have evolved the ability to feed on Nicotiana species without being harmed. Nonetheless, tobacco is unpalatable to many species due to its other attributes. For example, although the cabbage looper is a generalist pest, tobacco's gummosis and trichomes can harm early larvae survival. As a result, some tobacco plants (chiefly N. glauca) have become established as invasive weeds in some places.
Types
The types of tobacco include:
Aromatic fire-cured is cured by smoke from open fires. In the United States, it is grown in northern middle Tennessee, central Kentucky, and Virginia. Fire-cured tobacco grown in Kentucky and Tennessee is used in some chewing tobaccos, moist snuff, some cigarettes, and as a condiment in pipe tobacco blends. Another fire-cured tobacco is Latakia, which is produced from oriental varieties of N. tabacum. The leaves are cured and smoked over smoldering fires of local hardwoods and aromatic shrubs in Cyprus and Syria.
Brightleaf tobacco is commonly known as "Virginia tobacco", often regardless of the state where it is planted. Prior to the American Civil War, most tobacco grown in the US was fire-cured dark-leaf. Sometime after the War of 1812, demand for a milder, lighter, more aromatic tobacco arose. Ohio, Pennsylvania and Maryland all innovated with milder varieties of the tobacco plant. Farmers discovered that brightleaf tobacco needs thin, starved soil, and those who could not grow other crops found that they could grow tobacco. Confederate soldiers traded it with each other and Union soldiers, and developed quite a taste for it. At the end of the war, the soldiers went home and a national market had developed for the local crop.
Broadleaf, a dark tobacco varietal family popular for producing enormous, resilient, and thick wrapper leaves.
Burley tobacco is an air-cured tobacco used predominantly in cigarette production, but also in pipe tobacco as a balance to Virginias and other leaves high in sugar content. In the U.S., burley tobacco plants are started from pelletized seeds placed in polystyrene trays floated on a bed of fertilized water in March or April.
Cavendish is more a process of curing and a method of cutting tobacco than a type, but is used to thicken flavors from other tobaccos that might lack a body. The processing and the cut are used to bring out the natural sweet taste in the tobacco. Cavendish can be produced from any tobacco type but is usually one of, or a blend of, Kentucky, Virginia and burley and is most commonly used for pipe tobacco.
Criollo tobacco is primarily used in the making of cigars. It was by most accounts one of the original Cuban tobaccos that emerged around the time of Columbus.
Dokha is a tobacco originally grown in Iran, mixed with leaves, bark and herbs for smoking in a midwakh.
Perique was developed in 1824 through the technique of pressure-fermentation of local tobacco by a farmer, Pierre Chenet. Considered the truffle of pipe tobaccos, it is used as a component in many blended pipe tobaccos but is too strong to be smoked pure. At one time the freshly moist Perique was also chewed, but it is no longer sold for this purpose. It is typically blended with pure Virginia to lend spice, strength and coolness to the blend.
Shade tobacco is cultivated in Connecticut and Massachusetts. Early Connecticut colonists acquired from the Native Americans the habit of smoking tobacco in pipes, and began cultivating the plant commercially, though the Puritans referred to it as the "evil weed". The Connecticut shade industry has weathered some major catastrophes, including a devastating hailstorm in 1929 and an epidemic of brown spot fungus in 2000, and is in danger of disappearing altogether, given the increase in the value of land.
Turkish tobacco is a sun-cured, highly aromatic, small-leafed variety (Nicotiana tabacum) grown in Turkey, Greece, Bulgaria and North Macedonia. Originally grown in regions historically part of the Ottoman Empire, it is also known as ‘oriental’. Many of the early brands of cigarettes were made mostly or entirely of Turkish tobacco. Its main use evolved to be included in blends of pipe and especially cigarette tobacco. (A typical American cigarette is a blend of bright Virginia, burley and Turkish.)
White burley air-cured leaf was found to be milder than other types of tobacco. In 1865 George Webb of Brown County, Ohio, planted red burley seeds he had purchased and found a few of the seedlings had a whitish, sickly look, which became white burley.
Wild tobacco is native to the southwestern United States, Mexico and parts of South America. Its botanical name is Nicotiana rustica.
Parasites
Tobacco, alongside its related products, can be infested by parasites such as the Lasioderma serricorne (tobacco beetle) and the Ephestia elutella (tobacco moth), which are the most widespread and damaging parasites to the tobacco industry. Infestation can range from the tobacco cultivated in the fields to the leaves used for manufacturing cigars, cigarillos, cigarettes, etc. Both the larvae of Lasioderma serricorne and caterpillars of Ephestia elutella are considered pests.
Production
Cultivation
Tobacco is cultivated similarly to other agricultural products. Seeds were at first quickly scattered onto the soil. However, young plants came under increasing attack from flea beetles (Epitrix cucumeris or E. pubescens), which destroyed half of the tobacco crop in the United States in 1876. By 1890, successful experiments were conducted that placed the plant in a frame covered by thin cotton fabric. Modern tobacco seeds are sown in cold frames or hotbeds, as their germination is activated by light. In the United States, tobacco is often fertilized with the mineral apatite, which partially starves the plant of nitrogen, to produce a more desired flavor.
After the plants have grown sufficiently, they are transplanted into the fields. Farmers used to have to wait for rainy weather to plant. A hole is created in the tilled earth with a tobacco peg, either a curved wooden tool or a deer antler. After making two holes to the right and left, the planter would move forward two feet, select plants from his or her bag, and repeat. Various mechanical tobacco planters, such as the Bemis, New Idea Setter, and New Holland Transplanter, were invented in the late 19th and 20th centuries to automate the process: making the hole, watering it, and guiding the plant in, all in one motion.
Tobacco is cultivated annually, and can be harvested in several ways. In the oldest method, still used, the entire plant is harvested at once by cutting off the stalk at the ground with a tobacco knife; it is then speared onto sticks, four to six plants a stick, and hung in a curing barn. In the 19th century, bright tobacco began to be harvested by pulling individual leaves off the stalk as they ripened. The leaves ripen from the ground upwards, so a field of tobacco harvested in this manner entails the serial harvest of a number of "primings", beginning with the volado leaves near the ground, working to the seco leaves in the middle of the plant, and finishing with the potent ligero leaves at the top. Before harvesting, the crop must be topped when the pink flowers develop. Topping always refers to the removal of the tobacco flower before the leaves are systematically harvested. As the industrial revolution took hold, the harvesting wagons which were used to transport leaves were equipped with man-powered stringers, an apparatus that used twine to attach leaves to a pole. In modern times, large fields are harvested mechanically, although topping the flower and in some cases the plucking of immature leaves is still done by hand.
In the U.S., North Carolina and Kentucky are the leaders in tobacco production, followed by Tennessee, Virginia, Georgia, South Carolina and Pennsylvania.
Curing
Curing and subsequent aging allow for the slow oxidation and degradation of carotenoids in tobacco leaf. This produces certain compounds in the tobacco leaves and gives a sweet hay, tea, rose oil, or fruity aromatic flavor that contributes to the "smoothness" of the smoke. Starch is converted to sugar, which glycates protein, which is oxidized into advanced glycation endproducts (AGEs), a caramelization process that also adds flavor. Inhalation of these AGEs in tobacco smoke contributes to atherosclerosis and cancer. Levels of AGEs are dependent on the curing method used.
Tobacco can be cured through several methods, including:
Air-cured tobacco is hung in well-ventilated barns and allowed to dry over a period of four to eight weeks. Air-cured tobacco is low in sugar, which gives the tobacco smoke a light, mild flavor, and is high in nicotine. Cigar and burley tobaccos are 'dark' air-cured.
Fire-cured tobacco is hung in large barns where fires of hardwoods are kept on continuous or intermittent low smoulder; the curing takes between three days and ten weeks, depending on the process and the tobacco. Fire curing produces a tobacco low in sugar and high in nicotine. Pipe tobacco, chewing tobacco, and snuff are fire-cured.
Flue-cured tobacco was originally strung onto tobacco sticks, which were hung from tier poles in curing barns (Aus: kilns, also traditionally called 'oasts'). These barns have flues run from externally fed fire boxes, heat-curing the tobacco without exposing it to smoke, slowly raising the temperature over the course of the curing. The process generally takes about a week. This method produces cigarette tobacco that is high in sugar and has medium to high levels of nicotine. Most cigarettes incorporate flue-cured tobacco, which produces a milder, more inhalable smoke. It is estimated that one tree is cut down to flue-cure the tobacco for every 300 cigarettes, resulting in serious environmental consequences.
Sun-cured tobacco dries uncovered in the sun. This method is used in Turkey, Greece, and other Mediterranean countries to produce oriental tobacco. Sun-cured tobacco is low in sugar and nicotine and is used in cigarettes.
Some tobaccos go through a second stage of curing, known as fermenting or sweating. Cavendish undergoes fermentation pressed in a casing solution containing sugar and/or flavoring.
Global production
Trends
Production of tobacco leaf increased by 40% between 1971, when 4.2 million tons of leaf were produced, and 1997, when 5.9 million tons of leaf were produced. According to the Food and Agriculture Organization (FAO) of the United Nations, tobacco leaf production was expected to hit 7.1 million tons by 2010. This number is a bit lower than the record-high production of 1992, when 7.5 million tons of leaf were produced. The production growth was almost entirely due to increased productivity by developing nations, where production increased by 128%. During that same time, production in developed countries actually decreased. China's increase in tobacco production was the single biggest factor in the increase in world production. China's share of the world market increased from 17% in 1971 to 47% in 1997. This growth can be partially explained by the existence of a low import tariff on foreign tobacco entering China. While this tariff was reduced from 66% in 1999 to 10% in 2004, it has still led to local Chinese cigarettes being preferred over foreign cigarettes because of their lower cost.
Major producers
Every year, about 5.9 million tons of tobacco are produced throughout the world. The top producers of tobacco are China (36.3%), India (12.9%), Brazil (11.9%) and Zimbabwe (3.5%).
China
Around the peak of global tobacco production, 20 million rural Chinese households were producing tobacco on 2.1 million hectares of land. While it is the major crop for millions of Chinese farmers, growing tobacco is not as profitable as cotton or sugarcane, because the Chinese government sets the market price. While this price is guaranteed, it is lower than the natural market price, because of the lack of market risk. To further control tobacco within its borders, China founded the State Tobacco Monopoly Administration (STMA) in 1982. The STMA controls tobacco production, marketing, imports, and exports, and contributes 12% to the nation's national income. As noted above, despite the income generated for the state by profits from state-owned tobacco companies and the taxes paid by companies and retailers, China's government has acted to reduce tobacco use.
India
India's Tobacco Board is headquartered in Guntur in the state of Andhra Pradesh. India has 96,865 registered tobacco farmers and many more who are not registered. In 2010, 3,120 tobacco product manufacturing facilities were operating in all of India. Around 0.25% of India's cultivated land is used for tobacco production.
Since 1947, the Indian government has supported growth in the tobacco industry. India has seven tobacco research centers, located in Tamil Nadu, Andhra Pradesh, Punjab, Bihar, Mysore, and West Bengal which houses the core research institute.
Brazil
In Brazil, around 135,000 family farmers cite tobacco production as their main economic activity. Tobacco has never exceeded 0.7% of the country's total cultivated area. In the southern regions of Brazil, Virginia and Amarelinho flue-cured tobacco, as well as burley and Galpão Comum air-cured tobacco, are produced. These types of tobacco are used for cigarettes. In the northeast, darker, air- and sun-cured tobacco is grown. These types of tobacco are used for cigars, twists, and dark cigarettes. Brazil's government has made attempts to reduce the production of tobacco but has not had a successful systematic antitobacco farming initiative. Brazil's government, however, provides small loans for family farms, including those that grow tobacco, through the Programa Nacional de Fortalecimento da Agricultura Familiar.
Problems in production
Child labor
The International Labour Office reported that most child laborers work in agriculture, which is one of the most hazardous types of work. The tobacco industry employs some of these working children. Use of children is widespread on farms in Brazil, China, India, Indonesia, Malawi, and Zimbabwe. While some of these children work with their families on small, family-owned farms, others work on large plantations.
In late 2009, reports were released by the London-based human-rights group Plan International, claiming that child labor was common on Malawi (producer of 1.8% of the world's tobacco) tobacco farms. The organization interviewed 44 teens, who worked full-time on farms during the 2007–08 growing season. The child-laborers complained of low pay and long hours, as well as physical and sexual abuse by their supervisors. They also reported experiencing green tobacco sickness, a form of nicotine poisoning. When wet leaves are handled, nicotine from the leaves gets absorbed in the skin and causes nausea, vomiting, and dizziness. Children were exposed to levels of nicotine equivalent to smoking 50 cigarettes, just through direct contact with tobacco leaves. The effects of nicotine on human brain development in children can permanently alter brain structure and function.
Economy
Major tobacco companies have encouraged global tobacco production. Philip Morris, British American Tobacco, and Japan Tobacco each own or lease tobacco-manufacturing facilities in at least 50 countries and buy crude tobacco leaf from at least 12 more countries. This encouragement, along with government subsidies, has led to a glut in the tobacco market. This surplus has resulted in lower prices, which are devastating to small-scale tobacco farmers. According to the World Bank, between 1985 and 2000, the inflation-adjusted price of tobacco dropped 37%. Tobacco is the most widely smuggled legal product.
Environment
Tobacco production requires the use of large amounts of pesticides. Tobacco companies recommend up to 16 separate applications of pesticides just in the period between planting the seeds in greenhouses and transplanting the young plants to the field. Pesticide use has been worsened by the desire to produce larger crops in less time because of the decreasing market value of tobacco. Pesticides often harm tobacco farmers because they are unaware of the health effects and the proper safety protocol for working with pesticides. These pesticides, as well as fertilizers, end up in the soil, waterways, and the food chain. Coupled with child labor, pesticides pose an even greater threat. Early exposure to pesticides may increase a child's lifelong cancer risk, as well as harm their nervous and immune systems.
As with all crops, tobacco crops extract nutrients (such as phosphorus, nitrogen, and potassium) from soil, decreasing its fertility.
Furthermore, the wood used to cure tobacco in some places leads to deforestation. While some big tobacco producers such as China and the United States have access to petroleum, coal, and natural gas, which can be used as alternatives to wood, most developing countries still rely on wood in the curing process. Brazil alone uses the wood of 60 million trees per year for curing, packaging, and rolling cigarettes.
In 2017 WHO released a study on the environmental effects of tobacco.
Research
Several tobacco plants have been used as model organisms in genetics. Tobacco BY-2 cells, derived from N. tabacum cultivar 'Bright Yellow-2', are among the most important research tools in plant cytology. Tobacco has played a pioneering role in callus culture research and the elucidation of the mechanism by which kinetin works, laying the groundwork for modern agricultural biotechnology. The first genetically modified plant was produced in 1982, using Agrobacterium tumefaciens to create an antibiotic-resistant tobacco plant. This research laid the groundwork for all genetically modified crops.
Genetic modification
Because of its importance as a research tool, transgenic tobacco was the first genetically modified (GM) crop to be tested in field trials, in the United States and France in 1986; China became the first country in the world to approve commercial planting of a GM crop in 1993, which was tobacco.
Field trials
Many varieties of transgenic tobacco have been intensively tested in field trials. Agronomic traits such as resistance to pathogens (viruses, particularly the tobacco mosaic virus (TMV); fungi; bacteria; and nematodes); weed management via herbicide tolerance; resistance to insect pests; resistance to drought and cold; production of useful products such as pharmaceuticals; and use of GM plants for bioremediation have all been tested in over 400 field trials using tobacco.
Production
Currently, only the US is producing GM tobacco. The Chinese virus-resistant tobacco was withdrawn from the market in China in 1997. From 2002 to 2010, cigarettes made with GM tobacco with reduced nicotine content were available in the US under the market name Quest.
Consumption
Tobacco is consumed in many forms and through a number of different methods. Some examples are:
Enema
Tobacco smoke enemas were employed by the indigenous peoples of North America to stimulate respiration, injecting the smoke with a rectal tube. Later, in the 18th century, Europeans emulated the Americans. Tobacco resuscitation kits consisting of a pair of bellows and a tube were provided by the Royal Humane Society of London and placed at various points along the Thames.
Nasal administration
Snuff is a ground smokeless tobacco product, inhaled or ‘snuffed’ through the nose. If referring specifically to the orally consumed moist snuff, see dipping tobacco.
Smoked
Beedi (also known as bidis or biris) are thin, often flavoured cigarettes from India made of tobacco wrapped in a tendu leaf, and secured with coloured thread at one end.
Cigarettes are a product consumed through inhalation of smoke and manufactured from cured and finely cut tobacco leaves and reconstituted tobacco, often combined with other additives, then rolled into a paper cylinder.
Cigars are tightly rolled bundles of dried and fermented tobacco, which are ignited so their smoke may be drawn into the smokers' mouths.
Dokha is a Middle Eastern tobacco with high nicotine levels grown in parts of Oman and Hatta, which is smoked through a thin pipe called a medwakh. It is a dried and ground form of tobacco that contains few or no additives, other than spices, fruits, or flowers added to enhance smell and flavor.
Heat-not-burn products heat rather than burn tobacco to generate an aerosol that contains nicotine.
Hookah is a single- or multistemmed (often glass-based) water pipe for smoking. Hookahs were first used in India and Persia; the hookah has gained immense popularity, especially in the Middle East. A hookah operates by water filtration and indirect heat. It can be used for smoking herbal fruits or moassel, a mixture of tobacco, flavouring, and honey or glycerin.
Roll-your-own, often called 'rollies' or 'roll-ups', are relatively popular in some European countries. These are prepared from loose tobacco, cigarette papers, and filters all bought separately. They are usually cheaper to make.
Tobacco pipes typically consist of a small chamber (the bowl) for the combustion of the tobacco to be smoked and a thin stem (shank) that ends in a mouthpiece (the bit). Shredded pieces of tobacco are placed in the chamber and ignited.
In the mouth
Tobacco used in the mouth (buccal (sublabial), sublingual):
Chewing tobacco is the oldest way of consuming tobacco leaves. It is consumed orally, in two forms: through sweetened strands ("chew" or "chaw"), or in a shredded form ("dip"). When consuming the long, sweetened strands, the tobacco is lightly chewed and compacted into a ball. When consuming the shredded tobacco, small amounts are placed inside the bottom lip, between the gum and the teeth, where it is gently compacted, thus it is often called dipping tobacco. Both methods stimulate the salivary glands, which led to the development of the spittoon.
Creamy snuff is tobacco paste, consisting of tobacco, clove oil, glycerin, spearmint, menthol, and camphor, and sold in a toothpaste tube. It is marketed mainly to women in India and is known by the brand names Ipco (made by Asha Industries), Denobac, Tona, and Ganesh. It is locally known as mishri in some parts of Maharashtra.
Dipping tobaccos are a form of smokeless tobacco. Dip is occasionally referred to as "chew", and because of this it is commonly confused with chewing tobacco, which encompasses a wider range of products. A small clump of dip is 'pinched' out of the tin and placed between the lower or upper lip and gums. Some brands, as with snus, are portioned in small, porous pouches for less mess.
Gutka is a preparation of crushed betel nut, tobacco, and sweet or savory flavorings. It is manufactured in India and exported to a few other countries. A mild stimulant, it is sold across India in small, individual-sized packets.
Kreteks are cigarettes made with a complex blend of tobacco, cloves, and a flavoring "sauce". They were first introduced in the 1880s in Kudus, Java, to deliver the medicinal eugenol of cloves to the lungs.
Pituri, a nicotine-containing substance traditionally made from Australian tobacco plants, used by Indigenous Australians for chewing and placed between the lower or upper lip and gums.
Snus is a steam-pasteurized moist powdered tobacco product that is not fermented and induces minimal salivation. It is consumed by placing it (loose or in little pouches) against the upper gums for an extended period of time. It is somewhat similar to dipping tobacco but does not require spitting and is significantly lower in TSNAs.
Tobacco chewing gum is a gum containing nicotine or tobacco that is designed to be chewed.
Tobacco edibles, often in the form of an infusion or a spice, have gained popularity in recent years.
Tobacco water is a traditional organic insecticide used in domestic gardening. Tobacco dust can be used similarly. It is produced by boiling strong tobacco in water, or by steeping the tobacco in water for a longer period. When cooled, the mixture can be applied as a spray, or painted onto the leaves of garden plants, where it kills insects. Tobacco is, however, banned from use as a pesticide in certified organic production by the USDA's National Organic Program.
Topical
Topical tobacco paste is sometimes used as a treatment for wasp, hornet, fire ant, scorpion, and bee stings. An amount equivalent to the contents of a cigarette is mashed in a cup with about a half a teaspoon of water to make a paste that is then applied to the affected area.
Influence
Social
Smoking in public was, for a long time, reserved for men, and smoking by women was sometimes associated with promiscuity; in Japan, during the Edo period, prostitutes and their clients often approached one another under the guise of offering a smoke. The same was true in 19th-century Europe.
Following the American Civil War, the use of tobacco, primarily in cigars, became associated with masculinity and power. Modern tobacco use has often been stigmatized; this has spawned quitting associations and antismoking campaigns. Bhutan is the only country in the world where tobacco sales are illegal. Due to its propensity for causing detumescence and erectile dysfunction, some studies have described tobacco as an anaphrodisiacal substance.
Religion
Christianity
In Christian denominations of the conservative holiness movement, such as the Allegheny Wesleyan Methodist Connection and the Evangelical Wesleyan Church, the use of tobacco and other drugs is prohibited, as stated in ¶42 of the 2014 Book of Discipline of the Allegheny Wesleyan Methodist Connection.
Members of the Church of Jesus Christ of Latter-day Saints (popularly known as Mormons) adhere to the Word of Wisdom, a religious health code that is interpreted as prohibiting the consumption of tobacco as well as alcohol, coffee, and tea.
Islam
Most Islamic scholars have condemned tobacco due to its harmful effects on health. The earliest fatwa (religious opinion) against tobacco use dates from 1602. Most major Islamic sects prohibit its use. While tobacco is not mentioned in the Quran, the Quran does instruct Muslims to live healthy lives.
Sikhism
Sikhism, a Dharmic religion from India, considers tobacco consumption as a taboo and very bad for health and spirituality. Initiated Sikhs are never to consume tobacco in any form.
Demographic
Research on tobacco use is limited mainly to smoking, which has been studied more extensively than any other form of consumption. An estimated 1.1 billion people, and up to one-third of the adult population, use tobacco in some form. Smoking is more prevalent among men (however, the gender gap declines with age), the poor, and in transitional or developing countries. A study published in Morbidity and Mortality Weekly Report found that in 2019 approximately one in four youths (23.0%) in the U.S. had used a tobacco product during the past 30 days. This represented approximately three in 10 high school students (31.2%) and approximately one in eight middle school students (12.5%).
Rates of smoking continue to rise in developing countries, but have leveled off or declined in developed countries. Smoking rates in the United States have dropped by half from 1965 to 2006, falling from 42% to 20.8% in adults. In the developing world, tobacco consumption is rising by 3.4% per year.
Health effects
Chemicals
Tobacco smoking harms health because of the toxic chemicals in tobacco smoke, including carbon monoxide, cyanide, and carcinogens, which have been proven to cause heart and lung diseases and cancer.
Thousands of different substances in cigarette smoke, including polycyclic aromatic hydrocarbons (such as benzopyrene), formaldehyde, cadmium, nickel, arsenic, tobacco-specific nitrosamines, and phenols contribute to the harmful effects of smoking.
According to the World Health Organization, tobacco is the single greatest cause of preventable death globally. WHO estimates that tobacco caused 5.4 million deaths in 2004 and 100 million deaths over the course of the 20th century. Similarly, the United States Centers for Disease Control and Prevention describe tobacco use as "the single most important preventable risk to human health in developed countries and an important cause of premature death worldwide." Due to these health consequences, it is estimated that a 10 hectare (approximately 24.7 acre) field of tobacco used for cigarettes causes 30 deaths per year – 10 from lung cancer and 20 from cigarette-induced diseases like cardiac arrest, gangrene, bladder cancer, mouth cancer, etc.
The harms caused by inhaling tobacco smoke include diseases of the heart and lungs, with smoking being a major risk factor for heart attacks, strokes, chronic obstructive pulmonary disease (emphysema), and cancer (particularly cancers of the lungs, larynx, mouth, and pancreas). Cancer is caused by inhaling carcinogenic substances in tobacco smoke.
Inhaling secondhand tobacco smoke (which has been exhaled by a smoker) can cause lung cancer in nonsmoking adults. In the United States, about 3,000 adults die each year due to lung cancer from secondhand smoke exposure. Heart disease caused by secondhand smoke kills around 46,000 nonsmokers every year.
In children, exposure to secondhand tobacco smoke is associated with a higher incidence and severity of respiratory illnesses, middle ear disease, and asthma attacks. Each year in the United States, secondhand smoke exposure causes 24,500 infants to be born with low birthweight, 71,900 preterm births, 202,300 episodes of asthma, and 790,000 health care visits for ear infections.
The addictive alkaloid nicotine is a stimulant, and popularly known as the most characteristic constituent of tobacco. In drug effect preference questionnaires, a rough indicator of addictive potential, nicotine scores almost as highly as opioids. Users typically develop tolerance and dependence. Nicotine is known to produce conditioned place preference, a sign of psychological reinforcement value. In one medical study, tobacco's overall harm to users and others was determined to be three percent below cocaine and 13 percent above amphetamines, ranking it the sixth most harmful of the 20 drugs assessed.
Tobacco also contains 2,3,6-trimethyl-1,4-naphthoquinone (sometimes called 2,3,6-TQ and TMN), which is a reversible monoamine oxidase inhibitor of types A and B with a binding affinity somewhat similar to that of clorgyline and deprenyl. It is a stronger dopamine releasing agent than nicotine and inhibits dopamine metabolism through its MAOI activity. Tobacco also contains harmine and norharmine, which are reversible MAO-A inhibitors. The MAO-inhibiting activity of tobacco alkaloids has been thought to play a role in the addictive qualities of tobacco.
Radioactivity
Polonium-210 is a radioactive trace contaminant of tobacco, providing additional explanation for the link between smoking and bronchial cancer.
The radioactive particles build up over time in the lungs, and a UCLA study estimated that the radiation from 25 years of smoking would cause over 120 deaths per thousand smokers.
Economic
Tobacco makes a significant economic contribution. The global tobacco market in 2010 was estimated at US$760 billion, excluding China. The global revenues from tobacco taxes in 2013–2014 was approximately $269 billion.
In China, cigarette manufacturing is one of the few profitable state-owned industries. For example, in 1998 the 1,429 state-owned enterprises in Yunnan province had revenue of Renminbi (RMB) 69.1 billion (US$8.3 billion), while 8 cigarette manufacturing plants alone accounted for about 53 percent (or RMB 36.2 billion) of total provincial industry sales. The Chinese government also collects tax on tobacco products. Tax revenues from cigarettes increased from 740 to 842 billion Chinese yuan between 2014 and 2016. This generated an additional 101 billion Chinese yuan in tax revenues for the government.
In India, tobacco generates approximately 20 billion Indian rupees (US$0.45 billion) of income per annum as a result of employment, income and government revenue.
Statista estimates that in the U.S. alone, the tobacco industry has a market of US$121 billion, despite the fact that the CDC reports that US smoking rates are declining steadily. In terms of health expenditures, cigarette smoking contributed to more than $225 billion (or 11.7%) of annual healthcare spending in the U.S. in 2014. Smoking-attributable healthcare spending increased more than 30% for Medicaid between 2010 and 2014.
In the US, the decline in the number of smokers, the end of the Tobacco Transition Payment Program in 2014, and competition from growers in other countries, made tobacco farming economics more challenging.
Of the 1.22 billion smokers worldwide, 1 billion of them live in developing or transitional economies, and much of the disease burden and premature mortality attributable to tobacco use disproportionately affect the poor. While smoking prevalence has declined in many developed countries, it remains high in others, and is increasing among women and in developing countries. Between one-fifth and two-thirds of men in most populations smoke. Women's smoking rates vary more widely but rarely equal male rates.
Tobacco users must also spend a significant amount of money on cigarettes to maintain regular use, as tobacco products are often heavily taxed by governments. For example, a pack a day smoker in the state of New York would have to spend around $4,690.25 a year on cigarettes alone.
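For scale, that annual figure is consistent with a per-pack price of roughly $12.85, since 365 packs × $12.85 ≈ $4,690; the per-pack price here is a back-of-the-envelope assumption used only to illustrate the arithmetic.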
In Indonesia, the lowest income group spends 15% of its total expenditures on tobacco. In Egypt, more than 10% of low-income household expenditure is on tobacco. The poorest 20% of households in Mexico spend 11% of their income on tobacco.
Advertising
The tobacco industry advertises its products through a variety of media, including sponsorship, particularly of sporting events. Because of the health risks of these products, this is now one of the most highly regulated forms of marketing. Some or all forms of tobacco advertising are banned in many countries.
Legality
| Biology and health sciences | Drugs and pharmacology | null |
30958 | https://en.wikipedia.org/wiki/Time-sharing | Time-sharing | In computing, time-sharing is the concurrent sharing of a computing resource among many tasks or users by giving each task or user a small slice of processing time. This quick switch between tasks or users gives the illusion of simultaneous execution. It enables multi-tasking by a single user or enables multiple-user sessions.
Developed during the 1960s, its emergence as the prominent model of computing in the 1970s represented a major technological shift in the history of computing. By allowing many users to interact concurrently with a single computer, time-sharing dramatically lowered the cost of providing computing capability, made it possible for individuals and organizations to use a computer without owning one, and promoted the interactive use of computers and the development of new interactive applications.
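As a rough illustration of the time-slicing idea described above, the following Python sketch simulates a scheduler handing out fixed quanta to several tasks in round-robin order. It is a toy model rather than a description of any historical system; the task list, quantum size, and function names are assumptions made for the example.

from collections import deque

def run_time_shared(tasks, quantum=1):
    # Simulate round-robin time-sharing.
    # tasks: dict mapping a task name to the units of work it still needs.
    # Each task runs for at most `quantum` units before the processor
    # switches to the next task, giving the illusion of simultaneous execution.
    queue = deque(tasks.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        used = min(quantum, remaining)
        timeline.append((name, used))
        remaining -= used
        if remaining > 0:
            queue.append((name, remaining))  # unfinished tasks rejoin the queue
    return timeline

# Example: three users' jobs interleaved one unit of work at a time.
print(run_time_shared({"alice": 3, "bob": 1, "carol": 2}))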
History
Batch processing
The earliest computers were extremely expensive devices, and very slow in comparison to later models. Machines were typically dedicated to a particular set of tasks and operated by control panels, the operator manually entering small programs via switches in order to load and run a series of programs. These programs might take hours to run. As computers grew in speed, run times dropped, and soon the time taken to start up the next program became a concern. Newer batch processing software and methodologies, including batch operating systems such as IBSYS (1960), decreased these "dead periods" by queuing up programs ready to run.
Comparatively inexpensive card punch or paper tape writers were used by programmers to write their programs "offline". Programs were submitted to the operations team, which scheduled them to be run. Output (generally printed) was returned to the programmer. The complete process might take days, during which time the programmer might never see the computer. Stanford students made a short film humorously critiquing this situation.
The alternative of allowing the user to operate the computer directly was generally far too expensive to consider. This was because users might have long periods of entering code while the computer remained idle. This situation limited interactive development to those organizations that could afford to waste computing cycles: large universities for the most part.
Time-sharing
The concept is claimed to have been first described by Robert Dodds in a letter he wrote in 1949 although he did not use the term time-sharing. Later John Backus also described the concept, but did not use the term, in the 1954 summer session at MIT. Bob Bemer used the term time-sharing in his 1957 article "How to consider a computer" in Automatic Control Magazine and it was reported the same year he used the term time-sharing in a presentation. In a paper published in December 1958, W. F. Bauer wrote that "The computers would handle a number of problems concurrently. Organizations would have input-output equipment installed on their own premises and would buy time on the computer much the same way that the average household buys power and water from utility companies."
Christopher Strachey, who became Oxford University's first professor of computation, filed a patent application in the United Kingdom for "time-sharing" in February 1959. He gave a paper "Time Sharing in Large Fast Computers" at the first UNESCO Information Processing Conference in Paris in June that year, where he passed the concept on to J. C. R. Licklider. This paper was credited by the MIT Computation Center in 1963 as "the first paper on time-shared computers".
The meaning of the term time-sharing has shifted from its original usage. From 1949 to 1960, time-sharing was used to refer to multiprogramming without multiple user sessions. Later, it came to mean sharing a computer interactively among multiple users. In 1984 Christopher Strachey wrote he considered the change in the meaning of the term time-sharing a source of confusion and not what he meant when he wrote his paper in 1959.
There are also examples of systems which provided multiple user consoles, but only for specific applications; they were not general-purpose systems. These include SAGE (1958), SABRE (1960) and PLATO II (1961), created by Donald Bitzer at a public demonstration at Robert Allerton Park near the University of Illinois in early 1961. Bitzer has long said that the PLATO project would have gotten the patent on time-sharing if only the University of Illinois had not lost the patent for two years.
The first interactive, general-purpose time-sharing system usable for software development, the Compatible Time-Sharing System (CTSS), was initiated by John McCarthy at MIT, who wrote a memo on the subject in 1959. Fernando J. Corbató led the development of the system, a prototype of which had been produced and tested by November 1961. Philip M. Morse arranged for IBM to provide a series of its mainframe computers, starting with the IBM 704 and then the IBM 709 product line (IBM 7090 and IBM 7094). IBM loaned these mainframes to MIT at no cost, along with the staff to operate them, and also provided hardware modifications, mostly in the form of RPQs, as prior customers had already commissioned the modifications. There were certain stipulations that governed MIT's use of the loaned IBM hardware: MIT could not charge for use of CTSS, and MIT could only use the IBM computers for eight hours a day; another eight hours were available for other colleges and universities, and IBM could use its computers for the remaining eight hours, although there were some exceptions. In 1963 a second deployment of CTSS was installed on an IBM 7094 that MIT had purchased using ARPA money. This was used to support Multics development at Project MAC.
JOSS began time-sharing service in January 1964. Dartmouth Time-Sharing System (DTSS) began service in March 1964.
Development
Throughout the late 1960s and the 1970s, computer terminals were multiplexed onto large institutional mainframe computers (centralized computing systems), which in many implementations sequentially polled the terminals to see whether any additional data was available or action was requested by the computer user. Later interconnection technologies were interrupt-driven, and some of these used parallel data transfer technologies such as the IEEE 488 standard. Generally, computer terminals were used on college properties in much the same places as desktop computers or personal computers are found today. In the earliest days of personal computers, many were in fact used as particularly smart terminals for time-sharing systems.
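To make the polling approach concrete, the following is a minimal sketch (in Python, with a hypothetical Terminal class; it does not describe any specific historical system) of a front-end loop that repeatedly asks each terminal whether input is waiting, in contrast to an interrupt-driven design in which the hardware notifies the processor only when data actually arrives.

```python
# Minimal sketch of sequential terminal polling (hypothetical, for illustration only).
# A real front-end of the era would do this in hardware or low-level code.
import time

class Terminal:
    def __init__(self, name):
        self.name = name
        self.buffer = []            # characters typed but not yet collected

    def has_input(self):
        return bool(self.buffer)

    def read_char(self):
        return self.buffer.pop(0)

def poll_loop(terminals, cycles=3):
    """Visit every terminal in turn, draining any pending characters."""
    for _ in range(cycles):
        for t in terminals:
            while t.has_input():    # collect whatever arrived since the last visit
                print(f"{t.name}: received {t.read_char()!r}")
        time.sleep(0.01)            # the mainframe does other work between sweeps

terminals = [Terminal("tty1"), Terminal("tty2")]
terminals[0].buffer.extend("RUN")   # simulate a user typing at tty1
poll_loop(terminals)
```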
DTSS's creators wrote in 1968 that "any response time which averages more than 10 seconds destroys the illusion of having one's own computer". Conversely, timesharing users thought that their terminal was the computer, and unless they received a bill for using the service, rarely thought about how others shared the computer's resources, such as when a large JOSS application caused paging for all users. The JOSS Newsletter often asked users to reduce storage usage. Time-sharing was nonetheless an efficient way to share a large computer. DTSS supported more than 100 simultaneous users. Although more than 1,000 of the 19,503 jobs the system completed on "a particularly busy day" required ten seconds or more of computer time, DTSS was able to handle the jobs because 78% of jobs needed one second or less of computer time. About 75% of 3,197 users used their terminal for 30 minutes or less, during which they used less than four seconds of computer time. A football simulation, among early mainframe games written for DTSS, used less than two seconds of computer time during the 15 minutes of real time for playing the game. With the rise of microcomputing in the early 1980s, time-sharing became less significant, because individual microprocessors were sufficiently inexpensive that a single person could have all the CPU time dedicated solely to their needs, even when idle.
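A rough back-of-the-envelope estimate, using the DTSS figures quoted above, shows why this sharing was workable: a typical light user who consumed under four seconds of processor time during a 30-minute session kept the CPU busy for only a tiny fraction of the time they were connected,

\[
\frac{4\ \text{s of CPU time}}{30 \times 60\ \text{s of connect time}} \approx 0.2\%,
\]

so, in principle, on the order of a hundred such users could share one processor, consistent with DTSS serving more than 100 simultaneous users.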
However, the Internet brought the general concept of time-sharing back into popularity. Expensive corporate server farms costing millions can host thousands of customers all sharing the same common resources. As with the early serial terminals, web sites operate primarily in bursts of activity followed by periods of idle time. This bursting nature permits the service to be used by many customers at once, usually with no perceptible communication delays, unless the servers start to get very busy.
Time-sharing business
Genesis
In the 1960s, several companies started providing time-sharing services as service bureaus. Early systems used Teletype Model 33 KSR or ASR or Teletype Model 35 KSR or ASR machines in ASCII environments, and IBM Selectric typewriter-based terminals (especially the IBM 2741) with two different seven-bit codes. They would connect to the central computer by dial-up Bell 103A modem or acoustically coupled modems operating at 10–15 characters per second. Later terminals and modems supported 30–120 characters per second. The time-sharing system would provide a complete operating environment, including a variety of programming language processors, various software packages, file storage, bulk printing, and off-line storage. Users were charged rent for the terminal, a charge for hours of connect time, a charge for seconds of CPU time, and a charge for kilobyte-months of disk storage.
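As an illustration of how those separate charges combined, here is a small sketch in Python; the rates and usage figures are entirely hypothetical and are chosen only to show the structure of a monthly invoice (terminal rental plus connect hours plus CPU seconds plus kilobyte-months of storage), not the actual prices of any bureau.

```python
# Hypothetical monthly bill for a 1960s-style time-sharing bureau.
# All rates and usage figures below are invented for illustration.

RATES = {
    "terminal_rent_per_month": 75.00,   # dollars per rented terminal
    "connect_per_hour": 10.00,          # dollars per hour of connect time
    "cpu_per_second": 0.05,             # dollars per second of CPU time
    "storage_per_kb_month": 0.02,       # dollars per kilobyte-month of disk
}

def monthly_bill(connect_hours, cpu_seconds, kb_months, terminals=1):
    charges = {
        "terminal rent": terminals * RATES["terminal_rent_per_month"],
        "connect time":  connect_hours * RATES["connect_per_hour"],
        "CPU time":      cpu_seconds * RATES["cpu_per_second"],
        "disk storage":  kb_months * RATES["storage_per_kb_month"],
    }
    charges["total"] = sum(charges.values())
    return charges

for item, amount in monthly_bill(connect_hours=40, cpu_seconds=900, kb_months=500).items():
    print(f"{item:>13}: ${amount:,.2f}")
```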
Common systems used for time-sharing included the SDS 940, the PDP-10, the IBM 360, and the GE-600 series. Companies providing this service included GE's GEISCO, the IBM subsidiary The Service Bureau Corporation, Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), AL/COM, Bolt, Beranek, and Newman (BBN) and Time Sharing Ltd. in the UK. By 1968, there were 32 such service bureaus serving the US National Institutes of Health (NIH) alone. The Auerbach Guide to Timesharing (1973) lists 125 different timesharing services using equipment from Burroughs, CDC, DEC, HP, Honeywell, IBM, RCA, Univac, and XDS.
Rise and fall
In 1975, acting president of Prime Computer Ben F. Robelen told stockholders that "The biggest end-user market currently is time-sharing". For DEC, for a while the second largest computer company (after IBM), this was also true: Their PDP-10 and IBM's 360/67 were widely used by commercial timesharing services such as CompuServe, On-Line Systems, Inc. (OLS), Rapidata and Time Sharing Ltd.
The advent of the personal computer marked the beginning of the decline of time-sharing. The economics were such that computer time went from being an expensive resource that had to be shared to being so cheap that computers could be left to sit idle for long periods in order to be available as needed.
Rapidata as an example
Although many time-sharing services simply closed, Rapidata held on, and became part of National Data Corporation. It was still of sufficient interest in 1982 to be the focus of "A User's Guide to Statistics Programs: The Rapidata Timesharing System". Even as revenue fell by 66% and National Data subsequently developed its own problems, attempts were made to keep this timesharing business going.
UK
Time Sharing Limited (TSL, 1969–1974) - launched using DEC systems. PERT was one of its popular offerings. TSL was acquired by ADP in 1974.
OLS Computer Services (UK) Limited (1975–1980) - using HP & DEC systems.
The computer utility
Beginning in 1964, the Multics operating system was designed as a computing utility, modeled on the electrical or telephone utilities. In the 1970s, Ted Nelson's original "Xanadu" hypertext repository was envisioned as such a service.
Security
Time-sharing was the first time that multiple processes, owned by different users, were running on a single machine, and these processes could interfere with one another. For example, one process might alter shared resources which another process relied on, such as a variable stored in memory. When only one user was using the system, this would at worst produce wrong output; with multiple users, it might also mean that users saw information they were not meant to see.
To prevent this from happening, an operating system needed to enforce a set of policies that determined which privileges each process had. For example, the operating system might deny access to a certain variable by a certain process.
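A minimal sketch of such a policy check might look like the following (hypothetical names and data structures; real systems of the period enforced this in the supervisor, usually backed by hardware memory protection): before a process touches a resource, the operating system looks up the resource's owner and granted permissions, and refuses the access otherwise.

```python
# Minimal sketch of per-process access control (illustrative; not a real OS API).

class AccessDenied(Exception):
    pass

# Privilege table: resource name -> owner and permissions granted to other users.
PRIVILEGES = {
    "payroll_variable": {"owner": "alice", "others": set()},      # private to alice
    "shared_counter":   {"owner": "alice", "others": {"read"}},   # others may read
}

def check_access(user, resource, operation):
    entry = PRIVILEGES[resource]
    if user == entry["owner"]:
        return                      # owners may read and write their own resources
    if operation not in entry["others"]:
        raise AccessDenied(f"{user} may not {operation} {resource}")

check_access("alice", "payroll_variable", "write")    # allowed: alice owns it
check_access("bob", "shared_counter", "read")         # allowed: read granted to others
try:
    check_access("bob", "payroll_variable", "read")   # denied: no permission granted
except AccessDenied as error:
    print(error)
```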
The first international conference on computer security in London in 1971 was primarily driven by the time-sharing industry and its customers.
Time-sharing in the form of shell accounts has been considered a risk.
Notable time-sharing systems
Significant early timesharing systems:
Allen-Babcock RUSH (Remote Users of Shared Hardware) Time-sharing System on IBM S/360 hardware (1966) → Tymshare
AT&T Bell Labs Unix (1971) → UC Berkeley BSD Unix (1977)
BBN PDP-1 Time-sharing System → Massachusetts General Hospital PDP-1D → MUMPS
BBN TENEX → DEC TOPS-20, Foonly FOONEX, MAXC OS at PARC, Stanford Low Overhead TimeSharing (LOTS), which ran TOPS-20
Berkeley Timesharing System at UC Berkeley Project Genie → Scientific Data Systems SDS 940 (Tymshare, BBN, SRI, Community Memory) → BCC 500 → MAXC at PARC
Burroughs Time-sharing MCP → HP 3000 MPE
Cambridge Multiple Access System was developed for the Titan, the prototype Atlas 2 computer built by Ferranti for the University of Cambridge. This was the first time-sharing system developed outside the United States, and which influenced the later development of UNIX.
Compower Ltd., a wholly owned subsidiary of the National Coal Board (later British Coal Corporation) in the UK. Originally National Coal Board (NCB) Computer Services, it became Compower in 1973 providing computing and time-share services to internal NCB users and as a commercial service to external users. Sold to Philips C&P (Communications and Processing) in August 1994.
CompuServe, also branded as Compu-Serv, CIS.
Compu-Time, Inc., on Honeywell 400/4000, started in 1968 in Ft Lauderdale, Florida, moved to Daytona Beach in 1970.
CDC MACE, APEX → Kronos → NOS → NOS/VE
Dartmouth Time-Sharing System (DTSS) → GE Time-sharing → GEnie
DEC PDP-6 Time-sharing Monitor → TOPS-10 → BBN TENEX → DEC TOPS-20
DEC TSS/8 → RSTS-11, RSX-11 → OpenVMS
English Electric KDF9 COTAN (Culham Online Task Activation Network)
HP 2000 Time-Shared BASIC
HP 3000 series
IBM CALL/360, CALL/OS - using IBM System/360 Model 50
IBM CP-40 → CP-67 → CP-370 → CP/CMS → VM/CMS
IBM TSO for OS/MVT → for OS/VS2 → for MVS → for z/OS
IBM TSS/360 → TSS/370
ICT 1900 series GEORGE 3 MOP (Multiple Online Programming)
International Timesharing Corporation on dual CDC 3300 systems.
Linux: see how it evolved from MIT CTSS
MIT CTSS → MULTICS (MIT / GE / Bell Labs) → Unix → Linux
MIT Time-sharing System for the DEC PDP-1 → ITS
McGill University MUSIC → IBM MUSIC/SP
Michigan Terminal System, on the IBM S/360-67, S/370, and successors.
Michigan State University CDC SCOPE/HUSTLER System
National CSS VP/CSS, on IBM 360 series; originally based on IBM's CP/CMS.
Oregon State University OS-3, on CDC 3000 series.
Prime Computer PRIMOS
RAND JOSS → JOSS-2 → JOSS-3
RCA TSOS → Univac / Unisys VMOS → VS/9
Service in Informatics and Analysis (SIA), on CDC 6600 Kronos.
System Development Corporation Time-sharing System, on the AN/FSQ-32.
Stanford ORVYL and WYLBUR, on IBM S/360-67.
Stanford PDP-1 Time-sharing System → SAIL → WAITS
Time Sharing Ltd. (TSL) on DEC PDP-10 systems → Automatic Data Processing (ADP), first commercial time-sharing system in Europe and first dual (fault tolerant) time-sharing system.
Tone (TSO-like, for VS1), a non-IBM Time-sharing product, marketed by Tone Software Co; TSO required VS2.
Tymshare SDS-940 → Tymcom X → Tymcom XX
Unisys/UNIVAC 1108 EXEC 8 → OS 1100 → OS 2200
UC Berkeley CAL-TSS, on CDC 6400.
XDS UTS → CP-V → Honeywell CP-6
Jupiter trojan
The Jupiter trojans, commonly called trojan asteroids or simply trojans, are a large group of asteroids that share the planet Jupiter's orbit around the Sun. Relative to Jupiter, each trojan librates around one of Jupiter's stable Lagrange points: either L4, existing 60° ahead of the planet in its orbit, or L5, 60° behind. Jupiter trojans are distributed in two elongated, curved regions around these Lagrangian points with an average semi-major axis of about 5.2 AU.
The first Jupiter trojan discovered, 588 Achilles, was spotted in 1906 by German astronomer Max Wolf. More than 9,800 Jupiter trojans have been found. By convention, they are each named from Greek mythology after a figure of the Trojan War, hence the name "trojan". The total number of Jupiter trojans larger than 1 km in diameter is believed to be about one million, approximately equal to the number of asteroids larger than 1 km in the asteroid belt. Like main-belt asteroids, Jupiter trojans form families.
To observational instruments, many Jupiter trojans appear as dark bodies with reddish, featureless spectra. No firm evidence of the presence of water, or of any other specific compound, on their surfaces has been obtained, but it is thought that they are coated in tholins, organic polymers formed by the Sun's radiation. The Jupiter trojans' densities (as measured by studying binaries or rotational lightcurves) vary from 0.8 to 2.5 g·cm−3. Jupiter trojans are thought to have been captured into their orbits during the early stages of the Solar System's formation or slightly later, during the migration of giant planets.
The term "Trojan Asteroid" specifically refers to the asteroids co-orbital with Jupiter, but the general term "trojan" is sometimes more generally applied to other small Solar System bodies with similar relationships to larger bodies: Mars trojans, Neptune trojans, Uranus trojans and Earth trojans are known to exist. Temporary Venus trojans and Saturn trojans exist, as well as for 1 Ceres and 4 Vesta. The term "Trojan asteroid" is normally understood to specifically mean the Jupiter trojans because the first Trojans were discovered near Jupiter's orbit and Jupiter currently has by far the most known Trojans.
Observational history
In 1772, Italian-born mathematician Joseph-Louis Lagrange, in studying the restricted three-body problem, predicted that a small body sharing an orbit with a planet but lying 60° ahead of or behind it would be trapped near these points. The trapped body will librate slowly around the point of equilibrium in a tadpole or horseshoe orbit. These leading and trailing points are called the L4 and L5 Lagrange points. The first asteroids trapped in Lagrange points were observed more than a century after Lagrange's hypothesis. Those associated with Jupiter were the first to be discovered.
E. E. Barnard made the first recorded observation of a trojan, (identified as A904 RD at the time), in 1904, but neither he nor others appreciated its significance at the time. Barnard believed he had seen the recently discovered Saturnian satellite Phoebe, which was only two arc-minutes away in the sky at the time, or possibly an asteroid. The object's identity was not understood until its orbit was calculated in 1999.
The first accepted discovery of a trojan occurred in February 1906, when astronomer Max Wolf of Heidelberg-Königstuhl State Observatory discovered an asteroid at the L4 Lagrangian point of the Sun–Jupiter system, later named 588 Achilles. In 1906–1907 two more Jupiter trojans were found by fellow German astronomer August Kopff (624 Hektor and 617 Patroclus). Hektor, like Achilles, belonged to the L4 swarm ("ahead" of the planet in its orbit), whereas Patroclus was the first asteroid known to reside at the L5 Lagrangian point ("behind" the planet). By 1938, 11 Jupiter trojans had been detected. This number increased to 14 only in 1961. As instruments improved, the rate of discovery grew rapidly: by January 2000, a total of 257 had been discovered; by May 2003, the number had grown to 1,600. There are 4,601 known Jupiter trojans at L4 and 2,439 at L5.
Nomenclature
The custom of naming all asteroids in Jupiter's L4 and L5 points after famous heroes of the Trojan War was suggested by Johann Palisa of Vienna, who was the first to accurately calculate their orbits.
Asteroids in the leading (L4) orbit are named after Greek heroes (the "Greek node or camp" or "Achilles group"), and those at the trailing (L5) orbit are named after the heroes of Troy (the "Trojan node or camp"). The asteroids 617 Patroclus and 624 Hektor were named before the Greece/Troy rule was devised, resulting in a "Greek spy", Patroclus, in the Trojan node and a "Trojan spy", Hector, in the Greek node.
In 2018, at its 30th General Assembly in Vienna, the International Astronomical Union amended the naming convention for Jupiter trojans, allowing for asteroids with H larger than 12 (that is, a mean diameter smaller than approximately 22 kilometers, for an assumed albedo of 0.057) to be named after Olympic athletes, because there are now far more known Jupiter trojans than available names of Greek and Trojan warriors that fought in the Trojan war.
Numbers and mass
Estimates of the total number of Jupiter trojans are based on deep surveys of limited areas of the sky. The L4 swarm is believed to hold between 160,000 and 240,000 asteroids with diameters larger than 2 km and about 600,000 with diameters larger than 1 km. If the L5 swarm contains a comparable number of objects, there are more than a million Jupiter trojans 1 km in size or larger. For the objects brighter than absolute magnitude 9.0 the population is probably complete. These numbers are similar to those of comparable asteroids in the asteroid belt. The total mass of the Jupiter trojans is estimated at 0.0001 of the mass of Earth, or one-fifth of the mass of the asteroid belt.
Two more recent studies indicate that the above numbers may overestimate the number of Jupiter trojans by several-fold. This overestimate is caused by (1) the assumption that all Jupiter trojans have a low albedo of about 0.04, whereas small bodies may have an average albedo as high as 0.12; and (2) an incorrect assumption about the distribution of Jupiter trojans in the sky. According to the new estimates, the total numbers of Jupiter trojans with a diameter larger than 2 km in the L4 and L5 swarms are correspondingly several-fold smaller. These numbers would be reduced by a factor of 2 if small Jupiter trojans are more reflective than large ones.
The number of Jupiter trojans observed in the L4 swarm is slightly larger than that observed in L5. Because the brightest Jupiter trojans show little variation in numbers between the two populations, this disparity is probably due to observational bias. Some models indicate that the L4 swarm may be slightly more stable than the L5 swarm.
The largest Jupiter trojan is 624 Hektor, which has a mean diameter of 203 ± 3.6 km. There are few large Jupiter trojans in comparison to the overall population. With decreasing size, the number of Jupiter trojans grows very quickly down to 84 km, much more so than in the asteroid belt. A diameter of 84 km corresponds to an absolute magnitude of 9.5, assuming an albedo of 0.04. Within the 4.4–40 km range the Jupiter trojans' size distribution resembles that of the main-belt asteroids. Nothing is known about the masses of the smaller Jupiter trojans. The size distribution suggests that the smaller Trojans may be the products of collisions by larger Jupiter trojans.
Orbits
Jupiter trojans have orbits with radii between 5.05 and 5.35 AU (the mean semi-major axis is 5.2 ± 0.15 AU), and are distributed throughout elongated, curved regions around the two Lagrangian points; each swarm stretches for about 26° along the orbit of Jupiter, amounting to a total distance of about 2.5 AU. The width of the swarms approximately equals two Hill radii, which in the case of Jupiter amounts to about 0.6 AU. Many Jupiter trojans have large orbital inclinations relative to Jupiter's orbital plane—up to 40°.
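The Hill radius mentioned above can be written out explicitly; using Jupiter's semi-major axis of about 5.2 AU and a Jupiter-to-Sun mass ratio of roughly 1/1047 (and neglecting Jupiter's small orbital eccentricity), it evaluates to about 0.35 AU, so two Hill radii are of the same order as the swarm width quoted above:

\[
r_{\mathrm{H}} \approx a \left( \frac{m_{\mathrm{J}}}{3 M_{\odot}} \right)^{1/3}
\approx 5.2\,\mathrm{AU} \times \left( \frac{1}{3 \times 1047} \right)^{1/3}
\approx 0.35\,\mathrm{AU}.
\]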
Jupiter trojans do not maintain a fixed separation from Jupiter. They slowly librate around their respective equilibrium points, periodically moving closer to Jupiter or farther from it. Jupiter trojans generally follow paths called tadpole orbits around the Lagrangian points; the average period of their libration is about 150 years. The amplitude of the libration (along the Jovian orbit) varies from 0.6° to 88°, with the average being about 33°. Simulations show that Jupiter trojans can follow even more complicated trajectories when moving from one Lagrangian point to another—these are called horseshoe orbits (currently no Jupiter Trojan with such an orbit is known, though one is known for Neptune).
Dynamical families and binaries
Discerning dynamical families within the Jupiter trojan population is more difficult than it is in the asteroid belt, because the Jupiter trojans are locked within a far narrower range of possible positions. This means that clusters tend to overlap and merge with the overall swarm. By 2003 roughly a dozen dynamical families were identified. Jupiter-trojan families are much smaller in size than families in the asteroid belt; the largest identified family, the Menelaus group, consists of only eight members.
In 2001, 617 Patroclus was the first Jupiter trojan to be identified as a binary asteroid. The binary's orbit is extremely close, at 650 km, compared to 35,000 km for the primary's Hill sphere. The largest Jupiter trojan—624 Hektor— is probably a contact binary with a moonlet.
Physical properties
Jupiter trojans are dark bodies of irregular shape. Their geometric albedos generally vary between 3 and 10%. The average value is 0.056 ± 0.003 for the objects larger than 57 km, and 0.121 ± 0.003 (R-band) for those smaller than 25 km. The asteroid 4709 Ennomos has the highest albedo (0.18) of all known Jupiter trojans. Little is known about the masses, chemical composition, rotation or other physical properties of the Jupiter trojans.
Rotation
The rotational properties of Jupiter trojans are not well known. Analysis of the rotational light curves of 72 Jupiter trojans gave an average rotational period of about 11.2 hours, whereas the average period of the control sample of asteroids in the asteroid belt was 10.6 hours. The distribution of the rotational periods of Jupiter trojans appeared to be well approximated by a Maxwellian function, whereas the distribution for main-belt asteroids was found to be non-Maxwellian, with a deficit of periods in the range 8–10 hours. The Maxwellian distribution of the rotational periods of Jupiter trojans may indicate that they have undergone a stronger collisional evolution compared to the asteroid belt.
In 2008 a team from Calvin College examined the light curves of a debiased sample of ten Jupiter trojans, and found a median spin period of 18.9 hours. This value was significantly higher than that for main-belt asteroids of similar size (11.5 hours). The difference could mean that the Jupiter trojans possess a lower average density, which may imply that they formed in the Kuiper belt (see below).
Composition
Spectroscopically, the Jupiter trojans are mostly D-type asteroids, which predominate in the outer regions of the asteroid belt. A small number are classified as P- or C-type asteroids. Their spectra are red (meaning that they reflect more light at longer wavelengths) or neutral and featureless. No firm evidence of water, organics or other chemical compounds has been obtained. 4709 Ennomos has an albedo slightly higher than the Jupiter-trojan average, which may indicate the presence of water ice. Some other Jupiter trojans, such as 911 Agamemnon and 617 Patroclus, have shown very weak absorptions at 1.7 and 2.3 μm, which might indicate the presence of organics. The Jupiter trojans' spectra are similar to those of the irregular moons of Jupiter and, to a certain extent, comet nuclei, though Jupiter trojans are spectrally very different from the redder Kuiper belt objects. A Jupiter trojan's spectrum can be matched to a mixture of water ice, a large amount of carbon-rich material (charcoal), and possibly magnesium-rich silicates. The composition of the Jupiter trojan population appears to be markedly uniform, with little or no differentiation between the two swarms.
A team from the Keck Observatory in Hawaii announced in 2006 that it had measured the density of the binary Jupiter trojan 617 Patroclus as being less than that of water ice (0.8 g/cm3), suggesting that the pair, and possibly many other Trojan objects, more closely resemble comets or Kuiper belt objects in composition—water ice with a layer of dust—than they do the main-belt asteroids. Countering this argument, the density of Hektor as determined from its rotational lightcurve (2.480 g/cm3) is significantly higher than that of 617 Patroclus. Such a difference in densities suggests that density may not be a good indicator of asteroid origin.
Origin and evolution
Two main theories have emerged to explain the formation and evolution of the Jupiter trojans. The first suggests that the Jupiter trojans formed in the same part of the Solar System as Jupiter and entered their orbits while it was forming. The last stage of Jupiter's formation involved runaway growth of its mass through the accretion of large amounts of hydrogen and helium from the protoplanetary disk; during this growth, which lasted for only about 10,000 years, the mass of Jupiter increased by a factor of ten. The planetesimals that had approximately the same orbits as Jupiter were caught by the increased gravity of the planet. The capture mechanism was very efficient—about 50% of all remaining planetesimals were trapped. This hypothesis has two major problems: the number of trapped bodies exceeds the observed population of Jupiter trojans by four orders of magnitude, and the present Jupiter trojan asteroids have larger orbital inclinations than are predicted by the capture model. Simulations of this scenario show that such a mode of formation also would inhibit the creation of similar trojans for Saturn, and this has been borne out by observation: to date no trojans have been found near Saturn. In a variation of this theory Jupiter captures trojans during its initial growth then migrates as it continues to grow. During Jupiter's migration the orbits of objects in horseshoe orbits are distorted causing the L4 side of these orbits to be over occupied. As a result, an excess of trojans is trapped on the L4 side when the horseshoe orbits shift to tadpole orbits as Jupiter grows. This model also leaves the Jupiter trojan population 3–4 orders of magnitude too large.
The second theory proposes that the Jupiter trojans were captured during the migration of the giant planets described in the Nice model. In the Nice model the orbits of the giant planets became unstable years after the Solar System's formation when Jupiter and Saturn crossed their 1:2 mean-motion resonance. Encounters between planets resulted in Uranus and Neptune being scattered outward into the primordial Kuiper belt, disrupting it and throwing millions of objects inward. When Jupiter and Saturn were near their 1:2 resonance the orbits of pre-existing Jupiter trojans became unstable during a secondary resonance with Jupiter and Saturn. This occurred when the period of the trojans' libration about their Lagrangian point had a 3:1 ratio to the period at which the position where Jupiter passes Saturn circulated relative to its perihelion. This process was also reversible allowing a fraction of the numerous objects scattered inward by Uranus and Neptune to enter this region and be captured as Jupiter's and Saturn's orbits separated. These new trojans had a wide range of inclinations, the result of multiple encounters with the giant planets before being captured. This process can also occur later when Jupiter and Saturn cross weaker resonances.
In a revised version of the Nice model Jupiter trojans are captured when Jupiter encounters an ice giant during the instability. In this version of the Nice model one of the ice giants (Uranus, Neptune, or a lost fifth planet) is scattered inward onto a Jupiter-crossing orbit and is scattered outward by Jupiter causing the orbits of Jupiter and Saturn to quickly separate. When Jupiter's semi-major axis jumps during these encounters existing Jupiter trojans can escape and new objects with semi-major axes similar to Jupiter's new semi-major axis are captured. Following its last encounter the ice giant can pass through one of the libration points and perturb their orbits leaving this libration point depleted relative to the other. After the encounters end some of these Jupiter trojans are lost and others captured when Jupiter and Saturn are near weak mean motion resonances such as the 3:7 resonance via the mechanism of the original Nice model.
The long-term future of the Jupiter trojans is open to question, because multiple weak resonances with Jupiter and Saturn cause them to behave chaotically over time. Collisional shattering slowly depletes the Jupiter trojan population as fragments are ejected. Ejected Jupiter trojans could become temporary satellites of Jupiter or Jupiter-family comets. Simulations show that the orbits of up to 17% of Jupiter trojans are unstable over the age of the Solar System. Levison et al. believe that roughly 200 ejected Jupiter trojans greater than 1 km in diameter might be travelling the Solar System, with a few possibly on Earth-crossing orbits. Some of the escaped Jupiter trojans may become Jupiter-family comets as they approach the Sun and their surface ice begins evaporating.
Exploration
On 4 January 2017 NASA announced that Lucy was selected as one of their next two Discovery Program missions. Lucy is set to explore seven Jupiter trojans. It was launched on October 16, 2021, and will arrive at the L4 Trojan cloud in 2027 after two Earth gravity assists and a fly-by of a main-belt asteroid. It will then return to the vicinity of Earth for another gravity assist to take it to Jupiter's L5 Trojan cloud, where it will visit 617 Patroclus.
Theorem
In mathematics and formal logic, a theorem is a statement that has been proven, or can be proven. The proof of a theorem is a logical argument that uses the inference rules of a deductive system to establish that the theorem is a logical consequence of the axioms and previously proved theorems.
In mainstream mathematics, the axioms and the inference rules are commonly left implicit, and, in this case, they are almost always those of Zermelo–Fraenkel set theory with the axiom of choice (ZFC), or of a less powerful theory, such as Peano arithmetic. Generally, an assertion that is explicitly called a theorem is a proved result that is not an immediate consequence of other known theorems. Moreover, many authors qualify as theorems only the most important results, and use the terms lemma, proposition and corollary for less important theorems.
In mathematical logic, the concepts of theorems and proofs have been formalized in order to allow mathematical reasoning about them. In this context, statements become well-formed formulas of some formal language. A theory consists of some basis statements called axioms, and some deducing rules (sometimes included in the axioms). The theorems of the theory are the statements that can be derived from the axioms by using the deducing rules. This formalization led to proof theory, which allows proving general theorems about theorems and proofs. In particular, Gödel's incompleteness theorems show that every consistent theory containing the natural numbers has true statements on natural numbers that are not theorems of the theory (that is they cannot be proved inside the theory).
As the axioms are often abstractions of properties of the physical world, theorems may be considered as expressing some truth, but in contrast to the notion of a scientific law, which is experimental, the justification of the truth of a theorem is purely deductive.
A conjecture is a tentative proposition that may evolve to become a theorem if proven true.
Theoremhood and truth
Until the end of the 19th century and the foundational crisis of mathematics, all mathematical theories were built from a few basic properties that were considered as self-evident; for example, the facts that every natural number has a successor, and that there is exactly one line that passes through two given distinct points. These basic properties that were considered as absolutely evident were called postulates or axioms; for example Euclid's postulates. All theorems were proved by using implicitly or explicitly these basic properties, and, because of the evidence of these basic properties, a proved theorem was considered as a definitive truth, unless there was an error in the proof. For example, the sum of the interior angles of a triangle equals 180°, and this was considered as an undoubtable fact.
One aspect of the foundational crisis of mathematics was the discovery of non-Euclidean geometries that do not lead to any contradiction, although, in such geometries, the sum of the angles of a triangle is different from 180°. So, the property "the sum of the angles of a triangle equals 180°" is either true or false, depending whether Euclid's fifth postulate is assumed or denied. Similarly, the use of "evident" basic properties of sets leads to the contradiction of Russell's paradox. This has been resolved by elaborating the rules that are allowed for manipulating sets.
This crisis has been resolved by revisiting the foundations of mathematics to make them more rigorous. In these new foundations, a theorem is a well-formed formula of a mathematical theory that can be proved from the axioms and inference rules of the theory. So, the above theorem on the sum of the angles of a triangle becomes: Under the axioms and inference rules of Euclidean geometry, the sum of the interior angles of a triangle equals 180°. Similarly, Russell's paradox disappears because, in an axiomatized set theory, the set of all sets cannot be expressed with a well-formed formula. More precisely, if the set of all sets can be expressed with a well-formed formula, this implies that the theory is inconsistent, and every well-formed assertion, as well as its negation, is a theorem.
In this context, the validity of a theorem depends only on the correctness of its proof. It is independent from the truth, or even the significance of the axioms. This does not mean that the significance of the axioms is uninteresting, but only that the validity of a theorem is independent from the significance of the axioms. This independence may be useful by allowing the use of results of some area of mathematics in apparently unrelated areas.
An important consequence of this way of thinking about mathematics is that it allows defining mathematical theories and theorems as mathematical objects, and to prove theorems about them. Examples are Gödel's incompleteness theorems. In particular, there are well-formed assertions that can be proved not to be a theorem of the ambient theory, although they can be proved in a wider theory. An example is Goodstein's theorem, which can be stated in Peano arithmetic, but is proved to be not provable in Peano arithmetic. However, it is provable in some more general theories, such as Zermelo–Fraenkel set theory.
Epistemological considerations
Many mathematical theorems are conditional statements, whose proofs deduce conclusions from conditions known as hypotheses or premises. In light of the interpretation of proof as justification of truth, the conclusion is often viewed as a necessary consequence of the hypotheses. Namely, that the conclusion is true in case the hypotheses are true—without any further assumptions. However, the conditional could also be interpreted differently in certain deductive systems, depending on the meanings assigned to the derivation rules and the conditional symbol (e.g., non-classical logic).
Although theorems can be written in a completely symbolic form (e.g., as propositions in propositional calculus), they are often expressed informally in a natural language such as English for better readability. The same is true of proofs, which are often expressed as logically organized and clearly worded informal arguments, intended to convince readers of the truth of the statement of the theorem beyond any doubt, and from which a formal symbolic proof can in principle be constructed.
In addition to the better readability, informal arguments are typically easier to check than purely symbolic ones—indeed, many mathematicians would express a preference for a proof that not only demonstrates the validity of a theorem, but also explains in some way why it is obviously true. In some cases, one might even be able to substantiate a theorem by using a picture as its proof.
Because theorems lie at the core of mathematics, they are also central to its aesthetics. Theorems are often described as being "trivial", or "difficult", or "deep", or even "beautiful". These subjective judgments vary not only from person to person, but also with time and culture: for example, as a proof is obtained, simplified or better understood, a theorem that was once difficult may become trivial. On the other hand, a deep theorem may be stated simply, but its proof may involve surprising and subtle connections between disparate areas of mathematics. Fermat's Last Theorem is a particularly well-known example of such a theorem.
Informal account of theorems
Logically, many theorems are of the form of an indicative conditional: If A, then B. Such a theorem does not assert B — only that B is a necessary consequence of A. In this case, A is called the hypothesis of the theorem ("hypothesis" here means something very different from a conjecture), and B the conclusion of the theorem. The two together (without the proof) are called the proposition or statement of the theorem (e.g. "If A, then B" is the proposition). Alternatively, A and B can be also termed the antecedent and the consequent, respectively. The theorem "If n is an even natural number, then n/2 is a natural number" is a typical example in which the hypothesis is "n is an even natural number", and the conclusion is "n/2 is also a natural number".
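The example just given can be written as a single formal statement, with the hypothesis on the left of the implication and the conclusion on the right:

\[
\forall n \in \mathbb{N}\; \bigl( \exists k \in \mathbb{N}\, (n = 2k) \;\rightarrow\; n/2 \in \mathbb{N} \bigr).
\]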
In order for a theorem to be proved, it must be in principle expressible as a precise, formal statement. However, theorems are usually expressed in natural language rather than in a completely symbolic form—with the presumption that a formal statement can be derived from the informal one.
It is common in mathematics to choose a number of hypotheses within a given language and declare that the theory consists of all statements provable from these hypotheses. These hypotheses form the foundational basis of the theory and are called axioms or postulates. The field of mathematics known as proof theory studies formal languages, axioms and the structure of proofs.
Some theorems are "trivial", in the sense that they follow from definitions, axioms, and other theorems in obvious ways and do not contain any surprising insights. Some, on the other hand, may be called "deep", because their proofs may be long and difficult, involve areas of mathematics superficially distinct from the statement of the theorem itself, or show surprising connections between disparate areas of mathematics. A theorem might be simple to state and yet be deep. An excellent example is Fermat's Last Theorem, and there are many other examples of simple yet deep theorems in number theory and combinatorics, among other areas.
Other theorems have a known proof that cannot easily be written down. The most prominent examples are the four color theorem and the Kepler conjecture. Both of these theorems are only known to be true by reducing them to a computational search that is then verified by a computer program. Initially, many mathematicians did not accept this form of proof, but it has become more widely accepted. The mathematician Doron Zeilberger has even gone so far as to claim that these are possibly the only nontrivial results that mathematicians have ever proved. Many mathematical theorems can be reduced to more straightforward computation, including polynomial identities, trigonometric identities and hypergeometric identities.
Relation with scientific theories
Theorems in mathematics and theories in science are fundamentally different in their epistemology. A scientific theory cannot be proved; its key attribute is that it is falsifiable, that is, it makes predictions about the natural world that are testable by experiments. Any disagreement between prediction and experiment demonstrates the incorrectness of the scientific theory, or at least limits its accuracy or domain of validity. Mathematical theorems, on the other hand, are purely abstract formal statements: the proof of a theorem cannot involve experiments or other empirical evidence in the same way such evidence is used to support scientific theories.
Nonetheless, there is some degree of empiricism and data collection involved in the discovery of mathematical theorems. By establishing a pattern, sometimes with the use of a powerful computer, mathematicians may have an idea of what to prove, and in some cases even a plan for how to set about doing the proof. It is also possible to find a single counter-example and so establish the impossibility of a proof for the proposition as-stated, and possibly suggest restricted forms of the original proposition that might have feasible proofs.
For example, both the Collatz conjecture and the Riemann hypothesis are well-known unsolved problems; they have been extensively studied through empirical checks, but remain unproven. The Collatz conjecture has been verified for start values up to about 2.88 × 1018. The Riemann hypothesis has been verified to hold for the first 10 trillion non-trivial zeroes of the zeta function. Although most mathematicians can tolerate supposing that the conjecture and the hypothesis are true, neither of these propositions is considered proved.
Such evidence does not constitute proof. For example, the Mertens conjecture is a statement about natural numbers that is now known to be false, but no explicit counterexample (i.e., a natural number n for which the Mertens function M(n) equals or exceeds the square root of n) is known: all numbers less than 1014 have the Mertens property, and the smallest number that does not have this property is only known to be less than the exponential of 1.59 × 1040, which is approximately 10 to the power 4.3 × 1039. Since the number of particles in the universe is generally considered less than 10 to the power 100 (a googol), there is no hope to find an explicit counterexample by exhaustive search.
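For concreteness, the Mertens function mentioned above is the partial sum of the Möbius function, and the (now refuted) Mertens conjecture asserted a square-root bound on it:

\[
M(n) = \sum_{k=1}^{n} \mu(k), \qquad |M(n)| < \sqrt{n} \quad \text{for all } n > 1.
\]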
The word "theory" also exists in mathematics, to denote a body of mathematical axioms, definitions and theorems, as in, for example, group theory (see mathematical theory). There are also "theorems" in science, particularly physics, and in engineering, but they often have statements and proofs in which physical assumptions and intuition play an important role; the physical axioms on which such "theorems" are based are themselves falsifiable.
Terminology
A number of different terms for mathematical statements exist; these terms indicate the role statements play in a particular subject. The distinction between different terms is sometimes rather arbitrary, and the usage of some terms has evolved over time.
An axiom or postulate is a fundamental assumption regarding the object of study, that is accepted without proof. A related concept is that of a definition, which gives the meaning of a word or a phrase in terms of known concepts. Classical geometry discerns between axioms, which are general statements; and postulates, which are statements about geometrical objects. Historically, axioms were regarded as "self-evident"; today they are merely assumed to be true.
A conjecture is an unproved statement that is believed to be true. Conjectures are usually made in public, and named after their maker (for example, Goldbach's conjecture and Collatz conjecture). The term hypothesis is also used in this sense (for example, Riemann hypothesis), which should not be confused with "hypothesis" as the premise of a proof. Other terms are also used on occasion, for example problem when people are not sure whether the statement should be believed to be true. Fermat's Last Theorem was historically called a theorem, although, for centuries, it was only a conjecture.
A theorem is a statement that has been proven to be true based on axioms and other theorems.
A proposition is a theorem of lesser importance, or one that is considered so elementary or immediately obvious, that it may be stated without proof. This should not be confused with "proposition" as used in propositional logic. In classical geometry the term "proposition" was used differently: in Euclid's Elements, all theorems and geometric constructions were called "propositions" regardless of their importance.
A lemma is an "accessory proposition" - a proposition with little applicability outside its use in a particular proof. Over time a lemma may gain in importance and be considered a theorem, though the term "lemma" is usually kept as part of its name (e.g. Gauss's lemma, Zorn's lemma, and the fundamental lemma).
A corollary is a proposition that follows immediately from another theorem or axiom, with little or no required proof. A corollary may also be a restatement of a theorem in a simpler form, or for a special case: for example, the theorem "all internal angles in a rectangle are right angles" has a corollary that "all internal angles in a square are right angles" - a square being a special case of a rectangle.
A generalization of a theorem is a theorem with a similar statement but a broader scope, from which the original theorem can be deduced as a special case (a corollary).
Other terms may also be used for historical or customary reasons, for example:
An identity is a theorem stating an equality between two expressions, that holds for any value within its domain (e.g. Bézout's identity and Vandermonde's identity).
A rule is a theorem that establishes a useful formula (e.g. Bayes' rule and Cramer's rule).
A law or principle is a theorem with wide applicability (e.g. the law of large numbers, law of cosines, Kolmogorov's zero–one law, Harnack's principle, the least-upper-bound principle, and the pigeonhole principle).
A few well-known theorems have even more idiosyncratic names, for example, the division algorithm, Euler's formula, and the Banach–Tarski paradox.
Layout
A theorem and its proof are typically laid out as follows:
Theorem (name of the person who proved it, along with year of discovery or publication of the proof)
Statement of theorem (sometimes called the proposition)
Proof
Description of proof
End
The end of the proof may be signaled by the letters Q.E.D. (quod erat demonstrandum) or by one of the tombstone marks, such as "□" or "∎", meaning "end of proof", introduced by Paul Halmos following their use in magazines to mark the end of an article.
The exact style depends on the author or publication. Many publications provide instructions or macros for typesetting in the house style.
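In LaTeX, for instance, this layout is commonly produced with the amsthm package; the following is a minimal sketch of such house-style machinery (the theorem and proof text are only placeholders), in which the proof environment supplies the tombstone mark automatically.

```latex
\documentclass{article}
\usepackage{amsmath,amsthm}

\newtheorem{theorem}{Theorem}   % numbered theorem environment

\begin{document}

\begin{theorem}[Pythagoras]
In a right triangle, the square of the hypotenuse equals the sum of the
squares of the other two sides.
\end{theorem}

\begin{proof}
A sketch of the argument goes here; the tombstone is added at the end.
\end{proof}

\end{document}
```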
It is common for a theorem to be preceded by definitions describing the exact meaning of the terms used in the theorem. It is also common for a theorem to be preceded by a number of propositions or lemmas which are then used in the proof. However, lemmas are sometimes embedded in the proof of a theorem, either with nested proofs, or with their proofs presented after the proof of the theorem.
Corollaries to a theorem are either presented between the theorem and the proof, or directly after the proof. Sometimes, corollaries have proofs of their own that explain why they follow from the theorem.
Lore
It has been estimated that over a quarter of a million theorems are proved every year.
The well-known aphorism, "A mathematician is a device for turning coffee into theorems", is probably due to Alfréd Rényi, although it is often attributed to Rényi's colleague Paul Erdős (and Rényi may have been thinking of Erdős), who was famous for the many theorems he produced, the number of his collaborations, and his coffee drinking.
The classification of finite simple groups is regarded by some to be the longest proof of a theorem. It comprises tens of thousands of pages in 500 journal articles by some 100 authors. These papers are together believed to give a complete proof, and several ongoing projects hope to shorten and simplify this proof. Another theorem of this type is the four color theorem whose computer generated proof is too long for a human to read. It is among the longest known proofs of a theorem whose statement can be easily understood by a layman.
Theorems in logic
In mathematical logic, a formal theory is a set of sentences within a formal language. A sentence is a well-formed formula with no free variables. A sentence that is a member of a theory is one of its theorems, and the theory is the set of its theorems. Usually a theory is understood to be closed under the relation of logical consequence. Some accounts define a theory to be closed under the semantic consequence relation (⊨), while others define it to be closed under the syntactic consequence, or derivability, relation (⊢).
For a theory to be closed under a derivability relation, it must be associated with a deductive system that specifies how the theorems are derived. The deductive system may be stated explicitly, or it may be clear from the context. The closure of the empty set under the relation of logical consequence yields the set that contains just those sentences that are the theorems of the deductive system.
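In symbols, under the syntactic (derivability) reading, the theorems of a theory \(T\) are simply its deductive closure, and the theorems of the deductive system itself are those derivable from no premises at all:

\[
\operatorname{Thm}(T) = \{\, \varphi \mid T \vdash \varphi \,\}, \qquad
\operatorname{Thm}(\varnothing) = \{\, \varphi \mid \varnothing \vdash \varphi \,\}.
\]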
In the broad sense in which the term is used within logic, a theorem does not have to be true, since the theory that contains it may be unsound relative to a given semantics, or relative to the standard interpretation of the underlying language. A theory that is inconsistent has all sentences as theorems.
The definition of theorems as sentences of a formal language is useful within proof theory, which is a branch of mathematics that studies the structure of formal proofs and the structure of provable formulas. It is also important in model theory, which is concerned with the relationship between formal theories and structures that are able to provide a semantics for them through interpretation.
Although theorems may be uninterpreted sentences, in practice mathematicians are more interested in the meanings of the sentences, i.e. in the propositions they express. What makes formal theorems useful and interesting is that they may be interpreted as true propositions and their derivations may be interpreted as a proof of their truth. A theorem whose interpretation is a true statement about a formal system (as opposed to within a formal system) is called a metatheorem.
Some important theorems in mathematical logic are:
Compactness of first-order logic
Completeness of first-order logic
Gödel's incompleteness theorems of first-order arithmetic
Consistency of first-order arithmetic
Tarski's undefinability theorem
Church-Turing theorem of undecidability
Löb's theorem
Löwenheim–Skolem theorem
Lindström's theorem
Craig's theorem
Cut-elimination theorem
Syntax and semantics
The concept of a formal theorem is fundamentally syntactic, in contrast to the notion of a true proposition, which introduces semantics. Different deductive systems can yield other interpretations, depending on the presumptions of the derivation rules (i.e. belief, justification or other modalities). The soundness of a formal system depends on whether or not all of its theorems are also validities. A validity is a formula that is true under any possible interpretation (for example, in classical propositional logic, validities are tautologies). A formal system is considered semantically complete when all of its theorems are also tautologies.
Interpretation of a formal theorem
Theorems and theories
Testosterone
Testosterone is the primary male sex hormone and androgen in males. In humans, testosterone plays a key role in the development of male reproductive tissues such as testicles and prostate, as well as promoting secondary sexual characteristics such as increased muscle and bone mass, and the growth of body hair. It is associated with increased aggression, sex drive, dominance, courtship display, and a wide range of behavioral characteristics. In addition, testosterone in both sexes is involved in health and well-being, where it has a significant effect on overall mood, cognition, social and sexual behavior, metabolism and energy output, the cardiovascular system, and in the prevention of osteoporosis. Insufficient levels of testosterone in men may lead to abnormalities including frailty, accumulation of adipose fat tissue within the body, anxiety and depression, sexual performance issues, and bone loss.
Excessive levels of testosterone in men may be associated with hyperandrogenism, higher risk of heart failure, increased mortality in men with prostate cancer, and male pattern baldness.
Testosterone is a steroid hormone from the androstane class containing a ketone and a hydroxyl group at positions three and seventeen respectively. It is biosynthesized in several steps from cholesterol and is converted in the liver to inactive metabolites. It exerts its action through binding to and activation of the androgen receptor. In humans and most other vertebrates, testosterone is secreted primarily by the testicles of males and, to a lesser extent, the ovaries of females. On average, in adult males, levels of testosterone are about seven to eight times as great as in adult females. As the metabolism of testosterone in males is more pronounced, the daily production is about 20 times greater in men. Females are also more sensitive to the hormone.
In addition to its role as a natural hormone, testosterone is used as a medication to treat hypogonadism and breast cancer. Since testosterone levels decrease as men age, testosterone is sometimes used in older men to counteract this deficiency. It is also used illicitly to enhance physique and performance, for instance in athletes. The World Anti-Doping Agency lists it as S1 Anabolic agent substance "prohibited at all times".
Biological effects
Effects on physiological development
In general, androgens such as testosterone promote protein synthesis and thus growth of tissues with androgen receptors. Testosterone can be described as having anabolic and androgenic (virilising) effects, though these categorical descriptions are somewhat arbitrary, as there is a great deal of mutual overlap between them. The relative potency of these effects can depend on various factors and is a topic of ongoing research. Testosterone can either directly exert effects on target tissues or be metabolized by 5α-reductase into dihydrotestosterone (DHT) or aromatized to estradiol (E2). Both testosterone and DHT bind to an androgen receptor; however, DHT has a stronger binding affinity than testosterone and may have more androgenic effect in certain tissues at lower levels.
Anabolic effects include growth of muscle mass and strength, increased bone density and strength, and stimulation of linear growth and bone maturation.
Androgenic effects include maturation of the sex organs, particularly the penis, and the formation of the scrotum in the fetus, and after birth (usually at puberty) a deepening of the voice, growth of facial hair (such as the beard) and axillary (underarm) hair. Many of these fall into the category of male secondary sex characteristics.
Testosterone effects can also be classified by the age of usual occurrence. For postnatal effects in both males and females, these are mostly dependent on the levels and duration of circulating free testosterone.
Before birth
Effects before birth are divided into two categories, classified in relation to the stages of development.
The first period occurs between 4 and 6 weeks of the gestation. Examples include genital virilisation such as midline fusion, phallic urethra, scrotal thinning and rugation, and phallic enlargement; although the role of testosterone is far smaller than that of dihydrotestosterone. There is also development of the prostate gland and seminal vesicles.
During the second trimester, androgen level is associated with sex formation. Specifically, testosterone, along with anti-Müllerian hormone (AMH), promote growth of the Wolffian duct and degeneration of the Müllerian duct, respectively. This period affects the feminization or masculinization of the fetus and can be a better predictor of feminine or masculine behaviours, such as sex-typed behaviour, than an adult's own levels. Prenatal androgens apparently influence interests and engagement in gendered activities and have moderate effects on spatial abilities. Among women with congenital adrenal hyperplasia, male-typical play in childhood correlated with reduced satisfaction with the female gender and reduced heterosexual interest in adulthood.
Early infancy
Early infancy androgen effects are the least understood. In the first weeks of life for male infants, testosterone levels rise. The levels remain in a pubertal range for a few months, but usually reach the barely detectable levels of childhood by 4–7 months of age. The function of this rise in humans is unknown. It has been theorized that brain masculinization is occurring since no significant changes have been identified in other parts of the body. The male brain is masculinized by the aromatization of testosterone into estradiol, which crosses the blood–brain barrier and enters the male brain, whereas female fetuses have α-fetoprotein, which binds the estrogen so that female brains are not affected.
Before puberty
Before puberty, effects of rising androgen levels occur in both boys and girls. These include adult-type body odor, increased oiliness of skin and hair, acne, pubarche (appearance of pubic hair), axillary hair (armpit hair), growth spurt, accelerated bone maturation, and facial hair.
Pubertal
Pubertal effects begin to occur when androgen has been higher than normal adult female levels for months or years. In males, these are usual late pubertal effects, and occur in women after prolonged periods of heightened levels of free testosterone in the blood. The effects include:
Growth of spermatogenic tissue in testicles, male fertility, penis or clitoris enlargement, increased libido, and increased frequency of erection or clitoral engorgement.
Growth of the jaw, brow, chin, and nose, and remodeling of facial bone contours, in conjunction with human growth hormone.
Completion of bone maturation and termination of growth. This occurs indirectly via estradiol metabolites and hence more gradually in men than women.
Increased muscle strength and mass, shoulders become broader and rib cage expands, deepening of voice, growth of the Adam's apple.
Enlargement of sebaceous glands, which might cause acne; subcutaneous fat in the face decreases.
Pubic hair extends to thighs and up toward umbilicus, development of facial hair (sideburns, beard, moustache), loss of scalp hair (androgenetic alopecia), increase in chest hair, periareolar hair, perianal hair, leg hair, armpit hair.
Adult
Testosterone is necessary for normal sperm development. It activates genes in Sertoli cells, which promote differentiation of spermatogonia. It regulates acute hypothalamic–pituitary–adrenal (HPA) axis response under dominance challenge. Androgens including testosterone enhance muscle growth. Testosterone also regulates the population of thromboxane A2 receptors on megakaryocytes and platelets and hence platelet aggregation in humans.
Adult testosterone effects are more clearly demonstrable in males than in females, but are likely important to both sexes. Some of these effects may decline as testosterone levels might decrease in the later decades of adult life.
The brain is also affected by this sexual differentiation; the enzyme aromatase converts testosterone into estradiol that is responsible for masculinization of the brain in male mice. In humans, masculinization of the fetal brain appears, by observation of gender preference in patients with congenital disorders of androgen formation or androgen receptor function, to be associated with functional androgen receptors.
There are some differences between a male and female brain that may be due to different testosterone levels, one of them being size: the male human brain is, on average, larger.
Health effects
Testosterone does not appear to increase the risk of developing prostate cancer. In people who have undergone testosterone deprivation therapy, testosterone increases beyond the castrate level have been shown to increase the rate of spread of an existing prostate cancer.
Conflicting results have been obtained concerning the importance of testosterone in maintaining cardiovascular health. Nevertheless, maintaining normal testosterone levels in elderly men has been shown to improve many parameters that are thought to reduce cardiovascular disease risk, such as increased lean body mass, decreased visceral fat mass, decreased total cholesterol, and improved glycemic control.
High androgen levels are associated with menstrual cycle irregularities in both clinical populations and healthy women. There can also be effects such as unusual hair growth, acne, weight gain, infertility, and sometimes even scalp hair loss. These effects are seen largely in women with polycystic ovary syndrome (PCOS). For women with PCOS, hormones such as birth control pills can be used to help lessen the effects of this increased level of testosterone.
Attention, memory, and spatial ability are key cognitive functions affected by testosterone in humans. Preliminary evidence suggests that low testosterone levels may be a risk factor for cognitive decline and possibly for dementia of the Alzheimer's type, a key argument in life extension medicine for the use of testosterone in anti-aging therapies. Much of the literature, however, suggests a curvilinear or even quadratic relationship between spatial performance and circulating testosterone, where both hypo- and hypersecretion (deficient- and excessive-secretion) of circulating androgens have negative effects on cognition.
Immune system and inflammation
Testosterone deficiency is associated with an increased risk of metabolic syndrome, cardiovascular disease and mortality, which are also sequelae of chronic inflammation. Testosterone plasma concentration inversely correlates with multiple biomarkers of inflammation, including CRP, interleukin 1 beta, interleukin 6, TNF alpha and endotoxin concentration, as well as leukocyte count. As demonstrated by a meta-analysis, substitution therapy with testosterone results in a significant reduction of inflammatory markers. These effects are mediated by different mechanisms with synergistic action. In androgen-deficient men with concomitant autoimmune thyroiditis, substitution therapy with testosterone leads to a decrease in thyroid autoantibody titres and an increase in the thyroid's secretory capacity (SPINA-GT).
Medical use
Testosterone is used as a medication for the treatment of male hypogonadism, gender dysphoria, and certain types of breast cancer. This is known as hormone replacement therapy (HRT) or testosterone replacement therapy (TRT), which maintains serum testosterone levels in the normal range. Decline of testosterone production with age has led to interest in androgen replacement therapy. It is unclear if the use of testosterone for low levels due to aging is beneficial or harmful.
Testosterone is included in the World Health Organization's list of essential medicines, which are the most important medications needed in a basic health system. It is available as a generic medication. It can be administered as a cream or transdermal patch that is applied to the skin, by injection into a muscle, as a tablet that is placed in the cheek, or by ingestion.
Common side effects from testosterone medication include acne, swelling, and breast enlargement in males. Serious side effects may include liver toxicity, heart disease (though a randomized trial found no evidence of major adverse cardiac events compared to placebo in men with low testosterone), and behavioral changes. Women and children who are exposed may develop virilization. It is recommended that individuals with prostate cancer not use the medication. It can cause harm if used during pregnancy or breastfeeding.
2020 guidelines from the American College of Physicians support the discussion of testosterone treatment in adult men with age-related low levels of testosterone who have sexual dysfunction. They recommend yearly evaluation regarding possible improvement and, if none, discontinuing testosterone; physicians should consider intramuscular rather than transdermal treatments because of cost, since the effectiveness and harms of the two methods are similar. Testosterone treatment for reasons other than possible improvement of sexual dysfunction may not be recommended.
No immediate short-term effects on mood or behavior were found from the administration of supraphysiologic doses of testosterone for 10 weeks in 43 healthy men.
Behavioural correlations
Sexual arousal
Testosterone levels follow a circadian rhythm that peaks early each day, regardless of sexual activity.
In women, correlations may exist between positive orgasm experience and testosterone levels. Studies have shown small or inconsistent correlations between testosterone levels and male orgasm experience, as well as sexual assertiveness in both sexes.
Sexual arousal and masturbation in women produce small increases in testosterone concentrations. In men, the plasma levels of various steroids increase significantly after masturbation, and testosterone levels correlate with those levels.
Mammalian studies
Studies conducted in rats have indicated that their degree of sexual arousal is sensitive to reductions in testosterone. When testosterone-deprived rats were given medium levels of testosterone, their sexual behaviours (copulation, partner preference, etc.) resumed, but not when given low amounts of the same hormone. Therefore, these mammals may provide a model for studying clinical populations among humans with sexual arousal deficits such as hypoactive sexual desire disorder.
Every mammalian species examined demonstrated a marked increase in a male's testosterone level upon encountering a female. The reflexive testosterone increase in male mice is related to the male's initial level of sexual arousal.
In non-human primates, it may be that testosterone in puberty stimulates sexual arousal, which allows the primate to increasingly seek out sexual experiences with females and thus creates a sexual preference for females. Some research has also indicated that if testosterone is eliminated in an adult male human or other adult male primate's system, its sexual motivation decreases, but there is no corresponding decrease in ability to engage in sexual activity (mounting, ejaculating, etc.).
In accordance with sperm competition theory, testosterone levels are shown to increase as a response to previously neutral stimuli when conditioned to become sexual in male rats. This reaction engages penile reflexes (such as erection and ejaculation) that aid in sperm competition when more than one male is present in mating encounters, allowing for more production of successful sperm and a higher chance of reproduction.
Males
In men, higher levels of testosterone are associated with periods of sexual activity.
Men who watch a sexually explicit movie have an average increase of 35% in testosterone, peaking at 60–90 minutes after the end of the film, but no increase is seen in men who watch sexually neutral films. Men who watch sexually explicit films also report increased motivation and competitiveness, and decreased exhaustion. A link has also been found between relaxation following sexual arousal and testosterone levels.
Females
Androgens may modulate the physiology of vaginal tissue and contribute to female genital sexual arousal. Women's level of testosterone is higher when measured pre-intercourse vs. pre-cuddling, as well as post-intercourse vs. post-cuddling. There is a time-lag effect of administered testosterone on genital arousal in women. In addition, a continuous increase in vaginal sexual arousal may result in higher genital sensations and sexual appetitive behaviors.
When females have a higher baseline level of testosterone, they have higher increases in sexual arousal levels but smaller increases in testosterone, indicating a ceiling effect on testosterone levels in females. Sexual thoughts also change the level of testosterone but not the level of cortisol in the female body, and hormonal contraceptives may affect the variation in testosterone response to sexual thoughts.
Testosterone may prove to be an effective treatment in female sexual arousal disorders, and is available as a dermal patch. There is no FDA-approved androgen preparation for the treatment of androgen insufficiency; however, testosterone has been used off-label to treat low libido and sexual dysfunction in older women. Testosterone may be a treatment for postmenopausal women as long as they are effectively estrogenized.
Romantic relationships
Falling in love has been linked with decreases in men's testosterone levels while mixed changes are reported for women's testosterone levels. There has been speculation that these changes in testosterone result in the temporary reduction of differences in behavior between the sexes. However, the testosterone changes observed do not seem to be maintained as relationships develop over time.
Men who produce less testosterone are more likely to be in a relationship or married, and men who produce more testosterone are more likely to divorce. Marriage or commitment could cause a decrease in testosterone levels. Single men who have not had relationship experience have lower testosterone levels than single men with experience. It is suggested that these single men with prior experience are in a more competitive state than their non-experienced counterparts. Married men who engage in bond-maintenance activities such as spending the day with their spouse or child have no different testosterone levels compared to times when they do not engage in such activities. Collectively, these results suggest that the presence of competitive activities rather than bond-maintenance activities is more relevant to changes in testosterone levels.
Men who produce more testosterone are more likely to engage in extramarital sex. Testosterone levels do not rely on physical presence of a partner; testosterone levels of men engaging in same-city and long-distance relationships are similar. Physical presence may be required for women who are in relationships for the testosterone–partner interaction, where same-city partnered women have lower testosterone levels than long-distance partnered women.
Fatherhood
Fatherhood decreases testosterone levels in men, suggesting that the emotions and behaviour tied to paternal care decrease testosterone levels. In humans and other species that utilize allomaternal care, paternal investment in offspring is beneficial to said offspring's survival because it allows the two parents to raise multiple children simultaneously. This increases the reproductive fitness of the parents because their offspring are more likely to survive and reproduce. Paternal care increases offspring survival due to increased access to higher quality food and reduced physical and immunological threats. This is particularly beneficial for humans since offspring are dependent on parents for extended periods of time and mothers have relatively short inter-birth intervals.
While the extent of paternal care varies between cultures, higher investment in direct child care has been seen to be correlated with lower average testosterone levels as well as temporary fluctuations. For instance, fluctuation in testosterone levels when a child is in distress has been found to be indicative of fathering styles. If a father's testosterone levels decrease in response to hearing their baby cry, it is an indication of empathizing with the baby. This is associated with increased nurturing behavior and better outcomes for the infant.
Motivation
Testosterone levels play a major role in risk-taking during financial decisions. Higher testosterone levels in men reduce the risk of becoming or staying unemployed. Research has also found that heightened levels of testosterone and cortisol are associated with an increased risk of impulsive and violent criminal behavior. On the other hand, elevated testosterone in men may increase their generosity, primarily to attract a potential mate.
Aggression and criminality
Most studies support a link between adult criminality and testosterone, whereas nearly all studies of juvenile delinquency and testosterone find no significant association. Most studies have also found testosterone to be associated with behaviors or personality traits linked with antisocial behavior and alcoholism. Many studies have examined the relationship between testosterone and more general aggressive behavior and feelings; about half have found a relationship and about half no relationship. Studies have found that testosterone facilitates aggression by modulating vasopressin receptors in the hypothalamus.
There are two theories on the role of testosterone in aggression and competition. The first is the challenge hypothesis, which states that testosterone would increase during puberty, thus facilitating reproductive and competitive behavior, including aggression. It is therefore the challenge of competition among males that facilitates aggression and violence. Studies have found a direct correlation between testosterone and dominance, especially among the most violent criminals in prison, who had the highest testosterone levels. The same research found fathers (outside competitive environments) had the lowest testosterone levels compared to other males.
The second theory is similar and is known as the "evolutionary neuroandrogenic (ENA) theory of male aggression". Testosterone and other androgens have evolved to masculinize a brain to be competitive, even to the point of risking harm to the person and others. By doing so, individuals with masculinized brains as a result of prenatal and adult-life testosterone and androgens enhance their resource-acquiring abilities in order to survive, attract and copulate with mates as much as possible. The masculinization of the brain is not just mediated by testosterone levels at the adult stage, but also by testosterone exposure in the womb. Higher prenatal testosterone, indicated by a low digit ratio, as well as higher adult testosterone levels, increased the risk of fouls or aggression among male players in a soccer game. Studies have also found higher prenatal testosterone or lower digit ratio to be correlated with higher aggression.
The rise in testosterone during competition predicted aggression in males but not in females. Subjects who interacted with handguns and an experimental game showed a rise in testosterone and aggression. Natural selection might have evolved males to be more sensitive to competitive and status-challenge situations, with the interacting roles of testosterone being an essential ingredient for aggressive behaviour in these situations. Testosterone mediates attraction to cruel and violent cues in men by promoting extended viewing of violent stimuli. Testosterone-specific structural brain characteristics can predict aggressive behaviour in individuals.
A report in the Annals of the New York Academy of Sciences found anabolic steroid use (which increases testosterone levels) to be higher in teenagers, and this was associated with increased violence. Studies have found administered testosterone to increase verbal aggression and anger in some participants.
A few studies indicate that the testosterone derivative estradiol might play an important role in male aggression. Estradiol is known to correlate with aggression in male mice. Moreover, the conversion of testosterone to estradiol regulates male aggression in sparrows during breeding season. Rats who were given anabolic steroids that increase testosterone were also more physically aggressive to provocation as a result of "threat sensitivity".
The relationship between testosterone and aggression may also function indirectly, as it has been proposed that testosterone does not amplify tendencies towards aggression but rather amplifies whatever tendencies will allow an individual to maintain social status when challenged. In most animals, aggression is the means of maintaining social status. However, humans have multiple ways of obtaining status. This could explain why some studies find a link between testosterone and pro-social behaviour, if pro-social behaviour is rewarded with social status. Thus the link between testosterone and aggression and violence is due to these being rewarded with social status. The relationship may also be one of a "permissive effect", whereby testosterone does elevate aggression levels, but only in the sense of allowing average aggression levels to be maintained; chemically or physically castrating the individual will reduce aggression levels (though not eliminate them), but the individual needs only a small fraction of pre-castration testosterone for aggression levels to return to normal, where they remain even if additional testosterone is added. Testosterone may also simply exaggerate or amplify existing aggression; for example, chimpanzees who receive testosterone increases become more aggressive towards chimps lower than them in the social hierarchy but will still be submissive to chimps higher than them. Testosterone thus does not make the chimpanzee indiscriminately aggressive, but instead amplifies his pre-existing aggression towards lower-ranked chimps.
In humans, testosterone appears more to promote status-seeking and social dominance than simply increasing physical aggression. When controlling for the effects of belief in having received testosterone, women who have received testosterone make fairer offers than women who have not received testosterone.
Fairness
Testosterone might encourage fair behavior. For one study, subjects took part in a behavioral experiment where the distribution of a real amount of money was decided. The rules allowed both fair and unfair offers. The negotiating partner could subsequently accept or decline the offer. The fairer the offer, the less probable a refusal by the negotiating partner. If no agreement was reached, neither party earned anything. Test subjects with an artificially enhanced testosterone level generally made better, fairer offers than those who received placebos, thus reducing the risk of a rejection of their offer to a minimum. Two later studies have empirically confirmed these results. However, men with high testosterone were 27% less generous in an ultimatum game.
Biological activity
Free testosterone
Lipophilic hormones (soluble in lipids but not in water), such as steroid hormones, including testosterone, are transported in water-based blood plasma through specific and non-specific proteins. Specific proteins include sex hormone-binding globulin (SHBG), which binds testosterone, dihydrotestosterone, estradiol, and other sex steroids. Non-specific binding proteins include albumin. The part of the total hormone concentration that is not bound to its respective specific carrier protein is the free part. As a result, testosterone which is not bound to SHBG is called free testosterone. Only the free amount of testosterone can bind to an androgen receptor, which means it has biological activity. While most circulating testosterone is bound to SHBG or, more weakly, to albumin, only a small fraction (1%–2%) is unbound; because the binding of testosterone to albumin is weak and easily reversed, both albumin-bound and unbound testosterone are considered to be bioavailable testosterone. This binding plays an important role in regulating the transport, tissue delivery, bioactivity, and metabolism of testosterone. At the tissue level, testosterone dissociates from albumin and quickly diffuses into the tissues. The percentage of testosterone bound to SHBG is lower in men than in women. Both the free fraction and the one bound to albumin are available at the tissue level (their sum constitutes the bioavailable testosterone), while SHBG effectively and irreversibly inhibits the action of testosterone. The relationship between sex steroids and SHBG in physiological and pathological conditions is complex, as various factors may influence the levels of plasma SHBG, affecting the bioavailability of testosterone.
Steroid hormone activity
The effects of testosterone in humans and other vertebrates occur by way of multiple mechanisms: by activation of the androgen receptor (directly or as dihydrotestosterone), and by conversion to estradiol and activation of certain estrogen receptors. Androgens such as testosterone have also been found to bind to and activate membrane androgen receptors.
Free testosterone (T) is transported into the cytoplasm of target tissue cells, where it can bind to the androgen receptor, or can be reduced to 5α-dihydrotestosterone (5α-DHT) by the cytoplasmic enzyme 5α-reductase. 5α-DHT binds to the same androgen receptor even more strongly than testosterone, so that its androgenic potency is about 5 times that of T. The T-receptor or DHT-receptor complex undergoes a structural change that allows it to move into the cell nucleus and bind directly to specific nucleotide sequences of the chromosomal DNA. The areas of binding are called hormone response elements (HREs), and influence transcriptional activity of certain genes, producing the androgen effects.
Androgen receptors occur in many different vertebrate body system tissues, and both males and females respond similarly to similar levels. Greatly differing amounts of testosterone prenatally, at puberty, and throughout life account for a share of biological differences between males and females.
The bones and the brain are two important tissues in humans where the primary effect of testosterone is by way of aromatization to estradiol. In the bones, estradiol accelerates ossification of cartilage into bone, leading to closure of the epiphyses and conclusion of growth. In the central nervous system, testosterone is aromatized to estradiol. Estradiol rather than testosterone serves as the most important feedback signal to the hypothalamus (especially affecting LH secretion). In many mammals, prenatal or perinatal "masculinization" of the sexually dimorphic areas of the brain by estradiol derived from testosterone programs later male sexual behavior.
Neurosteroid activity
Testosterone, via its active metabolite 3α-androstanediol, is a potent positive allosteric modulator of the GABAA receptor.
Testosterone has been found to act as an antagonist of the TrkA and p75NTR, receptors for the neurotrophin nerve growth factor (NGF), with high affinity (around 5 nM). In contrast to testosterone, DHEA and DHEA sulfate have been found to act as high-affinity agonists of these receptors.
Testosterone is an antagonist of the sigma-1 receptor (Ki = 1,014 or 201 nM). However, the concentrations of testosterone required for binding the receptor are far above even total circulating concentrations of testosterone in adult males (which range between 10 and 35 nM).
Biochemistry
Biosynthesis
Like other steroid hormones, testosterone is derived from cholesterol. The first step in the biosynthesis involves the oxidative cleavage of the side-chain of cholesterol by cholesterol side-chain cleavage enzyme (P450scc, CYP11A1), a mitochondrial cytochrome P450 oxidase, with the loss of six carbon atoms to give pregnenolone. In the next step, two additional carbon atoms are removed by the CYP17A1 (17α-hydroxylase/17,20-lyase) enzyme in the endoplasmic reticulum to yield a variety of C19 steroids. In addition, the 3β-hydroxyl group is oxidized by 3β-hydroxysteroid dehydrogenase to produce androstenedione. In the final and rate-limiting step, the C17 keto group of androstenedione is reduced by 17β-hydroxysteroid dehydrogenase to yield testosterone.
The largest amounts of testosterone (>95%) are produced by the testes in men, while the adrenal glands account for most of the remainder. Testosterone is also synthesized in far smaller total quantities in women by the adrenal glands, thecal cells of the ovaries, and, during pregnancy, by the placenta. In the testes, testosterone is produced by the Leydig cells. The male generative glands also contain Sertoli cells, which require testosterone for spermatogenesis. Like most hormones, testosterone is supplied to target tissues in the blood where much of it is transported bound to a specific plasma protein, sex hormone-binding globulin (SHBG).
Regulation
In males, testosterone is synthesized primarily in Leydig cells. The number of Leydig cells in turn is regulated by luteinizing hormone (LH) and follicle-stimulating hormone (FSH). In addition, the amount of testosterone produced by existing Leydig cells is under the control of LH, which regulates the expression of 17β-hydroxysteroid dehydrogenase.
The amount of testosterone synthesized is regulated by the hypothalamic–pituitary–testicular axis. When testosterone levels are low, gonadotropin-releasing hormone (GnRH) is released by the hypothalamus, which in turn stimulates the pituitary gland to release FSH and LH. These latter two hormones stimulate the testis to synthesize testosterone. Finally, increasing levels of testosterone through a negative feedback loop act on the hypothalamus and pituitary to inhibit the release of GnRH and FSH/LH, respectively.
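The qualitative behaviour of this negative feedback loop can be sketched with a deliberately simplified difference-equation model. Every rate constant and the half-maximal suppression level t_half below are illustrative assumptions chosen only to show the loop settling to a steady state; they are not physiological values.

```python
# Toy model of the hypothalamic-pituitary-testicular negative feedback loop.
# All rate constants and t_half are illustrative assumptions, not measured
# physiological parameters; units are arbitrary.

def simulate(steps=2000, dt=0.01, t_half=5.0):
    gnrh, lh, testosterone = 1.0, 1.0, 0.0
    for _ in range(steps):
        feedback = 1.0 / (1.0 + testosterone / t_half)        # high T suppresses GnRH
        gnrh += dt * (1.0 * feedback - 0.5 * gnrh)            # hypothalamus
        lh += dt * (1.0 * gnrh - 0.5 * lh)                    # pituitary responds to GnRH
        testosterone += dt * (1.0 * lh - 0.2 * testosterone)  # Leydig cells respond to LH
    return gnrh, lh, testosterone

print("steady state (GnRH, LH, T):", tuple(round(x, 2) for x in simulate()))
```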
Factors affecting testosterone levels may include:
Age: Testosterone levels gradually reduce as men age. This effect is sometimes referred to as andropause or late-onset hypogonadism.
Exercise: Resistance training increases testosterone levels acutely; in older men, however, that increase can be avoided by protein ingestion. Endurance training in men may lead to lower testosterone levels.
Nutrients: Vitamin A deficiency may lead to sub-optimal plasma testosterone levels. The secosteroid vitamin D in levels of 400–1000 IU/d (10–25 μg/d) raises testosterone levels. Zinc deficiency lowers testosterone levels but over-supplementation has no effect on serum testosterone. There is limited evidence that low-fat diets may reduce total and free testosterone levels in men.
Weight loss: Reduction in weight may result in an increase in testosterone levels. Fat cells synthesize the enzyme aromatase, which converts testosterone, the male sex hormone, into estradiol, the female sex hormone. However, no clear association between body mass index and testosterone levels has been found.
Sleep: REM sleep increases nocturnal testosterone levels.
Behavior: Dominance challenges can, in some cases, stimulate increased testosterone release in men.
Foods: Natural or man-made antiandrogens including spearmint tea reduce testosterone levels. Licorice can decrease the production of testosterone and this effect is greater in females.
Distribution
The plasma protein binding of testosterone is 98.0 to 98.5%, with 1.5 to 2.0% free or unbound. It is bound 65% to sex hormone-binding globulin (SHBG) and 33% bound weakly to albumin.
Metabolism
Both testosterone and 5α-DHT are metabolized mainly in the liver. Approximately 50% of testosterone is metabolized via conjugation into testosterone glucuronide and to a lesser extent testosterone sulfate by glucuronosyltransferases and sulfotransferases, respectively. An additional 40% of testosterone is metabolized in equal proportions into the 17-ketosteroids androsterone and etiocholanolone via the combined actions of 5α- and 5β-reductases, 3α-hydroxysteroid dehydrogenase, and 17β-HSD, in that order. Androsterone and etiocholanolone are then glucuronidated and to a lesser extent sulfated similarly to testosterone. The conjugates of testosterone and its hepatic metabolites are released from the liver into circulation and excreted in the urine and bile. Only a small fraction (2%) of testosterone is excreted unchanged in the urine.
In the hepatic 17-ketosteroid pathway of testosterone metabolism, testosterone is converted in the liver by 5α-reductase and 5β-reductase into 5α-DHT and the inactive 5β-DHT, respectively. Then, 5α-DHT and 5β-DHT are converted by 3α-HSD into 3α-androstanediol and 3α-etiocholanediol, respectively. Subsequently, 3α-androstanediol and 3α-etiocholanediol are converted by 17β-HSD into androsterone and etiocholanolone, which is followed by their conjugation and excretion. 3β-Androstanediol and 3β-etiocholanediol can also be formed in this pathway when 5α-DHT and 5β-DHT are acted upon by 3β-HSD instead of 3α-HSD, respectively, and they can then be transformed into epiandrosterone and epietiocholanolone, respectively. A small portion of approximately 3% of testosterone is reversibly converted in the liver into androstenedione by 17β-HSD.
In addition to conjugation and the 17-ketosteroid pathway, testosterone can also be hydroxylated and oxidized in the liver by cytochrome P450 enzymes, including CYP3A4, CYP3A5, CYP2C9, CYP2C19, and CYP2D6. 6β-Hydroxylation and to a lesser extent 16β-hydroxylation are the major transformations. The 6β-hydroxylation of testosterone is catalyzed mainly by CYP3A4 and to a lesser extent CYP3A5 and is responsible for 75 to 80% of cytochrome P450-mediated testosterone metabolism. In addition to 6β- and 16β-hydroxytestosterone, 1β-, 2α/β-, 11β-, and 15β-hydroxytestosterone are also formed as minor metabolites. Certain cytochrome P450 enzymes such as CYP2C9 and CYP2C19 can also oxidize testosterone at the C17 position to form androstenedione.
Two of the immediate metabolites of testosterone, 5α-DHT and estradiol, are biologically important and can be formed both in the liver and in extrahepatic tissues. Approximately 5 to 7% of testosterone is converted by 5α-reductase into 5α-DHT, with circulating levels of 5α-DHT about 10% of those of testosterone, and approximately 0.3% of testosterone is converted into estradiol by aromatase. 5α-Reductase is highly expressed in the male reproductive organs (including the prostate gland, seminal vesicles, and epididymides), skin, hair follicles, and brain and aromatase is highly expressed in adipose tissue, bone, and the brain. As much as 90% of testosterone is converted into 5α-DHT in so-called androgenic tissues with high 5α-reductase expression, and due to the several-fold greater potency of 5α-DHT as an AR agonist relative to testosterone, it has been estimated that the effects of testosterone are potentiated 2- to 3-fold in such tissues.
Levels
Total levels of testosterone in the body have been reported as 264 to 916 ng/dL (nanograms per deciliter) in non-obese European and American men age 19 to 39 years, while mean testosterone levels in adult men have been reported as 630 ng/dL. Although commonly used as a reference range, some physicians have disputed the use of this range to determine hypogonadism. Several professional medical groups have recommended that 350 ng/dL generally be considered the minimum normal level, which is consistent with previous findings. Levels of testosterone in men decline with age. In women, mean levels of total testosterone have been reported to be 32.6 ng/dL. In women with hyperandrogenism, mean levels of total testosterone have been reported to be 62.1 ng/dL.
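For comparison with laboratory reports that use SI units, these ng/dL figures can be converted to nmol/L. A minimal conversion sketch, assuming a molar mass for testosterone of about 288.4 g/mol (so 1 ng/dL ≈ 0.0347 nmol/L):

```python
# Convert the testosterone reference values quoted above from ng/dL to nmol/L,
# using an approximate molar mass of 288.4 g/mol for testosterone.

MOLAR_MASS_G_PER_MOL = 288.4

def ng_dl_to_nmol_l(value_ng_dl: float) -> float:
    nanograms_per_litre = value_ng_dl * 10.0           # 1 dL = 0.1 L
    return nanograms_per_litre / MOLAR_MASS_G_PER_MOL  # ng/L over g/mol gives nmol/L

for label, value in [("lower male reference", 264), ("upper male reference", 916),
                     ("adult male mean", 630), ("adult female mean", 32.6)]:
    print(f"{label}: {value} ng/dL = {ng_dl_to_nmol_l(value):.1f} nmol/L")
```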
Measurement
In measurements of testosterone in blood samples, different assay techniques can yield different results. Immunofluorescence assays exhibit considerable variability in quantifying testosterone concentrations in blood samples because of cross-reaction with structurally similar steroids, leading to overestimated results. In contrast, the liquid chromatography/tandem mass spectrometry method offers superior specificity and precision, making it a more suitable choice for this application.
Testosterone's bioavailable concentration is commonly determined using the Vermeulen calculation or more precisely using the modified Vermeulen method, which considers the dimeric form of sex hormone-binding globulin.
Both methods use chemical equilibrium to derive the concentration of bioavailable testosterone: in circulation, testosterone has two major binding partners, albumin (weakly bound) and sex hormone-binding globulin (strongly bound). These methods are described in detail in the accompanying figure.
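A minimal sketch of the original (non-modified) Vermeulen calculation follows. The association constants (roughly 1.0 × 10⁹ L/mol for SHBG and 3.6 × 10⁴ L/mol for albumin) and the default albumin concentration of 4.3 g/dL are the commonly quoted assumptions for that method; the modified calculation, which models the dimeric form of SHBG, is not reproduced here.

```python
import math

# Sketch of the original Vermeulen calculation of free testosterone.
# K_SHBG, K_ALB and the default albumin value are assumed constants commonly
# quoted for this method; the dimeric-SHBG refinement is omitted.

K_SHBG = 1.0e9                  # association constant of SHBG for testosterone, L/mol
K_ALB = 3.6e4                   # association constant of albumin for testosterone, L/mol
ALBUMIN_MOLAR_MASS = 66_500.0   # g/mol, approximate

def free_testosterone(total_t_nmol_l, shbg_nmol_l, albumin_g_dl=4.3):
    """Return free testosterone in nmol/L, given total T and SHBG in nmol/L."""
    tt = total_t_nmol_l * 1e-9                       # mol/L
    shbg = shbg_nmol_l * 1e-9                        # mol/L
    alb = albumin_g_dl * 10.0 / ALBUMIN_MOLAR_MASS   # g/dL -> g/L -> mol/L
    n = 1.0 + K_ALB * alb
    a = n * K_SHBG
    b = n + K_SHBG * (shbg - tt)
    ft = (-b + math.sqrt(b * b + 4.0 * a * tt)) / (2.0 * a)
    return ft * 1e9                                  # back to nmol/L

ft = free_testosterone(20.0, 40.0)   # hypothetical sample: total T 20 nmol/L, SHBG 40 nmol/L
print(f"free T ~ {ft:.2f} nmol/L ({100 * ft / 20.0:.1f}% of total)")
```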
Distribution
Testosterone has been detected at variably higher and lower levels among men of various nations and from various backgrounds; explanations for the causes of this variation have been relatively diverse.
People from nations of the Eurasian Steppe and Central Asia, such as Mongolia, Kyrgyzstan and Uzbekistan, have consistently been detected to have had significantly elevated levels of testosterone, while people from Central European and Baltic nations such as the Czech Republic, Slovakia, Latvia and Estonia have been found to have had significantly decreased levels of testosterone.
The region with the highest tested levels of testosterone is Chita, Russia; the people group with the highest tested levels of testosterone were the Yakuts.
History and production
A testicular action was linked to circulating blood fractions – now understood to be a family of androgenic hormones – in the early work on castration and testicular transplantation in fowl by Arnold Adolph Berthold (1803–1861). Research on the action of testosterone received a brief boost in 1889, when the Harvard professor Charles-Édouard Brown-Séquard (1817–1894), then in Paris, self-injected subcutaneously a "rejuvenating elixir" consisting of an extract of dog and guinea pig testicle. He reported in The Lancet that his vigor and feeling of well-being were markedly restored but the effects were transient, and Brown-Séquard's hopes for the compound were dashed. Suffering the ridicule of his colleagues, he abandoned his work on the mechanisms and effects of androgens in human beings.
In 1927, the University of Chicago's Professor of Physiologic Chemistry, Fred C. Koch, established easy access to a large source of bovine testicles – the Chicago stockyards – and recruited students willing to endure the tedious work of extracting their isolates. In that year, Koch and his student, Lemuel McGee, derived 20 mg of a substance from a supply of 40 pounds of bovine testicles that, when administered to castrated roosters, pigs and rats, re-masculinized them. The group of Ernst Laqueur at the University of Amsterdam purified testosterone from bovine testicles in a similar manner in 1934, but the isolation of the hormone from animal tissues in amounts permitting serious study in humans was not feasible until three European pharmaceutical giants – Schering (Berlin, Germany), Organon (Oss, Netherlands) and Ciba – began full-scale steroid research and development programs in the 1930s.
The Organon group in the Netherlands were the first to isolate the hormone, identified in a May 1935 paper "On Crystalline Male Hormone from Testicles (Testosterone)". They named the hormone testosterone, from the stems of testicle and sterol, and the suffix of ketone. The structure was worked out by Schering's Adolf Butenandt, at the Chemisches Institut of Technical University in Gdańsk.
The chemical synthesis of testosterone from cholesterol was achieved in August that year by Butenandt and Hanisch. Only a week later, the Ciba group in Zurich, Leopold Ruzicka (1887–1976) and A. Wettstein, published their synthesis of testosterone. These independent partial syntheses of testosterone from a cholesterol base earned both Butenandt and Ruzicka the joint 1939 Nobel Prize in Chemistry. Testosterone was identified as 17β-hydroxyandrost-4-en-3-one (C19H28O2), a solid polycyclic alcohol with a hydroxyl group at the 17th carbon atom. This also made it obvious that additional modifications on the synthesized testosterone could be made, i.e., esterification and alkylation.
The partial synthesis in the 1930s of abundant, potent testosterone esters permitted the characterization of the hormone's effects, so that Kochakian and Murlin (1936) were able to show that testosterone raised nitrogen retention (a mechanism central to anabolism) in the dog, after which Allan Kenyon's group was able to demonstrate both anabolic and androgenic effects of testosterone propionate in eunuchoidal men, boys, and women. The period of the early 1930s to the 1950s has been called "The Golden Age of Steroid Chemistry", and work during this period progressed quickly.
Like other androsteroids, testosterone is manufactured industrially from microbial fermentation of plant cholesterol (e.g., from soybean oil). In the early 2000s, the steroid market weighed around one million tonnes and was worth $10 billion, making it the 2nd largest biopharmaceutical market behind antibiotics.
Other species
Testosterone is observed in most vertebrates. Testosterone and the classical nuclear androgen receptor first appeared in gnathostomes (jawed vertebrates). Agnathans (jawless vertebrates) such as lampreys do not produce testosterone but instead use androstenedione as a male sex hormone. Fish make a slightly different form called 11-ketotestosterone. Its counterpart in insects is ecdysone. The presence of these ubiquitous steroids in a wide range of animals suggests that sex hormones have an ancient evolutionary history.
| Biology and health sciences | Biochemistry and molecular biology | null |
30990 | https://en.wikipedia.org/wiki/Thermocouple | Thermocouple | A thermocouple, also known as a "thermoelectrical thermometer", is an electrical device consisting of two dissimilar electrical conductors forming an electrical junction. A thermocouple produces a temperature-dependent voltage as a result of the Seebeck effect, and this voltage can be interpreted to measure temperature. Thermocouples are widely used as temperature sensors.
Commercial thermocouples are inexpensive and interchangeable, are supplied with standard connectors, and can measure a wide range of temperatures. In contrast to most other methods of temperature measurement, thermocouples are self-powered and require no external form of excitation. The main limitation with thermocouples is accuracy; system errors of less than one degree Celsius (°C) can be difficult to achieve.
Thermocouples are widely used in science and industry. Applications include temperature measurement for kilns, gas turbine exhaust, diesel engines, and other industrial processes. Thermocouples are also used in homes, offices and businesses as the temperature sensors in thermostats, and also as flame sensors in safety devices for gas-powered appliances.
Principle of operation
In 1821, the German physicist Thomas Johann Seebeck discovered that a magnetic needle held near a circuit made up of two dissimilar metals was deflected when one of the dissimilar-metal junctions was heated. At the time, Seebeck referred to this consequence as thermo-magnetism. The magnetic field he observed was later shown to be due to thermo-electric current. In practical use, the voltage generated at a single junction of two different types of wire is what is of interest, as this can be used to measure temperature at very high and low temperatures. The magnitude of the voltage depends on the types of wire being used. Generally, the voltage is in the microvolt range and care must be taken to obtain a usable measurement. Although very little current flows, power can be generated by a single thermocouple junction. Power generation using multiple thermocouples, as in a thermopile, is common.
The standard configuration of a thermocouple is shown in the figure. The dissimilar conductors contact at the measuring (aka hot) junction and at the reference (aka cold) junction. The thermocouple is connected to the electrical system at its reference junction. The figure shows the measuring junction on the left, the reference junction in the middle and represents the rest of the electrical system as a voltage meter on the right.
The temperature Tsense is obtained from the characteristic function E(T) for the type of thermocouple, using two inputs: the measured voltage V and the reference-junction temperature Tref. Solving the equation E(Tsense) = V + E(Tref) for Tsense yields the measured temperature. Sometimes these details are hidden inside a device that packages the reference junction block (with its Tref thermometer), the voltmeter, and the equation solver.
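As an illustration of this calculation, the sketch below inverts a small, rounded type K emf table by linear interpolation. The table entries are approximate values included only for illustration (a real instrument would use the full NIST or IEC 60584 reference function), and the measured voltage and reference temperature are made-up example numbers.

```python
from bisect import bisect_left

# Rounded, illustrative type K reference points (degC -> mV); not the full
# standard reference table.
TABLE_T = [0.0, 100.0, 200.0, 300.0, 400.0, 500.0]
TABLE_E = [0.0, 4.10, 8.14, 12.21, 16.40, 20.64]

def emf(t_c):
    """Characteristic function E(T) in mV, by linear interpolation."""
    i = max(1, min(bisect_left(TABLE_T, t_c), len(TABLE_T) - 1))
    t0, t1, e0, e1 = TABLE_T[i - 1], TABLE_T[i], TABLE_E[i - 1], TABLE_E[i]
    return e0 + (e1 - e0) * (t_c - t0) / (t1 - t0)

def temperature(e_mv):
    """Inverse of emf(), also by linear interpolation."""
    i = max(1, min(bisect_left(TABLE_E, e_mv), len(TABLE_E) - 1))
    e0, e1, t0, t1 = TABLE_E[i - 1], TABLE_E[i], TABLE_T[i - 1], TABLE_T[i]
    return t0 + (t1 - t0) * (e_mv - e0) / (e1 - e0)

v_measured = 8.00   # mV, hypothetical voltmeter reading
t_ref = 25.0        # degC, hypothetical reference-junction temperature
t_sense = temperature(v_measured + emf(t_ref))   # solve E(Tsense) = V + E(Tref)
print(f"T_sense ~ {t_sense:.1f} degC")
```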
Seebeck effect
The Seebeck effect refers to the development of an electromotive force across two points of an electrically conducting material when there is a temperature difference between those two points.
Under open-circuit conditions, where there is no internal current flow, the gradient of voltage (∇V) is directly proportional to the gradient in temperature (∇T):

∇V = −S(T) ∇T,

where S is a temperature-dependent material property known as the Seebeck coefficient.
The standard measurement configuration shown in the figure shows four temperature regions and thus four voltage contributions:
Change from Tmeter to Tref, in the lower copper wire.
Change from Tref to Tsense, in the alumel wire.
Change from Tsense to Tref, in the chromel wire.
Change from Tref to Tmeter, in the upper copper wire.
The first and fourth contributions cancel out exactly, because these regions involve the same temperature change and an identical material.
As a result, Tmeter does not influence the measured voltage.
The second and third contributions do not cancel, as they involve different materials.
The measured voltage turns out to be

V = ∫_{Tref}^{Tsense} (S+(T) − S−(T)) dT,

where S+ and S− are the Seebeck coefficients of the conductors attached to the positive and negative terminals of the voltmeter, respectively (chromel and alumel in the figure).
Characteristic function
The thermocouple's behaviour is captured by a characteristic function E(T), which needs only to be consulted at two arguments:

V = E(Tsense) − E(Tref).

In terms of the Seebeck coefficients, the characteristic function is defined by

E(T) = ∫ (S+(T) − S−(T)) dT.

The constant of integration in this indefinite integral has no significance, but is conventionally chosen such that E(0 °C) = 0.
Thermocouple manufacturers and metrology standards organizations such as NIST provide tables of the function E(T), measured and interpolated over a range of temperatures, for particular thermocouple types (see External links section for access to these tables).
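The construction of such a table can be illustrated numerically. The sketch below integrates the difference of two made-up, temperature-dependent Seebeck coefficients with the trapezoidal rule, using the convention E(0 °C) = 0; the coefficient functions are placeholders, not the data of any standard thermocouple type.

```python
# Build E(T) as the integral from 0 degC to T of (S_plus - S_minus) dT'.
# The two Seebeck-coefficient functions are made-up smooth curves used only
# to illustrate the construction.

def s_plus(t_c):    # positive leg, microvolts per degC (illustrative)
    return 25.0 + 0.010 * t_c

def s_minus(t_c):   # negative leg, microvolts per degC (illustrative)
    return -16.0 + 0.002 * t_c

def characteristic(t_c, steps=1000):
    """E(T) in microvolts, with E(0 degC) = 0."""
    total, dt = 0.0, t_c / steps
    for i in range(steps):
        t0, t1 = i * dt, (i + 1) * dt
        total += 0.5 * ((s_plus(t0) - s_minus(t0)) +
                        (s_plus(t1) - s_minus(t1))) * dt   # trapezoidal rule
    return total

for t in (100, 300, 500):
    print(f"E({t} degC) ~ {characteristic(t) / 1000:.3f} mV")
```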
Reference junction
To obtain the desired measurement of Tsense, it is not sufficient just to measure V.
The temperature at the reference junctions must also be known.
Two strategies are often used here:
"Ice bath": The reference junction block is maintained at a known temperature as it is immersed in a semi-frozen bath of distilled water at atmospheric pressure. The precise temperature of the melting point phase transition acts as a natural thermostat, fixing to 0 °C.
Reference junction sensor (known as "cold junction compensation"): The reference junction block is allowed to vary in temperature, but the temperature is measured at this block using a separate temperature sensor. This secondary measurement is used to compensate for temperature variation at the junction block. The thermocouple junction is often exposed to extreme environments, while the reference junction is often mounted near the instrument's location. Semiconductor thermometer devices are often used in modern thermocouple instruments.
In both cases the value of E(Tsense) is calculated as V + E(Tref); the function E(T) is then searched for an argument that matches this value. The argument where this match occurs is the value of Tsense:

Tsense = E⁻¹(V + E(Tref)).
Practical concerns
Thermocouples ideally should be very simple measurement devices, with each type being characterized by a precise E(T) curve, independent of any other details.
In reality, thermocouples are affected by issues such as alloy manufacturing uncertainties, aging effects, and circuit design mistakes/misunderstandings.
Circuit construction
A common error in thermocouple construction is related to cold junction compensation. If an error is made in the estimation of Tref, an error will appear in the temperature measurement. For the simplest measurements, thermocouple wires are connected to copper far away from the hot or cold point whose temperature is measured; this reference junction is then assumed to be at room temperature, but that temperature can vary. Because of the nonlinearity in the thermocouple voltage curve, the errors in Tref and Tsense are generally unequal. Some thermocouples, such as type B, have a relatively flat voltage curve near room temperature, meaning that a large uncertainty in a room-temperature Tref translates into only a small error in Tsense.
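Because V = E(Tsense) − E(Tref), a small error ΔTref in the estimated reference temperature propagates into the reported temperature roughly as ΔTsense ≈ [S(Tref) / S(Tsense)] · ΔTref, the ratio of the local sensitivities. The sketch below works this out with rough, illustrative sensitivity values for a type K-like and a type B-like case.

```python
# Propagate a reference-junction temperature error into the reported T_sense.
# The sensitivities (Seebeck coefficients) used below are rough illustrative
# values, not standard data.

def tsense_error(dtref_c, s_at_tref_uv, s_at_tsense_uv):
    """Approximate error in T_sense (degC) caused by an error dtref_c in T_ref."""
    return dtref_c * s_at_tref_uv / s_at_tsense_uv

# Type K-like: roughly 40 uV/degC at both junctions -> the error carries over ~1:1.
print(f"type K-like: {tsense_error(2.0, 40.0, 41.0):.2f} degC")
# Type B-like: nearly flat near room temperature vs ~10 uV/degC at the hot junction.
print(f"type B-like: {tsense_error(2.0, 0.3, 10.0):.2f} degC")
```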
Junctions should be made in a reliable manner, but there are many possible approaches to accomplish this.
For low temperatures, junctions can be brazed or soldered; however, it may be difficult to find a suitable flux and this may not be suitable at the sensing junction due to the solder's low melting point.
Reference and extension junctions are therefore usually made with screw terminal blocks.
For high temperatures, the most common approach is the spot weld or crimp using a durable material.
One common myth regarding thermocouples is that junctions must be made cleanly without involving a third metal, to avoid unwanted added EMFs.
This may result from another common misunderstanding that the voltage is generated at the junction. In fact, the junctions should in principle have uniform internal temperature; therefore, no voltage is generated at the junction. The voltage is generated in the thermal gradient, along the wire.
A thermocouple produces small signals, often microvolts in magnitude. Precise measurements of this signal require an amplifier with low input offset voltage and with care taken to avoid thermal EMFs from self-heating within the voltmeter itself. If the thermocouple wire has a high resistance for some reason (poor contact at junctions, or very thin wires used for fast thermal response), the measuring instrument should have high input impedance to prevent an offset in the measured voltage. A useful feature in thermocouple instrumentation will simultaneously measure resistance and detect faulty connections in the wiring or at thermocouple junctions.
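The input-impedance requirement can be made concrete with the usual voltage-divider estimate: the meter reads V_emf · R_in / (R_in + R_loop), where R_loop is the resistance of the thermocouple circuit. The resistances and signal level below are hypothetical example values.

```python
# Estimate the signal loss caused by finite meter input impedance when the
# thermocouple loop has appreciable resistance. All values are hypothetical.

def measured_fraction(r_input_ohm, r_loop_ohm):
    """Fraction of the open-circuit thermocouple emf actually seen by the meter."""
    return r_input_ohm / (r_input_ohm + r_loop_ohm)

emf_uv = 1000.0                 # 1 mV thermocouple signal (illustrative)
r_loop = 100.0                  # loop resistance, e.g. thin wires (illustrative)
for r_in in (10e3, 1e6, 10e6):  # meter input impedance, ohms
    frac = measured_fraction(r_in, r_loop)
    print(f"R_in = {r_in:>10.0f} ohm -> reads {emf_uv * frac:7.2f} uV "
          f"(error {(1 - frac) * 100:.3f}%)")
```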
Metallurgical grades
While a thermocouple wire type is often described by its chemical composition, the actual aim is to produce a pair of wires that follow a standardized E(T) curve.
Impurities affect each batch of metal differently, producing variable Seebeck coefficients.
To match the standard behaviour, thermocouple wire manufacturers will deliberately mix in additional impurities to "dope" the alloy, compensating for uncontrolled variations in source material.
As a result, there are standard and specialized grades of thermocouple wire, depending on the level of precision demanded in the thermocouple behaviour.
Precision grades may only be available in matched pairs, where one wire is modified to compensate for deficiencies in the other wire.
A special case of thermocouple wire is known as "extension grade", designed to carry the thermoelectric circuit over a longer distance.
Extension wires follow the stated curve but for various reasons they are not designed to be used in extreme environments and so they cannot be used at the sensing junction in some applications.
For example, an extension wire may be in a different form, such as highly flexible with stranded construction and plastic insulation, or be part of a multi-wire cable for carrying many thermocouple circuits.
With expensive noble metal thermocouples, the extension wires may even be made of a completely different, cheaper material that mimics the standard type over a reduced temperature range.
Aging
Thermocouples are often used at high temperatures and in reactive furnace atmospheres. In this case, the practical lifetime is limited by thermocouple aging. The thermoelectric coefficients of the wires in a thermocouple that is used to measure very high temperatures may change with time, and the measurement voltage accordingly drops. The simple relationship between the temperature difference of the junctions and the measurement voltage is only correct if each wire is homogeneous (uniform in composition). As thermocouples age in a process, their conductors can lose homogeneity due to chemical and metallurgical changes caused by extreme or prolonged exposure to high temperatures. If the aged section of the thermocouple circuit is exposed to a temperature gradient, the measured voltage will differ, resulting in error.
Aged thermocouples are only partly modified; for example, being unaffected in the parts outside the furnace. For this reason, aged thermocouples cannot be taken out of their installed location and recalibrated in a bath or test furnace to determine error. This also explains why error can sometimes be observed when an aged thermocouple is pulled partly out of a furnace—as the sensor is pulled back, aged sections may see exposure to increased temperature gradients from hot to cold as the aged section now passes through the cooler refractory area, contributing significant error to the measurement. Likewise, an aged thermocouple that is pushed deeper into the furnace might sometimes provide a more accurate reading if being pushed further into the furnace causes the temperature gradient to occur only in a fresh section.
Types
Certain combinations of alloys have become popular as industry standards. Selection of the combination is driven by cost, availability, convenience, melting point, chemical properties, stability, and output. Different types are best suited for different applications. They are usually selected on the basis of the temperature range and sensitivity needed. Thermocouples with low sensitivities (B, R, and S types) have correspondingly lower resolutions. Other selection criteria include the chemical inertness of the thermocouple material and whether it is magnetic or not. Standard thermocouple types are listed below with the positive electrode (assuming Tsense > Tref) first, followed by the negative electrode.
Nickel-alloy thermocouples
Type E
Type E (chromel–constantan) has a high output (68 μV/°C), which makes it well suited to cryogenic use. Additionally, it is non-magnetic.
The wide range is −270 °C to +740 °C and the narrow range is −110 °C to +140 °C.
Type J
Type J (iron–constantan) has a more restricted range (−40 °C to +750 °C) than type K but a higher sensitivity of about 50 μV/°C. The Curie point of the iron (770 °C) causes a smooth change in the characteristic, which determines the upper temperature limit. Note that the European/German Type L is a variant of type J, with a different specification for the EMF output (reference DIN 43712:1985-01).
The positive wire is made of hard iron, while the negative wire consists of softer copper-nickel. Due to its iron content, the J-type is slightly heavier and the positive wire is magnetic. It is highly vulnerable to corrosion in reducing atmospheres, which can lead to significant degradation of the thermocouple's performance.
Type K
Type K (chromel–alumel) is the most common general-purpose thermocouple with a sensitivity of approximately 41 μV/°C. It is inexpensive, and a wide variety of probes are available in its −200 °C to +1350 °C (−330 °F to +2460 °F) range. Type K was specified at a time when metallurgy was less advanced than it is today, and consequently characteristics may vary considerably between samples. One of the constituent metals, nickel, is magnetic; a characteristic of thermocouples made with magnetic material is that they undergo a deviation in output when the material reaches its Curie point, which occurs for type K thermocouples at around 185 °C.
They operate very well in oxidizing atmospheres. If, however, a mostly reducing atmosphere (such as hydrogen with a small amount of oxygen) comes into contact with the wires, the chromium in the chromel alloy oxidizes. This reduces the emf output, and the thermocouple reads low. This phenomenon is known as green rot, due to the color of the affected alloy. Although not always distinctively green, the chromel wire will develop a mottled silvery skin and become magnetic. An easy way to check for this problem is to see whether the two wires are magnetic (normally, chromel is non-magnetic).
Hydrogen in the atmosphere is the usual cause of green rot. At high temperatures, it can diffuse through solid metals or an intact metal thermowell. Even a sheath of magnesium oxide insulating the thermocouple will not keep the hydrogen out.
Green rot does not occur in atmospheres sufficiently rich in oxygen, or oxygen-free. A sealed thermowell can be filled with inert gas, or an oxygen scavenger (e.g. a sacrificial titanium wire) can be added. Alternatively, additional oxygen can be introduced into the thermowell. Another option is using a different thermocouple type for the low-oxygen atmospheres where green rot can occur; a type N thermocouple is a suitable alternative.
Type M
Type M (82%Ni/18%Mo–99.2%Ni/0.8%Co, by weight) are used in vacuum furnaces for the same reasons as with type C (described below). Upper temperature is limited to 1400 °C. It is less commonly used than other types.
Type N
Type N (Nicrosil–Nisil) thermocouples are suitable for use between −270 °C and +1300 °C, owing to their stability and oxidation resistance. Sensitivity is about 39 μV/°C at 900 °C, slightly lower than that of type K.
Designed at the Defence Science and Technology Organisation (DSTO) of Australia, by Noel A. Burley, type-N thermocouples overcome the three principal characteristic types and causes of thermoelectric instability in the standard base-metal thermoelement materials:
A gradual and generally cumulative drift in thermal EMF on long exposure at elevated temperatures. This is observed in all base-metal thermoelement materials and is mainly due to compositional changes caused by oxidation, carburization, or neutron irradiation that can produce transmutation in nuclear reactor environments. In the case of type-K thermocouples, manganese and aluminium atoms from the KN (negative) wire migrate to the KP (positive) wire, resulting in a down-scale drift due to chemical contamination. This effect is cumulative and irreversible.
A short-term cyclic change in thermal EMF on heating in the temperature range about 250–650 °C, which occurs in thermocouples of types K, J, T, and E. This kind of EMF instability is associated with structural changes such as magnetic short-range order in the metallurgical composition.
A time-independent perturbation in thermal EMF in specific temperature ranges. This is due to composition-dependent magnetic transformations that perturb the thermal EMFs in type-K thermocouples in the range about 25–225 °C, and in type J above 730 °C.
The Nicrosil and Nisil thermocouple alloys show greatly enhanced thermoelectric stability relative to the other standard base-metal thermocouple alloys because their compositions substantially reduce the thermoelectric instabilities described above. This is achieved primarily by increasing component solute concentrations (chromium and silicon) in a base of nickel above those required to cause a transition from internal to external modes of oxidation, and by selecting solutes (silicon and magnesium) that preferentially oxidize to form a diffusion-barrier, and hence oxidation-inhibiting films.
Type N thermocouples are a suitable alternative to type K for low-oxygen conditions where type K is prone to green rot. They are suitable for use in vacuum, inert atmospheres, oxidizing atmospheres, or dry reducing atmospheres. They do not tolerate the presence of sulfur.
Type T
Type T (copper–constantan) thermocouples are suited for measurements in the −200 to 350 °C range. They are often used for differential measurements, since only copper wire touches the probes. Since both conductors are non-magnetic, there is no Curie point and thus no abrupt change in characteristics. Type-T thermocouples have a sensitivity of about 43 μV/°C. Note that copper has a much higher thermal conductivity than the alloys generally used in thermocouple construction, so it is necessary to exercise extra care when thermally anchoring type-T thermocouples. A similar composition is found in the obsolete Type U in the German specification DIN 43712:1985-01.
Platinum/rhodium-alloy thermocouples
Types B, R, and S thermocouples use platinum or a platinum/rhodium alloy for each conductor. These are among the most stable thermocouples, but have lower sensitivity than other types, approximately 10 μV/°C. Type B, R, and S thermocouples are usually used only for high-temperature measurements due to their high cost and low sensitivity. For type R and S thermocouples, HTX platinum wire can be used in place of the pure platinum leg to strengthen the thermocouple and prevent failures from grain growth that can occur in high temperature and harsh conditions.
Type B
Type B (70%Pt/30%Rh–94%Pt/6%Rh, by weight) thermocouples are suited for use at up to 1800 °C. Type-B thermocouples produce the same output at 0 °C and 42 °C, limiting their use below about 50 °C. The emf function has a minimum around 21 °C (at 21.020262 °C the emf is −2.584972 μV), meaning that cold-junction compensation is easily performed, since the compensation voltage is essentially a constant for a reference at typical room temperatures.
Type R
Type R (87%Pt/13%Rh–Pt, by weight) thermocouples are used from 0 to 1600 °C. Type R thermocouples are quite stable and capable of a long operating life when used in clean, favorable conditions. When used above 1100 °C (about 2000 °F), these thermocouples must be protected from exposure to metallic and non-metallic vapors. Type R is not suitable for direct insertion into metallic protecting tubes. Long-term high-temperature exposure causes grain growth, which can lead to mechanical failure, and negative calibration drift caused by rhodium diffusion into the pure-platinum leg as well as by rhodium volatilization. This type has the same uses as type S but is not interchangeable with it.
Type S
Type S (90%Pt/10%Rh–Pt, by weight) thermocouples, similar to type R, are used up to 1600 °C. Before the introduction of the International Temperature Scale of 1990 (ITS-90), precision type-S thermocouples were used as the practical standard thermometers for the range of 630 °C to 1064 °C, based on an interpolation between the freezing points of antimony, silver, and gold. Starting with ITS-90, platinum resistance thermometers have taken over this range as standard thermometers.
Tungsten/rhenium-alloy thermocouples
These thermocouples are well-suited for measuring extremely high temperatures. Typical uses are hydrogen and inert atmospheres, as well as vacuum furnaces. They are not used in oxidizing environments at high temperatures because of embrittlement. A typical range is 0 to 2315 °C, which can be extended to 2760 °C in inert atmosphere and to 3000 °C for brief measurements.
Pure tungsten at high temperatures undergoes recrystallization and becomes brittle. Therefore, types C and D are preferred over type G in some applications.
In the presence of water vapor at high temperature, tungsten reacts to form tungsten(VI) oxide, which volatilizes away, and hydrogen. The hydrogen then reacts with the tungsten oxide, forming water again. Such a "water cycle" can lead to erosion of the thermocouple and eventual failure. In high-temperature vacuum applications, it is therefore desirable to avoid the presence of traces of water.
An alternative to tungsten/rhenium is tungsten/molybdenum, but its voltage–temperature response is weaker and has a minimum at around 1000 K.
The thermocouple temperature is also limited by the other materials used. For example, beryllium oxide, a popular material for high-temperature applications, tends to gain conductivity with temperature; in one sensor configuration, the insulation resistance dropped from a megaohm at 1000 K to 200 ohms at 2200 K. At high temperatures, the materials undergo chemical reactions: at 2700 K, beryllium oxide reacts slightly with tungsten, tungsten–rhenium alloy, and tantalum; at 2600 K, molybdenum reacts with BeO, while tungsten does not. BeO begins melting at about 2820 K, and magnesium oxide at about 3020 K.
Type C
Type C (95%W/5%Re–74%W/26%Re, by weight) thermocouples measure temperatures up to a maximum of 2329 °C.
Type D
(97%W/3%Re–75%W/25%Re, by weight)
Type G
(W–74%W/26%Re, by weight)
Others
Chromel–gold/iron-alloy thermocouples
In these thermocouples (chromel–gold/iron alloy), the negative wire is gold with a small fraction (0.03–0.15 atom percent) of iron. The impure gold wire gives the thermocouple a high sensitivity at low temperatures (compared to other thermocouples at that temperature), whereas the chromel wire maintains the sensitivity near room temperature. It can be used for cryogenic applications (1.2–300 K and even up to 600 K). Both the sensitivity and the temperature range depend on the iron concentration. The sensitivity is typically around 15 μV/K at low temperatures, and the lowest usable temperature varies between 1.2 and 4.2 K.
Type P (noble-metal alloy) or "Platinel II"
Type P (55%Pd/31%Pt/14%Au–65%Au/35%Pd, by weight) thermocouples give a thermoelectric voltage that mimics type K over the range 500 °C to 1400 °C; however, they are constructed purely of noble metals and so show enhanced corrosion resistance. This combination is also known as Platinel II.
Platinum/molybdenum-alloy thermocouples
Thermocouples of platinum/molybdenum-alloy (95%Pt/5%Mo–99.9%Pt/0.1%Mo, by weight) are sometimes used in nuclear reactors, since they show a low drift from nuclear transmutation induced by neutron irradiation, compared to the platinum/rhodium-alloy types.
Iridium/rhodium alloy thermocouples
The use of two wires of iridium/rhodium alloys can provide a thermocouple that can be used up to about 2000 °C in inert atmospheres.
Pure noble-metal thermocouples Au–Pt, Pt–Pd
Thermocouples made from two different, high-purity noble metals can show high accuracy even when uncalibrated, as well as low levels of drift. Two combinations in use are gold–platinum and platinum–palladium. Their main limitations are the low melting points of the metals involved (1064 °C for gold and 1555 °C for palladium). These thermocouples tend to be more accurate than type S, and due to their economy and simplicity are even regarded as competitive alternatives to the platinum resistance thermometers that are normally used as standard thermometers.
HTIR-TC (High Temperature Irradiation Resistant) thermocouples
HTIR-TC thermocouples are designed for measuring high-temperature processes. They are durable and reliable at high temperatures (up to at least 1700 °C), resistant to irradiation, moderately priced, available in a variety of configurations adaptable to each application, and easily installed. Originally developed for use in nuclear test reactors, HTIR-TC may enhance the safety of operations in future reactors. This thermocouple was developed by researchers at the Idaho National Laboratory (INL).
Comparison of types
The table below describes properties of several different thermocouple types. Within the tolerance columns, T represents the temperature of the hot junction, in degrees Celsius. For example, a thermocouple with a tolerance of ±0.0025×T would have a tolerance of ±2.5 °C at 1000 °C.
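A minimal sketch of the tolerance arithmetic described above; the fixed and proportional terms used here are illustrative placeholders rather than values taken from any particular standard:

```python
def tolerance_band(t_hot_c, fixed_c=1.5, proportional=0.0025):
    """Return the +/- tolerance in degrees Celsius at a given hot-junction temperature.

    Many tolerance classes are specified as the greater of a fixed value and a
    value proportional to T; both numbers here are illustrative placeholders.
    """
    return max(fixed_c, proportional * abs(t_hot_c))

# Example from the text: a +/-0.0025*T tolerance gives +/-2.5 degC at 1000 degC.
print(tolerance_band(1000.0))  # 2.5
```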
Each cell in the Color Code columns depicts the end of a thermocouple cable, showing the jacket color and the color of the individual leads. The background color represents the color of the connector body.
Thermocouple insulation
Wires insulation
The wires that make up the thermocouple must be insulated from each other everywhere, except at the sensing junction. Any additional electrical contact between the wires, or contact of a wire to other conductive objects, can modify the voltage and give a false reading of temperature.
Plastics are suitable insulators for the low-temperature parts of a thermocouple, whereas ceramic insulation can be used up to around 1000 °C. Other concerns (abrasion and chemical resistance) also affect the suitability of materials.
When wire insulation disintegrates, it can result in an unintended electrical contact at a different location from the desired sensing point. If such a damaged thermocouple is used in the closed loop control of a thermostat or other temperature controller, this can lead to a runaway overheating event and possibly severe damage, as the false temperature reading will typically be lower than the sensing junction temperature. Failed insulation will also typically outgas, which can lead to process contamination. For parts of thermocouples used at very high temperatures or in contamination-sensitive applications, the only suitable insulation may be vacuum or inert gas; the mechanical rigidity of the thermocouple wires is used to keep them separated.
Table of insulation materials
Temperature ratings for insulations may vary based on what the overall thermocouple construction cable consists of.
Note: T300 is a new high-temperature material that was recently approved by UL for 300 °C operating temperatures.
Applications
Thermocouples are suitable for measuring over a large temperature range, from −270 up to 3000 °C (for a short time, in inert atmosphere). Applications include temperature measurement for kilns, gas turbine exhaust, diesel engines, other industrial processes and fog machines. They are less suitable for applications where smaller temperature differences need to be measured with high accuracy, for example the range 0–100 °C with 0.1 °C accuracy. For such applications thermistors, silicon bandgap temperature sensors and resistance thermometers are more suitable.
Steel industry
Type B, S, R and K thermocouples are used extensively in the steel and iron industries to monitor temperatures and chemistry throughout the steel making process. Disposable, immersible, type S thermocouples are regularly used in the electric arc furnace process to accurately measure the temperature of steel before tapping. The cooling curve of a small steel sample can be analyzed and used to estimate the carbon content of molten steel.
Gas appliance safety
Many gas-fed heating appliances such as ovens and water heaters make use of a pilot flame to ignite the main gas burner when required. If the pilot flame goes out, unburned gas may be released, which is an explosion risk and a health hazard. To prevent this, some appliances use a thermocouple in a fail-safe circuit to sense when the pilot light is burning. The tip of the thermocouple is placed in the pilot flame, generating a voltage which operates the supply valve which feeds gas to the pilot. So long as the pilot flame remains lit, the thermocouple remains hot, and the pilot gas valve is held open. If the pilot light goes out, the thermocouple temperature falls, causing the voltage across the thermocouple to drop and the valve to close.
Where the probe may be easily placed above the flame, a rectifying sensor may often be used instead. With part ceramic construction, they may also be known as flame rods, flame sensors or flame detection electrodes.
Some combined main-burner and pilot gas valves (mainly by Honeywell) reduce the power demand to within the range of a single universal thermocouple heated by a pilot (25 mV open circuit, typically falling by half with the coil connected to a 10–12 mV, 0.2–0.25 A source). They do this by sizing the coil so that it can hold the valve open against a light spring, but only after the initial turning-on force is provided by the user pressing and holding a knob to compress the spring while lighting the pilot. These systems are identifiable by the "press and hold for x minutes" in the pilot lighting instructions. (The holding current requirement of such a valve is much less than that of a larger solenoid designed to pull the valve in from a closed position.) Special test sets are made to confirm the valve let-go and holding currents, because an ordinary milliammeter cannot be used, as it introduces more resistance than the gas valve coil. Apart from testing the open-circuit voltage of the thermocouple and the near-short-circuit DC continuity through the thermocouple gas valve coil, the easiest non-specialist test is substitution of a known-good gas valve.
Some systems, known as millivolt control systems, extend the thermocouple concept to both open and close the main gas valve as well. Not only does the voltage created by the pilot thermocouple activate the pilot gas valve, it is also routed through a thermostat to power the main gas valve as well. Here, a larger voltage is needed than in a pilot flame safety system described above, and a thermopile is used rather than a single thermocouple. Such a system requires no external source of electricity for its operation and thus can operate during a power failure, provided that all the other related system components allow for this. This excludes common forced air furnaces because external electrical power is required to operate the blower motor, but this feature is especially useful for un-powered convection heaters. A similar gas shut-off safety mechanism using a thermocouple is sometimes employed to ensure that the main burner ignites within a certain time period, shutting off the main burner gas supply valve should that not happen.
Out of concern about energy wasted by the standing pilot flame, designers of many newer appliances have switched to an electronically controlled pilot-less ignition, also called intermittent ignition. With no standing pilot flame, there is no risk of gas buildup should the flame go out, so these appliances do not need thermocouple-based pilot safety switches. As these designs lose the benefit of operation without a continuous source of electricity, standing pilots are still used in some appliances. The exception is later model instantaneous (aka "tankless") water heaters that use the flow of water to generate the current required to ignite the gas burner; these designs also use a thermocouple as a safety cut-off device in the event the gas fails to ignite, or if the flame is extinguished.
Thermopile radiation sensors
Thermopiles are used for measuring the intensity of incident radiation, typically visible or infrared light, which heats the hot junctions, while the cold junctions are on a heat sink. It is possible to measure radiative intensities of only a few μW/cm2 with commercially available thermopile sensors. For example, some laser power meters are based on such sensors; these are specifically known as thermopile laser sensors.
The principle of operation of a thermopile sensor is distinct from that of a bolometer, as the latter relies on a change in resistance.
Manufacturing
Thermocouples can generally be used in the testing of prototype electrical and mechanical apparatus. For example, switchgear under test for its current carrying capacity may have thermocouples installed and monitored during a heat run test, to confirm that the temperature rise at rated current does not exceed designed limits.
Power production
A thermocouple can produce current to drive some processes directly, without the need for extra circuitry and power sources. For example, the power from a thermocouple can activate a valve when a temperature difference arises. The electrical energy generated by a thermocouple is converted from the heat which must be supplied to the hot side to maintain the electric potential. A continuous transfer of heat is necessary because the current flowing through the thermocouple tends to cause the hot side to cool down and the cold side to heat up (the Peltier effect).
Thermocouples can be connected in series to form a thermopile, where all the hot junctions are exposed to a higher temperature and all the cold junctions to a lower temperature. The output is the sum of the voltages across the individual junctions, giving a larger voltage and power output. Radioisotope thermoelectric generators use the radioactive decay of transuranic elements as a heat source and have been used to power spacecraft on missions too far from the Sun to use solar power.
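As a rough sketch of how the series connection scales the output, the following assumes an ideal, constant Seebeck coefficient per junction pair; the junction count and coefficient are illustrative values, not taken from the text:

```python
def thermopile_emf_mv(n_junctions, seebeck_uv_per_c, t_hot_c, t_cold_c):
    """Approximate open-circuit EMF of a thermopile, in millivolts.

    Assumes an ideal, constant Seebeck coefficient per junction pair; real
    devices require the full reference tables for the materials used.
    """
    return n_junctions * seebeck_uv_per_c * (t_hot_c - t_cold_c) / 1000.0

# e.g. 50 pairs of a ~40 uV/degC couple spanning a 20 degC temperature difference
print(thermopile_emf_mv(50, 40.0, 45.0, 25.0))  # 40.0 mV
```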
Thermopiles heated by kerosene lamps were used to run batteryless radio receivers in isolated areas. There are commercially produced lanterns that use the heat from a candle to run several light-emitting diodes, and thermoelectrically powered fans to improve air circulation and heat distribution in wood stoves.
Process plants
Chemical production and petroleum refineries will usually employ computers for logging and for limit testing the many temperatures associated with a process, typically numbering in the hundreds. For such cases, a number of thermocouple leads will be brought to a common reference block (a large block of copper) containing the second thermocouple of each circuit. The temperature of the block is in turn measured by a thermistor. Simple computations are used to determine the temperature at each measured location.
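These "simple computations" amount to software cold-junction compensation. The sketch below illustrates the idea; emf_from_temperature_mv and temperature_from_emf_c are hypothetical stand-ins for the published reference functions of whichever thermocouple type is actually installed:

```python
def emf_from_temperature_mv(t_c):
    # Placeholder for the standard reference function of the installed
    # thermocouple type; a crude ~41 uV/degC linear stand-in is used here
    # purely for illustration.
    return 0.041 * t_c

def temperature_from_emf_c(emf_mv):
    # Placeholder inverse of the reference function above.
    return emf_mv / 0.041

def compensated_temperature_c(emf_measured_mv, t_reference_block_c):
    """Software cold-junction compensation: add the EMF the reference junction
    would produce at the block temperature, then convert the total to a
    temperature."""
    emf_total = emf_measured_mv + emf_from_temperature_mv(t_reference_block_c)
    return temperature_from_emf_c(emf_total)

# Example: 8.2 mV measured against a copper reference block held at 35 degC
print(round(compensated_temperature_c(8.2, 35.0), 1))  # 235.0
```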
Thermocouple as vacuum gauge
A thermocouple can be used as a vacuum gauge over the range of approximately 0.001 to 1 torr absolute pressure. In this pressure range, the mean free path of the gas is comparable to the dimensions of the vacuum chamber, and the flow regime is neither purely viscous nor purely molecular. In this configuration, the thermocouple junction is attached to the centre of a short heating wire, which is usually energised by a constant current of about 5 mA, and the heat is removed at a rate related to the thermal conductivity of the gas.
The temperature detected at the thermocouple junction depends on the thermal conductivity of the surrounding gas, which depends on the pressure of the gas. The potential difference measured by a thermocouple is proportional to the square of pressure over the low- to medium-vacuum range. At higher (viscous flow) and lower (molecular flow) pressures, the thermal conductivity of air or any other gas is essentially independent of pressure. The thermocouple was first used as a vacuum gauge by Voege in 1906. The mathematical model for the thermocouple as a vacuum gauge is quite complicated, as explained in detail by Van Atta, but can be simplified to:
where P is the gas pressure, B is a constant that depends on the thermocouple temperature, the gas composition and the vacuum-chamber geometry, V0 is the thermocouple voltage at zero pressure (absolute), and V is the voltage indicated by the thermocouple.
The alternative is the Pirani gauge, which operates in a similar way, over approximately the same pressure range, but is only a 2-terminal device, sensing the change in resistance with temperature of a thin electrically heated wire, rather than using a thermocouple.
| Technology | Components | null |
30992 | https://en.wikipedia.org/wiki/Thermistor | Thermistor | A thermistor is a semiconductor type of resistor whose resistance is strongly dependent on temperature, more so than in standard resistors. The word thermistor is a portmanteau of thermal and resistor.
Thermistors are categorized based on their conduction models. Negative-temperature-coefficient (NTC) thermistors have less resistance at higher temperatures, while positive-temperature-coefficient (PTC) thermistors have more resistance at higher temperatures.
NTC thermistors are widely used as inrush-current limiters and temperature sensors, while PTC thermistors are used as self-resetting overcurrent protectors and self-regulating heating elements. An operational temperature range of a thermistor is dependent on the probe type and is typically between .
Types
Depending on materials used, thermistors are classified into two types:
With NTC thermistors, resistance decreases as temperature rises; usually because electrons are bumped up by thermal agitation from the valence band to the conduction band. An NTC is commonly used as a temperature sensor, or in series with a circuit as an inrush current limiter.
With PTC thermistors, resistance increases as temperature rises; usually because of increased thermal lattice agitations, particularly those of impurities and imperfections. PTC thermistors are commonly installed in series with a circuit, and used to protect against overcurrent conditions, as resettable fuses.
Thermistors are generally produced using powdered metal oxides. With vastly improved formulas and techniques over the past 20 years, NTC thermistors can now achieve accuracies over wide temperature ranges such as ±0.1 °C or ±0.2 °C from 0 °C to 70 °C with excellent long-term stability. NTC thermistor elements come in many styles such as axial-leaded glass-encapsulated (DO-35, DO-34 and DO-41 diodes), glass-coated chips, epoxy-coated with bare or insulated lead wire and surface-mount, as well as thin film versions. The typical operating temperature range of a thermistor is −55 °C to +150 °C, though some glass-body thermistors have a maximal operating temperature of +300 °C.
Thermistors differ from resistance temperature detectors (RTDs) in that the material used in a thermistor is generally a ceramic or polymer, while RTDs use pure metals. The temperature response is also different; RTDs are useful over larger temperature ranges, while thermistors typically achieve a greater precision within a limited temperature range, typically −90 °C to 130 °C.
Basic operation
Assuming, as a first-order approximation, that the relationship between resistance and temperature is linear, then

$$\Delta R = k \, \Delta T,$$

where
$\Delta R$ = change in resistance,
$\Delta T$ = change in temperature,
$k$ = first-order temperature coefficient of resistance.

Depending on the type of thermistor in question, $k$ may be either positive or negative.

If $k$ is positive, the resistance increases with increasing temperature, and the device is called a positive-temperature-coefficient (PTC) thermistor, or posistor. There are two types of PTC resistors: the switching thermistor and the silistor. If $k$ is negative, the resistance decreases with increasing temperature, and the device is called a negative-temperature-coefficient (NTC) thermistor. Resistors that are not thermistors are designed to have a $k$ as close to 0 as possible, so that their resistance remains nearly constant over a wide temperature range.
Instead of the temperature coefficient $k$, sometimes the temperature coefficient of resistance $\alpha_T$ ("alpha sub T") is used. It is defined as

$$\alpha_T = \frac{1}{R(T)} \frac{dR}{dT}.$$
This coefficient should not be confused with the parameter below.
Construction and materials
Thermistors are typically built using metal oxides. They are typically pressed into a bead, disk, or cylindrical shape and then encapsulated with an impermeable material such as epoxy or glass.
NTC thermistors are manufactured from oxides of the iron group of metals: e.g. chromium (CrO, Cr2O3), manganese (e.g. MnO), cobalt (CoO), iron (iron oxides), and nickel (NiO, Ni2O3). These oxides form a ceramic body with terminals composed of conductive metals such as silver, nickel, and tin.
PTCs are usually prepared from barium (Ba), strontium, or lead titanates (e.g. PbTiO3).
Steinhart–Hart equation
In practical devices, the linear approximation model (above) is accurate only over a limited temperature range. Over wider temperature ranges, a more complex resistance–temperature transfer function provides a more faithful characterization of the performance. The Steinhart–Hart equation is a widely used third-order approximation:

$$\frac{1}{T} = a + b \ln R + c \, (\ln R)^3,$$

where a, b and c are called the Steinhart–Hart parameters and must be specified for each device. T is the absolute temperature, and R is the resistance. The equation is not dimensionally correct, since a change in the units of R results in an equation with a different form, containing a $(\ln R)^2$ term. In practice, the equation gives good numerical results for resistances expressed in ohms or kΩ, but the coefficients a, b, and c must be stated with reference to the unit. To give resistance as a function of temperature, the above cubic equation in $\ln R$ can be solved, the real root of which is given by

$$R = \exp\!\left( \sqrt[3]{y - \frac{x}{2}} - \sqrt[3]{y + \frac{x}{2}} \right),$$

where

$$x = \frac{1}{c} \left( a - \frac{1}{T} \right), \qquad y = \sqrt{ \left( \frac{b}{3c} \right)^{3} + \frac{x^{2}}{4} }.$$
The error in the Steinhart–Hart equation is generally less than 0.02 °C in the measurement of temperature over a 200 °C range. As an example, typical values for a thermistor with a resistance of 3 kΩ at room temperature (25 °C = 298.15 K, R in Ω) are:
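A minimal sketch of evaluating the equation in software for such a device; the coefficients a, b, and c below are hypothetical placeholders rather than fitted values for any real thermistor:

```python
import math

# Hypothetical Steinhart-Hart coefficients (R in ohms), for illustration only;
# real values are fitted from at least three calibration points of the device.
a = 1.4e-3
b = 2.4e-4
c = 1.0e-7

def temperature_k(resistance_ohm):
    """1/T = a + b*ln(R) + c*ln(R)**3, with T in kelvins and R in ohms."""
    ln_r = math.log(resistance_ohm)
    return 1.0 / (a + b * ln_r + c * ln_r ** 3)

print(round(temperature_k(3000.0) - 273.15, 1))  # roughly room temperature
```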
B or β parameter equation
NTC thermistors can also be characterised with the B (or β) parameter equation, which is essentially the Steinhart–Hart equation with $a = \frac{1}{T_0} - \frac{1}{B} \ln R_0$, $b = \frac{1}{B}$, and $c = 0$:

$$\frac{1}{T} = \frac{1}{T_0} + \frac{1}{B} \ln \frac{R}{R_0},$$

where the temperatures and the B parameter are in kelvins, and R0 is the resistance of the thermistor at temperature T0 (25 °C = 298.15 K). Solving for R yields

$$R = R_0 \, e^{B \left( \frac{1}{T} - \frac{1}{T_0} \right)}$$

or, alternatively,

$$R = r_\infty \, e^{B/T},$$

where $r_\infty = R_0 \, e^{-B/T_0}$.

This can be solved for the temperature:

$$T = \frac{B}{\ln \left( R / r_\infty \right)}.$$

The B-parameter equation can also be written as $\ln R = \frac{B}{T} + \ln r_\infty$. This can be used to convert the function of resistance vs. temperature of a thermistor into a linear function of $\ln R$ vs. $\frac{1}{T}$. The average slope of this function will then yield an estimate of the value of the B parameter.
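The following sketch applies the B-parameter relations above, including estimating B as the average slope of ln R versus 1/T between two calibration points; R0, T0, and the calibration data are illustrative assumptions:

```python
import math

T0 = 298.15        # 25 degC reference temperature, in kelvins
R0 = 10_000.0      # hypothetical resistance at T0, in ohms

def estimate_b(r1_ohm, t1_k, r2_ohm, t2_k):
    """Average slope of ln(R) versus 1/T between two calibration points."""
    return math.log(r1_ohm / r2_ohm) / (1.0 / t1_k - 1.0 / t2_k)

def resistance_ohm(t_k, b):
    """R = R0 * exp(B * (1/T - 1/T0))."""
    return R0 * math.exp(b * (1.0 / t_k - 1.0 / T0))

def temperature_k(r_ohm, b):
    """Invert 1/T = 1/T0 + (1/B) * ln(R/R0)."""
    return 1.0 / (1.0 / T0 + math.log(r_ohm / R0) / b)

# Two hypothetical calibration points: 25 kOhm at 5 degC and 10 kOhm at 25 degC
b = estimate_b(25_000.0, 273.15 + 5.0, 10_000.0, T0)
print(round(b))                                      # estimated B, in kelvins
print(round(temperature_k(5_000.0, b) - 273.15, 1))  # hotter than 25 degC, as expected for an NTC
```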
Conduction model
NTC (negative temperature coefficient)
Many NTC thermistors are made from a pressed disc, rod, plate, bead or cast chip of semiconducting material such as sintered metal oxides. They work because raising the temperature of a semiconductor increases the number of active charge carriers by promoting them into the conduction band. The more charge carriers that are available, the more current a material can conduct. In certain materials like ferric oxide (Fe2O3) with titanium (Ti) doping an n-type semiconductor is formed and the charge carriers are electrons. In materials such as nickel oxide (NiO) with lithium (Li) doping a p-type semiconductor is created, where holes are the charge carriers.
This is described in the formula

$$I = n \cdot A \cdot v \cdot e,$$

where
$I$ = electric current (amperes),
$n$ = density of charge carriers (count/m3),
$A$ = cross-sectional area of the material (m2),
$v$ = drift velocity of electrons (m/s),
$e$ = charge of an electron ($1.602 \times 10^{-19}$ coulomb).
Over large changes in temperature, calibration is necessary. Over small changes in temperature, if the right semiconductor is used, the resistance of the material is linearly proportional to the temperature. There are many different semiconducting thermistors with a range from about 0.01 kelvin to 2,000 kelvins (−273.14 °C to 1,700 °C).
The IEC standard symbol for a NTC thermistor includes a "−t°" under the rectangle.
PTC (positive temperature coefficient)
Most PTC thermistors are made from doped polycrystalline ceramic (containing barium titanate (BaTiO3) and other compounds) which have the property that their resistance rises suddenly at a certain critical temperature. Barium titanate is ferroelectric and its dielectric constant varies with temperature. Below the Curie point temperature, the high dielectric constant prevents the formation of potential barriers between the crystal grains, leading to a low resistance. In this region the device has a small negative temperature coefficient. At the Curie point temperature, the dielectric constant drops sufficiently to allow the formation of potential barriers at the grain boundaries, and the resistance increases sharply with temperature. At even higher temperatures, the material reverts to NTC behaviour.
Another type of thermistor is a silistor (a thermally sensitive silicon resistor). Silistors employ silicon as the semiconductive component material. Unlike ceramic PTC thermistors, silistors have an almost linear resistance-temperature characteristic. Silicon PTC thermistors have a much smaller drift than an NTC thermistor. They are stable devices which are hermetically sealed in an axial leaded glass encapsulated package.
Barium titanate thermistors can be used as self-controlled heaters; for a given voltage, the ceramic will heat to a certain temperature, but the power used will depend on the heat loss from the ceramic.
The dynamics of PTC thermistors being powered lends to a wide range of applications. When first connected to a voltage source, a large current corresponding to the low, cold, resistance flows, but as the thermistor self-heats, the current is reduced until a limiting current (and corresponding peak device temperature) is reached. The current-limiting effect can replace fuses. In the degaussing circuits of many CRT monitors and televisions an appropriately chosen thermistor is connected in series with the degaussing coil. This results in a smooth current decrease for an improved degaussing effect. Some of these degaussing circuits have auxiliary heating elements to heat the thermistor (and reduce the resulting current) further.
Another type of PTC thermistor is the polymer PTC, which is sold under brand names such as "Polyswitch", "Semifuse", and "Multifuse". This consists of plastic with carbon grains embedded in it. When the plastic is cool, the carbon grains are all in contact with each other, forming a conductive path through the device. When the plastic heats up, it expands, forcing the carbon grains apart, and causing the resistance of the device to rise, which then causes increased heating and rapid resistance increase. Like the BaTiO3 thermistor, this device has a highly nonlinear resistance/temperature response useful for thermal or circuit control, not for temperature measurement. Besides circuit elements used to limit current, self-limiting heaters can be made in the form of wires or strips, useful for heat tracing. PTC thermistors "latch" into a hot/high-resistance state: once hot, they stay in that high-resistance state until cooled.
The effect can be used as a primitive latch/memory circuit, the effect being enhanced by using two PTC thermistors in series, with one thermistor cool, and the other thermistor hot.
The IEC standard symbol for a PTC thermistor includes a "+t°" under the rectangle.
Self-heating effects
When a current flows through a thermistor, it generates heat, which raises the temperature of the thermistor above that of its environment. If the thermistor is being used to measure the temperature of the environment, this electrical heating may introduce a significant error (an observer effect) if a correction is not made. Alternatively, this effect itself can be exploited. It can, for example, make a sensitive air-flow device employed in a sailplane rate-of-climb instrument, the electronic variometer, or serve as a timer for a relay as was formerly done in telephone exchanges.
The electrical power input to the thermistor is just

$$P_E = IV,$$

where I is the current and V is the voltage drop across the thermistor. This power is converted to heat, and this heat energy is transferred to the surrounding environment. The rate of transfer is well described by Newton's law of cooling:

$$P_T = K \left( T(R) - T_0 \right),$$

where T(R) is the temperature of the thermistor as a function of its resistance R, $T_0$ is the temperature of the surroundings, and K is the dissipation constant, usually expressed in units of milliwatts per degree Celsius. At equilibrium, the two rates must be equal:

$$P_E = P_T.$$

The current and voltage across the thermistor depend on the particular circuit configuration. As a simple example, if the voltage across the thermistor is held fixed, then by Ohm's law we have $I = V/R$, and the equilibrium equation can be solved for the ambient temperature as a function of the measured resistance of the thermistor:

$$T_0 = T(R) - \frac{V^2}{K R}.$$
The dissipation constant is a measure of the thermal connection of the thermistor to its surroundings. It is generally given for the thermistor in still air and in well-stirred oil. Typical values for a small glass-bead thermistor are 1.5 mW/°C in still air and 6.0 mW/°C in stirred oil. If the temperature of the environment is known beforehand, then a thermistor may be used to measure the value of the dissipation constant. For example, the thermistor may be used as a flow-rate sensor, since the dissipation constant increases with the rate of flow of a fluid past the thermistor.
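A short sketch of removing the self-heating error for the fixed-voltage case above, using the typical dissipation constants just quoted; the applied voltage and measured values are illustrative:

```python
def ambient_temperature_c(t_thermistor_c, v_volts, r_ohm, k_mw_per_c):
    """T_ambient = T(R) - V^2/(K*R), from the equilibrium condition above.

    k_mw_per_c is the dissipation constant in mW/degC (e.g. about 1.5 in still
    air or 6.0 in stirred oil for a small glass-bead thermistor).
    """
    power_mw = (v_volts ** 2 / r_ohm) * 1000.0
    return t_thermistor_c - power_mw / k_mw_per_c

# 1 V across a 10 kOhm reading in still air: only about 0.07 degC of self-heating
print(round(ambient_temperature_c(25.00, 1.0, 10_000.0, 1.5), 2))  # 24.93
```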
The power dissipated in a thermistor is typically maintained at a very low level to ensure insignificant temperature measurement error due to self-heating. However, some thermistor applications depend upon significant "self-heating" to raise the body temperature of the thermistor well above the ambient temperature, so the sensor then detects even subtle changes in the thermal conductivity of the environment. Some of these applications include liquid-level detection, liquid-flow measurement and air-flow measurement.
Applications
PTC
As current-limiting devices for circuit protection, as replacements for fuses. Current through the device causes a small amount of resistive heating. If the current is large enough to generate heat more quickly than the device can lose it to its surroundings, the device heats up, causing its resistance to increase. This creates a self-reinforcing effect that drives the resistance upwards, therefore limiting the current.
As timers in the degaussing coil circuit of most CRT displays. When the display unit is initially switched on, current flows through the thermistor and degaussing coil. The coil and thermistor are intentionally sized so that the current flow will heat the thermistor to the point that the degaussing coil shuts off in under a second. For effective degaussing, it is necessary that the magnitude of the alternating magnetic field produced by the degaussing coil decreases smoothly and continuously, rather than sharply switching off or decreasing in steps; the PTC thermistor accomplishes this naturally as it heats up. A degaussing circuit using a PTC thermistor is simple, reliable (for its simplicity), and inexpensive.
As heaters, in the automotive industry, to provide cabin heating (in addition to heating provided by a heat pump or the waste heat of an internal combustion engine), or to heat diesel fuel in cold conditions before engine injection.
In temperature-compensated voltage-controlled oscillators in synthesizers.
In lithium battery protection circuits.
In an electrically actuated wax motor to provide the heat necessary to expand the wax.
Many electric motors and dry type power transformers incorporate PTC thermistors in their windings. When used in conjunction with a monitoring relay they provide overtemperature protection to prevent insulation damage. The equipment manufacturer selects a thermistor with a highly non-linear response curve where resistance increases dramatically at the maximum allowable winding temperature, causing the relay to operate.
To prevent thermal runaway in electronic circuits. Many electronic devices, for example bipolar transistors, draw more power as they get hotter. Commonly, such circuits contain ordinary resistors to limit the current available and prevent the device from overheating. However, in some applications, PTC thermistors allow better performance than resistors.
To prevent current hogging in electronic circuits. Current hogging can occur when electronic devices are connected in parallel. In severe cases, current hogging can cause cascading failure of all the devices. A PTC thermistor attached in series with each device can assure the current is divided reasonably evenly between the devices.
In crystal oscillators for temperature compensation, medical equipment temperature control, and industrial automation, silicon PTC thermistors display a nearly linear positive temperature coefficient (0.7%/°C). A linearization resistor can be added if further linearization is needed.
NTC
As a thermometer for low-temperature measurements of the order of 10 K.
As an inrush current limiter device in power supply circuits, they present a higher resistance initially, which prevents large currents from flowing at turn-on, and then heat up and become much lower resistance to allow higher current flow during normal operation. These thermistors are usually much larger than measuring type thermistors, and are purposely designed for this application.
As sensors in automotive applications to monitor fluid temperatures like the engine coolant, cabin air, external air or engine oil temperature, and feed the relative readings to control units like the ECU and to the dashboard.
To monitor the temperature of an incubator.
Thermistors are also commonly used in modern digital thermostats and to monitor the temperature of battery packs while charging.
Thermistors are often used in the hot ends of 3D printers; they monitor the heat produced and allow the printer's control circuitry to keep a constant temperature for melting the plastic filament.
In the food handling and processing industry, especially for food storage systems and food preparation. Maintaining the correct temperature is critical to prevent foodborne illness.
Throughout the consumer appliance industry for measuring temperature. Toasters, coffee makers, refrigerators, freezers, hair dryers, etc. all rely on thermistors for proper temperature control.
NTC thermistors come in bare and lugged forms; the former is for point sensing to achieve high accuracy at specific points, such as a laser diode die.
For measurement of temperature profile inside the sealed cavity of a convective (thermal) inertial sensor.
Thermistor probe assemblies protect the sensing element in harsh environments. The thermistor sensing element can be packaged into a variety of enclosures for use in industries such as HVAC/R, building automation, pool/spa, energy, and industrial electronics. Enclosures can be made of stainless steel, aluminum, copper, brass, or plastic, and configurations include threaded (NPT, etc.), flanged (with mounting holes for ease of installation), and straight (flat tip, pointed tip, radius tip, etc.). Thermistor probe assemblies are rugged and highly customizable to fit the needs of the application. Probe assemblies have gained in popularity over the years as improvements in research, engineering, and manufacturing techniques have been made.
UL-recognized NTC thermistors in the XGPU2 category help save equipment manufacturers time and money when applying for safety approvals for their end product. DO-35 hermetically sealed, glass-encapsulated thermistors can operate up to 250 °C, which gives them an advantage in many applications where UL recognition is requested for the sensing element.
History
The first NTC thermistor was discovered in 1833 by Michael Faraday, who reported on the semiconducting behavior of silver sulfide. Faraday noticed that the resistance of silver sulfide decreased dramatically as temperature increased. (This was also the first documented observation of a semiconducting material.)
Because early thermistors were difficult to produce and applications for the technology were limited, commercial production of thermistors did not begin until the 1930s. A commercially viable thermistor was invented by Samuel Ruben in 1930.
| Technology | Components | null |
30993 | https://en.wikipedia.org/wiki/Thermometer | Thermometer | A thermometer is a device that measures temperature (the hotness or coldness of an object) or temperature gradient (the rates of change of temperature in space). A thermometer has two important elements: (1) a temperature sensor (e.g. the bulb of a mercury-in-glass thermometer or the pyrometric sensor in an infrared thermometer) in which some change occurs with a change in temperature; and (2) some means of converting this change into a numerical value (e.g. the visible scale that is marked on a mercury-in-glass thermometer or the digital readout on an infrared model). Thermometers are widely used in technology and industry to monitor processes, in meteorology, in medicine (medical thermometer), and in scientific research.
A standard scale
While an individual thermometer is able to measure degrees of hotness, the readings on two thermometers cannot be compared unless they conform to an agreed scale. Today there is an absolute thermodynamic temperature scale. Internationally agreed temperature scales are designed to approximate this closely, based on fixed points and interpolating thermometers. The most recent official temperature scale is the International Temperature Scale of 1990. It extends from 0.65 K (−272.5 °C) to approximately 1,358 K (1,085 °C).
History
Sparse and conflicting historical records make it difficult to pinpoint the invention of the thermometer to any single person or date with certitude. In addition, given the many parallel developments in the thermometer's history and its many gradual improvements over time, the instrument is best viewed not as a single invention, but an evolving technology.
Ancient developments
Early pneumatic devices and ideas from antiquity provided inspiration for the thermometer's invention during the Renaissance period.
Philo of Byzantium
In the 3rd century BC, Philo of Byzantium documented his experiment with a tube submerged in a container of liquid on one end and connected to an air-tight, hollow sphere on the other. When air in the sphere is heated with a candle or by exposing it to the sun, expanding air exits the sphere and generates bubbles in the vessel. As air in the sphere cools, a partial vacuum is created, sucking liquid up into the tube. Any changes in the position of the liquid will now indicate whether the air in the sphere is getting hotter or colder.
Translations of Philo's experiment from the original ancient Greek were utilized by Robert Fludd sometime around 1617 and used as the basis for his air thermometer.
Hero of Alexandria
In his book, Pneumatics, Hero of Alexandria (10–70 AD) provides a recipe for building a "Fountain which trickles by the Action of the Sun's Rays," a more elaborate version of Philo's pneumatic experiment but which worked on the same principle of heating and cooling air to move water around. Translations of the ancient work Pneumatics were introduced to late 16th century Italy and studied by many, including Galileo Galilei, who had read it by 1594.
First temperature scale with a fixed point
The Roman Greek physician Galen is given credit for introducing two concepts important to the development of a scale of temperature and the eventual invention of the thermometer. First, he had the idea that hotness or coldness may be measured by "degrees of hot and cold." He also conceived of a fixed reference temperature, a mixture of equal amounts of ice and boiling water, with four degrees of heat above this point and four degrees of cold below. 16th century physician Johann Hasler developed body temperature scales based on Galen's theory of degrees to help him mix the appropriate amount of medicine for patients.
Late Renaissance developments
Thermoscope
In the late 16th and early 17th centuries, several European scientists, notably Galileo Galilei and Italian physiologist Santorio Santorio developed devices with an air-filled glass bulb, connected to a tube, partially filled with water. As the air in the bulb warms or cools, the height of the column of water in the tube falls or rises, allowing an observer to compare the current height of the water to previous heights to detect relative changes of the heat in the bulb and its immediate environment. Such devices, with no scale for assigning a numerical value to the height of the liquid, are referred to as a thermoscope because they provide an observable indication of sensible heat (the modern concept of temperature was yet to arise).
Air thermometer
The difference between a thermoscope and a thermometer is that the latter has a scale.
Given this, the possible inventors of the thermometer are usually considered to be Galileo, Santorio, Dutch inventor Cornelis Drebbel, or British mathematician Robert Fludd. Though Galileo is often said to be the inventor of the thermometer, there is no surviving document that he actually produced any such instrument.
The first diagrams
The first clear diagram of a thermoscope was published in 1617 by Giuseppe Biancani (1566 – 1624); the first showing a scale and thus constituting a thermometer was by Santorio Santorio in 1625. This was a vertical tube, closed by a bulb of air at the top, with the lower end opening into a vessel of water. The water level in the tube was controlled by the expansion and contraction of the air, so it was what we would now call an air thermometer.
Coining of "thermometer"
The word thermometer (in its French form) first appeared in 1624 in La Récréation Mathématique by Jean Leurechon, who describes one with a scale of 8 degrees. The word comes from the Greek words θερμός, thermos, meaning "hot" and μέτρον, metron, meaning "measure".
Sealed liquid-in-glass thermometer
The above instruments suffered from the disadvantage that they were also barometers, i.e. sensitive to air pressure. In 1629, Joseph Solomon Delmedigo, a student of Galileo and Santorio in Padua, published what is apparently the first description and illustration of a sealed liquid-in-glass thermometer. It is described as having a bulb at the bottom of a sealed tube partially filled with brandy. The tube had a numbered scale. Delmedigo did not claim to have invented this instrument. Nor did he name anyone else as its inventor. In about 1654, Ferdinando II de' Medici, Grand Duke of Tuscany (1610–1670) did produce such an instrument, the first modern-style thermometer, dependent on the expansion of a liquid and independent of air pressure. Many other scientists experimented with various liquids and designs of thermometer. However, each inventor and each thermometer was unique — there was no standard scale.
Early attempts at standardization
Early attempts at standardization added a single reference point such as the freezing point of water. The use of two references for graduating the thermometer is said to have been introduced by Joachim Dalence in 1668, although Christiaan Huygens (1629–1695) in 1665 had already suggested the use of graduations based on the melting and boiling points of water as standards and, in 1694, Carlo Renaldini (1615–1698) proposed using them as fixed points along a universal scale. In 1701, Isaac Newton (1642–1726/27) proposed a scale of 12 degrees between the melting point of ice and body temperature.
Precision thermometry
In 1714, scientist and inventor Daniel Gabriel Fahrenheit invented a reliable thermometer, using mercury instead of alcohol and water mixtures. In 1724, he proposed a temperature scale which now (slightly adjusted) bears his name. In 1742, Anders Celsius (1701–1744) proposed a scale with zero at the boiling point and 100 degrees at the freezing point of water, though the scale which now bears his name has them the other way around. In 1730, French entomologist René Antoine Ferchault de Réaumur invented an alcohol thermometer and temperature scale that ultimately proved to be less reliable than Fahrenheit's mercury thermometer.
The first physician to use thermometer measurements in clinical practice was Herman Boerhaave (1668–1738). In 1866, Sir Thomas Clifford Allbutt (1836–1925) invented a clinical thermometer that produced a body temperature reading in five minutes as opposed to twenty. In 1999, Dr. Francesco Pompei of the Exergen Corporation introduced the world's first temporal artery thermometer, a non-invasive temperature sensor which scans the forehead in about two seconds and provides a medically accurate body temperature.
Registering
Traditional thermometers were all non-registering thermometers. That is, the thermometer did not hold the temperature reading after it was moved to a place with a different temperature. Determining the temperature of a pot of hot liquid required the user to leave the thermometer in the hot liquid until after reading it. If the non-registering thermometer was removed from the hot liquid, then the temperature indicated on the thermometer would immediately begin changing to reflect the temperature of its new conditions (in this case, the air temperature). Registering thermometers are designed to hold the temperature indefinitely, so that the thermometer can be removed and read at a later time or in a more convenient place. Mechanical registering thermometers hold either the highest or lowest temperature recorded until manually re-set, e.g., by shaking down a mercury-in-glass thermometer, or until an even more extreme temperature is experienced. Electronic registering thermometers may be designed to remember the highest or lowest temperature, or to remember whatever temperature was present at a specified point in time.
Thermometers increasingly use electronic means to provide a digital display or input to a computer.
Physical principles of thermometry
Thermometers may be described as empirical or absolute. Absolute thermometers are calibrated numerically by the thermodynamic absolute temperature scale. Empirical thermometers are not in general necessarily in exact agreement with absolute thermometers as to their numerical scale readings, but to qualify as thermometers at all they must agree with absolute thermometers and with each other in the following way: given any two bodies isolated in their separate respective thermodynamic equilibrium states, all thermometers agree as to which of the two has the higher temperature, or that the two have equal temperatures. For any two empirical thermometers, this does not require that the relation between their numerical scale readings be linear, but it does require that relation to be strictly monotonic. This is a fundamental character of temperature and thermometers.
As it is customarily stated in textbooks, taken alone, the so-called "zeroth law of thermodynamics" fails to deliver this information, but the statement of the zeroth law of thermodynamics by James Serrin in 1977, though rather mathematically abstract, is more informative for thermometry: "Zeroth Law – There exists a topological line which serves as a coordinate manifold of material behaviour. The points of the manifold are called 'hotness levels', and is called the 'universal hotness manifold'." To this information there needs to be added a sense of greater hotness; this sense can be had, independently of calorimetry, of thermodynamics, and of properties of particular materials, from Wien's displacement law of thermal radiation: the temperature of a bath of thermal radiation is proportional, by a universal constant, to the frequency of the maximum of its frequency spectrum; this frequency is always positive, but can have values that tend to zero. Another way of identifying hotter as opposed to colder conditions is supplied by Planck's principle, that when a process of isochoric adiabatic work is the sole means of change of internal energy of a closed system, the final state of the system is never colder than the initial state; except for phase changes with latent heat, it is hotter than the initial state.
There are several principles on which empirical thermometers are built, as listed in the section of this article entitled "Primary and secondary thermometers". Several such principles are essentially based on the constitutive relation between the state of a suitably selected particular material and its temperature. Only some materials are suitable for this purpose, and they may be considered as "thermometric materials". Radiometric thermometry, in contrast, can be only slightly dependent on the constitutive relations of materials. In a sense then, radiometric thermometry might be thought of as "universal". This is because it rests mainly on a universality character of thermodynamic equilibrium, that it has the universal property of producing blackbody radiation.
Thermometric materials
There are various kinds of empirical thermometer based on material properties.
Many empirical thermometers rely on the constitutive relation between pressure, volume and temperature of their thermometric material. For example, mercury expands when heated.
If it is used for its relation between pressure and volume and temperature, a thermometric material must have three properties:
(1) Its heating and cooling must be rapid. That is to say, when a quantity of heat enters or leaves a body of the material, the material must expand or contract to its final volume or reach its final pressure and must reach its final temperature with practically no delay; some of the heat that enters can be considered to change the volume of the body at constant temperature, and is called the latent heat of expansion at constant temperature; and the rest of it can be considered to change the temperature of the body at constant volume, and is called the specific heat at constant volume. Some materials do not have this property, and take some time to distribute the heat between temperature and volume change.
(2) Its heating and cooling must be reversible. That is to say, the material must be able to be heated and cooled indefinitely often by the same increment and decrement of heat, and still return to its original pressure, volume and temperature every time. Some plastics do not have this property;
(3) Its heating and cooling must be monotonic. That is to say, throughout the range of temperatures for which it is intended to work,
(a) at a given fixed pressure,
either (i) the volume increases when the temperature increases, or else (ii) the volume decreases when the temperature increases;
but not (i) for some temperatures and (ii) for others; or
(b) at a given fixed volume,
either (i) the pressure increases when the temperature increases, or else (ii) the pressure decreases when the temperature increases;
but not (i) for some temperatures and (ii) for others.
At temperatures around 4 °C, water does not have property (3) and is said to behave anomalously in this respect; thus water cannot be used as a material for this kind of thermometry for temperature ranges near 4 °C.
Gases, on the other hand, all have properties (1), (2), (3)(a)(i), and (3)(b)(i). Consequently, they are suitable thermometric materials, and that is why they were important in the development of thermometry.
Constant volume thermometry
According to Preston (1894/1904), Regnault found constant pressure air thermometers unsatisfactory, because they needed troublesome corrections. He therefore built a constant volume air thermometer. Constant volume thermometers do not provide a way to avoid the problem of anomalous behaviour like that of water at approximately 4 °C.
Radiometric thermometry
Planck's law very accurately quantitatively describes the power spectral density of electromagnetic radiation, inside a rigid walled cavity in a body made of material that is completely opaque and poorly reflective, when it has reached thermodynamic equilibrium, as a function of absolute thermodynamic temperature alone. A small enough hole in the wall of the cavity emits near enough blackbody radiation of which the spectral radiance can be precisely measured. The walls of the cavity, provided they are completely opaque and poorly reflective, can be of any material indifferently. This provides a well-reproducible absolute thermometer over a very wide range of temperatures, able to measure the absolute temperature of a body inside the cavity.
Primary and secondary thermometers
A thermometer is called primary or secondary based on how the raw physical quantity it measures is mapped to a temperature. As summarized by Kauppinen et al., "For primary thermometers the measured property of matter is known so well that temperature can be calculated without any unknown quantities. Examples of these are thermometers based on the equation of state of a gas, on the velocity of sound in a gas, on the thermal noise voltage or current of an electrical resistor, and on the angular anisotropy of gamma ray emission of certain radioactive nuclei in a magnetic field."
In contrast, "Secondary thermometers are most widely used because of their convenience. Also, they are often much more sensitive than primary ones. For secondary thermometers knowledge of the measured property is not sufficient to allow direct calculation of temperature. They have to be calibrated against a primary thermometer at least at one temperature or at a number of fixed temperatures. Such fixed points, for example, triple points and superconducting transitions, occur reproducibly at the same temperature."
Calibration
Thermometers can be calibrated either by comparing them with other calibrated thermometers or by checking them against known fixed points on the temperature scale. The best known of these fixed points are the melting and boiling points of pure water. (Note that the boiling point of water varies with pressure, so this must be controlled.)
The traditional way of putting a scale on a liquid-in-glass or liquid-in-metal thermometer was in three stages:
Immerse the sensing portion in a stirred mixture of pure ice and water at atmospheric pressure and mark the point indicated when it had come to thermal equilibrium.
Immerse the sensing portion in a steam bath at standard atmospheric pressure and again mark the point indicated.
Divide the distance between these marks into equal portions according to the temperature scale being used.
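In effect, the three stages define a linear mapping from the thermometer's raw indication (for example, the liquid column length) to temperature. A minimal sketch, with illustrative numbers and function names:

    # Two-point (ice point / steam point) linear calibration sketch.
    # 'reading' is whatever raw quantity the thermometer provides, e.g. column length in mm;
    # the numeric values below are illustrative, not taken from the text.
    def make_scale(reading_at_0c, reading_at_100c):
        span = reading_at_100c - reading_at_0c
        def to_celsius(reading):
            # Linear interpolation between the two fixed points
            # (and extrapolation outside them).
            return 100.0 * (reading - reading_at_0c) / span
        return to_celsius

    to_c = make_scale(reading_at_0c=12.0, reading_at_100c=287.0)   # column lengths in mm
    print(round(to_c(149.5), 1))   # midway between the marks -> 50.0 °C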
Other fixed points used in the past are the body temperature (of a healthy adult male), which was originally used by Fahrenheit as his upper fixed point (96 °F, chosen to be a number divisible by 12), and the lowest temperature given by a mixture of salt and ice, which was originally the definition of 0 °F. (This is an example of a frigorific mixture.) As body temperature varies, the Fahrenheit scale was later changed to use an upper fixed point of boiling water at 212 °F.
These have now been replaced by the defining points in the International Temperature Scale of 1990, though in practice the melting point of water is more commonly used than its triple point, the latter being more difficult to manage and thus restricted to critical standard measurement. Nowadays manufacturers will often use a thermostat bath or solid block where the temperature is held constant relative to a calibrated thermometer. Other thermometers to be calibrated are put into the same bath or block and allowed to come to equilibrium, then the scale marked, or any deviation from the instrument scale recorded. For many modern devices calibration will be stating some value to be used in processing an electronic signal to convert it to a temperature.
Precision, accuracy, and reproducibility
The precision or resolution of a thermometer is simply to what fraction of a degree it is possible to make a reading. For high temperature work it may only be possible to measure to the nearest 10 °C or more. Clinical thermometers and many electronic thermometers are usually readable to 0.1 °C. Special instruments can give readings to one thousandth of a degree. However, this precision does not mean the reading is true or accurate, it only means that very small changes can be observed.
A thermometer calibrated to a known fixed point is accurate (i.e. gives a true reading) at that point. The invention of the technology to measure temperature led to the creation of scales of temperature. In between fixed calibration points, interpolation is used, usually linear. This may give significant differences between different types of thermometer at points far away from the fixed points. For example, the expansion of mercury in a glass thermometer is slightly different from the change in resistance of a platinum resistance thermometer, so these two will disagree slightly at around 50 °C. There may be other causes due to imperfections in the instrument, e.g. in a liquid-in-glass thermometer if the capillary tube varies in diameter.
For many purposes reproducibility is important. That is, does the same thermometer give the same reading for the same temperature (or do replacement or multiple thermometers give the same reading)? Reproducible temperature measurement means that comparisons are valid in scientific experiments and industrial processes are consistent. Thus if the same type of thermometer is calibrated in the same way its readings will be valid even if it is slightly inaccurate compared to the absolute scale.
An example of a reference thermometer used to check others to industrial standards would be a platinum resistance thermometer with a digital display to 0.1 °C (its precision) which has been calibrated at 5 points against national standards (−18, 0, 40, 70, 100 °C) and which is certified to an accuracy of ±0.2 °C.
According to British Standards, correctly calibrated, used and maintained liquid-in-glass thermometers can achieve a measurement uncertainty of ±0.01 °C in the range 0 to 100 °C, and a larger uncertainty outside this range: ±0.05 °C up to 200 or down to −40 °C, ±0.2 °C up to 450 or down to −80 °C.
Indirect methods of temperature measurement
Thermal expansion
These methods utilize the thermal expansion of various phases of matter.
Pairs of solid metals with different expansion coefficients can be used for bi-metal mechanical thermometers. Another design using this principle is Breguet's thermometer.
Some liquids possess relatively high expansion coefficients over useful temperature ranges, thus forming the basis for the alcohol or mercury thermometer. Alternative designs using this principle are the reversing thermometer and the Beckmann differential thermometer.
As with liquids, gases can also be used to form a gas thermometer.
Pressure
Vapour pressure thermometer
Density
Galileo thermometer
Thermochromism
Some compounds exhibit thermochromism at distinct temperature changes. Thus by tuning the phase transition temperatures for a series of substances the temperature can be quantified in discrete increments, a form of digitization. This is the basis for a liquid crystal thermometer.
Band edge thermometry (BET)
Band edge thermometry (BET) takes advantage of the temperature-dependence of the band gap of semiconductor materials to provide very precise optical (i.e. non-contact) temperature measurements. BET systems require a specialized optical system, as well as custom data analysis software.
All objects above absolute zero emit blackbody radiation, whose intensity and spectrum depend on temperature. This property is the basis for the pyrometer or infrared thermometer and for thermography. It has the advantage of remote temperature sensing: unlike most thermometers, it requires neither contact nor close proximity. At higher temperatures, blackbody radiation becomes visible and is described by the colour temperature; examples include a glowing heating element and estimates of a star's surface temperature.
Fluorescence
Phosphor thermometry
Optical absorbance spectra
Fiber optical thermometer
Electrical resistance
Resistance thermometers, which use materials such as Balco alloy
Thermistor
Coulomb blockade thermometer
Electrical potential
Thermocouples are useful over a wide temperature range, from cryogenic temperatures to over 1000 °C, but typically have an error of ±0.5–1.5 °C.
Silicon bandgap temperature sensors are commonly found packaged in integrated circuits with an accompanying ADC and a digital interface such as I2C. Typically they are specified to work within about −50 to 150 °C with accuracies in the ±0.25 to 1 °C range, which can be improved by binning.
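As a rough, hedged sketch of how a thermocouple reading becomes a temperature: assuming an approximately constant sensitivity (a figure of about 41 microvolts per °C is commonly quoted for type K near room temperature), the conversion is a simple scaling plus cold-junction compensation. Real instruments use the standardized reference polynomials rather than a single slope; the names and numbers below are illustrative.

    # Rough thermocouple conversion sketch; constant sensitivity is an assumption.
    SEEBECK_UV_PER_C = 41.0          # assumed, approximate type K sensitivity (uV/°C)

    def thermocouple_temp_c(voltage_uv, cold_junction_c=25.0):
        # The measured voltage corresponds to the difference between the hot junction
        # and the separately measured cold-junction temperature.
        return cold_junction_c + voltage_uv / SEEBECK_UV_PER_C

    print(thermocouple_temp_c(4100.0))   # ~125 °C for a 4.1 mV reading with a 25 °C cold junction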
Electrical resonance
Quartz thermometer
Nuclear magnetic resonance
Chemical shift is temperature dependent. This property is used to calibrate the thermostat of NMR probes, usually using methanol or ethylene glycol. This can potentially be problematic for internal standards, which are usually assumed to have a defined chemical shift (e.g. 0 ppm for TMS) but in fact exhibit a temperature dependence.
Magnetic susceptibility
Above the Curie temperature, the magnetic susceptibility of a paramagnetic material exhibits an inverse temperature dependence. This phenomenon is the basis of a magnetic cryometer.
Applications
Thermometers utilize a range of physical effects to measure temperature. Temperature sensors are used in a wide variety of scientific and engineering applications, especially measurement systems. Temperature systems are primarily either electrical or mechanical, occasionally inseparable from the system which they control (as in the case of a mercury-in-glass thermometer). Thermometers are used in roadways in cold weather climates to help determine if icing conditions exist. Indoors, thermistors are used in climate control systems such as air conditioners, freezers, heaters, refrigerators, and water heaters. Galileo thermometers, because of their limited measurement range, are used mainly to measure indoor air temperature.
Liquid crystal thermometers (which use thermochromic liquid crystals) are also used in mood rings and to measure the temperature of water in fish tanks.
Fiber Bragg grating temperature sensors are used in nuclear power facilities to monitor reactor core temperatures and avoid the possibility of nuclear meltdowns.
Nanothermometry
Nanothermometry is an emergent research field concerned with measuring temperature at the sub-micrometre scale. Conventional thermometers cannot measure the temperature of an object smaller than a micrometre, so new methods and materials are needed; nanothermometry is used in such cases. Nanothermometers are classified as luminescent thermometers (if they use light to measure temperature) and non-luminescent thermometers (systems whose thermometric properties are not directly related to luminescence).
Cryometer
Thermometers used specifically for low temperatures.
Medical
Ear thermometers are typically infrared thermometers.
Forehead thermometers are an example of liquid crystal thermometers.
Rectal and oral thermometers have traditionally been mercury-in-glass, but have largely been superseded by NTC thermistors with a digital readout.
Various thermometric techniques have been used throughout history, ranging from the Galileo thermometer to thermal imaging.
Medical thermometers such as mercury-in-glass thermometers, infrared thermometers, pill thermometers, and liquid crystal thermometers are used in health care settings to determine if individuals have a fever or are hypothermic.
Food and food safety
Thermometers are important in food safety, where food held at temperatures within the bacterial growth danger zone can reach potentially harmful levels of bacterial growth after several hours, which could lead to foodborne illness. This includes monitoring refrigeration temperatures and maintaining temperatures in foods being served under heat lamps or hot water baths.
Cooking thermometers are important for determining if a food is properly cooked. In particular meat thermometers are used to aid in cooking meat to a safe internal temperature while preventing over cooking. They are commonly found using either a bimetallic coil, or a thermocouple or thermistor with a digital readout.
Candy thermometers are used to aid in achieving a specific water content in a sugar solution based on its boiling temperature.
Environmental
Indoor-outdoor thermometer
A heat meter uses thermometers to measure the rate of heat flow.
Thermostats have traditionally used bimetallic strips, but digital thermistors have since become popular.
Alcohol thermometers, infrared thermometers, mercury-in-glass thermometers, recording thermometers, thermistors, and Six's thermometers (maximum-minimum thermometer) are used in meteorology and climatology in various levels of the atmosphere and oceans. Aircraft use thermometers and hygrometers to determine if atmospheric icing conditions exist along their flight path. These measurements are used to initialize weather forecast models. Thermometers are used in roadways in cold weather climates to help determine if icing conditions exist and indoors in climate control systems.
| Technology | Measuring instruments | null |
31039 | https://en.wikipedia.org/wiki/Turboprop | Turboprop | A turboprop is a gas turbine engine that drives an aircraft propeller.
A turboprop consists of an intake, reduction gearbox, compressor, combustor, turbine, and a propelling nozzle. Air enters the intake and is compressed by the compressor. Fuel is then added to the compressed air in the combustor, where the fuel-air mixture combusts. The hot combustion gases expand through the turbine stages, generating shaft power, and are then exhausted from the turbine. Some of the power generated by the turbine is used to drive the compressor and an electric generator. In contrast to a turbojet or turbofan, the engine's exhaust gases do not provide enough power to create significant thrust, since almost all of the engine's power is used to drive the propeller.
Technological aspects
Exhaust thrust in a turboprop is sacrificed in favor of shaft power, which is obtained by extracting additional power (beyond that necessary to drive the compressor) from turbine expansion. Owing to the additional expansion in the turbine system, the residual energy in the exhaust jet is low. Consequently, the exhaust jet produces about 10% of the total thrust. A higher proportion of the thrust comes from the propeller at low speeds and less at higher speeds.
Turboprops have bypass ratios of 50–100, although the propulsion airflow is less clearly defined for propellers than for fans.
The propeller is coupled to the turbine through a reduction gear that converts the high RPM/low torque output to low RPM/high torque. There are two primary designs: free-turbine and fixed-shaft. A free-turbine design is found on the Pratt & Whitney Canada PT6, where the gas generator is not connected to the propeller. This allows a propeller strike or similar damage to occur without damaging the gas generator, allows only the power section (turbine and gearbox) to be removed and replaced in such an event, and places less stress on the gas generator during engine ground starts. A fixed-shaft design, by contrast, has the gearbox and gas generator connected, as on the Honeywell TPE331.
The propeller itself is normally a constant-speed (variable pitch) propeller type similar to that used with larger aircraft reciprocating engines, except that the propeller-control requirements are very different. Due to the turbine engine's slow response to power inputs, particularly at low speeds, the propeller has a greater range of selected travel in order to make rapid thrust changes, notably for taxi, reverse, and other ground operations. The propeller has two modes, Alpha and Beta. Alpha is the mode for all flight operations, including takeoff. Beta, a mode typically consisting of zero to negative thrust, is used for all ground operations aside from takeoff. The Beta mode is further broken down into two additional modes: Beta for taxi and Beta plus power. Beta for taxi, as the name implies, is used for taxi operations; it covers all pitch ranges from the lowest Alpha-range pitch down to zero pitch, produces very little to zero thrust, and is typically accessed by moving the power lever to a Beta-for-taxi range. Beta plus power is a reverse range that produces negative thrust; it is often used for landing on short runways where the aircraft must slow down rapidly, as well as for backing operations, and is accessed by moving the power lever below the Beta-for-taxi range. Because the pilot cannot see out of the rear of the aircraft while backing, and because of the amount of debris reverse stirs up, manufacturers will often limit the speeds at which Beta plus power may be used and restrict its use on unimproved runways. Feathering of these propellers is performed by the propeller control lever.
The constant-speed propeller is distinguished from the reciprocating engine constant-speed propeller by the control system. The turboprop system consists of three propeller governors: a primary governor, an overspeed governor, and a fuel-topping governor. The primary governor works in much the same way as a reciprocating engine propeller governor, though a turboprop governor may incorporate a beta control valve or beta lift rod for beta operation and is typically located in the 12 o'clock position. Other governors are included depending on the model, such as the overspeed and fuel-topping governors on a Pratt & Whitney Canada PT6 and an under-speed governor on a Honeywell TPE331. The turboprop is also distinguished from other kinds of turbine engine in that the fuel control unit is connected to the governor to help dictate power.
To make the engine more compact, reverse airflow can be used. On a reverse-flow turboprop engine, the compressor intake is at the aft of the engine, and the exhaust is situated forward, reducing the distance between the turbine and the propeller.
Unlike the small-diameter fans used in turbofan engines, the propeller has a large diameter that lets it accelerate a large volume of air. This permits a lower airstream velocity for a given amount of thrust. Since it is more efficient at low speeds to accelerate a large amount of air by a small degree than a small amount of air by a large degree, a low disc loading (thrust per unit disc area) increases the aircraft's energy efficiency, and this reduces the fuel use.
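This trade-off can be illustrated with the ideal (Froude) propulsive efficiency from simple actuator-disc theory, eta = 2 / (1 + Vjet / Vflight). The figures below are illustrative and are not taken from any particular engine.

    # Sketch of ideal (Froude) propulsive efficiency: a large propeller that
    # accelerates much air slightly beats a small jet that accelerates little air a lot.
    def froude_efficiency(flight_speed, exhaust_speed):
        # Speeds in any consistent unit; exhaust_speed is the slipstream/jet
        # velocity relative to the aircraft.
        return 2.0 / (1.0 + exhaust_speed / flight_speed)

    print(froude_efficiency(150.0, 165.0))   # propeller-like: small velocity increase -> ~0.95
    print(froude_efficiency(150.0, 600.0))   # jet-like: large velocity increase -> ~0.40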
Propellers work well until the flight speed of the aircraft is high enough that the airflow past the blade tips reaches the speed of sound. Beyond that speed, the proportion of the power that drives the propeller that is converted to propeller thrust falls dramatically. For this reason turboprop engines are not commonly used on aircraft that fly faster than 0.6–0.7 Mach, with some exceptions such as the Tupolev Tu-95. However, propfan engines, which are very similar to turboprop engines, can cruise at flight speeds approaching 0.75 Mach. To maintain propeller efficiency across a wide range of airspeeds, turboprops use constant-speed (variable-pitch) propellers. The blades of a constant-speed propeller increase their pitch as aircraft speed increases. Another benefit of this type of propeller is that it can also be used to generate reverse thrust to reduce stopping distance on the runway. Additionally, in the event of an engine failure, the propeller can be feathered, thus minimizing the drag of the non-functioning propeller.
While the power turbine may be integral with the gas generator section, many turboprops today feature a free power turbine on a separate coaxial shaft. This enables the propeller to rotate freely, independent of compressor speed.
History
Alan Arnold Griffith had published a paper on compressor design in 1926. Subsequent work at the Royal Aircraft Establishment investigated axial compressor-based designs that would drive a propeller. From 1929, Frank Whittle began work on centrifugal compressor-based designs that would use all the gas power produced by the engine for jet thrust.
The world's first turboprop was designed by the Hungarian mechanical engineer György Jendrassik. Jendrassik published a turboprop idea in 1928, and on 12 March 1929 he patented his invention. In 1938, he built a small-scale (100 hp; 74.6 kW) experimental gas turbine. The larger Jendrassik Cs-1, with a predicted output of 1,000 bhp, was produced and tested at the Ganz Works in Budapest between 1937 and 1941. It was of axial-flow design with 15 compressor and 7 turbine stages and an annular combustion chamber. First run in 1940, it suffered combustion problems that limited its output to 400 bhp. Two Jendrassik Cs-1s were the engines for the world's first turboprop aircraft, the Varga RMI-1 X/H, a Hungarian fighter-bomber of the Second World War; one prototype was completed, but it was destroyed in a bombing raid before its first flight. In 1941 the engine was abandoned because of the war, and the factory converted to conventional engine production.
The first mention of turboprop engines in the general public press was in the February 1944 issue of the British aviation publication Flight, which included a detailed cutaway drawing of what a possible future turboprop engine could look like. The drawing was very close to what the future Rolls-Royce Trent would look like. The first British turboprop engine was the Rolls-Royce RB.50 Trent, a converted Derwent II fitted with reduction gear and a Rotol five-bladed propeller. Two Trents were fitted to Gloster Meteor EE227 — the sole "Trent-Meteor" — which thus became the world's first turboprop-powered aircraft to fly, albeit as a test-bed not intended for production. It first flew on 20 September 1945. From their experience with the Trent, Rolls-Royce developed the Rolls-Royce Clyde, the first turboprop engine to receive a type certificate for military and civil use, and the Dart, which became one of the most reliable turboprop engines ever built. Dart production continued for more than fifty years. The Dart-powered Vickers Viscount was the first turboprop aircraft of any kind to go into production and sold in large numbers. It was also the first four-engined turboprop. Its first flight was on 16 July 1948. The world's first single engined turboprop aircraft was the Armstrong Siddeley Mamba-powered Boulton Paul Balliol, which first flew on 24 March 1948.
The Soviet Union built on German World War II turboprop preliminary design work by Junkers Motorenwerke, while BMW, Heinkel-Hirth and Daimler-Benz also worked on projected designs. While the Soviet Union had the technology to create the airframe for a jet-powered strategic bomber comparable to Boeing's B-52 Stratofortress, they instead produced the Tupolev Tu-95 Bear, powered with four Kuznetsov NK-12 turboprops, mated to eight contra-rotating propellers (two per nacelle) with supersonic tip speeds to achieve maximum cruise speeds in excess of 575 mph, faster than many of the first jet aircraft and comparable to jet cruising speeds for most missions. The Bear would serve as their most successful long-range combat and surveillance aircraft and symbol of Soviet power projection through to the end of the 20th century. The USA used turboprop engines with contra-rotating propellers, such as the Allison T40, on some experimental aircraft during the 1950s. The T40-powered Convair R3Y Tradewind flying-boat was operated by the U.S. Navy for a short time.
The first American turboprop engine was the General Electric XT31, first used in the experimental Consolidated Vultee XP-81. The XP-81 first flew in December 1945, the first aircraft to use a combination of turboprop and turbojet power. The technology of Allison's earlier T38 design evolved into the Allison T56, used to power the Lockheed Electra airliner, its military maritime patrol derivative the P-3 Orion, and the C-130 Hercules military transport aircraft.
The first turbine-powered, shaft-driven helicopter was the Kaman K-225, a development of Charles Kaman's K-125 synchropter, which used a Boeing T50 turboshaft engine to power it on 11 December 1951.
December 1963 saw the first delivery of Pratt & Whitney Canada's PT6 turboprop engine for the then Beechcraft 87, soon to become Beechcraft King Air.
1964 saw the first deliveries of the Garrett AiResearch TPE331, (now owned by Honeywell Aerospace) on the Mitsubishi MU-2, making it the fastest turboprop aircraft for that year.
Usage
In contrast to turbofans, turboprops are most efficient at flight speeds below 725 km/h (450 mph; 390 knots) because the jet velocity of the propeller (and exhaust) is relatively low. Modern turboprop airliners operate at nearly the same speed as small regional jet airliners but burn two-thirds of the fuel per passenger.
Compared to piston engines, their greater power-to-weight ratio (which allows for shorter takeoffs) and reliability can offset their higher initial cost, maintenance and fuel consumption. As jet fuel can be easier to obtain than avgas in remote areas, turboprop-powered aircraft like the Cessna Caravan and Quest Kodiak are used as bush airplanes.
Turboprop engines are generally used on small subsonic aircraft, but the Tupolev Tu-114 can reach about 870 km/h (470 kn; 540 mph). Large military aircraft, like the Tupolev Tu-95, and civil aircraft, such as the Lockheed L-188 Electra, were also turboprop powered. The Airbus A400M is powered by four Europrop TP400 engines, which are the second most powerful turboprop engines ever produced, after the Kuznetsov NK-12.
In 2017, the most widespread turboprop airliners in service were the ATR 42/72 (950 aircraft), Bombardier Q400 (506), De Havilland Canada Dash 8-100/200/300 (374), Beechcraft 1900 (328), de Havilland Canada DHC-6 Twin Otter (270), Saab 340 (225). Less widespread and older airliners include the BAe Jetstream 31, Embraer EMB 120 Brasilia, Fairchild Swearingen Metroliner, Dornier 328, Saab 2000, Xian MA60, MA600 and MA700, Fokker 27 and 50.
Turboprop business aircraft include the Piper Meridian, Socata TBM, Pilatus PC-12, Piaggio P.180 Avanti, Beechcraft King Air and Super King Air. In April 2017, there were 14,311 business turboprops in the worldwide fleet.
Reliability
Between 2012 and 2016, the ATSB observed 417 events with turboprop aircraft, 83 per year, over 1.4 million flight hours: 2.2 per 10,000 hours.
Three were "high risk" involving engine malfunction and unplanned landing in single‑engine Cessna 208 Caravans, four "medium risk" and 96% "low risk".
Two occurrences resulted in minor injuries due to engine malfunction and terrain collision in agricultural aircraft and five accidents involved aerial work: four in agriculture and one in an air ambulance.
| Technology | Aircraft components | null |
31083 | https://en.wikipedia.org/wiki/Teaspoon | Teaspoon | A teaspoon (tsp.) is a small spoon that can be used to stir a cup of tea or coffee, or as a tool for measuring volume. The size of teaspoons ranges from about 2.5 to 7.3mL (about 0·088 to 0·257 imperial fluid ounce or 0·085 to 0·247 US fluid ounce). For dosing of medicine and, in places where metric units are used, for cooking purposes, a teaspoonful is defined as 5mL (about 0·18 imperial fluid ounce or 0·17 US fluid ounce), and standard measuring spoons are used.
Cutlery
A teaspoon is a small spoon suitable for stirring and sipping the contents of a cup of tea or coffee, or adding a portion of loose sugar to it. These spoons have heads more or less oval in shape. Teaspoons are a common part of a place setting.
Teaspoons with longer handles, such as iced tea spoons, are commonly used also for ice cream desserts or floats. Similar spoons include the tablespoon and the dessert spoon, the latter intermediate in size between a teaspoon and a tablespoon, used in eating dessert and sometimes soup or cereals. Much less common is the coffee spoon, which is a smaller version of the teaspoon, intended for use with the small type of coffee cup. Another teaspoon, called an orange spoon (in American English: grapefruit spoon), tapers to a sharp point or teeth, and is used to separate citrus fruits from their membranes. A bar spoon, equivalent to a teaspoon, is used in measuring ingredients for mixed drinks.
A container designed to hold extra teaspoons, called a spooner, usually in a set with a covered sugar container, formed a part of Victorian table service.
History
The teaspoon is a European invention. Small spoons were common in Europe since at least the 13th century. These special spoons were introduced almost simultaneously with tea and coffee (Pettigrew points to use in the mid-17th century). Originally teaspoons were exotic items, precious and small, resembling the demitasse spoons of the later times. Also used for coffee, these spoons were usually made of gilt silver, and were available with a variety of handle shapes: plain, twisted, decorated with knobs, also known as knops, hence the knop-top name for such spoons. Widespread use and modern size date back to the Georgian era.
The teaspoon is first mentioned in an advertisement in a 1686 edition of the London Gazette. Teaspoons, probably of English origin, are depicted in the 1700 Dutch painting by Nicholas Verkolje, "A Tea Party".
A special dish for resting the teaspoons, a "spoon boat", was a part of the tea set in the 18th century. At that time, the spoons played an important role in tea-drinking etiquette: a spoon laid "across" the teacup indicated that the guest did not need any more tea; otherwise, the hostess was obligated to offer a fresh cup of tea, and it was considered impolite to refuse the offering. Pettigrew reports that sometimes the spoons were numbered to make it easier to match the cups with the guests after a refill.
Unit of measure
In some countries, a teaspoon (occasionally teaspoonful) is a cooking measure of volume, especially widely used in cooking recipes and pharmaceutic medical prescriptions. In English it is abbreviated as tsp. or, less often, as t., ts., or tspn.. The abbreviation is never capitalized because a capital letter is customarily reserved for the larger tablespoon ("Tbsp.", "T.", "Tbls.", or "Tb.").
A small scale study in Greece found that household teaspoons are a poor approximation of the standard tsp measure. The study investigated the accuracy of teaspoons as a measuring tool for liquid medicine. They surveyed 71 teaspoons from 25 houses and found that the volume varied between 2.5 to 7.3mL (about 0·088 to 0·257 imperial fluid ounce or 0·085 to 0·247 US fluid ounce).
Metric teaspoon
The metric teaspoon as a unit of culinary measure is 5mL, equal to 1/3 international metric tablespoon or 1/4 Australian metric tablespoon.
United States customary unit
As a unit of culinary measure, one teaspoon in the United States is 1/3 tablespoon, exactly 4.92892159375 millilitres (mL), 1 1/3 US customary fluid drams, 1/6 US customary fl. oz, 1/48 US cup, 1/768 US liquid gallon, or 0.30078125 cubic inches.
For nutritional labeling and medicine in the US, the teaspoon is defined the same as a metric teaspoon: precisely 5 millilitres (mL).
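A small sketch converting between the teaspoon definitions given in this article; the dictionary keys and function name are illustrative, and the UK figure is the traditional 1/8 imperial fluid ounce.

    # Teaspoon definitions in millilitres, taken from the values discussed above.
    TEASPOON_ML = {
        "US customary": 4.92892159375,   # 1/6 US fluid ounce
        "US nutrition/metric": 5.0,
        "UK traditional": 3.5516,        # approximately 1 imperial fluid drachm
    }

    def convert_teaspoons(quantity, from_kind, to_kind):
        return quantity * TEASPOON_ML[from_kind] / TEASPOON_ML[to_kind]

    # Three metric teaspoons expressed as US customary teaspoons (~3.04).
    print(round(convert_teaspoons(3, "US nutrition/metric", "US customary"), 2))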
British culinary measurement unit
Traditionally, in the United Kingdom, 1 teaspoon is 1 British imperial fluid drachm (1/8 British imperial fluid ounce). 1 UK teaspoon is equivalent to 1/4 UK tablespoon, 1/2 UK dessert spoon, or 2 UK salt spoons.
Dry ingredients
For dry granular or powdered ingredients (e.g., salt, flour, spices, and especially beverages involving tea and sugar), a recipe may call for the spoon to be filled in a certain way that changes the volume of the ingredient. As with much of cooking, these measures are by their nature inexact. The problem can be exacerbated by failing to use a real teaspoon: a teaspoon's wider, shallower bowl supports considerably more heaped above it than a deeper hemispherical measuring spoon, so if a measuring spoon is used, one will typically end up with less than the recipe calls for. The definitions of "spoonful" also vary: in a typical American recipe a "spoon" without clarification stands for a "level" spoon (one with no ingredient showing above the rim of the spoon bowl), while a British cookbook would mean a "round" or "heaped" spoon, with the ingredient peaking above the rim:
A scant teaspoon is one which has been filled to slightly less than level.
A level teaspoon, which is the default teaspoon if no adjective is given, refers to an approximately leveled filling of the spoon, producing the same volume as for liquids. The excess of ingredient can be scraped off by a knife.
A rounded teaspoon is roughly symmetrical with as much ingredient above the rim as is in the spoon below the rim, giving a measure roughly equivalent to two level teaspoons.
A heaping (North American English) or heaped (UK English) teaspoon is a larger inexact measure consisting of the amount obtained by scooping the dry ingredient up as high as possible to balance on the spoon. This quantity can vary considerably, up to five times the amount of ingredient held by a level spoon. Many cookbooks treat heaped and rounded spoons interchangeably.
Lincoln used the spoon measure without adjectives to define either a rounded one (for flour and sugar) or a level one (for salt and spices).
Apothecary
As an unofficial but once widely used unit of apothecaries' measure, the teaspoon is equal to 1 fluid dram (or drachm) and thus 1/4 of a tablespoon or 1/8 of a fluid ounce. The apothecaries' teaspoon was formally known by the Latin cochleare minus (cochl. min.) to distinguish it from the tablespoon or cochleare majus (cochl. maj.).
When tea-drinking was first introduced to England circa 1660, tea was rare and expensive, as a consequence of which teacups and teaspoons were smaller than today. This situation persisted until 1784, when the Commutation Act reduced the tax on tea from 119% to 12.5%. As the price of tea declined, the size of teacups and teaspoons increased. By the 1850s, the teaspoon as a unit of culinary measure had increased to 1/3 of a tablespoon, but the apothecary unit of measure remained the same. Nevertheless, the teaspoon, usually under its Latin name, continued to be used in apothecaries' measures for several more decades, with the original definition of one fluid dram.
| Physical sciences | Volume | Basics and measurement |
31084 | https://en.wikipedia.org/wiki/Tablespoon | Tablespoon | A tablespoon (tbsp., Tbsp., Tb., or T.) is a large spoon. In many English-speaking regions, the term now refers to a large spoon used for serving; however, in some regions, it is the largest type of spoon used for eating.
By extension, the term is also used as a cooking measure of volume. In this capacity, it is most commonly abbreviated tbsp. or Tbsp. and occasionally referred to as a tablespoonful to distinguish it from the utensil. The unit of measurement varies by region: a United States liquid tablespoon is approximately 14·8mL (exactly 1/2 US fluid ounce; about 0·52 imperial fluid ounce), a British tablespoon is approximately 14·2mL (exactly 1/2 imperial fluid ounce; about 0·48 US fluid ounce), an international metric tablespoon is exactly 15mL (about 0·53 imperial fluid ounce or 0·51 US fluid ounce), and an Australian metric tablespoon is 20mL (about 0·7 imperial fluid ounce or 0·68 US fluid ounce). The capacity of the utensil (as opposed to the measurement) is defined by neither law nor custom but only by preferences, and may or may not significantly approximate the measurement.
Dining
Before about 1700, it was customary for Europeans to bring their own spoons to the table. Spoons were carried as personal property in much the same way as people today carry wallets, key rings, etc. From about 1700 the place setting became popular, and with it the "table-spoon" (hyphenated), "table-fork" and "table-knife". Around the same time the tea-spoon and dessert-spoon first appeared, and the table-spoon was reserved for eating soup. The 18th century witnessed a proliferation of different sorts of spoons, including the mustard-spoon, salt-spoon, coffee-spoon, and soup-spoon.
In the late 19th century UK, the dessert-spoon and soup-spoon began to displace the table-spoon as the primary implement for eating from a bowl, at which point the name "table-spoon" took on a secondary meaning as a much larger serving spoon. At the time the first edition of the Oxford English Dictionary was published in 1928, "tablespoon" (which by then was no longer hyphenated) still had two definitions in the UK: the original definition (eating spoon) and the new definition (serving spoon).
Victorian and Edwardian era tablespoons used in the UK are often 25mL (about 0·88 imperial fluid ounce or 0·85 US fluid ounce) or sometimes larger. They are used only for preparing and serving food, not as part of a place-setting. Common tablespoons intended for use as cutlery (called dessert spoons in the UK, where a tablespoon is always a serving spoon) usually hold 7–14mL (about 0·25–0·49 imperial fluid ounce or 0·24–0·47 US fluid ounce), considerably less than some tablespoons used for serving.
Culinary measure
Naming
In recipes, an abbreviation like tbsp. is usually used to refer to a tablespoon, to differentiate it from the smaller teaspoon (tsp.). Some authors additionally capitalize the abbreviation, as Tbsp., while leaving tsp. in lower case, to emphasize that the larger tablespoon, rather than the smaller teaspoon, is wanted. The tablespoon abbreviation is sometimes further abbreviated to Tb. or T.
Traditional definitions
In most places, one tablespoon equals three teaspoons. In Australia and the UK, one tablespoon equals four teaspoons.
International metric
An international metric tablespoon is exactly equal to 15mL. It is equivalent to 1 1/2 metric dessert spoons or 3 metric teaspoons.
Australian metric
The Australian metric tablespoon is different from that of the rest of the world. The Australian official definition of the tablespoon as a unit of volume is:
1 Australian metric tablespoon = 20 ml
 = 1 1/3 international metric tablespoons
 = 2 metric dessert spoons (1 metric dessert spoon = 10 ml each)
 = 4 metric teaspoons (1 metric teaspoon = 5 ml each)
 ≈ 5·63 British imperial fluid drachms
 ≈ 0·7 British imperial fluid ounce
 ≈ 1·41 UK tablespoons
 ≈ 2·82 UK dessert spoons
 ≈ 4·12 UK teaspoons
 ≈ 11·26 UK salt spoons
 ≈ 22·52 UK pinches (solids only)
 ≈ 337·87 UK drops (liquids only)
 ≈ 5·41 US customary fluid drams
 ≈ 0·67 US customary fluid ounce
 ≈ 1·35 US customary tablespoons
 ≈ 2·03 US customary dessert spoons
 ≈ 4·06 US customary teaspoons
 ≈ 4·06 US customary coffee spoons
 ≈ 16·23 US customary salt spoons
 ≈ 32·46 US customary dashes (solids only)
 ≈ 64·92 US customary pinches (solids only)
 ≈ 129·85 US customary smidgens (solids only)
 ≈ 389·54 US customary drops (liquids only)
This definition was promulgated by the Metric Conversion Board in the 1970s, as part of the country’s metrication process. There is not a distinct Australian metric dessert spoon or metric teaspoon.
United Kingdom
In the UK, 1 tablespoon is traditionally 4 British imperial fluid drachms (1/2 British imperial fluid ounce).
United States
The traditional U.S. interpretation of the tablespoon as a unit of volume is:
1 US customary tablespoon = 4 US fluid drams
 = 2 US customary dessert spoons
 = 3 US customary teaspoons
 = 6 US customary coffee spoons
 = 12 US customary salt spoons
 = 24 US customary dashes (solids only)
 = 48 US customary pinches (solids only)
 = 96 US customary smidgens (solids only)
 = 288 US customary drops (liquids only)
 = 1/2 US fluid ounce
 ≈ 4·16 British imperial fluid drachms
 ≈ 0·52 British imperial fluid ounce
 ≈ 1·04 UK tablespoons
 ≈ 2·08 UK dessert spoons
 ≈ 4·16 UK teaspoons
 ≈ 8·33 UK salt spoons
 ≈ 16·65 UK pinches (solids only)
 ≈ 249·8 UK drops (liquids only)
 ≈ 14·8 mL
 ≈ 0·99 international metric tablespoon
 ≈ 0·74 Australian metric tablespoon
 ≈ 1·48 metric dessert spoons
 ≈ 2·96 metric teaspoons
In nutrition labeling in the U.S., a tablespoon is defined as 15mL (about 4·22 British imperial fluid drachms (0·53 British imperial fluid ounce) or 4·06 US customary fluid drams (0·51 US customary fluid ounce)).
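A small sketch converting a recipe quantity between the regional tablespoon definitions discussed above; the dictionary keys and function name are illustrative.

    # Tablespoon definitions in millilitres, from the values discussed above.
    TABLESPOON_ML = {
        "US customary": 14.7868,       # 1/2 US fluid ounce
        "UK traditional": 14.2065,     # 1/2 imperial fluid ounce
        "international metric": 15.0,
        "Australian metric": 20.0,
    }

    def convert_tablespoons(quantity, from_kind, to_kind):
        return quantity * TABLESPOON_ML[from_kind] / TABLESPOON_ML[to_kind]

    # Two Australian tablespoons of a liquid expressed in US customary tablespoons (~2.7).
    print(round(convert_tablespoons(2, "Australian metric", "US customary"), 2))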
Dry measure
For dry ingredients, if a recipe calls for a level tablespoon, the usual meaning without further qualification, is measured by filling the spoon and scraping it level. In contrast, a heaped, heaping, or rounded spoonful is not leveled off, and includes a heap above the spoon. The exact volume of a heaped tablespoon depends somewhat on the shape and curvature of the measuring spoon being used and largely upon the physical properties of the substance being measured, and so is not a precise unit of measurement. If neither a rounded nor a level tablespoon is specified, a level tablespoon is used, just as a cup of flour is a level cup unless otherwise specified.
Apothecary measure
In the 18th century, the table-spoon became an unofficial unit of the apothecaries' system of measures, equal to 4 drams (1/2 fl oz, 14.8 ml). It was more commonly known by the Latin name cochleare majus (abbreviated cochl. maj.) or, in apothecaries' notation, f℥ss or f℥ß (fluid ℥, i.e. ounce, semis, one-half).
| Physical sciences | Volume | Basics and measurement |
31091 | https://en.wikipedia.org/wiki/Titanite | Titanite | Titanite, or sphene (), is a calcium titanium nesosilicate mineral, CaTiSiO5. Trace impurities of iron and aluminium are typically present. Also commonly present are rare earth metals including cerium and yttrium; calcium may be partly replaced by thorium.
Nomenclature
The International Mineralogical Association Commission on New Minerals and Mineral Names (CNMMN) adopted the name titanite and "discredited" the name sphene as of 1982, although papers and books commonly identify the mineral by both names. Sphene was the most commonly used name until the IMA decision, although both were well known. Some authorities consider sphene the less confusing name, since "titanite" is also applied to any chemical or crystal containing oxidized titanium, such as the rare earth titanate pyrochlore series and many of the minerals with the perovskite structure. The name sphene continues to be publishable in peer-reviewed scientific literature; for example, a paper by Hayden et al. was published in early 2008 in the journal Contributions to Mineralogy and Petrology. Sphene persists as the informal name for titanite gemstones.
Physical properties
Titanite, which is named for its titanium content, occurs as translucent to transparent, reddish brown, gray, yellow, green, or red monoclinic crystals. These crystals are typically sphenoid in habit and are often twinned. Possessing a subadamantine tending to slightly resinous luster, titanite has a hardness of 5.5 and a weak cleavage. Its specific gravity varies between 3.52 and 3.54. Titanite's refractive index is 1.885–1.990 to 1.915–2.050 with a strong birefringence of 0.105 to 0.135 (biaxial positive); under the microscope this leads to a distinctive high relief which combined with the common yellow-brown colour and lozenge-shape cross-section makes the mineral easy to identify. Transparent specimens are noted for their strong trichroism, the three colours presented being dependent on body colour. Owing to the quenching effect of iron, sphene exhibits no fluorescence under ultraviolet light. Some titanite has been found to be metamict, in consequence of structural damage due to radioactive decomposition of the often significant thorium content. When viewed in thin section with a petrographic microscope, pleochroic halos can be observed in minerals surrounding a titanite crystal.
Occurrence
Titanite occurs as a common accessory mineral in intermediate and felsic igneous rocks and associated pegmatites. It also occurs in metamorphic rocks such as gneiss and schists and skarns. Source localities include: Pakistan; Italy; Russia; China; Brazil; Tujetsch, St. Gothard, Switzerland; Madagascar; Tyrol, Austria; Renfrew County, Ontario, Canada; Sanford, Maine, Gouverneur, Diana, Rossie, Fine, Pitcairn, Brewster, New York and California in the US.
Uses
Titanite is a source of titanium dioxide, TiO2, used in pigments.
As a gemstone, titanite is usually some shade of chartreuse, but can be brown or black. Hue depends on iron (Fe) content, with low Fe content causing green and yellow colours, and high Fe content causing brown or black hues. Zoning is typical in titanite. It is prized for its exceptional dispersive power (0.051, B to G interval) which exceeds that of diamond. Jewelry use of titanite is limited, both because the stone is uncommon in gem quality and is relatively soft.
Titanite can also be used as a U-Pb geochronometer, specifically in metamorphic terranes.
| Physical sciences | Silicate minerals | Earth science |
31112 | https://en.wikipedia.org/wiki/Tesseract | Tesseract | In geometry, a tesseract or 4-cube is a four-dimensional hypercube, analogous to a two-dimensional square and a three-dimensional cube. Just as the perimeter of the square consists of four edges and the surface of the cube consists of six square faces, the hypersurface of the tesseract consists of eight cubical cells, meeting at right angles. The tesseract is one of the six convex regular 4-polytopes.
The tesseract is also called an 8-cell, C8, (regular) octachoron, or cubic prism. It is the four-dimensional measure polytope, taken as a unit for hypervolume. Coxeter labels it the γ4 polytope. The term hypercube without a dimension reference is frequently treated as a synonym for this specific polytope.
The Oxford English Dictionary traces the word tesseract to Charles Howard Hinton's 1888 book A New Era of Thought. The term derives from the Greek téssara ('four') and aktís ('ray'), referring to the four edges from each vertex to other vertices. Hinton originally spelled the word as tessaract.
Geometry
As a regular polytope with three cubes folded together around every edge, it has Schläfli symbol {4,3,3} with hyperoctahedral symmetry of order 384. Constructed as a 4D hyperprism made of two parallel cubes, it can be named as a composite Schläfli symbol {4,3} × { }, with symmetry order 96. As a 4-4 duoprism, a Cartesian product of two squares, it can be named by a composite Schläfli symbol {4}×{4}, with symmetry order 64. As an orthotope it can be represented by composite Schläfli symbol { } × { } × { } × { } or { }4, with symmetry order 16.
Since each vertex of a tesseract is adjacent to four edges, the vertex figure of the tesseract is a regular tetrahedron. The dual polytope of the tesseract is the 16-cell with Schläfli symbol {3,3,4}, with which it can be combined to form the compound of tesseract and 16-cell.
Each edge of a regular tesseract is of the same length. This is of interest when using tesseracts as the basis for a network topology to link multiple processors in parallel computing: the distance between two nodes is at most 4 and there are many different paths to allow weight balancing.
A tesseract is bounded by eight three-dimensional hyperplanes. Each pair of non-parallel hyperplanes intersects to form 24 square faces. Three cubes and three squares intersect at each edge. There are four cubes, six squares, and four edges meeting at every vertex. All in all, a tesseract consists of 8 cubes, 24 squares, 32 edges, and 16 vertices.
Coordinates
A unit tesseract has side length 1, and is typically taken as the basic unit for hypervolume in 4-dimensional space. The unit tesseract in a Cartesian coordinate system for 4-dimensional space has two opposite vertices at coordinates (0, 0, 0, 0) and (1, 1, 1, 1), and other vertices with coordinates at all possible combinations of 0s and 1s. It is the Cartesian product of the closed unit interval [0, 1] in each axis.
Sometimes a unit tesseract is centered at the origin, so that its coordinates are the more symmetrical (±1/2, ±1/2, ±1/2, ±1/2). This is the Cartesian product of the closed interval [−1/2, 1/2] in each axis.
Another commonly convenient tesseract is the Cartesian product of the closed interval [−1, 1] in each axis, with vertices at coordinates (±1, ±1, ±1, ±1). This tesseract has side length 2 and hypervolume 2^4 = 16.
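As an illustrative sketch (not part of the article text), the 16 vertices of the unit tesseract and its 32 edges can be enumerated directly from the coordinate description above.

    # Sketch: vertices and edges of the unit tesseract [0,1]^4.
    # Two vertices share an edge exactly when their coordinates differ in one position.
    from itertools import product

    vertices = list(product((0, 1), repeat=4))
    edges = [(v, w) for i, v in enumerate(vertices) for w in vertices[i + 1:]
             if sum(a != b for a, b in zip(v, w)) == 1]

    print(len(vertices), len(edges))   # 16 32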
Net
An unfolding of a polytope is called a net. There are 261 distinct nets of the tesseract. The unfoldings of the tesseract can be counted by mapping the nets to paired trees (a tree together with a perfect matching in its complement).
Construction
The construction of hypercubes can be imagined the following way:
1-dimensional: Two points A and B can be connected to become a line, giving a new line segment AB.
2-dimensional: Two parallel line segments AB and CD separated by a distance of AB can be connected to become a square, with the corners marked as ABCD.
3-dimensional: Two parallel squares ABCD and EFGH separated by a distance of AB can be connected to become a cube, with the corners marked as ABCDEFGH.
4-dimensional: Two parallel cubes ABCDEFGH and IJKLMNOP separated by a distance of AB can be connected to become a tesseract, with the corners marked as ABCDEFGHIJKLMNOP. However, this parallel positioning of two cubes such that their 8 corresponding pairs of vertices are each separated by a distance of AB can only be achieved in a space of 4 or more dimensions.
The 8 cells of the tesseract may be regarded (three different ways) as two interlocked rings of four cubes.
The tesseract can be decomposed into smaller 4-polytopes. It is the convex hull of the compound of two demitesseracts (16-cells). It can also be triangulated into 4-dimensional simplices (irregular 5-cells) that share their vertices with the tesseract. Such triangulations have been enumerated, and the fewest 4-dimensional simplices in any of them is 16.
The dissection of the tesseract into instances of its characteristic simplex (a particular orthoscheme) is the most basic direct construction of the tesseract possible. The characteristic 5-cell of the 4-cube is a fundamental region of the tesseract's defining symmetry group, the [4,3,3] group, which generates the B4 polytopes. The tesseract's characteristic simplex directly generates the tesseract through the actions of the group, by reflecting itself in its own bounding facets (its mirror walls).
Radial equilateral symmetry
The radius of a hypersphere circumscribed about a regular polytope is the distance from the polytope's center to one of the vertices, and for the tesseract this radius is equal to its edge length; the diameter of the sphere, the length of the diagonal between opposite vertices of the tesseract, is twice the edge length. Only a few uniform polytopes have this property, including the four-dimensional tesseract and 24-cell, the three-dimensional cuboctahedron, and the two-dimensional hexagon. In particular, the tesseract is the only hypercube (other than a zero-dimensional point) that is radially equilateral. The longest vertex-to-vertex diagonal of an n-dimensional hypercube of unit edge length is √n, which for the square is √2, for the cube is √3, and only for the tesseract is √4 = 2 edge lengths.
An axis-aligned tesseract inscribed in a unit-radius 3-sphere has vertices with coordinates (±1/2, ±1/2, ±1/2, ±1/2).
Properties
For a tesseract with side length s:
Hypervolume (4D): s^4
Surface "volume" (3D): 8s^3
Face diagonal: √2 s
Cell diagonal: √3 s
4-space diagonal: 2s
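A small sketch evaluating these formulas for a given side length (illustrative only):

    # Sketch computing the quantities listed above for side length s.
    import math

    def tesseract_properties(s):
        return {
            "hypervolume": s**4,
            "surface_volume": 8 * s**3,
            "face_diagonal": math.sqrt(2) * s,
            "cell_diagonal": math.sqrt(3) * s,
            "4-space_diagonal": 2 * s,
        }

    print(tesseract_properties(1.0))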
As a configuration
This configuration matrix represents the tesseract. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers say how many of each element occur in the whole tesseract. The diagonal reduces to the f-vector (16,32,24,8).
The nondiagonal numbers say how many of the column's element occur in or at the row's element. For example, the 2 in the first column of the second row indicates that there are 2 vertices in (i.e., at the extremes of) each edge; the 4 in the second column of the first row indicates that 4 edges meet at each vertex.
The bottom row shows that the facets, here cubes, have f-vector (8,12,6). The row above it, to the left of the diagonal, gives the ridge elements (the facets of a cube), here squares, with f-vector (4,4).
The top row is the f-vector of the vertex figure, here a tetrahedron, (4,6,4). The row below it gives the vertex figure's ridge, here a triangle, (3,3).
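Assembled from the counts stated above (4 edges, 6 faces, and 4 cells at each vertex; 2 vertices, 3 faces, and 3 cells at each edge; and so on), the matrix can be written out explicitly; a sketch in Python-style notation:

    # The tesseract configuration matrix, assembled from the counts stated above
    # (rows/columns ordered vertex, edge, face, cell; diagonal = f-vector).
    tesseract_configuration = [
        [16,  4,  6,  4],   # each vertex: 4 edges, 6 faces, 4 cells
        [ 2, 32,  3,  3],   # each edge: 2 vertices, 3 faces, 3 cells
        [ 4,  4, 24,  2],   # each face: 4 vertices, 4 edges, 2 cells
        [ 8, 12,  6,  8],   # each cell: 8 vertices, 12 edges, 6 faces
    ]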
Projections
It is possible to project tesseracts into three- and two-dimensional spaces, similarly to projecting a cube into two-dimensional space.
The cell-first parallel projection of the tesseract into three-dimensional space has a cubical envelope. The nearest and farthest cells are projected onto the cube, and the remaining six cells are projected onto the six square faces of the cube.
The face-first parallel projection of the tesseract into three-dimensional space has a cuboidal envelope. Two pairs of cells project to the upper and lower halves of this envelope, and the four remaining cells project to the side faces.
The edge-first parallel projection of the tesseract into three-dimensional space has an envelope in the shape of a hexagonal prism. Six cells project onto rhombic prisms, which are laid out in the hexagonal prism in a way analogous to how the faces of the 3D cube project onto six rhombs in a hexagonal envelope under vertex-first projection. The two remaining cells project onto the prism bases.
The vertex-first parallel projection of the tesseract into three-dimensional space has a rhombic dodecahedral envelope. Two vertices of the tesseract are projected to the origin. There are exactly two ways of dissecting a rhombic dodecahedron into four congruent rhombohedra, giving a total of eight possible rhombohedra, each a projected cube of the tesseract. This projection is also the one with maximal volume. One set of projection vectors are , , .
Tessellation
The tesseract, like all hypercubes, tessellates Euclidean space. The self-dual tesseractic honeycomb consisting of 4 tesseracts around each face has Schläfli symbol {4,3,3,4}. Hence, the tesseract has a dihedral angle of 90°.
The tesseract's radial equilateral symmetry makes its tessellation the unique regular body-centered cubic lattice of equal-sized spheres, in any number of dimensions.
Related polytopes and honeycombs
The tesseract is the fourth in the series of hypercubes.
The tesseract (8-cell) is the third in the sequence of 6 convex regular 4-polytopes (in order of size and complexity).
As a uniform duoprism, the tesseract exists in a sequence of uniform duoprisms: {p}×{4}.
The regular tesseract, along with the 16-cell, exists in a set of 15 uniform 4-polytopes with the same symmetry. The tesseract {4,3,3} exists in a sequence of regular 4-polytopes and honeycombs, {p,3,3} with tetrahedral vertex figures, {3,3}. The tesseract is also in a sequence of regular 4-polytope and honeycombs, {4,3,p} with cubic cells.
The regular complex polytope 4{4}2, in complex 2-space, has a real representation as a tesseract or 4-4 duoprism in 4-dimensional space. 4{4}2 has 16 vertices and 8 4-edges. Its symmetry is 4[4]2, order 32. It also has a lower symmetry construction as 4{}×4{}, with symmetry 4[2]4, order 16. This is the symmetry if the red and blue 4-edges are considered distinct.
In popular culture
Since their discovery, four-dimensional hypercubes have been a popular theme in art, architecture, and science fiction. Notable examples include:
"And He Built a Crooked House", Robert Heinlein's 1940 science fiction story featuring a building in the form of a four-dimensional hypercube. This and Martin Gardner's "The No-Sided Professor", published in 1946, are among the first in science fiction to introduce readers to the Moebius band, the Klein bottle, and the hypercube (tesseract).
Crucifixion (Corpus Hypercubus), a 1954 oil painting by Salvador Dalí featuring a four-dimensional hypercube unfolded into a three-dimensional Latin cross.
The Grande Arche, a monument and building near Paris, France, completed in 1989. According to the monument's engineer, Erik Reitzel, the Grande Arche was designed to resemble the projection of a hypercube.
Fez, a video game where one plays a character who can see beyond the two dimensions other characters can see, and must use this ability to solve platforming puzzles. Features "Dot", a tesseract who helps the player navigate the world and tells how to use abilities, fitting the theme of seeing beyond human perception of known dimensional space.
The word tesseract has been adopted for numerous other uses in popular culture, including as a plot device in works of science fiction, often with little or no connection to the four-dimensional hypercube; see Tesseract (disambiguation).
| Mathematics | Four-dimensional space | null |
31128 | https://en.wikipedia.org/wiki/Theobromine | Theobromine | Theobromine, also known as xantheose, is the principal alkaloid of Theobroma cacao (cacao plant). Theobromine is slightly water-soluble (330 mg/L) with a bitter taste. In industry, theobromine is used as an additive and precursor to some cosmetics. It is found in chocolate, as well as in a number of other foods, including tea (Camellia sinensis), some American hollies (yaupon and guayusa) and the kola nut. It is a white or colourless solid, but commercial samples can appear yellowish.
Structure
Theobromine is a flat molecule, a derivative of purine and an isomer of theophylline. It is also classified as a dimethyl xanthine. Related compounds include theophylline, caffeine, paraxanthine, and 7-methylxanthine, each of which differs in the number or placement of the methyl groups.
History
Theobromine was first discovered in 1841 in cacao beans by the chemist A. Woskresensky. Synthesis of theobromine from xanthine was first reported in 1882 by Hermann Emil Fischer.
Etymology
Theobromine is derived from Theobroma, the name of the genus of the cacao tree, with the suffix -ine given to alkaloids and other basic nitrogen-containing compounds. That name in turn is made up of the Greek roots theo ("god") and broma ("food"), meaning "food of the gods".
Despite its name, the compound contains no bromine, which is based on Greek bromos ("stench").
Sources
Theobromine is the primary alkaloid found in cocoa and chocolate. Cocoa butter only contains trace amounts of theobromine. There are usually higher concentrations in dark than in milk chocolate.
There are approximately of theobromine in of milk chocolate, while the same amount of dark chocolate contains about . Cocoa beans naturally contain approximately 1% theobromine.
Plant species and components with substantial amounts of theobromine are:
Theobroma cacao – seed and seed coat
Theobroma bicolor – seed coat
Ilex paraguariensis – leaf
Ilex guayusa – leaf
Ilex vomitoria – leaf
Camellia sinensis – leaf
Theobromine can also be found in trace amounts in the kola nut, the guarana berry, yerba mate (Ilex paraguariensis), and the tea plant.
The mean theobromine concentrations in cocoa and carob products are:
Biosynthesis
Theobromine is a purine alkaloid derived from xanthosine, a nucleoside. Cleavage of the ribose and N-methylation yields 7-methylxanthosine. 7-Methylxanthosine in turn is the precursor to theobromine, which in turn is the precursor to caffeine.
Pharmacology
Even without dietary intake, theobromine may occur in the body as it is a product of the human metabolism of caffeine, which is metabolised in the liver into 12% theobromine, 4% theophylline, and 84% paraxanthine.
In the liver, theobromine is metabolized into xanthine and subsequently into methyluric acid. Important enzymes include CYP1A2 and CYP2E1. The elimination half life of theobromine is between 6 and 8 hours.
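As a purely illustrative aside (not from the article), the sketch below applies first-order elimination kinetics, the usual assumption behind a quoted half-life, to the 6–8 hour range above to estimate what fraction of an absorbed dose remains after a day.

```python
def fraction_remaining(hours: float, half_life_h: float) -> float:
    """Fraction of a dose left after `hours`, assuming first-order elimination."""
    return 0.5 ** (hours / half_life_h)

# With a 6-8 hour half-life, roughly 6-13% of an absorbed dose remains
# after 24 hours (illustrative figures only, not clinical guidance).
for t_half in (6.0, 8.0):
    print(t_half, round(fraction_remaining(24.0, t_half), 3))
```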
Unlike caffeine, which is highly water-soluble, theobromine is only slightly water-soluble and is more fat soluble, and thus peaks more slowly in the blood. While caffeine peaks after only 30 minutes, theobromine requires 2–3 hours to peak.
The primary mechanism of action for theobromine inside the body is inhibition of adenosine receptors. Its effect as a phosphodiesterase inhibitor is thought to be small.
Effects
Humans
Theobromine is a heart stimulator and diuretic but has no significant stimulant effect on the human central nervous system. It is a bronchodilator and causes relaxation of vascular smooth muscle. It is available as a prescription drug in South Korea. The amount of theobromine found in chocolate is small enough that chocolate can, in general, be safely consumed by humans.
Compared with caffeine, theobromine is weaker in both its inhibition of cyclic nucleotide phosphodiesterases and its antagonism of adenosine receptors. The potential phosphodiesterase inhibitory effect of theobromine is seen only at amounts much higher than what people normally would consume in a typical diet including chocolate.
Toxicity
At doses of 0.8–1.5 g/day (50–100 g cocoa), sweating, trembling and severe headaches were noted, with limited mood effects found at 250 mg/day.
Also, chocolate may be a factor for heartburn in some people because theobromine may affect the esophageal sphincter muscle in a way that permits stomach acids to enter the esophagus.
Animals
Theobromine is the reason chocolate is poisonous to dogs. Dogs and other animals that metabolize theobromine (found in chocolate) more slowly can succumb to theobromine poisoning from as little as of milk chocolate for a smaller dog and , or around nine small milk chocolate bars, for an average-sized dog. The concentration of theobromine in dark chocolates (about ) is up to 10 times that of milk chocolate (), meaning dark chocolate is far more toxic to dogs per unit weight or volume than milk chocolate.
The median lethal dose of theobromine for dogs is ; therefore, a dog would need to consume a minimum of of the most theobromine-rich () dark chocolate, or a maximum of (of theobromine-rich milk chocolate), to have a 50% chance of receiving a lethal dose. However, even of milk chocolate may induce vomiting and diarrhea.
The same risk is reported for cats as well, although cats are less likely to ingest sweet food, as cats lack sweet taste receptors. Complications include digestive issues, dehydration, excitability, and a slow heart rate. Later stages of theobromine poisoning include epileptic-like seizures and death. If caught early on, theobromine poisoning is treatable. Although not common, the effects of theobromine poisoning can be fatal.
| Physical sciences | Alkaloids | Chemistry |
31150 | https://en.wikipedia.org/wiki/Lagrange%27s%20theorem%20%28group%20theory%29 | Lagrange's theorem (group theory) | In the mathematical field of group theory, Lagrange's theorem states that if H is a subgroup of any finite group G, then |H| is a divisor of |G|, i.e. the order (number of elements) of every subgroup H divides the order of the group G.
The theorem is named after Joseph-Louis Lagrange. The following variant states that for a subgroup H of a finite group G, not only is |G|/|H| an integer, but its value is the index [G : H], defined as the number of left cosets of H in G.
This variant holds even if G is infinite, provided that |G|, |H|, and [G : H] are interpreted as cardinal numbers.
Proof
The left cosets of H in G are the equivalence classes of a certain equivalence relation on G: specifically, call x and y in G equivalent if there exists h in H such that x = yh.
Therefore, the set of left cosets forms a partition of G.
Each left coset aH has the same cardinality as H because x ↦ ax defines a bijection from H to aH (the inverse is y ↦ a⁻¹y).
The number of left cosets is the index [G : H].
By the previous three sentences, |G| = [G : H] · |H|.
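The counting in this proof can be checked by brute force on a small example. The following sketch (illustrative only, not part of the article) enumerates every subset of S3 that is closed under composition and contains the identity; for a finite group such a subset is exactly a subgroup, and every order found divides |S3| = 6.

```python
from itertools import permutations, combinations

# Elements of S3 as tuples p, where p[i] is the image of i.
S3 = list(permutations(range(3)))
identity = tuple(range(3))

def compose(p, q):
    """Composition p∘q acting on {0, 1, 2}."""
    return tuple(p[q[i]] for i in range(3))

def is_closed(subset):
    """For a finite group, a non-empty subset closed under the operation is a subgroup."""
    return all(compose(a, b) in subset for a in subset for b in subset)

subgroup_orders = set()
for r in range(1, len(S3) + 1):
    for candidate in combinations(S3, r):
        subset = set(candidate)
        if identity in subset and is_closed(subset):
            subgroup_orders.add(len(subset))

print(sorted(subgroup_orders))                          # [1, 2, 3, 6]
assert all(len(S3) % n == 0 for n in subgroup_orders)   # Lagrange's theorem holds
```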
Extension
Lagrange's theorem can be extended to the equation of indexes between three subgroups of G: if K ⊆ H ⊆ G are subgroups, then [G : K] = [G : H] [H : K].
If we take K = {e} (e is the identity element of G), then [G : {e}] = |G| and [H : {e}] = |H|. Therefore, we can recover the original equation |G| = [G : H] |H|.
Applications
A consequence of the theorem is that the order of any element a of a finite group (i.e. the smallest positive integer k with a^k = e, where e is the identity element of the group) divides the order of that group, since the order of a is equal to the order of the cyclic subgroup generated by a. If the group has n elements, it follows that a^n = e.
This can be used to prove Fermat's little theorem and its generalization, Euler's theorem. These special cases were known long before the general theorem was proved.
The theorem also shows that any group of prime order is cyclic and simple, since the subgroup generated by any non-identity element must be the whole group itself.
Lagrange's theorem can also be used to show that there are infinitely many primes: suppose there were a largest prime p. Any prime divisor q of the Mersenne number 2^p − 1 satisfies 2^p ≡ 1 (mod q) (see modular arithmetic), meaning that the order of 2 in the multiplicative group (Z/qZ)* is p. By Lagrange's theorem, the order of 2 must divide the order of (Z/qZ)*, which is q − 1. So p divides q − 1, giving p < q, contradicting the assumption that p is the largest prime.
Existence of subgroups of given order
Lagrange's theorem raises the converse question as to whether every divisor of the order of a group is the order of some subgroup. This does not hold in general: given a finite group G and a divisor d of |G|, there does not necessarily exist a subgroup of G with order d. The smallest example is A4 (the alternating group of degree 4), which has 12 elements but no subgroup of order 6.
A "Converse of Lagrange's Theorem" (CLT) group is a finite group with the property that for every divisor of the order of the group, there is a subgroup of that order. It is known that a CLT group must be solvable and that every supersolvable group is a CLT group. However, there exist solvable groups that are not CLT (for example, A4) and CLT groups that are not supersolvable (for example, S4, the symmetric group of degree 4).
There are partial converses to Lagrange's theorem. For general groups, Cauchy's theorem guarantees the existence of an element, and hence of a cyclic subgroup, of order any prime dividing the group order. Sylow's theorem extends this to the existence of a subgroup of order equal to the maximal power of any prime dividing the group order. For solvable groups, Hall's theorems assert the existence of a subgroup of order equal to any unitary divisor of the group order (that is, a divisor coprime to its cofactor).
Counterexample of the converse of Lagrange's theorem
The converse of Lagrange's theorem states that if d is a divisor of the order of a group G, then there exists a subgroup H where |H| = d.
We will examine the alternating group A4, the set of even permutations, as a subgroup of the symmetric group S4.
Here |A4| = 12, so the divisors are 1, 2, 3, 4, 6, 12. Assume to the contrary that there exists a subgroup H in A4 with |H| = 6.
Let V be the non-cyclic subgroup of A4 called the Klein four-group.
V = {e, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)}.
Let K = H ∩ V. Since both H and V are subgroups of A4, K is also a subgroup of A4.
From Lagrange's theorem, the order of K must divide both 6 and 4, the orders of H and V respectively. The only two positive integers that divide both 6 and 4 are 1 and 2. So |K| = 1 or |K| = 2.
Assume |K| = 1; then K = {e}. If H does not share any non-identity elements with V, then the 5 elements in H besides the identity element must be of the form (a b c) where a, b, c are distinct elements in {1, 2, 3, 4}.
Since any element of the form (a b c) squared is (a c b), and (a b c)(a c b) = e, any element of H in the form (a b c) must be paired with its inverse. Specifically, the remaining 5 elements of H would have to come from distinct inverse pairs of 3-cycles, which is impossible since such pairs contain an even number of elements and cannot total 5. Thus, the assumption that |K| = 1 is wrong, so |K| = 2.
Then K = {e, v}, where v ∈ V must be of the form (a b)(c d) with a, b, c, d distinct elements of {1, 2, 3, 4}. The other four elements in H are cycles of length 3.
Note that the cosets generated by a subgroup of a group form a partition of the group. The cosets generated by a specific subgroup are either identical to each other or disjoint. The index of a subgroup in a group, [A4 : H], is the number of cosets generated by that subgroup. Since |A4| = 12 and |H| = 6, H will generate two left cosets, one that is equal to H and another, gH, that is of length 6 and includes all the elements in A4 not in H.
Since there are only 2 distinct cosets generated by H, H must be normal. Because of that, gHg⁻¹ = H for every g in A4. In particular, this is true for g = (a b c) ∈ A4. Since v ∈ H and H is normal, gvg⁻¹ ∈ H.
Without loss of generality, assume that a = 1, b = 2, c = 3, d = 4, so that v = (1 2)(3 4) and g = (1 2 3). Then g(1) = 2, g(2) = 3, g(3) = 1, g(4) = 4, and gvg⁻¹ = (g(1) g(2))(g(3) g(4)). Transforming back, we get gvg⁻¹ = (2 3)(1 4) = (1 4)(2 3). Because V contains all products of disjoint transpositions in A4, gvg⁻¹ ∈ V. Hence, gvg⁻¹ ∈ H ∩ V = K.
Since K = {e, (1 2)(3 4)} and gvg⁻¹ = (1 4)(2 3), we have demonstrated that there is a third element in K. But earlier we assumed that |K| = 2, so we have a contradiction.
Therefore, our original assumption that there is a subgroup of order 6 is not true and consequently there is no subgroup of order 6 in and the converse of Lagrange's theorem is not necessarily true.
Q.E.D.
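The absence of an order-6 subgroup can also be verified computationally. The sketch below (illustrative only, not part of the article) lists the 12 even permutations of four symbols and tests every 6-element subset containing the identity for closure under composition; no such subset is closed, so A4 has no subgroup of order 6.

```python
from itertools import permutations, combinations

def compose(p, q):
    """Composition p∘q of permutations of {0, 1, 2, 3} given as tuples."""
    return tuple(p[q[i]] for i in range(4))

def is_even(p):
    """A permutation is even when it has an even number of inversions."""
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2 == 0

A4 = [p for p in permutations(range(4)) if is_even(p)]    # 12 even permutations
identity = tuple(range(4))
others = [p for p in A4 if p != identity]

def is_closed(subset):
    """For a finite group, a non-empty closed subset containing e is a subgroup."""
    return all(compose(a, b) in subset for a in subset for b in subset)

order6 = [rest for rest in combinations(others, 5)
          if is_closed({identity, *rest})]
print(len(A4), len(order6))    # 12 0 -> no subgroup of order 6 exists
```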
History
Lagrange himself did not prove the theorem in its general form. He stated, in his article Réflexions sur la résolution algébrique des équations, that if a polynomial in n variables has its variables permuted in all n! ways, the number of different polynomials that are obtained is always a factor of n!. (For example, if the variables x, y, and z are permuted in all 6 possible ways in the polynomial x + y − z, then we get a total of 3 different polynomials: x + y − z, x + z − y, and y + z − x. Note that 3 is a factor of 6.) The number of such polynomials is the index in the symmetric group Sn of the subgroup H of permutations that preserve the polynomial. (For the example of x + y − z, the subgroup H in S3 contains the identity and the transposition (x y).) So the size of H divides n!. With the later development of abstract groups, this result of Lagrange on polynomials was recognized to extend to the general theorem about finite groups which now bears his name.
In his Disquisitiones Arithmeticae in 1801, Carl Friedrich Gauss proved Lagrange's theorem for the special case of (Z/pZ)*, the multiplicative group of nonzero integers modulo p, where p is a prime. In 1844, Augustin-Louis Cauchy proved Lagrange's theorem for the symmetric group Sn.
Camille Jordan finally proved Lagrange's theorem for the case of any permutation group in 1861.
| Mathematics | Abstract algebra | null |
2455731 | https://en.wikipedia.org/wiki/Generation%20%28particle%20physics%29 | Generation (particle physics) | In particle physics, a generation or family is a division of the elementary particles. Between generations, particles differ by their flavour quantum number and mass, but their electric and strong interactions are identical.
There are three generations according to the Standard Model of particle physics. Each generation contains two types of leptons and two types of quarks. The two leptons may be classified into one with electric charge −1 (electron-like) and one neutral (neutrino); the two quarks may be classified into one with charge −1/3 (down-type) and one with charge +2/3 (up-type). The basic features of quark–lepton generations or families, such as their masses and mixings, can be described by some of the proposed family symmetries.
Overview
Each member of a higher generation has greater mass than the corresponding particle of the previous generation, with the possible exception of the neutrinos (whose small but non-zero masses have not been accurately determined). For example, the first-generation electron has a mass of only about 0.511 MeV/c², the second-generation muon has a mass of about 106 MeV/c², and the third-generation tau has a mass of about 1777 MeV/c² (almost twice as heavy as a proton). This mass hierarchy causes particles of higher generations to decay to the first generation, which explains why everyday matter (atoms) is made of particles from the first generation only. Electrons surround a nucleus made of protons and neutrons, which contain up and down quarks. The second and third generations of charged particles do not occur in normal matter and are only seen in extremely high-energy environments such as cosmic rays or particle accelerators. The term generation was first introduced by Haim Harari at the Les Houches Summer School in 1976.
Neutrinos of all generations stream throughout the universe but rarely interact with other matter.
It is hoped that a comprehensive understanding of the relationship between the generations of the leptons may eventually explain the ratio of masses of the fundamental particles, and shed further light on the nature of mass generally, from a quantum perspective.
Fourth generation
Fourth and further generations are considered unlikely by many (but not all) theoretical physicists. Some arguments against the possibility of a fourth generation are based on the subtle modifications of precision electroweak observables that extra generations would induce; such modifications are strongly disfavored by measurements. Models that introduce a new quark as an electroweak isosinglet generate flavour-changing neutral currents (FCNC) at tree level in the electroweak sector. Furthermore, a fourth generation with a 'light' neutrino (one with a mass less than about ) has been ruled out by measurements of the decay widths of the Z boson at CERN's Large Electron–Positron Collider (LEP).
Nonetheless, searches at high-energy colliders for particles from a fourth generation continue, but as yet no evidence has been observed.
In such searches, fourth-generation particles are denoted by the same symbols as third-generation ones with an added prime (e.g. b′ and t′).
The lower bound for a fourth generation of quark (b′, t′) masses is currently at 1.4 TeV from experiments at the LHC.
The lower bound for a fourth generation neutrino (ν'τ) mass is currently at about 60 GeV (millions of times larger than the upper bound for the other 3 neutrino masses).
The lower bound for a fourth generation charged lepton (τ′) mass is currently 100 GeV, with a proposed upper bound of 1.2 TeV from unitarity considerations.
If the Koide formula continues to hold, the masses of the fourth generation charged lepton would be 44 GeV (ruled out) and b′ and t′ should be 3.6 TeV and 84 TeV respectively. (The maximum possible energy for protons in the LHC is about 6 TeV.)
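For context, the Koide relation referred to above can be evaluated directly from measured charged-lepton masses. The sketch below is illustrative only; the rounded reference mass values are assumptions introduced here, not figures from this article.

```python
import math

# Approximate charged-lepton masses in MeV/c^2 (rounded reference values).
m_e, m_mu, m_tau = 0.511, 105.66, 1776.86

# Koide's relation: Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2,
# predicted to equal 2/3.
Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(round(Q, 5), 2 / 3)    # ~0.66666 vs 0.66667
```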
Origin
The origin of multiple generations of fermions, and the particular count of 3, is an unsolved problem of physics. String theory provides a cause for multiple generations, but the particular number depends on the details of the compactification of the D-brane intersections. Additionally, grand unified theories in 10 dimensions compactified on certain orbifolds down to 4 D naturally contain 3 generations of matter. This includes many heterotic string theory models.
In standard quantum field theory, under certain assumptions, a single fermion field can give rise to multiple fermion poles with mass ratios of around and potentially explaining the large ratios of fermion masses between successive generations and their origin.
The existence of precisely three generations with the correct structure was at least tentatively derived from first principles through a connection with gravity. The result implies a unification of gauge forces into SU(5). The question regarding the masses is unsolved, but this is a logically separate question, related to the Higgs sector of the theory.
| Physical sciences | Particle physics: General | Physics |
2455842 | https://en.wikipedia.org/wiki/Strawberry | Strawberry | The garden strawberry (or simply strawberry; Fragaria × ananassa) is a widely grown hybrid cultivated worldwide for its fruit. The genus Fragaria, the strawberries, is in the rose family, Rosaceae. The fruit is appreciated for its aroma, bright red colour, juicy texture, and sweetness. It is eaten either fresh or in prepared foods such as jam, ice cream, and chocolates. Artificial strawberry flavourings and aromas are widely used in commercial products. Botanically, the strawberry is not a berry but an aggregate accessory fruit. Each apparent 'seed' on the outside of the strawberry is actually an achene, a botanical fruit with a seed inside it.
The garden strawberry was first bred in Brittany, France, in the 1750s via a cross of F. virginiana from eastern North America and F. chiloensis, which was brought from Chile by Amédée-François Frézier in 1714. Cultivars of F. × ananassa have replaced the woodland strawberry F. vesca in commercial production. In 2022, world production of strawberries exceeded nine million tons, led by China with 35% of the total.
Strawberries have appeared in literature and art from Roman times; Virgil wrote about the snake lurking beneath the strawberry, an image reinterpreted by later writers including Shakespeare. Strawberries appear in Italian, Flemish, and German paintings, including Hieronymus Bosch's The Garden of Earthly Delights. It has been understood to symbolise the ephemerality of earthly joys or the benefit that blessed souls get from religion, or to allegorise death and resurrection. By the late 20th century, its meaning had shifted: it symbolised female sexuality.
Evolution
History and taxonomy
In Europe, until the 17th century cultivated plants were obtained by transplanting strawberries from the forests; the plants were propagated asexually by pegging down the runners, allowing them to root, and then separating the new plants. F. virginiana, the Virginia strawberry, was brought to Europe from eastern North America; F. chiloensis, the Chilean strawberry, was brought from Chile by Amédée-François Frézier in 1714. At first introduction to Europe, the Chilean strawberry plants grew vigorously, but produced no fruit. French gardeners in Brittany in the 1750s noticed that the Chilean plants bore only female flowers. They planted the wild woodland strawberry F. vesca among the Chilean plants to provide pollen; the Chilean strawberry plants then bore abundant fruits.
In 1759, Philip Miller recorded the 'pine strawberry' (F. ananassa) in Chelsea, England. In the gardens of the Palace of Versailles, France, Antoine Nicolas Duchesne found in 1766 that F. ananassa was a hybrid of the recently arrived F. chiloensis and F. virginiana. In 1806, Michael Keens of Isleworth, England selected the Keens Imperial cultivar from many hybrids, winning the Royal Horticultural Society's Silver Cup. Both the names 'pine' and 'ananassa' meant "pineapple", for the fruit's flavour. Modern strawberries and both parent species are octoploid (8N, meaning they have 8 sets of 7 chromosomes). The genome sequence of the garden strawberry was published in 2019.
Further breeding in the following centuries produced varieties with a longer cropping season and more fruit. During the Green Revolution of the 1950s, agronomists used selective breeding to expand phenotypic diversity of the garden strawberry. Adoption of perpetual flowering hybrids not sensitive to changes in photoperiod gave higher yields and enabled production in California to expand.
Phylogeny
The phylogeny of the cultivated strawberry within the genus Fragaria of the Rosaceae family was determined by chloroplast genomics in 2021. The polyploidy (number of sets of chromosomes) is shown as "2N" etc. by each species.
Description
In culinary terms, a strawberry is an edible fruit. From a botanical point of view, it is not a berry but an aggregate accessory fruit, because the fleshy part is derived from the receptacle. Each apparent seed on the outside of the strawberry is actually an achene, a botanical fruit with a seed inside it.
Composition
Nutrition
Raw strawberries are 91% water, 8% carbohydrates, 1% protein, and contain negligible fat (table). A reference amount of 100 g supplies 33 kilocalories, is a rich source of vitamin C (65% of the Daily Value, DV), and a good source of manganese (17% DV), with no other micronutrients in significant content (table). Strawberries contain a modest amount of essential unsaturated fatty acids in the achene (seed) oil.
Phytochemicals
Garden strawberries contain diverse phytochemicals, including the dimeric ellagitannin agrimoniin, which is an isomer of sanguiin H-6. Other polyphenols present include flavonoids, such as anthocyanins, flavanols, flavonols and phenolic acids, such as hydroxybenzoic acid and hydroxycinnamic acid. Although achenes comprise only about 1% of the total fresh weight of a strawberry, they contribute 11% of all polyphenols in the whole fruit; achene phytochemicals include ellagic acid, ellagic acid glycosides, and ellagitannins.
Pelargonidin-3-glucoside is the major anthocyanin pigment in strawberries, giving them their red colour, with cyanidin-3-glucoside in smaller amounts. Strawberries also contain purple minor pigments, such as dimeric anthocyanins.
Flavour and fragrance
Sweetness, fragrance and complex flavour are important attributes of strawberries. In plant breeding and farming, emphasis is placed on sugars, acids, and volatile compounds, which improve the taste and fragrance of the ripe fruit. Esters, terpenes, and furans are the chemical compounds having the strongest relationships to strawberry flavour, sweetness and fragrance, with a total of 31 out of some 360 volatile compounds significantly correlated to desirable flavour and fragrance. In breeding strawberries for the commercial market in the United States, the volatile compounds methyl anthranilate and gamma-decalactone, prominent in aromatic wild strawberries, are especially desired for their "sweet and fruity" aroma characteristics. As strawberry flavour and fragrance appeal to consumers, they are used widely in manufacturing, including foods, beverages, perfumes and cosmetics.
Allergy
Some people experience an anaphylactoid reaction to eating strawberries. The most common form of this reaction is oral allergy syndrome, but symptoms may also mimic hay fever or include dermatitis or hives, and, in severe cases, may cause breathing problems. Proteomic studies indicate that the allergen may be tied to a protein for the red anthocyanin biosynthesis expressed in strawberry ripening, named Fra a1 (Fragaria allergen1). White-fruited strawberry cultivars, lacking Fra a1, may be an option for people allergic to strawberries. They ripen but remain pale, appearing like immature berries. A virtually allergen-free cultivar named 'Sofar' is available.
Varieties
Strawberries are often grouped according to their flowering habit. Traditionally in the Northern Hemisphere, this has consisted of a division between "June-bearing" strawberries, which bear their fruit in the early summer, and "everbearing" strawberries, which often bear several crops of fruit throughout the season. One plant may produce fruit 50 to 60 times throughout a season, or roughly once every three days. Strawberries occur in three basic flowering habits: short-day, long-day, and day-neutral. These describe the day-length sensitivity of the plant and the type of photoperiod that induces flower formation. Day-neutral cultivars produce flowers regardless of the photoperiod. Strawberry cultivars vary widely in size, colour, flavour, shape, degree of fertility, season of ripening, liability to disease and constitution of plant.
Cultivation
Production
In 2022, world production of strawberries was 9.6 million tonnes, led by China with 35 percent of the total and the United States and Turkey as other significant producers. Due to the relatively fragile nature of the strawberry, approximately 35 percent of the $2.2 billion United States crop was spoiled in 2020. An Idaho company announced plans to launch more durable gene-edited strawberries. In the U.S., it cost growers around $35,000 per acre to plant and $35,000 per acre to harvest strawberries.
For commercial production, plants can be propagated from bare root plants or plugs. One method of cultivation uses annual plasticulture; another is a perennial system of matted rows or mounds which has been used in cold growing regions for many years. In some areas, greenhouses are used; in principle they could provide strawberries during the off season for field crops.
In the plasticulture system, raised beds are covered with plastic to prevent weed growth and erosion. Plants are planted through holes punched in this covering. Irrigation tubing can be run underneath if necessary.
Another method uses a compost sock. Plants grown in compost socks have been shown to produce significantly more flavonoids, anthocyanins, fructose, glucose, sucrose, malic acid, and citric acid than fruit produced in the black plastic mulch or matted row systems. Similar results in an earlier study conducted by the United States Department of Agriculture confirm that compost plays a role in the bioactive qualities of two strawberry cultivars.
Strawberries may be propagated by seed. Strawberries can be grown indoors in pots. Strawberries will not grow indoors in winter though an experiment using a combination of blue and red LED lamps shows that this could be achieved in principle. In Florida, winter is the natural growing season and harvesting begins in mid-November.
Manuring and harvesting
Nitrogen fertiliser is often needed at the beginning of every planting year. There are normally adequate levels of phosphorus and potash when fields have been fertilised for other crops in preceding years. To provide more organic matter, a cover crop of wheat or rye can be planted in the year before planting the strawberries. Strawberries prefer a somewhat acidic pH from 5.5 to 6.5, so lime is usually not required.
To achieve top quality, berries are harvested at least every other day. The berries are picked with the caps and half the stem still attached. Strawberries need to remain on the plant until fully ripe, because they do not continue to ripen after being picked. The harvesting and cleaning process has not changed substantially over time. As they are delicate, strawberries are still often harvested by hand and packed in the field.
Domestic cultivation
Strawberries are popular in home gardens, and numerous cultivars have been selected for consumption and for exhibition purposes. The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:
'Cambridge Favourite'
'Hapil'
'Honeoye'
'Pegasus'
'Rhapsody'
'Symphony'
Pests and diseases
Over 200 species of pest arthropods attack strawberries. These include moths, fruit flies, chafers, strawberry root weevils, strawberry thrips, strawberry sap beetles, strawberry crown moth, mites, and aphids. Non-arthropod pests include slugs. Some are vectors of plant diseases; for instance, the strawberry aphid, Chaetosiphon fragaefolii, can carry the strawberry mild yellow-edge virus.
Strawberry plants are subject to many diseases, especially when subjected to stress. The leaves may be infected by powdery mildew, leaf spot (caused by the fungus Sphaerella fragariae), leaf blight (caused by the fungus Phomopsis obscurans), and by a variety of slime molds. The crown and roots may fall victim to red stele, verticillium wilt, black root rot, and nematodes. The fruits are subject to damage from gray mold (Botrytis cinerea), rhizopus rot, and leather rot.
Disease resistance and protection
The Arabidopsis gene AtNPR1 confers A. thaliana's broad-spectrum resistance when transexpressed in F. ananassa. This includes resistance to anthracnose, powdery mildew, and angular leaf spot.
A 1997 study found that many wound volatiles were effective against gray mold (B. cinerea). Both the Tribute and Chandler varieties benefited from the treatments, although the effects varied widely with substance and variety. Strawberry plants metabolise these volatiles more rapidly than do either blackberry or grape.
Culinary use
Strawberries were eaten fresh with cream in the time of Thomas Wolsey in the court of King Henry VIII. Strawberries can be frozen or made into jam or preserves, as well as dried and used in prepared foods, such as cereal bars. In the United Kingdom, strawberries and cream is a popular dessert at the Wimbledon tennis tournament. Desserts using strawberries include pavlova, fraisier, and strawberry shortcake.
In art and literature
The Roman poet Ovid wrote that in the past Golden Age, people had lived on wild fruits such as mountain strawberries. Virgil wrote in his Eclogues that "Ye who cull flowers and low-growing strawberries, / Away from here lads; a chill snake lurks in the grass", and his imagery was taken up by medieval and early modern writers, the snake beneath the strawberry standing for dangerous literature, or beautiful but unfaithful women, or eventually any risky pleasure. In this vein, Shakespeare's King Richard III asks for a dish of strawberries while feigning friendship to his enemy; while in Othello, Iago shows Desdemona's handkerchief "spotted with strawberries", implying she has been unfaithful and hinting at Iago's own devious plans.
The strawberry is found in Italian, Flemish, and German art, and in English miniatures. In medieval depictions, the strawberry often appears in the Virgin Mary's garden, while in the Madonna of the Strawberries, she is seated on a strawberry bed and garlanded with strawberry leaves.
In the work of the late medieval painter Hieronymus Bosch, strawberries feature in The Garden of Earthly Delights amongst "frolicking nude figures". Fray Jose de Siguenza described the painting as embodying the strawberry as a symbol of the ephemerality of earthly joys. More recently, scholars have seen the symbolism entirely differently: Clément Wertheim-Aymes believed it meant the blessed souls' benefit from religion; Pater Gerlach supposed it meant spiritual love; and Laurinda Dixon asserted it was part of an allegory of death and resurrection. By the late 20th century, the strawberry (and the raspberry) had become "traditional symbols of the mouth and female sexuality".
| Biology and health sciences | Rosales | null |
2458048 | https://en.wikipedia.org/wiki/White%20coat | White coat | A white coat, also known as a laboratory coat or lab coat, is a knee-length overcoat or smock worn by professionals in the medical field or by those involved in laboratory work. The coat protects their street clothes and also serves as a simple uniform. The garment is made from white or light-colored cotton, linen, or cotton polyester blend, allowing it to be washed at high temperature and making it easy to see if it is clean.
Similar coats are a symbol of learning in Argentina and Uruguay, where they are worn by both students and teachers in state schools. In Tunisia and Mozambique, teachers wear white coats to protect their street clothes from chalk.
Like the word "suit", the phrase "white coat" is sometimes used as a metonym to denote the wearer, such as a scientist working in a high-tech company.
Medicine
White coats are sometimes seen as the distinctive dress of both physicians and surgeons, who have worn them for over 100 years. In the nineteenth century, respect for the certainty of science was in stark contrast to the quackery and mysticism of nineteenth-century medicine. To emphasize the transition to the more scientific approach of modern medicine, physicians began to represent themselves as scientists, donning the most recognizable symbol of the scientist, the white laboratory coat. The modern white coat was introduced to medicine in the late 1800s as a symbol of cleanliness.
Patient perceptions
A study conducted in the United Kingdom found that the majority of patients prefer their doctors to wear white coats, but the majority of doctors prefer other clothing, such as scrubs. The study found that psychiatrists were among the least likely to wear white coats and when they are worn, they are typically worn over the scrubs. Some medical doctors view the coats as hot and uncomfortable, and many feel that they spread infection.
White coat hypertension
Some patients who have their blood pressure measured in a clinical setting have higher readings than they do when measured in a home setting. This is apparently a result of patients feeling more relaxed when they are at home. The phenomenon is sometimes called white coat hypertension, in reference to the traditional white coats worn in a clinical setting, though the coats themselves may have nothing to do with the elevated readings.
Psychiatry
The term is also used as verbal shorthand for psychiatric orderlies or other personnel and may be used, in a usually jocular manner, to imply someone's lunacy or paranoia.
White versus black
Until the mid-1920s, students who were examining cadavers would wear black lab coats to show respect for the dead. Black lab coats were used in early biomedical and microbiology laboratories. The concepts of "whiteness" and "pureness" that became established in medicine pervaded that environment at the end of the 19th and beginning of the 20th centuries, and physicians exchanged the black coat for the white one. Black coats, as opposed to white, were worn by surgeons until general anaesthesia became widespread in the early 1900s. Anaesthesia allowed surgeries to be performed more slowly and precisely, reducing mess and bloodiness; white coats then developed a symbolic association with a bloodless field.
White coat ceremony
A white coat ceremony is a relatively new ritual that marks one's entrance into medical school and, more recently, into a number of health-related schools and professions. It originated at University of Chicago's Pritzker School of Medicine in 1989 and involves a formal "robing" or "cloaking" in white lab coats.
Controversy
Studies have shown that doctor's coats worn in hospitals can harbor contagions including MRSA.
In 2007, the UK National Health Service started banning long-sleeved coats.
In 2009, the American Medical Association investigated banning coats with long sleeves to protect patients, but did not institute a ban.
A study published in 2011 investigating the effectiveness of the NHS ban showed no statistical difference in contamination levels over an 8-hour period between residents wearing long-sleeved coats and those wearing short-sleeved scrubs.
In an effort to reduce the contamination of healthcare uniforms, ASTM International is developing standards to specifically address liquid penetration resistance, liquid repellency, bacterial decontamination, and antimicrobial properties of such uniforms. The spread of infection via white coats has been reported as widespread and is much discussed in the scientific community.
The Indian physician Dr Edmond Fernandes triggered a controversy in India and parts of South Asia by calling for a ban on white coats because of the spread of nosocomial infections.
In laboratory work
When used in the laboratory, lab coats protect against accidental spills, e.g., acids. In this case, they usually have long sleeves and are made of absorbent material, such as cotton, so that the user can be protected from the chemical. Some lab coats have buttons or elastic at the end of the sleeves, to secure them around the wrist so that they do not hang into containers of chemicals or tip over lab equipment. Higher quality coats use snap-on buttons instead of traditional buttons as these are easier to quickly undo (they allow pulling the coat off directly instead of fumbling with the buttons to unhook each one). This renders taking off the coat in an emergency much faster, so these are the preferred type for laboratory work as opposed to clinical work. Short-sleeved lab coats also exist where protection from substances such as acid is not necessary, and are favored by certain scientists, such as microbiologists, avoiding the problem of hanging sleeves altogether, combined with the ease of washing the forearms (an important consideration in microbiology).
Howie coats
For added safety, a variant of the lab coat called a "Howie" style lab coat is often adopted. It is called such after a 1978 report commissioned by the UK Department of Health and Social Security to codify standard clinical laboratory practices, chaired by James Howie. Among the codified standards was protective clothing; the type of wrap-around full coverage lab coat that had been in use in the UK for over a hundred years was nicknamed the "Howie-Style" coat to indicate its compliance with the provisions of this report. It has the buttons on the left flank, elasticated wrists and a mandarin collar, and is quite similar to a chef's uniform. It is designed to minimize pathogen contact with street clothes.
Use as a school uniform
White coats which resemble lab coats are worn by students and teachers of most public primary schools as a daily uniform in countries like Argentina, Uruguay, Spain, Bolivia and Morocco, and in private schools in Colombia. It also was formerly worn during past decades in Paraguay and Chile.
| Biology and health sciences | General concepts | Health |
2458485 | https://en.wikipedia.org/wiki/Conserved%20quantity | Conserved quantity | A conserved quantity is a property or value that remains constant over time in a system even when changes occur in the system. In mathematics, a conserved quantity of a dynamical system is formally defined as a function of the dependent variables, the value of which remains constant along each trajectory of the system.
Not all systems have conserved quantities, and conserved quantities are not unique, since one can always produce another such quantity by applying a suitable function, such as adding a constant, to a conserved quantity.
Since many laws of physics express some kind of conservation, conserved quantities commonly exist in mathematical models of physical systems. For example, any classical mechanics model will have mechanical energy as a conserved quantity as long as the forces involved are conservative.
Differential equations
For a first order system of differential equations dr/dt = f(r, t),
where bold indicates vector quantities, a scalar-valued function H(r) is a conserved quantity of the system if, for all time and initial conditions in some specific domain, dH/dt = 0.
Note that by using the multivariate chain rule, dH/dt = ∇H · dr/dt = ∇H · f(r, t),
so that the definition may be written as ∇H · f(r, t) = 0,
which contains information specific to the system and can be helpful in finding conserved quantities, or establishing whether or not a conserved quantity exists.
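As a concrete check of this definition (an illustrative example, not from the article), the sketch below uses SymPy to verify that the total energy H = (x² + p²)/2 satisfies ∇H · f = 0 for the assumed harmonic-oscillator system dx/dt = p, dp/dt = −x.

```python
import sympy as sp

x, p = sp.symbols('x p')

# First-order system dr/dt = f(r): a unit-mass, unit-stiffness harmonic oscillator.
f = sp.Matrix([p, -x])                 # (dx/dt, dp/dt)

# Candidate conserved quantity: the total energy.
H = (x**2 + p**2) / 2
grad_H = sp.Matrix([sp.diff(H, x), sp.diff(H, p)])

# dH/dt along trajectories equals grad(H) . f, which vanishes identically.
print(sp.simplify(grad_H.dot(f)))      # 0, so H is conserved
```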
Hamiltonian mechanics
For a system defined by the Hamiltonian H, a function f of the generalized coordinates q and generalized momenta p has time evolution df/dt = ∂f/∂t + {f, H},
and hence is conserved if and only if ∂f/∂t + {f, H} = 0. Here {·, ·} denotes the Poisson bracket.
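As a minimal sketch of this criterion (an assumed example, not from the article), the code below defines the Poisson bracket for one degree of freedom and checks that the momentum p of a free particle with Hamiltonian H = p²/(2m) has vanishing bracket with H, so p is conserved.

```python
import sympy as sp

q, p = sp.symbols('q p')
m = sp.symbols('m', positive=True)

def poisson_bracket(f, g):
    """Poisson bracket {f, g} for a single coordinate q and momentum p."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

H = p**2 / (2 * m)                          # free particle: no dependence on q
print(sp.simplify(poisson_bracket(p, H)))   # 0, so p is conserved
print(sp.simplify(poisson_bracket(q, H)))   # p/m = dq/dt, as expected
```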
Lagrangian mechanics
Suppose a system is defined by the Lagrangian L with generalized coordinates q. If L has no explicit time dependence (so ∂L/∂t = 0), then the energy E defined by E = Σᵢ q̇ᵢ ∂L/∂q̇ᵢ − L
is conserved.
Furthermore, if ∂L/∂q = 0, then q is said to be a cyclic coordinate and the generalized momentum p defined by p = ∂L/∂q̇
is conserved. This may be derived by using the Euler–Lagrange equations.
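To illustrate the energy statement on a concrete case (an assumed example, not from the article), the sketch below takes the harmonic-oscillator Lagrangian L = m q̇²/2 − k q²/2, forms the energy E = q̇ ∂L/∂q̇ − L, and checks with SymPy that dE/dt vanishes once the Euler–Lagrange equation of motion m q̈ = −k q is imposed.

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)
qdot = q.diff(t)

# Harmonic-oscillator Lagrangian; it has no explicit time dependence.
L = m * qdot**2 / 2 - k * q**2 / 2

# Generalized momentum and energy function E = qdot * dL/dqdot - L.
p = sp.diff(L, qdot)                   # equals m*qdot
E = qdot * p - L                       # equals m*qdot**2/2 + k*q**2/2

# Substitute the Euler-Lagrange equation of motion q'' = -k*q/m into dE/dt.
dE_dt = sp.diff(E, t).subs(q.diff(t, 2), -k * q / m)
print(sp.simplify(dE_dt))              # 0, so the energy is conserved
```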
| Mathematics | Dynamical systems | null |
2459259 | https://en.wikipedia.org/wiki/Greisen | Greisen | Greisen is a highly altered granitic rock or pegmatite, usually composed predominantly of quartz and micas (mostly muscovite). Greisen is formed by self-generated alteration of a granite and is a class of moderate- to high-temperature magmatic-hydrothermal alteration related to the late-stage release of volatiles dissolved in a magma during the solidification of that magma.
Greisens are usually variably altered rocks, grading from coarse, crystalline granite, commonly vuggy with miarolitic cavities, through to quartz and muscovite rich rocks, which may be locally rich in topaz, tourmaline, cassiterite, fluorite, beryl, wolframite, siderite, molybdenite and other sulfide minerals, and other accessory minerals. They may occur as small to large veins, or large zones in the roof of some granites. The rocks can sometimes be mined as ores of tin and other minerals.
Petrogenesis
Greisens are formed by endogenous alteration of granite during the cooling stages of emplacement. Greisen fluids are formed by granites as the last highly gas- and water-rich phases of complete crystallisation of granite melts. This fluid is forced through the interstitial spaces of the granite into veins and pools at the upper margins, where boiling and rock alteration occur.
Alteration facies
Incipient greisen (granite): addition of muscovite ± chlorite, topaz, tourmaline, and fluorite (original texture of granites retained).
Greisenized granite: quartz-muscovite-topaz-fluorite, ± tourmaline (some original texture of granites retained).
Massive greisen: quartz-muscovite ± topaz ± fluorite ± tourmaline (typically no original texture preserved). Tourmaline can be ubiquitous as disseminations, concentrated or diffuse clots, or late fracture fillings. Greisen may form in any wallrock environment, but typically in granites and metamorphic rocks.
Greisen environments
Greisens appear to be restricted to intrusions which are emplaced high in the crust, generally at a depth between 0.5 and 5 km, as the hydrous fluid separation from granite to produce greisenation cannot occur deeper than about 5 kilometres. The roof or upper aureole is mostly sealed shut to prevent most fluids escaping. This sealing is largely due to hornfelsing and silicification of the overlying rocks, and fracturing of these rock typically forms greisen veins.
They are generally associated mostly with potassic plutonic rocks; Alkali feldspar granite, and are rare in less potassic rocks like granodiorite or diorite. Greisens are prospective for mineralisation because the last fluids of granite crystallization tend to concentrate incompatible metals such as tin, tungsten, molybdenum and beryllium, and in places other metals such as tantalum, gold, silver, and copper.
Tectonically, greisen granites are generally associated with generation of S-type suites of granites in thick arc and back-arc fold belts where subducted sedimentary and felsic rock is melted.
Distribution
Examples of greisen are:
Tin and tungsten deposits of Cornwall
Ardlethan, New South Wales, Australia (tin-antimony greisen)
Timbarra gold mine, New South Wales, Australia (gold greisen deposit)
Anchor Mine, Lottah, Tasmania, Australia (tin copper topaz greisen)
Pitinga topaz granite, Brazil (tin, topaz, beryl)
Lost River, Alaska, US (tin greisen)
Sisson Brook, Burnt Hill and other deposits, New Brunswick, Canada (tin-tungsten-molybdenum greisen)
Ore Mountains, Czech Republic (tin greisen)
Panasqueira Mine, Portugal (tin and tungsten deposit)
Tin Range, Stewart Island/Rakiura, New Zealand (tungsten-tin deposit)
| Physical sciences | Igneous rocks | Earth science |
2459566 | https://en.wikipedia.org/wiki/Health%20education | Health education | Health education is a profession of educating people about health. Areas within this profession encompass environmental health, physical health, social health, emotional health, intellectual health, and spiritual health, as well as sexual and reproductive health education. It can also be defined as any combination of learning activities that aim to assist individuals and communities improve their health by expanding knowledge or altering attitudes.
Health education has been defined differently by various sources. The National Conference on Preventive Medicine in 1975 defined it as "a process that informs, motivates, and helps people to adopt and maintain healthy practices and lifestyles, advocates environmental changes as needed to facilitate this goal, and conducts professional training and research to the same end." The Joint Committee on Health Education and Promotion Terminology of 2001 defined Health Education as "any combination of planned learning experiences based on sound theories that provide individuals, groups, and communities the opportunity to acquire information and the skills needed to make quality health decisions." The World Health Organization (WHO) defined Health Education as consisting of "consciously constructed opportunities for learning involving some form of communication designed to improve health literacy, including improving knowledge, and developing life skills which are conducive to individual and community health."
History
It is often thought that health education began with the beginning of healthcare in the earliest parts of history, as knowledge was passed from generation to generation. In fact, health education's roots date back to the Greeks between the sixth and fourth centuries B.C.E. According to documents that have been uncovered, they shifted their focus away from superstitious and supernatural conceptions of health and toward the physiological causes of ailments. They discussed how physical health, social settings, and human behavior are connected to preventing disease and sustaining good health. The Greeks sought to empower people and communities by establishing supportive settings and regulations that would promote taking medication and upholding healthy behaviors; they did this by educating people about their health and developing their skills. Other preserved texts from ancient civilizations in China, India, Egypt, Rome, and Persia also contain information regarding various diseases, their treatments, and even preventative measures. The first medical school was founded at the end of the 8th century in Salerno, Italy, and focused a significant portion of its curriculum on proper hygiene and healthy lifestyles. Much later, Johannes Gutenberg's printing press paved the way for making educational materials more accessible, as some of the first things to be printed were treatises regarding health. Materials containing information about hygiene and healthy lifestyle choices became popular as tools to combat epidemics. In the 19th century, "awareness-raising" began to increase to improve the knowledge of ordinary people regarding health and other topics. As medicine has continued to progress, with new fields created to address new problems, so too have the methods of providing health education.
Prior to the 1960s, the physician was primarily in charge and the patients were expected to have a passive role in their own health decisions. In 1976, the Patient Education and Counseling journal was founded and the concept of health education began to really take off. It was around this time that it became apparent that if patients are informed about their health, they could improve it through various lifestyle changes. In the 1980s, patient advocacy groups drew attention to the issue of patients' rights such as the right to be informed about health conditions and the potential options for care. The 1990s fully brought about the shared decision making model present in healthcare settings today, including the emergence of electronic health communication. Lastly, in the 21st century, there has been an emergence of associations designated as platforms for promoting health education and communication.
In the United States
The purpose and approach of health education in the United States have evolved over time. From the late nineteenth to the mid-twentieth century, the aim of public health was controlling the harm from infectious diseases, which were largely under control by the 1950s. The major recent trend regarding changing definitions of school health education is the increasing acknowledgement that school education influences adult behavior.
In the 1970s, health education was viewed in the U.S. mostly as a means of communicating healthy medical practices to those who should be practicing them. By this time, it was clear that reducing illness, death, and rising health care costs could best be achieved through a focus on health promotion and disease prevention. At the heart of the new approach was the role of a health educator.
In the 1980s definitions began to incorporate the belief that education is a means of empowerment for the individual, allowing them to make educated health decisions. Health education in the U.S. became "the process of assisting individuals... to make informed decisions about matters affecting their personal health and the health of others." This definition emerged in the same year as the first national-scale investigation of health education in schools in the United States, which eventually led to a much more aggressive approach to educating young people on matters of health. In the late 1990s the World Health Organization launched a Global Health Initiative which aimed at developing "health-promoting schools", which would enhance school health programs at all levels including: local, regional, national, and global level.
Today school health education is seen in the U.S. as a "comprehensive health curricula", combining community, schools, and patient care practice, in which "Health education covers the continuum from disease prevention and promotion of optimal health to the detection of illness to treatment, rehabilitation, and long-term care." This concept is recently prescribed in current scientific literature as 'health promotion', a phrase that is used interchangeably with health education, although health promotion is broader in focus.
Role of the Health Education Specialist
A health educator is "a professionally prepared individual who serves in a variety of roles and is specifically trained to use appropriate educational strategies and methods to facilitate the development of policies, procedures, interventions, and systems conducive to the health of individuals, groups, and communities" (Joint Committee on Terminology, 2001, p. 100). In other words, they conduct, evaluate, and design activities that pertain to the improvement of the health and well-being of humans. Examples of this include "patient educators, health education teachers, trainers, community organizers, and health program managers." Job titles vary, and because of this there is no single, uniform system of health education. In January 1978 the Role Delineation Project was put into place in order to define the basic roles and responsibilities of the health educator. The result was a Framework for the Development of Competency-Based Curricula for Entry Level Health Educators (NCHEC, 1985). A second result was a revised version of A Competency-Based Framework for the Professional Development of Certified Health Education Specialists (NCHEC, 1996). These documents outlined the seven areas of responsibility which are shown below. The Health Education Specialist Practice Analysis (HESPA II 2020) produced "a new hierarchical model with 8 Areas of Responsibility, 35 Competencies, and 193 Sub-competencies".
Health education aims to immediately impact an individual's knowledge, behavior, or attitude about a health-related topic with the ultimate aim of improving quality of life or health status for an individual. Health education utilizes several different intervention strategies in its practices to improve quality of life and health status. Health education intervention strategies involve a planned combination of elements that work together to produce change in an individual's skills, behavior, knowledge, or status related to health.
Peer Health Educators
Peer health education is described as students taking the initiative to inform their peers about how to live healthy lifestyles. Prevention is the biggest aspect of this idea and often includes alcohol, sexual health, and emotional wellbeing education, in addition to many other aspects. Sloane and Zimmer also describe peer health education as "motivational models designed to empower students to help each other promote positive health beliefs and behaviors". Health education specialists often advise peer educators as well; this creates relationships with health professionals while providing relevant resources and models necessary to educate as many students as possible.
Most research on peer educators has been done within colleges and universities in Western societies. However, a specific example of peer health education being utilized is seen within The Shantou Experience in China. In this experience, medical students were selected to educate their peers on topics from diet and safer sex to mental and physical health. Self-administered questionnaires were used to track results from the participants as well as from the peer health educators. According to the questionnaire results, "All peer educators responded positively and the majority of students respondents positively evaluated. Although some students preferred to seek health information online, approximately one-quarter of the student respondents would contact peer educators". Ultimately, peer education has greater acceptance in Western societies and would require "cultural adaptation for greater effectiveness in China" and other Eastern societies.
Teaching School Health Education
In the United States, around forty states require the teaching of health education. A comprehensive health education curriculum consists of planned learning experiences that will help students achieve desirable attitudes and practices related to critical health issues. Studies have shown that students are able to identify how emotions and healthy eating habits can possibly impact each other. Some of these are: emotional health and a positive self-image; appreciation, respect for, and care of the human body and its vital organs; physical fitness; health issues of alcohol, tobacco, drug use, and substance use disorders; health misconceptions and myths; effects of exercise on the body systems and on general well being; nutrition and weight control; sexual relationships and sexuality, the scientific, social, and economic aspects of community and ecological health; communicable and degenerative diseases including sexually transmitted diseases; disaster preparedness; safety and driver education; factors in the environment and how those factors affect an individual's or population's environmental health (ex: air quality, water quality, food sanitation); life skills; choosing professional medical and health services; and choices of health careers.
Mental Health
The topic of mental health has been getting more awareness and is becoming a more socially acceptable concept. However, the average individual's mental health literacy (MHL), one's ability to "...recognize, manage, and prevent mental disorders", remains inadequate. A well-developed MHL allows students not only to manage their own mental health but also to help support others. In Seedaket et al.'s systematic review, the authors concluded that both school-based and community-based interventions can be successful in improving MHL.
Teaching children about mental health in school can help them see mental health as a normal occurrence and not something that should be ignored. In recent times there has been an effort to increase this kind of teaching in health programs. The issue is that "...teachers have limited skills to manage complex mental health difficulties". Mental health and MHL are complex ideas, and teachers do not have the kind of medical training needed to teach students everything they need to know. To help educators teach mental health topics and build confidence in their ability to do so, more specific training is needed.
Students can also be taught about mental health through community-based interventions, which allow experts to be brought in to teach youth about the signs of mental illness and ways to help manage it. This information can increase an individual's MHL and help them later in life. Parents should also inform their children about these topics; having open discussions about mental health creates an environment where the child feels comfortable talking about it with their guardians. Parents should also be supportive and willing to listen to any problems their children have.
School National Health Education Standards
The National Health Education Standards (NHES) are written expectations for what students should know and be able to do by grades 2, 5, 8, and 12 to promote personal, family, and community health. The standards provide a framework for curriculum development and selection, instruction, and student assessment in health education. The performance indicators articulate specifically what students should know or be able to do in support of each standard by the conclusion of each grade span, from Pre-K through Grade 12, and serve as a blueprint for organizing student assessment.
Health Education Credentials in the United States
The National Commission for Health Education Credentialing (NCHEC) is a non-profit organization that provides certification and professional development opportunities for health education specialists in the United States. NCHEC was established in 1988 to improve the quality and consistency of health education in the United States. NCHEC offers several credentialing programs, including the Certified Health Education Specialist (CHES) and the Master Certified Health Education Specialist (MCHES) designations. NCHEC also provides continuing education opportunities for health education specialists, hosts an annual conference, and advocates for the profession of health education. The organization is governed by a Board of Commissioners and is supported by a network of volunteers, partners, and stakeholders in the health education field.
Health educators may gain professional certification in teaching health education in the United States by passing the Certified Health Education Specialist (CHES) exam. The CHES credential was created in 1989 and was later accredited in 2008 by the National Commission for Certifying Agencies. The National Commission for Health Education Credentialing offers this exam in April and October each year to individuals who qualify. The CHES exam consists of 150 multiple-choice, competency-based questions that test individuals on the Eight Areas of Responsibility for Health Education Specialists. These eight areas include assessing individual and community needs; planning health education programs and interventions; implementing health education programs and interventions; evaluating and researching health outcomes, programs, and interventions; advocating for health education; leadership and management in health education; communicating health education; and ethics and professionalism for health educators. Individuals are eligible to take the exam if they meet certain academic and educational requirements. Individuals must hold a bachelor's, master's, or doctoral degree obtained from an accredited institution. The transcript of this degree must show that the individual completed a major related to health education or a minimum of 25 semester hours in qualifying health education-related courses. Individuals who have not yet obtained their bachelor's, master's, or doctoral degree but otherwise qualify for the CHES exam may sit for the exam on the condition that they will graduate within ninety days of their CHES examination date.
The National Commission for Health Education Credentialing later created the Master Certified Health Education Specialist (MCHES) exam in order to certify advanced competencies in health education specialists. The MCHES exam was first administered in 2011, and it gained accreditation from the National Commission for Certifying Agencies in 2013. Individuals who have actively held the CHES certification for five years are eligible to take the MCHES exam. Individuals who are not CHES certified, or who have been actively CHES certified for less than five years, are also eligible if they have five years of work experience as a health education specialist and either a master's degree in a field related to health education or a minimum of 25 semester hours of qualifying health education coursework at the master's or doctoral level.
It is not required for individuals to obtain CHES or MCHES certification in order to work as a health education specialist in the United States. However, many employers give preference to applicants who are Certified Health Education Specialists, and both credentials allow individuals to increase their employment opportunities and competitiveness.
Health Education Code of Ethics
The Health Education Code of Ethics has been a work in progress since approximately 1976, begun by the Society for Public Health Education (SOPHE).
"The Code of Ethics that has evolved from this long and arduous process is not seen as a completed project. Rather, it is envisioned as a living document that will continue to evolve as the practice of Health Education changes to meet the challenges of the new millennium."
Health Education Societies in the United States
Society for Public Health Educators
The Society for Public Health Educators (SOPHE) is an independent professional society of health educators, academics, and education researchers that was founded in 1950. Their mission is to "Promote the health of all people through education". SOPHE works with different health educators to promote healthy behaviors, healthy communities, and healthy environments. SOPHE helps fund and drive research on health education theory and practice.
American Public Health Association
The American Public Health Association (APHA) is a professional association that promotes good health and strengthens the public health profession by covering general information, issues, policies, news, and much more regarding the topic of health. The mission of this association is to "improve the health of the public and achieve equity in health status."
Members of this association include those who work in the public health field, healthcare professionals, and anyone with an interest in public health. Membership requires a fee based on employment status and offers many benefits, such as networking opportunities, webinars, and access to the American Journal of Public Health.
American Association of Health Education
The American Association of Health Education (AAHE) is the oldest health education membership organization in the United States. It was established in 1937 to serve and assist health education professionals, and it is one of six organizations that comprise the American Alliance for Health, Physical Education, Recreation, and Dance. Currently, the organization has a membership of over 5,500 health education professionals. The organization works to provide its members with strategies, tools, and approaches related to health education and health promotion that can be used in a variety of public health settings.
International Union for Health Promotion and Education
Originally called the Interim Commission, the International Union for Health Promotion and Education (IUHPE) was created in 1951 by Lucien Viborel, then a consultant to the WHO and the United Nations, to serve as an international focal point for health education. Its mission is to promote global health and create health equity. Every three years it holds a World Conference on Health Promotion and Health Education. The Executive Board is made up of the President, the past President, a maximum of 15 global members, and the regional Vice-Presidents. Membership is offered through individual or institutional subscriptions that health educators can join.
Coalition of National Health Education Organizations
The Coalition of National Health Education Organizations (CNHEO) is an organization that was established in 1972 to serve at the national level by facilitating communication, collaboration, and coordination among individuals in other health organizations across the United States. The Coalition holds monthly meetings, similar to those of public health departments, in which discussions address updates, finances, and other current matters relevant to the many organizations with which CNHEO is in contact and collaboration.
School Health Education Worldwide
Romania
Since 2001, the Ministry of Education, Research, Youth, and Sports has developed a national curriculum on health education. The National Health Education Programme in Romanian Schools was considered a priority for intervention by the GFATM (Global Fund) and UN agencies.
To develop students' practical skills and knowledge, a new specialization in Nutrition and Dietetics was introduced at the Iuliu Hațieganu University of Medicine and Pharmacy (UMF) in 2008. Other universities were later authorized to offer the programme, including the University of Medicine, Pharmacy, Science, and Technology (UMFST) of Târgu Mureş, as well as universities in Iaşi and Timişoara. A total of 104 students from these universities also participated in "Nutrition Medicine of the Future", the first National Symposium of Nutrition and Dietetics, held on 6–7 May 2011, where they gave and attended lectures. The second edition of the symposium invited more international participants, such as the International Federation of Dietitians, and was attended by more than 150 students and other professionals.
Japan
Yogo Teachers
School nurses in Japan are called yogo teachers, also known as hoken kyoushi (Kanji: 保健教師). Yogo teachers are part of the educational staff and support students' growth through health education and services provided within school educational activities. They are trained to care for students' physical and mental health. Through observation of students' behaviour, yogo teachers can identify early-stage mood disorders and support affected students as part of school education. Problems contributing to mood disorders may include family history, physical illness, previous diagnoses, and trauma. Because trauma is common among students, yogo teachers are often better placed than other teachers to detect cases of physical or mental abuse (which can be a cause of trauma). Yogo teachers are therefore expected to act quickly at the earliest signs of a mood disorder or child abuse.
Nutrition
Shokuiku (Kanji: 食育) is the Japanese term for "food education". The law defines it as the "acquisition of knowledge about food and nutrition, as well as the ability to make appropriate decisions through practical experience with food, with the aim of developing people's ability to live on a healthy diet".
It was initiated by Sagen Ishizuka, a famous military doctor and pioneer of the macrobiotic diet. Following the introduction of Western fast food in the late 20th century, the Japanese government mandated education in nutrition and food origins, starting with the Basic Law of Shokuiku in 2005, and followed with the School Health Law in 2008. Universities have established programs to teach shokuiku in public schools, as well as investigating its effectiveness through academic study.
Major concerns that led to the development of shokuiku law include:
School children skipping breakfast.
Children purchasing meals at a convenience store instead of eating with their parents.
Families not eating meals together.
Classes in shokuiku will study the processes of making food, such as farming or fermentation; how additives create flavor; and where food comes from.
Poland
Health education in Poland is not mandatory. However, research has shown that even with the implementation of health education, Polish adolescents were still not choosing to live healthy lifestyles. Health education is still needed in Poland, but what is actually available, especially in rural areas, and what is affordable influence choices more than what is healthy.
Although Polish school curricula include health education, it is not a separate subject but is included within other subjects such as nature, biology, and physical education. Some measures have been taken by non-governmental organizations to address this issue.
Taiwan
Health education in Taiwan focuses on multiple topics, including:
Educating students to enhance their health status.
Assisting parents in using health resources and health education information.
Teaching students to understand specific diseases and basic medical knowledge.
Ireland
One school in Ireland has been teaching health education since 2004. The children learn about their physical health; for instance, students have gone on school walks, learned traditional Irish dancing, and learned how to swim. Not all of their activities are based on physical health, however: the children also learn about healthy eating. One activity involves a food pyramid, through which students learn about different foods and how they affect health. At the bottom of the food pyramid are fruits and vegetables such as apples and carrots, and at the top are fried foods such as chips. The pyramid is wooden, with different colors corresponding to each level; green corresponds to the lowest level, and so on. The foods on the pyramid are 3D toys, so the children can see what each food looks like.
United Kingdom
The UK has implemented health education in its school system since the early 2000s. According to Gov.UK, "... all pupils will study compulsory health education as well as new reformed relationships education in primary school and relationships and sex education in secondary school (Gov.UK, 2018)". Beyond this, the UK school system also teaches students about mental health, leading a healthy lifestyle, and obesity.
Health Education and Sustainable Development Goals
Health education is crucial in working towards the Sustainable Development Goals (SDGs) created by the United Nations (UN). The UN created these goals in the hope that they would motivate the world to follow "a shared blueprint for peace and prosperity for people and the planet, now and into the future." Increasing the implementation of health education brings awareness and learning to individuals, creating an understanding of the significance of international health and well-being.
| Biology and health sciences | Fields of medicine | Health |
2459654 | https://en.wikipedia.org/wiki/Pseudomonas%20aeruginosa | Pseudomonas aeruginosa | Pseudomonas aeruginosa is a common encapsulated, Gram-negative, aerobic–facultatively anaerobic, rod-shaped bacterium that can cause disease in plants and animals, including humans. A species of considerable medical importance, P. aeruginosa is a multidrug resistant pathogen recognized for its ubiquity, its intrinsically advanced antibiotic resistance mechanisms, and its association with serious illnesses – hospital-acquired infections such as ventilator-associated pneumonia and various sepsis syndromes. P. aeruginosa is able to selectively inhibit various antibiotics from penetrating its outer membrane - and has high resistance to several antibiotics. According to the World Health Organization P. aeruginosa poses one of the greatest threats to humans in terms of antibiotic resistance.
The organism is considered opportunistic insofar as serious infection often occurs during existing diseases or conditions – most notably cystic fibrosis and traumatic burns. It generally affects the immunocompromised but can also infect the immunocompetent as in hot tub folliculitis. Treatment of P. aeruginosa infections can be difficult due to its natural resistance to antibiotics. When more advanced antibiotic drug regimens are needed adverse effects may result.
It is citrate, catalase, and oxidase positive. It is found in soil, water, skin flora, and most human-made environments throughout the world. As a facultative anaerobe, P. aeruginosa thrives in diverse habitats. It uses a wide range of organic material for food; in animals, its versatility enables the organism to infect damaged tissues or those with reduced immunity. The symptoms of such infections are generalized inflammation and sepsis. If such colonizations occur in critical body organs, such as the lungs, the urinary tract, and kidneys, the results can be fatal. Because it thrives on moist surfaces, this bacterium is also found on and in medical equipment, including catheters, causing cross-infections in hospitals and clinics. It is also able to decompose hydrocarbons and has been used to break down tarballs and oil from oil spills. P. aeruginosa is not extremely virulent in comparison with other major species of pathogenic bacteria such as Gram-positive Staphylococcus aureus and Streptococcus pyogenes – though P. aeruginosa is capable of extensive colonization, and can aggregate into enduring biofilms.
Nomenclature
The word Pseudomonas means "false unit", from the Greek pseudēs (Greek: ψευδής, false) and monas (from Greek: μονάς, a single unit). The stem word mon was used early in the history of microbiology to refer to microorganisms and germs, e.g., kingdom Monera.
The species name aeruginosa is a Latin word meaning verdigris ("copper rust"), referring to the blue-green color of laboratory cultures of the species. This blue-green pigment is a combination of two secondary metabolites of P. aeruginosa, pyocyanin (blue) and pyoverdine (green), which impart the blue-green characteristic color of cultures. Another assertion from 1956 is that aeruginosa may be derived from the Greek prefix ae-, meaning "old or aged", and the suffix ruginosa, meaning "wrinkled or bumpy".
The names pyocyanin and pyoverdine are from the Greek, with pyo-, meaning "pus", cyanin, meaning "blue", and verdine, meaning "green". Hence, the term "pyocyanic bacteria" refers specifically to the "blue pus" characteristic of a P. aeruginosa infection. Pyoverdine in the absence of pyocyanin is a fluorescent-yellow color.
Biology
Genome
The genome of Pseudomonas aeruginosa consists of a relatively large circular chromosome (5.5–6.8 Mb) that carries between 5,500 and 6,000 open reading frames, and sometimes plasmids of various sizes depending on the strain. Comparison of 389 genomes from different P. aeruginosa strains showed that just 17.5% is shared. This part of the genome is the P. aeruginosa core genome.
A comparative genomic study (in 2020) analyzed 494 complete genomes from the Pseudomonas genus, of which 189 were P. aeruginosa strains. The study observed that their protein count and GC content ranged between 5500 and 7352 (average: 6192) and between 65.6 and 66.9% (average: 66.1%), respectively. This comparative analysis further identified 1811 aeruginosa-core proteins, which accounts for more than 30% of the proteome. The higher percentage of aeruginosa-core proteins in this latter analysis could partly be attributed to the use of complete genomes. Although P. aeruginosa is a very well-defined monophyletic species, phylogenomically and in terms of ANIm values, it is surprisingly diverse in terms of protein content, thus revealing a very dynamic accessory proteome, in accordance with several analyses. It appears that, on average, industrial strains have the largest genomes, followed by environmental strains, and then clinical isolates. The same comparative study (494 Pseudomonas strains, of which 189 are P. aeruginosa) identified that 41 of the 1811 P. aeruginosa core proteins were present only in this species and not in any other member of the genus, with 26 (of the 41) being annotated as hypothetical. Furthermore, another 19 orthologous protein groups are present in at least 188/189 P. aeruginosa strains and absent in all the other strains of the genus.
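As an aside, the GC content figure quoted above is simply the percentage of guanine and cytosine bases in a sequence. The following is a minimal illustrative sketch of that calculation in Python; the sequence fragment is hypothetical and is not real P. aeruginosa data.

```python
def gc_content(sequence: str) -> float:
    """Return the GC content (percent) of a DNA sequence."""
    seq = sequence.upper()
    gc = seq.count("G") + seq.count("C")
    # Count only unambiguous A/C/G/T bases in the denominator.
    total = sum(seq.count(base) for base in "ACGT")
    return 100.0 * gc / total if total else 0.0

# Hypothetical fragment used only for illustration.
fragment = "ATGGCGCCGTTAACGGCGCATGCCGGC"
print(f"GC content: {gc_content(fragment):.1f}%")
```

Applied to a full genome sequence rather than a toy fragment, the same calculation yields the roughly 66% GC content reported for P. aeruginosa strains.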
Population structure
The population of P. aeruginosa can be classified in three main lineages, genetically characterised by the model strains PAO1, PA14, and the more divergent PA7.
While P. aeruginosa is generally thought of as an opportunistic pathogen, several widespread clones appear to have become more specialised pathogens, particularly in cystic fibrosis patients, including the Liverpool epidemic strain (LES) which is found mainly in the UK, DK2 in Denmark, and AUST-02 in Australia (also previously known as AES-2 and P2). There is also a clone that is frequently found infecting the reproductive tracts of horses.
Metabolism
P. aeruginosa is a facultative anaerobe, as it is well adapted to proliferate in conditions of partial or total oxygen depletion. This organism can achieve anaerobic growth with nitrate or nitrite as a terminal electron acceptor. When oxygen, nitrate, and nitrite are absent, it is able to ferment arginine and pyruvate by substrate-level phosphorylation. Additionally, phenazines produced by P. aeruginosa can act as electron shuttles to facilitate survival of cells at depth in biofilms. Adaptation to microaerobic or anaerobic environments is essential for certain lifestyles of P. aeruginosa, for example, during lung infection in cystic fibrosis and primary ciliary dyskinesia, where thick layers of lung mucus and bacterially-produced alginate surrounding mucoid bacterial cells can limit the diffusion of oxygen. P. aeruginosa growth within the human body can be asymptomatic until the bacteria form a biofilm, which overwhelms the immune system. These biofilms are found in the lungs of people with cystic fibrosis and primary ciliary dyskinesia, and can prove fatal.
Cellular co-operation
P. aeruginosa relies on iron as a nutrient source to grow. However, iron is not easily accessible because it is not commonly found in the environment. Iron is usually found in a largely insoluble ferric form. Furthermore, excessively high levels of iron can be toxic to P. aeruginosa. To overcome this and regulate proper intake of iron, P. aeruginosa uses siderophores, which are secreted molecules that bind and transport iron. These iron-siderophore complexes, however, are not specific. The bacterium that produced the siderophores does not necessarily receive the direct benefit of iron intake. Rather, all members of the cellular population are equally likely to access the iron-siderophore complexes. Members of the cellular population that can efficiently produce these siderophores are commonly referred to as cooperators; members that produce little to no siderophores are often referred to as cheaters. Research has shown when cooperators and cheaters are grown together, cooperators have a decrease in fitness, while cheaters have an increase in fitness. The magnitude of change in fitness increases with increasing iron limitation. With an increase in fitness, the cheaters can outcompete the cooperators; this leads to an overall decrease in fitness of the group, due to lack of sufficient siderophore production. These observations suggest that having a mix of cooperators and cheaters can reduce the virulent nature of P. aeruginosa.
Enzymes
LigDs form a subfamily of the DNA ligases. These all have a LigDom/ligase domain, but many bacterial LigDs also have separate polymerase domains/PolDoms and nuclease domains/NucDoms. In P. aeruginosa's case, the nuclease domains are N-terminal extensions and the polymerase domains are C-terminal extensions of the single central ligase domain.
Pathogenesis
Frequently acting as an opportunistic, nosocomial pathogen of immunocompromised individuals, but capable of infecting the immunocompetent, P. aeruginosa typically infects the airway, urinary tract, burns, and wounds, and also causes other blood infections.
It is the most common cause of infections of burn injuries and of the outer ear (otitis externa), and is the most frequent colonizer of medical devices (e.g., catheters). Pseudomonas can be spread by equipment that gets contaminated and is not properly cleaned or on the hands of healthcare workers. Pseudomonas can, in rare circumstances, cause community-acquired pneumonias, as well as ventilator-associated pneumonias, being one of the most common agents isolated in several studies. Pyocyanin is a virulence factor of the bacteria and has been known to cause death in C. elegans by oxidative stress. However, salicylic acid can inhibit pyocyanin production. One in ten hospital-acquired infections is from Pseudomonas . Cystic fibrosis patients are also predisposed to P. aeruginosa infection of the lungs due to a functional loss in chloride ion movement across cell membranes as a result of a mutation. P. aeruginosa may also be a common cause of "hot-tub rash" (dermatitis), caused by lack of proper, periodic attention to water quality. Since these bacteria thrive in moist environments, such as hot tubs and swimming pools, they can cause skin rash or swimmer's ear. Pseudomonas is also a common cause of postoperative infection in radial keratotomy surgery patients. The organism is also associated with the skin lesion ecthyma gangrenosum. P. aeruginosa is frequently associated with osteomyelitis involving puncture wounds of the foot, believed to result from direct inoculation with P. aeruginosa via the foam padding found in tennis shoes, with diabetic patients at a higher risk.
A comparative genomic analysis of 494 complete Pseudomonas genomes, including 189 complete P. aeruginosa genomes, identified several proteins that are shared by the vast majority of P. aeruginosa strains, but are not observed in other analyzed Pseudomonas genomes. These aeruginosa-specific core proteins, such as CntL, CntM, PlcB, Acp1, MucE, SrfA, Tse1, Tsi2, Tse3, and EsrC are known to play an important role in this species' pathogenicity.
Toxins
P. aeruginosa uses the virulence factor exotoxin A to inactivate eukaryotic elongation factor 2 via ADP-ribosylation in the host cell, much as the diphtheria toxin does. Without elongation factor 2, eukaryotic cells cannot synthesize proteins and necrotise. The release of intracellular contents induces an immunologic response in immunocompetent patients.
In addition, P. aeruginosa uses an exoenzyme, ExoU, which degrades the plasma membrane of eukaryotic cells, leading to lysis. Increasingly, it is becoming recognized that the iron-acquiring siderophore, pyoverdine, also functions as a toxin by removing iron from mitochondria, inflicting damage on this organelle. Since pyoverdine is secreted into the environment, it can be easily detected by the host or predator, resulting in host/predator migration towards the bacteria.
Phenazines
Phenazines are redox-active pigments produced by P. aeruginosa. These pigments are involved in quorum sensing, virulence, and iron acquisition. P. aeruginosa produces several pigments all produced by a biosynthetic pathway: phenazine-1-carboxamide (PCA), 1-hydroxyphenazine, 5-methylphenazine-1-carboxylic acid betaine, pyocyanin and aeruginosin A. Two nearly identical operons are involved in phenazine biosynthesis: phzA1B1C1D1E1F1G1 and phzA2B2C2D2E2F2G2. The enzymes encoded by these operons convert chorismic acid to PCA. The products of three key genes, phzH, phzM, and phzS then convert PCA to the other phenazines mentioned above. Though phenazine biosynthesis is well studied, questions remain as to the final structure of the brown phenazine pyomelanin.
When pyocyanin biosynthesis is inhibited, a decrease in P. aeruginosa pathogenicity is observed in vitro. This suggests that pyocyanin is mostly responsible for the initial colonization of P. aeruginosa in vivo.
Triggers
Under low phosphate levels, P. aeruginosa has been found to switch from a benign symbiont to expressing lethal toxins inside the intestinal tract, severely damaging or killing the host; this can be mitigated by providing excess phosphate instead of antibiotics.
Plants and invertebrates
In higher plants, P. aeruginosa induces soft rot, for example in Arabidopsis thaliana (Thale cress) and Lactuca sativa (lettuce). It is also pathogenic to invertebrate animals, including the nematode Caenorhabditis elegans, the fruit fly Drosophila, and the moth Galleria mellonella. The associations of virulence factors are the same for plant and animal infections. In both insects and plants, P. aeruginosa virulence is highly quorum sensing (QS) dependent. Its QS is in turn highly dependent upon such genes as acyl-homoserine-lactone synthase, and lasI.
Quorum sensing
P. aeruginosa is an opportunistic pathogen with the ability to coordinate gene expression in order to compete against other species for nutrients or colonization. Regulation of gene expression can occur through cell-cell communication or quorum sensing (QS) via the production of small molecules called autoinducers that are released into the external environment. These signals, when reaching specific concentrations correlated with specific population cell densities, activate their respective regulators thus altering gene expression and coordinating behavior. P. aeruginosa employs five interconnected QS systems – las, rhl, pqs, iqs, and pch – that each produce unique signaling molecules. The las and rhl systems are responsible for the activation of numerous QS-controlled genes, the pqs system is involved in quinolone signaling, and the iqs system plays an important role in intercellular communication. QS in P. aeruginosa is organized in a hierarchical manner. At the top of the signaling hierarchy is the las system, since the las regulator initiates the QS regulatory system by activating the transcription of a number of other regulators, such as rhl. So, the las system defines a hierarchical QS cascade from the las to the rhl regulons. Detection of these molecules indicates P. aeruginosa is growing as biofilm within the lungs of cystic fibrosis patients. The impact of QS and especially las systems on the pathogenicity of P. aeruginosa is unclear, however. Studies have shown that lasR-deficient mutants are associated with more severe outcomes in cystic fibrosis patients and are found in up to 63% of chronically infected cystic fibrosis patients despite impaired QS activity.
QS is known to control expression of a number of virulence factors in a hierarchical manner, including the pigment pyocyanin. However, although the las system initiates the regulation of gene expression, its absence does not lead to loss of virulence factors. Recently, it has been demonstrated that the rhl system partially controls las-specific factors, such as proteolytic enzymes responsible for elastolytic and staphylolytic activities, but in a delayed manner. So, las is a direct and indirect regulator of QS-controlled genes. Another form of gene regulation that allows the bacteria to rapidly adapt to surrounding changes is through environmental signaling. Recent studies have discovered anaerobiosis can significantly impact the major regulatory circuit of QS. This important link between QS and anaerobiosis has a significant impact on production of virulence factors of this organism. Garlic experimentally blocks quorum sensing in P. aeruginosa.
Biofilms formation and cyclic di-GMP
As in most Gram-negative bacteria, P. aeruginosa biofilm formation is regulated by a single molecule: cyclic di-GMP. At low cyclic di-GMP concentrations, P. aeruginosa has a free-swimming mode of life, but when cyclic di-GMP levels increase, P. aeruginosa starts to establish sessile communities on surfaces. The intracellular concentration of cyclic di-GMP increases within seconds when P. aeruginosa touches a surface (e.g., a rock, plastic, or host tissue). This activates the production of adhesive pili, which serve as "anchors" to stabilize the attachment of P. aeruginosa to the surface. At later stages, the bacteria attach irreversibly by producing a strongly adhesive matrix. At the same time, cyclic di-GMP represses the synthesis of the flagellar machinery, preventing P. aeruginosa from swimming. When cyclic di-GMP signaling is suppressed, the biofilms are less adherent and easier to treat.
The biofilm matrix of P. aeruginosa is composed of nucleic acids, amino acids, carbohydrates, and various ions. It mechanically and chemically protects P. aeruginosa from aggression by the immune system and some toxic compounds. The matrix contains up to three types of sugar polymers (or "exopolysaccharides") named PSL, PEL, and alginate. Which exopolysaccharides are produced varies by strain.
The polysaccharide synthesis operon and cyclic di-GMP form a positive feedback loop. This 15-gene operon is responsible for the cell-cell and cell-surface interactions required for cell communication.
PEL is a cationic exopolysaccharide that cross-links extracellular DNA in the P. aeruginosa biofilm matrix.
Upon certain cues or stresses, P. aeruginosa revert the biofilm program and detach. Recent studies have shown that the dispersed cells from P. aeruginosa biofilms have lower cyclic di-GMP levels and different physiologies from those of planktonic and biofilm cells, with unique population dynamics and motility. Such dispersed cells are found to be highly virulent against macrophages and C. elegans, but highly sensitive towards iron stress, as compared with planktonic cells.
Biofilms and treatment resistance
Biofilms of P. aeruginosa can cause chronic opportunistic infections, which are a serious problem for medical care in industrialized societies, especially for immunocompromised patients and the elderly. They often cannot be treated effectively with traditional antibiotic therapy. Biofilms serve to protect these bacteria from adverse environmental factors, including host immune system components in addition to antibiotics. P. aeruginosa can cause nosocomial infections and is considered a model organism for the study of antibiotic-resistant bacteria. Researchers consider it important to learn more about the molecular mechanisms that cause the switch from planktonic growth to a biofilm phenotype and about the role of QS in treatment-resistant bacteria such as P. aeruginosa. This should contribute to better clinical management of chronically infected patients, and should lead to the development of new drugs.
Scientists have been examining the possible genetic basis for P. aeruginosa resistance to antibiotics such as tobramycin. One locus identified as being an important genetic determinant of the resistance in this species is ndvB, which encodes periplasmic glucans that may interact with antibiotics and cause them to become sequestered into the periplasm. These results suggest a genetic basis exists behind bacterial antibiotic resistance, rather than the biofilm simply acting as a diffusion barrier to the antibiotic.
Diagnosis
Depending on the nature of infection, an appropriate specimen is collected and sent to a bacteriology laboratory for identification. As with most bacteriological specimens, a Gram stain is performed, which may show Gram-negative rods and/or white blood cells. P. aeruginosa produces colonies with a characteristic "grape-like" or "fresh-tortilla" odor on bacteriological media. In mixed cultures, it can be isolated as clear colonies on MacConkey agar (as it does not ferment lactose) which will test positive for oxidase. Confirmatory tests include production of the blue-green pigment pyocyanin on cetrimide agar and growth at 42 °C. A TSI slant is often used to distinguish nonfermenting Pseudomonas species from enteric pathogens in faecal specimens.
When P. aeruginosa is isolated from a normally sterile site (blood, bone, deep collections), it is generally considered dangerous, and almost always requires treatment. However, P. aeruginosa is frequently isolated from nonsterile sites (mouth swabs, sputum, etc.), and, under these circumstances, it may represent colonization and not infection. The isolation of P. aeruginosa from nonsterile specimens should, therefore, be interpreted cautiously, and the advice of a microbiologist or infectious diseases physician/pharmacist should be sought prior to starting treatment. Often, no treatment is needed.
Classification
Morphological, physiological, and biochemical characteristics of Pseudomonas aeruginosa are shown in the Table below.
Note: + = positive, − = negative
P. aeruginosa is a Gram-negative, aerobic (and at times facultatively anaerobic), rod-shaped bacterium with unipolar motility. It has been identified as an opportunistic pathogen of both humans and plants. P. aeruginosa is the type species of the genus Pseudomonas.
Identification of P. aeruginosa can be complicated by the fact that individual isolates often lack motility. The colony morphology itself also displays several varieties. The two main types are large and smooth, with a flat edge and an elevated center, and small, rough, and convex. A third type, mucoid, can also be found. The large colony type is typically found in clinical settings, whereas the small type is found in nature; the mucoid type is present in biological settings and has been found in respiratory and urinary tract specimens. Furthermore, mutations in the gene lasR drastically alter colony morphology and typically lead to failure to hydrolyze gelatin or hemolyze.
In certain conditions, P. aeruginosa can secrete a variety of pigments, including pyocyanin (blue), pyoverdine (yellow and fluorescent), pyorubin (red), and pyomelanin (brown). These can be used to identify the organism.
Clinical identification of P. aeruginosa may include identifying the production of both pyocyanin and fluorescein, as well as its ability to grow at 42 °C. P. aeruginosa is capable of growth in diesel and jet fuels, where it is known as a hydrocarbon-using microorganism, causing microbial corrosion. It creates dark, gel-like mats sometimes improperly called "algae" because of their appearance.
Treatment
Many P. aeruginosa isolates are resistant to a large range of antibiotics and may demonstrate additional resistance after unsuccessful treatment. It should usually be possible to guide treatment according to laboratory sensitivities, rather than choosing an antibiotic empirically. If antibiotics are started empirically, then every effort should be made to obtain cultures (before administering the first dose of antibiotic), and the choice of antibiotic used should be reviewed when the culture results are available.
Due to widespread resistance to many common first-line antibiotics, carbapenems, polymyxins, and more recently tigecycline were considered to be the drugs of choice; however, resistance to these drugs has also been reported. Despite this, they are still being used in areas where resistance has not yet been reported. Use of β-lactamase inhibitors such as sulbactam has been advised in combination with antibiotics to enhance antimicrobial action even in the presence of a certain level of resistance. Combination therapy after rigorous antimicrobial susceptibility testing has been found to be the best course of action in the treatment of multidrug-resistant P. aeruginosa. Some next-generation antibiotics that are reported as being active against P. aeruginosa include doripenem, ceftobiprole, and ceftaroline. However, these require more clinical trials for standardization. Therefore, research for the discovery of new antibiotics and drugs against P. aeruginosa is very much needed.
Antibiotics that may have activity against P. aeruginosa include:
aminoglycosides (gentamicin, amikacin, tobramycin, but not kanamycin)
quinolones (ciprofloxacin, levofloxacin, but not moxifloxacin)
cephalosporins (ceftazidime, cefepime, cefoperazone, cefpirome, ceftobiprole, but not cefuroxime, cefotaxime, or ceftriaxone)
antipseudomonal penicillins: carboxypenicillins (carbenicillin and ticarcillin), and ureidopenicillins (mezlocillin, azlocillin, and piperacillin). P. aeruginosa is intrinsically resistant to all other penicillins.
carbapenems (meropenem, imipenem, doripenem, but not ertapenem)
polymyxins (polymyxin B and colistin)
monobactams (aztreonam)
As fluoroquinolones are one of the few antibiotic classes widely effective against P. aeruginosa, in some hospitals, their use is severely restricted to avoid the development of resistant strains. On the rare occasions where infection is superficial and limited (for example, ear infections or nail infections), topical gentamicin or colistin may be used.
For pseudomonal wound infections, acetic acid with concentrations from 0.5% to 5% can be an effective bacteriostatic agent in eliminating the bacteria from the wound. Usually a sterile gauze soaked with acetic acid is placed on the wound after irrigation with normal saline. Dressing would be done once per day. Pseudomonas is usually eliminated in 90% of the cases after 10 to 14 days of treatment.
Antibiotic resistance
One of the most worrisome characteristics of P. aeruginosa is its low antibiotic susceptibility, which is attributable to a concerted action of multidrug efflux pumps with chromosomally encoded antibiotic resistance genes, i.e., the genes that encode proteins that serve as enzymes to break down antibiotics. Examples of such genes are:
AmpC: encodes an AmpC-type β-lactamase enzyme, which breaks down penicillins, cephalosporins, and carbapenems;
PER-1: encodes a PER-1 type extended-spectrum β-lactamase enzyme, which breaks down penicillins and cephalosporins;
IMP: encodes active-on-imipenem (IMP) carbapenemase (metallo-β-lactamase) enzyme which breaks down carbapenems;
NDM-1: encodes a New Delhi metallo-β-lactamase 1 enzyme, which breaks down carbapenems;
OXA: encodes an oxacillinase (OXA) β-lactamase enzyme, which breaks down carbapenems;
AAC(6')-Ib: encodes an aminoglycoside-modifying enzyme called aminoglycoside N6'-acetyltransferase, which alters the structure of aminoglycoside antibiotics such as gentamicin and tobramycin;
Qnr: encodes a Qnr protein, which protects DNA gyrase and topoisomerase IV from the effects of quinolone (fluoroquinolone) antibiotics such as ciprofloxacin.
Specific genes and enzymes involved in antibiotic resistance can vary between different strains. P. aeruginosa TG523 harbored genes predicted to have antibacterial activity and those which are implicated in virulence.
Another feature that contributes to antibiotic resistance of P. aeruginosa is the low permeability of the bacterial cellular envelopes. In addition to this intrinsic resistance, P. aeruginosa easily develops acquired resistance either by mutation in chromosomally encoded genes or by the horizontal gene transfer of antibiotic resistance determinants. Development of multidrug resistance by P. aeruginosa isolates requires several different genetic events, including acquisition of different mutations and/or horizontal transfer of antibiotic resistance genes. Hypermutation favours the selection of mutation-driven antibiotic resistance in P. aeruginosa strains producing chronic infections, whereas the clustering of several different antibiotic resistance genes in integrons favors the concerted acquisition of antibiotic resistance determinants. Some recent studies have shown phenotypic resistance associated to biofilm formation or to the emergence of small-colony variants may be important in the response of P. aeruginosa populations to antibiotic treatment.
Mechanisms underlying antibiotic resistance have been found to include production of antibiotic-degrading or antibiotic-inactivating enzymes, outer membrane proteins that expel the antibiotics, and mutations that change antibiotic targets. The presence of antibiotic-degrading enzymes such as extended-spectrum β-lactamases like PER-1, PER-2, and VEB-1, AmpC cephalosporinases, carbapenemases like serine oxacillinases, metallo-β-lactamases, OXA-type carbapenemases, and aminoglycoside-modifying enzymes, among others, has been reported. P. aeruginosa can also modify the targets of antibiotic action, for example by methylation of 16S rRNA to prevent aminoglycoside binding and by modification of DNA gyrase or topoisomerase to protect them from the action of quinolones. P. aeruginosa has also been reported to possess multidrug efflux pump systems that confer resistance against a number of antibiotic classes, of which MexAB-OprM (resistance-nodulation-division (RND) family) is considered the most important. An important factor found to be associated with antibiotic resistance is a decrease in the virulence capabilities of the resistant strain. Such findings have been reported in the case of rifampicin-resistant and colistin-resistant strains, in which decreases in infective ability, quorum sensing, and motility have been documented.
Mutations in DNA gyrase are commonly associated with antibiotic resistance in P. aeruginosa. These mutations, when combined with others, confer high resistance without hindering survival. Additionally, genes involved in cyclic-di-GMP signaling may contribute to resistance. When P. aeruginosa is grown under in vitro conditions designed to mimic a cystic fibrosis patient's lungs, these genes mutate repeatedly.
Two small RNAs, Sr0161 and ErsA, were shown to interact with mRNA encoding the major porin OprD responsible for the uptake of carbapenem antibiotics into the periplasm. The sRNAs bind to the 5'UTR of oprD, causing increase in bacterial resistance to meropenem. Another sRNA, Sr006, may positively regulate (post-transcriptionally) the expression of PagL, an enzyme responsible for deacylation of lipid A. This reduces the pro-inflammatory property of lipid A. Furthermore, similar to a process found in Salmonella, Sr006 regulation of PagL expression may aid in polymyxin B resistance.
Prevention
Probiotic prophylaxis may prevent colonization and delay onset of Pseudomonas infection in an ICU setting. Immunoprophylaxis against Pseudomonas is being investigated.
The risk of contracting P. aeruginosa can be reduced by avoiding pools, hot tubs, and other bodies of standing water; regularly disinfecting and/or replacing equipment that regularly encounters moisture (such as contact lens equipment and solutions); and washing one's hands often (which is protective against many other pathogens as well). However, even the best hygiene practices cannot totally protect an individual against P. aeruginosa, given how common the organism is in the environment.
Experimental therapies
Phage therapy against P. aeruginosa has been investigated as a possibly effective treatment that can be combined with antibiotics, has no contraindications, and causes minimal adverse effects. Phages are produced as a sterile liquid suitable for oral intake, topical application, and other uses.
Phage therapy against ear infections caused by P. aeruginosa was reported in the journal Clinical Otolaryngology in August 2009. Research on the topic is ongoing.
Research
In 2013, João Xavier described an experiment in which P. aeruginosa, when subjected to repeated rounds of conditions in which it needed to swarm to acquire food, developed the ability to "hyperswarm" at speeds 25% faster than baseline organisms, by developing multiple flagella, whereas the baseline organism has a single flagellum. This result was notable in the field of experimental evolution in that it was highly repeatable. P. aeruginosa has been studied for use in bioremediation and use in processing polyethylene in municipal solid waste.
Research on this bacterium's systems biology led to the development of genome-scale metabolic models that enable computer simulation and prediction of bacterial growth rates under varying conditions, including its virulence properties.
Distribution
Pest risk analysis
The East African Community considers P. aeruginosa to be a quarantine concern for the rest of the area because of the presence in Kenya of P. aeruginosa strains pathogenic to Phaseolus vulgaris. A pest risk analysis by the EAC was based on this bacterium's listing in CABI's Crop Protection Compendium, following its initial detection in Kenya by Kaaya & Darji in 1989.
Eyedrops
A small number of infections in the United States in 2022 and 2023 were likely caused by poorly manufactured eyedrops.
| Biology and health sciences | Gram-negative bacteria | Plants |
10173651 | https://en.wikipedia.org/wiki/Slosh%20dynamics | Slosh dynamics | In fluid dynamics, slosh refers to the movement of liquid inside another object (which is, typically, also undergoing motion).
Strictly speaking, the liquid must have a free surface to constitute a slosh dynamics problem, where the dynamics of the liquid can interact with the container to alter the system dynamics significantly. Important examples include propellant slosh in spacecraft tanks and rockets (especially upper stages), and the free surface effect (cargo slosh) in ships and trucks transporting liquids (for example oil and gasoline).
However, it has become common to refer to liquid motion in a completely filled tank, i.e. without a free surface, as "fuel slosh".
Such motion is characterized by "inertial waves" and can be an important effect in spinning spacecraft dynamics. Extensive mathematical and empirical relationships have been derived to describe liquid slosh. These types of analyses are typically undertaken using computational fluid dynamics and finite element methods to solve the fluid-structure interaction problem, especially if the solid container is flexible. Relevant fluid dynamics non-dimensional parameters include the Bond number, the Weber number, and the Reynolds number.
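For reference, the standard definitions of these dimensionless groups are given below, where ρ is the liquid density, σ the surface tension, μ the dynamic viscosity, g the (effective) acceleration, L a characteristic length such as the tank radius, and v a characteristic velocity:

\[ \mathrm{Bo} = \frac{\rho g L^{2}}{\sigma}, \qquad \mathrm{We} = \frac{\rho v^{2} L}{\sigma}, \qquad \mathrm{Re} = \frac{\rho v L}{\mu} \]

A low Bond number, as in microgravity, indicates that surface tension rather than gravity dominates the equilibrium shape of the liquid, which is why slosh behaves very differently in orbit than on the ground.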
Slosh is an important effect for spacecraft, ships, some land vehicles and some aircraft. Slosh was a factor in the Falcon 1 second test flight anomaly, and has been implicated in various other spacecraft anomalies, including a near-disaster with the Near Earth Asteroid Rendezvous (NEAR Shoemaker) satellite.
Spacecraft effects
Liquid slosh in microgravity is relevant to spacecraft, most commonly Earth-orbiting satellites, and must take account of liquid surface tension which can alter the shape (and thus the eigenvalues) of the liquid slug. Typically, a large fraction of the mass of a satellite is liquid propellant at/near Beginning of Life (BOL), and slosh can adversely affect satellite performance in a number of ways. For example, propellant slosh can introduce uncertainty in spacecraft attitude (pointing) which is often called jitter. Similar phenomena can cause pogo oscillation and can result in structural failure of a space vehicle.
Another example is problematic interaction with the spacecraft's Attitude Control System (ACS), especially for spinning satellites which can suffer resonance between slosh and nutation, or adverse changes to the rotational inertia. Because of these types of risk, in the 1960s the National Aeronautics and Space Administration (NASA) extensively studied liquid slosh in spacecraft tanks, and in the 1990s NASA undertook the Middeck 0-Gravity Dynamics Experiment on the Space Shuttle. The European Space Agency has advanced these investigations with the launch of SLOSHSAT. Most spinning spacecraft since 1980 have been tested at the Applied Dynamics Laboratories drop tower using sub-scale models. Extensive contributions have also been made by the Southwest Research Institute, but research is widespread in academia and industry.
Research is continuing into slosh effects on in-space propellant depots. In October 2009, the United States Air Force and United Launch Alliance (ULA) performed an experimental on-orbit demonstration on a modified Centaur upper stage during the DMSP-18 satellite launch in order to improve "understanding of propellant settling and slosh". The light weight of DMSP-18 left a substantial amount of remaining LO2 and LH2 propellant, 28% of Centaur's capacity, available for the on-orbit tests. The post-spacecraft mission extension ran 2.4 hours before the planned deorbit burn was executed.
NASA's Launch Services Program is working on two ongoing slosh fluid dynamics experiments with partners: CRYOTE and SPHERES-Slosh. ULA has additional small-scale demonstrations of cryogenic fluid management planned with project CRYOTE in 2012–2014, leading to a large-scale ULA cryo-sat propellant depot test under the NASA flagship technology demonstrations program in 2015. The SPHERES-Slosh experiment, with the Florida Institute of Technology and the Massachusetts Institute of Technology, will examine how liquids move around inside containers in microgravity using the SPHERES testbed on the International Space Station.
Sloshing in road tank vehicles
Liquid sloshing strongly and adversely influences the directional dynamics and safety performance of highway tank vehicles. Hydrodynamic forces and moments arising from liquid cargo oscillations in the tank under steering and/or braking maneuvers reduce the stability limit and controllability of partially filled tank vehicles. Anti-slosh devices such as baffles are widely used to limit the adverse effect of liquid slosh on the directional performance and stability of tank vehicles. Since tankers often carry dangerous liquids such as ammonia, gasoline, and fuel oils, the stability of partially filled liquid cargo vehicles is very important. Optimization and slosh-reduction studies of fuel tanks with various cross-sections, such as elliptical, rectangular, modified-oval, and generic tank shapes, have been performed at different fill levels using numerical, analytical, and analogical analyses. Most of these studies concentrate on the effects of baffles on sloshing, while the influence of cross-section is largely ignored.
The Bloodhound LSR 1,000 mph project car utilizes a liquid-fuelled rocket that requires a specially-baffled oxidizer tank to prevent directional instability, rocket thrust variations and even oxidizer tank damage.
Practical effects
Sloshing or shifting cargo, water ballast, or other liquid (e.g., from leaks or fire fighting) can cause disastrous capsizing in ships due to free surface effect; this can also affect trucks and aircraft.
The effect of slosh is used to limit the bounce of a roller hockey ball. Water slosh can significantly reduce the rebound height of a ball but some amounts of liquid seem to lead to a resonance effect. Many of the balls for roller hockey commonly available contain water to reduce the bounce height.
| Physical sciences | Fluid mechanics | Physics |
7888444 | https://en.wikipedia.org/wiki/Food%20chemistry | Food chemistry | Food chemistry is the study of chemical processes and interactions of all biological and non-biological components of foods. The biological substances include items such as meat, poultry, lettuce, beer, and milk. Food chemistry is similar to biochemistry in its main components such as carbohydrates, lipids, and protein, but it also includes areas such as water, vitamins, minerals, enzymes, food additives, flavors, and colors. This discipline also encompasses how products change under certain food processing techniques and ways either to enhance or to prevent those changes. An example of enhancing a process would be encouraging the fermentation of dairy products with microorganisms that convert lactose to lactic acid; an example of preventing a process would be stopping the browning on the surface of freshly cut apples using lemon juice or other acidulated water.
History of food chemistry
The scientific approach to food and nutrition arose with attention to agricultural chemistry in the works of J. G. Wallerius, Humphry Davy, and others. For example, Davy published Elements of Agricultural Chemistry, in a Course of Lectures for the Board of Agriculture (1813) in the United Kingdom which would serve as a foundation for the profession worldwide, going into a fifth edition. Earlier work included that by Carl Wilhelm Scheele, who isolated malic acid from apples in 1785.
Some of the findings of Liebig on food chemistry were translated and published by Eben Horsford in Lowell, Massachusetts, in 1848.
In 1874 the Society of Public Analysts was formed, with the aim of applying analytical methods to the benefit of the public. Its early experiments were based on bread, milk, and wine.
Concern for the quality of the food supply, mainly food adulteration and contamination, stemmed first from intentional contamination and later, by the 1950s, from the use of chemical food additives. The development of colleges and universities worldwide, most notably in the United States, expanded food chemistry as well as research into dietary substances, most notably the single-grain experiment during 1907–11. Additional research by Harvey W. Wiley at the United States Department of Agriculture during the late 19th century played a key role in the creation of the United States Food and Drug Administration in 1906. The American Chemical Society established its Agricultural and Food Chemistry Division in 1908, while the Institute of Food Technologists established its Food Chemistry Division in 1995.
Food chemistry concepts are often drawn from rheology, theories of transport phenomena, physical and chemical thermodynamics, chemical bonds, and interaction forces, quantum mechanics and reaction kinetics, biopolymer science, colloidal interactions, nucleation, glass transitions and freezing/disordered or noncrystalline solids, and thus has Food Physical Chemistry as a foundation area.
Water in food systems
A major component of food is water, which can encompass anywhere from 50% in meat products to 95% in lettuce, cabbage, and tomato products. It is also an excellent medium for bacterial growth and food spoilage if not properly processed. One way this is measured in food is by water activity, which is very important for the shelf life of many foods during processing. One of the keys to food preservation in most instances is to reduce the amount of water or to alter the water's characteristics to enhance shelf life. Such methods include dehydration, freezing, and refrigeration. This field encompasses the "physiochemical principles of the reactions and conversions that occur during the manufacture, handling, and storage of foods".
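Water activity is defined as the ratio of the water vapour pressure of the food to that of pure water at the same temperature. As a rough illustration of the concept only, the snippet below estimates the water activity of an ideal sucrose solution from Raoult's law; real foods are far from ideal solutions, and the chosen masses are arbitrary example values.

```python
# Water activity a_w = p_food / p_pure_water, with 0 < a_w <= 1.
# Ideal-solution (Raoult's law) estimate: a_w is approximately the mole fraction of water.
MOLAR_MASS_WATER = 18.015    # g/mol
MOLAR_MASS_SUCROSE = 342.30  # g/mol

def water_activity_ideal(grams_water: float, grams_sucrose: float) -> float:
    """Mole fraction of water in an ideal sucrose solution."""
    n_water = grams_water / MOLAR_MASS_WATER
    n_sucrose = grams_sucrose / MOLAR_MASS_SUCROSE
    return n_water / (n_water + n_sucrose)

# Example: 60 g sucrose dissolved in 40 g water (a very concentrated syrup).
print(f"a_w = {water_activity_ideal(40, 60):.2f}")  # about 0.93; pure water has a_w = 1.0
```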
Carbohydrates
Carbohydrates make up about 75% of the biological world and 80% of all food intake for human consumption; the most commonly consumed human carbohydrate is sucrose. The simplest version of a carbohydrate is a monosaccharide, which contains carbon, hydrogen, and oxygen in a 1:2:1 ratio under a general formula of CnH2nOn, where n is a minimum of 3. Glucose and fructose are examples of monosaccharides. When glucose and fructose combine, sucrose, one of the more common sugar products found in plants, is formed.
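As a small illustration of the general formula CnH2nOn, the snippet below expands it for a few values of n and computes the corresponding molar masses from standard atomic weights; the class names in the comment are the usual ones for three-, five- and six-carbon sugars.

```python
# Expand the monosaccharide general formula CnH2nOn and compute molar masses.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol, standard atomic weights

def monosaccharide(n: int):
    formula = f"C{n}H{2 * n}O{n}"
    mass = n * ATOMIC_MASS["C"] + 2 * n * ATOMIC_MASS["H"] + n * ATOMIC_MASS["O"]
    return formula, mass

for n in (3, 5, 6):  # trioses, pentoses, hexoses
    formula, mass = monosaccharide(n)
    print(f"n={n}: {formula}, {mass:.2f} g/mol")
# n=6 gives C6H12O6 (about 180 g/mol), the formula shared by glucose and fructose.
```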
A chain of monosaccharides forms a polysaccharide. Such polysaccharides include pectin, dextran, agar, and xanthan. Some of these carbohydrate polysaccharides are accessible for digestion by human enzymes and are mainly absorbed in the small intestine, whereas dietary fiber passes to the large intestine, where some of these polysaccharides are fermented by the gastrointestinal microbiota.
Sugar content is commonly measured in degrees Brix.
Lipids
The term lipid comprises a diverse range of molecules and to some extent is a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids (including essential fatty acids), fatty-acid derived phospholipids, sphingolipids, glycolipids and terpenoids, such as retinoids and steroids. Some lipids are linear aliphatic molecules, while others have ring structures. Some are aromatic, while others are not. Some are flexible, while others are rigid.
Most lipids have some polar character in addition to being largely nonpolar. Generally, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere -OH group (hydroxyl or alcohol).
Lipids in food include the oils of grains such as corn and soybean, as well as animal fats, and are components of many foods such as milk, cheese, and meat. They also act as vitamin carriers.
Food proteins
Proteins comprise over 50% of the dry weight of an average living cell and are very complex macromolecules. They also play a fundamental role in the structure and function of cells. Consisting mainly of carbon, nitrogen, hydrogen, oxygen, and some sulfur, they also may contain iron, copper, phosphorus, or zinc.
In food, proteins are essential for growth and survival, and requirements vary depending upon a person's age and physiology (e.g., pregnancy). Protein is commonly obtained from animal sources: eggs, milk, and meat. Nuts, grains and legumes provide vegetable sources of protein, and protein combining of vegetable sources is used to achieve complete protein nutritional quotas from vegetables.
Protein sensitivity as food allergy is detected with the ELISA test.
Enzymes
Enzymes are biochemical catalysts that convert one substance into another, reducing the amount of time and energy required to complete a chemical process. Many areas of the food industry use enzymes, including baking, brewing, dairy, and fruit-juice processing, to make products such as cheese, beer, and bread.
Vitamins
Vitamins are nutrients required in small amounts for essential metabolic reactions in the body. These are broken down in nutrition as either water-soluble (vitamin C) or fat-soluble (vitamin E). An adequate supply of vitamins can prevent diseases such as beriberi, anemia, and scurvy while an overdose of vitamins can produce nausea and vomiting or even death.
Minerals
Dietary minerals in foods are numerous and diverse; many are required for the body to function, while some trace elements can be hazardous if consumed in excessive amounts. Bulk minerals with a Reference Daily Intake (RDI, formerly Recommended Daily Allowance (RDA)) of more than 200 mg/day include calcium, magnesium, and potassium, while important trace minerals (RDI less than 200 mg/day) include copper, iron, and zinc. These are found in many foods, but can also be taken in dietary supplements.
Colour
Food colouring is added to change the colour of any food substance. It is mainly for sensory analysis purposes. It can be used to simulate the natural colour of a product as perceived by the customer, such as red dye (like FD&C Red No.40 Allura Red AC) to ketchup or to add unnatural colours to a product like Kellogg's Froot Loops. Caramel is a natural food dye; the industrial form, caramel colouring, is the most widely used food colouring and is found in foods from soft drinks to soy sauce, bread, and pickles.
Flavours
Flavour in food is important in how food smells and tastes to the consumer, especially in sensory analysis. Some of these products occur naturally, like salt and sugar, but flavour chemists (known as flavourists) develop many of these flavours for food products. Such artificial flavours include methyl salicylate, which creates the wintergreen odor, and lactic acid, which gives milk a tart taste.
Food additives
Food additives are substances added to food for preserving flavours or improving taste, look, smell and freshness. Such practices are as old as adding vinegar for pickling or adding emulsifiers to emulsion mixtures like mayonnaise. These are generally listed by "E number" in the European Union or as GRAS ("generally recognized as safe") by the United States Food and Drug Administration.
| Physical sciences | Subdisciplines | Chemistry |
225813 | https://en.wikipedia.org/wiki/Old%20World%20flycatcher | Old World flycatcher | The Old World flycatchers are a large family, the Muscicapidae, of small passerine birds restricted to the Old World (Europe, Africa and Asia), with the exception of several vagrants and two species, bluethroat (Luscinia svecica) and northern wheatear (Oenanthe oenanthe), found also in North America. These are mainly small arboreal insectivores, many of which, as the name implies, take their prey on the wing. The family is relatively large and includes 357 species, which are divided into 54 genera.
Taxonomy
The name Muscicapa for the family was introduced by the Scottish naturalist John Fleming in 1822. The word had earlier been used for the genus Muscicapa by the French zoologist Mathurin Jacques Brisson in 1760. Muscicapa comes from the Latin musca meaning a fly, and capere to catch.
In 1910, the German ornithologist Ernst Hartert found it impossible to define boundaries between the three families Muscicapidae, Sylviidae (Old World warblers) and Turdidae (thrushes). He therefore treated them as subfamilies of an extended flycatcher family that also included Timaliidae (Old World babblers) and Monarchidae (Monarch flycatchers). Forty years later, a similar arrangement was adopted by the American ornithologists Ernst Mayr and Dean Amadon in an article published in 1951. Their large family, Muscicapidae, which they termed the "primitive insect eaters" contained 1460 species divided into eight subfamilies. The use of the extended group was endorsed by a committee set up following the Eleventh International Ornithological Congress held in Basel in 1954. Subsequent DNA–DNA hybridization studies by Charles Sibley and others showed that the subfamilies were not closely related to one another. As a result, the large group was broken up into a number of separate families, although for a while most authorities continued to retain the thrushes in Muscicapidae. In 1998 the American Ornithologists' Union chose to treat the thrushes as a separate family in the seventh edition of their Check-list of North American birds and subsequently most authors have followed their example.
Genera
The family formerly included fewer species. At the time of the publication of the third edition of Howard and Moore Complete Checklist of the Birds of the World in 2003, the genera Myophonus, Alethe, Brachypteryx and Monticola were included in the thrush family Turdidae. Subsequent molecular phylogenetic studies have shown that the species in these four genera are more closely related to species in Muscicapidae. As a consequence, these four genera are now placed here. In contrast, the genus Cochoa which was previously placed in Muscicapidae has been shown to belong in Turdidae.
Two large molecular phylogenetic studies of species within Muscicapidae published in 2010 showed that the genera Fraseria, Melaenornis and Muscicapa were non-monophyletic. The authors were unable to propose revised genera as not all the species were sampled and not all the nodes in their phylogenies were strongly supported. A subsequent study published in 2016, that included 37 of the 42 Muscicapini species, confirmed that the genera were non-monophyletic and proposed a reorganised arrangement of the species with several new or resurrected genera.
The International Ornithologists' Union recognises 357 species and divides the family into 54 genera. Subdivisions have been proposed by Sangster et al. (2010). For a complete list of species, see "List of Old World flycatcher species".
Family Muscicapidae
Subfamily Muscicapinae (Fleming, 1822)
Tribe Copsychini (Sundevall, 1872)
Alethe – alethes
Cercotrichas – scrub robins
Copsychus – magpie-robins or shamas
Tribe Muscicapini (Fleming, 1822)
Agricola
Fraseria – forest flycatchers
Melaenornis
Namibornis – single species: Herero chat
Empidornis – single species: silverbird
Sigelus – single species: fiscal flycatcher
Bradornis
Humblotia – single species: Humblot's flycatcher
Muscicapa
Subfamily Niltavinae (Sangster, Alström, Forsmark and Olsson, 2010)
Leucoptilon – single species: white-tailed flycatcher
Sholicola – sholakilis
Niltava – niltavas
Cyanoptila – flycatchers
Eumyias – blue flycatchers
Anthipes – flycatchers
Cyornis – blue flycatchers
Subfamily Erithacinae (G.R. Gray, 1846) – African forest robin assemblage
Erithacus – single species: European robin
Swynnertonia – single species: Swynnerton's robin
Pogonocichla – single species: white-starred robin
Stiphrornis – single species: forest robin
Cossyphicula – robin-chats
Chamaetylas – (4 species)
Cossypha – robin-chats
Cichladusa – palm thrushes
Xenocopsychus – single species: Angola cave chat
Dessonornis – robin-chats
Sheppardia – akalats
Subfamily Saxicolinae (Vigors, 1825)
Irania – single species: white-throated robin
Luscinia – nightingales and relatives
Myiomela – robins
Calliope – rubythroats
Enicurus – forktails
Cinclidium – single species: blue-fronted robin
Myophonus – whistling thrushes
Heinrichia – single species: great shortwing
Vauriella
Leonardina – single species: Bagobo babbler
Brachypteryx – shortwings
Larvivora – East and South-East Asian robins
Ficedula – flycatchers
Tarsiger – bush robins and bluetails
Heteroxenicus – single species: Gould's shortwing
Phoenicurus – redstarts
Monticola – rock thrushes
Saxicola – stonechats and chats
Campicoloides – single species: buff-streaked chat
Emarginata
Pinarochroa – single species: moorland chat
Thamnolaea – cliff chats
Myrmecocichla – chats
Oenanthe – wheatears
The cladogram below is based on a molecular phylogenetic study of the family by Min Zhao and collaborators that was published in 2023. Some regions of the phylogenetic tree were not strongly supported by the sequence data. Both the genera included and the number of species in each genus are taken from the list of birds maintained by Frank Gill, Pamela Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Description
The appearance of these birds is very varied, but they mostly have weak songs and harsh calls. They are small to medium birds, ranging from 9 to 22 cm in length. Many species are dull brown in colour, but the plumage of some can be much brighter, especially in the males. Most have broad, flattened bills suited to catching insects in flight, although the few ground-foraging species typically have finer bills.
Old World flycatchers live in almost every environment with a suitable supply of trees, from dense forest to open scrub, and even the montane woodland of the Himalayas. The more northerly species migrate south in winter, ensuring a continuous diet of insects.
Depending on the species, their nests are either well-constructed cups placed in a tree or cliff ledge, or simply lining in a pre-existing tree hole. The hole-nesting species tend to lay larger clutches, with an average of eight eggs, rather than just two to five.
| Biology and health sciences | Passerida | null |
225982 | https://en.wikipedia.org/wiki/Ground%20state | Ground state | The ground state of a quantum-mechanical system is its stationary state of lowest energy; the energy of the ground state is known as the zero-point energy of the system. An excited state is any state with energy greater than the ground state. In quantum field theory, the ground state is usually called the vacuum state or the vacuum.
If more than one ground state exists, they are said to be degenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator that acts non-trivially on a ground state and commutes with the Hamiltonian of the system.
According to the third law of thermodynamics, a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature.
Absence of nodes in one dimension
In one dimension, the ground state of the Schrödinger equation can be proven to have no nodes.
Derivation
Consider the average energy of a state with a node at x = 0; i.e., ψ(0) = 0. The average energy in this state would be
where V(x) is the potential.
With integration by parts:
Hence in case that is equal to zero, one gets:
Now, consider a small interval around x = 0; i.e., x ∈ [−ε, ε]. Take a new (deformed) wave function ψε(x) to be defined as ψε(x) = ψ(x), for x < −ε; ψε(x) = −ψ(x), for x > ε; and constant for x ∈ [−ε, ε]. If ε is small enough, this is always possible to do, so that ψε(x) is continuous.
Assuming around , one may write
where is the norm.
Note that the kinetic-energy densities hold everywhere because of the normalization. More significantly, the average kinetic energy is lowered by by the deformation to .
Now, consider the potential energy. For definiteness, let us choose . Then it is clear that, outside the interval , the potential energy density is smaller for the because there.
On the other hand, in the interval we have
which holds to order .
However, the contribution to the potential energy from this region for the state with a node is
lower, but still of the same lower order as for the deformed state , and subdominant to the lowering of the average kinetic energy.
Therefore, the potential energy is unchanged up to order , if we deform the state with a node into a state without a node, and the change can be ignored.
We can therefore remove all nodes and thereby reduce the energy, which implies that a wave function with a node cannot be the ground state. Thus the ground-state wave function cannot have a node. This completes the proof. (The average energy may then be further lowered by eliminating undulations, to the variational absolute minimum.)
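The key quantities of this argument can be sketched in LaTeX as follows; the notation (ψ for the wave function, ε for the half-width of the deformation interval, a node placed at x = 0) is assumed for this sketch rather than taken from the original presentation.

```latex
% Sketch of the no-node argument (reconstruction; notation assumed).
\begin{align*}
  \langle H \rangle &= \int \mathrm{d}x \left( \frac{\hbar^{2}}{2m}\,|\psi'(x)|^{2}
                       + V(x)\,|\psi(x)|^{2} \right)
  && \text{(average energy, after integration by parts)} \\[4pt]
  \psi_{\varepsilon}(x) &=
    \begin{cases}
      \psi(x),        & x < -\varepsilon \\
      \text{const.},  & -\varepsilon \le x \le \varepsilon \\
      -\psi(x),       & x > \varepsilon
    \end{cases}
  && \text{(deformed, nodeless trial function)}
\end{align*}
% The deformation lowers the kinetic-energy term while changing the potential-energy
% term only at higher order in \varepsilon, so a state with a node cannot minimize
% \langle H \rangle and therefore cannot be the ground state.
```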
Implication
As the ground state has no nodes, it is spatially non-degenerate, i.e. there are no two stationary quantum states with the energy eigenvalue of the ground state and the same spin state that differ only in their position-space wave functions.
The reasoning goes by contradiction: if the ground state were degenerate, then there would be two orthonormal stationary states, later on represented by their complex-valued position-space wave functions, and any superposition of them with the complex coefficients fulfilling the normalization condition would also be such a state, i.e. it would have the same energy eigenvalue and the same spin state.
Now let be some random point (where both wave functions are defined) and set:
and
with
(according to the premise no nodes).
Therefore, the position-space wave function of is
Hence
for all .
But i.e., is a node of the ground state wave function and that is in contradiction to the premise that this wave function cannot have a node.
Note that the ground state could be degenerate because of different spin states, such as spin-up and spin-down, while having the same position-space wave function: any superposition of these states would create a mixed spin state but leave the spatial part (as a common factor of both) unaltered.
Examples
The wave function of the ground state of a particle in a one-dimensional box is a half-period sine wave, which goes to zero at the two edges of the well. The energy of the particle is given by En = n²h²/(8mL²), where h is the Planck constant, m is the mass of the particle, n is the energy state (n = 1 corresponds to the ground-state energy), and L is the width of the well.
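As a quick numerical check of En = n²h²/(8mL²), the snippet below evaluates the ground-state energy of an electron confined to a box; the 1 nm box width is an assumed illustrative value.

```python
# Particle in a one-dimensional box: E_n = n^2 h^2 / (8 m L^2).
h = 6.62607015e-34     # Planck constant, J s
m_e = 9.1093837e-31    # electron mass, kg
eV = 1.602176634e-19   # joules per electronvolt
L = 1e-9               # assumed box width of 1 nm, chosen for illustration

def box_energy(n: int, m: float = m_e, width: float = L) -> float:
    return n**2 * h**2 / (8 * m * width**2)

print(f"E_1 = {box_energy(1) / eV:.3f} eV")  # about 0.38 eV for an electron in a 1 nm box
```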
The wave function of the ground state of a hydrogen atom is a spherically symmetric distribution centred on the nucleus, which is largest at the center and reduces exponentially at larger distances. The electron is most likely to be found at a distance from the nucleus equal to the Bohr radius. This function is known as the 1s atomic orbital. For hydrogen (H), an electron in the ground state has energy −13.6 eV, relative to the ionization threshold. In other words, 13.6 eV is the energy input required for the electron to no longer be bound to the atom.
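The hydrogen levels follow the familiar Bohr formula En = −13.6 eV / n²; the short snippet below evaluates the first few levels, with n = 1 reproducing the ground-state value quoted above.

```python
# Hydrogen energy levels: E_n = -13.6 eV / n^2 (Bohr formula).
RYDBERG_EV = 13.605693  # ionization energy of ground-state hydrogen, eV

def hydrogen_level(n: int) -> float:
    return -RYDBERG_EV / n**2

for n in (1, 2, 3):
    print(f"n={n}: {hydrogen_level(n):+.3f} eV")
# n=1 gives about -13.6 eV, the ground state; higher levels approach the ionization threshold at zero.
```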
The exact definition of one second of time since 1997 has been the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom at rest at a temperature of 0 K.
| Physical sciences | Quantum mechanics | Physics |
226090 | https://en.wikipedia.org/wiki/Cutlass | Cutlass | A cutlass is a short, broad sabre or slashing sword with a straight or slightly curved blade sharpened on the cutting edge and a hilt often featuring a solid cupped or basket-shaped guard. It was a common naval weapon during the early Age of Sail.
Etymology
The word "cutlass" developed from the 17th-century English use of coutelas, a 16th-century French word for a machete-like mid-length single-edged blade (the modern French for "knife", in general, is couteau; in 17th- and 18th-century English the word was often spelled "cuttoe"). The French word coutelas may be a convergent development from a Latin root, along with the Italian coltellaccio or cortelazo, meaning "large knife".
In Italy, the cortelazo was a similar short, broad-bladed sabre popular during the 16th century.
The root coltello, for "knife", derived ultimately from the Latin cultellus meaning "smaller knife", which is the common Latin root for both the Italian and French words.
In the English-speaking Caribbean, the word "cutlass" is also used as a word for machete (John Klein, "What Is a Machete, Anyway?", The Atlantic, Oct 21, 2013, accessed Jan 25, 2015).
History and use
Origin
The cutlass is a 17th-century descendant of the edged short sword, exemplified by the medieval falchion.
Woodsmen and soldiers in the 17th and 18th centuries used a similar short and broad backsword called a hanger, or in German a messer, meaning "knife". Often occurring with the full tang (i.e. slab tang) more typical of daggers than swords in Europe, these blades may ultimately derive through the falchion (facon, falcon, fauchard) from the falx or seax.
In England, about 1685 the rather long straight-bladed sword formerly in use began to be superseded by the "hanger". This weapon had a short and more or less curved single-edged blade with a brass hilt of a rather flat double-shell and knuckle-bow. The grip was generally of wood, bound with wire, but some specimens show a brass grip with spiral grooves. These are probably early models. The length of the blade is usually about .
History
Although also used on land, the cutlass is best known as the sailor's preferred weapon, as it was robust enough to hack or cut through heavy ropes, thick canvas, and dense vegetation while being short enough to be used in relatively close quarters combat, such as during boarding actions, in the rigging, or below decks. Another advantage to the cutlass was its simplicity of use, as it required less training than that required to master a rapier or small sword.
Cutlasses are famous for being used by pirates, although there is no reason to believe that Caribbean buccaneers invented them, as has occasionally been claimed. However, the subsequent use of cutlasses by pirates is well documented in contemporary sources, notably by the pirate crews of William Fly, William Kidd, and Stede Bonnet. French historian Alexandre Exquemelin reports the buccaneer François l'Ollonais using a cutlass as early as 1667. Pirates used these weapons for intimidation as much as for combat, often needing no more than to grip their hilts to induce a crew to surrender, or beating captives with the flat of the blade to force their compliance or responsiveness to interrogation.
Owing to its versatility, the cutlass was as often an agricultural implement and tool as it was a weapon (cf. machete, to which the same comment applies). It was used commonly in rain forest and sugarcane areas, such as the Caribbean and Central America. In their most simplified form they are held to have become the machete of the Caribbean.
The leadcutter sword was a weapon modelled on the cutlass, designed for use in shows and demonstrations of swordsmanship in the late Victorian era. Wilkinson Sword made these swords in four sizes, no. 1 to no. 4, of increasing weight to suit the strength of the user. The leadcutter was so named because in demonstrations it was used to cut a lead bar in half. Wilkinson included a mould for the lead bar with each purchase of their swords.
Modern history
In 1830, after a constable of the London Metropolitan Police was shot and stabbed while on duty, the Home Secretary ordered that each police officer in the force "should be issued with a cutlass for his defence"; training in their use was provided at Wellington Barracks. Initially carried while on night duty, they were soon relegated to being kept in the local inspector's office for use in an emergency. Provincial police forces sometimes deployed cutlasses during public disorder, using the hilts and flat edges of the blades to strike rioters, but there is no record of anyone being killed with one. The last recorded issue of police cutlasses was during the Tottenham Outrage, an armed robbery in 1909.
In 1936, the British Royal Navy announced that from then on cutlasses would be carried only for ceremonial duties and not used in landing parties. The last recorded use of cutlasses by the Royal Navy is often said to be on 16 February 1940 during the boarding action known as the Altmark Incident. However, this is disbelieved by the majority of the HMS Cossack Association (Cossack was the ship that boarded Altmark) and the authors of British Naval Swords and Swordsmanship. The authors point to another claim, a boarding by HMS Armada in 1952, but disbelieve this one too. In their view, the last use of cutlasses by the Royal Navy was by a shore party in China in 1900. Cutlasses continue to be worn in the Royal Navy by Chief Petty Officers when escorting the White Ensign and by Senior or Leading Ratings as part of an escort at a court-martial.
The cutlass remained an official weapon in the United States Navy until it was stricken from the Navy's active inventory in 1949. The cutlass was seldom used for weapons training after the early 1930s. The last new model of cutlass adopted by the US Navy was the US M1917 cutlass, adopted during World War I; it was based on the Dutch M1898 klewang. Although cutlasses were still being made during World War II under the US M1941 designation, this was only a slightly modified variant of the US M1917 cutlass. A US Marine Combat Engineer NCO is reported to have killed an enemy combatant with a US M1941 cutlass at the Battle of Inchon during the Korean War. A cutlass is still carried by the recruit designated as the Recruit Chief Petty Officer for each recruit division while at the US Navy Recruit Training Command. In a message released 31 March 2010, the US Navy approved optional wear of a ceremonial cutlass as part of the Chief Petty Officer dress uniform, pending final design approval. That approval came in January 2011, and the cutlass was made available for ceremonial wear by Chief Petty Officers in August of that same year.
| Technology | Swords | null |
226309 | https://en.wikipedia.org/wiki/Secondary%20metabolite | Secondary metabolite | Secondary metabolites, also called specialised metabolites, secondary products, or natural products, are organic compounds produced by any lifeform, e.g. bacteria, archaea, fungi, animals, or plants, which are not directly involved in the normal growth, development, or reproduction of the organism. Instead, they generally mediate ecological interactions, which may produce a selective advantage for the organism by increasing its survivability or fecundity. Specific secondary metabolites are often restricted to a narrow set of species within a phylogenetic group. Secondary metabolites often play an important role in plant defense against herbivory and other interspecies defenses. Humans use secondary metabolites as medicines, flavourings, pigments, and recreational drugs.
The term secondary metabolite was first coined by Albrecht Kossel, the 1910 Nobel laureate in Physiology or Medicine. Thirty years later, the Polish botanist Friedrich Czapek described secondary metabolites as end products of nitrogen metabolism.
Secondary metabolites commonly mediate antagonistic interactions, such as competition and predation, as well as mutualistic ones such as pollination and resource mutualisms. Usually, secondary metabolites are confined to a specific lineage or even species, though there is considerable evidence that horizontal transfer across species or genera of entire pathways plays an important role in bacterial (and, likely, fungal) evolution. Research also shows that secondary metabolism can affect different species in varying ways. In the same forest, four separate species of arboreal marsupial folivores reacted differently to a secondary metabolite in eucalypts. This shows that differing types of secondary metabolites can be the split between two herbivore ecological niches. Additionally, certain species evolve to resist secondary metabolites and even use them for their own benefit. For example, monarch butterflies have evolved to be able to eat milkweed (Asclepias) despite the presence of toxic cardiac glycosides. The butterflies are not only resistant to the toxins, but are actually able to benefit by actively sequestering them, which can lead to the deterrence of predators.
Plant secondary metabolites
Plants are capable of producing and synthesizing diverse groups of organic compounds, which are divided into two major groups: primary and secondary metabolites. Secondary metabolites are metabolic intermediates or products which are not essential to the growth and life of the producing plants but rather are required for the interaction of plants with their environment and are produced in response to stress. Their antibiotic, antifungal and antiviral properties protect the plant from pathogens. Some secondary metabolites such as phenylpropanoids protect plants from UV damage. The biological effects of plant secondary metabolites on humans have been known since ancient times. The herb Artemisia annua, which contains artemisinin, has been used in traditional Chinese medicine for more than two thousand years. Plant secondary metabolites are classified by their chemical structure and can be divided into four major classes: terpenes, phenylpropanoids (i.e. phenolics), polyketides, and alkaloids.
Chemical classes
Terpenoids
Terpenes constitute a large class of natural products which are composed of isoprene units. Terpenes are pure hydrocarbons, while terpenoids are oxygenated hydrocarbons. The general molecular formula of terpenes is a multiple of the isoprene unit, (C5H8)n, where n is the number of linked isoprene units. Hence, terpenes are also termed isoprenoid compounds. Classification is based on the number of isoprene units present in their structure. Some terpenoids (i.e. many sterols) are primary metabolites. Some terpenoids that may have originated as secondary metabolites have subsequently been recruited as plant hormones, such as gibberellins, brassinosteroids, and strigolactones.
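The isoprene rule above lends itself to a small worked example. The snippet below expands (C5H8)n for the conventional terpene classes; the mapping from class name to isoprene count is the standard one (two units for monoterpenes, three for sesquiterpenes, and so on), used here purely for illustration.

```python
# Terpene carbon skeletons as multiples of the isoprene unit C5H8.
CLASSES = {
    1: "hemiterpene",
    2: "monoterpene",
    3: "sesquiterpene",
    4: "diterpene",
    6: "triterpene",
    8: "tetraterpene",
}

def terpene_formula(n_isoprene: int) -> str:
    return f"C{5 * n_isoprene}H{8 * n_isoprene}"

for n, name in CLASSES.items():
    print(f"{name:13s} (n={n}): {terpene_formula(n)}")
# e.g. monoterpenes such as limonene are C10H16, and sesquiterpenes are C15H24.
```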
Examples of terpenoids built from hemiterpene oligomerization are:
Azadirachtin, present in Azadirachta indica, the (Neem tree)
Artemisinin, present in Artemisia annua, Chinese wormwood
Tetrahydrocannabinol, present in Cannabis sativa, cannabis
Saponins, glycosylated triterpenes present in e.g. Chenopodium quinoa, quinoa.
Phenolic compounds
Phenolics are chemical compounds characterized by the presence of an aromatic ring structure bearing one or more hydroxyl groups. Phenolics are the most abundant secondary metabolites of plants, ranging from simple molecules such as phenolic acids to highly polymerized substances such as tannins. Classes of phenolics have been characterized on the basis of their basic skeleton.
An example of a plant phenol is:
Resveratrol, a C14 stilbenoid produced by e.g. grapes.
Alkaloids
Alkaloids are a diverse group of nitrogen-containing basic compounds. They are typically derived from plant sources and contain one or more nitrogen atoms. Chemically they are very heterogeneous. Based on chemical structures, they may be classified into two broad categories:
Non heterocyclic or atypical alkaloids, for example hordenine or N-methyltyramine, colchicine, and taxol
Heterocyclic or typical alkaloids, for example quinine, caffeine, and nicotine
Examples of alkaloids produced by plants are:
Hyoscyamine, present in Datura stramonium
Atropine, present in Atropa belladonna, deadly nightshade
Cocaine, present in Erythroxylum coca the Coca plant
Scopolamine, present in the Solanaceae (nightshade) plant family
Codeine and morphine, present in Papaver somniferum, the opium poppy
Vincristine and vinblastine, mitotic inhibitors found in Catharanthus roseus, the rosy periwinkle
Many alkaloids affect the central nervous system of animals by binding to neurotransmitter receptors.
Glucosinolates
Glucosinolates are secondary metabolites that include both sulfur and nitrogen atoms, and are derived from glucose, an amino acid and sulfate.
An example of a glucosinolate in plants is Glucoraphanin, from broccoli (Brassica oleracea var. italica).
Plant secondary metabolites in medicine
Many drugs used in modern medicine are derived from plant secondary metabolites.
The two most commonly known terpenoids are artemisinin and paclitaxel. Artemisinin was widely used in traditional Chinese medicine and later rediscovered as a powerful antimalarial by the Chinese scientist Tu Youyou, who was awarded the Nobel Prize for the discovery in 2015. The malaria parasite Plasmodium falciparum has since become resistant to artemisinin alone, and the World Health Organization recommends its use in combination with other antimalarial drugs for successful therapy. Paclitaxel, the active compound in the drug Taxol, is a chemotherapy agent used to treat many forms of cancer, including ovarian cancer, breast cancer, lung cancer, Kaposi sarcoma, cervical cancer, and pancreatic cancer. It was first isolated in 1973 from the bark of a coniferous tree, the Pacific yew.
Morphine and codeine both belong to the class of alkaloids and are derived from opium poppies. Morphine was discovered in 1804 by the German pharmacist Friedrich Sertürner and was the first active alkaloid extracted from the opium poppy. It is mostly known for its strong analgesic effects; however, morphine is also used to treat shortness of breath and addiction to stronger opiates such as heroin. Despite its positive effects on humans, morphine has very strong adverse effects, such as addiction, hormone imbalance or constipation. Due to its highly addictive nature, morphine is a strictly controlled substance around the world, used only in very severe cases, with some countries underusing it compared to the global average because of the social stigma around it.
Codeine, also an alkaloid derived from the opium poppy, is considered the most widely used drug in the world according to the World Health Organization. It was first isolated in 1832 by the French chemist Pierre Jean Robiquet, also known for the discovery of caffeine and of the widely used red dye alizarin. Codeine is primarily used to treat mild pain and relieve coughing, although in some cases it is used to treat diarrhea and some forms of irritable bowel syndrome. Codeine has roughly 0.1–0.15 times the potency of orally ingested morphine, hence it is much safer to use. Although codeine can be extracted from the opium poppy, the process is not economically feasible due to the low abundance of pure codeine in the plant. Chemical methylation of the much more abundant morphine is the main method of production.
Atropine is an alkaloid first found in Atropa belladonna, a member of the nightshade family. While atropine was first isolated in the 19th century, its medical use dates back to at least the fourth century B.C., when it was used for wounds, gout, and sleeplessness. Currently atropine is administered intravenously to treat bradycardia and as an antidote to organophosphate poisoning. Overdosing on atropine may lead to atropine poisoning, which results in side effects such as blurred vision, nausea, lack of sweating, dry mouth and tachycardia.
Resveratrol is a phenolic compound of the flavonoid class. It is highly abundant in grapes, blueberries, raspberries and peanuts. It is commonly taken as a dietary supplement for extending life and reducing the risk of cancer and heart disease; however, there is no strong evidence supporting its efficacy. Nevertheless, flavonoids are in general thought to have beneficial effects for humans. Certain studies have shown that flavonoids have direct antibiotic activity. A number of in vitro and limited in vivo studies have shown that flavonoids such as quercetin have synergistic activity with antibiotics and are able to suppress bacterial loads.
Digoxin is a cardiac glycoside first derived by William Withering in 1785 from the foxglove (Digitalis) plant. It is typically used to treat heart conditions such as atrial fibrillation, atrial flutter or heart failure. Digoxin can, however, have side effects such as nausea, bradycardia, diarrhea or even life-threatening arrhythmia.
Fungal secondary metabolites
The three main classes of fungal secondary metabolites are polyketides, nonribosomal peptides and terpenes. Although fungal secondary metabolites are not required for growth, they play an essential role in the survival of fungi in their ecological niche. The best known fungal secondary metabolite is penicillin, discovered by Alexander Fleming in 1928. In 1945, Fleming, alongside Ernst Chain and Howard Florey, received a Nobel Prize for the discovery, which was pivotal in preventing over 100,000 deaths during World War II.
Examples of other fungal secondary metabolites are:
Lovastatin, a polyketide from e.g. Pleurotus ostreatus, oyster mushrooms.
Aflatoxin B1, a polyketide from Aspergillus flavus.
Ciclosporin, a non-ribosomal cyclic peptide from Tolypocladium inflatum.
Lovastatin was the first FDA-approved secondary metabolite used to lower cholesterol levels. Lovastatin occurs naturally in low concentrations in oyster mushrooms, red yeast rice, and Pu-erh. Lovastatin's mode of action is competitive inhibition of HMG-CoA reductase, the rate-limiting enzyme responsible for converting HMG-CoA to mevalonate.
Fungal secondary metabolites can also be dangerous to humans. Claviceps purpurea, a member of the ergot group of fungi that typically grows on rye, can cause death when ingested. The build-up of poisonous alkaloids found in C. purpurea leads to symptoms such as seizures and spasms, diarrhea, paresthesias, itching, psychosis or gangrene. Currently, removal of ergot bodies involves putting the rye in a brine solution, in which healthy grains sink and infected ones float.
Bacterial secondary metabolites
Bacterial production of secondary metabolites starts in the stationary phase as a consequence of a lack of nutrients or in response to environmental stress. Secondary metabolite synthesis in bacteria is not essential for their growth; however, these compounds allow bacteria to interact better with their ecological niche. The main synthetic pathways of secondary metabolite production in bacteria are the β-lactam, oligosaccharide, shikimate, polyketide and non-ribosomal pathways. Many bacterial secondary metabolites are toxic to mammals. When secreted, these poisonous compounds are known as exotoxins, whereas those found in the prokaryotic cell wall are endotoxins.
Examples of bacterial secondary metabolites are:
Phenazine
Pyocyanin, from Pseudomonas aeruginosa.
Other phenazines from Pseudomonas spp. and Streptomyces spp.
Polyketides
Avermectin, from Streptomyces avermitilis.
Epothilones, macrolactones from the soil-dwelling myxobacterium Sorangium cellulosum.
Erythromycin, Saccharopolyspora erythraea.
Nystatin, from Streptomyces noursei.
Rifamycin, from Amycolatopsis rifamycinica.
Nonribosomal peptides
Bacitracin, from Bacillus subtilis (Tracy strain).
Gramicidin, from Brevibacillus brevis.
Polymyxin, from Paenibacillus polymyxa.
Ramoplanin, from Actinoplanes strain ATCC 33076.
Teicoplanins, from Actinoplanes teicomyceticus.
Vancomycin, from the soil bacterium Amycolatopsis orientalis.
Ribosomal peptides
Microcins, bacteriocins such as microcin V from Escherichia coli.
Thiostrepton, from several strains of streptomycetes, e.g. Streptomyces azureus.
Glucosides
Nojirimycin, an iminosugar from a class of Streptomyces species.
Alkaloids
Tetrodotoxin, a neurotoxin produced by Pseudoalteromonas and other bacteria living in symbiosis with animals such as e.g. pufferfish.
Terpenoids
Carotenoids, a pigment produced by different species of bacteria, such as Micrococcus sp.
Strepsesquitriol, a compound produced by Streptomyces sp. that can reduce inflammation without being toxic to cells, making it a promising candidate for developing anti-inflammatory medicines.
Micromonohalimane B, a diterpene identified from Micromonospora sp., which shows moderate antibacterial activity against some antibiotic-resistant Gram-positive bacteria.
Cyclomarins, potent anti-inflammatory and antiviral compounds produced by a marine Streptomyces, with strong cytotoxicity against cancer cell lines and activity against herpesviruses.
Archaea secondary metabolites
Archaea are capable of producing a variety of secondary metabolites, which may have significant biotechnological applications. Despite this, the biosynthetic pathways for secondary metabolites in archaea are less well understood than those in bacteria. Notably, archaea often lack some biosynthesis genes commonly present in bacteria, which suggests that they may possess unique metabolic pathways for synthesizing these compounds.
Extracellular polymeric substances
Extracellular polymeric substances can effectively adsorb and degrade hazardous organic chemicals. While these compounds are produced by various organisms, archaea are particularly promising for wastewater treatment due to their high tolerance to saline concentrations and their ability to grow anaerobically.
Biotechnological approaches
Selective breeding was one of the first biotechnological techniques used to reduce unwanted secondary metabolites in food, such as naringin, which causes bitterness in grapefruit. In some cases, increasing the content of secondary metabolites in a plant is the desired outcome. Traditionally this was done using in-vitro plant tissue culture techniques, which allow control of growth conditions, mitigate the seasonality of plants, and protect them from parasites and harmful microbes. Synthesis of secondary metabolites can be further enhanced by introducing elicitors such as jasmonic acid, UV-B or ozone into a plant tissue culture. These elicitors stress the plant, leading to increased production of secondary metabolites.
New approaches have been developed to further increase the yield of secondary metabolites. A novel approach used by Evolva employs recombinant strains of the yeast S. cerevisiae to produce secondary metabolites normally found in plants. The first chemical compound successfully synthesised by Evolva was vanillin, widely used as a flavouring in the food and beverage industry. The process involves inserting the desired secondary metabolite gene into an artificial chromosome in the recombinant yeast, leading to synthesis of vanillin. Evolva currently produces a wide array of chemicals, such as stevia, resveratrol and nootkatone.
Nagoya protocol
With the development of recombinant technologies, the Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity was signed in 2010. The protocol regulates the conservation and protection of genetic resources to prevent the exploitation of smaller and poorer countries. It puts in place a compensation scheme for the countries of origin if genetic, protein or small-molecule resources sourced from biodiverse countries become profitable.
| Biology and health sciences | Metabolic processes | Biology |
226344 | https://en.wikipedia.org/wiki/Northern%20shoveler | Northern shoveler | The northern shoveler (; Spatula clypeata), known simply in Britain as the shoveler, is a common and widespread duck. It breeds in northern areas of Europe and across the Palearctic and across most of North America, wintering in southern Europe, the Indian subcontinent, Southeast Asia, Central America, the Caribbean, and northern South America. It is a rare vagrant to Australia. In North America, it breeds along the southern edge of Hudson Bay and west of this body of water, and as far south as the Great Lakes west to Colorado, Nevada, and Oregon.
The northern shoveler is one of the species to which the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA) applies. The conservation status of this bird is Least Concern.
Taxonomy
The northern shoveler was first formally described by the Swedish naturalist Carl Linnaeus in 1758 in the tenth edition of his Systema Naturae. He introduced the binomial name Anas clypeata. A molecular phylogenetic study comparing mitochondrial DNA sequences, published in 2009, found that the genus Anas, as then defined, was non-monophyletic. The genus was subsequently split into four monophyletic genera, with ten species, including the northern shoveler, moved into the resurrected genus Spatula. This genus had originally been proposed by the German zoologist Friedrich Boie in 1822. The name Spatula is the Latin for a "spoon" or "spatula". The specific epithet is derived from Latin clypeata, "shield-bearing" (from clypeus, "shield").
No living subspecies are accepted today. Fossil bones of a very similar duck have been found in Early Pleistocene deposits at Dursunlu, Turkey. It is unresolved, however, how these birds were related to the northern shoveler of today; i.e., whether the differences noted were due to being a related species or paleosubspecies, or attributable to individual variation.
Description
This species is unmistakable in the northern hemisphere due to its large spatulate bill. The breeding drake has an iridescent dark green head, white breast and chestnut belly and flanks. In flight, pale blue forewing feathers are revealed, separated from the green speculum by a white border. In early fall the male will have a white crescent on each side of the face. In non-breeding (eclipse) plumage, the drake resembles the female.
The female is a drab mottled brown like other dabblers, with plumage much like a female mallard, but easily distinguished by the long broad bill, which is gray tinged with orange on cutting edge and lower mandible. The female's forewing is gray.
They are long and have a wingspan of with a weight of .
Behavior
Northern shovelers feed by dabbling for plant food, often swinging their bills from side to side to strain food from the water. They use their highly specialized bill (from which their name is derived) to forage for aquatic invertebrates. Their wide, flat bill is equipped with well-developed lamellae – small, comb-like structures on the edge of the bill that act like sieves, allowing the birds to skim crustaceans and plankton from the water's surface. This adaptation, more specialized in shovelers, gives them an advantage over other puddle ducks, with which they do not have to compete for food resources during most of the year. Thus, mud-bottomed marshes rich in invertebrate life are their habitat of choice.
The shoveler prefers to nest in grassy areas away from open water. Their nest is a shallow depression on the ground, lined with plant material and down. Hens typically lay about nine eggs. The drakes are very territorial during breeding season and will defend their territory and partners from competing males. Drakes also engage in elaborate courtship behaviors, both on the water and in the air; it is not uncommon for a dozen or more males to pursue a single hen. Despite their stout appearance, shovelers are nimble fliers.
This is a fairly quiet species. The male has a clunking call, whereas the female has a mallard-like quack.
Habitat and range
This is a bird of open wetlands, such as wet grassland or marshes with some emergent vegetation. It breeds in wide areas across Eurasia, western North America and the Great Lakes region of the United States.
This bird winters in southern Europe, the Indian Subcontinent, the Caribbean, northern South America, the Malay Archipelago, Japan and other areas. Those wintering in the Indian Subcontinent make the taxing journey over the Himalayas, often taking a break in wetlands just south of the Himalaya before continuing further south to warmer regions. In North America it winters south of a line from Washington to Idaho and from New Mexico east to Kentucky, also along the Eastern Seaboard as far north as Massachusetts. In the British Isles, home to more than 20% of the North Western European population, it is best known as a winter visitor, although it is more frequently seen in southern and eastern England, especially around the Ouse Washes, the Humber and the North Kent Marshes, and in much smaller numbers in Scotland and western parts of England. In winter, breeding birds move south, and are replaced by an influx of continental birds from further north. It breeds across most of Ireland, but the population there is very difficult to assess. Surveys in 2017 and 2018 suggest that it is more common and widespread in Ireland than previously thought.
It is strongly migratory and winters further south than its breeding range. It has occasionally been reported as a vagrant as far south as Australia, New Zealand and South Africa. It is not as gregarious as some dabbling ducks outside the breeding season and tends to form only small flocks. Among North America's duck species, northern shovelers trail only mallards and blue-winged teal in overall abundance. Their populations have been healthy since the 1960s, and have soared in recent years to more than 5 million birds (2015), most likely because of favorable breeding, migration, and wintering habitat conditions.
| Biology and health sciences | Anseriformes | Animals |
226424 | https://en.wikipedia.org/wiki/Four-vector | Four-vector | In special relativity, a four-vector (or 4-vector, sometimes Lorentz vector) is an object with four components, which transform in a specific way under Lorentz transformations. Specifically, a four-vector is an element of a four-dimensional vector space considered as a representation space of the standard representation of the Lorentz group, the (,) representation. It differs from a Euclidean vector in how its magnitude is determined. The transformations that preserve this magnitude are the Lorentz transformations, which include spatial rotations and boosts (a change by a constant velocity to another inertial reference frame).
Four-vectors describe, for instance, position in spacetime modeled as Minkowski space, a particle's four-momentum , the amplitude of the electromagnetic four-potential at a point in spacetime, and the elements of the subspace spanned by the gamma matrices inside the Dirac algebra.
The Lorentz group may be represented by 4×4 matrices . The action of a Lorentz transformation on a general contravariant four-vector (like the examples above), regarded as a column vector with Cartesian coordinates with respect to an inertial frame in the entries, is given by
(matrix multiplication) where the components of the primed object refer to the new frame. Related to the examples above that are given as contravariant vectors, there are also the corresponding covariant vectors , and . These transform according to the rule
where denotes the matrix transpose. This rule is different from the above rule. It corresponds to the dual representation of the standard representation. However, for the Lorentz group the dual of any representation is equivalent to the original representation. Thus the objects with covariant indices are four-vectors as well.
For an example of a well-behaved four-component object in special relativity that is not a four-vector, see bispinor. It is similarly defined, the difference being that the transformation rule under Lorentz transformations is given by a representation other than the standard representation. In this case, the rule reads , where is a 4×4 matrix other than . Similar remarks apply to objects with fewer or more components that are well-behaved under Lorentz transformations. These include scalars, spinors, tensors and spinor-tensors.
The article considers four-vectors in the context of special relativity. Although the concept of four-vectors also extends to general relativity, some of the results stated in this article require modification in general relativity.
Notation
The notations in this article are: lowercase bold for three-dimensional vectors, hats for three-dimensional unit vectors, capital bold for four dimensional vectors (except for the four-gradient), and tensor index notation.
Four-vector algebra
Four-vectors in a real-valued basis
A four-vector A is a vector with a "timelike" component and three "spacelike" components, and can be written in various equivalent notations:
where Aα is the magnitude component and Eα is the basis vector component; note that both are necessary to make a vector, and that when Aα is seen alone, it refers strictly to the components of the vector.
The upper indices indicate contravariant components. Here the standard convention is that Latin indices take values for spatial components, so that i = 1, 2, 3, and Greek indices take values for space and time components, so α = 0, 1, 2, 3, used with the summation convention. The split between the time component and the spatial components is a useful one to make when determining contractions of one four vector with other tensor quantities, such as for calculating Lorentz invariants in inner products (examples are given below), or raising and lowering indices.
In special relativity, the spacelike basis E1, E2, E3 and components A1, A2, A3 are often Cartesian basis and components:
although, of course, any other basis and components may be used, such as spherical polar coordinates
or cylindrical polar coordinates,
or any other orthogonal coordinates, or even general curvilinear coordinates. Note the coordinate labels are always subscripted as labels and are not indices taking numerical values. In general relativity, local curvilinear coordinates in a local basis must be used. Geometrically, a four-vector can still be interpreted as an arrow, but in spacetime, not just space. In relativity, the arrows are drawn as part of a Minkowski diagram (also called a spacetime diagram). In this article, four-vectors will be referred to simply as vectors.
It is also customary to represent the bases by column vectors:
so that:
The relation between the covariant and contravariant coordinates is through the Minkowski metric tensor (referred to as the metric), η which raises and lowers indices as follows:
and in various equivalent notations the covariant components are:
where the lowered index indicates it to be covariant. Often the metric is diagonal, as is the case for orthogonal coordinates (see line element), but not in general curvilinear coordinates.
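The raising and lowering of indices referred to above can be written compactly as follows; this is the standard convention, restated here as a sketch with assumed index placement.

```latex
% Lowering and raising indices with the Minkowski metric.
A_{\mu} = \eta_{\mu\nu}\, A^{\nu}, \qquad
A^{\mu} = \eta^{\mu\nu}\, A_{\nu}, \qquad
\eta^{\mu\alpha}\,\eta_{\alpha\nu} = \delta^{\mu}{}_{\nu}.
```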
The bases can be represented by row vectors:
so that:
The motivation for the above conventions are that the inner product is a scalar, see below for details.
Lorentz transformation
Given two inertial or rotated frames of reference, a four-vector is defined as a quantity which transforms according to the Lorentz transformation matrix Λ:
In index notation, the contravariant and covariant components transform according to, respectively:
in which the matrix has components in row and column , and the matrix has components in row and column .
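In standard index notation the two transformation rules just described take the following form; the index placement is the conventional one and is assumed for this sketch.

```latex
% Lorentz transformation of contravariant and covariant components.
A'^{\mu} = \Lambda^{\mu}{}_{\nu}\, A^{\nu}, \qquad
A'_{\mu} = \left(\Lambda^{-1}\right)^{\nu}{}_{\mu}\, A_{\nu}.
```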
For background on the nature of this transformation definition, see tensor. All four-vectors transform in the same way, and this can be generalized to four-dimensional relativistic tensors; see special relativity.
Pure rotations about an arbitrary axis
For two frames rotated by a fixed angle about an axis defined by the unit vector:
without any boosts, the matrix Λ has components given by:
where δij is the Kronecker delta, and εijk is the three-dimensional Levi-Civita symbol. The spacelike components of four-vectors are rotated, while the timelike components remain unchanged.
For the case of rotations about the z-axis only, the spacelike part of the Lorentz matrix reduces to the rotation matrix about the z-axis:
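In explicit matrix form, the z-axis case can be sketched as follows; the sign convention of the rotation (a counterclockwise rotation of the spatial x–y components by θ) is an assumption of this sketch.

```latex
% Rotation by an angle \theta about the z-axis: the time component is untouched
% and the x-y block is an ordinary planar rotation.
\Lambda =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & \cos\theta & -\sin\theta & 0 \\
0 & \sin\theta & \cos\theta & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
```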
Pure boosts in an arbitrary direction
For two frames moving at constant relative three-velocity v (not four-velocity, see below), it is convenient to denote and define the relative velocity in units of c by:
Then without rotations, the matrix Λ has components given by:
where the Lorentz factor is defined by:
and is the Kronecker delta. Contrary to the case for pure rotations, the spacelike and timelike components are mixed together under boosts.
For the case of a boost in the x-direction only, the matrix reduces to:
where the rapidity expression has been used, written in terms of the hyperbolic functions:
This Lorentz matrix illustrates the boost to be a hyperbolic rotation in four dimensional spacetime, analogous to the circular rotation above in three-dimensional space.
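For reference, a standard way to write the x-direction boost, in both the beta-gamma form and the equivalent rapidity form, is sketched below; the overall sign convention (a boost to a frame moving with velocity +v along x) is an assumption of this sketch.

```latex
% Boost along the x-axis with speed v = \beta c, \gamma = 1/\sqrt{1-\beta^{2}},
% and rapidity \phi defined by \beta = \tanh\phi, \gamma = \cosh\phi.
\Lambda =
\begin{pmatrix}
\gamma & -\beta\gamma & 0 & 0 \\
-\beta\gamma & \gamma & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
=
\begin{pmatrix}
\cosh\phi & -\sinh\phi & 0 & 0 \\
-\sinh\phi & \cosh\phi & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
```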
Properties
Linearity
Four-vectors have the same linearity properties as Euclidean vectors in three dimensions. They can be added in the usual entrywise way:
and similarly scalar multiplication by a scalar λ is defined entrywise by:
Then subtraction is the inverse operation of addition, defined entrywise by:
Minkowski tensor
Applying the Minkowski tensor to two four-vectors and , writing the result in dot product notation, we have, using Einstein notation:
in special relativity. The dot product of the basis vectors is the Minkowski metric, as opposed to the Kronecker delta as in Euclidean space. It is convenient to rewrite the definition in matrix form:
in which case above is the entry in row and column of the Minkowski metric as a square matrix. The Minkowski metric is not a Euclidean metric, because it is indefinite (see metric signature). A number of other expressions can be used because the metric tensor can raise and lower the components of or . For contra/co-variant components of and co/contra-variant components of , we have:
so in the matrix notation:
while for and each in covariant components:
with a similar matrix expression to the above.
Applying the Minkowski tensor to a four-vector A with itself we get:
which, depending on the case, may be considered the square, or its negative, of the length of the vector.
Following are two common choices for the metric tensor in the standard basis (essentially Cartesian coordinates). If orthogonal coordinates are used, there would be scale factors along the diagonal part of the spacelike part of the metric, while for general curvilinear coordinates the entire spacelike part of the metric would have components dependent on the curvilinear basis used.
Standard basis, (+−−−) signature
The (+−−−) metric signature is sometimes called the "mostly minus" convention, or the "west coast" convention.
In the (+−−−) metric signature, evaluating the summation over indices gives:
while in matrix form:
It is a recurring theme in special relativity to take the expression
in one reference frame, where C is the value of the inner product in this frame, and:
in another frame, in which C′ is the value of the inner product in this frame. Then since the inner product is an invariant, these must be equal:
that is:
Considering that physical quantities in relativity are four-vectors, this equation has the appearance of a "conservation law", but there is no "conservation" involved. The primary significance of the Minkowski inner product is that for any two four-vectors, its value is invariant for all observers; a change of coordinates does not result in a change in value of the inner product. The components of the four-vectors change from one frame to another; A and A′ are connected by a Lorentz transformation, and similarly for B and B′, although the inner products are the same in all frames. Nevertheless, this type of expression is exploited in relativistic calculations on a par with conservation laws, since the magnitudes of components can be determined without explicitly performing any Lorentz transformations. A particular example is with energy and momentum in the energy-momentum relation derived from the four-momentum vector (see also below).
In this signature we have:
With the signature (+−−−), four-vectors may be classified as spacelike if , timelike if , or null if .
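For reference, a sketch of the inner product and the classification conditions in this signature (standard results, stated here explicitly):

```latex
\eta_{\mu\nu} = \mathrm{diag}(+1,-1,-1,-1), \qquad
\mathbf{A}\cdot\mathbf{B} = A^0 B^0 - A^1 B^1 - A^2 B^2 - A^3 B^3
```

```latex
\mathbf{A}\cdot\mathbf{A} > 0 \ \text{(timelike)}, \qquad
\mathbf{A}\cdot\mathbf{A} < 0 \ \text{(spacelike)}, \qquad
\mathbf{A}\cdot\mathbf{A} = 0 \ \text{(null)}
```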
Standard basis, (−+++) signature
The (−+++) metric signature is sometimes called the "east coast" convention.
Some authors define η with the opposite sign, in which case we have the (−+++) metric signature. Evaluating the summation with this signature:
while the matrix form is:
Note that in this case, in one frame:
while in another:
so that:
which is equivalent to the above expression for C in terms of A and B. Either convention will work. With the Minkowski metric defined in the two ways above, the only difference between covariant and contravariant four-vector components is a sign, and that sign depends on which sign convention is used.
We have:
With the signature (−+++), four-vectors may be classified as spacelike if , timelike if , or null if .
Dual vectors
Applying the Minkowski tensor is often expressed as the effect of the dual vector of one vector on the other:
Here the Aν are the components of the dual vector A* of A in the dual basis, and are called the covariant coordinates of A, while the original Aν components are called the contravariant coordinates.
Four-vector calculus
Derivatives and differentials
In special relativity (but not general relativity), the derivative of a four-vector with respect to a scalar λ (invariant) is itself a four-vector. It is also useful to take the differential of the four-vector, dA and divide it by the differential of the scalar, dλ:
where the contravariant components are:
while the covariant components are:
In relativistic mechanics, one often takes the differential of a four-vector and divides by the differential in proper time (see below).
Fundamental four-vectors
Four-position
A point in Minkowski space is a time and spatial position, called an "event", or sometimes the position four-vector or four-position or 4-position, described in some reference frame by a set of four coordinates:
where r is the three-dimensional space position vector. If r is a function of coordinate time t in the same frame, i.e. r = r(t), this corresponds to a sequence of events as t varies. The definition R0 = ct ensures that all the coordinates have the same dimension (of length) and units (in the SI, meters). These coordinates are the components of the position four-vector for the event.
The displacement four-vector is defined to be an "arrow" linking two events:
For the differential four-position on a world line we have, using a norm notation:
defining the differential line element ds and differential proper time increment dτ, but this "norm" is also:
so that:
When considering physical phenomena, differential equations arise naturally; however, when considering space and time derivatives of functions, it is unclear which reference frame these derivatives are taken with respect to. It is agreed that time derivatives are taken with respect to the proper time . As proper time is an invariant, this guarantees that the proper-time-derivative of any four-vector is itself a four-vector. It is then important to find a relation between this proper-time-derivative and another time derivative (using the coordinate time t of an inertial reference frame). This relation is provided by taking the above differential invariant spacetime interval, then dividing by (c dt)² to obtain:
where u = dr/dt is the coordinate 3-velocity of an object measured in the same frame as the coordinates x, y, z, and coordinate time t, and
is the Lorentz factor. This provides a useful relation between the differentials in coordinate time and proper time:
This relation can also be found from the time transformation in the Lorentz transformations.
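A compact sketch of this derivation (standard special-relativistic kinematics, with u the coordinate 3-velocity):

```latex
c^2\,d\tau^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2
\;\Longrightarrow\;
\left(\frac{d\tau}{dt}\right)^{2} = 1 - \frac{u^2}{c^2}
\;\Longrightarrow\;
dt = \gamma(u)\,d\tau
```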
Important four-vectors in relativity theory can be defined by applying this differential .
Four-gradient
Considering that partial derivatives are linear operators, one can form a four-gradient from the partial time derivative ∂/∂t and the spatial gradient ∇. Using the standard basis, in index and abbreviated notations, the contravariant components are:
Note the basis vectors are placed in front of the components, to prevent confusion between taking the derivative of the basis vector, or simply indicating the partial derivative is a component of this four-vector. The covariant components are:
Since this is an operator, it does not have a "length", but evaluating the inner product of the operator with itself gives another operator:
called the D'Alembert operator.
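A sketch of the components and of the resulting operator, in the (+−−−) convention assumed above:

```latex
\partial^{\mu} = \left(\frac{1}{c}\frac{\partial}{\partial t},\, -\nabla\right), \qquad
\partial_{\mu} = \left(\frac{1}{c}\frac{\partial}{\partial t},\, \nabla\right), \qquad
\partial_{\mu}\partial^{\mu} = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2
```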
Kinematics
Four-velocity
The four-velocity of a particle is defined by:
Geometrically, U is a normalized vector tangent to the world line of the particle. Using the differential of the four-position, the magnitude of the four-velocity can be obtained:
in short, the magnitude of the four-velocity for any object is always a fixed constant:
The norm is also:
so that:
which reduces to the definition of the Lorentz factor.
Units of four-velocity are m/s in SI and 1 in the geometrized unit system. Four-velocity is a contravariant vector.
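In terms of the coordinate 3-velocity u and the Lorentz factor γ, a standard sketch of the four-velocity and its constant magnitude is:

```latex
\mathbf{U} = \frac{d\mathbf{X}}{d\tau} = \gamma(u)\,\bigl(c,\ \mathbf{u}\bigr), \qquad
\mathbf{U}\cdot\mathbf{U} = c^2
```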
Four-acceleration
The four-acceleration is given by:
where a = du/dt is the coordinate 3-acceleration. Since the magnitude of U is a constant, the four-acceleration is orthogonal to the four-velocity, i.e. the Minkowski inner product of the four-acceleration and the four-velocity is zero:
which is true for all world lines. The geometric meaning of four-acceleration is the curvature vector of the world line in Minkowski space.
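The orthogonality follows in one line from the constancy of the magnitude of the four-velocity (a standard argument):

```latex
\frac{d}{d\tau}\bigl(\mathbf{U}\cdot\mathbf{U}\bigr)
= 2\,\mathbf{A}\cdot\mathbf{U}
= \frac{d}{d\tau}\bigl(c^2\bigr) = 0
\;\Longrightarrow\;
\mathbf{A}\cdot\mathbf{U} = 0
```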
Dynamics
Four-momentum
For a massive particle of rest mass (or invariant mass) m0, the four-momentum is given by:
where the total energy of the moving particle is:
and the total relativistic momentum is:
Taking the inner product of the four-momentum with itself:
and also:
which leads to the energy–momentum relation:
This last relation is useful in relativistic mechanics, essential in relativistic quantum mechanics and relativistic quantum field theory, all with applications to particle physics.
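A sketch of the standard result, with E = γm₀c² the total energy and p = γm₀u the relativistic momentum as above:

```latex
\mathbf{P} = m_0 \mathbf{U} = \left(\frac{E}{c},\ \mathbf{p}\right), \qquad
\mathbf{P}\cdot\mathbf{P} = \frac{E^2}{c^2} - |\mathbf{p}|^2 = m_0^2 c^2
\;\Longrightarrow\;
E^2 = (|\mathbf{p}|c)^2 + (m_0 c^2)^2
```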
Four-force
The four-force acting on a particle is defined analogously to the 3-force as the time derivative of 3-momentum in Newton's second law:
where P is the power transferred to move the particle, and f is the 3-force acting on the particle. For a particle of constant invariant mass m0, this is equivalent to
An invariant derived from the four-force is:
from the above result.
Thermodynamics
Four-heat flux
The four-heat flux vector field is essentially similar to the 3D heat flux vector field q in the local frame of the fluid:
where T is absolute temperature and k is thermal conductivity.
Four-baryon number flux
The flux of baryons is:
where is the number density of baryons in the local rest frame of the baryon fluid (positive values for baryons, negative for antibaryons), and the four-velocity field of the fluid is as above.
Four-entropy
The four-entropy vector is defined by:
where is the entropy per baryon, and the absolute temperature, in the local rest frame of the fluid.
Electromagnetism
Examples of four-vectors in electromagnetism include the following.
Four-current
The electromagnetic four-current (or more correctly a four-current density) is defined by
formed from the current density j and charge density ρ.
Four-potential
The electromagnetic four-potential (or more correctly a four-EM vector potential) is defined by
formed from the vector potential and the scalar potential .
The four-potential is not uniquely determined, because it depends on a choice of gauge.
The four-potential enters the wave equation for the electromagnetic field. In vacuum the potential satisfies the source-free wave equation, while with a four-current source and the Lorenz gauge condition the wave equation acquires a source term.
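A sketch of the standard equations in SI units, with □ = ∂ν∂ν the d'Alembert operator defined earlier; the Lorenz gauge condition is assumed in both cases:

```latex
\partial_{\mu} A^{\mu} = 0, \qquad
\Box A^{\mu} = 0 \ \text{(vacuum)}, \qquad
\Box A^{\mu} = \mu_0 J^{\mu} \ \text{(with a four-current source)}
```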
Waves
Four-frequency
A photonic plane wave can be described by the four-frequency, defined as
where is the frequency of the wave and is a unit vector in the travel direction of the wave. Now:
so the four-frequency of a photon is always a null vector.
Four-wavevector
The quantities reciprocal to time and space are the angular frequency and angular wave vector , respectively. They form the components of the four-wavevector or wave four-vector:
The wave four-vector has coherent derived unit of reciprocal meters in the SI.
A wave packet of nearly monochromatic light can be described by:
The de Broglie relations then showed that the four-wavevector applies to matter waves as well as to light waves:
yielding and , where is the Planck constant divided by .
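A compact sketch of these relations, writing ħ = h/2π for the reduced Planck constant:

```latex
\mathbf{K} = \left(\frac{\omega}{c},\ \mathbf{k}\right), \qquad
\mathbf{P} = \hbar\,\mathbf{K}
\;\Longrightarrow\;
E = \hbar\omega, \quad \mathbf{p} = \hbar\mathbf{k}
```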
The square of the norm is:
and by the de Broglie relation:
we have the matter wave analogue of the energy–momentum relation:
Note that for massless particles, in which case , we have:
or . This is consistent with the photon case above, for which the 3-wavevector has modulus in the direction of wave propagation defined by the unit vector .
Quantum theory
Four-probability current
In quantum mechanics, the four-probability current or probability four-current is analogous to the electromagnetic four-current:
where is the probability density function corresponding to the time component, and is the probability current vector. In non-relativistic quantum mechanics, this current is always well defined because the expressions for density and current are positive definite and can admit a probability interpretation. In relativistic quantum mechanics and quantum field theory, it is not always possible to find a current, particularly when interactions are involved.
Replacing the energy by the energy operator and the momentum by the momentum operator in the four-momentum, one obtains the four-momentum operator, used in relativistic wave equations.
Four-spin
The four-spin of a particle is defined in the rest frame of a particle to be
where is the spin pseudovector. In quantum mechanics, not all three components of this vector are simultaneously measurable; only one component is. The timelike component is zero in the particle's rest frame, but not in any other frame. This component can be found from an appropriate Lorentz transformation.
The norm squared is the (negative of the) magnitude squared of the spin, and according to quantum mechanics we have
This value is observable and quantized, with the spin quantum number (not the magnitude of the spin vector).
Other formulations
Four-vectors in the algebra of physical space
A four-vector A can also be defined using the Pauli matrices as a basis, again in various equivalent notations:
or explicitly:
and in this formulation, the four-vector is represented as a Hermitian matrix (the matrix transpose and complex conjugate of the matrix leaves it unchanged), rather than a real-valued column or row vector. The determinant of the matrix is the modulus of the four-vector, so the determinant is an invariant:
This idea of using the Pauli matrices as basis vectors is employed in the algebra of physical space, an example of a Clifford algebra.
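A sketch of this representation, writing σ₀ for the 2×2 identity and σ₁, σ₂, σ₃ for the Pauli matrices:

```latex
A = A^{0}\sigma_{0} + A^{1}\sigma_{1} + A^{2}\sigma_{2} + A^{3}\sigma_{3}
= \begin{pmatrix} A^{0} + A^{3} & A^{1} - iA^{2} \\ A^{1} + iA^{2} & A^{0} - A^{3} \end{pmatrix},
\qquad
\det A = (A^{0})^{2} - (A^{1})^{2} - (A^{2})^{2} - (A^{3})^{2}
```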
Four-vectors in spacetime algebra
In spacetime algebra, another example of a Clifford algebra, the gamma matrices can also form a basis. (They are also called the Dirac matrices, owing to their appearance in the Dirac equation.) There is more than one way to express the gamma matrices, detailed in that main article.
The Feynman slash notation is a shorthand for a four-vector A contracted with the gamma matrices:
The four-momentum contracted with the gamma matrices is an important case in relativistic quantum mechanics and relativistic quantum field theory. In the Dirac equation and other relativistic wave equations, terms of the form:
appear, in which the energy and momentum components are replaced by their respective operators.
| Physical sciences | Theory of relativity | Physics |
226425 | https://en.wikipedia.org/wiki/Lampriformes | Lampriformes | Lampriformes is an order of ray-finned fish. Members are collectively called lamprids (which is more properly used for the Lampridae) or lampriforms, and unite such open-ocean and partially deep-sea Teleostei as the crestfishes, oarfish, opahs, and ribbonfishes. A synonym for this order is Allotriognathi, while an often-seen, but apparently incorrect, spelling variant is Lampridiformes. They contain seven extant families which are generally small but highly distinct, and a mere 12 lampriform genera with some 20 species altogether are recognized. They are the only extant members of the superorder Lamprimorpha, which was formerly diverse throughout much of the Late Cretaceous.
The scientific name literally means "shaped (like the) bright (one)", as "lampr-", meaning bright, comes from lampris, the generic name for the opah. In contrast, most other living lampriforms are actually ribbon-like and not very similar to the disc-shaped opahs in habitus. They are, however, quite distinctly united by their anatomy, and the group's phylogeny, as well as the most ancient fossils of this order, suggests that the original lampriform was rather "opah-shaped". The scientific name is a combination of Lampris (the type genus) + the standard fish order suffix "-formes". It ultimately derives from Ancient Greek lamprós (λαμπρός, "bright") + Latin forma ("external form"), the former in reference to the brilliant coloration of opahs.
Description and ecology
These oceanic fishes are pelagic feeders that stay well above the sea floor, and normally occur in waters 100–1000 m deep. They are typically brightly coloured as adults, often with brilliant crimson fins. Lampriforms have highly variable body forms, but they are generally laterally compressed. Some are rounded in lateral view, while others are very elongated. The former are termed bathysomes—"deep-bodies", from Ancient Greek bathýs (βᾶθύς) "deep" + sōma (σῶμα) "body"—and the latter taeniosomes—"ribbon-bodies", Greek tainía (ταινία) "ribbon". They vary greatly in size, too, ranging from less than in the sailfin moonfishes (Veliferidae) to Regalecus glesne, the longest of all living bony fishes, which may reach in length.
The lampriforms have 84 to 96 total vertebrae; an orbitosphenoid bone is present in some members of this order. Their premaxilla completely excludes the maxilla from the gape, but the jaws are highly protrusible, nonetheless. The upper jaw's protrusion is achieved in a unique way: the maxilla, instead of being ligamentously attached to the ethmoid and palatine, slides in and out with the highly protractile premaxilla. The pelvic fins have up to 17 rays and are placed rather far toward the front of the animal, but they can be missing entirely. The dorsal fin is long, and tends to extend along most of the length of the body. Fin spines are absent in all. Some have a physoclistous gas bladder, while others have none. They either have tiny scales or naked skin.
Systematics and evolution
The Lampriformes are anatomically similar to some Acanthopterygii at first glance, but more detailed studies reveal they are not as advanced, and many authors assign them to a basal position inside the advanced spiny-rayed Teleostei clade called Acanthomorpha, as monotypic superorder Lampridiomorpha. Unlike their presumed relatives, however, they lack fin spines, and other authors have considered them to form a lineage just outside the Acanthomorpha, and the sister taxon of the Myctophiformes. Molecular data also support the view that the Lampriformes are close to the advanced Teleostei. But the data do not agree on their exact relationships, and the Myctophiformes are also inferred to be close to the Protacanthopterygii, one of the core groups of moderately advanced teleosts. As modern taxonomy tries to avoid a profusion of small taxa, and the delimitation of the Euteleostei (Protacanthopterygii sensu stricto and their allies) versus Acanthopterygii remains uncertain, the systematics and taxonomy of the Lampriformes among the teleosts are in need of further study.
The lampriforms diverged from other teleosts in the Cretaceous, perhaps 80 million years ago (Mya) or slightly more, considering that the oldest-known lampriform, Nardovelifer, dates from the late Campanian and is already clearly assignable to the present order. The basal lampriforms were bathysomes, while the taeniosome body shape is apomorphic and seems to have evolved only once. The order underwent its main radiation in the Paleocene; the opah-like Turkmenidae were a family of lampriforms that thrived at that time but went extinct around the start of the Neogene, about 23 Mya. Other fossil Lampriformes are Bajaichthys, Palaeocentrotus, and Veronavelifer.
Classification
The order is occasionally divided into the Bathysomi and the Taeniosomi. The former are a paraphyletic assemblage, thus effectively synonymous with the entire order, while the latter can be considered a valid suborder. Including fossil taxa, the classification of the Lampriformes in phylogenetic sequence, with the number of living genera and species, can thus be given as:
Basal and incertae sedis
Genus Bathysoma (fossil)
Genus Nardovelifer (fossil)
Genus Palaeocentrotus (fossil)
Genus Whitephippus (fossil)
Family Turkmenidae (fossil)
Family Veliferidae — sailfin moonfishes (two genera, six species)
Family Lampridae — opahs (one genus, two species)
Suborder Taeniosomi
Family Lophotidae — crestfishes (two genera, three species)
Family Radiicephalidae — tapertail (monotypic)
Family Trachipteridae — ribbonfishes (three genera, 10 species)
Family Regalecidae — oarfishes (two genera, three species)
Timeline of genera
| Biology and health sciences | Acanthomorpha | Animals |
226631 | https://en.wikipedia.org/wiki/Logistic%20regression | Logistic regression | In statistics, the logistic model (or logit model) is a statistical model that models the log-odds of an event as a linear combination of one or more independent variables. In regression analysis, logistic regression (or logit regression) estimates the parameters of a logistic model (the coefficients in the linear or non linear combinations). In binary logistic regression there is a single binary dependent variable, coded by an indicator variable, where the two values are labeled "0" and "1", while the independent variables can each be a binary variable (two classes, coded by an indicator variable) or a continuous variable (any real value). The corresponding probability of the value labeled "1" can vary between 0 (certainly the value "0") and 1 (certainly the value "1"), hence the labeling; the function that converts log-odds to probability is the logistic function, hence the name. The unit of measurement for the log-odds scale is called a logit, from logistic unit, hence the alternative names. See and for formal mathematics, and for a worked example.
Binary variables are widely used in statistics to model the probability of a certain class or event taking place, such as the probability of a team winning, of a patient being healthy, etc. (see ), and the logistic model has been the most commonly used model for binary regression since about 1970. Binary variables can be generalized to categorical variables when there are more than two possible values (e.g. whether an image is of a cat, dog, lion, etc.), and the binary logistic regression generalized to multinomial logistic regression. If the multiple categories are ordered, one can use the ordinal logistic regression (for example the proportional odds ordinal logistic model). See for further extensions. The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier.
Analogous linear models for binary variables with a different sigmoid function instead of the logistic function (to convert the linear combination to a probability) can also be used, most notably the probit model; see . The defining characteristic of the logistic model is that increasing one of the independent variables multiplicatively scales the odds of the given outcome at a constant rate, with each independent variable having its own parameter; for a binary dependent variable this generalizes the odds ratio. More abstractly, the logistic function is the natural parameter for the Bernoulli distribution, and in this sense is the "simplest" way to convert a real number to a probability. In particular, it maximizes entropy (minimizes added information), and in this sense makes the fewest assumptions of the data being modeled; see .
The parameters of a logistic regression are most commonly estimated by maximum-likelihood estimation (MLE). This does not have a closed-form expression, unlike linear least squares; see . Logistic regression by MLE plays a similarly basic role for binary or categorical responses as linear regression by ordinary least squares (OLS) plays for scalar responses: it is a simple, well-analyzed baseline model; see for discussion. The logistic regression as a general statistical model was originally developed and popularized primarily by Joseph Berkson, beginning in 1944, when he coined "logit"; see .
Applications
General
Logistic regression is used in various fields, including machine learning, most medical fields, and social sciences. For example, the Trauma and Injury Severity Score (TRISS), which is widely used to predict mortality in injured patients, was originally developed by Boyd using logistic regression. Many other medical scales used to assess severity of a patient have been developed using logistic regression. Logistic regression may be used to predict the risk of developing a given disease (e.g. diabetes; coronary heart disease), based on observed characteristics of the patient (age, sex, body mass index, results of various blood tests, etc.). Another example might be to predict whether a Nepalese voter will vote Nepali Congress or Communist Party of Nepal or Any Other Party, based on age, income, sex, race, state of residence, votes in previous elections, etc. The technique can also be used in engineering, especially for predicting the probability of failure of a given process, system or product. It is also used in marketing applications such as prediction of a customer's propensity to purchase a product or halt a subscription, etc. In economics, it can be used to predict the likelihood of a person ending up in the labor force, and a business application would be to predict the likelihood of a homeowner defaulting on a mortgage. Conditional random fields, an extension of logistic regression to sequential data, are used in natural language processing. Disaster planners and engineers rely on these models to predict decisions taken by householders or building occupants in small-scale and large-scale evacuations, such as building fires, wildfires, and hurricanes, among others. These models help in the development of reliable disaster management plans and safer designs for the built environment.
Supervised machine learning
Logistic regression is a supervised machine learning algorithm widely used for binary classification tasks, such as identifying whether an email is spam or not and diagnosing diseases by assessing the presence or absence of specific conditions based on patient test results. This approach utilizes the logistic (or sigmoid) function to transform a linear combination of input features into a probability value ranging between 0 and 1. This probability indicates the likelihood that a given input corresponds to one of two predefined categories. The essential mechanism of logistic regression is grounded in the logistic function's ability to model the probability of binary outcomes accurately. With its distinctive S-shaped curve, the logistic function effectively maps any real-valued number to a value within the 0 to 1 interval. This feature renders it particularly suitable for binary classification tasks, such as sorting emails into "spam" or "not spam". By calculating the probability that the dependent variable will be categorized into a specific group, logistic regression provides a probabilistic framework that supports informed decision-making.
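A minimal Python sketch of this mechanism; the coefficient and feature values below are illustrative placeholders, not taken from any dataset:

```python
import numpy as np

def sigmoid(t):
    # Logistic function: maps any real-valued log-odds to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-t))

# Hypothetical coefficients and input features, for illustration only
beta = np.array([-1.5, 0.8, 2.0])   # intercept, beta_1, beta_2
x = np.array([1.0, 0.5, 0.3])       # x_0 = 1 pairs with the intercept
p = sigmoid(beta @ x)               # probability that the input belongs to class "1"
label = int(p > 0.5)                # a simple cutoff turns the probability into a class
```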
Example
Problem
As a simple example, we can use a logistic regression with one explanatory variable and two categories to answer the following question:
A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?
The reason for using logistic regression for this problem is that the values of the dependent variable, pass and fail, while represented by "1" and "0", are not cardinal numbers. If the problem was changed so that pass/fail was replaced with the grade 0–100 (cardinal numbers), then simple regression analysis could be used.
The table shows the number of hours each student spent studying, and whether they passed (1) or failed (0).
We wish to fit a logistic function to the data consisting of the hours studied (xk) and the outcome of the test (yk = 1 for pass, 0 for fail). The data points are indexed by the subscript k, which runs from 1 to 20. The x variable is called the "explanatory variable", and the y variable is called the "categorical variable" consisting of two categories: "pass" or "fail", corresponding to the categorical values 1 and 0 respectively.
Model
The logistic function is of the form:
where μ is a location parameter (the midpoint of the curve, where ) and s is a scale parameter. This expression may be rewritten as:
where and is known as the intercept (it is the vertical intercept or y-intercept of the line ), and (inverse scale parameter or rate parameter): these are the y-intercept and slope of the log-odds as a function of x. Conversely, and .
Remark: This model is actually an oversimplification, since it assumes that everybody will eventually pass if they study for long enough (the limiting probability is 1). To make the model more realistic, the limiting value could itself be treated as a parameter.
Fit
The usual measure of goodness of fit for a logistic regression uses logistic loss (or log loss), the negative log-likelihood. For a given xk and yk, write . The are the probabilities that the corresponding will equal one and are the probabilities that they will be zero (see Bernoulli distribution). We wish to find the values of and which give the "best fit" to the data. In the case of linear regression, the sum of the squared deviations of the fit from the data points (yk), the squared error loss, is taken as a measure of the goodness of fit, and the best fit is obtained when that function is minimized.
The log loss for the k-th point is:
The log loss can be interpreted as the "surprisal" of the actual outcome relative to the prediction , and is a measure of information content. Log loss is always greater than or equal to 0, equals 0 only in case of a perfect prediction (i.e., when and , or and ), and approaches infinity as the prediction gets worse (i.e., when and or and ), meaning the actual outcome is "more surprising". Since the value of the logistic function is always strictly between zero and one, the log loss is always greater than zero and less than infinity. Unlike in a linear regression, where the model can have zero loss at a point by passing through a data point (and zero loss overall if all points are on a line), in a logistic regression it is not possible to have zero loss at any points, since is either 0 or 1, but .
These can be combined into a single expression:
This expression is more formally known as the cross-entropy of the predicted distribution from the actual distribution , as probability distributions on the two-element space of (pass, fail).
The sum of these, the total loss, is the overall negative log-likelihood , and the best fit is obtained for those choices of and for which is minimized.
Alternatively, instead of minimizing the loss, one can maximize its negative, the (positive) log-likelihood:
or equivalently maximize the likelihood function itself, which is the probability that the given data set is produced by a particular logistic function:
This method is known as maximum likelihood estimation.
Parameter estimation
Since ℓ is nonlinear in and , determining their optimum values will require numerical methods. One method of maximizing ℓ is to require the derivatives of ℓ with respect to and to be zero:
and the maximization procedure can be accomplished by solving the above two equations for and , which, again, will generally require the use of numerical methods.
The values of and which maximize ℓ and L using the above data are found to be:
which yields a value for μ and s of:
Predictions
The and coefficients may be entered into the logistic regression equation to estimate the probability of passing the exam.
For example, for a student who studies 2 hours, entering the value into the equation gives the estimated probability of passing the exam of 0.25:
Similarly, for a student who studies 4 hours, the estimated probability of passing the exam is 0.87:
This table shows the estimated probability of passing the exam for several values of hours studying.
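A Python sketch of the whole procedure under stated assumptions: the hours/pass data below are placeholders standing in for the article's table, so the fitted coefficients and predicted probabilities will differ somewhat from the quoted values of 0.25 and 0.87.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder data: hours studied and pass (1) / fail (0) for a small group of students
hours  = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
passed = np.array([0,   0,   0,   0,   1,   0,   1,   1,   1,   1  ])

def neg_log_likelihood(beta):
    b0, b1 = beta
    t = b0 + b1 * hours                  # log-odds for each student
    p = 1.0 / (1.0 + np.exp(-t))         # predicted probability of passing
    # total log loss (negative log-likelihood), summed over students
    return -np.sum(passed * np.log(p) + (1 - passed) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=np.zeros(2))   # maximum-likelihood estimate
b0, b1 = fit.x
for h in (2.0, 4.0):
    print(f"{h} hours -> P(pass) = {1.0 / (1.0 + np.exp(-(b0 + b1 * h))):.2f}")
```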
Model evaluation
The logistic regression analysis gives the following output.
By the Wald test, the output indicates that hours studying is significantly associated with the probability of passing the exam (). Rather than the Wald method, the recommended method to calculate the p-value for logistic regression is the likelihood-ratio test (LRT), which for these data gives (see below).
Generalizations
This simple model is an example of binary logistic regression, and has one explanatory variable and a binary categorical variable which can assume one of two categorical values. Multinomial logistic regression is the generalization of binary logistic regression to include any number of explanatory variables and any number of categories.
Background
Definition of the logistic function
An explanation of logistic regression can begin with an explanation of the standard logistic function. The logistic function is a sigmoid function, which takes any real input , and outputs a value between zero and one. For the logit, this is interpreted as taking input log-odds and having output probability. The standard logistic function is defined as follows:
A graph of the logistic function on the t-interval (−6,6) is shown in Figure 1.
Let us assume that is a linear function of a single explanatory variable (the case where is a linear combination of multiple explanatory variables is treated similarly). We can then express as follows:
And the general logistic function can now be written as:
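A sketch of the explicit form, using the intercept β0 and slope β1 of the log-odds defined in the example above:

```latex
\sigma(t) = \frac{1}{1 + e^{-t}}, \qquad
t = \beta_0 + \beta_1 x, \qquad
p(x) = \sigma(\beta_0 + \beta_1 x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}
```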
In the logistic model, is interpreted as the probability of the dependent variable equaling a success/case rather than a failure/non-case. It is clear that the response variables are not identically distributed: differs from one data point to another, though they are independent given design matrix and shared parameters .
Definition of the inverse of the logistic function
We can now define the logit (log odds) function as the inverse of the standard logistic function. It is easy to see that it satisfies:
and equivalently, after exponentiating both sides we have the odds:
Interpretation of these terms
In the above equations, the terms are as follows:
is the logit function. The equation for illustrates that the logit (i.e., log-odds or natural logarithm of the odds) is equivalent to the linear regression expression.
denotes the natural logarithm.
is the probability that the dependent variable equals a case, given some linear combination of the predictors. The formula for illustrates that the probability of the dependent variable equaling a case is equal to the value of the logistic function of the linear regression expression. This is important in that it shows that the value of the linear regression expression can vary from negative to positive infinity and yet, after transformation, the resulting expression for the probability ranges between 0 and 1.
is the intercept from the linear regression equation (the value of the criterion when the predictor is equal to zero).
is the regression coefficient multiplied by some value of the predictor.
base e denotes the exponential function.
Definition of the odds
The odds of the dependent variable equaling a case (given some linear combination of the predictors) is equivalent to the exponential function of the linear regression expression. This illustrates how the logit serves as a link function between the probability and the linear regression expression. Given that the logit ranges between negative and positive infinity, it provides an adequate criterion upon which to conduct linear regression and the logit is easily converted back into the odds.
So we define odds of the dependent variable equaling a case (given some linear combination of the predictors) as follows:
The odds ratio
For a continuous independent variable the odds ratio can be defined as:
This exponential relationship provides an interpretation for : The odds multiply by for every 1-unit increase in x.
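As a sketch, with a single continuous predictor x and the model above:

```latex
\mathrm{odds}(x) = e^{\beta_0 + \beta_1 x}, \qquad
\mathrm{OR} = \frac{\mathrm{odds}(x+1)}{\mathrm{odds}(x)} = e^{\beta_1}
```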
For a binary independent variable the odds ratio is defined as ad/bc, where a, b, c and d are cells in a 2×2 contingency table.
Multiple explanatory variables
If there are multiple explanatory variables, the above expression can be revised to . Then when this is used in the equation relating the log odds of a success to the values of the predictors, the linear regression will be a multiple regression with m explanators; the parameters are all estimated.
Again, the more traditional equations are:
and
where usually .
Definition
A dataset contains N points. Each point i consists of a set of m input variables x1,i ... xm,i (also called independent variables, explanatory variables, predictor variables, features, or attributes), and a binary outcome variable Yi (also known as a dependent variable, response variable, output variable, or class), i.e. it can assume only the two possible values 0 (often meaning "no" or "failure") or 1 (often meaning "yes" or "success"). The goal of logistic regression is to use the dataset to create a predictive model of the outcome variable.
As in linear regression, the outcome variables Yi are assumed to depend on the explanatory variables x1,i ... xm,i.
Explanatory variables
The explanatory variables may be of any type: real-valued, binary, categorical, etc. The main distinction is between continuous variables and discrete variables.
(Discrete variables referring to more than two possible choices are typically coded using dummy variables (or indicator variables), that is, separate explanatory variables taking the value 0 or 1 are created for each possible value of the discrete variable, with a 1 meaning "variable does have the given value" and a 0 meaning "variable does not have that value".)
Outcome variables
Formally, the outcomes Yi are described as being Bernoulli-distributed data, where each outcome is determined by an unobserved probability pi that is specific to the outcome at hand, but related to the explanatory variables. This can be expressed in any of the following equivalent forms:
The meanings of these four lines are:
The first line expresses the probability distribution of each Yi : conditioned on the explanatory variables, it follows a Bernoulli distribution with parameters pi, the probability of the outcome of 1 for trial i. As noted above, each separate trial has its own probability of success, just as each trial has its own explanatory variables. The probability of success pi is not observed, only the outcome of an individual Bernoulli trial using that probability.
The second line expresses the fact that the expected value of each Yi is equal to the probability of success pi, which is a general property of the Bernoulli distribution. In other words, if we run a large number of Bernoulli trials using the same probability of success pi, then take the average of all the 1 and 0 outcomes, then the result would be close to pi. This is because doing an average this way simply computes the proportion of successes seen, which we expect to converge to the underlying probability of success.
The third line writes out the probability mass function of the Bernoulli distribution, specifying the probability of seeing each of the two possible outcomes.
The fourth line is another way of writing the probability mass function, which avoids having to write separate cases and is more convenient for certain types of calculations. This relies on the fact that Yi can take only the value 0 or 1. In each case, one of the exponents will be 1, "choosing" the value under it, while the other is 0, "canceling out" the value under it. Hence, the outcome is either pi or 1 − pi, as in the previous line.
Linear predictor function
The basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability pi using a linear predictor function, i.e. a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials. The linear predictor function for a particular data point i is written as:
where are regression coefficients indicating the relative effect of a particular explanatory variable on the outcome.
The model is usually put into a more compact form as follows:
The regression coefficients β0, β1, ..., βm are grouped into a single vector β of size m + 1.
For each data point i, an additional explanatory pseudo-variable x0,i is added, with a fixed value of 1, corresponding to the intercept coefficient β0.
The resulting explanatory variables x0,i, x1,i, ..., xm,i are then grouped into a single vector Xi of size m + 1.
This makes it possible to write the linear predictor function as follows:
using the notation for a dot product between two vectors.
Many explanatory variables, two categories
The above example of binary logistic regression on one explanatory variable can be generalized to binary logistic regression on any number of explanatory variables x1, x2,... and any number of categorical values .
To begin with, we may consider a logistic model with M explanatory variables, x1, x2 ... xM and, as in the example above, two categorical values (y = 0 and 1). For the simple binary logistic regression model, we assumed a linear relationship between the predictor variable and the log-odds (also called logit) of the event that . This linear relationship may be extended to the case of M explanatory variables:
where t is the log-odds and are parameters of the model. An additional generalization has been introduced in which the base of the model (b) is not restricted to Euler's number e. In most applications, the base of the logarithm is taken to be e. However, in some cases it can be easier to communicate results by working in base 2 or base 10.
For a more compact notation, we will specify the explanatory variables and the β coefficients as -dimensional vectors:
with an added explanatory variable x0 =1. The logit may now be written as:
Solving for the probability p that yields:
where is the sigmoid function with base . The above formula shows that once the are fixed, we can easily compute either the log-odds that for a given observation, or the probability that for a given observation. The main use-case of a logistic model is to be given an observation , and estimate the probability that . The optimum beta coefficients may again be found by maximizing the log-likelihood. For K measurements, defining as the explanatory vector of the k-th measurement, and as the categorical outcome of that measurement, the log likelihood may be written in a form very similar to the simple case above:
As in the simple example above, finding the optimum β parameters will require numerical methods. One useful technique is to equate the derivatives of the log likelihood with respect to each of the β parameters to zero yielding a set of equations which will hold at the maximum of the log likelihood:
where xmk is the value of the xm explanatory variable from the k-th measurement.
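A numerical sketch of this procedure in Python, using plain gradient ascent on the log-likelihood rather than any particular library routine; the synthetic data, learning rate, and step count are illustrative choices.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, n_steps=5000):
    """Maximum-likelihood fit of a binary logistic regression by gradient ascent.

    X : (K, M) array of explanatory variables; y : (K,) array of 0/1 outcomes.
    A column of ones is prepended so that beta[0] plays the role of the intercept.
    """
    X = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        gradient = X.T @ (y - p)              # derivative of the log-likelihood in beta
        beta += lr * gradient / len(y)        # step towards the maximum
    return beta

# Synthetic illustration: data generated from known coefficients (0.5, 1.5, -1.0)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
true_logodds = 0.5 + 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-true_logodds))).astype(float)
print(fit_logistic(X, y))   # should land near the generating coefficients
```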
Consider an example with explanatory variables, , and coefficients , , and which have been determined by the above method. To be concrete, the model is:
where p is the probability of the event that . This can be interpreted as follows:
is the y-intercept. It is the log-odds of the event that , when the predictors . By exponentiating, we can see that when the odds of the event that are 1-to-1000, or . Similarly, the probability of the event that when can be computed as
means that increasing by 1 increases the log-odds by . So if increases by 1, the odds that increase by a factor of . The probability of has also increased, but it has not increased by as much as the odds have increased.
means that increasing by 1 increases the log-odds by . So if increases by 1, the odds that increase by a factor of . Note how the effect of on the log-odds is twice as great as the effect of , but the effect on the odds is 10 times greater. The effect on the probability of , however, is not as much as 10 times greater; it is only the effect on the odds that is 10 times greater.
Multinomial logistic regression: Many explanatory variables and many categories
In the above cases of two categories (binomial logistic regression), the categories were indexed by "0" and "1", and we had two probabilities: The probability that the outcome was in category 1 was given by and the probability that the outcome was in category 0 was given by . The sum of these probabilities equals 1, which must be true, since "0" and "1" are the only possible categories in this setup.
In general, if we have explanatory variables (including x0) and categories, we will need separate probabilities, one for each category, indexed by n, which describe the probability that the categorical outcome y will be in category y=n, conditional on the vector of covariates x. The sum of these probabilities over all categories must equal 1. Using the mathematically convenient base e, these probabilities are:
for
Each of the probabilities except will have its own set of regression coefficients . It can be seen that, as required, the sum of the over all categories n is 1. The selection of to be defined in terms of the other probabilities is artificial. Any of the probabilities could have been selected to be so defined. This special value of n is termed the "pivot index", and the log-odds (tn) are expressed in terms of the pivot probability and are again expressed as a linear combination of the explanatory variables:
Note also that for the simple case of , the two-category case is recovered, with and .
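A sketch of one common convention, taking category 0 as the pivot; the vectors β_n denote the per-category coefficient vectors assumed in the text:

```latex
\Pr(Y = n \mid \mathbf{x}) = \frac{e^{\boldsymbol{\beta}_n \cdot \mathbf{x}}}{1 + \sum_{u=1}^{N} e^{\boldsymbol{\beta}_u \cdot \mathbf{x}}}
\quad (n = 1, \dots, N),
\qquad
\Pr(Y = 0 \mid \mathbf{x}) = \frac{1}{1 + \sum_{u=1}^{N} e^{\boldsymbol{\beta}_u \cdot \mathbf{x}}},
\qquad
t_n = \ln\frac{\Pr(Y = n \mid \mathbf{x})}{\Pr(Y = 0 \mid \mathbf{x})} = \boldsymbol{\beta}_n \cdot \mathbf{x}
```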
The log-likelihood that a particular set of K measurements or data points will be generated by the above probabilities can now be calculated. Indexing each measurement by k, let the k-th set of measured explanatory variables be denoted by and their categorical outcomes be denoted by which can be equal to any integer in [0,N]. The log-likelihood is then:
where is an indicator function which equals 1 if yk = n and zero otherwise. In the case of two explanatory variables, this indicator function was defined as yk when n = 1 and 1-yk when n = 0. This was convenient, but not necessary. Again, the optimum beta coefficients may be found by maximizing the log-likelihood function generally using numerical methods. A possible method of solution is to set the derivatives of the log-likelihood with respect to each beta coefficient equal to zero and solve for the beta coefficients:
where is the m-th coefficient of the vector and is the m-th explanatory variable of the k-th measurement. Once the beta coefficients have been estimated from the data, we will be able to estimate the probability that any subsequent set of explanatory variables will result in any of the possible outcome categories.
Interpretations
There are various equivalent specifications and interpretations of logistic regression, which fit into different types of more general models, and allow different generalizations.
As a generalized linear model
The particular model used by logistic regression, which distinguishes it from standard linear regression and from other types of regression analysis used for binary-valued outcomes, is the way the probability of a particular outcome is linked to the linear predictor function:
Written using the more compact notation described above, this is:
This formulation expresses logistic regression as a type of generalized linear model, which predicts variables with various types of probability distributions by fitting a linear predictor function of the above form to some sort of arbitrary transformation of the expected value of the variable.
The intuition for transforming using the logit function (the natural log of the odds) was explained above. It also has the practical effect of converting the probability (which is bounded to be between 0 and 1) to a variable that ranges over — thereby matching the potential range of the linear prediction function on the right side of the equation.
Both the probabilities pi and the regression coefficients are unobserved, and the means of determining them is not part of the model itself. They are typically determined by some sort of optimization procedure, e.g. maximum likelihood estimation, that finds values that best fit the observed data (i.e. that give the most accurate predictions for the data already observed), usually subject to regularization conditions that seek to exclude unlikely values, e.g. extremely large values for any of the regression coefficients. The use of a regularization condition is equivalent to doing maximum a posteriori (MAP) estimation, an extension of maximum likelihood. (Regularization is most commonly done using a squared regularizing function, which is equivalent to placing a zero-mean Gaussian prior distribution on the coefficients, but other regularizers are also possible.) Whether or not regularization is used, it is usually not possible to find a closed-form solution; instead, an iterative numerical method must be used, such as iteratively reweighted least squares (IRLS) or, more commonly these days, a quasi-Newton method such as the L-BFGS method.
The interpretation of the βj parameter estimates is as the additive effect on the log of the odds for a unit change in the j-th explanatory variable. In the case of a dichotomous explanatory variable such as gender, the exponentiated coefficient is the estimate of the odds of having the outcome for, say, males compared with females.
An equivalent formula uses the inverse of the logit function, which is the logistic function, i.e.:
The formula can also be written as a probability distribution (specifically, using a probability mass function):
As a latent-variable model
The logistic model has an equivalent formulation as a latent-variable model. This formulation is common in the theory of discrete choice models and makes it easier to extend to certain more complicated models with multiple, correlated choices, as well as to compare logistic regression to the closely related probit model.
Imagine that, for each trial i, there is a continuous latent variable Yi* (i.e. an unobserved random variable) that is distributed as follows:
where
i.e. the latent variable can be written directly in terms of the linear predictor function and an additive random error variable that is distributed according to a standard logistic distribution.
Then Yi can be viewed as an indicator for whether this latent variable is positive:
The choice of modeling the error variable specifically with a standard logistic distribution, rather than a general logistic distribution with the location and scale set to arbitrary values, seems restrictive, but in fact, it is not. It must be kept in mind that we can choose the regression coefficients ourselves, and very often can use them to offset changes in the parameters of the error variable's distribution. For example, a logistic error-variable distribution with a non-zero location parameter μ (which sets the mean) is equivalent to a distribution with a zero location parameter, where μ has been added to the intercept coefficient. Both situations produce the same value for Yi* regardless of settings of explanatory variables. Similarly, an arbitrary scale parameter s is equivalent to setting the scale parameter to 1 and then dividing all regression coefficients by s. In the latter case, the resulting value of Yi* will be smaller by a factor of s than in the former case, for all sets of explanatory variables — but critically, it will always remain on the same side of 0, and hence lead to the same Yi choice.
(This predicts that the irrelevancy of the scale parameter may not carry over into more complex models where more than two choices are available.)
It turns out that this formulation is exactly equivalent to the preceding one, phrased in terms of the generalized linear model and without any latent variables. This can be shown as follows, using the fact that the cumulative distribution function (CDF) of the standard logistic distribution is the logistic function, which is the inverse of the logit function, i.e.
Then:
This formulation—which is standard in discrete choice models—makes clear the relationship between logistic regression (the "logit model") and the probit model, which uses an error variable distributed according to a standard normal distribution instead of a standard logistic distribution. Both the logistic and normal distributions are symmetric with a basic unimodal, "bell curve" shape. The only difference is that the logistic distribution has somewhat heavier tails, which means that it is less sensitive to outlying data (and hence somewhat more robust to model mis-specifications or erroneous data).
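A sketch of the equivalence, using the symmetry of the standard logistic distribution and writing F for its cumulative distribution function:

```latex
\Pr(Y_i = 1) = \Pr(Y_i^{\ast} > 0)
= \Pr(\boldsymbol{\beta}\cdot\mathbf{X}_i + \varepsilon > 0)
= \Pr(\varepsilon > -\boldsymbol{\beta}\cdot\mathbf{X}_i)
= 1 - F(-\boldsymbol{\beta}\cdot\mathbf{X}_i)
= F(\boldsymbol{\beta}\cdot\mathbf{X}_i)
= \frac{1}{1 + e^{-\boldsymbol{\beta}\cdot\mathbf{X}_i}}
```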
Two-way latent-variable model
Yet another formulation uses two separate latent variables:
where
where EV1(0,1) is a standard type-1 extreme value distribution: i.e.
Then
This model has a separate latent variable and a separate set of regression coefficients for each possible outcome of the dependent variable. The reason for this separation is that it makes it easy to extend logistic regression to multi-outcome categorical variables, as in the multinomial logit model. In such a model, it is natural to model each possible outcome using a different set of regression coefficients. It is also possible to motivate each of the separate latent variables as the theoretical utility associated with making the associated choice, and thus motivate logistic regression in terms of utility theory. (In terms of utility theory, a rational actor always chooses the choice with the greatest associated utility.) This is the approach taken by economists when formulating discrete choice models, because it both provides a theoretically strong foundation and facilitates intuitions about the model, which in turn makes it easy to consider various sorts of extensions. (See the example below.)
The choice of the type-1 extreme value distribution seems fairly arbitrary, but it makes the mathematics work out, and it may be possible to justify its use through rational choice theory.
It turns out that this model is equivalent to the previous model, although this seems non-obvious, since there are now two sets of regression coefficients and error variables, and the error variables have a different distribution. In fact, this model reduces directly to the previous one with the following substitutions:
An intuition for this comes from the fact that, since we choose based on the maximum of two values, only their difference matters, not the exact values; this effectively removes one degree of freedom. Another critical fact is that the difference of two type-1 extreme-value-distributed variables is a logistic distribution, i.e. . We can demonstrate the equivalence as follows:
Example
As an example, consider a province-level election where the choice is between a right-of-center party, a left-of-center party, and a secessionist party (e.g. the Parti Québécois, which wants Quebec to secede from Canada). We would then use three latent variables, one for each choice. Then, in accordance with utility theory, we can interpret the latent variables as expressing the utility that results from making each of the choices. We can also interpret the regression coefficients as indicating the strength that the associated factor (i.e. explanatory variable) has in contributing to the utility, or more correctly, the amount by which a unit change in an explanatory variable changes the utility of a given choice. A voter might expect that the right-of-center party would lower taxes, especially on rich people. This would give low-income people no benefit, i.e. no change in utility (since they usually don't pay taxes); would cause moderate benefit (i.e. somewhat more money, or moderate utility increase) for middle-income people; would cause significant benefits for high-income people. On the other hand, the left-of-center party might be expected to raise taxes and offset it with increased welfare and other assistance for the lower and middle classes. This would cause significant positive benefit to low-income people, perhaps a weak benefit to middle-income people, and significant negative benefit to high-income people. Finally, the secessionist party would take no direct actions on the economy, but simply secede. A low-income or middle-income voter might expect basically no clear utility gain or loss from this, but a high-income voter might expect negative utility since he/she is likely to own companies, which will have a harder time doing business in such an environment and probably lose money.
These intuitions can be expressed as follows:
This clearly shows that
Separate sets of regression coefficients need to exist for each choice. When phrased in terms of utility, this can be seen very easily. Different choices have different effects on net utility; furthermore, the effects vary in complex ways that depend on the characteristics of each individual, so there need to be separate sets of coefficients for each characteristic, not simply a single extra per-choice characteristic.
Even though income is a continuous variable, its effect on utility is too complex for it to be treated as a single variable. Either it needs to be directly split up into ranges, or higher powers of income need to be added so that polynomial regression on income is effectively done.
As a "log-linear" model
Yet another formulation combines the two-way latent variable formulation above with the original formulation higher up without latent variables, and in the process provides a link to one of the standard formulations of the multinomial logit.
Here, instead of writing the logit of the probabilities pi as a linear predictor, we separate the linear predictor into two, one for each of the two outcomes:
Two separate sets of regression coefficients have been introduced, just as in the two-way latent variable model, and the two equations take a form that writes the logarithm of the associated probability as a linear predictor, with an extra term at the end. This term, as it turns out, serves as the normalizing factor ensuring that the result is a distribution. This can be seen by exponentiating both sides:
In this form it is clear that the purpose of Z is to ensure that the resulting distribution over Yi is in fact a probability distribution, i.e. it sums to 1. This means that Z is simply the sum of all un-normalized probabilities, and by dividing each probability by Z, the probabilities become "normalized". That is:
and the resulting equations are
Or generally:
This shows clearly how to generalize this formulation to more than two outcomes, as in multinomial logit.
This general formulation is exactly the softmax function as in
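As a concrete illustration (not part of the original derivation), the following minimal Python sketch computes the softmax of a vector of linear-predictor scores; the numerical values are hypothetical.

```python
import numpy as np

def softmax(scores):
    """Turn un-normalized linear predictors into a probability distribution."""
    shifted = scores - np.max(scores)      # subtract a constant for numerical stability
    exp_scores = np.exp(shifted)           # un-normalized probabilities
    return exp_scores / exp_scores.sum()   # divide by Z so the probabilities sum to 1

# Two outcomes with hypothetical linear predictors:
print(softmax(np.array([0.3, -1.2])))      # roughly [0.82, 0.18]
```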
To see that this is equivalent to the previous model, note that the above model is overspecified: the two outcome probabilities cannot be independently specified, but must sum to 1, so knowing one automatically determines the other. As a result, the model is nonidentifiable, in that multiple combinations of β0 and β1 will produce the same probabilities for all possible explanatory variables. In fact, it can be seen that adding any constant vector to both of them will produce the same probabilities:
As a result, we can simplify matters, and restore identifiability, by picking an arbitrary value for one of the two vectors. We choose to set β0 = 0. Then,
and so
which shows that this formulation is indeed equivalent to the previous formulation. (As in the two-way latent variable formulation, any settings where will produce equivalent results.)
Most treatments of the multinomial logit model start out either by extending the "log-linear" formulation presented here or the two-way latent variable formulation presented above, since both clearly show the way that the model could be extended to multi-way outcomes. In general, the presentation with latent variables is more common in econometrics and political science, where discrete choice models and utility theory reign, while the "log-linear" formulation here is more common in computer science, e.g. machine learning and natural language processing.
As a single-layer perceptron
The model has an equivalent formulation
This functional form is commonly called a single-layer perceptron or single-layer artificial neural network. A single-layer neural network computes a continuous output instead of a step function. The derivative of pi with respect to X = (x1, ..., xk) is computed from the general form:
where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation. This function is also preferred because its derivative is easily calculated:
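As a minimal sketch (in Python, with hypothetical inputs) of the single-layer formulation and the p(1 − p) derivative identity just mentioned:

```python
import numpy as np

def sigmoid(z):
    """Logistic activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def single_layer_output(beta, x):
    """Single-layer 'perceptron' output: p = sigmoid(beta . x)."""
    return sigmoid(np.dot(beta, x))

def output_gradient(beta, x):
    """Gradient of p with respect to x, using dp/dz = p * (1 - p)."""
    p = single_layer_output(beta, x)
    return p * (1.0 - p) * beta

print(output_gradient(np.array([0.5, -1.0]), np.array([1.0, 2.0])))
```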
In terms of binomial data
A closely related model assumes that each i is associated not with a single Bernoulli trial but with ni independent identically distributed trials, where the observation Yi is the number of successes observed (the sum of the individual Bernoulli-distributed random variables), and hence follows a binomial distribution:
An example of this distribution is the fraction of seeds (pi) that germinate after ni are planted.
In terms of expected values, this model is expressed as follows:
so that
Or equivalently:
This model can be fit using the same sorts of methods as the above more basic model.
Model fitting
Maximum likelihood estimation (MLE)
The regression coefficients are usually estimated using maximum likelihood estimation. Unlike linear regression with normally distributed residuals, it is not possible to find a closed-form expression for the coefficient values that maximize the likelihood function, so an iterative process must be used instead, for example Newton's method. This process begins with a tentative solution, revises it slightly to see if it can be improved, and repeats this revision until no more improvement is made, at which point the process is said to have converged.
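As a rough illustration of the iterative maximization described above, here is a minimal Python sketch that maximizes the Bernoulli log-likelihood numerically with scipy.optimize; the simulated data and the BFGS optimizer are illustrative choices, not a prescribed method.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, X, y):
    """Negative Bernoulli log-likelihood; X contains a leading column of ones."""
    z = X @ beta
    return -np.sum(y * z - np.logaddexp(0.0, z))  # -sum[y*z - log(1 + e^z)]

# Simulated data with true intercept 0.5 and slope 1.5 (hypothetical example):
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x))))
X = np.column_stack([np.ones_like(x), x])

result = minimize(neg_log_likelihood, x0=np.zeros(2), args=(X, y), method="BFGS")
print(result.x)   # estimated (intercept, slope)
```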
In some instances, the model may not reach convergence. Non-convergence of a model indicates that the coefficients are not meaningful because the iterative process was unable to find appropriate solutions. A failure to converge may occur for a number of reasons: having a large ratio of predictors to cases, multicollinearity, sparseness, or complete separation.
Having a large ratio of variables to cases results in an overly conservative Wald statistic (discussed below) and can lead to non-convergence. Regularized logistic regression is specifically intended to be used in this situation.
Multicollinearity refers to unacceptably high correlations between predictors. As multicollinearity increases, coefficients remain unbiased but standard errors increase and the likelihood of model convergence decreases. To detect multicollinearity amongst the predictors, one can conduct a linear regression analysis with the predictors of interest for the sole purpose of examining the tolerance statistic used to assess whether multicollinearity is unacceptably high.
Sparseness in the data refers to having a large proportion of empty cells (cells with zero counts). Zero cell counts are particularly problematic with categorical predictors. With continuous predictors, the model can infer values for the zero cell counts, but this is not the case with categorical predictors. The model will not converge with zero cell counts for categorical predictors because the natural logarithm of zero is an undefined value so that the final solution to the model cannot be reached. To remedy this problem, researchers may collapse categories in a theoretically meaningful way or add a constant to all cells.
Another numerical problem that may lead to a lack of convergence is complete separation, which refers to the instance in which the predictors perfectly predict the criterion – all cases are accurately classified and the likelihood maximized with infinite coefficients. In such instances, one should re-examine the data, as there may be some kind of error.
One can also take semi-parametric or non-parametric approaches, e.g., via local-likelihood or nonparametric quasi-likelihood methods, which avoid assumptions of a parametric form for the index function and are robust to the choice of the link function (e.g., probit or logit).
Iteratively reweighted least squares (IRLS)
Binary logistic regression (with responses coded 0 or 1) can, for example, be calculated using iteratively reweighted least squares (IRLS), which is equivalent to maximizing the log-likelihood of a Bernoulli distributed process using Newton's method. If the problem is written in vector-matrix form, with parameters , explanatory variables and expected value of the Bernoulli distribution , the parameters can be found using the following iterative algorithm:
where the terms are, respectively, a diagonal weighting matrix, the vector of expected values, the regressor matrix, and the vector of response variables. More details can be found in the literature.
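As an illustrative NumPy sketch of the IRLS/Newton update (variable names are chosen for readability and are not taken from the text):

```python
import numpy as np

def irls_logistic(X, y, n_iter=25, tol=1e-10):
    """Fit binary logistic regression by iteratively reweighted least squares.
    X is the regressor matrix, y the 0/1 response vector."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))   # expected values of the Bernoulli responses
        W = np.diag(mu * (1.0 - mu))           # diagonal weighting matrix
        # Newton/IRLS step: beta <- beta + (X' W X)^(-1) X' (y - mu)
        step = np.linalg.solve(X.T @ W @ X, X.T @ (y - mu))
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```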
Bayesian
In a Bayesian statistics context, prior distributions are normally placed on the regression coefficients, for example in the form of Gaussian distributions. There is no conjugate prior of the likelihood function in logistic regression. When Bayesian inference was performed analytically, this made the posterior distribution difficult to calculate except in very low dimensions. Now, though, automatic software such as OpenBUGS, JAGS, PyMC, Stan or Turing.jl allows these posteriors to be computed using simulation, so lack of conjugacy is not a concern. However, when the sample size or the number of parameters is large, full Bayesian simulation can be slow, and people often use approximate methods such as variational Bayesian methods and expectation propagation.
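For illustration only, here is a minimal random-walk Metropolis sampler in NumPy with independent Gaussian priors on the coefficients; in practice one would typically use the dedicated software named above, and the prior standard deviation and step size shown are arbitrary assumptions.

```python
import numpy as np

def log_posterior(beta, X, y, prior_sd=10.0):
    """Bernoulli log-likelihood plus independent Gaussian log-priors (up to a constant)."""
    z = X @ beta
    log_lik = np.sum(y * z - np.logaddexp(0.0, z))
    log_prior = -0.5 * np.sum((beta / prior_sd) ** 2)
    return log_lik + log_prior

def metropolis_sample(X, y, n_samples=5000, step=0.1, seed=0):
    """Random-walk Metropolis sampling of the posterior over the coefficients."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    current = log_posterior(beta, X, y)
    draws = []
    for _ in range(n_samples):
        proposal = beta + step * rng.normal(size=beta.shape)
        candidate = log_posterior(proposal, X, y)
        if np.log(rng.uniform()) < candidate - current:   # accept with prob min(1, ratio)
            beta, current = proposal, candidate
        draws.append(beta.copy())
    return np.array(draws)
```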
"Rule of ten"
Widely used, the "one in ten rule", states that logistic regression models give stable values for the explanatory variables if based on a minimum of about 10 events per explanatory variable (EPV); where event denotes the cases belonging to the less frequent category in the dependent variable. Thus a study designed to use explanatory variables for an event (e.g. myocardial infarction) expected to occur in a proportion of participants in the study will require a total of participants. However, there is considerable debate about the reliability of this rule, which is based on simulation studies and lacks a secure theoretical underpinning. According to some authors the rule is overly conservative in some circumstances, with the authors stating, "If we (somewhat subjectively) regard confidence interval coverage less than 93 percent, type I error greater than 7 percent, or relative bias greater than 15 percent as problematic, our results indicate that problems are fairly frequent with 2–4 EPV, uncommon with 5–9 EPV, and still observed with 10–16 EPV. The worst instances of each problem were not severe with 5–9 EPV and usually comparable to those with 10–16 EPV".
Others have found results that are not consistent with the above, using different criteria. A useful criterion is whether the fitted model will be expected to achieve the same predictive discrimination in a new sample as it appeared to achieve in the model development sample. For that criterion, 20 events per candidate variable may be required. Also, one can argue that 96 observations are needed only to estimate the model's intercept precisely enough that the margin of error in predicted probabilities is ±0.1 with a 0.95 confidence level.
Error and significance of fit
Deviance and likelihood ratio test ─ a simple case
In any fitting procedure, the addition of another fitting parameter to a model (e.g. the beta parameters in a logistic regression model) will almost always improve the ability of the model to predict the measured outcomes. This will be true even if the additional term has no predictive value, since the model will simply be "overfitting" to the noise in the data. The question arises as to whether the improvement gained by the addition of another fitting parameter is significant enough to recommend the inclusion of the additional term, or whether the improvement is simply that which may be expected from overfitting.
In short, for logistic regression, a statistic known as the deviance is defined which is a measure of the error between the logistic model fit and the outcome data. In the limit of a large number of data points, the deviance is chi-squared distributed, which allows a chi-squared test to be implemented in order to determine the significance of the explanatory variables.
Linear regression and logistic regression have many similarities. For example, in simple linear regression, a set of K data points (xk, yk) are fitted to a proposed model function of the form y = b0 + b1x. The fit is obtained by choosing the b parameters which minimize the sum of the squares of the residuals (the squared error term) for each data point:
The minimum value which constitutes the fit will be denoted by
The idea of a null model may be introduced, in which it is assumed that the x variable is of no use in predicting the yk outcomes: The data points are fitted to a null model function of the form y = b0 with a squared error term:
The fitting process consists of choosing a value of b0 which minimizes the error of the fit to the null model, denoted by where the subscript denotes the null model. It is seen that the null model is optimized by where is the mean of the yk values, and the optimized is:
which is proportional to the square of the (uncorrected) sample standard deviation of the yk data points.
We can imagine a case where the yk data points are randomly assigned to the various xk, and then fitted using the proposed model. Specifically, we can consider the fits of the proposed model to every permutation of the yk outcomes. It can be shown that the optimized error of any of these fits will never be less than the optimum error of the null model, and that the difference between these minimum errors will follow a chi-squared distribution, with degrees of freedom equal to those of the proposed model minus those of the null model, which, in this case, will be 1. Using the chi-squared test, we may then estimate how many of these permuted sets of yk will yield a minimum error less than or equal to the minimum error using the original yk, and so we can estimate how significant an improvement is given by the inclusion of the x variable in the proposed model.
For logistic regression, the measure of goodness-of-fit is the likelihood function L, or its logarithm, the log-likelihood ℓ. The likelihood function L is analogous to the squared-error term in the linear regression case, except that the likelihood is maximized rather than minimized. Denote the maximized log-likelihood of the proposed model by .
In the case of simple binary logistic regression, the set of K data points are fitted in a probabilistic sense to a function of the form:
where is the probability that . The log-odds are given by:
and the log-likelihood is:
For the null model, the probability that is given by:
The log-odds for the null model are given by:
and the log-likelihood is:
Since we have at the maximum of L, the maximum log-likelihood for the null model is
The optimum is:
where is again the mean of the yk values. Again, we can conceptually consider the fit of the proposed model to every permutation of the yk and it can be shown that the maximum log-likelihood of these permutation fits will never be smaller than that of the null model:
Also, as an analog to the error of the linear regression case, we may define the deviance of a logistic regression fit as:
which will always be positive or zero. The reason for this choice is that not only is the deviance a good measure of the goodness of fit, it is also approximately chi-squared distributed, with the approximation improving as the number of data points (K) increases, becoming exactly chi-squared distributed in the limit of an infinite number of data points. As in the case of linear regression, we may use this fact to estimate the probability that a random set of data points will give a better fit than the fit obtained by the proposed model, and so have an estimate of how significantly the model is improved by including the xk data points in the proposed model.
For the simple model of student test scores described above, the maximum value of the log-likelihood of the null model is The maximum value of the log-likelihood for the simple model is so that the deviance is
Using the chi-squared test of significance, the integral of the chi-squared distribution with one degree of freedom from 11.6661... to infinity is equal to 0.00063649...
This effectively means that about 6 out of 10,000 fits to random yk can be expected to have a better fit (smaller deviance) than the given yk, and so we can conclude that the inclusion of the x variable and data in the proposed model is a very significant improvement over the null model. In other words, we reject the null hypothesis with confidence.
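The tail probability quoted above can be reproduced (approximately) with a chi-squared survival function; a minimal sketch using SciPy:

```python
from scipy.stats import chi2

deviance = 11.6661                    # deviance quoted in the text for this example
p_value = chi2.sf(deviance, df=1)     # upper-tail area of a chi-squared with 1 d.o.f.
print(p_value)                        # approximately 0.00064
```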
Goodness of fit summary
Goodness of fit in linear regression models is generally measured using R2. Since this has no direct analog in logistic regression, various methods including the following can be used instead.
Deviance and likelihood ratio tests
In linear regression analysis, one is concerned with partitioning variance via the sum of squares calculations – variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance. In logistic regression analysis, deviance is used in lieu of a sum of squares calculations. Deviance is analogous to the sum of squares calculations in linear regression and is a measure of the lack of fit to the data in a logistic regression model. When a "saturated" model is available (a model with a theoretically perfect fit), deviance is calculated by comparing a given model with the saturated model. This computation gives the likelihood-ratio test:
In the above equation, D represents the deviance and ln represents the natural logarithm. The log of this likelihood ratio (the ratio of the fitted model to the saturated model) will produce a negative value, hence the need for a negative sign. D can be shown to follow an approximate chi-squared distribution. Smaller values indicate better fit as the fitted model deviates less from the saturated model. When assessed upon a chi-square distribution, nonsignificant chi-square values indicate very little unexplained variance and thus, good model fit. Conversely, a significant chi-square value indicates that a significant amount of the variance is unexplained.
When the saturated model is not available (a common case), deviance is calculated simply as −2·(log likelihood of the fitted model), and the reference to the saturated model's log likelihood can be removed from all that follows without harm.
Two measures of deviance are particularly important in logistic regression: null deviance and model deviance. The null deviance represents the difference between a model with only the intercept (which means "no predictors") and the saturated model. The model deviance represents the difference between a model with at least one predictor and the saturated model. In this respect, the null model provides a baseline upon which to compare predictor models. Given that deviance is a measure of the difference between a given model and the saturated model, smaller values indicate better fit. Thus, to assess the contribution of a predictor or set of predictors, one can subtract the model deviance from the null deviance and assess the difference on a chi-square distribution with degrees of freedom equal to the difference in the number of parameters estimated.
Let
Then the difference of both is:
If the model deviance is significantly smaller than the null deviance then one can conclude that the predictor or set of predictors significantly improve the model's fit. This is analogous to the F-test used in linear regression analysis to assess the significance of prediction.
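As an illustrative sketch of comparing null and model deviance, the following uses statsmodels on simulated data; the data, and the use of the llf/llnull attributes, are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

# Simulated data: does adding the predictor x improve on the intercept-only model?
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.2 + 1.0 * x))))

model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
lr_stat = -2 * (model.llnull - model.llf)   # null deviance minus model deviance
df = model.df_model                         # number of predictors added beyond the intercept
print(lr_stat, chi2.sf(lr_stat, df))        # chi-squared statistic and p-value
```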
Pseudo-R-squared
In linear regression the squared multiple correlation, R2, is used to assess goodness of fit as it represents the proportion of variance in the criterion that is explained by the predictors. In logistic regression analysis, there is no agreed upon analogous measure, but there are several competing measures each with limitations.
Four of the most commonly used indices and one less commonly used one are examined on this page:
Likelihood ratio R2
Cox and Snell R2
Nagelkerke R2
McFadden R2
Tjur R2
Hosmer–Lemeshow test
The Hosmer–Lemeshow test uses a test statistic that asymptotically follows a chi-squared distribution to assess whether or not the observed event rates match expected event rates in subgroups of the model population. This test is considered to be obsolete by some statisticians because of its dependence on arbitrary binning of predicted probabilities and relatively low power.
Coefficient significance
After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor. In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient – the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test. In logistic regression, there are several different tests designed to assess the significance of an individual predictor, most notably the likelihood ratio test and the Wald statistic.
Likelihood ratio test
The likelihood-ratio test discussed above to assess model fit is also the recommended procedure to assess the contribution of individual "predictors" to a given model. In the case of a single predictor model, one simply compares the deviance of the predictor model with that of the null model on a chi-square distribution with a single degree of freedom. If the predictor model has significantly smaller deviance (c.f. chi-square using the difference in degrees of freedom of the two models), then one can conclude that there is a significant association between the "predictor" and the outcome. Although some common statistical packages (e.g. SPSS) do provide likelihood ratio test statistics, without this computationally intensive test it would be more difficult to assess the contribution of individual predictors in the multiple logistic regression case. To assess the contribution of individual predictors one can enter the predictors hierarchically, comparing each new model with the previous to determine the contribution of each predictor. There is some debate among statisticians about the appropriateness of so-called "stepwise" procedures. The fear is that they may not preserve nominal statistical properties and may become misleading.
Wald statistic
Alternatively, when assessing the contribution of individual predictors in a given model, one may examine the significance of the Wald statistic. The Wald statistic, analogous to the t-test in linear regression, is used to assess the significance of coefficients. The Wald statistic is the ratio of the square of the regression coefficient to the square of the standard error of the coefficient and is asymptotically distributed as a chi-square distribution.
Although several statistical packages (e.g., SPSS, SAS) report the Wald statistic to assess the contribution of individual predictors, the Wald statistic has limitations. When the regression coefficient is large, the standard error of the regression coefficient also tends to be larger, increasing the probability of Type-II error. The Wald statistic also tends to be biased when data are sparse.
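A minimal sketch of the Wald statistic for a single coefficient; the coefficient and standard error below are hypothetical.

```python
from scipy.stats import chi2

def wald_test(coef, std_err):
    """Wald chi-squared statistic for one coefficient: (beta / SE)^2, 1 degree of freedom."""
    w = (coef / std_err) ** 2
    return w, chi2.sf(w, df=1)

print(wald_test(0.85, 0.32))   # statistic about 7.1, p-value about 0.008
```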
Case-control sampling
Suppose cases are rare. Then we might wish to sample them more frequently than their prevalence in the population. For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to obtain data for only a few diseased individuals. Thus, we may evaluate more diseased individuals, perhaps all of the rare outcomes. This is also called retrospective sampling, or equivalently, unbalanced data. As a rule of thumb, sampling controls at a rate of five times the number of cases will produce sufficient control data.
Logistic regression is unique in that it may be estimated on unbalanced data, rather than randomly sampled data, and still yield correct coefficient estimates of the effects of each independent variable on the outcome. That is to say, if we form a logistic model from such data, if the model is correct in the general population, the parameters are all correct except for the intercept. We can correct the intercept if we know the true prevalence as follows:
where is the true prevalence and is the prevalence in the sample.
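A minimal sketch of this intercept correction, assuming the standard prior-correction form in which the log-odds of the sample prevalence is replaced by the log-odds of the true prevalence; the numbers are hypothetical.

```python
import numpy as np

def corrected_intercept(beta0_hat, sample_prevalence, true_prevalence):
    """Adjust an intercept fitted on oversampled (case-control) data so that predicted
    probabilities refer to the true population prevalence."""
    logit = lambda p: np.log(p / (1.0 - p))
    return beta0_hat + logit(true_prevalence) - logit(sample_prevalence)

# Cases oversampled to 50% in the study but present at 1 in 10,000 in the population:
print(corrected_intercept(beta0_hat=-0.3, sample_prevalence=0.5, true_prevalence=1e-4))
```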
Discussion
Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories (treating the dependent variable in the binomial case as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated. In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive). To do that, binomial logistic regression first calculates the odds of the event happening for different levels of each independent variable, and then takes its logarithm to create a continuous criterion as a transformed version of the dependent variable. The logarithm of the odds is the logit of the probability; the logit is defined as follows:
Although the dependent variable in logistic regression is Bernoulli, the logit is on an unrestricted scale. The logit function is the link function in this kind of generalized linear model, i.e.
is the Bernoulli-distributed response variable and is the predictor variable; the values are the linear parameters.
The logit of the probability of success is then fitted to the predictors. The predicted value of the logit is converted back into predicted odds, via the inverse of the natural logarithm – the exponential function. Thus, although the observed dependent variable in binary logistic regression is a 0-or-1 variable, the logistic regression estimates the odds, as a continuous variable, that the dependent variable is a 'success'. In some applications, the odds are all that is needed. In others, a specific yes-or-no prediction is needed for whether the dependent variable is or is not a 'success'; this categorical prediction can be based on the computed odds of success, with predicted odds above some chosen cutoff value being translated into a prediction of success.
Maximum entropy
Of all the functional forms used for estimating the probabilities of a particular categorical outcome which optimize the fit by maximizing the likelihood function (e.g. probit regression, Poisson regression, etc.), the logistic regression solution is unique in that it is a maximum entropy solution. This is a case of a general property: an exponential family of distributions maximizes entropy, given an expected value. In the case of the logistic model, the logit is the natural parameter of the Bernoulli distribution (the distribution is then in "canonical form", and the logit is the canonical link function), while other sigmoid functions correspond to non-canonical link functions; this underlies its mathematical elegance and ease of optimization. See for details.
Proof
In order to show this, we use the method of Lagrange multipliers. The Lagrangian is equal to the entropy plus the sum of the products of Lagrange multipliers times various constraint expressions. The general multinomial case will be considered, since the proof is not made that much simpler by considering simpler cases. Equating the derivative of the Lagrangian with respect to the various probabilities to zero yields a functional form for those probabilities which corresponds to those used in logistic regression.
As in the above section on multinomial logistic regression, we will consider explanatory variables denoted and which include . There will be a total of K data points, indexed by , and the data points are given by and . The xmk will also be represented as an -dimensional vector . There will be N + 1 possible values of the categorical variable y ranging from 0 to N.
Let pn(x) be the probability, given explanatory variable vector x, that the outcome will be . Define which is the probability that for the k-th measurement, the categorical outcome is n.
The Lagrangian will be expressed as a function of the probabilities pnk and will be minimized by equating the derivatives of the Lagrangian with respect to these probabilities to zero. An important point is that the probabilities are treated equally and the fact that they sum to 1 is part of the Lagrangian formulation, rather than being assumed from the beginning.
The first contribution to the Lagrangian is the entropy:
The log-likelihood is:
Assuming the multinomial logistic function, the derivative of the log-likelihood with respect to the beta coefficients was found to be:
A very important point here is that this expression is (remarkably) not an explicit function of the beta coefficients. It is only a function of the probabilities pnk and the data. Rather than being specific to the assumed multinomial logistic case, it is taken to be a general statement of the condition at which the log-likelihood is maximized and makes no reference to the functional form of pnk. There are then (M+1)(N+1) fitting constraints and the fitting constraint term in the Lagrangian is then:
where the λnm are the appropriate Lagrange multipliers. There are K normalization constraints which may be written:
so that the normalization term in the Lagrangian is:
where the αk are the appropriate Lagrange multipliers. The Lagrangian is then the sum of the above three terms:
Setting the derivative of the Lagrangian with respect to one of the probabilities to zero yields:
Using the more condensed vector notation:
and dropping the primes on the n and k indices, and then solving for yields:
where:
Imposing the normalization constraint, we can solve for the Zk and write the probabilities as:
The are not all independent. We can add any constant -dimensional vector to each of the without changing the value of the probabilities so that there are only N rather than independent . In the multinomial logistic regression section above, the was subtracted from each which set the exponential term involving to 1, and the beta coefficients were given by .
Other approaches
In machine learning applications where logistic regression is used for binary classification, the MLE minimises the cross-entropy loss function.
Logistic regression is an important machine learning algorithm. The goal is to model the probability of a random variable being 0 or 1 given experimental data.
Consider a generalized linear model function parameterized by ,
Therefore,
and since , we see that is given by . We now calculate the likelihood function assuming that all the observations in the sample are independently Bernoulli distributed,
Typically, the log likelihood is maximized,
which is maximized using optimization techniques such as gradient descent.
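A minimal NumPy sketch of gradient descent on the cross-entropy loss; the learning rate and iteration count are arbitrary illustrative values.

```python
import numpy as np

def fit_logistic_gd(X, y, learning_rate=0.1, n_steps=2000):
    """Minimize the mean cross-entropy loss (equivalently, maximize the Bernoulli
    log-likelihood) by batch gradient descent."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        grad = X.T @ (p - y) / len(y)         # gradient of the mean cross-entropy
        beta -= learning_rate * grad
    return beta
```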
Assuming the pairs are drawn uniformly from the underlying distribution, then in the limit of large N,
where is the conditional entropy and is the Kullback–Leibler divergence. This leads to the intuition that by maximizing the log-likelihood of a model, you are minimizing the KL divergence of your model from the maximal entropy distribution. Intuitively, this searches for the model that makes the fewest assumptions in its parameters.
Comparison with linear regression
Logistic regression can be seen as a special case of the generalized linear model and thus analogous to linear regression. The model of logistic regression, however, is based on quite different assumptions (about the relationship between the dependent and independent variables) from those of linear regression. In particular, the key differences between these two models can be seen in the following two features of logistic regression. First, the conditional distribution is a Bernoulli distribution rather than a Gaussian distribution, because the dependent variable is binary. Second, the predicted values are probabilities and are therefore restricted to (0,1) through the logistic distribution function because logistic regression predicts the probability of particular outcomes rather than the outcomes themselves.
Alternatives
A common alternative to the logistic model (logit model) is the probit model, as the related names suggest. From the perspective of generalized linear models, these differ in the choice of link function: the logistic model uses the logit function (inverse logistic function), while the probit model uses the probit function (the inverse of the standard normal cumulative distribution function). Equivalently, in the latent variable interpretations of these two methods, the first assumes a standard logistic distribution of errors and the second a standard normal distribution of errors. Other sigmoid functions or error distributions can be used instead.
Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis.
The assumption of linear predictor effects can easily be relaxed using techniques such as spline functions.
History
A detailed history of the logistic regression is given in . The logistic function was developed as a model of population growth and named "logistic" by Pierre François Verhulst in the 1830s and 1840s, under the guidance of Adolphe Quetelet; see for details. In his earliest paper (1838), Verhulst did not specify how he fit the curves to the data. In his more detailed paper (1845), Verhulst determined the three parameters of the model by making the curve pass through three observed points, which yielded poor predictions.
The logistic function was independently developed in chemistry as a model of autocatalysis (Wilhelm Ostwald, 1883). An autocatalytic reaction is one in which one of the products is itself a catalyst for the same reaction, while the supply of one of the reactants is fixed. This naturally gives rise to the logistic equation for the same reason as population growth: the reaction is self-reinforcing but constrained.
The logistic function was independently rediscovered as a model of population growth in 1920 by Raymond Pearl and Lowell Reed, published as , which led to its use in modern statistics. They were initially unaware of Verhulst's work and presumably learned about it from L. Gustave du Pasquier, but they gave him little credit and did not adopt his terminology. Verhulst's priority was acknowledged and the term "logistic" revived by Udny Yule in 1925 and has been followed since. Pearl and Reed first applied the model to the population of the United States, and also initially fitted the curve by making it pass through three points; as with Verhulst, this again yielded poor results.
In the 1930s, the probit model was developed and systematized by Chester Ittner Bliss, who coined the term "probit" in , and by John Gaddum in , and the model fit by maximum likelihood estimation by Ronald A. Fisher in , as an addendum to Bliss's work. The probit model was principally used in bioassay, and had been preceded by earlier work dating to 1860; see . The probit model influenced the subsequent development of the logit model and these models competed with each other.
The logistic model was likely first used as an alternative to the probit model in bioassay by Edwin Bidwell Wilson and his student Jane Worcester in . However, the development of the logistic model as a general alternative to the probit model was principally due to the work of Joseph Berkson over many decades, beginning in , where he coined "logit", by analogy with "probit", and continuing through and following years. The logit model was initially dismissed as inferior to the probit model, but "gradually achieved an equal footing with the probit", particularly between 1960 and 1970. By 1970, the logit model achieved parity with the probit model in use in statistics journals and thereafter surpassed it. This relative popularity was due to the adoption of the logit outside of bioassay, rather than displacing the probit within bioassay, and its informal use in practice; the logit's popularity is credited to the logit model's computational simplicity, mathematical properties, and generality, allowing its use in varied fields.
Various refinements occurred during that time, notably by David Cox, as in .
The multinomial logit model was introduced independently in and , which greatly increased the scope of application and the popularity of the logit model. In 1973 Daniel McFadden linked the multinomial logit to the theory of discrete choice, specifically Luce's choice axiom, showing that the multinomial logit followed from the assumption of independence of irrelevant alternatives and interpreting odds of alternatives as relative preferences; this gave a theoretical foundation for the logistic regression.
Extensions
There are large numbers of extensions:
Multinomial logistic regression (or multinomial logit) handles the case of a multi-way categorical dependent variable (with unordered values, also called "classification"). The general case of having dependent variables with more than two values is termed polytomous regression.
Ordered logistic regression (or ordered logit) handles ordinal dependent variables (ordered values).
Mixed logit is an extension of multinomial logit that allows for correlations among the choices of the dependent variable.
An extension of the logistic model to sets of interdependent variables is the conditional random field.
Conditional logistic regression handles matched or stratified data when the strata are small. It is mostly used in the analysis of observational studies.
| Mathematics | Statistics | null |
226644 | https://en.wikipedia.org/wiki/Unit%20cell | Unit cell | In geometry, biology, mineralogy and solid state physics, a unit cell is a repeating unit formed by the vectors spanning the points of a lattice. Despite its suggestive name, the unit cell (unlike a unit vector, for example) does not necessarily have unit size, or even a particular size at all. Rather, the primitive cell is the closest analogy to a unit vector, since it has a determined size for a given lattice and is the basic building block from which larger cells are constructed.
The concept is used particularly in describing crystal structure in two and three dimensions, though it makes sense in all dimensions. A lattice can be characterized by the geometry of its unit cell, which is a section of the tiling (a parallelogram or parallelepiped) that generates the whole tiling using only translations.
There are two special cases of the unit cell: the primitive cell and the conventional cell. The primitive cell is a unit cell corresponding to a single lattice point; it is the smallest possible unit cell. In some cases, the full symmetry of a crystal structure is not obvious from the primitive cell, in which case a conventional cell may be used. A conventional cell (which may or may not be primitive) is a unit cell with the full symmetry of the lattice and may include more than one lattice point. The conventional unit cells are parallelotopes in n dimensions.
Primitive cell
A primitive cell is a unit cell that contains exactly one lattice point. For unit cells generally, lattice points that are shared by n cells are counted as 1/n of the lattice points contained in each of those cells; so for example a primitive unit cell in three dimensions which has lattice points only at its eight vertices is considered to contain 1/8 of each of them. An alternative conceptualization is to consistently pick only one of the lattice points to belong to the given unit cell (so the other lattice points belong to adjacent unit cells).
The primitive translation vectors , , span a lattice cell of smallest volume for a particular three-dimensional lattice, and are used to define a crystal translation vector
where , , are integers, translation by which leaves the lattice invariant. That is, for a point in the lattice , the arrangement of points appears the same from as from .
Since the primitive cell is defined by the primitive axes (vectors) , , , the volume of the primitive cell is given by the parallelepiped from the above axes as
Usually, primitive cells in two and three dimensions are chosen to take the shape of parallelograms and parallelepipeds, with an atom at each corner of the cell. This choice of primitive cell is not unique, but the volume of primitive cells will always be given by the expression above.
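As an illustrative sketch, the scalar triple product of the primitive vectors gives the cell volume; in Python/NumPy (the cubic-lattice example is hypothetical):

```python
import numpy as np

def primitive_cell_volume(a1, a2, a3):
    """Volume of the parallelepiped spanned by the primitive translation vectors:
    V = |a1 . (a2 x a3)|."""
    return abs(np.dot(a1, np.cross(a2, a3)))

# Simple cubic lattice with lattice constant a = 2.0; volume should be a**3 = 8.0
a = 2.0
print(primitive_cell_volume(np.array([a, 0.0, 0.0]),
                            np.array([0.0, a, 0.0]),
                            np.array([0.0, 0.0, a])))
```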
Wigner–Seitz cell
In addition to the parallelepiped primitive cells, for every Bravais lattice there is another kind of primitive cell called the Wigner–Seitz cell. In the Wigner–Seitz cell, the lattice point is at the center of the cell, and for most Bravais lattices, the shape is not a parallelogram or parallelepiped. This is a type of Voronoi cell. The Wigner–Seitz cell of the reciprocal lattice in momentum space is called the Brillouin zone.
Conventional cell
For each particular lattice, a conventional cell has been chosen on a case-by-case basis by crystallographers based on convenience of calculation. These conventional cells may have additional lattice points located in the middle of the faces or body of the unit cell. The number of lattice points, as well as the volume of the conventional cell, is an integer multiple (1, 2, 3, or 4) of that of the primitive cell.
Two dimensions
For any 2-dimensional lattice, the unit cells are parallelograms, which in special cases may have orthogonal angles, equal lengths, or both. Four of the five two-dimensional Bravais lattices are represented using conventional primitive cells, as shown below.
The centered rectangular lattice also has a primitive cell in the shape of a rhombus, but in order to allow easy discrimination on the basis of symmetry, it is represented by a conventional cell which contains two lattice points.
Three dimensions
For any 3-dimensional lattice, the conventional unit cells are parallelepipeds, which in special cases may have orthogonal angles, or equal lengths, or both. Seven of the fourteen three-dimensional Bravais lattices are represented using conventional primitive cells, as shown below.
The other seven Bravais lattices (known as the centered lattices) also have primitive cells in the shape of a parallelepiped, but in order to allow easy discrimination on the basis of symmetry, they are represented by conventional cells which contain more than one lattice point.
| Physical sciences | Crystallography | Physics |
226680 | https://en.wikipedia.org/wiki/Chi-squared%20test | Chi-squared test | A chi-squared test (also chi-square or test) is a statistical hypothesis test used in the analysis of contingency tables when the sample sizes are large. In simpler terms, this test is primarily used to examine whether two categorical variables (two dimensions of the contingency table) are independent in influencing the test statistic (values within the table). The test is valid when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table. For contingency tables with smaller sample sizes, a Fisher's exact test is used instead.
In the standard applications of this test, the observations are classified into mutually exclusive classes. If the null hypothesis that there are no differences between the classes in the population is true, the test statistic computed from the observations follows a frequency distribution. The purpose of the test is to evaluate how likely the observed frequencies would be assuming the null hypothesis is true.
Test statistics that follow a distribution occur when the observations are independent. There are also tests for testing the null hypothesis of independence of a pair of random variables based on observations of the pairs.
Chi-squared tests often refers to tests for which the distribution of the test statistic approaches the distribution asymptotically, meaning that the sampling distribution (if the null hypothesis is true) of the test statistic approximates a chi-squared distribution more and more closely as sample sizes increase.
History
In the 19th century, statistical analytical methods were mainly applied in biological data analysis, and it was customary for researchers such as Sir George Airy and Mansfield Merriman to assume that observations followed a normal distribution; their works were criticized by Karl Pearson in his 1900 paper.
At the end of the 19th century, Pearson noticed the existence of significant skewness within some biological observations. In order to model the observations regardless of being normal or skewed, Pearson, in a series of articles published from 1893 to 1916, devised the Pearson distribution, a family of continuous probability distributions, which includes the normal distribution and many skewed distributions, and proposed a method of statistical analysis consisting of using the Pearson distribution to model the observation and performing a test of goodness of fit to determine how well the model really fits to the observations.
Pearson's chi-squared test
In 1900, Pearson published a paper on the test which is considered to be one of the foundations of modern statistics. In this paper, Pearson investigated a test of goodness of fit.
Suppose that observations in a random sample from a population are classified into mutually exclusive classes with respective observed numbers of observations (for ), and a null hypothesis gives the probability that an observation falls into the th class. So we have the expected numbers for all , where
Pearson proposed that, under the circumstance of the null hypothesis being correct, as the limiting distribution of the quantity given below is the distribution.
Pearson dealt first with the case in which the expected numbers are large enough known numbers in all cells assuming every observation may be taken as normally distributed, and reached the result that, in the limit as becomes large, follows the distribution with degrees of freedom.
However, Pearson next considered the case in which the expected numbers depended on the parameters that had to be estimated from the sample, and suggested that, with the notation of being the true expected numbers and being the estimated expected numbers, the difference
will usually be positive and small enough to be omitted. In conclusion, Pearson argued that if we regarded as also distributed as distribution with degrees of freedom, the error in this approximation would not affect practical decisions. This conclusion caused some controversy in practical applications and was not settled for 20 years until Fisher's 1922 and 1924 papers.
Other examples of chi-squared tests
One test statistic that follows a chi-squared distribution exactly is the test that the variance of a normally distributed population has a given value based on a sample variance. Such tests are uncommon in practice because the true variance of the population is usually unknown. However, there are several statistical tests where the chi-squared distribution is approximately valid:
Fisher's exact test
For an exact test used in place of the 2 × 2 chi-squared test for independence when all the row and column totals were fixed by design, see Fisher's exact test. When the row or column margins (or both) are random variables (as in most common research designs), this tends to be overly conservative.
Binomial test
For an exact test used in place of the 2 × 1 chi-squared test for goodness of fit, see binomial test.
Other chi-squared tests
Cochran–Mantel–Haenszel chi-squared test.
McNemar's test, used in certain tables with pairing
Tukey's test of additivity
The portmanteau test in time-series analysis, testing for the presence of autocorrelation
Likelihood-ratio tests in general statistical modelling, for testing whether there is evidence of the need to move from a simple model to a more complicated one (where the simple model is nested within the complicated one).
Yates's correction for continuity
Using the chi-squared distribution to interpret Pearson's chi-squared statistic requires one to assume that the discrete probability of observed binomial frequencies in the table can be approximated by the continuous chi-squared distribution. This assumption is not quite correct and introduces some error.
To reduce the error in approximation, Frank Yates suggested a correction for continuity that adjusts the formula for Pearson's chi-squared test by subtracting 0.5 from the absolute difference between each observed value and its expected value in a contingency table. This reduces the chi-squared value obtained and thus increases its p-value.
Chi-squared test for variance in a normal population
If a sample of size is taken from a population having a normal distribution, then there is a result (see distribution of the sample variance) which allows a test to be made of whether the variance of the population has a pre-determined value. For example, a manufacturing process might have been in stable condition for a long period, allowing a value for the variance to be determined essentially without error. Suppose that a variant of the process is being tested, giving rise to a small sample of product items whose variation is to be tested. The test statistic in this instance could be set to be the sum of squares about the sample mean, divided by the nominal value for the variance (i.e. the value to be tested as holding). Then has a chi-squared distribution with degrees of freedom equal to one less than the sample size. For example, if the sample size is 21, the acceptance region for with a significance level of 5% is between 9.59 and 34.17.
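The acceptance region quoted for the example can be reproduced from the chi-squared quantile function; a minimal SciPy sketch:

```python
from scipy.stats import chi2

n = 21                        # sample size from the example
df = n - 1                    # degrees of freedom
lower = chi2.ppf(0.025, df)   # about 9.59
upper = chi2.ppf(0.975, df)   # about 34.17
print(lower, upper)           # 5% significance level acceptance region
```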
Example chi-squared test for categorical data
Suppose there is a city of 1,000,000 residents with four neighborhoods: , , , and . A random sample of 650 residents of the city is taken and their occupation is recorded as "white collar", "blue collar", or "no collar". The null hypothesis is that each person's neighborhood of residence is independent of the person's occupational classification. The data are tabulated as:
{| class="wikitable" style="text-align: right;"
|-
! !! !! !! !! !! Total
|-
|style="text-align: left;"| White collar || 90 || 60 || 104 || 95 || 349
|-
|style="text-align: left;"| Blue collar || 30 || 50 || 51 || 20 || 151
|-
|style="text-align: left;"| No collar || 30 || 40 || 45 || 35 || 150
|-
!style="text-align: left;"| Total || 150 || 150 || 200 || 150 || 650
|}
Let us take the sample living in neighborhood , 150, to estimate what proportion of the whole 1,000,000 live in neighborhood . Similarly we take 349/650 to estimate what proportion of the 1,000,000 are white-collar workers. By the assumption of independence under the hypothesis we should "expect" the number of white-collar workers in neighborhood to be
Then in that "cell" of the table, we have
The sum of these quantities over all of the cells is the test statistic; in this case, . Under the null hypothesis, this sum has approximately a chi-squared distribution whose number of degrees of freedom is (number of rows − 1)(number of columns − 1) = (3 − 1)(4 − 1) = 6.
If the test statistic is improbably large according to that chi-squared distribution, then one rejects the null hypothesis of independence.
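For illustration, the same test can be computed directly from the observed table with SciPy's chi2_contingency; the rows below are the three occupation categories and the columns the four neighborhoods.

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[90, 60, 104, 95],    # white collar
                     [30, 50,  51, 20],    # blue collar
                     [30, 40,  45, 35]])   # no collar

stat, p_value, dof, expected = chi2_contingency(observed)
print(stat, dof, p_value)   # statistic about 24.6 with 6 degrees of freedom
```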
A related issue is a test of homogeneity. Suppose that instead of giving every resident of each of the four neighborhoods an equal chance of inclusion in the sample, we decide in advance how many residents of each neighborhood to include. Then each resident has the same chance of being chosen as do all residents of the same neighborhood, but residents of different neighborhoods would have different probabilities of being chosen if the four sample sizes are not proportional to the populations of the four neighborhoods. In such a case, we would be testing "homogeneity" rather than "independence". The question is whether the proportions of blue-collar, white-collar, and no-collar workers in the four neighborhoods are the same. However, the test is done in the same way.
Applications
In cryptanalysis, the chi-squared test is used to compare the distribution of plaintext and (possibly) decrypted ciphertext. The lowest value of the test means that the decryption was successful with high probability. This method can be generalized for solving modern cryptographic problems.
In bioinformatics, the chi-squared test is used to compare the distribution of certain properties of genes (e.g., genomic content, mutation rate, interaction network clustering, etc.) belonging to different categories (e.g., disease genes, essential genes, genes on a certain chromosome etc.).
| Mathematics | Statistics and probability | null |
226748 | https://en.wikipedia.org/wiki/Stillbirth | Stillbirth | Stillbirth is typically defined as fetal death at or after 20 or 28 weeks of pregnancy, depending on the source. It results in a baby born without signs of life. A stillbirth can often result in the feeling of guilt or grief in the mother. The term is in contrast to miscarriage, which is an early pregnancy loss, and sudden infant death syndrome, where the baby dies a short time after being born alive.
Often the cause is unknown. Causes may include pregnancy complications such as pre-eclampsia and birth complications, problems with the placenta or umbilical cord, birth defects, infections such as malaria and syphilis, and poor health in the mother. Risk factors include a mother's age over 35, smoking, drug use, use of assisted reproductive technology, and first pregnancy. Stillbirth may be suspected when no fetal movement is felt. Confirmation is by ultrasound.
Worldwide prevention of most stillbirths is possible with improved health systems. Around half of stillbirths occur during childbirth, with this being more common in the developing than developed world. Otherwise, depending on how far along the pregnancy is, medications may be used to start labor or a type of surgery known as dilation and evacuation may be carried out. Following a stillbirth, women are at higher risk of another one; however, most subsequent pregnancies do not have similar problems. Depression, financial loss, and family breakdown are known complications.
Worldwide in 2021, there were an estimated 1.9 million stillbirths that occurred after 28 weeks of pregnancy (about 1 for every 72 births). More than three-quarters of estimated stillbirths in 2021 occurred in sub-Saharan Africa and South Asia, with 47% of the global total in sub-Saharan Africa and 32% in South Asia. Stillbirth rates have declined, though more slowly since the 2000s. According to UNICEF, the total number of stillbirths declined by 35%, from 2.9 million in 2000 to 1.9 million in 2021. It is estimated that if the stillbirth rate for each country stays at the 2021 level, 17.5 million babies will be stillborn by 2030.
Causes
As of 2016, there is no international classification system for stillbirth causes. The causes of a large percentage of stillbirths are unknown, even in cases where extensive testing and an autopsy have been performed. A rarely used term to describe these is "sudden antenatal death syndrome", or SADS, a phrase coined in 2000. Many stillbirths occur at full term to apparently healthy pregnant women, and a postmortem evaluation reveals a cause of death in about 40% of autopsied cases.
About 10% of cases are believed to be due to obesity, high blood pressure, or diabetes.
Other risk factors include:
bacterial infection, like syphilis
malaria
birth defects, especially pulmonary hypoplasia
chromosomal aberrations
growth restriction
intrahepatic cholestasis of pregnancy
maternal diabetes
maternal consumption of recreational drugs (such as alcohol, nicotine, etc.) or pharmaceutical drugs contraindicated in pregnancy
postdate pregnancy
placental abruptions
physical trauma
radiation poisoning
Rh disease
celiac disease
female genital mutilation
umbilical cord accidents
Prolapsed umbilical cord – Prolapse of the umbilical cord happens when the foetus is not in a correct position in the pelvis. Membranes rupture and the cord is pushed out through the cervix. When the fetus pushes on the cervix, the cord is compressed and blocks blood and oxygen flow to the fetus. The pregnant woman has approximately 10 minutes to get to a doctor before there is any harm done to the fetus.
Monoamniotic twins – These twins share the same placenta and the same amniotic sac and therefore can interfere with each other's umbilical cords. When entanglement of the cords is detected, it is highly recommended to deliver the foetuses as early as 31 weeks.
Umbilical cord length – A short umbilical cord (<30 cm) can affect the foetus in that foetal movements can cause cord compression, constriction, and rupture. A long umbilical cord (>72 cm) can affect the foetus depending on the way the foetus interacts with the cord. Some foetuses grasp the umbilical cord, but it is not yet known whether a foetus is strong enough to compress and stop blood flow through the cord. Also, an active foetus, one that frequently repositions itself in the uterus, can accidentally entangle itself with the cord. A hyperactive foetus should be evaluated with ultrasound to rule out cord entanglement.
Cord entanglement – The umbilical cord can wrap around an extremity, the body or the neck of the foetus. When the cord is wrapped around the neck of the fetus, it is called a nuchal cord. These entanglements can cause constriction of blood flow to the fetus. These entanglements can be visualized with ultrasound.
Torsion – This term refers to the twisting of the umbilical around itself. Torsion of the umbilical cord is very common (especially in equine stillbirths) but it is not a natural state of the umbilical cord. The umbilical cord can be untwisted at delivery. The average cord has three twists.
Smoke inhalation – If a pregnant woman gets trapped in a building fire, the smoke and fumes can kill a foetus.
A pregnant woman sleeping on her back after 28 weeks of pregnancy may be a risk factor for stillbirth.
After a stillbirth there is a 2.5% risk of another stillbirth in the next pregnancy (an increase from 0.4%).
In the United States, highest rates of stillbirths happen in pregnant women who:
are of low socioeconomic status
are aged 35 years or older
have chronic medical conditions such as diabetes, high blood pressure, high cholesterol, etc.
are African-American
have previously lost a pregnancy
have multiple children at a time (twins, triplets, etc.)
Diagnosis
It is unknown how much time is needed for a fetus to die. Fetal behavior is consistent, and a change in the fetus's movements or sleep-wake cycles can indicate fetal distress. A decrease or cessation in sensations of fetal activity may be an indication of fetal distress or death. Still, medical examination, including a nonstress test, is recommended in the event of any change in the strength or frequency of fetal movement, especially a complete cessation; most midwives and obstetricians recommend the use of a kick chart to assist in detecting any changes. Fetal distress or death can be confirmed or ruled out via fetoscopy/doptone, ultrasound, and/or electronic fetal monitoring. If the fetus is alive but inactive, extra attention will be given to the placenta and umbilical cord during ultrasound examination to ensure that there is no compromise of oxygen and nutrient delivery.
Some researchers have tried to develop models to identify, early on, pregnant women who may be at high risk of having a stillbirth.
Definition
There are a number of definitions for stillbirth. To allow comparison, the World Health Organization uses the ICD-10 definition of "late fetal deaths" and recommends that any baby born without signs of life at greater than or equal to 28 completed weeks' gestation be classified as a stillbirth. Other organisations recommend that any combination of greater than 16, 20, 22, 24 or 28 weeks gestational age or 350 g, 400 g, 500 g or 1000 g birth weight may be considered a stillbirth.
The term is often used in distinction to live birth (the baby was born alive, even if they died shortly thereafter) or miscarriage (early pregnancy loss). The word miscarriage is often used incorrectly to describe stillbirths. The term is mostly used in a human context; however, the same phenomenon can occur in all species of placental mammals.
Constricted umbilical cord
When the umbilical cord is constricted (q.v. "accidents" above), the fetus experiences periods of hypoxia, and may respond by unusually high periods of kicking or struggling, to free the umbilical cord. These are sporadic if constriction is due to a change in the fetus' or mother's position, and may become worse or more frequent as the fetus grows. Extra attention should be given if mothers experience large increases in kicking from previous childbirths, especially when increases correspond to position changes.
Regulating high blood pressure, diabetes and drug use may reduce the risk of a stillbirth. Umbilical cord constriction may be identified and observed by ultrasound, if requested.
Some maternal factors are associated with stillbirth, including being age 35 or older, having diabetes, having a history of addiction to illegal drugs, being overweight or obese, and smoking cigarettes in the three months before getting pregnant.
Treatment
Fetal death in utero does not present an immediate health risk to the pregnant woman, and labour will usually begin spontaneously after two weeks, so the pregnant woman may choose to wait and bear the fetal remains vaginally. After two weeks, the pregnant woman is at risk of developing blood clotting problems, and labor induction is recommended at this point. In many cases, the pregnant woman will find the idea of carrying the dead fetus traumatizing and will elect to have labor induced. Caesarean birth is not recommended unless complications develop during vaginal birth. How the diagnosis of stillbirth is communicated by healthcare workers may have a long-lasting and deep impact on parents. People need to heal physically after a stillbirth just as they do emotionally. In Ireland, for example, people are offered a 'cuddle cot', a cooled cot which allows them to spend a number of days with their child before burial or cremation.
Delivery
In single stillbirths, common practice is to induce labor for the health of the mother due to possible complications such as exsanguination. Induction and labor can take 48 hours. In the case of complications such as pre-eclampsia, infection, or multiples (twins), an emergency Caesarean section may be performed.
Epidemiology
The average stillbirth rate in the United States is approximately 1 in 160 births, which is roughly 26,000 stillbirths each year. In Australia, England, Wales, and Northern Ireland, the rate is approximately 1 in every 200 births; in Scotland, 1 in 167. Rates of stillbirth in the United States have decreased by about two-thirds since the 1950s.
The vast majority of stillbirths worldwide (98%) occur in low- and middle-income countries, where medical care can be of low quality or unavailable. Reliable estimates calculate that, yearly, about 2.6 million stillbirths occur worldwide during the third trimester. Stillbirths were not included in the Global Burden of Disease Study, which records worldwide deaths from various causes, until 2015.
Society and culture
The way people view stillbirths has changed dramatically over time; however, its economic and psychosocial impact is often underestimated. In the early 20th century, when a stillbirth occurred, the baby was taken and discarded and the parents were expected to immediately let go of the attachment and try for another baby. In many countries, parents are expected by friends and family members to recover from the loss of an unborn baby very soon after it happens. Societally-mediated complications such as financial hardship and depression are among the more common results. A stillbirth can have significant psychological effects on the parents, notably causing feelings of guilt in the mother. Further psycho-social effects on parents include apprehension, anger, feelings of worthlessness and not wanting to interact with other people, with these reactions sometimes carried over into pregnancies that occur after the stillbirth. Men also suffer psychologically after stillbirth, although they are more likely to hide their grief and feelings and try to act strong, with the focus on supporting their partner.
Legal definitions
Australia
In Australia, stillbirth is defined as a baby born with no signs of life that weighs more than 400 grams or is of more than 20 weeks' gestation. These births legally must be registered.
Austria
In Austria, a stillbirth is defined as a birth of a child of at least 500g weight without vital signs, e.g. blood circulation, breath or muscle movements.
Canada
Beginning in 1959, "the definition of a stillbirth was revised to conform, in substance, to the definition of fetal death recommended by the World Health Organization". The definition of "fetal death" promulgated by the World Health Organization in 1950 is as follows:
"Fetal death" means death prior to the complete expulsion or extraction from its mother of a product of human conception, irrespective of the duration of pregnancy and which is not an induced termination of pregnancy. The death is indicated by the fact that after such expulsion or extraction, the fetus does not breathe or show any other evidence of life, such as beating of the heart, pulsation of the umbilical cord, or definite movement of voluntary muscles. Heartbeats are to be distinguished from transient cardiac contractions; respirations are to be distinguished from fleeting respiratory efforts or gasps.
Germany
In Germany, a stillbirth is defined as the birth of a child of at least 500g weight without blood circulation or breath. Details for burial vary amongst the federal states.
Republic of Ireland
Since 1 January 1995, stillbirths occurring in the Republic of Ireland must be registered; stillbirths that occurred before that date can also be registered but evidence is required. For the purposes of civil registration, s.1 of the Stillbirths Registration Act 1994 refers to "...a child weighing at least 500 grammes, or having reached a gestational age of at least 24 weeks who shows no signs of life."
Netherlands
In the Netherlands, stillbirth is defined differently by the Central Bureau of Statistics (CBS) and the Dutch Perinatal Registry (Stichting PRN). The birth and mortality numbers from the CBS include all livebirths, regardless of gestational duration, and all stillbirths from 24 weeks of gestation and onwards. In the Perinatal Registry, gestational duration of both liveborn and stillborn children is available. They register all liveborn and stillborn children from 22, 24 or 28 weeks of gestation and onwards (dependent on the report: fetal, neonatal or perinatal mortality). Therefore, data from these institutions on (still)births cannot be compared simply one-on-one.
United Kingdom
The registration of stillbirths has been required in England and Wales from 1927 and in Scotland from 1939 but is not required in Northern Ireland. Sometimes a pregnancy is terminated deliberately during a late phase, for example due to congenital anomaly. UK law requires these procedures to be registered as "stillbirths".
England and Wales
For the purposes of the Births and Deaths Registration Act 1926 (as amended), section 12 contains the definition. A similar definition is applied within the Births and Deaths Registration Act 1953 (as amended), contained in s.41.
The above definitions apply within those Acts thus other legislation will not necessarily be in identical terms.
s.2 of the 1953 Act requires that registration of a birth takes place within 42 days of the birth except where an inquest takes place or the child has been "found exposed" in which latter case the time limit runs from the time of finding.
Extracts from the register of stillbirths are restricted to those who have obtained consent from the Registrar General for England and Wales.
Scotland
Section 56(1) of the Registration of Births, Deaths and Marriages (Scotland) Act 1965 (as amended) contains the definition, and s.21(1) of the same Act sets out the registration requirement. In the general case, s.14 of the Act requires that a birth be registered within 21 days of the birth or of the child being found.
Unlike the registers for births, marriages, civil partnerships and deaths, the register of still-births is not open to public access and issue of extracts requires the permission of the Registrar General for Scotland.
Northern Ireland
In Northern Ireland, the Births and Deaths Registration (Northern Ireland) Order 1976, as amended, contains the definition. Registration of stillbirths can be made by a relative or certain other persons involved with the stillbirth, but it is not compulsory to do so. Registration takes place with the District Registrar for the Registration District where the stillbirth occurred or for the District in which the mother is resident. A stillbirth certificate will be issued to the registrant, with further copies only available to those obtaining official consent for their issue. Registration may be made within three months of the stillbirth.
United States
In the United States, there is no standard definition of the term 'stillbirth'.
In the U.S., the Born-Alive Infants Protection Act of 2002 specifies that any breathing, heartbeat, pulsating umbilical cord or confirmed voluntary muscle movement indicate live birth rather than stillbirth.
The Centers for Disease Control and Prevention collects statistical information on "live births, fetal deaths, and induced termination of pregnancy" from 57 reporting areas in the United States. Each reporting area has different guidelines and definitions for what is being reported; many do not use the term "stillbirth" at all. The federal guidelines suggest (at page 1) that fetal death and stillbirth can be interchangeable terms. The CDC definition of "fetal death" is based on the definition promulgated by the World Health Organization in 1950 (see section above on Canada). Researchers are learning more about the long-term psychiatric sequelae of traumatic birth and believe the effects may be intergenerational.
The CDC states that, in the US, a stillbirth is typically defined as the loss of a fetus during or after the 20th week of pregnancy. Stillbirths can further be classified as early (occurring between week 20 and week 27 of pregnancy), late (occurring between week 28 and week 36 of pregnancy), and term (occurring during or after week 37 of pregnancy). In the US, approximately 21,000 babies are stillborn annually, and stillbirth affects around 1 in 175 births.
The federal guidelines recommend reporting those fetal deaths whose birth weight is over 12.5 oz (350 g), or those of more than 20 weeks' gestation. Forty-one areas use a definition very similar to the federal definition, thirteen areas use a shortened definition of fetal death, and three areas have no formal definition of fetal death. Only 11 areas specifically use the term 'stillbirth', often synonymously with late fetal death; however, they are split between whether stillbirths are "irrespective of the duration of pregnancy", or whether some age or weight constraint is applied. A movement in the U.S. has changed the way that stillbirths are documented through vital records. Previously, only the deaths were reported. However, 27 states have enacted legislation that offers some variation of a birth certificate as an option for parents who choose to pay for one. Parents may not claim a tax exemption for stillborn infants, even if a birth certificate is offered. To claim an exemption, the birth must be certified as live, even if the infant only lives for a very brief period.
After Dobbs v. Jackson Women's Health Organization, some states restricted women's access to abortion, even when the pregnancy is nonviable. Legal restrictions on medications and procedures that have been used for abortions may also impact treatment options for women undergoing a miscarriage or stillbirth.
| Biology and health sciences | Human reproduction | Biology |
226928 | https://en.wikipedia.org/wiki/Cobra | Cobra | Cobra is the common name of various venomous snakes, most of which belong to the genus Naja.
Many cobras are capable of rearing upwards and producing a hood when threatened.
Other snakes known as "cobras"
While the members of the genus Naja constitute the true cobras, the name cobra is also applied to these other genera and species:
The rinkhals, ringhals or ring-necked spitting cobra (Hemachatus haemachatus) so-called for its neck band as well as its habit of rearing upwards and producing a hood when threatened
The king cobra or hamadryad (Ophiophagus hannah)
The two species of tree cobras, Goldie's tree cobra (Pseudohaje goldii) and the black tree cobra (Pseudohaje nigra)
The two species of shield-nosed cobras, the Cape coral snake (Aspidelaps lubricus) and the shield-nosed cobra (Aspidelaps scutatus)
The two species of black desert cobras or desert black snakes, Walterinnesia aegyptia and Walterinnesia morgani, neither of which rears upwards and produces a hood when threatened
The eastern coral snake or American cobra (Micrurus fulvius), which also does not rear upwards and produce a hood when threatened
The false water cobra (Hydrodynastes gigas) is the only "cobra" species that is not a member of the Elapidae. It does not rear upwards, produces only a slight flattening of the neck when threatened, and is only mildly venomous.
| Biology and health sciences | Snakes | Animals |
226985 | https://en.wikipedia.org/wiki/Vermiculite | Vermiculite | Vermiculite is a hydrous phyllosilicate mineral which undergoes significant expansion when heated. Exfoliation occurs when the mineral is heated sufficiently; commercial furnaces can routinely produce this effect. Vermiculite forms by the weathering or hydrothermal alteration of biotite or phlogopite.
Large commercial vermiculite mines exist in the United States, Russia, South Africa, China, and Brazil.
Occurrence
Vermiculite was first described in 1824 for an occurrence in Millbury, Massachusetts. Its name is from the Latin for "to breed worms", referring to the manner in which it exfoliates when heated.
It typically occurs as an alteration product at the contact between felsic and mafic or ultramafic rocks such as pyroxenites and dunites. It also occurs in carbonatites and metamorphosed magnesium-rich limestone. Associated mineral phases include: corundum, apatite, serpentine, and talc. It occurs interlayered with chlorite, biotite and phlogopite.
Structure
Vermiculite is a 2:1 clay, meaning it has two tetrahedral sheets for every one octahedral sheet. It is a limited-expansion clay with a medium shrink–swell capacity. Vermiculite has a high cation-exchange capacity (CEC) at 100–150 meq/100 g. Vermiculite clays are weathered micas in which the potassium ions between the molecular sheets are replaced by magnesium and iron ions.
Commercial uses
Molded shapes
This process involves mixing exfoliated vermiculite with inorganic bonding agents such as sodium silicate, cement (in specific quantities), and other compounds, such as those containing potassium, to produce an 'earth damp' mixture. This material is then hydraulically pressed into shape in a mold and then heat cured at temperatures up to 180 °C for up to 24 hours, depending upon the thickness of the molded part. Such parts can withstand service temperatures of up to 1150 °C and are often used in the aluminium smelting industry as back-up insulation behind the carbon cathode in the pot cells which contain the molten mixture of cryolite and alumina. The molded shapes and boards are used in:
Open fireplaces
High-temperature or refractory insulation
Acoustic panels
Fireproofing of structural steel and pipes
Calcium silicate boards
Exfoliated vermiculite is added to a calcium silicate slurry. This is then dewatered by pressing or by using one of the Fourdrinier, Magnani, or Hatschek processes to form a flat board which is then heat cured under pressure (typically 10–15 bar) for periods of up to 24 hours.
Brake linings
Finer grades of exfoliated vermiculite are being used in brake linings primarily for the automotive market. The properties of vermiculite that make it an appropriate choice for use in brake linings include its thermal resistance, ease of addition to other raw materials to achieve a homogeneous mix, and its shape and surface characteristics.
Roof and floor screeds and insulating concretes
Exfoliated vermiculite (typically the finer grades) can be added at site to Portland cement and other aggregates, rheological aids, and water to produce roof and floor concrete screeds which are lightweight and insulating. In many cases, vermiculite-based roof screeds are used in conjunction with other insulation materials, such as polystyrene board, to form a total roofing system. A bituminous binder can also be used with exfoliated vermiculite to produce a dry, lightweight roof screed which has the advantages of low thermal conductivity, low moisture content, and ease of placement (by pouring from the bag and then tamping).
Soilless growing medium
Exfoliated vermiculite is combined with other materials such as peat or composted pine bark to produce soilless growing medium for the professional horticulturalist and for the home gardener. These mixes promote faster root growth and give quick anchorage to young roots. The mixture helps retain air, fertilizer, and moisture, releasing them as the plant requires them. These mixes were pioneered by Boodley and Sheldrake. Exfoliated vermiculite is also used as a growing medium for hydroponics.
Seed germination
Vermiculite, alone or mixed with soil or peat, is used to germinate seeds; very little watering is required. When vermiculite is used alone, seedlings should be fed with a weak fertilizer solution when the first true leaves appear, e.g. with one teaspoon of 5-10-5 soluble fertilizer per US gallon of water (1:768 ratio), gradually increased to one tablespoon (1:256 ratio) when transplanting.
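The quoted dilution ratios follow from standard US volume measures (1 US gallon = 768 teaspoons = 256 tablespoons); a minimal sketch of the arithmetic, with those unit conversions as the only assumption:

```python
# Dilution ratios for the fertilizer guidance above (by volume, US measures).
TSP_PER_US_GALLON = 768    # 1 US gallon = 128 fl oz, 1 fl oz = 6 tsp
TBSP_PER_US_GALLON = 256   # 1 tbsp = 3 tsp

print(f"1 teaspoon per gallon   -> 1:{TSP_PER_US_GALLON}")   # 1:768, for seedlings
print(f"1 tablespoon per gallon -> 1:{TBSP_PER_US_GALLON}")  # 1:256, when transplanting
```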
Root crop storage
Pour vermiculite around bulbs placed in a container. If clumps are dug, allow them to dry for a few hours in the sun, then place them in cartons or bushel baskets and cover with vermiculite. The absorptive power of vermiculite acts as a regulator that prevents mildew and moisture fluctuation during the storage period. It will not absorb moisture from the inside of stored tubers, but it does take up free water from the outside, preventing storage rot.
Soil conditioner
Where the native soil is heavy or sticky, gentle mixing of vermiculite as a soil conditioner—up to one-half the volume of the soil—is recommended. This creates air channels and allows the soil mix to breathe. Mixing vermiculite in flower and vegetable gardens or in potted plants will provide the necessary air to maintain vigorous plant growth. Where soils are sandy, mixing of vermiculite into the soil will allow the soil to hold the water and air needed for growth.
As loose-fill insulation
Exfoliated vermiculite treated with a water repellent is used to fill the pores and cavities of masonry construction and hollow blockwork to enhance fire ratings (e.g. Underwriters Laboratories Wall and Partition designs), thermal insulation, and acoustic performance. Expanded vermiculite has also been used as thermal insulation in the attics and walls of houses and in water heaters, fire safes, stoves, furnaces, and refrigerators.
Refractory/Insulation gunning and castable mixes
Exfoliated vermiculite can be combined with high alumina (also known as calcium aluminate) cements and other aggregates such as expanded shale, clay, and slate or sodium silicate to produce refractory/insulation concretes and mortars. In the early days of their use, these products were batched at or very close to the place of installation. This continues to be the case in some limited circumstances; however, more and more use is being made of pre-batched, proprietary mixes. Mixes containing vermiculite are used in areas where strength and corrosion/abrasion resistance are of secondary importance, the most important factor being the insulation performance of the in-place refractory lining. These mixes are used in industries including iron/steel, cement, and hydrocarbon processing.
Fire protection
Vermiculite is used as an additive to fireproof wallboard.
High temperature coating
Vermiculite dispersions are typically either chemically or physically very finely delaminated vermiculite in a fluid medium. These dispersions can be used to make vermiculite 'paper' sheets by pouring them onto a piece of smooth, low surface-energy plastic, and allowing to dry. The resulting sheet can then be peeled off the plastic. Typical end-uses for vermiculite dispersions include inclusion in high temperature coatings or binders for construction materials, gaskets, specialty papers/textiles, oxidation-resistant coating on carbon based composites, and as barrier coatings for films.
Waste treatment
The cation exchange capacity (up to 1,000 milliequivalents per kg) of vermiculite allows it to be used in fluid purification processes for waste water, chemical processing, and the pollution-control of air in mines and gases in industrial processes. In addition to its ion exchange properties, exfoliated vermiculite can retain liquids within the inter-laminar voids of the individual particles, as well as between the particles themselves.
Others
As a packing material, valued for its high absorbency.
As a cooling substrate in blacksmithing.
As a substrate for various animals and/or a medium for incubation of eggs.
As a lightweight aggregate for plaster, proprietary concrete compounds, firestop mortar, and cementitious spray fireproofing: Exfoliated vermiculite is used in both hand and spray-applied general building plasters to improve coverage, ease of handling, adhesion to a wide variety of substrates, fire resistance, and resistance to chipping/cracking/shrinkage.
As a component of the interior fill for firestop pillows, along with graphite.
As a carrier for dry handling and slow release of agricultural chemicals.
As a hot topping: both exfoliated and crude vermiculite have been used for hot topping in the steel industry. When poured onto molten metal, crude vermiculite exfoliates immediately and forms an insulating layer, allowing the material to be transported to the next production process without losing too much heat.
Used to permit slow cooling of hot pieces in glassblowing, lampwork, steelwork, and glass beadmaking.
Used in in-ground swimming pools to provide a smooth pool base: Finer grades of exfoliated vermiculite plus Portland cement may be combined either on-site or in a factory premix to provide a base for swimming pool vinyl liners. These mixes are pumped into place using a rotor stator pump, or hand poured.
Used in commercial hand warmers.
Used in AGA cookers as insulation.
Used in explosives storage as a blast mitigant.
Used to absorb hazardous liquids for solid disposal.
Used in gas fireplaces to simulate embers.
Used as part of a substrate for cultivation of fungi.
Commercial manufacture of exfoliated vermiculite
In 2014, South Africa, Brazil, the US, and China were the top producers of mined, concentrated and unexfoliated vermiculite, with about 90% world share. South Africa's production is decreasing, while Brazil's is significantly increasing.
While some end processors and exfoliators of vermiculite specialize, with proprietary products sold in a wide variety of industries, some have more varied end products, with less stringent technical requirements. Some vermiculite exfoliators blend with lower-cost perlite also. Vermiculite exfoliators have an international trade association called The Vermiculite Association to represent the industry's interests and to exchange information.
Asbestos contamination
Although not all vermiculite contains asbestos, some products were made with vermiculite that contained asbestos until the early 1990s. Vermiculite mines throughout the world are now regularly tested for it and are supposed to sell products that contain no asbestos. The former vermiculite mine in Libby, Montana, did have tremolite asbestos as well as winchite and richterite (both fibrous amphiboles)—in fact, it was formed underground through essentially the same geologic processes as the contaminants.
Pure vermiculite does not contain asbestos and is non-toxic. Impure vermiculite may contain, apart from asbestos, also minor diopside or remnants of the precursor minerals biotite or phlogopite.
Controversy over health risks
The largest and oldest vermiculite mine in the United States was started in the 1920s, at Libby, Montana, and the vermiculite was sold under the commercial name Zonolite. The Zonolite brand and the mine were acquired by the W. R. Grace and Company in 1963. Mining operations at the Libby site stopped in 1990 in response to asbestos contamination. While in operation, the Libby mine may have produced 80% of the world's supply of vermiculite.
The United States government estimates that vermiculite was used in more than 35 million homes, but does not recommend its removal. Nevertheless, homes or structures containing vermiculite or vermiculite insulation dating from before the mid-1990s—and especially those known to contain the "Zonolite" brand—may contain asbestos, and therefore may be a health concern.
An article published in The Salt Lake Tribune on December 3, 2006, reported that vermiculite and Zonolite had been found to contain asbestos, which had led to cancers such as those found in asbestos-related cases. The article stated that there had been a cover-up by W. R. Grace and Company and others regarding the health risks associated with vermiculite and that several sites in the Salt Lake Valley had been remediated by the EPA when they were shown to be contaminated with asbestos. W. R. Grace and Company has vigorously denied these charges.
The vermiculite deposit at the mine in Libby, Montana, was (and is) heavily contaminated with asbestos. Numerous people were knowingly exposed to the harmful dust of vermiculite that contained asbestos. Unfortunately, the mine had been operating since the 1920s, and environmental and industrial controls were virtually non-existent until the mine was purchased by the W. R. Grace and Company in 1963. Yet, knowing the human health risks, the mining company still continued to operate there until 1990. Consequently, many of the former miners and residents of Libby have been affected and continue to suffer health problems. Over 400 people in the town have died from asbestos-related disease due to contamination from vermiculite mining from nearby Zonolite Mountain, where soil samples were found to be loaded with fibrous tremolite (known to be a very hazardous form of asbestos), and countless others there who insulated their homes with Zonolite have succumbed to asbestos-related diseases, most of whom never were employed in environments where asbestos was an issue.
After a 1999 Seattle Post-Intelligencer story claimed that asbestos-related disease was common in the town, the EPA, in response to political pressure, made cleanup of the site a priority and called Libby the worst case of community-wide exposure to a toxic substance in U.S. history. The EPA has spent $120 million in Superfund money on cleanup. In October 2006, W. R. Grace and Company tried to appeal the fines ($54.5 million) levied on them from the EPA, but the Supreme Court rejected the appeal. The United States government pursued criminal charges against several former executives and managers of the mine for allegedly disregarding and covering up health risks to employees. They were also accused of wire fraud, and of obstructing the government's cleanup efforts. As of the indictment date, about 1,200 residents of the Libby area had been identified as suffering from some kind of asbestos-related abnormality. The case ended in acquittals on May 8, 2009. On June 17, 2009, the EPA issued a public health emergency in and near Libby, thereby allowing federal agencies to provide funding for health care, and for removal of contaminated insulation from affected homes.
| Physical sciences | Silicate minerals | Earth science |
227021 | https://en.wikipedia.org/wiki/Computer%20Go | Computer Go | Computer Go is the field of artificial intelligence (AI) dedicated to creating a computer program that plays the traditional board game Go. The field is sharply divided into two eras. Before 2015, the programs of the era were weak. The best efforts of the 1980s and 1990s produced only AIs that could be defeated by beginners, and AIs of the early 2000s were intermediate level at best. Professionals could defeat these programs even given handicaps of 10+ stones in favor of the AI. Many of the algorithms such as alpha-beta minimax that performed well as AIs for checkers and chess fell apart on Go's 19x19 board, as there were too many branching possibilities to consider. Creation of a human professional quality program with the techniques and hardware of the time was out of reach. Some AI researchers speculated that the problem was unsolvable without creation of human-like AI.
The application of Monte Carlo tree search to Go algorithms provided a notable improvement in the late 2000s decade, with programs finally able to achieve a low-dan level: that of an advanced amateur. High-dan amateurs and professionals could still exploit these programs' weaknesses and win consistently, but computer performance had advanced past the intermediate (single-digit kyu) level. The tantalizing unmet goal of defeating the best human players without a handicap, long thought unreachable, brought a burst of renewed interest. The key insight proved to be an application of machine learning and deep learning. DeepMind, a Google acquisition dedicated to AI research, produced AlphaGo in 2015 and announced it to the world in 2016. AlphaGo defeated Lee Sedol, a 9 dan professional, in a no-handicap match in 2016, then in 2017 defeated Ke Jie, who at the time had held the world No. 1 ranking continuously for two years. Just as checkers had fallen to machines in 1995 and chess in 1997, computer programs finally conquered humanity's greatest Go champions in 2016–2017. DeepMind did not release AlphaGo for public use, but various programs have been built since based on the journal articles DeepMind released describing AlphaGo and its variants.
Overview and history
Professional Go players see the game as requiring intuition and creative and strategic thinking. It has long been considered a difficult challenge in the field of artificial intelligence (AI) and is considerably more difficult to solve than chess. Many in the field considered Go to require more elements that mimic human thought than chess, a point made by mathematician I. J. Good as early as 1965.
Prior to 2015, the best Go programs only managed to reach amateur dan level. On the small 9×9 board, the computer fared better, and some programs managed to win a fraction of their 9×9 games against professional players. Prior to AlphaGo, some researchers had claimed that computers would never defeat top humans at Go.
Early decades
The first Go program was written by Albert Lindsey Zobrist in 1968 as part of his thesis on pattern recognition. It introduced an influence function to estimate territory and Zobrist hashing to detect ko.
In April 1981, Jonathan K Millen published an article in Byte discussing Wally, a Go program with a 15x15 board that fit within the KIM-1 microcomputer's 1K RAM. Bruce F. Webster published an article in the magazine in November 1984 discussing a Go program he had written for the Apple Macintosh, including the MacFORTH source. Programs for Go were weak; a 1983 article estimated that they were at best equivalent to 20 kyu, the rating of a naive novice player, and they often restricted themselves to smaller boards. AIs that played on the Internet Go Server (IGS) on 19x19 boards had around 20–15 kyu strength in 2003, after substantial improvements in hardware.
In 1998, very strong players were able to beat computer programs while giving handicaps of 25–30 stones, an enormous handicap that few human players would ever take. There was a case in the 1994 World Computer Go Championship where the winning program, Go Intellect, lost all three games against the youth players while receiving a 15-stone handicap. In general, players who understood and exploited a program's weaknesses could win even through large handicaps.
2007–2014: Monte Carlo tree search
In 2006 (with an article published in 2007), Rémi Coulom produced a new algorithm he called Monte Carlo tree search. In it, a game tree is created as usual of potential futures that branch with every move. However, computers "score" a terminal leaf of the tree by repeated random playouts (similar to Monte Carlo strategies for other problems). The advantage is that such random playouts can be done very quickly. The intuitive objection, that random playouts do not correspond to the actual worth of a position, turned out not to be as fatal to the procedure as expected; the "tree search" side of the algorithm corrected well enough to find reasonable future game trees to explore. Programs based on this method, such as MoGo and Fuego, performed better than the classical AIs that came before them. The best programs could do especially well on the small 9x9 board, which had fewer possibilities to explore. In 2009, the first such programs appeared which could reach and hold low dan-level ranks on the KGS Go Server on the 19x19 board.
In 2010, at the 2010 European Go Congress in Finland, MogoTW played 19x19 Go against Catalin Taranu (5p). MogoTW received a seven-stone handicap and won.
In 2011, Zen reached 5 dan on the server KGS, playing games of 15 seconds per move. The account which reached that rank uses a cluster version of Zen running on a 26-core machine.
In 2012, Zen beat Takemiya Masaki (9p) by 11 points at five stones handicap, followed by a 20-point win at four stones handicap.
In 2013, Crazy Stone beat Yoshio Ishida (9p) in a 19×19 game at four stones handicap.
The 2014 Codecentric Go Challenge, a best-of-five match in even 19x19 games, was played between Crazy Stone and Franz-Jozef Dickhut (6d). No stronger player had ever before agreed to play a serious competition against a Go program on even terms. Franz-Jozef Dickhut won, though Crazy Stone won the first game by 1.5 points.
2015 onwards: The deep learning era
AlphaGo, developed by Google DeepMind, was a significant advance in computer strength compared to previous Go programs. It used techniques that combined deep learning and Monte Carlo tree search. In October 2015, it defeated Fan Hui, the European Go champion, five times out of five in tournament conditions. In March 2016, AlphaGo beat Lee Sedol in the first three of five matches. This was the first time that a 9-dan master had played a professional game against a computer without handicap. Lee won the fourth match, describing his win as "invaluable". AlphaGo won the final match two days later. With this victory, AlphaGo became the first program to beat a 9 dan human professional in a game without handicaps on a full-sized board.
In May 2017, AlphaGo beat Ke Jie, who at the time was ranked top in the world, in a three-game match during the Future of Go Summit.
In October 2017, DeepMind revealed a new version of AlphaGo, trained only through self play, that had surpassed all previous versions, beating the Ke Jie version in 89 out of 100 games.
After the basic principles of AlphaGo were published in the journal Nature, other teams have been able to produce high-level programs. Work on Go AI since has largely consisted of emulating the techniques used to build AlphaGo, which proved so much stronger than everything else. By 2017, both Zen and Tencent's project Fine Art were capable of defeating very high-level professionals some of the time. The open source Leela Zero engine was created as well.
Challenges for strategy and performance for classic AIs
For a long time, it was a widely held opinion that computer Go posed a problem fundamentally different from computer chess. Many considered a strong Go-playing program something that could be achieved only in the far future, as a result of fundamental advances in general artificial intelligence technology. Those who thought the problem feasible believed that domain knowledge would be required to be effective against human experts. Therefore, a large part of the computer Go development effort was during these times focused on ways of representing human-like expert knowledge and combining this with local search to answer questions of a tactical nature. The result of this were programs that handled many specific situations well but which had very pronounced weaknesses in their overall handling of the game. Also, these classical programs gained almost nothing from increases in available computing power. Progress in the field was generally slow.
Size of board
The large board (19×19, 361 intersections) is often noted as one of the primary reasons why a strong program is hard to create. The large board size prevents an alpha-beta searcher from achieving deep look-ahead without significant search extensions or pruning heuristics.
In 2002, a computer program called MIGOS (MIni GO Solver) completely solved the game of Go for the 5×5 board. Black wins, taking the whole board.
Number of move options
Continuing the comparison to chess, Go moves are not as limited by the rules of the game. For the first move in chess, the player has twenty choices. Go players begin with a choice of 55 distinct legal moves, accounting for symmetry. This number rises quickly as symmetry is broken, and soon almost all of the 361 points of the board must be evaluated.
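The figure of 55 distinct first moves can be recovered by counting board points up to the eight rotations and reflections of the square (Burnside's lemma); the following sketch is purely illustrative and not taken from any particular program:

```python
def distinct_first_moves(n: int) -> int:
    """Count points of an n x n board, treating points related by any of the
    eight rotations/reflections of the square as the same move (Burnside)."""
    identity = n * n                      # every point fixed
    rot180 = 1 if n % 2 else 0            # only the centre point (odd n) is fixed
    rot90 = rot180                        # same for 90- and 270-degree rotations
    mirror_axis = n if n % 2 else 0       # middle row / column fixed
    mirror_diag = n                       # the diagonal itself is fixed
    fixed_points = (identity + rot180 + 2 * rot90
                    + 2 * mirror_axis + 2 * mirror_diag)
    return fixed_points // 8

print(distinct_first_moves(19))  # 55
print(distinct_first_moves(9))   # 15
```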
Evaluation function
One of the most basic tasks in a game is to assess a board position: which side is favored, and by how much? In chess, many future positions in a tree are direct wins for one side, and positions can be given a reasonable heuristic evaluation from simple material counting, as well as certain positional factors such as pawn structure. A future where one side has lost their queen for no benefit clearly favors the other side. These types of positional evaluation rules cannot efficiently be applied to Go. The value of a Go position depends on a complex analysis to determine whether or not the group is alive, which stones can be connected to one another, and heuristics around the extent to which a strong position has influence, or the extent to which a weak position can be attacked. A stone placed early might have little immediate influence, but after many moves could become highly important as other areas of the board take shape.
Poor evaluation of board states will cause the AI to work toward positions it incorrectly believes favor it, but actually do not.
Life and death
One of the main concerns for a Go player is which groups of stones can be kept alive and which can be captured. This general class of problems is known as life and death. Knowledge-based AI systems sometimes attempted to understand the life and death status of groups on the board. The most direct approach is to perform a tree search on the moves which potentially affect the stones in question, and then to record the status of the stones at the end of the main line of play. However, within time and memory constraints, it is not generally possible to determine with complete accuracy which moves could affect the 'life' of a group of stones. This implies that some heuristic must be applied to select which moves to consider. The net effect is that for any given program, there is a trade-off between playing speed and life and death reading abilities.
State representation
An issue that all Go programs must tackle is how to represent the current state of the game. The most direct way of representing a board is as a one- or two-dimensional array, where elements in the array represent points on the board, and can take on a value corresponding to a white stone, a black stone, or an empty intersection. Additional data is needed to store how many stones have been captured, whose turn it is, and which intersections are illegal due to the ko rule. In general, machine learning programs stop at this simplest form and let the learned system come to its own understanding of the meaning of the board, often simply using Monte Carlo playouts to "score" a board as good or bad for a player. "Classic" AI programs that attempted to directly model a human's strategy might go further, however, such as layering on data such as stones believed to be dead, stones that are unconditionally alive, stones in a seki state of mutual life, and so forth in their representation of the state of the game.
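A minimal sketch of such a representation in Python; the field names and layout are illustrative, not taken from any particular engine:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

EMPTY, BLACK, WHITE = 0, 1, 2

@dataclass
class GoState:
    """Bare-bones game state: a flat board array plus the extra bookkeeping
    (captures, side to move, ko point) mentioned above. Assumes a 19x19 board."""
    size: int = 19
    board: List[int] = field(default_factory=lambda: [EMPTY] * (19 * 19))
    to_move: int = BLACK
    captures: Tuple[int, int] = (0, 0)   # stones captured by (black, white)
    ko_point: Optional[int] = None       # intersection currently illegal due to ko

    def index(self, row: int, col: int) -> int:
        return row * self.size + col

    def stone_at(self, row: int, col: int) -> int:
        return self.board[self.index(row, col)]
```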
System design
Historically, symbolic artificial intelligence techniques have been used to approach the problem of Go AI. Neural networks began to be tried as an alternative approach in the 2000s decade, as they required immense computing power that was expensive-to-impossible to reach in earlier decades. These approaches attempt to mitigate the problems of the game of Go having a high branching factor and numerous other difficulties.
The only choice a program needs to make is where to place its next stone. However, this decision is made difficult by the wide range of impacts a single stone can have across the entire board, and the complex interactions various stones' groups can have with each other. Various architectures have arisen for handling this problem. Popular techniques and design philosophies include:
some form of tree search,
pattern matching and knowledge-based systems,
the application of Monte Carlo methods,
the use of machine learning.
Minimax tree search
One traditional AI technique for creating game playing software is to use a minimax tree search. This involves playing out all hypothetical moves on the board up to a certain point, then using an evaluation function to estimate the value of that position for the current player. The move which leads to the best hypothetical board is selected, and the process is repeated each turn. While tree searches have been very effective in computer chess, they have seen less success in computer Go programs. This is partly because it has traditionally been difficult to create an effective evaluation function for a Go board, and partly because the large number of possible moves each side can make leads to a high branching factor. This makes the technique very computationally expensive. Because of this, many programs which use search trees extensively can only play on the smaller 9×9 board, rather than full 19×19 ones.
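A generic sketch of depth-limited minimax; the move generation, position evaluation, and move application are left as caller-supplied callbacks, since (as discussed above) those are exactly the pieces that are hard to provide for Go:

```python
def minimax(state, depth, maximizing, evaluate, legal_moves, play):
    """Plain depth-limited minimax: explore `depth` plies and back up the
    evaluation. `evaluate(state)`, `legal_moves(state)` and `play(state, move)`
    are game-specific callbacks supplied by the caller."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None

    best_value = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        value, _ = minimax(play(state, move), depth - 1, not maximizing,
                           evaluate, legal_moves, play)
        if (maximizing and value > best_value) or (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move
```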
There are several techniques which can greatly improve the performance of search trees in terms of both speed and memory. Pruning techniques such as alpha–beta pruning, Principal Variation Search, and MTD(f) can reduce the effective branching factor without loss of strength. In tactical areas such as life and death, Go is particularly amenable to caching techniques such as transposition tables. These can reduce the amount of repeated effort, especially when combined with an iterative deepening approach. In order to quickly store a full-sized Go board in a transposition table, a hashing technique for mathematically summarizing the position is generally necessary. Zobrist hashing is very popular in Go programs because it has low collision rates, and can be iteratively updated at each move with just two XORs, rather than being calculated from scratch. Even using these performance-enhancing techniques, full tree searches on a full-sized board are still prohibitively slow. Searches can be sped up by using large amounts of domain-specific pruning techniques, such as not considering moves where the opponent is already strong, and selective extensions such as always considering moves next to groups of stones which are about to be captured. However, both of these options introduce a significant risk of not considering a vital move which would have changed the course of the game.
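A minimal sketch of Zobrist hashing for a Go position; the table construction below is the standard textbook form, not code from any cited engine:

```python
import random

SIZE = 19
BLACK, WHITE = 0, 1

random.seed(0)  # fixed seed so the key table is reproducible
ZOBRIST = [[random.getrandbits(64) for _ in (BLACK, WHITE)]
           for _ in range(SIZE * SIZE)]

def toggle_stone(h: int, point: int, color: int) -> int:
    """XOR a stone in or out of the hash; the same call both adds and removes it,
    which is what makes the per-move incremental update so cheap."""
    return h ^ ZOBRIST[point][color]

h = 0
h = toggle_stone(h, 60, BLACK)   # place a black stone
h = toggle_stone(h, 60, BLACK)   # capture it again
assert h == 0                    # hash returns to the empty-board value
```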
Results of computer competitions show that pattern matching techniques for choosing a handful of appropriate moves combined with fast localized tactical searches (explained above) were once sufficient to produce a competitive program. For example, GNU Go was competitive until 2008.
Knowledge-based systems
Human novices often learn from the game records of old games played by master players. AI work in the 1990s often involved attempting to "teach" the AI human-style heuristics of Go knowledge. In 1996, Tim Klinger and David Mechner acknowledged the beginner-level strength of the best AIs and argued that "it is our belief that with better tools for representing and maintaining Go knowledge, it will be possible to develop stronger Go programs." They proposed two ways: recognizing common configurations of stones and their positions and concentrating on local battles. In 2001, one paper concluded that "Go programs are still lacking in both quality and quantity of knowledge," and that fixing this would improve Go AI performance.
In theory, the use of expert knowledge would improve Go software. Hundreds of guidelines and rules of thumb for strong play have been formulated by both high-level amateurs and professionals. The programmer's task is to take these heuristics, formalize them into computer code, and utilize pattern matching and pattern recognition algorithms to recognize when these rules apply. It is also important to be able to "score" these heuristics so that when they offer conflicting advice, the system has ways to determine which heuristic is more important and applicable to the situation. Most of the relatively successful results come from programmers' individual skills at Go and their personal conjectures about Go, but not from formal mathematical assertions; they are trying to make the computer mimic the way they play Go. Competitive programs around 2001 could contain 50–100 modules that dealt with different aspects and strategies of the game, such as joseki.
Some examples of programs which have relied heavily on expert knowledge are Handtalk (later known as Goemate), The Many Faces of Go, Go Intellect, and Go++, each of which has at some point been considered the world's best Go program. However, these methods ultimately had diminishing returns, and never really advanced past an intermediate level at best on a full-sized board. One particular problem was overall game strategy. Even if an expert system recognizes a pattern and knows how to play a local skirmish, it may miss a looming deeper strategic problem in the future. The result is a program whose strength is less than the sum of its parts; while moves may be good on an individual tactical basis, the program can be tricked and maneuvered into ceding too much in exchange, and find itself in an overall losing position. As the 2001 survey put it, "just one bad move can ruin a good game. Program performance over a full game can be much lower than master level."
Monte-Carlo methods
One major alternative to using hand-coded knowledge and searches is the use of Monte Carlo methods. This is done by generating a list of potential moves and, for each move, playing out thousands of games at random on the resulting board. The move which leads to the best set of random games for the current player is chosen as the best move. No potentially fallible knowledge-based system is required. However, because the moves used for evaluation are generated at random, it is possible that a move which would be excellent except for one specific opponent response would be mistakenly evaluated as a good move. The result of this is programs which are strong in an overall strategic sense, but imperfect tactically. This problem can be mitigated by adding some domain knowledge in the move generation and a greater level of search depth on top of the random evolution. Some programs which use Monte-Carlo techniques are Fuego, The Many Faces of Go v12, Leela, MoGo, Crazy Stone, MyGoFriend, and Zen.
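A sketch of the "flat" Monte Carlo idea described above; the playout, move generation, and move application are assumed to be supplied by the caller rather than taken from any specific program:

```python
def flat_monte_carlo_move(state, legal_moves, play, random_playout, playouts=1000):
    """Choose the move whose random playouts score best for the side to move.
    `random_playout(state)` must return 1 for a win and 0 for a loss; all game
    mechanics are game-specific callbacks."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(state):
        wins = sum(random_playout(play(state, move)) for _ in range(playouts))
        win_rate = wins / playouts
        if win_rate > best_rate:
            best_move, best_rate = move, win_rate
    return best_move
```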
In 2006, a new search technique, upper confidence bounds applied to trees (UCT), was developed and applied to many 9x9 Monte-Carlo Go programs with excellent results. UCT uses the results of the playouts collected so far to guide the search along the more successful lines of play, while still allowing alternative lines to be explored. The UCT technique, along with many other optimizations for playing on the larger 19x19 board, has led MoGo to become one of the strongest research programs. Successful early applications of UCT methods to 19x19 Go include MoGo, Crazy Stone, and Mango. MoGo won the 2007 Computer Olympiad and won one (out of three) blitz games against Guo Juan, 5th Dan Pro, in the much less complex 9x9 Go. The Many Faces of Go won the 2008 Computer Olympiad after adding UCT search to its traditional knowledge-based engine.
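At the heart of UCT is the UCB1 rule used to decide which child of a tree node to descend into next; a minimal sketch (the exploration constant is a common textbook choice, not a value taken from MoGo or any other cited program):

```python
import math

def ucb1_select(children, exploration=1.4):
    """children: list of (wins, visits) pairs, one per candidate move at a node.
    Returns the index of the child maximising the UCB1 score, which balances
    exploitation (observed win rate) against exploration (rarely tried moves)."""
    total_visits = sum(visits for _, visits in children)

    def score(wins, visits):
        if visits == 0:
            return float("inf")   # always try unvisited children first
        return wins / visits + exploration * math.sqrt(math.log(total_visits) / visits)

    return max(range(len(children)), key=lambda i: score(*children[i]))
```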
Monte-Carlo based Go engines have a reputation of being much more willing than human players to play tenuki, moves elsewhere on the board, rather than continue a local fight. This was often perceived as a weakness early in these programs' existence. That said, this tendency has persisted in AlphaGo's playstyle with dominant results, so this may be more of a "quirk" than a "weakness".
Machine learning
The skill level of knowledge-based systems is closely linked to the knowledge of their programmers and associated domain experts. This limitation has made it difficult to program truly strong AIs. A different path is to use machine learning techniques. In these, the only things that the programmers need to program are the rules and simple scoring algorithms for how to analyze the worth of a position. The software then automatically generates, in theory, its own sense of patterns, heuristics, and strategies.
This is generally done by allowing a neural network or genetic algorithm to either review a large database of professional games, or play many games against itself or other people or programs. These algorithms are then able to utilize this data as a means of improving their performance. Machine learning techniques can also be used in a less ambitious context to tune specific parameters of programs that rely mainly on other techniques. For example, Crazy Stone learns move generation patterns from several hundred sample games, using a generalization of the Elo rating system.
The most famous example of this approach is AlphaGo, which proved far more effective than previous AIs. In its first version, it had one layer that analyzed millions of existing positions to determine likely moves to prioritize as worthy of further analysis, and another layer that tried to optimize its own winning chances using the suggested likely moves from the first layer. AlphaGo used Monte Carlo tree search to score the resulting positions. A later version of AlphaGo, AlphaGoZero, eschewed learning from existing Go games, and instead learnt only from playing itself repeatedly. Other earlier programs using neural nets include NeuroGo and WinHonte.
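The published AlphaGo work combines the two networks with tree search through a PUCT-style selection rule; the following is a simplified, illustrative sketch of that idea, with the constant and data layout chosen for clarity rather than copied from the papers:

```python
import math

def puct_select(children, c_puct=1.0):
    """children: list of (mean_value, visits, prior) per candidate move, where
    `prior` is the policy network's probability for the move and `mean_value`
    averages the value estimates seen so far in the search."""
    total_visits = sum(visits for _, visits, _ in children)

    def score(mean_value, visits, prior):
        exploration = c_puct * prior * math.sqrt(total_visits) / (1 + visits)
        return mean_value + exploration

    return max(range(len(children)), key=lambda i: score(*children[i]))
```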
Computer Go and other fields
Computer Go research results are being applied to other similar fields such as cognitive science, pattern recognition and machine learning. Combinatorial Game Theory, a branch of applied mathematics, is a topic relevant to computer Go.
John H. Conway suggested applying surreal numbers to analysis of the endgame in Go. This idea has been further developed by Elwyn R. Berlekamp and David Wolfe in their book Mathematical Go. Go endgames have been proven to be PSPACE-hard if the absolute best move must be calculated on an arbitrary mostly filled board. Certain complicated situations such as Triple Ko, Quadruple Ko, Molasses Ko, and Moonshine Life make this problem difficult. (In practice, strong Monte Carlo algorithms can still handle normal Go endgame situations well enough, and the most complicated classes of life-and-death endgame problems are unlikely to come up in a high-level game.)
Various difficult combinatorial problems (any NP-hard problem) can be converted to Go-like problems on a sufficiently large board; however, the same is true for other abstract board games, including chess and minesweeper, when suitably generalized to a board of arbitrary size. NP-complete problems do not tend in their general case to be easier for unaided humans than for suitably programmed computers: unaided humans are much worse than computers at solving, for example, instances of the subset sum problem.
List of Go-playing computer programs
AlphaGo, a machine learning program by Google DeepMind, and the first computer program to win in no-handicap matches against a 9-dan human Go player
BaduGI, a program by Jooyoung Lee
Crazy Stone, by Rémi Coulom (sold as Saikyo no Igo in Japan)
Darkforest, by Facebook
Fine Art, by Tencent
Fuego, an open source Monte Carlo program
Goban, a Macintosh Go program by Sen:te (requires free Goban Extensions)
GNU Go, an open source classical Go program
KataGo, by David Wu.
Leela, the first Monte Carlo program for the public
Leela Zero, a reimplementation of the system described in the AlphaGo Zero paper
The Many Faces of Go, by David Fotland (sold as AI Igo in Japan)
MyGoFriend, a program by Frank Karger
MoGo by Sylvain Gelly; parallel version by many people.
Pachi, an open source Monte Carlo program by Petr Baudiš
Smart Go, by Anders Kierulf, inventor of the Smart Game Format
Steenvreter, by Erik van der Werf
Zen, by Yoji Ojima aka Yamato (sold as Tencho no Igo in Japan); parallel version by Hideki Kato.
Competitions among computer Go programs
Several annual competitions take place between Go computer programs, including Go events at the Computer Olympiad. Regular, less formal, competitions between programs used to occur on the KGS Go Server (monthly) and the Computer Go Server (continuous).
Many programs are available that allow computer Go engines to play against each other; they almost always communicate via the Go Text Protocol (GTP).
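A minimal sketch of the engine side of a GTP session; only a handful of the standard commands are handled, the move choice is a placeholder, and the engine name is made up:

```python
import sys

def reply(text: str) -> None:
    # GTP replies start with '=' on success or '?' on failure,
    # and are terminated by a blank line.
    print(text + "\n", flush=True)

def gtp_loop():
    """Answer a few standard Go Text Protocol commands on stdin/stdout."""
    for line in sys.stdin:
        parts = line.strip().split()
        if not parts:
            continue
        command = parts[0]
        if command == "protocol_version":
            reply("= 2")
        elif command == "name":
            reply("= toy-engine")
        elif command == "version":
            reply("= 0.1")
        elif command == "genmove":
            reply("= pass")        # placeholder: a real engine would search here
        elif command == "quit":
            reply("=")
            break
        else:
            reply("? unknown command")

if __name__ == "__main__":
    gtp_loop()
```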
History
The first computer Go competition was sponsored by Acornsoft, and the first regular ones by USENIX. They ran from 1984 to 1988. These competitions introduced Nemesis, the first competitive Go program from Bruce Wilcox, and G2.5 by David Fotland, which would later evolve into Cosmos and The Many Faces of Go.
One of the early drivers of computer Go research was the Ing Prize, a relatively large money award sponsored by Taiwanese banker Ing Chang-ki, offered annually between 1985 and 2000 at the World Computer Go Congress (or Ing Cup). The winner of this tournament was allowed to challenge young players at a handicap in a short match. If the computer won the match, the prize was awarded and a new prize announced: a larger prize for beating the players at a lesser handicap. The series of Ing prizes was set to expire either 1) in the year 2000 or 2) when a program could beat a 1-dan professional at no handicap for 40,000,000 NT dollars. The last winner was Handtalk in 1997, claiming 250,000 NT dollars for winning an 11-stone handicap match against three 11–13 year old amateur 2–6 dans. At the time the prize expired in 2000, the unclaimed prize was 400,000 NT dollars for winning a nine-stone handicap match.
Many other large regional Go tournaments ("congresses") had an attached computer Go event. The European Go Congress has sponsored a computer tournament since 1987, and the USENIX event evolved into the US/North American Computer Go Championship, held annually from 1988 to 2000 at the US Go Congress.
Japan started sponsoring computer Go competitions in 1995. The FOST Cup was held annually from 1995 to 1999 in Tokyo. That tournament was supplanted by the Gifu Challenge, which was held annually from 2003 to 2006 in Ogaki, Gifu. The Computer Go UEC Cup has been held annually since 2007.
Scoring formalization in computer-computer games
When two computers play a game of Go against each other, the ideal is to treat the game in a manner identical to two humans playing while avoiding any intervention from actual humans. However, this can be difficult during end game scoring. The main problem is that Go playing software, which usually communicates using the standardized Go Text Protocol (GTP), will not always agree with respect to the alive or dead status of stones.
While there is no general way for two different programs to "talk it out" and resolve the conflict, this problem is avoided for the most part by using Chinese, Tromp-Taylor, or American Go Association (AGA) rules in which continued play (without penalty) is required until there is no more disagreement on the status of any stones on the board. In practice, such as on the KGS Go Server, the server can mediate a dispute by sending a special GTP command to the two client programs indicating they should continue placing stones until there is no question about the status of any particular group (all dead stones have been captured). The CGOS Go Server usually sees programs resign before a game has even reached the scoring phase, but nevertheless supports a modified version of Tromp-Taylor rules requiring a full play out.
These rule sets mean that a program which was in a winning position at the end of the game under Japanese rules (when both players have passed) could theoretically lose because of poor play in the resolution phase, but this is very unlikely and considered a normal part of the game under all of the area rule sets.
The main drawback to the above system is that some rule sets (such as the traditional Japanese rules) penalize the players for making these extra moves, precluding the use of additional playout for two computers. Nevertheless, most modern Go programs support Japanese rules against humans.
Historically, another method for resolving this problem was to have an expert human judge the final board. However, this introduces subjectivity into the results and the risk that the expert would miss something the program saw.
| Technology | Artificial intelligence concepts | null |
227100 | https://en.wikipedia.org/wiki/Silver%20nitrate | Silver nitrate | Silver nitrate is an inorganic compound with chemical formula AgNO3. It is a versatile precursor to many other silver compounds, such as those used in photography. It is far less sensitive to light than the halides. It was once called lunar caustic because silver was called luna by ancient alchemists who associated silver with the moon. In solid silver nitrate, the silver ions are three-coordinated in a trigonal planar arrangement.
Synthesis and structure
Albertus Magnus, in the 13th century, documented the ability of nitric acid to separate gold and silver by dissolving the silver. Indeed silver nitrate can be prepared by dissolving silver in nitric acid followed by evaporation of the solution. The stoichiometry of the reaction depends upon the concentration of nitric acid used.
3 Ag + 4 HNO3 (cold and diluted) → 3 AgNO3 + 2 H2O + NO
Ag + 2 HNO3 (hot and concentrated) → AgNO3 + H2O + NO2
The structure of silver nitrate has been examined by X-ray crystallography several times. In the common orthorhombic form stable at ordinary temperature and pressure, the silver atoms form pairs with Ag---Ag contacts of 3.227 Å. Each Ag+ center is bonded to six oxygen centers of both uni- and bidentate nitrate ligands. The Ag-O distances range from 2.384 to 2.702 Å.
Reactions
A typical reaction with silver nitrate is to suspend a rod of copper in a solution of silver nitrate and leave it for a few hours. The silver nitrate reacts with copper to form hairlike crystals of silver metal and a blue solution of copper nitrate:
2 AgNO3 + Cu → Cu(NO3)2 + 2 Ag
Silver nitrate decomposes when heated:
2 AgNO3(l) → 2 Ag(s) + O2(g) + 2 NO2(g)
Qualitatively, decomposition is negligible below the melting point, but becomes appreciable around 250 °C and fully decomposes at 440 °C.
Most metal nitrates thermally decompose to the respective oxides, but silver oxide decomposes at a lower temperature than silver nitrate, so the decomposition of silver nitrate yields elemental silver instead.
Uses
Precursor to other silver compounds
Silver nitrate is the least expensive salt of silver; it offers several other advantages as well. It is non-hygroscopic, in contrast to silver fluoroborate and silver perchlorate. In addition, it is relatively stable to light, and it dissolves in numerous solvents, including water. The nitrate can be easily replaced by other ligands, rendering AgNO3 versatile. Treatment with solutions of halide ions gives a precipitate of AgX (X = Cl, Br, I). When making photographic film, silver nitrate is treated with halide salts of sodium or potassium to form insoluble silver halide in situ in photographic gelatin, which is then applied to strips of tri-acetate or polyester. Similarly, silver nitrate is used to prepare some silver-based explosives, such as the fulminate, azide, or acetylide, through a precipitation reaction.
Treatment of silver nitrate with base gives dark grey silver oxide:
2 AgNO3 + 2 NaOH → Ag2O + 2 NaNO3 + H2O
Halide abstraction
The silver cation, Ag+, reacts quickly with halide sources to produce the insoluble silver halide, which is a cream precipitate if Br− is used, a white precipitate if Cl− is used and a yellow precipitate if I− is used. This reaction is commonly used in inorganic chemistry to abstract halides:
Ag+(aq) + X−(aq) → AgX(s)
where X− = Cl−, Br−, or I−.
Other silver salts with non-coordinating anions, namely silver tetrafluoroborate and silver hexafluorophosphate, are used for more demanding applications.
Similarly, this reaction is used in analytical chemistry to confirm the presence of chloride, bromide, or iodide ions. Samples are typically acidified with dilute nitric acid to remove interfering ions, e.g. carbonate ions and sulfide ions. This step avoids confusion of silver sulfide or silver carbonate precipitates with that of silver halides. The color of precipitate varies with the halide: white (silver chloride), pale yellow/cream (silver bromide), yellow (silver iodide). AgBr and especially AgI photo-decompose to the metal, as evidenced by a grayish color on exposed samples.
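As a rough sketch of the quantitative side of this test (molar masses are approximate and the function name is illustrative, not from any standard library), the 1:1 stoichiometry Ag+ + Cl− → AgCl lets one estimate how much silver nitrate is consumed and how much precipitate forms for a given amount of chloride:

```python
M_AgNO3 = 169.87   # g/mol, approximate
M_Cl    = 35.45    # g/mol
M_AgCl  = 143.32   # g/mol

def agno3_needed(chloride_mass_g):
    """Ag+ + Cl- -> AgCl: one mole of AgNO3 per mole of chloride ion."""
    moles_cl = chloride_mass_g / M_Cl
    return moles_cl * M_AgNO3, moles_cl * M_AgCl   # (AgNO3 used, AgCl formed)

agno3_g, agcl_g = agno3_needed(0.100)   # 100 mg of chloride ion
print(f"{agno3_g:.3f} g AgNO3 -> {agcl_g:.3f} g AgCl precipitate")
```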
The same reaction was used on steamships in order to determine whether or not boiler feedwater had been contaminated with seawater. It is still used to determine if moisture on formerly dry cargo is a result of condensation from humid air, or from seawater leaking through the hull.
Organic synthesis
Silver nitrate is used in many ways in organic synthesis, e.g. for deprotection and oxidations. Ag+ binds alkenes reversibly, and silver nitrate has been used to separate mixtures of alkenes by selective absorption. The resulting adduct can be decomposed with ammonia to release the free alkene. Silver nitrate is highly soluble in water but is poorly soluble in most organic solvents, except acetonitrile (111.8 g/100 g, 25 °C).
Biology
In histology, silver nitrate is used for silver staining, for demonstrating reticular fibers, proteins and nucleic acids. For this reason it is also used to demonstrate proteins in PAGE gels. It can be used as a stain in scanning electron microscopy.
Cut flower stems can be placed in a silver nitrate solution, which prevents the production of ethylene. This delays ageing of the flower.
Indelible ink
Silver nitrate produces a long-lasting stain when applied to skin and is one of the ingredients of indelible ink. An electoral stain makes use of this to mark a finger of people who have voted in an election, allowing easy identification to prevent double-voting.
In addition to staining skin, silver nitrate has a history of use in stained glass. In the 14th century, artists began using a "silver stain" (also known as a yellow stain) made from silver nitrate to create a yellow effect on clear glass. The stain would produce a stable color that could range from pale lemon to deep orange or gold. Silver stain was often used with glass paint, and was applied to the opposite side of the glass as the paint. It was also used to create a mosaic effect by reducing the number of pieces of glass in a window. Despite the age of the technique, this process of creating stained glass remains almost entirely unchanged.
Medicine
Silver salts have antiseptic properties. In 1881 Credé introduced a method known as Credé's prophylaxis, in which dilute (2%) solutions of silver nitrate were placed in newborn babies' eyes at birth to prevent contraction of gonorrhea from the mother, which could cause blindness via ophthalmia neonatorum. (Modern antibiotics are now used instead.)
Fused silver nitrate, shaped into sticks, was traditionally called "lunar caustic". It is used as a cauterizing agent, for example to remove granulation tissue around a stoma. General Sir James Abbott noted in his journals that in India in 1827 it was infused by a British surgeon into wounds in his arm resulting from the bite of a mad dog to cauterize the wounds and prevent the onset of rabies.
Silver nitrate is used to cauterize superficial blood vessels in the nose to help prevent nosebleeds.
Dentists sometimes use silver nitrate-infused swabs to heal oral ulcers. Silver nitrate is used by some podiatrists to kill cells located in the nail bed.
The Canadian physician C. A. Douglas Ringrose researched the use of silver nitrate for sterilization procedures, believing that silver nitrate could be used to block and corrode the fallopian tubes. The technique was ineffective.
Disinfection
Much research has been done in evaluating the ability of the silver ion to inactivate Escherichia coli, a microorganism commonly used as an indicator for fecal contamination and as a surrogate for pathogens in drinking water treatment. Concentrations of silver nitrate evaluated in inactivation experiments range from 10 to 200 micrograms per liter as Ag+.
Silver's antimicrobial activity saw many applications prior to the discovery of modern antibiotics, after which it fell into near disuse. Its association with argyria made consumers wary and led them to turn away from it when given an alternative.
Against warts
Repeated daily application of silver nitrate can induce adequate destruction of cutaneous warts, but occasionally pigmented scars may develop. In a placebo-controlled study of 70 patients, silver nitrate given over nine days resulted in clearance of all warts in 43% and improvement in warts in 26% one month after treatment compared to 11% and 14%, respectively, in the placebo group.
Safety
As an oxidant, silver nitrate should be properly stored away from organic compounds. It reacts explosively with ethanol. Despite its common usage in extremely low concentrations to prevent gonorrhea and control nosebleeds, silver nitrate is still very toxic and corrosive. Brief exposure will not produce any immediate side effects other than the purple, brown or black stains on the skin, but upon constant exposure to high concentrations, side effects will be noticeable, which include burns. Long-term exposure may cause eye damage. Silver nitrate is known to be a skin and eye irritant. Silver nitrate has not been thoroughly investigated for potential carcinogenic effect.
Silver nitrate is currently unregulated in water sources by the United States Environmental Protection Agency. However, if more than 1 gram of silver is accumulated in the body, a condition called argyria may develop. Argyria is a permanent cosmetic condition in which the skin and internal organs turn a blue-gray color. The United States Environmental Protection Agency used to have a maximum contaminant limit for silver in water until 1990, when it was determined that argyria did not impact the function of any affected organs despite the discolouration. Argyria is more often associated with the consumption of colloidal silver solutions rather than with silver nitrate, since it is only used at extremely low concentrations to disinfect the water. However, it is still important to be wary before ingesting any sort of silver-ion solution.
| Physical sciences | Nitric oxyanions | Chemistry |
227323 | https://en.wikipedia.org/wiki/Wilson%27s%20theorem | Wilson's theorem | In algebra and number theory, Wilson's theorem states that a natural number n > 1 is a prime number if and only if the product of all the positive integers less than n is one less than a multiple of n. That is (using the notations of modular arithmetic), the factorial (n − 1)! satisfies (n − 1)! ≡ −1 (mod n)
exactly when n is a prime number. In other words, any integer n > 1 is a prime number if, and only if, (n − 1)! + 1 is divisible by n.
History
The theorem was first stated by Ibn al-Haytham around 1000 AD. Edward Waring announced the theorem in 1770 without proving it, crediting his student John Wilson for the discovery. Lagrange gave the first proof in 1771. There is evidence that Leibniz was also aware of the result a century earlier, but never published it.
Example
For each of the values of n from 2 to 30, the following table shows the number (n − 1)! and the remainder when (n − 1)! is divided by n. (In the notation of modular arithmetic, the remainder when m is divided by n is written m mod n.)
The background color is blue for prime values of n, gold for composite values.
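The table is straightforward to regenerate. The short sketch below computes (n − 1)! mod n for n from 2 to 30 and labels each n; by Wilson's theorem the remainder is n − 1 exactly for the primes (and, as shown in the proof below, 0 for every composite n except 4):

```python
from math import factorial

def is_prime(n):
    # Naive trial division, sufficient for n up to 30.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for n in range(2, 31):
    remainder = factorial(n - 1) % n
    label = "prime" if is_prime(n) else "composite"
    print(f"n = {n:2d}   (n-1)! mod n = {remainder:2d}   {label}")
```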
Proofs
As a biconditional (if and only if) statement, the proof has two halves: to show that the congruence does not hold when n is composite, and to show that it does hold when n is prime.
Composite modulus
Suppose that n is composite. Therefore, it is divisible by some prime number q where 2 ≤ q ≤ n − 1. Because q divides n, there is an integer k such that n = qk. Suppose for the sake of contradiction that (n − 1)! were congruent to −1 modulo n. Then (n − 1)! would also be congruent to −1 modulo q: indeed, if (n − 1)! ≡ −1 (mod n) then (n − 1)! = nm − 1 = qkm − 1 for some integer m, and consequently (n − 1)! is one less than a multiple of q. On the other hand, since 2 ≤ q ≤ n − 1, one of the factors in the expanded product (n − 1)! = (n − 1) × (n − 2) × ⋯ × 2 × 1 is q. Therefore (n − 1)! ≡ 0 (mod q). This is a contradiction; therefore it is not possible that (n − 1)! ≡ −1 (mod n) when n is composite.
In fact, more is true. With the sole exception of the case n = 4, where 3! = 6 ≡ 2 (mod 4), if n is composite then (n − 1)! is congruent to 0 modulo n. The proof can be divided into two cases: First, if n can be factored as the product of two unequal numbers, n = ab, where 2 ≤ a < b ≤ n − 2, then both a and b will appear as factors in the product (n − 1)! and so (n − 1)! is divisible by ab = n. If n has no such factorization, then it must be the square of some prime q larger than 2. But then 2q < q² = n, so both q and 2q will be factors of (n − 1)!, and so n divides (n − 1)! in this case, as well.
Prime modulus
The first two proofs below use the fact that the residue classes modulo a prime number are a finite field—see the article Prime field for more details.
Elementary proof
The result is trivial when p = 2, so assume p is an odd prime, p ≥ 3. Since the residue classes modulo p form a field, every non-zero residue a has a unique multiplicative inverse a⁻¹. Euclid's lemma implies that the only values of a for which a ≡ a⁻¹ (mod p) are a ≡ ±1 (mod p). Therefore, with the exception of ±1, the factors in the expanded form of (p − 1)! can be arranged in disjoint pairs such that the product of each pair is congruent to 1 modulo p. This proves Wilson's theorem.
For example, for p = 11, one has
10! = (1 · 10) · [(2 · 6)(3 · 4)(5 · 9)(7 · 8)] ≡ (−1) · [1 · 1 · 1 · 1] ≡ −1 (mod 11).
Proof using Fermat's little theorem
Again, the result is trivial for p = 2, so suppose p is an odd prime, p ≥ 3. Consider the polynomial
g(x) = (x − 1)(x − 2) ⋯ (x − (p − 1)).
g has degree p − 1, leading term x^(p − 1), and constant term (p − 1)!. Its roots are 1, 2, ..., p − 1.
Now consider
h(x) = x^(p − 1) − 1.
h also has degree p − 1 and leading term x^(p − 1). Modulo p, Fermat's little theorem says it also has the same roots, 1, 2, ..., p − 1.
Finally, consider
f(x) = g(x) − h(x).
f has degree at most p − 2 (since the leading terms cancel), and modulo p also has the roots 1, 2, ..., p − 1. But Lagrange's theorem says it cannot have more than p − 2 roots. Therefore, f must be identically zero (mod p), so its constant term is (p − 1)! + 1 ≡ 0 (mod p). This is Wilson's theorem.
Proof using the Sylow theorems
It is possible to deduce Wilson's theorem from a particular application of the Sylow theorems. Let p be a prime. It is immediate to deduce that the symmetric group S_p has exactly (p − 1)! elements of order p, namely the p-cycles. On the other hand, each Sylow p-subgroup in S_p is a copy of the cyclic group of order p and contains p − 1 of these elements. Hence it follows that the number of Sylow p-subgroups is n_p = (p − 2)!. The third Sylow theorem implies
(p − 2)! ≡ 1 (mod p).
Multiplying both sides by p − 1 gives
(p − 1)! ≡ p − 1 ≡ −1 (mod p),
that is, the result.
Applications
Primality tests
In practice, Wilson's theorem is useless as a primality test because computing (n − 1)! modulo n for large n is computationally complex, and much faster primality tests are known (indeed, even trial division is considerably more efficient).
Used in the other direction, to determine the primality of the successors of large factorials, it is indeed a very fast and effective method. This is of limited utility, however.
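To make the inefficiency concrete, the sketch below (helper names are illustrative) implements both the Wilson test, which needs roughly n modular multiplications, and naive trial division, which needs only about √n divisions; even for a modestly sized prime the gap in running time is large:

```python
import time

def wilson_is_prime(n):
    acc = 1
    for k in range(2, n):
        acc = (acc * k) % n          # keep the partial product reduced mod n
    return n > 1 and acc == n - 1    # Wilson: (n-1)! ≡ -1 ≡ n-1 (mod n) iff n is prime

def trial_division_is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

n = 104729                            # the 10,000th prime
for test in (trial_division_is_prime, wilson_is_prime):
    start = time.perf_counter()
    print(test.__name__, test(n), f"{time.perf_counter() - start:.4f} s")
```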
Quadratic residues
Using Wilson's Theorem, for any odd prime p = 2m + 1, we can rearrange the left hand side of
1 · 2 ⋯ (p − 1) ≡ −1 (mod p)
to obtain the equality
1 · (p − 1) · 2 · (p − 2) ⋯ m · (p − m) ≡ 1 · (−1) · 2 · (−2) ⋯ m · (−m) ≡ −1 (mod p).
This becomes
(−1)^m (m!)² ≡ −1 (mod p),
or
(m!)² ≡ (−1)^(m+1) (mod p).
We can use this fact to prove part of a famous result: for any prime p such that p ≡ 1 (mod 4), the number (−1) is a square (quadratic residue) mod p. For this, suppose p = 4k + 1 for some integer k. Then we can take m = 2k above, and we conclude that (m!)² is congruent to (−1) (mod p).
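A quick numeric check of this conclusion, assuming nothing beyond the statement itself: for a prime p ≡ 1 (mod 4) and m = (p − 1)/2, the residue m! mod p squares to −1 (that is, to p − 1):

```python
def sqrt_of_minus_one(p):
    """For a prime p ≡ 1 (mod 4), return m! mod p, a square root of -1 modulo p."""
    m = (p - 1) // 2
    acc = 1
    for k in range(2, m + 1):
        acc = (acc * k) % p
    return acc

for p in (5, 13, 17, 29):
    r = sqrt_of_minus_one(p)
    print(p, r, (r * r) % p == p - 1)   # the last column is True for each prime
```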
Formulas for primes
Wilson's theorem has been used to construct formulas for primes, but they are too slow to have practical value.
p-adic gamma function
Wilson's theorem allows one to define the p-adic gamma function.
Gauss's generalization
Gauss proved that
∏ k ≡ −1 (mod m) if m = 4, p^α, or 2p^α, and ∏ k ≡ 1 (mod m) otherwise,
where the product ∏ k runs over the positive integers k less than m that are relatively prime to m, p represents an odd prime and α a positive integer. That is, the product of the positive integers less than m and relatively prime to m is one less than a multiple of m when m is equal to 4, or a power of an odd prime, or twice a power of an odd prime; otherwise, the product is one more than a multiple of m. The values of m for which the product is −1 are precisely the ones where there is a primitive root modulo m.
| Mathematics | Modular arithmetic | null |
227407 | https://en.wikipedia.org/wiki/Minnow | Minnow | Minnow is the common name for a number of species of small freshwater fish, belonging to several genera of the family Cyprinidae and in particular the subfamily Leuciscinae. They are also known in Ireland as pinkeens.
While the common name can refer to a range of taxa, smaller fish in the subfamily Leuciscinae are considered by anglers to be "true" minnows.
Types of minnows
Bluntnose minnow (Pimephales notatus): The bluntnose minnow is a primary bait fish for Northern America, and has a very high tolerance for variable water qualities, which helps its distribution throughout many regions. The snout of the bluntnose minnow overhangs the mouth, giving it the bluntnose. There is a dark lateral line which stretches from the opercle to the base of the tail, where a large black spot is located. The average size of the adult is approximately .
Common shiner (Notropis cornutus): These fish are one of the most common types of bait fish and are almost exclusively stream dwellers. The common shiner can be identified by the nine rays on its anal fin and terminal mouth. This minnow is typically bluish silver on the sides and greenish blue on the back, save for breeding season, in which case the male gains a rose-colored tail and anal fin. The shiner grows about 5–10 cm (2–4 in) within one year, reaching its full size at adulthood. Notropis potteri is known as the chub shiner.
Common emerald shiner (Notropis atherinoides atherinoides): Common shiners are most abundant in the Great Lakes of North America, primarily Lake Erie. The name of the emerald shiner comes from the greenish emerald band that expands from the back of the gill cover to the tail. This type of minnow has a short, rounded snout, the only difference between the common emerald shiner and the silver shiner is that the silver shiner has a longer snout and a larger eye. These fish grow to an average length of about 6 cm. This is one of the most common bait fish used in the Lake Erie region of Ohio and many fishermen hold it over all other bait.
Cheat minnow, a species in the genus Pararhinichthys
Cutlips minnow, a species in the genus Exoglossum
Desert minnows, fishes in the genus Dionda
Eurasian minnows, fishes in the genus Phoxinus
Fathead minnow (rosy-red minnow), a species in the genus Pimephales
Loach minnow, a species of the genus Rhinichthys
Balkan minnows, of the genus Pelasgus
Ozark minnow, a species in the genus Notropis
Pikeminnows, fishes in the genus Ptychocheilus
Pugnose minnow, a species in the genus Opsopoeodus
Rhodes minnow, a species in the genus Squalius
Silverjaw minnow/Longjaw minnow, species in the genus Ericymba
Silvery minnows, fishes in the genus Hybognathus
Suckermouth minnows, fishes in the genus Phenacobius
White Cloud Mountain minnow/Vietnamese cardinal minnow, species in the genus Tanichthys
Other fish specifically called minnows include
in the Southern Hemisphere, some fish in the family Galaxiidae, in particular those of genus Galaxias
in Southeast Asia, the danionins, including Razorbelly minnows
the Drakensberg minnow (Labeobarbus aspius) from the Congo Democratic Republic
the Maluti minnow (Pseudobarbus quathlambae) from Lesotho
the Falklands minnow from the Falkland Islands, a vernacular name for the Common galaxias
the pike topminnow (Belonesox belizanus), which is sometimes confused with the northern pike (Esox lucius) and is also called "minnow" because of its small size
the minnows of the deep (Cyclothone sp.), small bioluminescent bristlemouth fish approximately long
As food
While primarily used for bait, minnows can also be eaten directly by humans. Some Native American cultures have used minnows as food. If minnows are small enough, they can be eaten whole.
Threats and conservation issues
Generally, minnows breed with the slightest rainfall and within a wide temperature range. Contrary to long-standing presumptions, climate change poses a 'negligible' threat to minnows' reproduction. Minnows are also flexible in attaining pre-spawning fitness, which allows them to avoid 'skipped spawning' decisions while facing climatic variability.
| Biology and health sciences | Cypriniformes | Animals |
227478 | https://en.wikipedia.org/wiki/Germination | Germination | Germination is the process by which an organism grows from a seed or spore. The term is applied to the sprouting of a seedling from a seed of an angiosperm or gymnosperm, the growth of a sporeling from a spore, such as the spores of fungi, ferns and bacteria, and the growth of the pollen tube from the pollen grain of a seed plant.
Seed plants
Germination is usually the growth of a plant contained within a seed resulting in the formation of the seedling. It is also the process of reactivation of metabolic machinery of the seed resulting in the emergence of radicle and plumule. The seed of a vascular plant is a small package produced in a fruit or cone after the union of male and female reproductive cells. All fully developed seeds contain an embryo and, in most plant species some store of food reserves, wrapped in a seed coat. Dormant seeds are viable seeds that do not germinate because they require specific internal or environmental stimuli to resume growth. Under proper conditions, the seed begins to germinate and the embryo resumes growth, developing into a seedling.
Disturbance of soil can result in vigorous plant growth by exposing seeds already in the soil to changes in environmental factors where germination may have previously been inhibited by depth of the seeds or soil that was too compact. This is often observed at gravesites after a burial.
Seed germination depends on both internal and external conditions. The most important external factors include right temperature, water, oxygen or air and sometimes light or darkness. Various plants require different variables for successful seed germination. Often this depends on the individual seed variety and is closely linked to the ecological conditions of a plant's natural habitat. For some seeds, their future germination response is affected by environmental conditions during seed formation; most often these responses are types of seed dormancy.
Water is required for germination. Mature seeds are often extremely dry and need to take in significant amounts of water, relative to the dry weight of the seed, before cellular metabolism and growth can resume. Most seeds need enough water to moisten the seeds but not enough to soak them. The uptake of water by seeds is called imbibition, which leads to the swelling and the breaking of the seed coat. When seeds are formed, most plants store a food reserve with the seed, such as starch, proteins, or oils. This food reserve provides nourishment to the growing embryo. When the seed imbibes water, hydrolytic enzymes are activated which break down these stored food resources into metabolically useful chemicals. After the seedling emerges from the seed coat and starts growing roots and leaves, the seedling's food reserves are typically exhausted; at this point photosynthesis provides the energy needed for continued growth and the seedling now requires a continuous supply of water, nutrients, and light.
Oxygen is required by the germinating seed for metabolism. Oxygen is used in aerobic respiration, the main source of the seedling's energy until it grows leaves. Oxygen is an atmospheric gas that is found in soil pore spaces; if a seed is buried too deeply within the soil or the soil is waterlogged, the seed can be oxygen starved. Some seeds have impermeable seed coats that prevent oxygen from entering the seed, causing a type of physical dormancy which is broken when the seed coat is worn away enough to allow gas exchange and water uptake from the environment.
In a small number of plants, such as rice, anaerobic germination can occur in waterlogged conditions. The seed produces a hollow coleoptile that acts like a 'snorkel', providing the seed with access to oxygen.
Temperature affects cellular metabolic and growth rates. Seeds from different species and even seeds from the same plant germinate over a wide range of temperatures. Seeds often have a temperature range within which they will germinate, and they will not do so above or below this range. Many seeds germinate at temperatures slightly above 60–75 °F (16–24 °C) [room temperature in centrally heated houses], while others germinate just above freezing and others germinate only in response to alternations in temperature between warm and cool. Some seeds germinate when the soil is cool, 28–40 °F (−2 to 4 °C), and some when the soil is warm, 76–90 °F (24–32 °C). Some seeds require exposure to cold temperatures (vernalization) to break dormancy. Some seeds in a dormant state will not germinate even if conditions are favorable. Seeds that are dependent on temperature to end dormancy have a type of physiological dormancy. For example, seeds requiring the cold of winter are inhibited from germinating until they take in water in the fall and experience cooler temperatures. Cold stratification is a process that induces dormancy breaking prior to the light exposure that promotes germination. Four degrees Celsius is cool enough to end dormancy for most cool-dormant seeds, but some groups, especially within the family Ranunculaceae and others, need conditions cooler than −5 °C. Some seeds will only germinate after hot temperatures during a forest fire which cracks their seed coats; this is a type of physical dormancy.
Most common annual vegetables have optimal germination temperatures between 75 and 90 °F (24–32 °C), though many species (e.g. radishes or spinach) can germinate at significantly lower temperatures, as low as 40 °F (4 °C), thus allowing them to be grown from seeds in cooler climates. Suboptimal temperatures lead to lower success rates and longer germination periods.
Light or darkness can be an environmental trigger for germination and is a type of physiological dormancy. Most seeds are not affected by light or darkness, but many photoblastic seeds, including species found in forest settings, will not germinate until an opening in the canopy allows sufficient light for the growth of the seedling.
Scarification mimics natural processes that weaken the seed coat before germination. In nature, some seeds require particular conditions to germinate, such as the heat of a fire (e.g., many Australian native plants), or soaking in a body of water for a long period of time. Others need to be passed through an animal's digestive tract to weaken the seed coat enough to allow the seedling to emerge.
Dormancy
Some live seeds are dormant and need more time, and/or need to be subjected to specific environmental conditions before they will germinate. Seed dormancy can originate in different parts of the seed, for example, within the embryo; in other cases the seed coat is involved. Dormancy breaking often involves changes in membranes, initiated by dormancy-breaking signals. This generally occurs only within hydrated seeds. Factors affecting seed dormancy include the presence of certain plant hormones, notably abscisic acid, which inhibits germination, and gibberellin, which ends seed dormancy. In brewing, barley seeds are treated with gibberellin to ensure uniform seed germination for the production of barley malt.
Seedling establishment
In some definitions, the appearance of the radicle marks the end of germination and the beginning of "establishment", a period that utilizes the food reserves stored in the seed. Germination and establishment as an independent organism are critical phases in the life of a plant when they are the most vulnerable to injury, disease, and water stress. The germination index can be used as an indicator of phytotoxicity in soils. The mortality between dispersal of seeds and completion of the establishment can be so high that many species have adapted to produce large numbers of seeds.
Germination rate and germination capacity
In agriculture and gardening, the germination rate describes how many seeds of a particular plant species, variety or seedlot are likely to germinate over a given period. It is a measure of germination time course and is usually expressed as a percentage, e.g., an 85% germination rate indicates that about 85 out of 100 seeds will probably germinate under proper conditions over the germination period given. Seed germination rate is determined by the seed genetic composition, morphological features and environmental factors. The germination rate is useful for calculating the number of seeds needed for a given area or desired number of plants. For seed physiologists and seed scientists "germination rate" is the reciprocal of time taken for the process of germination to complete starting from time of sowing. On the other hand, the number of seed able to complete germination in a population (i.e. seed lot) is referred to as germination capacity.
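A minimal sketch of that calculation (the function name and the 85% figure are illustrative only): divide the desired number of plants by the expected germination rate and round up.

```python
import math

def seeds_needed(desired_plants, germination_rate):
    """Estimate how many seeds to sow for a desired number of plants."""
    return math.ceil(desired_plants / germination_rate)

print(seeds_needed(100, 0.85))   # about 118 seeds for roughly 100 seedlings
```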
Soil salinity is one of the stress factors that can limit the germination rate. Environmental stress activates some stress-related activities [CuZn-superoxide dismutase (SOD), Mn-SOD, L-ascorbate oxidase (AO), DNA polymerase Delta 1 (POLD)-1, Chaperon (CHAPE) and heat shock protein (HSP)-21], genetic template stability and photosynthetic pigment activation. Application of exogenous glutamine limits this process. Research carried out on onion seeds shows a reduction in the mean germination time, an increase in the coefficient of germination velocity, the germination index and germination percentage after administration of exogenous glutamine to plants.
Repair of DNA damage
Seed quality deteriorates with age, and this is associated with accumulation of genome damage. During germination, repair processes are activated to deal with accumulated DNA damage. In particular, single- and double-strand breaks in DNA can be repaired. The DNA damage checkpoint kinase ATM has a major role in integrating progression through germination with repair responses to the DNA damages accumulated by the aged seed.
Dicot germination
The part of the plant that first emerges from the seed is the embryonic root, termed the radicle or primary root. It allows the seedling to become anchored in the ground and start absorbing water. After the root absorbs water, an embryonic shoot emerges from the seed. This shoot comprises three main parts: the cotyledons (seed leaves), the section of shoot below the cotyledons (hypocotyl), and the section of shoot above the cotyledons (epicotyl). The way the shoot emerges differs among plant groups.
Epigeal
Epigeal germination (or epigeous germination) is a botanical term indicating that the germination takes place above the ground. In epigeal germination, the hypocotyl elongates and forms a hook, pulling rather than pushing the cotyledons and apical meristem through the soil. Once it reaches the surface, it straightens and pulls the cotyledons and shoot tip of the growing seedlings into the air. Beans, tamarind, and papaya are examples of plants that germinate this way.
Hypogeal
Germination can also be done by hypogeal germination (or hypogeous germination), where the epicotyl elongates and forms the hook. In this type of germination, the cotyledons stay underground where they eventually decompose. Peas, chickpeas and mango, for example, germinate this way.
Monocot germination
In monocot seeds, the embryo's radicle and cotyledon are covered by a coleorhiza and coleoptile, respectively. The coleorhiza is the first part to grow out of the seed, followed by the radicle. The coleoptile is then pushed up through the ground until it reaches the surface. There, it stops elongating and the first leaves emerge.
Precocious germination
When a seed germinates without undergoing all four stages of seed development, i.e., globular, heart shape, torpedo shape, and cotyledonary stage, it is known as precocious germination.
Pollen germination
Another germination event during the life cycle of gymnosperms and flowering plants is the germination of a pollen grain after pollination. Like seeds, pollen grains are severely dehydrated before being released to facilitate their dispersal from one plant to another. They consist of a protective coat containing several cells (up to 8 in gymnosperms, 2–3 in flowering plants). One of these cells is a tube cell. Once the pollen grain lands on the stigma of a receptive flower (or a female cone in gymnosperms), it takes up water and germinates. Pollen germination is facilitated by hydration on the stigma, as well as by the structure and physiology of the stigma and style. Pollen can also be induced to germinate in vitro (in a petri dish or test tube).
During germination, the tube cell elongates into a pollen tube. In the flower, the pollen tube then grows towards the ovule where it discharges the sperm produced in the pollen grain for fertilization. The germinated pollen grain with its two sperm cells is the mature male microgametophyte of these plants.
Self-incompatibility
Since most plants carry both male and female reproductive organs in their flowers, there is a high risk of self-pollination and thus inbreeding. Some plants use the control of pollen germination as a way to prevent this self-pollination. Germination and growth of the pollen tube involve molecular signaling between stigma and pollen. In self-incompatibility in plants, the stigma of certain plants can molecularly recognize pollen from the same plant and prevent it from germinating.
Spore germination
Germination can also refer to the emergence of cells from resting spores and the growth of sporeling hyphae or thalli from spores in fungi, algae and some plants.
Conidia are asexual reproductive (reproduction without the fusing of gametes) spores of fungi which germinate under specific conditions. A variety of cells can be formed from the germinating conidia. The most common are germ tubes which grow and develop into hyphae. The initial formation and subsequent elongation of the germ tube in the fungus Aspergillus niger has been captured in 3D using holotomography microscopy. Another type of cell is a conidial anastomosis tube (CAT); these differ from germ tubes in that they are thinner, shorter, lack branches, exhibit determinate growth and home toward each other. Each cell is of a tubular shape, but the conidial anastomosis tube forms a bridge that allows fusion between conidia.
Resting spores
In resting spores, germination involves cracking the thick cell wall of the dormant spore. For example, in zygomycetes the thick-walled zygosporangium cracks open and the zygospore inside gives rise to the emerging sporangiophore. In slime molds, germination refers to the emergence of amoeboid cells from the hardened spore. After cracking the spore coat, further development involves cell division, but not necessarily the development of a multicellular organism (for example in the free-living amoebas of slime molds).
Ferns and mosses
In plants such as bryophytes, ferns, and a few others, spores germinate into independent gametophytes. In the bryophytes (e.g., mosses and liverworts), spores germinate into protonemata, similar to fungal hyphae, from which the gametophyte grows. In ferns, the gametophytes are small, heart-shaped prothalli that can often be found underneath a spore-shedding adult plant.
Bacteria
Bacterial spores can be exospores or endospores which are dormant structures produced by a number of different bacteria. They have no or very low metabolic activity and are formed in response to adverse environmental conditions. They allow survival and are not a form of reproduction. Under suitable conditions the spore germinates to produce a viable bacterium. Endospores are formed inside the mother cell, whereas exospores are formed at the end of the mother cell as a bud.
Light-stimulated germination
As mentioned earlier, light can be an environmental factor that stimulates the germination process. The seed needs to be able to determine when the perfect time to germinate is, and it does that by sensing environmental cues. Once germination starts, the stored nutrients that have accumulated during maturation start to be digested, which then supports cell expansion and overall growth. Within light-stimulated germination, phytochrome B (PHYB) is the photoreceptor that is responsible for the beginning stages of germination. When red light is present, PHYB is converted to its active form and moves from the cytoplasm to the nucleus, where it upregulates the degradation of PIF1. PIF1, phytochrome-interacting factor 1, negatively regulates germination by increasing the expression of proteins that repress the synthesis of gibberellin (GA), a major hormone in the germination process. Another factor that promotes germination is HFR1, which accumulates in light in some way and forms inactive heterodimers with PIF1.
Although the exact mechanism is not known, nitric oxide (NO) plays a role in this pathway as well. NO is thought to repress PIF1 gene expression and to stabilise HFR1 in some way to support the start of germination. Bethke et al. (2006) exposed dormant Arabidopsis seeds to NO gas, and within the next 4 days 90% of the seeds broke dormancy and germinated. The authors also looked at how NO and GA affect the vacuolation process of aleurone cells that allows the movement of nutrients to be digested. A NO mutant resulted in inhibition of vacuolation, but when GA was later added the process was active again, leading to the belief that NO acts before GA in the pathway. NO may also lead to a decrease in sensitivity to abscisic acid (ABA), a plant hormone largely responsible for seed dormancy. The balance between GA and ABA is important. When ABA levels are higher than GA, seeds remain dormant, and when GA levels are higher, seeds germinate. The switch between seed dormancy and germination needs to occur at a time when the seed has the best chances of surviving, and an important cue that begins the process of seed germination and overall plant growth is light.
| Biology and health sciences | Plant reproduction | Biology |
227500 | https://en.wikipedia.org/wiki/Seedbed | Seedbed | A seedbed or seedling bed is the local soil environment in which seeds are planted. Often, it comprises not only the soil but also a specially prepared cold frame, hotbed or raised bed used to grow the seedlings in a controlled environment into larger young plants before transplanting them into a garden or field. A seedling bed increases the number of seeds that germinate.
Soil type
The soil of a seedbed needs to be loose and smoothed, without large lumps. These traits are needed so that seeds can be planted easily, and at a specific depth for best germination. Large lumps and uneven surface would tend to make the planting depth random. Many types of seedlings also need loose soil with minimal rocky content for best conditions to grow their roots. (For example, carrots grown in rocky soil will tend not to grow straight.)
Seedbed preparation
Seedbed preparation in farm fields often involves secondary tillage via harrows and cultivators. This may follow primary tillage (if any) by moldboard plows or chisel plows. No-till farming methods avoid tillage for seedbed preparation as well as later weed control.
Seedbed preparation in gardens often involves secondary tillage via hand tools such as rakes and hoes. This may follow primary tillage (if any) by shovels, picks, or mattocks. Rotary tillers provide a powered alternative that takes care of both primary and secondary tillage.
The preparation of a seedbed may include:
The removal of debris. Insect eggs and disease spores are often found in plant debris and so this is removed from the plot. Stones and larger debris will also physically prevent the seedlings from growing.
Levelling. The site will have been levelled for even drainage.
Breaking up the soil. Compacted soil will be broken up by digging. This allows air and water to enter, and helps the seedling penetrate the soil. Smaller seeds require a finer soil structure. The surface of the soil can be broken down into a fine granular structure using a tool such as a rake.
Soil improvement. The soil structure may be improved by the introduction of organic matter such as compost or peat.
Fertilizing. The nitrate and phosphate levels of the soil can be adjusted with fertilizer. If the soil is deficient in any micro nutrients, these too can be added.
The seedlings may be left to grow into adult plants.
| Technology | Buildings and infrastructure | null |
227533 | https://en.wikipedia.org/wiki/Hornet | Hornet | Hornets (insects in the genus Vespa) are the largest of the eusocial wasps, and are similar in appearance to yellowjackets, their close relatives. Some species can reach up to 5.5 cm (2.2 in) in length. They are distinguished from other vespine wasps by the relatively large top margin of the head. Worldwide, 22 species of Vespa are recognized. Most species only occur in the tropics of Asia, though the European hornet (V. crabro) is widely distributed throughout Europe, Russia, North America, and north-eastern Asia. Wasps native to North America in the genus Dolichovespula are commonly referred to as hornets (e.g., baldfaced hornets), but all of them are actually yellowjackets.
Like other social wasps, hornets build communal nests by chewing wood to make a papery pulp. Each nest has one queen, which lays eggs and is attended by workers that, while genetically female, cannot lay fertile eggs. Most species make exposed nests in trees and shrubs, but some (such as Vespa orientalis) build their nests underground or in other cavities. In the tropics, these nests may last year-round, but in temperate areas, the nest dies over the winter, with lone queens hibernating in leaf litter or other insulative material until the spring. Male hornets are docile and do not have stingers.
Hornets are often considered pests because they aggressively guard their nesting sites when threatened and their stings can be more dangerous than those of bees.
Classification
While taxonomically well defined, some confusion may remain about the differences between hornets and other wasps of the family Vespidae, specifically the yellowjackets, which are members of the same subfamily. Also, a related genus of Asian nocturnal vespines, Provespa, is referred to as "night wasps" or "night hornets", though they are not true hornets.
Some other large wasps are sometimes referred to as hornets, most notably the bald-faced hornet (Dolichovespula maculata) found in North America. It is set apart by its black and ivory coloration. The name "hornet" is used for this species primarily because of its habit of making aerial nests (similar to some of the true hornets) rather than subterranean nests. Another example is the Australian hornet (Abispa ephippium), which is actually a species of potter wasp.
Distribution
Hornets are found mainly in the Northern Hemisphere. The European hornet (V. crabro) is the best-known species, widely distributed in Europe (but is never found north of the 63rd parallel), and European Russia (except in extreme northern areas). In the east, the species' distribution area stretches over the Ural Mountains to western Siberia (found in the vicinity of Khanty-Mansiysk). In Asia, the European hornet is found in southern Siberia, as well as in eastern China. The European hornet was accidentally introduced to eastern North America about the middle of the 19th century and has lived there since at about the same latitudes as in Europe. However, it has never been found in western North America.
The Asian giant hornet (V. mandarinia) lives in the Primorsky Krai, Khabarovsky Krai (southern part), and Jewish Autonomous Oblast regions of Russia, and China, Korea, Taiwan, Cambodia, Laos, Vietnam, Indochina, India, Nepal, Sri Lanka, and Thailand, but is most commonly found in the mountains of Japan, where they are commonly known as the giant sparrow bee.
The Oriental hornet (V. orientalis) occurs in semidry, subtropical areas of central Asia (Azerbaijan, Armenia, Dagestan in Russia, Iran, Afghanistan, Oman, Pakistan, Bangladesh, Turkmenistan, Uzbekistan, Tajikistan, Kyrghyzstan, southern Kazakhstan), and southern Europe (Italy, Malta, Albania, Romania, Turkey, Greece, Bulgaria, Cyprus).
The Asian hornet (V. velutina) has been introduced to France, Spain, Portugal, Italy, and the United Kingdom.
Stings
Hornets have stingers used to kill prey and defend nests. Hornet stings are more painful to humans than typical wasp stings because hornet venom contains a large amount (5%) of acetylcholine. Individual hornets can sting repeatedly. Unlike honey bees, hornets do not die after stinging because their stingers are very finely barbed (only visible under high magnification) and can easily be withdrawn, so are not pulled out of their bodies when disengaging.
The toxicity of hornet stings varies according to hornet species; some deliver just a typical insect sting, while others are among the most venomous known insects. Single hornet stings are not in themselves fatal, except sometimes to allergic victims. Multiple stings by hornets (other than V. crabro) may be fatal because of highly toxic species-specific components of their venom.
The stings of the Asian giant hornet (V. mandarinia) are among the most venomous known, and are thought to cause 30–50 human deaths annually in Japan. Between July and September 2013, hornet stings caused the death of 42 people in China. Asian giant hornet's venom can cause allergic reactions and multiple organ failure leading to death, though dialysis can be used to remove the toxins from the bloodstream. As with other wasps, death due to a single sting on the skin only occurs when an allergy is present, and serious outcomes with Asian giant hornet stings in China and Japan are only documented with many stings or anaphylactic shock due to an existing allergy.
People who are allergic to wasp venom may also be allergic to hornet stings. Allergic reactions are commonly treated with epinephrine (adrenaline) injection using a device such as an epinephrine autoinjector, with prompt follow-up treatment in a hospital. In severe cases, allergic individuals may go into anaphylactic shock and die unless treated promptly. In general, Vespa stings induce the release of histamine due to the various mastoparans that they contain. However, V. orientalis mastoparan is an interesting exception because it does not induce a histamine increase in victim tissue – because it does not cause mast cell degranulation – and is not immunogenic.
Attack pheromone
Hornets, like many social wasps, can mobilize the entire nest to sting in defense, which is highly dangerous to humans and other animals. The attack pheromone is released in case of threat to the nest. In the case of the Asian giant hornet (V. mandarinia), this is also used to mobilize many workers at once when attacking colonies of their prey, honey bees and other Vespa species. Three biologically active chemicals, 2-pentanol, isoamyl alcohol, and 1-methylbutyl 3-methylbutanoate, have been identified for this species. In field tests, 2-pentanol alone triggered mild alarm and defensive behavior, but adding the other two compounds increased aggressiveness in a synergistic effect. In the European hornet (V. crabro) the major compound of the alarm pheromone is 2-methyl-3-butene-2-ol.
If a hornet is killed near a nest, it may release pheromones that can cause the other hornets to attack. Materials that come into contact with these pheromones, such as clothes, skin, and dead prey or hornets, can also trigger an attack, as can certain food flavorings, such as banana and apple flavorings, and fragrances that contain C5 alcohols and C esters.
Life cycle
In V. crabro, the nest is founded in spring by a fertilized female known as the queen. She generally selects sheltered places such as dark, hollow tree trunks. She first builds a series of cells (up to 50) out of chewed tree bark. The cells are arranged in horizontal layers named combs, each cell being vertical and closed at the top. An egg is then laid in each cell. After 5–8 days, the egg hatches. Over the following two weeks, the larva progresses through five stages of development. During this time, the queen feeds it a protein-rich diet of insects. Then, the larva spins a silk cap over the cell's opening, and during the next two weeks, transforms into an adult, a process called metamorphosis. The adult then eats its way through the silk cap. This first generation of workers, invariably females, now gradually undertakes all the tasks formerly carried out by the queen (foraging, nest building, taking care of the brood, etc.) with the exception of egg-laying, which remains exclusive to the queen.
As the colony size grows, new combs are added, and an envelope is built around the cell layers until the nest is entirely covered, with the exception of an entry hole. To build cells in total darkness, they apparently use gravity to aid them. At the peak of its population, which occurs in late summer, the colony can reach a size of 700 workers.
At this time, the queen starts producing the first reproductive individuals. Fertilized eggs develop into females (called "gynes" by entomologists), and unfertilized ones develop into males (sometimes called "drones" as with honeybee drones). Adult males do not participate in nest maintenance, foraging, or caretaking of the larvae. In early to mid-autumn, they leave the nest and mate during "nuptial flights".
Other temperate species (e.g., the yellow hornet, V. simillima, or the Oriental hornet, V. orientalis) have similar cycles. In the case of tropical species (e.g., V. tropica), life histories may well differ, and in species with both tropical and temperate distributions (such as the Asian giant hornet, V. mandarinia), the cycle likely depends on latitude.
Diet and feeding
Adult hornets and their relatives (e.g., yellowjackets) feed themselves with nectar and sugar-rich plant foods. Thus, they can often be found feeding on the sap of oak trees, rotting sweet fruits, honey, and any sugar-containing foodstuffs. Hornets frequently fly into orchards to feed on overripe fruit, and tend to gnaw a hole in fruit to become totally immersed in its pulp. A person who accidentally picks fruit with a feeding hornet can be attacked by the disturbed insect.
The adults also attack various insects, which they kill with stings and jaws. Due to their size and the power of their venom, hornets can kill large insects such as honey bees, grasshoppers, locusts, and katydids without difficulty. The victim is fully masticated and then fed to the larvae developing in the nest, rather than consumed by the adult hornets. Some of their prey being considered pests, hornets may be considered beneficial under some circumstances.
The larvae of hornets produce a sweet secretion containing sugars and amino acids that workers and queens consume.
Predatory strategies
Hornets' ability to prey upon honey bees is favored by a number of adaptations. Vespa have a larger body size compared to their prey, a heavy exoskeleton to resist bee attacks, and strong mandibles and a venomous sting. As concerns hornet hunting strategies, it has been demonstrated that some species, such as V. tropica and V. velutina, can use both visual and olfactory cues for the long-range detection of honey bee colonies. Foragers of V. tropica can readily associate color and shape with potential food sources and exhibit color generalization. V. velutina foragers visually distinguish between bee dummy bait and cotton ball dummy bait, both treated with bee odors, preferring bee dummies. Foraging hornets are also selectively attracted to honey bee colony odors, in particular honey and pollen, as well as honey bee pheromones, which may signal a high prey density. In laboratory assays, workers of V. velutina oriented especially towards geraniol, a component of the honey bee worker aggregation pheromone, which could therefore represent an honest signal for hornets. Behavioral, chemical and electrophysiological analyses have also demonstrated that Vespa bicolor is attracted to (Z)-11-eicosen-1-ol, which is a major compound in the alarm pheromones of both Asian (Apis cerana) and European (Apis mellifera) honey bees, and its antennae respond to this compound. Intriguingly, this hornet attraction to honey bee pheromone is also exploited by the orchid Dendrobium christyanum, which mimics the honey bee alarm pheromone in its flowers' scent to attract hornets that visit and pollinate the flowers. Bee-hunting hornets therefore likely visit the non-rewarding flowers in search of prey.
Species
While a history of recognizing subspecies exists within many of the Vespa species, the most recent taxonomic revision of the genus treats all subspecific names in the genus Vespa as synonyms, effectively relegating them to no more than informal names for regional color forms.
Vespa affinis
Vespa analis
Vespa basalis
Vespa bellicosa
Vespa bicolor
Vespa bilineata
Vespa binghami
Vespa ciliata
Vespa cordifera
Vespa crabro
Vespa crabroniformis
Vespa dasypodia
Vespa ducalis
Vespa dybowskii
Vespa fervida
Vespa fumida
Vespa luctuosa
Vespa mandarinia
Vespa mocsaryana
Vespa multimaculata
Vespa nigra
Vespa orientalis
Vespa philippinensis
Vespa picea
Vespa simillima
Vespa soror
Vespa tortonica
Vespa tropica
Vespa velutina
Vespa vivax
Notable species
Asian giant hornet (V. mandarinia) (one of its color forms is also known as the Japanese giant hornet)
Asian hornet (V. velutina) (also known as the yellow-legged hornet or Asian predatory wasp)
Black hornet (V. dybowskii)
Black-bellied hornet (V. basalis)
Black shield wasp (V. bicolor)
European hornet (V. crabro) (also known as the Old World hornet or brown hornet)
Greater banded hornet (V. tropica)
Lesser banded hornet (V. affinis)
Oriental hornet (V. orientalis)
Yellow hornet (V. simillima) (one of its color forms is also known as the Japanese yellow hornet or Japanese hornet)
Vespa luctuosa (a species which has the most lethal wasp venom per volume)
As food and medicine
Hornet larvae are widely eaten in mountainous regions of China. Hornets and their nests are used as medicine in traditional Chinese medicine.
| Biology and health sciences | Hymenoptera | null |
227675 | https://en.wikipedia.org/wiki/Vascular%20cambium | Vascular cambium | The vascular cambium is the main growth tissue in the stems and roots of many plants, specifically in dicots such as buttercups and oak trees, gymnosperms such as pine trees, as well as in certain other vascular plants. It produces secondary xylem inwards, towards the pith, and secondary phloem outwards, towards the bark.
In herbaceous plants, it occurs in the vascular bundles which are often arranged like beads on a necklace forming an interrupted ring inside the stem. In woody plants, it forms a cylinder of unspecialized meristem cells, as a continuous ring from which the new tissues are grown. Unlike the xylem and phloem, it does not transport water, minerals or food through the plant. Other names for the vascular cambium are the main cambium, wood cambium, or bifacial cambium.
Occurrence
Vascular cambia are found in all seed plants except for five angiosperm lineages that have independently lost it: Nymphaeales, Ceratophyllum, Nelumbo, Podostemaceae, and monocots. In dicot and gymnosperm trees, the vascular cambium is the obvious line separating the bark and wood; they also have a cork cambium. For successful grafting, the vascular cambia of the rootstock and scion must be aligned so they can grow together.
Structure and function
The cambium present between primary xylem and primary phloem is called the intrafascicular cambium (within vascular bundles). During secondary growth, cells of medullary rays, in a line (as seen in section; in three dimensions, it is a sheet) between neighbouring vascular bundles, become meristematic and form new interfascicular cambium (between vascular bundles). The fascicular and interfascicular cambia thus join up to form a ring (in three dimensions, a tube) which separates the primary xylem and primary phloem, the cambium ring. The vascular cambium produces secondary xylem on the inside of the ring, and secondary phloem on the outside, pushing the primary xylem and phloem apart.
The vascular cambium usually consists of two types of cells:
Fusiform initials (tall, axially oriented)
Ray initials (smaller and round to angular in shape)
Maintenance of cambial meristem
The vascular cambium is maintained by a network of interacting signal feedback loops. Currently, both hormones and short peptides have been identified as information carriers in these systems. While similar regulation occurs in other plant meristems, the cambial meristem receives signals from both the xylem and phloem sides of the meristem. Signals received from outside the meristem act to downregulate internal factors, which promotes cell proliferation and differentiation.
Hormonal regulation
The phytohormones involved in vascular cambial activity include auxins, ethylene, gibberellins, cytokinins, abscisic acid, and probably others yet to be discovered. Each of these plant hormones is vital for the regulation of cambial activity, and the combination of their different concentrations is very important in plant metabolism.
Auxins have been shown to stimulate mitosis and cell production and to regulate the interfascicular and fascicular cambium. Applying auxin to the surface of a tree stump allowed decapitated shoots to continue secondary growth. The absence of auxin has a detrimental effect on a plant: mutants lacking auxin exhibit increased spacing between the interfascicular cambia and reduced growth of the vascular bundles. Such a mutant plant therefore experiences a decrease in the water, nutrients, and photosynthates transported throughout the plant, eventually leading to death. Auxin also regulates the two types of cell in the vascular cambium, the ray and fusiform initials. Regulation of these initials maintains the connection and communication between xylem and phloem for the translocation of nourishment and ensures that sugars are safely stored as an energy resource. Ethylene levels are high in plants with an active cambial zone and are still being studied. Gibberellin stimulates cambial cell division and also regulates differentiation of the xylem tissues, with no effect on the rate of phloem differentiation. Differentiation is an essential process that changes these tissues into a more specialized type, so it plays an important role in maintaining the life form of a plant. In poplar trees, high concentrations of gibberellin are positively correlated with an increase of cambial cell division and an increase of auxin in the cambial stem cells. Gibberellin is also responsible for the expansion of xylem through a signal traveling from the shoot to the root. Cytokinin is known to regulate the rate of cell division rather than the direction of cell differentiation. One study found that the mutants showed reduced stem and root growth, but that the secondary vascular pattern of the vascular bundles was not affected by cytokinin treatment.
Cambium as food
The cambium of most trees is edible. In Scandinavia, it was historically used as a flour to make bark bread.
| Biology and health sciences | Plant tissues | null |
227682 | https://en.wikipedia.org/wiki/Meristem | Meristem | In cell biology, the meristem is a type of tissue found in plants. It consists of undifferentiated cells (meristematic cells) capable of cell division. Cells in the meristem can develop into all the other tissues and organs that occur in plants. These cells continue to divide until they become differentiated and lose the ability to divide.
Differentiated plant cells generally cannot divide or produce cells of a different type. Meristematic cells are undifferentiated or incompletely differentiated. They are totipotent and capable of continued cell division. Division of meristematic cells provides new cells for expansion and differentiation of tissues and the initiation of new organs, providing the basic structure of the plant body. The cells are small, with small vacuoles or none, and protoplasm filling the cell completely. The plastids (chloroplasts or chromoplasts) are undifferentiated, but are present in rudimentary form (proplastids). Meristematic cells are packed closely together without intercellular spaces. The cell wall is a very thin primary cell wall.
The term meristem was first used in 1858 by Swiss botanist Carl Wilhelm von Nägeli (1817–1891) in his book Beiträge zur wissenschaftlichen Botanik ("Contributions to Scientific Botany"). It is derived from the Greek merizein, meaning "to divide", in recognition of its inherent function.
There are three types of meristematic tissues: apical (at the tips), intercalary or basal (in the middle), and lateral (at the sides also known as cambium). At the meristem summit, there is a small group of slowly dividing cells, which is commonly called the central zone. Cells of this zone have a stem cell function and are essential for meristem maintenance. The proliferation and growth rates at the meristem summit usually differ considerably from those at the periphery.
Primary meristems
Apical meristems give rise to the primary plant body and are responsible for primary growth, or an increase in length or height. Apical meristems may differentiate into three kinds of primary meristem:
Protoderm: lies around the outside of the stem and develops into the epidermis.
Procambium: lies just inside of the protoderm and develops into primary xylem and primary phloem. It also produces the vascular cambium, and cork cambium (secondary meristems). The cork cambium further differentiates into the phelloderm (to the inside) and the phellem, or cork (to the outside). All three of these layers (cork cambium, phellem, and phelloderm) constitute the periderm. In roots, the procambium can also give rise to the pericycle, which produces lateral roots in eudicots.
Ground meristem: Composed of parenchyma, collenchyma and sclerenchyma cells that develop into the cortex and the pith.
Secondary meristems
After the primary growth, lateral meristems develop as secondary plant growth. This growth adds to the plant in diameter from the established stem but not all plants exhibit secondary growth. There are two types of secondary meristems: the vascular cambium and the cork cambium.
Vascular cambium, which produces secondary xylem and secondary phloem. This is a process that may continue throughout the life of the plant. This is what gives rise to wood in plants. Such plants are called arboraceous. This does not occur in plants that do not go through secondary growth (known as herbaceous plants).
Cork cambium, which gives rise to the periderm, which replaces the epidermis.
Apical meristems
Apical meristems are the completely undifferentiated (indeterminate) meristems in a plant. These differentiate into three kinds of primary meristems. The primary meristems in turn produce the two secondary meristem types. These secondary meristems are also known as lateral meristems because they are involved in lateral growth.
There are two types of apical meristem tissue: shoot apical meristem (SAM), which gives rise to organs like the leaves and flowers, and root apical meristem (RAM), which provides the meristematic cells for future root growth. SAM and RAM cells divide rapidly and are considered indeterminate, in that they do not possess any defined end status. In that sense, the meristematic cells are frequently compared to the stem cells in animals, which have an analogous behavior and function.
The apical meristems are layered where the number of layers varies according to plant type. In general the outermost layer is called the tunica while the innermost layers are the corpus. In monocots, the tunica determines the physical characteristics of the leaf edge and margin. In dicots, layer two of the corpus determines the characteristics of the edge of the leaf. The corpus and tunica play a critical part of the plant physical appearance as all plant cells are formed from the meristems. Apical meristems are found in two locations: the root and the stem. Some arctic plants have an apical meristem in the lower/middle parts of the plant. It is thought that this kind of meristem evolved because it is advantageous in arctic conditions.
Shoot apical meristems
Shoot apical meristems are the source of all above-ground organs, such as leaves and flowers. Cells at the shoot apical meristem summit serve as stem cells to the surrounding peripheral region, where they proliferate rapidly and are incorporated into differentiating leaf or flower primordia.
The shoot apical meristem is the site of most of the embryogenesis in flowering plants. Primordia of leaves, sepals, petals, stamens, and ovaries are initiated here, one arising per time interval, called a plastochron. It is where the first indications that flower development has been evoked are manifested. One of these indications might be the loss of apical dominance and the release of otherwise dormant cells to develop as axillary shoot meristems, in some species in the axils of primordia as close as two or three away from the apical dome.
The shoot apical meristem consists of four distinct cell groups:
Stem cells
The immediate daughter cells of the stem cells
A subjacent organizing center
Founder cells for organ initiation in surrounding regions
These four distinct zones are maintained by a complex signalling pathway. In Arabidopsis thaliana, 3 interacting CLAVATA genes are required to regulate the size of the stem cell reservoir in the shoot apical meristem by controlling the rate of cell division. CLV1 and CLV2 are predicted to form a receptor complex (of the LRR receptor-like kinase family) to which CLV3 is a ligand. CLV3 shares some homology with the ESR proteins of maize, with a short 14 amino acid region being conserved between the proteins. Proteins that contain these conserved regions have been grouped into the CLE family of proteins.
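The idea of a short region conserved between CLV3 and the ESR/CLE proteins can be illustrated with a naive scan for a fixed-length substring shared by several sequences. The sketch below is only an illustration: the shared_motif helper and every sequence in it are invented placeholders, not real CLV3, ESR, or CLE sequences, and real homology searches use alignment tools rather than exact matching.

```python
# Toy illustration: find a fixed-length substring shared by all sequences.
# The sequences below are invented placeholders, not real CLE-family proteins.

def shared_motif(sequences, window=14):
    """Return the first window-length substring of the first sequence
    that occurs verbatim in every other sequence, or None."""
    ref = sequences[0]
    for i in range(len(ref) - window + 1):
        candidate = ref[i:i + window]
        if all(candidate in seq for seq in sequences[1:]):
            return candidate
    return None

toy_sequences = [
    "MKTLAVQQRSTNPLGHDDKEPRRS",   # hypothetical "CLV3-like" sequence
    "MASNQEQQRSTNPLGHDDKEWTK",    # hypothetical "ESR-like" sequence
    "GGQQRSTNPLGHDDKETTPL",       # another hypothetical CLE-family member
]
print(shared_motif(toy_sequences))  # -> 'QQRSTNPLGHDDKE', a 14-residue region shared by all three
```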
CLV1 has been shown to interact with several cytoplasmic proteins that are most likely involved in downstream signalling. For example, the CLV complex has been found to be associated with Rho/Rac small GTPase-related proteins. These proteins may act as an intermediate between the CLV complex and a mitogen-activated protein kinase (MAPK), which is often involved in signalling cascades. KAPP is a kinase-associated protein phosphatase that has been shown to interact with CLV1. KAPP is thought to act as a negative regulator of CLV1 by dephosphorylating it.
Another important gene in plant meristem maintenance is WUSCHEL (shortened to WUS), which is a target of CLV signaling in addition to positively regulating CLV, thus forming a feedback loop. WUS is expressed in the cells below the stem cells of the meristem and its presence prevents the differentiation of the stem cells. CLV1 acts to promote cellular differentiation by repressing WUS activity outside of the central zone containing the stem cells.
The function of WUS in the shoot apical meristem is linked to the phytohormone cytokinin. Cytokinin activates histidine kinases which then phosphorylate histidine phosphotransfer proteins. Subsequently, the phosphate groups are transferred onto two types of Arabidopsis response regulators (ARRs): Type-B ARRs and Type-A ARRs. Type-B ARRs work as transcription factors to activate genes downstream of cytokinin, including A-ARRs. A-ARRs are similar to B-ARRs in structure; however, A-ARRs do not contain the DNA-binding domains that B-ARRs have and that are required to function as transcription factors. Therefore, A-ARRs do not contribute to the activation of transcription, and by competing for phosphates from phosphotransfer proteins, they inhibit B-ARR function. In the SAM, B-ARRs induce the expression of WUS, which induces stem cell identity. WUS then suppresses A-ARRs. As a result, B-ARRs are no longer inhibited, causing sustained cytokinin signaling in the center of the shoot apical meristem. Together with CLAVATA signaling, this system works as a negative feedback loop. Cytokinin signaling is positively reinforced by WUS to prevent the inhibition of cytokinin signaling, while WUS promotes its own inhibitor in the form of CLV3, which ultimately keeps WUS and cytokinin signaling in check.
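At its core, the circuit described above is a negative feedback loop: cytokinin/B-ARR signalling promotes WUS, WUS promotes its own inhibitor CLV3, and CLV3 signalling represses WUS. The following is a minimal toy model of that loop, not taken from the source; the simulate function, the rate constants, and the simplified kinetics are all assumed, arbitrary illustrations of how such a loop settles to a buffered steady state.

```python
# Minimal toy model (assumed, illustrative only) of the WUS-CLV3 negative feedback:
# cytokinin/B-ARR input induces WUS, WUS induces CLV3, CLV3 represses WUS.

def simulate(steps=20000, dt=0.01, cytokinin=1.0):
    wus, clv3 = 0.0, 0.0
    k_act, k_clv, k_rep, deg = 1.0, 0.8, 2.0, 0.5   # arbitrary rate constants
    for _ in range(steps):
        d_wus = k_act * cytokinin / (1.0 + k_rep * clv3) - deg * wus   # induced by cytokinin, repressed by CLV3
        d_clv3 = k_clv * wus - deg * clv3                              # induced by WUS
        wus += d_wus * dt
        clv3 += d_clv3 * dt
    return round(wus, 3), round(clv3, 3)

print(simulate())               # settles to a balanced steady state
print(simulate(cytokinin=2.0))  # doubling the input raises WUS by less than twofold
```

In this toy setting, doubling the cytokinin input raises the steady-state WUS level by well under a factor of two, because CLV3 rises along with it and damps the response; that buffering is the qualitative behaviour the feedback loop is thought to provide.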
Root apical meristem
Unlike the shoot apical meristem, the root apical meristem produces cells in two dimensions. It harbors two pools of stem cells around an organizing center, the quiescent center (QC), which together produce most of the cells in an adult root. At its apex, the root meristem is covered by the root cap, which protects and guides its growth trajectory. Cells are continuously sloughed off the outer surface of the root cap. The QC cells are characterized by their low mitotic activity. Evidence suggests that the QC maintains the surrounding stem cells by preventing their differentiation, via signal(s) that are yet to be discovered. This allows a constant supply of new cells in the meristem required for continuous root growth. Recent findings indicate that the QC can also act as a reservoir of stem cells to replenish whatever is lost or damaged. The root apical meristem and tissue patterns become established in the embryo in the case of the primary root, and in the new lateral root primordium in the case of secondary roots.
Intercalary meristem
In angiosperms, intercalary (sometimes called basal) meristems occur in monocot (in particular, grass) stems at the base of nodes and leaf blades. Horsetails and Welwitschia also exhibit intercalary growth. Intercalary meristems are capable of cell division, and they allow for rapid growth and regrowth of many monocots. Intercalary meristems at the nodes of bamboo allow for rapid stem elongation, while those at the base of most grass leaf blades allow damaged leaves to rapidly regrow. This leaf regrowth in grasses evolved in response to damage by grazing herbivores and/or wildfires.
Floral meristem
When plants begin flowering, the shoot apical meristem is transformed into an inflorescence meristem, which goes on to produce the floral meristem, which produces the sepals, petals, stamens, and carpels of the flower.
In contrast to vegetative apical meristems and some inflorescence meristems, floral meristems cannot continue to grow indefinitely. Their growth is limited to a flower of a particular size and form. The transition from shoot meristem to floral meristem requires floral meristem identity genes, which both specify the floral organs and cause the termination of the production of stem cells. AGAMOUS (AG) is a floral homeotic gene required for floral meristem termination and necessary for proper development of the stamens and carpels. AG is necessary to prevent the conversion of floral meristems to inflorescence shoot meristems; it is activated by the floral meristem identity gene LEAFY (LFY) and by WUS, and its expression is restricted to the centre of the floral meristem, in the inner two whorls. In this way floral identity and region specificity are achieved. WUS activates AG by binding to a consensus sequence in AG's second intron, and LFY binds to adjacent recognition sites. Once AG is activated, it represses expression of WUS, leading to the termination of the meristem.
Through the years, scientists have manipulated floral meristems for economic reasons. An example is the mutant tobacco plant "Maryland Mammoth". In 1936, the department of agriculture of Switzerland performed several scientific tests with this plant. "Maryland Mammoth" is peculiar in that it grows much faster than other tobacco plants.
Apical dominance
Apical dominance is where one meristem prevents or inhibits the growth of other meristems. As a result, the plant will have one clearly defined main trunk. For example, in trees, the tip of the main trunk bears the dominant shoot meristem. Therefore, the tip of the trunk grows rapidly and is not shadowed by branches. If the dominant meristem is cut off, one or more branch tips will assume dominance. The branch will start growing faster and the new growth will be vertical. Over the years, the branch may begin to look more and more like an extension of the main trunk. Often several branches will exhibit this behavior after the removal of apical meristem, leading to a bushy growth.
The mechanism of apical dominance is based on auxins, types of plant growth regulators. These are produced in the apical meristem and transported towards the roots in the cambium. If apical dominance is complete, they prevent any branches from forming as long as the apical meristem is active. If the dominance is incomplete, side branches will develop.
Recent investigations into apical dominance and the control of branching have revealed a new plant hormone family termed strigolactones. These compounds were previously known to be involved in seed germination and communication with mycorrhizal fungi and are now shown to be involved in inhibition of branching.
Diversity in meristem architectures
The SAM contains a population of stem cells that also produce the lateral meristems while the stem elongates. It turns out that the mechanism of regulation of the stem cell number might be evolutionarily conserved. The CLAVATA gene CLV2, responsible for maintaining the stem cell population in Arabidopsis thaliana, is very closely related to the maize gene FASCIATED EAR 2 (FEA2), which is involved in the same function. Similarly, in rice, the FON1-FON2 system seems to bear a close relationship with the CLV signaling system in Arabidopsis thaliana. These studies suggest that the regulation of stem cell number, identity and differentiation might be an evolutionarily conserved mechanism in monocots, if not in angiosperms. Rice also contains another genetic system, distinct from FON1-FON2, that is involved in regulating stem cell number. This example underlines the innovation that continually arises in the living world.
Role of the KNOX-family genes
Genetic screens have identified genes belonging to the KNOX family in this function. These genes essentially maintain the stem cells in an undifferentiated state. The KNOX family has undergone quite a bit of evolutionary diversification while keeping the overall mechanism more or less similar. Members of the KNOX family have been found in plants as diverse as Arabidopsis thaliana, rice, barley and tomato. KNOX-like genes are also present in some algae, mosses, ferns and gymnosperms. Misexpression of these genes leads to the formation of interesting morphological features. For example, among members of Antirrhineae, only the species of the genus Antirrhinum lack a structure called spur in the floral region. A spur is considered an evolutionary innovation because it defines pollinator specificity and attraction. Researchers carried out transposon mutagenesis in Antirrhinum majus, and saw that some insertions led to formation of spurs that were very similar to the other members of Antirrhineae, indicating that the loss of spur in wild Antirrhinum majus populations could probably be an evolutionary innovation.
The KNOX family has also been implicated in leaf shape evolution (see below for a more detailed discussion). One study looked at the pattern of KNOX gene expression in A. thaliana, which has simple leaves, and in Cardamine hirsuta, a plant with complex leaves. In A. thaliana, the KNOX genes are completely turned off in leaves, but in C. hirsuta expression continues, generating complex leaves. It has also been proposed that the mechanism of KNOX gene action is conserved across all vascular plants, because there is a tight correlation between KNOX expression and a complex leaf morphology.
Indeterminate growth of meristems
Though each plant grows according to a certain set of rules, each new root and shoot meristem can go on growing for as long as it is alive. In many plants, meristematic growth is potentially indeterminate, making the overall shape of the plant not determinate in advance. This is the primary growth. Primary growth leads to lengthening of the plant body and organ formation. All plant organs arise ultimately from cell divisions in the apical meristems, followed by cell expansion and differentiation. Primary growth gives rise to the apical part of many plants.
The growth of nitrogen-fixing root nodules on legume plants such as soybean and pea is either determinate or indeterminate. Thus, soybean (as well as bean and Lotus japonicus) produces determinate (spherical) nodules, with a branched vascular system surrounding the central infected zone. Often, Rhizobium-infected cells have only small vacuoles. In contrast, nodules on pea, clovers, and Medicago truncatula are indeterminate, maintaining (at least for some time) an active meristem that yields new cells for Rhizobium infection. Thus zones of maturity exist in the nodule. Infected cells usually possess a large vacuole. The plant vascular system is branched and peripheral.
Cloning
Under appropriate conditions, each shoot meristem can develop into a complete, new plant or clone. Such new plants can be grown from shoot cuttings that contain an apical meristem. Root apical meristems are not readily cloned, however. This cloning is called asexual reproduction or vegetative reproduction and is widely practiced in horticulture to mass-produce plants of a desirable genotype. This process, known as mericloning, has been shown to reduce or eliminate viruses present in the parent plant in multiple species of plants.
Propagating through cuttings is another form of vegetative propagation that initiates root or shoot production from secondary meristematic cambial cells. This explains why basal 'wounding' of shoot-borne cuttings often aids root formation.
Induced meristems
Meristems may also be induced in the roots of legumes such as soybean, Lotus japonicus, pea, and Medicago truncatula after infection with soil bacteria commonly called Rhizobia. Cells of the inner or outer cortex in the so-called "window of nodulation" just behind the developing root tip are induced to divide. The critical signal substance is the lipo-oligosaccharide Nod factor, decorated with side groups to allow specificity of interaction. The Nod factor receptor proteins NFR1 and NFR5 were cloned from several legumes including Lotus japonicus, Medicago truncatula and soybean (Glycine max). Regulation of nodule meristems utilizes long-distance regulation known as the autoregulation of nodulation (AON). This process involves leaf vascular tissue-located LRR receptor kinases (LjHAR1, GmNARK and MtSUNN), CLE peptide signalling, and KAPP interaction, similar to that seen in the CLV1,2,3 system. LjKLAVIER also exhibits a nodule regulation phenotype, though it is not yet known how this relates to the other AON receptor kinases.
Lateral meristems
Lateral meristems, the form of secondary plant growth, add growth to the plants in their diameter. This is primarily observed in perennial dicots that survive from year to year. There are two types of lateral meristems: vascular cambium and cork cambium.
In vascular cambium, the primary phloem and xylem are produced by the apical meristem. After this initial development, secondary phloem and xylem are produced by the lateral meristem. The two are connected through a thin layer of parenchymal cells, which differentiate into the fascicular cambium. The fascicular cambium divides to create the new secondary phloem and xylem. Following this, the cortical parenchyma between vascular cylinders differentiates into the interfascicular cambium. This process repeats for indeterminate growth.
Cork cambium creates a protective covering around the outside of a plant. This occurs after the secondary xylem and phloem has expanded already. Cortical parenchymal cells differentiate into cork cambium near the epidermis which lays down new cells called phelloderm and cork cells. These cork cells are impermeable to water and gases because of a substance called suberin that coats them.
| Biology and health sciences | Plant tissues | null |
227807 | https://en.wikipedia.org/wiki/Humerus | Humerus | The humerus (; : humeri) is a long bone in the arm that runs from the shoulder to the elbow. It connects the scapula and the two bones of the lower arm, the radius and ulna, and consists of three sections. The humeral upper extremity consists of a rounded head, a narrow neck, and two short processes (tubercles, sometimes called tuberosities). The body is cylindrical in its upper portion, and more prismatic below. The lower extremity consists of 2 epicondyles, 2 processes (trochlea and capitulum), and 3 fossae (radial fossa, coronoid fossa, and olecranon fossa). As well as its true anatomical neck, the constriction below the greater and lesser tubercles of the humerus is referred to as its surgical neck due to its tendency to fracture, thus often becoming the focus of surgeons.
Etymology
The word "humerus" is derived from Late Latin , from Latin , meaning upper arm, shoulder, and is linguistically related to Gothic (shoulder) and Greek .
Structure
Upper extremity
The upper or proximal extremity of the humerus consists of the bone's large rounded head joined to the body by a constricted portion called the neck, and two eminences, the greater and lesser tubercles.
Head
The head (caput humeri) is nearly hemispherical in form. It is directed upward, medialward, and a little backward, and articulates with the glenoid cavity of the scapula to form the glenohumeral joint (shoulder joint). The circumference of its articular surface is slightly constricted and is termed the anatomical neck, in contradistinction to a constriction below the tubercles called the surgical neck, which is frequently the seat of fracture.
The diameter of the humeral head is generally larger in men than in women.
Anatomical neck
The anatomical neck (collum anatomicum) is obliquely directed, forming an obtuse angle with the body. It is most prominent in the lower half of its circumference, while in the upper half, it is represented by a narrow groove separating the head from the tubercles. The line separating the head from the rest of the upper end is called the anatomical neck. It affords attachment to the articular capsule of the shoulder-joint, and is perforated by numerous vascular foramens. Fracture of the anatomical neck rarely occurs.
The anatomical neck of the humerus is an indentation distal to the head of the humerus on which the articular capsule attaches.
Surgical neck
The surgical neck is a narrow area distal to the tubercles that is a common site of fracture. It makes contact with the axillary nerve and the posterior humeral circumflex artery.
Greater tubercle
The greater tubercle (tuberculum majus; greater tuberosity) is a large projection positioned laterally and posteriorly. The greater tubercle is where the supraspinatus, infraspinatus and teres minor muscles attach. The crest of the greater tubercle forms the lateral lip of the bicipital groove and is the site for insertion of pectoralis major.
The greater tubercle is just lateral to the anatomical neck. Its upper surface is rounded and marked by three flat impressions: the highest of these gives insertion to the supraspinatus muscle; the middle to the infraspinatus muscle; the lowest one, and the body of the bone for about 2.5 cm. below it, to the teres minor muscle. The lateral surface of the greater tubercle is convex, rough, and continuous with the lateral surface of the body.
Lesser tubercle
The lesser tubercle (tuberculum minus; lesser tuberosity) is smaller, anterolaterally placed to the head of the humerus. The lesser tubercle provides insertion to subscapularis muscle. Both these tubercles are found in the proximal part of the shaft. The crest of the lesser tubercle forms the medial lip of the bicipital groove and is the site for insertion of teres major and latissimus dorsi muscles.
The lesser tuberosity is more prominent than the greater: it is situated in front, and is directed medialward and forward. Above and in front, it presents an impression for the insertion of the tendon of the subscapularis muscle.
Bicipital groove
The tubercles are separated from each other by a deep groove, the bicipital groove (intertubercular groove; bicipital sulcus), which lodges the long tendon of the biceps brachii muscle and transmits a branch of the anterior humeral circumflex artery to the shoulder-joint. It runs obliquely downward, and ends near the junction of the upper with the middle third of the bone. In the fresh state its upper part is covered with a thin layer of cartilage, lined by a prolongation of the synovial membrane of the shoulder-joint; its lower portion gives insertion to the tendon of the latissimus dorsi muscle. It is deep and narrow above, and becomes shallow and a little broader as it descends. Its lips are called, respectively, the crests of the greater and lesser tubercles (bicipital ridges), and form the upper parts of the anterior and medial borders of the body of the bone.
Shaft
The body or shaft of the humerus is triangular to cylindrical in cross-section and is compressed anteroposteriorly. It has three surfaces, namely:
Anterolateral surface: the area between the lateral border of the humerus to the line drawn as a continuation of the crest of the greater tubercle. The antero-lateral surface is directed lateralward above, where it is smooth, rounded, and covered by the deltoid muscle; forward and lateralward below, where it is slightly concave from above downward, and gives origin to part of the brachialis. About the middle of this surface is a rough, rectangular elevation, the deltoid tuberosity for the insertion of the deltoid muscle; below this is the radial sulcus, directed obliquely from behind, forward, and downward, and transmitting the radial nerve and profunda artery.
Anteromedial surface: the area between the medial border of the humerus to the line drawn as a continuation of the crest of the greater tubercle. The antero-medial surface, less extensive than the antero-lateral, is directed medialward above, forward and medialward below; its upper part is narrow, and forms the floor of the intertubercular groove which gives insertion to the tendon of the latissimus dorsi muscle; its middle part is slightly rough for the attachment of some of the fibers of the tendon of insertion of the coracobrachialis muscle; its lower part is smooth, concave from above downward, and gives origin to the brachialis muscle.
Posterior surface: the area between the medial and lateral borders. The posterior surface appears somewhat twisted, so that its upper part is directed a little medialward, its lower part backward and a little lateralward. Nearly the whole of this surface is covered by the lateral and medial heads of the Triceps brachii, the former arising above, the latter below the radial sulcus.
Its three borders are:
Anterior: the anterior border runs from the front of the greater tubercle above to the coronoid fossa below, separating the antero-medial from the antero-lateral surface. Its upper part is a prominent ridge, the crest of the greater tubercle; it serves for the insertion of the tendon of the pectoralis major muscle. About its center it forms the anterior boundary of the deltoid tuberosity, on which the deltoid muscle attaches; below, it is smooth and rounded, affording attachment to the brachialis muscle.
Lateral: the lateral border runs from the back part of the greater tubercle to the lateral epicondyle, and separates the anterolateral from the posterior surface. Its upper half is rounded and indistinctly marked, serving for the attachment of the lower part of the insertion of the teres minor muscle, and below this giving origin to the lateral head of the triceps brachii muscle; its center is traversed by a broad but shallow oblique depression, the spiral groove (musculospiral groove). The radial nerve runs in the spiral groove. Its lower part forms a prominent, rough margin, a little curved from behind forward, the lateral supracondylar ridge, which presents an anterior lip for the origin of the brachioradialis muscle in its upper two-thirds and the extensor carpi radialis longus muscle in its lower third, a posterior lip for the triceps brachii muscle, and an intermediate ridge for the attachment of the lateral intermuscular septum.
Medial: the medial border extends from the lesser tubercle to the medial epicondyle. Its upper third consists of a prominent ridge, the crest of the lesser tubercle, which gives insertion to the tendon of the teres major muscle. About its center is a slight impression for the insertion of the coracobrachialis muscle, and just below this is the entrance of the nutrient canal, directed downward; sometimes there is a second nutrient canal at the commencement of the radial sulcus. The inferior third of this border is raised into a slight ridge, the medial supracondylar ridge, which becomes very prominent below; it presents an anterior lip for the origins of the brachialis muscle and the pronator teres muscle, a posterior lip for the medial head of the triceps brachii muscle, and an intermediate ridge for the attachment of the medial intermuscular septum.
The deltoid tuberosity is a roughened surface on the lateral surface of the shaft of the humerus and acts as the site of insertion of the deltoideus muscle. The posterosuperior part of the shaft has a crest, beginning just below the surgical neck of the humerus and extending to the superior tip of the deltoid tuberosity. This is where the lateral head of triceps brachii is attached.
The radial sulcus, also known as the spiral groove, is found on the posterior surface of the shaft and is a shallow oblique groove through which the radial nerve passes along with deep vessels. This is located posteroinferior to the deltoid tuberosity. The inferior boundary of the spiral groove is continuous distally with the lateral border of the shaft.
The nutrient foramen of the humerus is located in the anteromedial surface of the humerus. The nutrient arteries enter the humerus through this foramen.
Distal humerus
The distal or lower extremity of the humerus is flattened from before backward, and curved slightly forward; it ends below in a broad, articular surface, which is divided into two parts by a slight ridge. Projecting on either side are the lateral and medial epicondyles.
Articular surface
The articular surface extends a little lower than the epicondyles, and is curved slightly forward; its medial extremity occupies a lower level than the lateral. The lateral portion of this surface consists of a smooth, rounded eminence, named the capitulum of the humerus; it articulates with the cup-shaped depression on the head of the radius, and is limited to the front and lower part of the bone.
Fossae
Above the front part of the trochlea is a small depression, the coronoid fossa, which receives the coronoid process of the ulna during flexion of the forearm.
Above the back part of the trochlea is a deep triangular depression, the olecranon fossa, in which the summit of the olecranon is received in extension of the forearm.
The coronoid fossa is the medial hollow part on the anterior surface of the distal humerus. The coronoid fossa is smaller than the olecranon fossa and receives the coronoid process of the ulna during maximum flexion of the elbow.
Above the front part of the capitulum is a slight depression, the radial fossa, which receives the anterior border of the head of the radius, when the forearm is flexed.
These fossæ are separated from one another by a thin, transparent lamina of bone, which is sometimes perforated by a supratrochlear foramen; they are lined in the fresh state by the synovial membrane of the elbow-joint, and their margins afford attachment to the anterior and posterior ligaments of this articulation.
The capitulum is a rounded eminence forming the lateral part of the distal humerus. The head of the radius articulates with the capitulum.
The trochlea is the spool-shaped medial portion of the distal humerus and articulates with the ulna.
Epicondyles
The epicondyles are continuous above with the supracondylar ridges.
The lateral epicondyle is a small, tuberculated eminence, curved a little forward, and giving attachment to the radial collateral ligament of the elbow-joint, and to a tendon common to the origin of the supinator and some of the extensor muscles.
The medial epicondyle, larger and more prominent than the lateral, is directed a little backward; it gives attachment to the ulnar collateral ligament of the elbow-joint, to the pronator teres, and to a common tendon of origin of some of the flexor muscles of the forearm; the ulnar nerve runs in a groove on the back of this epicondyle.
The medial supracondylar crest forms the sharp medial border of the distal humerus continuing superiorly from the medial epicondyle. The lateral supracondylar crest forms the sharp lateral border of the distal humerus continuing superiorly from the lateral epicondyle.
Borders
The medial portion of the articular surface is named the trochlea, and presents a deep depression between two well-marked borders; it is convex from before backward, concave from side to side, and occupies the anterior, lower, and posterior parts of the extremity.
The lateral border separates it from the groove which articulates with the margin of the head of the radius.
The medial border is thicker, of greater length, and consequently more prominent, than the lateral.
The grooved portion of the articular surface fits accurately within the semilunar notch of the ulna; it is broader and deeper on the posterior than on the anterior aspect of the bone, and is inclined obliquely downward and forward toward the medial side.
Articulations
At the shoulder, the head of the humerus articulates with the glenoid fossa of the scapula. More distally, at the elbow, the capitulum of the humerus articulates with the head of the radius, and the trochlea of the humerus articulates with the trochlear notch of the ulna.
Nerves
The axillary nerve is located at the proximal end, against the shoulder girdle. Dislocation of the humerus's glenohumeral joint has the potential to injure the axillary nerve or the axillary artery. Signs and symptoms of this dislocation include a loss of the normal shoulder contour and a palpable depression under the acromion.
The radial nerve follows the humerus closely. At the midshaft of the humerus, the radial nerve travels from the posterior to the anterior aspect of the bone in the spiral groove. A fracture of the humerus in this region can result in radial nerve injury.
The ulnar nerve lies at the distal end of the humerus near the elbow. When struck, it can cause a distinct tingling sensation, and sometimes a significant amount of pain. It is sometimes popularly referred to as 'the funny bone', possibly due to this sensation (a "funny" feeling), as well as the fact that the bone's name is a homophone of 'humorous'. It lies posterior to the medial epicondyle, and is easily damaged in elbow injuries.
Function
Muscular attachment
The deltoid originates on the lateral third of the clavicle, acromion and the crest of the spine of the scapula. It is inserted on the deltoid tuberosity of the humerus and has several actions including abduction, extension, and circumduction of the shoulder. The supraspinatus also originates on the spine of the scapula. It inserts on the greater tubercle of the humerus, and assists in abduction of the shoulder.
The pectoralis major, teres major, and latissimus dorsi insert at the intertubercular groove of the humerus. They work to adduct and medially, or internally, rotate the humerus.
The infraspinatus and teres minor insert on the greater tubercle, and work to laterally, or externally, rotate the humerus. In contrast, the subscapularis muscle inserts onto the lesser tubercle and works to medially, or internally, rotate the humerus.
The biceps brachii, brachialis, and brachioradialis (which attaches distally) act to flex the elbow. (The biceps do not attach to the humerus.) The triceps brachii and anconeus extend the elbow, and attach to the posterior side of the humerus.
The four muscles of supraspinatus, infraspinatus, teres minor and subscapularis form a musculo-ligamentous girdle called the rotator cuff. This cuff stabilizes the very mobile but inherently unstable glenohumeral joint. The other muscles are used as counterbalances for the actions of lifting/pulling and pressing/pushing.
Other animals
In primitive fossil amphibians, the humerus had little, if any, shaft connecting the upper and lower extremities, making the limbs very short. In most living tetrapods, however, the humerus has a similar form to that of humans. In many reptiles and some mammals (where it is the primitive state), the lower extremity includes a large opening called the entepicondylar foramen to allow the passage of nerves and blood vessels.
Ossification
During embryonic development, the humerus is one of the first structures to ossify, beginning with the first ossification center in the shaft of the bone. Ossification of the humerus occurs predictably in the embryo and fetus, and is therefore used as a fetal biometric measurement when determining gestational age of a fetus. At birth, the neonatal humerus is only ossified in the shaft. The epiphyses are cartilaginous at birth. The medial humeral head develops an ossification center around 4 months of age and the greater tuberosity around 10 months of age. These ossification centers begin to fuse at 3 years of age. The process of ossification is complete by 13 years of age, though the epiphyseal plate (growth plate) persists until skeletal maturity, usually around 17 years of age.
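Because humeral ossification proceeds predictably, a measured fetal humerus length can be read against a reference chart to estimate gestational age. The sketch below only illustrates that table-lookup idea; the gestational_age function and every calibration pair in REFERENCE are hypothetical placeholders rather than real obstetric reference data, so the numbers carry no clinical meaning.

```python
# Illustrative sketch of estimating gestational age from humerus length by
# linear interpolation over a reference table. All values are hypothetical.

REFERENCE = [          # (humerus length in mm, gestational age in weeks) - placeholder pairs
    (10.0, 13.0),
    (20.0, 17.0),
    (30.0, 21.0),
    (40.0, 25.0),
    (50.0, 30.0),
]

def gestational_age(humerus_mm: float) -> float:
    """Linearly interpolate gestational age (weeks) from humerus length (mm)."""
    pts = sorted(REFERENCE)
    if humerus_mm <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if humerus_mm <= x1:
            t = (humerus_mm - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return pts[-1][1]

print(gestational_age(35.0))   # -> 23.0 weeks with these placeholder values
```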
| Biology and health sciences | Skeletal system | Biology |
227931 | https://en.wikipedia.org/wiki/Tamarind | Tamarind | Tamarind (Tamarindus indica) is a leguminous tree bearing edible fruit that is indigenous to tropical Africa and naturalized in Asia. The genus Tamarindus is monotypic, meaning that it contains only this species. It belongs to the family Fabaceae.
The tamarind tree produces brown, pod-like fruits that contain a sweet, tangy pulp, which is used in cuisines around the world. The pulp is also used in traditional medicine and as a metal polish. The tree's wood can be used for woodworking and tamarind seed oil can be extracted from the seeds. Tamarind's tender young leaves are used in South Indian and Filipino cuisine. Because tamarind has multiple uses, it is cultivated around the world in tropical and subtropical zones.
Description
The tamarind is a long-living, medium-growth tree, which attains a maximum crown height of . The crown has an irregular, vase-shaped outline of dense foliage. The tree grows well in full sun. It prefers clay, loam, sandy, and acidic soil types, with a high resistance to drought and aerosol salt (wind-borne salt as found in coastal areas).
The evergreen leaves are alternately arranged and paripinnately compound. The leaflets are bright green, elliptic-ovular, pinnately veined, and less than in length. The branches droop from a single, central trunk as the tree matures, and are often pruned in agriculture to optimize tree density and ease of fruit harvest. At night, the leaflets close up.
As a tropical species, it is frost-sensitive. The pinnate leaves with opposite leaflets give a billowing effect in the wind. Tamarind timber consists of hard, dark red heartwood and softer, yellowish sapwood.
The tamarind blooms inconspicuously, with elongated red and yellow flowers. Flowers are 2.5 cm (1 in) wide, five-petalled, borne in small racemes, and yellow with orange or red streaks. Buds are pink, as the four sepals are pink and are lost when the flower blooms.
Fruit
The fruit is an indehiscent legume, sometimes called a pod, in length, with a hard, brown shell.
The fruit has a fleshy, juicy, acidic pulp. It is mature when the flesh is coloured brown or reddish brown. The tamarinds of Asia have longer pods (containing six to 12 seeds), whereas African and West Indian varieties have shorter pods (containing one to six seeds). The seeds are somewhat flattened, and a glossy brown. The fruit is sweet and sour in taste.
History
Etymology
The name derives from Arabic, romanized tamr hindi, meaning "Indian date". Several early medieval herbalists and physicians wrote tamar indi, medieval Latin use was tamarindus, and Marco Polo wrote of tamarandi.
In Colombia, Nicaragua, Costa Rica, Ecuador, Cuba, the Dominican Republic, Guatemala, El Salvador, Honduras, Mexico, Peru, Puerto Rico, Venezuela, Italy, Spain, and throughout the Lusosphere, it is called tamarindo. In those countries it is often used to make the beverage of the same name (or agua de tamarindo). In the Caribbean, tamarind is sometimes called tamón.
Countries in Southeast Asia like Indonesia call it asam jawa (Javanese sour fruit) or simply asam, and sukaer in Timor. While in the Philippines, it is called sampalok or sampaloc in Filipino, and sambag in Cebuano. Tamarind (Tamarindus indica) is sometimes confused with "Manila tamarind" (Pithecellobium dulce). While in the same taxonomic family Fabaceae, Manila tamarind is a different plant native to Mexico and known locally as guamúchili.
Taxonomy
Tamarindus indica is probably indigenous to tropical Africa, but has been cultivated for so long on the Indian subcontinent that it is sometimes reported to be indigenous there. It grows wild in Africa in locales as diverse as Sudan, Cameroon, Nigeria, Kenya, Zambia, Somalia, Tanzania and Malawi. In Arabia, it is found growing wild in Oman, especially Dhofar, where it grows on the sea-facing slopes of mountains. It reached South Asia likely through human transportation and cultivation several thousand years ago. It is widely distributed throughout the tropics, from Africa to South Asia.
In the 16th century, it was introduced to Mexico and Central America, and to a lesser degree to South America, by Spanish and Portuguese colonists, to the degree that it became a staple ingredient in the region's cuisine.
India is the largest producer of tamarind. The consumption of tamarind is widespread due to its central role in the cuisines of the Indian subcontinent, Southeast Asia, and the Americas, especially Mexico.
Uses
Nutrition
Raw tamarind is 63% carbohydrates, 31% water, 3% protein, and 1% fat (table). In a reference amount, raw tamarind supplies 240 calories of food energy, and is a rich source (20% or more of the Daily Value, DV) of thiamine (36% DV) and dietary minerals, including magnesium and potassium at 22% and 21% DV, respectively (table).
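The "% Daily Value" figures quoted above are simply the nutrient amount in the reference serving divided by a reference daily value. A small sketch of that arithmetic follows; the per-serving amounts and the DAILY_VALUES_MG reference figures used here are illustrative placeholders, not the nutrition table's exact numbers.

```python
# Illustrative %DV arithmetic; all figures below are placeholder values.

DAILY_VALUES_MG = {"thiamine": 1.2, "magnesium": 420.0}   # assumed reference daily values, mg

def percent_dv(amount_mg: float, nutrient: str) -> float:
    """Percent of the reference daily value supplied by amount_mg of a nutrient."""
    return 100.0 * amount_mg / DAILY_VALUES_MG[nutrient]

per_serving_mg = {"thiamine": 0.43, "magnesium": 92.0}     # assumed amounts per serving, mg
for nutrient, amount in per_serving_mg.items():
    print(f"{nutrient}: {percent_dv(amount, nutrient):.0f}% DV")
```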
Culinary
The fruit is harvested by pulling the pod from its stalk. A mature tree can produce up to of fruit per year. Veneer grafting, shield (T or inverted T) budding, and air layering may be used to propagate desirable cultivars. Such trees will usually fruit within three to four years if provided optimum growing conditions.
The fruit pulp is edible. The hard green pulp of a young fruit is considered by many to be too sour, but is often used as a component of savory dishes, as a pickling agent or as a means of making certain poisonous yams in Ghana safe for human consumption. As the fruit matures it becomes sweeter and less sour (acidic) and the ripened fruit is considered more palatable. The sourness varies between cultivars and some sweet tamarind ones have almost no acidity when ripe. In Western cuisine, tamarind pulp is found in Worcestershire sauce, HP Sauce, and some brands of barbecue sauce (especially in Australia, with the tamarind derived from Worcestershire sauce).
Tamarind paste has many culinary uses including as a flavoring for chutneys, curries, and the traditional sharbat syrup drink. Tamarind sweet chutney is popular in India and Pakistan as a dressing for many snacks and often served with samosa. Tamarind pulp is a key ingredient in flavoring curries and rice in south Indian cuisine, in the Chigali lollipop, in rasam, Koddel and in certain varieties of masala chai.
Across the Middle East, from the Levant to Iran, tamarind is used in savory dishes, notably meat-based stews, and often combined with dried fruits to achieve a sweet-sour tang.
In the Philippines, the whole fruit is used as one of the souring agents of the sour soup sinigang (which can also use other sour fruits), as well as in another type of soup called sinampalukan (which also uses tamarind leaves). The fruit pulp is also cooked in sugar and/or salt to make champóy na sampalok (or simply "sampalok candy"), a traditional tamarind candy. Indonesia also has a similarly sour, tamarind-based soup dish called sayur asem. Tamarind pulp mixed with liquid is also used in beverages as tamarind juice. In Java, Indonesia, tamarind juice is known as es asem or gula asem; tamarind juice served with palm sugar and ice is a fresh sour and sweet beverage.
In Mexico, Central America, and the Caribbean, the pulp is diluted with water and sugared to make an agua fresca drink. It is widely used throughout all of Mexico for candy making, including tamarind mixed with chilli powder candy.
In Sokoto, Nigeria, tamarind pulp is used to fix the color in dyed leather products by neutralizing the alkali substances used in tanning.
The leaves and bark are also edible, and the seeds can be cooked to make them safe for consumption. Blanched, tender tamarind leaves are used in a Burmese salad called magyi ywet thoke, a salad from Upper Myanmar that also features garlic, onions, roasted peanuts, and pounded dried shrimp.
Seed oil and kernel powder
Tamarind seed oil is made from the kernel of tamarind seeds. The kernel is difficult to isolate from its thin but tough shell (or testa). It has a similar consistency to linseed oil, and can be used to make paint or varnish.
Tamarind kernel powder is used as sizing material for textile and jute processing, and in the manufacture of industrial gums and adhesives. It is de-oiled to stabilize its colour and odor on storage.
Folk medicine
Throughout Southeast Asia, the fruit of the tamarind is used as a poultice applied to the foreheads of people with fevers. The fruit exhibits laxative effects due to its high quantities of malic acid, tartaric acid, and potassium bitartrate. Its use for the relief of constipation has been documented throughout the world. Extracts of steamed and sun-dried old tamarind pulp in Java (asem kawa) are used to treat skin problems such as rashes and irritation; the extract can also be ingested after dilution as an abortifacient.
Woodworking
Tamarind wood is used to make furniture, boats (as per Rumphius) carvings, turned objects such as mortars and pestles, chopping blocks, and other small specialty wood items like krises. Tamarind heartwood is reddish brown, sometimes with a purplish hue. The heartwood in tamarind tends to be narrow and is usually only present in older and larger trees. The pale yellow sapwood is sharply demarcated from the heartwood. Heartwood is said to be durable to very durable in decay resistance, and is also resistant to insects. Its sapwood is not durable and is prone to attack by insects and fungi as well as spalting. Due to its density and interlocked grain, tamarind is considered difficult to work. Heartwood has a pronounced blunting effect on cutting edges. Tamarind turns, glues, and finishes well. The heartwood is able to take a high natural polish.
Metal polish
In homes and temples, especially in Buddhist Asian countries including Myanmar, the fruit pulp is used to polish brass shrine statues and lamps, and copper, brass, and bronze utensils. Tamarind contains tartaric acid, a weak acid that can remove tarnish. Lime, another acidic fruit, is used similarly.
Research
In dogs, the tartaric acid of tamarind causes acute kidney injury, which can often be fatal.
Lupeol, catechins, epicatechin, quercetin, and isorhamnetin are present in the leaf extract. Ultra-high performance liquid chromatography analyses revealed that tamarind seeds contained catechin, procyanidin B2, caffeic acid, ferulic acid, chloramphenicol, myricetin, morin, quercetin, apigenin and kaempferol.
Cultivation
Seeds can be scarified or briefly boiled to enhance germination. They retain their germination capability for several months if kept dry.
The tamarind has long been naturalized in Indonesia, Malaysia, Sri Lanka, the Philippines, the Caribbean, and Pacific Islands. Thailand has the largest plantations of the ASEAN nations, followed by Indonesia, Myanmar, and the Philippines. In parts of Southeast Asia, tamarind is called asam. It is cultivated all over India, especially in Maharashtra, Chhattisgarh, Karnataka, Telangana, Andhra Pradesh, and Tamil Nadu. Extensive tamarind orchards in India produce annually.
In the United States, it is a large-scale crop introduced for commercial use (second in net production quantity only to India), mainly in southern states, notably south Florida, and as a shade tree, along roadsides, in dooryards and in parks.
A traditional food plant in Africa, tamarind has the potential to improve nutrition, boost food security, foster rural development and support sustainable landcare. In Madagascar, its fruit and leaves are a well-known favorite of the ring-tailed lemur, providing as much as 50 percent of their food resources during the year if available.
Horticulture
Throughout South Asia and the tropical world, tamarind trees are used as ornamental, garden, and cash crop plantings. Commonly used as a bonsai species in many Asian countries, it is also grown as an indoor bonsai in temperate parts of the world.
| Biology and health sciences | Fabales | null |
227960 | https://en.wikipedia.org/wiki/XMM-Newton | XMM-Newton | XMM-Newton, also known as the High Throughput X-ray Spectroscopy Mission and the X-ray Multi-Mirror Mission, is an X-ray space observatory launched by the European Space Agency in December 1999 on an Ariane 5 rocket. It is the second cornerstone mission of ESA's Horizon 2000 programme. Named after physicist and astronomer Sir Isaac Newton, the spacecraft is tasked with investigating interstellar X-ray sources, performing narrow- and broad-range spectroscopy, and performing the first simultaneous imaging of objects in both X-ray and optical (visible and ultraviolet) wavelengths.
Initially funded for two years, with a ten-year design life, the spacecraft remains in good health and has received repeated mission extensions, most recently in March 2023 and is scheduled to operate until the end of 2026. ESA plans to succeed XMM-Newton with the Advanced Telescope for High Energy Astrophysics (ATHENA), the second large mission in the Cosmic Vision 2015–2025 plan, to be launched in 2035. XMM-Newton is similar to NASA's Chandra X-ray Observatory, also launched in 1999.
As of May 2018, close to 5,600 papers have been published about either XMM-Newton or the scientific results it has returned.
Concept and mission history
The observational scope of XMM-Newton includes the detection of X-ray emissions from astronomical objects, detailed studies of star-forming regions, investigation of the formation and evolution of galaxy clusters, the environment of supermassive black holes and mapping of the mysterious dark matter.
In 1982, even before the launch of XMM-Newton's predecessor EXOSAT in 1983, a proposal was generated for a "multi-mirror" X-ray telescope mission. The XMM mission was formally proposed to the ESA Science Programme Committee in 1984 and gained approval from the Agency's Council of Ministers in January 1985. That same year, several working groups were established to determine the feasibility of such a mission, and mission objectives were presented at a workshop in Denmark in June 1985. At this workshop, it was proposed that the spacecraft contain 12 low-energy and 7 high-energy X-ray telescopes. The spacecraft's overall configuration was developed by February 1987, and drew heavily from lessons learned during the EXOSAT mission; the Telescope Working Group had reduced the number of X-ray telescopes to seven standardised units. In June 1988 the European Space Agency approved the mission and issued a call for investigation proposals (an "announcement of opportunity"). Improvements in technology further reduced the number of X-ray telescopes needed to just three.
By June 1989, the mission's instruments had been selected and work had begun on spacecraft hardware. A project team was formed in January 1993 and based at the European Space Research and Technology Centre (ESTEC) in Noordwijk, Netherlands. Prime contractor Dornier Satellitensysteme (a subsidiary of the former DaimlerChrysler Aerospace) was chosen in October 1994 after the mission was approved into the implementation phase, with development and construction beginning in March 1996 and March 1997, respectively. The XMM Survey Science Centre was established at the University of Leicester in 1995. The three flight mirror modules for the X-ray telescopes were delivered by Italian subcontractor Media Lario in December 1998, and spacecraft integration and testing was completed in September 1999.
XMM left the ESTEC integration facility on 9 September 1999, taken by road to Katwijk then by the barge Emeli to Rotterdam. On 12 September, the spacecraft left Rotterdam for French Guiana aboard Arianespace transport ship MN Toucan. The Toucan docked at the French Guianese town of Kourou on 23 September, and was transported to Guiana Space Centre Ariane 5 Final Assembly Building for final launch preparation.
Launch of XMM took place on 10 December 1999 at 14:32 UTC from the Guiana Space Centre. XMM was lofted into space aboard an Ariane 5 rocket, and placed into a highly elliptical, 40-degree orbit that had a perigee of and an apogee of . Forty minutes after being released from the Ariane upper stage, telemetry confirmed to ground stations that the spacecraft's solar arrays had successfully deployed. Engineers waited an additional 22 hours before commanding the on-board propulsion systems to fire a total of five times, which, between 10 and 16 December, changed the orbit to with a 38.9-degree inclination. This resulted in the spacecraft making one complete revolution of the Earth approximately every 48 hours.
Immediately after launch, XMM began its Launch and Early Orbit phase of operations. On 17 and 18 December 1999, the X-ray modules and Optical Monitor doors were opened, respectively. Instrument activation started on 4 January 2000, and the Instrument Commissioning phase began on 16 January. The Optical Monitor (OM) attained first light on 5 January, the two European Photon Imaging Camera (EPIC) MOS-CCDs followed on 16 January and the EPIC pn-CCD on 22 January, and the Reflection Grating Spectrometers (RGS) saw first light on 2 February. On 3 March, the Calibration and Performance Validation phase began, and routine science operations began on 1 June.
During a press conference on 9 February 2000, ESA presented the first images taken by XMM and announced that a new name had been chosen for the spacecraft. Whereas the program had formally been known as the High Throughput X-ray Spectroscopy Mission, the new name would reflect the nature of the program and the originator of the field of spectroscopy. Explaining the new name of XMM-Newton, Roger Bonnet, ESA's former Director of Science, said, "We have chosen this name because Sir Isaac Newton was the man who invented spectroscopy and XMM is a spectroscopy mission." He noted that because Newton is synonymous with gravity and one of the goals of the satellite was to locate large numbers of black hole candidates, "there was no better choice than XMM-Newton for the name of this mission."
Including all construction, spacecraft launch, and two years of operation, the project was accomplished within a budget of (1999 conditions).
Operation
The spacecraft has the ability to lower the operating temperature of both the EPIC and RGS cameras, a function that was included to counteract the deleterious effects of ionising radiation on the camera pixels. In general, the instruments are cooled to reduce the amount of dark current within the devices. During the night of 3–4 November 2002, RGS-2 was cooled from its initial temperature of down to , and a few hours later to . After analysing the results, it was determined the optimal temperature for both RGS units would be , and during 13–14 November, both RGS-1 and RGS-2 were set to this level. During 6–7 November, the EPIC MOS-CCD detectors were cooled from their initial operating temperature of to a new setting of . After these adjustments, both the EPIC and RGS cameras showed dramatic improvements in quality.
On 18 October 2008, XMM-Newton suffered an unexpected communications failure, during which time there was no contact with the spacecraft. While some concern was expressed that the vehicle may have suffered a catastrophic event, photographs taken by amateur astronomers at the Starkenburg Observatory in Germany and at other locations worldwide showed that the spacecraft was intact and appeared on course. A weak signal was finally detected using an antenna in New Norcia, Western Australia, and communication with XMM-Newton suggested that the spacecraft's radio-frequency switch had failed. After troubleshooting a solution, ground controllers used NASA's antenna at the Goldstone Deep Space Communications Complex to send a command that changed the switch to its last working position. ESA stated in a press release that on 22 October, a ground station at the European Space Astronomy Centre (ESAC) made contact with the satellite, confirming the process had worked and that the satellite was back under control.
Mission extensions
Because of the spacecraft's good health and the significant returns of data, XMM-Newton has received several mission extensions by ESA's Science Programme Committee. The first extension came during November 2003 and extended operations through March 2008. The second extension was approved in December 2005, extending work through March 2010. A third extension was passed in November 2007, which provided for operations through 2012. As part of the approval, it was noted that the satellite had enough on-board consumables (fuel, power and mechanical health) to theoretically continue operations past 2017. The fourth extension in November 2010 approved operations through 2014. A fifth extension was approved in November 2014 and affirmed in November 2016, continuing operations through 2018. A sixth extension was approved in December 2017, continuing operations through the end of 2020. A seventh extension was approved in November 2018, continuing operations through the end of 2022. An eighth extension was approved in March 2023, continuing operations through the end of 2026, with indicative extension up to 2029.
Spacecraft
XMM-Newton is a long space telescope, and is wide with solar arrays deployed. At launch it weighed . The spacecraft has three degrees of stabilisation, which allow it to aim at a target with an accuracy of 0.25 to 1 arcseconds. This stabilisation is achieved through the use of the spacecraft's Attitude & Orbit Control Subsystem. These systems also allow the spacecraft to point at different celestial targets, and can turn the craft at a maximum of 90 degrees per hour. The instruments on board XMM-Newton are three European Photon Imaging Cameras (EPIC), two Reflection Grating Spectrometers (RGS), and an Optical Monitor.
The spacecraft is roughly cylindrical in shape, and has four major components. At the fore of the spacecraft is the Mirror Support Platform, which supports the X-ray telescope assemblies and grating systems, the Optical Monitor, and two star trackers. Surrounding this component is the Service Module, which carries various spacecraft support systems: computer and electric busses, consumables (such as fuel and coolant), solar arrays, the Telescope Sun Shield, and two S-band antennas. Behind these units is the Telescope Tube, a long, hollow carbon fibre structure which provides exact spacing between the mirrors and their detection equipment. This section also hosts outgassing equipment on its exterior, which helps remove any contaminants from the interior of the satellite. At the aft end of spacecraft is the Focal Plane Assembly, which supports the Focal Plane Platform (carrying the cameras and spectrometers) and the data-handling, power distribution, and radiator assemblies.
Instruments
European Photon Imaging Cameras
The three European Photon Imaging Cameras (EPIC) are the primary instruments aboard XMM-Newton. The system is composed of two MOS–CCD cameras and a single pn-CCD camera, with a total field of view of 30 arcminutes and an energy sensitivity range between (). Each camera contains a six-position filter wheel, with three types of X-ray-transparent filters, a fully open and a fully closed position; each also contains a radioactive source used for internal calibration. The cameras can be independently operated in a variety of modes, depending on the image sensitivity and speed needed, as well as the intensity of the target.
The two MOS-CCD cameras are used to detect low-energy X-rays. Each camera is composed of seven silicon chips (one in the centre and six circling it), with each chip containing a matrix of 600 × 600 pixels, giving the camera a total resolution of about 2.5 megapixels. As discussed above, each camera has a large adjacent radiator which cools the instrument to an operating temperature of . They were developed and built by the University of Leicester Space Research Centre and EEV Ltd.
The pn-CCD camera is used to detect high-energy X-rays, and is composed of a single silicon chip with twelve individual embedded CCDs. Each CCD is 64 × 189 pixels, for a total capacity of 145,000 pixels. At the time of its construction, the pn-CCD camera on XMM-Newton was the largest such device ever made, with a sensitive area of . A radiator cools the camera to . This system was made by the Astronomisches Institut Tübingen, the Max Planck Institute for Extraterrestrial Physics, and PNSensor, all of Germany.
The EPIC system records three types of data about every X-ray that is detected by its CCD cameras. The time that the X-ray arrives allows scientists to develop light curves, which projects the number of X-rays that arrive over time and shows changes in the brightness of the target. Where the X-ray hits the camera allows for a visible image to be developed of the target. The amount of energy carried by the X-ray can also be detected and helps scientists to determine the physical processes occurring at the target, such as its temperature, its chemical make-up, and what the environment is like between the target and the telescope.
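Because EPIC data are essentially an event list, each detected photon can be thought of as a record of (time, position, energy), and the light curves, images, and spectra described above are simply one- or two-dimensional histograms over those columns. The Python sketch below is purely illustrative (it is not the actual XMM-Newton science analysis pipeline, and the event list is randomly generated), but it shows how the three data products can be built from such a list:
import numpy as np
# Illustrative only: a fake EPIC-style event list, one row per detected photon.
rng = np.random.default_rng(0)
n_events = 10_000
time = np.sort(rng.uniform(0.0, 1_000.0, n_events))   # arrival times in seconds
x = rng.integers(0, 600, n_events)                     # detector column of each hit
y = rng.integers(0, 600, n_events)                     # detector row of each hit
energy = rng.uniform(0.15, 15.0, n_events)             # photon energy in keV
# Light curve: counts per 10-second bin, showing brightness changes over time.
light_curve, _ = np.histogram(time, bins=np.arange(0.0, 1_001.0, 10.0))
# Image: counts per detector pixel, showing where each X-ray hit the camera.
image, _, _ = np.histogram2d(x, y, bins=600)
# Spectrum: counts per energy bin, showing how much energy the X-rays carried.
spectrum, _ = np.histogram(energy, bins=np.linspace(0.15, 15.0, 100))
print(int(light_curve.sum()), int(image.sum()), int(spectrum.sum()))  # each equals n_events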
Reflection Grating Spectrometers
The Reflection Grating Spectrometers (RGS) are composed of two Focal Plane Cameras and their associated Reflection Grating Arrays. This system is used to build X-ray spectral data and can determine the elements present in the target, as well as the temperature, quantity and other characteristics of those elements. The RGS system operates in the () range, which allows detection of carbon, nitrogen, oxygen, neon, magnesium, silicon and iron.
The Focal Plane Cameras each consist of nine MOS-CCD devices mounted in a row and following a curve called a Rowland circle. Each CCD contains 384 × 1024 pixels, for a total resolution of more than 3.5 megapixels. The total width and length of the CCD array were dictated by the size of the RGS spectrum and the wavelength range, respectively. Each CCD array is surrounded by a relatively massive wall, providing heat conduction and radiation shielding. Two-stage radiators cool the cameras to an operating temperature of . The camera systems were a joint effort between SRON, the Paul Scherrer Institute, and MSSL, with EEV Ltd and Contraves Space providing hardware.
The Reflection Grating Arrays are attached to two of the primary telescopes. They allow approximately 50% of the incoming X-rays to pass unperturbed to the EPIC system, while redirecting the other 50% onto the Focal Plane Cameras. Each RGA was designed to contain 182 identical gratings, though a fabrication error left one with only 181. Because the telescope mirrors have already focused the X-rays to converge at the focal point, each grating has the same angle of incidence, and as with the Focal Plane Cameras, each grating array conforms to a Rowland circle. This configuration minimises focal aberrations. Each grating is composed of thick silicon carbide substrate covered with a gold film, and is supported by five beryllium stiffeners. The gratings contain a large number of grooves, which actually perform the X-ray deflection; each grating contains an average of 646 grooves per millimetre. The RGAs were built by Columbia University.
Optical Monitor
The Optical Monitor (OM) is a Ritchey–Chrétien optical/ultraviolet telescope designed to provide simultaneous observations alongside the spacecraft's X-ray instruments. The OM is sensitive between nanometres in a 17 × 17 arcminute square field of view co-aligned with the centre of the X-ray telescope's field of view. It has a focal length of and a focal ratio of ƒ/12.7.
The instrument is composed of the Telescope Module, containing the optics, detectors, processing equipment, and power supply; and the Digital Electronics Module, containing the instrument control unit and data processing units. Incoming light is directed into one of two fully redundant detector systems. The light passes through an 11-position filter wheel (one opaque to block light, six broad band filters, one white light filter, one magnifier, and two grisms), then through an intensifier which amplifies the light by one million times, then onto the CCD sensor. The CCD is 384 × 288 pixels in size, of which 256 × 256 pixels are used for observations; each pixel is further subsampled into 8 × 8 pixels, resulting in a final product that is 2048 × 2048 in size. The Optical Monitor was built by the Mullard Space Science Laboratory with contributions from organisations in the United States and Belgium.
Telescopes
Feeding the EPIC and RGS systems are three telescopes designed specifically to direct X-rays into the spacecraft's primary instruments. The telescope assemblies each have a diameter of , are in length, and have a base weight of . The two telescopes with Reflection Grating Arrays weigh an additional . Components of the telescopes include (from front to rear) the mirror assembly door, entrance and X-ray baffles, mirror module, electron deflector, a Reflection Grating Array in two of the assemblies, and exit baffle.
Each telescope consists of 58 cylindrical, nested Wolter Type-1 mirrors developed by Media Lario of Italy, each long and ranging in diameter from , producing a total collecting area of at 1.5 keV and at 8 keV. The mirrors range from thick for the innermost mirror to thick for the outermost mirror, and the separation between each mirror ranges from from innermost to outermost. Each mirror was built by vapour-depositing a 250 nm layer of gold reflecting surface onto a highly polished aluminium mandrel, followed by electroforming a monolithic nickel support layer onto the gold. The finished mirrors were glued into the grooves of an Inconel spider, which keeps them aligned to within the five-micron tolerance required to achieve adequate X-ray resolution. The mandrels were manufactured by Carl Zeiss AG, and the electroforming and final assembly were performed by Media Lario with contributions from Kayser-Threde.
Subsystems
Attitude & Orbit Control System
Spacecraft three-axis attitude control is handled by the Attitude & Orbit Control System (AOCS), composed of four reaction wheels, four inertial measurement units, two star trackers, three fine Sun sensors, and three Sun acquisition sensors. The AOCS was provided by Matra Marconi Space of the United Kingdom.
Coarse spacecraft orientation and orbit maintenance is provided by two sets of four hydrazine thrusters (primary and backup). The hydrazine thrusters were built by DASA-RI of Germany.
The AOCS was upgraded in 2013 with a software patch ('4WD') to control attitude using the three prime reaction wheels plus the fourth, spare wheel, which had been unused since launch, with the aim of saving propellant to extend the spacecraft's lifetime. In 2019 the fuel was predicted to last until 2030.
Power systems
Primary power for XMM-Newton is provided by two fixed solar arrays. The arrays are composed of six panels measuring for a total of and a mass of . At launch, the arrays provided 2,200 W of power, and were expected to provide 1,600 W after ten years of operation. Deployment of each array took four minutes. The arrays were provided by Fokker Space of the Netherlands.
When direct sunlight is unavailable, power is provided by two nickel–cadmium batteries providing 24 A·h and weighing each. The batteries were provided by SAFT of France.
Radiation Monitor System
The cameras are accompanied by the EPIC Radiation Monitor System (ERMS), which measures the radiation environment surrounding the spacecraft; specifically, the ambient proton and electron flux. This provides warning of damaging radiation events to allow for automatic shut-down of the sensitive camera CCDs and associated electronics. The ERMS was built by the Centre d'Etude Spatiale des Rayonnements of France.
Visual Monitoring Cameras
The Visual Monitoring Cameras (VMC) on the spacecraft were added to monitor the deployment of the solar arrays and the sun shield, and have additionally provided images of the thrusters firing and outgassing of the Telescope Tube during early operations. Two VMCs were installed on the Focal Plane Assembly looking forward. The first is FUGA-15, a black and white camera with high dynamic range and 290 × 290 pixel resolution. The second is IRIS-1, a colour camera with a variable exposure time and 400 × 310 pixel resolution. Both cameras measure and weigh . They use active pixel sensors, a technology that was new at the time of XMM-Newton's development. The cameras were developed by and IMEC, both of Belgium.
Ground systems
XMM-Newton mission control is located at the European Space Operations Centre (ESOC) in Darmstadt, Germany. Two ground stations, located in Perth and Kourou, are used to maintain continuous contact with the spacecraft through most of its orbit. Back-up ground stations are located in Villafranca del Castillo, Santiago, and Dongara. Because XMM-Newton contains no on-board data storage, science data is transmitted to these ground stations in real time.
Data is then forwarded to the European Space Astronomy Centre Science Operations Centre in Villafranca del Castillo, Spain, where pipeline processing has been performed since March 2012. Data is archived at the ESAC Science Data Centre, and distributed to mirror archives at the Goddard Space Flight Center and the XMM-Newton Survey Science Centre (SSC) at the Institut de Recherche en Astrophysique et Planétologie. Prior to June 2013, the SSC was operated by the University of Leicester, but operations were transferred due to a withdrawal of funding by the United Kingdom.
Observations and discoveries
The space observatory was used to discover the galaxy cluster XMMXCS 2215-1738, 10 billion light years away from Earth.
The object SCP 06F6, discovered by the Hubble Space Telescope (HST) in February 2006, was observed by XMM-Newton in early August 2006 and appeared to show an X-ray glow around it two orders of magnitude more luminous than that of supernovae.
In June 2011, a team from the University of Geneva, Switzerland, reported XMM-Newton seeing a flare that lasted four hours at a peak intensity of 10,000 times the normal rate, from an observation of Supergiant Fast X-ray Transient IGR J18410-0535, where a blue supergiant star shed a plume of matter that was partly ingested by a smaller companion neutron star with accompanying X-ray emissions.
In February 2013 it was announced that XMM-Newton, together with NuSTAR, had for the first time measured the spin rate of a supermassive black hole, by observing the black hole at the core of the galaxy NGC 1365. At the same time, the observations verified the model that explains the distortion of X-rays emitted from a black hole.
In February 2014, separate analyses of the X-ray spectra observed by XMM-Newton reported a monochromatic signal around 3.5 keV. The signal comes from several different galaxy clusters, and several dark matter scenarios could account for such a line: for example, a 3.5 keV candidate annihilating into two photons, or a 7 keV dark matter particle decaying into a photon and a neutrino.
In June 2021, one of the largest X-ray surveys using the European Space Agency's XMM-Newton space observatory published initial findings, mapping the growth of 12,000 supermassive black holes at the cores of galaxies and galaxy clusters.
| Technology | Space-based observatories | null |
227969 | https://en.wikipedia.org/wiki/Very%20Large%20Array | Very Large Array | The Karl G. Jansky Very Large Array (VLA) is a centimeter-wavelength radio astronomy observatory in the southwestern United States built in the 1970s. It lies in central New Mexico on the Plains of San Agustin, between the towns of Magdalena and Datil, approximately west of Socorro. The VLA comprises twenty-eight 25-meter radio telescopes (twenty-seven of which are operational while one is always rotating through maintenance) deployed in a Y-shaped array and all the equipment, instrumentation, and computing power to function as an interferometer. Each of the massive telescopes is mounted on double parallel railroad tracks, so the radius and density of the array can be transformed to adjust the balance between its angular resolution and its surface brightness sensitivity. Astronomers using the VLA have made key observations of black holes and protoplanetary disks around young stars, discovered magnetic filaments and traced complex gas motions at the Milky Way's center, probed the Universe's cosmological parameters, and provided new knowledge about the physical mechanisms that produce radio emission.
The VLA stands at an elevation of above sea level. It is a component of the National Radio Astronomy Observatory (NRAO). The NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
Characteristics
The radio telescope comprises 27 independent antennas in use at a given time plus one spare, each of which has a dish diameter of 25 meters (82 feet) and weighs . The antennas are distributed along the three arms of a track laid out in a wye (or Y) configuration (each arm measures long). Using the rail tracks that follow each of these arms—and that, at one point, intersect with U.S. Route 60 at a level crossing—and a specially designed lifting locomotive ("Hein's Trein"), the antennas can be physically relocated to a number of prepared positions, allowing aperture synthesis interferometry with up to 351 independent baselines: in essence, the array acts as a single antenna with a variable diameter. The angular resolution that can be reached is between 0.2 and 0.04 arcseconds.
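The figure of 351 independent baselines follows directly from counting antenna pairs: N antennas give N(N - 1)/2 pairs. A minimal Python sketch (purely illustrative, not NRAO software) confirms the count for the 27 antennas in use at any one time:
from itertools import combinations
# Each pair of antennas forms one baseline; 27 antennas give 27 * 26 / 2 pairs.
n_antennas = 27
baselines = list(combinations(range(n_antennas), 2))
print(len(baselines))                      # 351
print(n_antennas * (n_antennas - 1) // 2)  # 351, the same count in closed form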
There are four commonly used configurations, designated A (the largest) through D (the tightest, when all the dishes are within of the center point). The observatory normally cycles through all the various possible configurations (including several hybrids) every 16 months; the antennas are moved every three to four months. Moves to smaller configurations are done in two stages, first shortening the east and west arms and later shortening the north arm. This allows for a short period of improved imaging of extremely northerly or southerly sources.
The frequency coverage is to (400 cm to 0.7 cm).
The Pete V. Domenici Science Operations Center (DSOC) for the VLA is located on the campus of the New Mexico Institute of Mining and Technology in Socorro, New Mexico. The DSOC also serves as the control center for the Very Long Baseline Array (VLBA), a VLBI array of ten 25-meter dishes located from Hawaii in the west to the U.S. Virgin Islands in the east that constitutes the world's largest dedicated, full-time astronomical instrument.
Upgrade and renaming
In 2011, a decade-long upgrade project resulted in the VLA expanding its technical capacities by factors of up to 8,000. The 1970s-era electronics were replaced with state-of-the-art equipment. To reflect this increased capacity, VLA officials asked for input from both the scientific community and the public in coming up with a new name for the array, and in January 2012 it was announced that the array would be renamed the "Karl G. Jansky Very Large Array". On March 31, 2012, the VLA was officially renamed in a ceremony inside the Antenna Assembly Building.
Key science
The VLA is a multi-purpose instrument designed to allow investigations of many astronomical objects, including radio galaxies, quasars, pulsars, supernova remnants, gamma-ray bursts, radio-emitting stars, the sun and planets, astrophysical masers, black holes, and the hydrogen gas that constitutes a large portion of the Milky Way galaxy as well as external galaxies. In 1989 the VLA was used to receive radio communications from the Voyager 2 spacecraft as it flew by Neptune. A search of the galaxies M31 and M32 was conducted in December 2014 through January 2015 with the intent of quickly searching trillions of systems for extremely powerful signals from advanced civilizations.
It has been used to carry out several large surveys of radio sources, including the NRAO VLA Sky Survey and Faint Images of the Radio Sky at Twenty-Centimeters.
In September 2017 the VLA Sky Survey (VLASS) began. This survey will cover the entire sky visible to the VLA (80% of the Earth's sky) in three full scans. Astronomers expect to find about 10 million new objects with the survey — four times more than what is presently known.
History
The driving force for the development of the VLA was David S. Heeschen. He is noted as having "sustained and guided the development of the best radio astronomy observatory in the world for sixteen years." Congressional approval for the VLA project was given in August 1972, and construction began some six months later. The first antenna was put into place in September 1975 and the complex was formally inaugurated in 1980, after a total investment of . It was the largest configuration of radio telescopes in the world.
In 1997 the VLA featured in Contact, the film adaptation of the book by the same name written by Carl Sagan.
With a view to upgrading the venerable 1970s technology with which the VLA was built, the VLA has evolved into the Expanded Very Large Array (EVLA). The upgrade has enhanced the instrument's sensitivity, frequency range, and resolution with the installation of new hardware at the San Agustin site. A second phase of this upgrade may add up to eight additional antennae in other parts of the state of New Mexico, up to away, if funded.
Magdalena Ridge Observatory is a new observatory a few miles south of the VLA, and is run by VLA collaborator New Mexico Tech. Under construction at this site is a ten-element optical interferometer.
In June 2023, the National Radio Astronomy Observatory announced that they will be replacing the ageing antennae with 160 new ones at the site, plus 100 auxiliary antennae located across North America. The project, estimated to cost about $2 billion to build and around $90 million to run, will vastly expand the capabilities of the current installation and increase the frequency sensitivity from 50 GHz to over 100 GHz. The facility will be renamed the "Next Generation Very Large Array".
Tourism
The VLA is located between the towns of Magdalena and Datil, about west of Socorro, New Mexico. U.S. Route 60 passes east–west through the complex.
The VLA site is open to visitors with paid admission. A visitor center houses a small museum, theater, and a gift shop. A self-guided walking tour is available, as the visitor center is not staffed continuously. Visitors unfamiliar with the area are warned that there is little food on site, or in the sparsely populated surroundings; those unfamiliar with the high desert are warned that the weather is quite variable, and can remain cold into April. For those who cannot travel to the site, the NRAO created a virtual tour of the VLA called the VLA Explorer.
The VLA site was previously closed to visitors from March 2020 through October 2022.
| Technology | Ground-based observatories | null |
228015 | https://en.wikipedia.org/wiki/Viterbi%20algorithm | Viterbi algorithm | The Viterbi algorithm is a dynamic programming algorithm for obtaining the maximum a posteriori probability estimate of the most likely sequence of hidden states—called the Viterbi path—that results in a sequence of observed events. This is done especially in the context of Markov information sources and hidden Markov models (HMM).
The algorithm has found universal application in decoding the convolutional codes used in both CDMA and GSM digital cellular, dial-up modems, satellite, deep-space communications, and 802.11 wireless LANs. It is now also commonly used in speech recognition, speech synthesis, diarization, keyword spotting, computational linguistics, and bioinformatics. For example, in speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.
History
The Viterbi algorithm is named after Andrew Viterbi, who proposed it in 1967 as a decoding algorithm for convolutional codes over noisy digital communication links. It has, however, a history of multiple invention, with at least seven independent discoveries, including those by Viterbi, Needleman and Wunsch, and Wagner and Fischer. It was introduced to natural language processing as a method of part-of-speech tagging as early as 1987.
Viterbi path and Viterbi algorithm have become standard terms for the application of dynamic programming algorithms to maximization problems involving probabilities.
For example, in statistical parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is commonly called the "Viterbi parse". Another application is in target tracking, where the track is computed that assigns a maximum likelihood to a sequence of observations.
Algorithm
Given a hidden Markov model with a set of hidden states S and a sequence of T observations o_0, o_1, ..., o_(T-1), the Viterbi algorithm finds the most likely sequence of states that could have produced those observations. At each time step t, the algorithm solves the subproblem where only the observations up to o_t are considered.
Two matrices of size T × |S| are constructed:
prob[t][s] contains the maximum probability of ending up at state s at observation t, out of all possible sequences of states leading up to it.
prev[t][s] tracks the previous state that was used before s in this maximum-probability state sequence.
Let init(s) and trans(r, s) be the initial and transition probabilities respectively, and let emit(s, o) be the probability of observing o at state s. Then the values of prob are given by the recurrence relation
prob[0][s] = init(s) × emit(s, o_0)
prob[t][s] = max over r in S of prob[t - 1][r] × trans(r, s) × emit(s, o_t), for t > 0.
The formula for prev[t][s] is identical for t > 0, except that the maximum is replaced by the state r that attains it, and prev[0][s] is left undefined.
The Viterbi path can be found by selecting the state s with the maximum value of prob[T - 1][s] at the final timestep, and following the prev entries in reverse.
Pseudocode
function Viterbi(states, init, trans, emit, obs) is
input states: S hidden states
input init: initial probabilities of each state
input trans: S × S transition matrix
input emit: S × O emission matrix
input obs: sequence of T observations
prob ← T × S matrix of zeroes
prev ← empty T × S matrix
for each state s in states do
prob[0][s] = init[s] * emit[s][obs[0]]
for t = 1 to T - 1 inclusive do // t = 0 has been dealt with already
for each state s in states do
for each state r in states do
new_prob ← prob[t - 1][r] * trans[r][s] * emit[s][obs[t]]
if new_prob > prob[t][s] then
prob[t][s] ← new_prob
prev[t][s] ← r
path ← empty array of length T
path[T - 1] ← the state s with maximum prob[T - 1][s]
for t = T - 2 to 0 inclusive do
path[t] ← prev[t + 1][path[t + 1]]
return path
end
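The pseudocode above translates almost line for line into Python. The sketch below is one illustrative implementation; the dictionary-based layout of init, trans, and emit is an assumption made to match the example in the next section, not a fixed interface:
def viterbi(states, init, trans, emit, obs):
    """Return the most likely sequence of hidden states for the observations."""
    T = len(obs)
    prob = [{s: 0.0 for s in states} for _ in range(T)]   # max path probabilities
    prev = [{s: None for s in states} for _ in range(T)]  # back-pointers
    # t = 0: initial probability times the emission probability of the first observation.
    for s in states:
        prob[0][s] = init[s] * emit[s][obs[0]]
    # t > 0: extend the best path into each state s from every possible predecessor r.
    for t in range(1, T):
        for s in states:
            for r in states:
                new_prob = prob[t - 1][r] * trans[r][s] * emit[s][obs[t]]
                if new_prob > prob[t][s]:
                    prob[t][s] = new_prob
                    prev[t][s] = r
    # Backtrack from the most probable final state.
    path = [max(prob[T - 1], key=prob[T - 1].get)]
    for t in range(T - 1, 0, -1):
        path.insert(0, prev[t][path[0]])
    return path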
The time complexity of the algorithm is O(T × |S|²). If it is known which state transitions have non-zero probability, an improved bound can be found by iterating in the inner loop over only those states r which link to s. Then, using amortized analysis, one can show that the complexity is O(T × (|S| + |E|)), where |E| is the number of edges in the graph, i.e. the number of non-zero entries in the transition matrix.
Example
A doctor wishes to determine whether patients are healthy or have a fever. The only information the doctor can obtain is by asking patients how they feel. The patients may report that they either feel normal, dizzy, or cold.
It is believed that the health condition of the patients operates as a discrete Markov chain. There are two states, "healthy" and "fever", but the doctor cannot observe them directly; they are hidden from the doctor. On each day, the chance that a patient tells the doctor "I feel normal", "I feel cold", or "I feel dizzy", depends only on the patient's health condition on that day.
The observations (normal, cold, dizzy) along with the hidden states (healthy, fever) form a hidden Markov model (HMM). From past experience, the probabilities of this model have been estimated as:
init = {"Healthy": 0.6, "Fever": 0.4}
trans = {
"Healthy": {"Healthy": 0.7, "Fever": 0.3},
"Fever": {"Healthy": 0.4, "Fever": 0.6},
}
emit = {
"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
"Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
}
In this code, init represents the doctor's belief about how likely the patient is to be healthy initially. Note that the particular probability distribution used here is not the equilibrium one, which would be {'Healthy': 0.57, 'Fever': 0.43} according to the transition probabilities. The transition probabilities trans represent the change of health condition in the underlying Markov chain. In this example, a patient who is healthy today has only a 30% chance of having a fever tomorrow. The emission probabilities emit represent how likely each possible observation (normal, cold, or dizzy) is, given the underlying condition (healthy or fever). A patient who is healthy has a 50% chance of feeling normal; one who has a fever has a 60% chance of feeling dizzy.
A particular patient visits three days in a row, and reports feeling normal on the first day, cold on the second day, and dizzy on the third day.
Firstly, the probabilities of being healthy or having a fever on the first day are calculated. The probability that a patient will be healthy on the first day and report feeling normal is 0.6 × 0.5 = 0.3. Similarly, the probability that a patient will have a fever on the first day and report feeling normal is 0.4 × 0.1 = 0.04.
The probabilities for each of the following days can be calculated from the previous day directly. For example, the highest chance of being healthy on the second day and reporting feeling cold, after reporting feeling normal on the first day, is the maximum of 0.3 × 0.7 × 0.4 = 0.084 and 0.04 × 0.4 × 0.4 = 0.0064. This suggests it is more likely that the patient was healthy for both of those days, rather than having a fever and recovering.
The rest of the probabilities are summarised in the following table:
Observation (day)   Healthy    Fever
normal (day 1)      0.3        0.04
cold (day 2)        0.084      0.027
dizzy (day 3)       0.00588    0.01512
From the table, it can be seen that the patient most likely had a fever on the third day. Furthermore, there exists a sequence of states ending on "fever", of which the probability of producing the given observations is 0.01512. This sequence is precisely (healthy, healthy, fever), which can be found by tracing back which states were used when calculating the maxima (which happens to be the best guess from each day but will not always be). In other words, given the observed activities, the patient was most likely to have been healthy on the first day and also on the second day (despite feeling cold that day), and only to have contracted a fever on the third day.
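Running the viterbi() sketch given earlier on this model (restated below so the snippet is self-contained) reproduces the same result; the call is illustrative only:
states = ["Healthy", "Fever"]
init = {"Healthy": 0.6, "Fever": 0.4}
trans = {
    "Healthy": {"Healthy": 0.7, "Fever": 0.3},
    "Fever": {"Healthy": 0.4, "Fever": 0.6},
}
emit = {
    "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
    "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
}
print(viterbi(states, init, trans, emit, ["normal", "cold", "dizzy"]))
# ['Healthy', 'Healthy', 'Fever'] with probability 0.3 * (0.7 * 0.4) * (0.3 * 0.6) = 0.01512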
The operation of Viterbi's algorithm can be visualized by means of a trellis diagram. The Viterbi path is essentially the shortest path through this trellis.
Extensions
A generalization of the Viterbi algorithm, termed the max-sum algorithm (or max-product algorithm) can be used to find the most likely assignment of all or some subset of latent variables in a large number of graphical models, e.g. Bayesian networks, Markov random fields and conditional random fields. The latent variables need, in general, to be connected in a way somewhat similar to a hidden Markov model (HMM), with a limited number of connections between variables and some type of linear structure among the variables. The general algorithm involves message passing and is substantially similar to the belief propagation algorithm (which is the generalization of the forward-backward algorithm).
With an algorithm called iterative Viterbi decoding, one can find the subsequence of an observation that matches best (on average) to a given hidden Markov model. This algorithm was proposed by Qi Wang et al. for turbo codes. Iterative Viterbi decoding works by iteratively invoking a modified Viterbi algorithm, re-estimating the score for a filler until convergence.
An alternative algorithm, the Lazy Viterbi algorithm, has been proposed. For many applications of practical interest, under reasonable noise conditions, the lazy decoder (using Lazy Viterbi algorithm) is much faster than the original Viterbi decoder (using Viterbi algorithm). While the original Viterbi algorithm calculates every node in the trellis of possible outcomes, the Lazy Viterbi algorithm maintains a prioritized list of nodes to evaluate in order, and the number of calculations required is typically fewer (and never more) than the ordinary Viterbi algorithm for the same result. However, it is not so easy to parallelize in hardware.
Soft output Viterbi algorithm
The soft output Viterbi algorithm (SOVA) is a variant of the classical Viterbi algorithm.
SOVA differs from the classical Viterbi algorithm in that it uses a modified path metric which takes into account the a priori probabilities of the input symbols, and produces a soft output indicating the reliability of the decision.
The first step in the SOVA is the selection of the survivor path, passing through one unique node at each time instant, t. Since each node has two branches converging at it (with one branch being chosen to form the survivor path, and the other being discarded), the difference in the branch metrics (or cost) between the chosen and discarded branches indicates the amount of error in the choice.
This cost is accumulated over the entire sliding window (usually at least five constraint lengths) to indicate the soft-output measure of reliability of the hard-bit decision of the Viterbi algorithm.
| Mathematics | Probability | null |
228058 | https://en.wikipedia.org/wiki/Breeder%20reactor | Breeder reactor | A breeder reactor is a nuclear reactor that generates more fissile material than it consumes. These reactors can be fueled with more-commonly available isotopes of uranium and thorium, such as uranium-238 and thorium-232, as opposed to the rare uranium-235 which is used in conventional reactors. These materials are called fertile materials since they can be bred into fuel by these breeder reactors.
Breeder reactors achieve this because their neutron economy is high enough to create more fissile fuel than they use. These extra neutrons are absorbed by the fertile material that is loaded into the reactor along with fissile fuel. This irradiated fertile material in turn transmutes into fissile material which can undergo fission reactions.
Breeders were at first found attractive because they made more complete use of uranium fuel than light-water reactors, but interest declined after the 1960s as more uranium reserves were found and new methods of uranium enrichment reduced fuel costs.
Types
Many types of breeder reactor are possible:
A "breeder" is simply a nuclear reactor designed for very high neutron economy with an associated conversion rate higher than 1.0. In principle, almost any reactor design could be tweaked to become a breeder. For example, the light-water reactor, a heavily moderated thermal design, evolved into the fast reactor concept, using light water in a low-density supercritical form to increase the neutron economy enough to allow breeding.
Aside from water-cooled, there are many other types of breeder reactor currently envisioned as possible. These include molten-salt cooled, gas cooled, and liquid-metal cooled designs in many variations. Almost any of these basic design types may be fueled by uranium, plutonium, many minor actinides, or thorium, and they may be designed for many different goals, such as creating more fissile fuel, long-term steady-state operation, or active burning of nuclear wastes.
Extant reactor designs are sometimes divided into two broad categories based upon their neutron spectrum, which generally separates those designed to use primarily uranium and transuranics from those designed to use thorium and avoid transuranics. These designs are:
Fast breeder reactors (FBRs) which use 'fast' (i.e. unmoderated) neutrons to breed fissile plutonium (and possibly higher transuranics) from fertile uranium-238. The fast spectrum is flexible enough that it can also breed fissile uranium-233 from thorium, if desired.
Thermal breeder reactors which use 'thermal-spectrum' or 'slow' (i.e. moderated) neutrons to breed fissile uranium-233 from thorium. Due to the behavior of the various nuclear fuels, a thermal breeder is thought commercially feasible only with thorium fuel, which avoids the buildup of the heavier transuranics.
Fast breeder reactor
All current large-scale FBR power stations were liquid metal fast breeder reactors (LMFBR) cooled by liquid sodium. These have been of one of two designs:
Loop type, in which the primary coolant is circulated through primary heat exchangers outside the reactor tank (but inside the biological shield, owing to radioactive sodium in the primary coolant)
Pool type, in which the primary heat exchangers and pumps are immersed in the reactor tank
There are only two commercially operating breeder reactors: the BN-600 reactor, at 560 MWe, and the BN-800 reactor, at 880 MWe. Both are Russian sodium-cooled reactors. The designs use liquid metal as the primary coolant, to transfer heat from the core to steam used to power the electricity-generating turbines. FBRs have been built cooled by liquid metals other than sodium—some early FBRs used mercury; other experimental reactors have used a sodium-potassium alloy. Both have the advantage that they are liquids at room temperature, which is convenient for experimental rigs but less important for pilot or full-scale power stations.
Three of the proposed generation IV reactor types are FBRs:
Gas-cooled fast reactor cooled by helium.
Sodium-cooled fast reactor based on the existing LMFBR and integral fast reactor designs.
Lead-cooled fast reactor based on Soviet naval propulsion units.
FBRs usually use a mixed oxide fuel core of up to 20% plutonium dioxide (PuO2) and at least 80% uranium dioxide (UO2). Another fuel option is metal alloys, typically a blend of uranium, plutonium, and zirconium (used because it is "transparent" to neutrons). Enriched uranium can be used on its own.
Many designs surround the reactor core in a blanket of tubes that contain non-fissile uranium-238, which, by capturing fast neutrons from the reaction in the core, converts to fissile plutonium-239 (as is some of the uranium in the core), which is then reprocessed and used as nuclear fuel. Other FBR designs rely on the geometry of the fuel (which also contains uranium-238), arranged to attain sufficient fast neutron capture. The plutonium-239 (or the fissile uranium-235) fissile cross-section is much smaller in a fast spectrum than in a thermal spectrum, as is the ratio between the 239Pu/235U fission cross-section and the 238U absorption cross-section. This increases the concentration of 239Pu/235U needed to sustain a chain reaction, as well as the ratio of breeding to fission.
On the other hand, a fast reactor needs no moderator to slow down the neutrons at all, taking advantage of the fast neutrons producing a greater number of neutrons per fission than slow neutrons. For this reason ordinary liquid water, being a moderator and neutron absorber, is an undesirable primary coolant for fast reactors. Because large amounts of water in the core are required to cool the reactor, the yield of neutrons and therefore breeding of 239Pu are strongly affected. Theoretical work has been done on reduced moderation water reactors, which may have a sufficiently fast spectrum to provide a breeding ratio slightly over 1. This would likely result in an unacceptable power derating and high costs in a liquid-water-cooled reactor, but the supercritical water coolant of the supercritical water reactor (SCWR) has sufficient heat capacity to allow adequate cooling with less water, making a fast-spectrum water-cooled reactor a practical possibility.
The type of coolants, temperatures, and fast neutron spectrum puts the fuel cladding material (normally austenitic stainless or ferritic-martensitic steels) under extreme conditions. The understanding of the radiation damage, coolant interactions, stresses, and temperatures are necessary for the safe operation of any reactor core. All materials used to date in sodium-cooled fast reactors have known limits. Oxide dispersion-strengthened alloy steel is viewed as the long-term radiation resistant fuel-cladding material that can overcome the shortcomings of today's material choices.
Integral fast reactor
One design of fast neutron reactor, specifically conceived to address the waste disposal and plutonium issues, was the integral fast reactor (IFR, also known as an integral fast breeder reactor, although the original reactor was designed to not breed a net surplus of fissile material).
To solve the waste disposal problem, the IFR had an on-site electrowinning fuel-reprocessing unit that recycled the uranium and all the transuranics (not just plutonium) via electroplating, leaving just short-half-life fission products in the waste. Some of these fission products could later be separated for industrial or medical uses and the rest sent to a waste repository. The IFR pyroprocessing system uses molten cadmium cathodes and electrorefiners to reprocess metallic fuel directly on-site at the reactor. Such systems co-mingle all the minor actinides with both uranium and plutonium. The systems are compact and self-contained, so that no plutonium-containing material needs to be transported away from the site of the breeder reactor. Breeder reactors incorporating such technology would most likely be designed with breeding ratios very close to 1.00, so that after an initial loading of enriched uranium and/or plutonium fuel, the reactor would then be refueled only with small deliveries of natural uranium. A quantity of natural uranium equivalent to a block about the size of a milk crate delivered once per month would be all the fuel such a 1 gigawatt reactor would need. Such self-contained breeders are currently envisioned as the final self-contained and self-supporting ultimate goal of nuclear reactor designers. The project was canceled in 1994 by United States Secretary of Energy Hazel O'Leary.
Other fast reactors
The first fast reactor built and operated was the Los Alamos Plutonium Fast Reactor ("Clementine") in Los Alamos, NM. Clementine was fueled by Ga-stabilized delta-phase Pu and cooled with mercury. It contained a 'window' of Th-232 in anticipation of breeding experiments, but no reports were made available regarding this feature.
Another proposed fast reactor is a fast molten salt reactor, in which the molten salt's moderating properties are insignificant. This is typically achieved by replacing the light metal fluorides (e.g. LiF, BeF2) in the salt carrier with heavier metal chlorides (e.g., KCl, RbCl, ZrCl4).
Several prototype FBRs have been built, ranging in electrical output from a few light bulbs' equivalent (EBR-I, 1951) to over 1,000 MWe. As of 2006, the technology is not economically competitive with thermal reactor technology, but India, Japan, China, South Korea, and Russia are all committing substantial research funds to further development of fast breeder reactors, anticipating that rising uranium prices will change this in the long term. Germany, in contrast, abandoned the technology due to safety concerns. The SNR-300 fast breeder reactor was completed after 19 years, despite cost overruns totalling 3.6 billion, only to be abandoned.
Thermal breeder reactor
The advanced heavy-water reactor is one of the few proposed large-scale uses of thorium. India is developing this technology, motivated by substantial thorium reserves; almost a third of the world's thorium reserves are in India, which lacks significant uranium reserves.
The third and final core of the Shippingport Atomic Power Station 60 MWe reactor was a light water thorium breeder, which began operating in 1977. It used pellets made of thorium dioxide and uranium-233 oxide; initially, the U-233 content of the pellets was 5–6% in the seed region, 1.5–3% in the blanket region, and none in the reflector region. It operated at 236 MWt, generating 60 MWe, and ultimately produced over 2.1 billion kilowatt hours of electricity. After five years, the core was removed and found to contain nearly 1.4% more fissile material than when it was installed, demonstrating that breeding from thorium had occurred.
A liquid fluoride thorium reactor is also planned as a thorium thermal breeder. Liquid-fluoride reactors may have attractive features, such as inherent safety, no need to manufacture fuel rods, and possibly simpler reprocessing of the liquid fuel. This concept was first investigated at the Oak Ridge National Laboratory Molten-Salt Reactor Experiment in the 1960s. From 2012 it became the subject of renewed interest worldwide.
Fuel resources
Breeder reactors could, in principle, extract almost all of the energy contained in uranium or thorium, decreasing fuel requirements by a factor of 100 compared to widely used once-through light water reactors, which extract less than 1% of the energy in the actinide metal (uranium or thorium) mined from the earth. The high fuel-efficiency of breeder reactors could greatly reduce concerns about fuel supply, energy used in mining, and storage of radioactive waste. With seawater uranium extraction (currently too expensive to be economical), there is enough fuel for breeder reactors to satisfy the world's energy needs for 5 billion years at 1983's total energy consumption rate, thus making nuclear energy effectively a renewable energy. In addition to seawater, the average crustal granite rocks contain significant quantities of uranium and thorium that with breeder reactors can supply abundant energy for the remaining lifespan of the sun on the main sequence of stellar evolution.
Nuclear waste
In broad terms, spent nuclear fuel has three main components. The first consists of fission products, the leftover fragments of fuel atoms after they have been split to release energy. Fission products come in dozens of elements and hundreds of isotopes, all of them lighter than uranium. The second main component of spent fuel is transuranics (atoms heavier than uranium), which are generated from uranium or heavier atoms in the fuel when they absorb neutrons but do not undergo fission. All transuranic isotopes fall within the actinide series on the periodic table, and so they are frequently referred to as the actinides. The largest component is the remaining uranium which is around 98.25% uranium-238, 1.1% uranium-235, and 0.65% uranium-236. The U-236 comes from the non-fission capture reaction where U-235 absorbs a neutron but releases only a high energy gamma ray instead of undergoing fission.
The physical behavior of the fission products is markedly different from that of the actinides. In particular, fission products do not undergo fission and therefore cannot be used as nuclear fuel. Indeed, because fission products are often neutron poisons (absorbing neutrons that could be used to sustain a chain reaction), fission products are viewed as nuclear 'ashes' left over from consuming fissile materials. Furthermore, only seven long-lived fission product isotopes have half-lives longer than a hundred years, which makes their geological storage or disposal less problematic than for transuranic materials.
With increased concerns about nuclear waste, breeding fuel cycles came under renewed interest as they can reduce actinide wastes, particularly plutonium and minor actinides. Breeder reactors are designed to fission the actinide wastes as fuel and thus convert them to more fission products. After spent nuclear fuel is removed from a light water reactor, it undergoes a complex decay profile as each nuclide decays at a different rate. There is a large gap in the decay half-lives of fission products compared to transuranic isotopes. If the transuranics are left in the spent fuel, after 1,000 to 100,000 years the slow decay of these transuranics would generate most of the radioactivity in that spent fuel. Thus, removing the transuranics from the waste eliminates much of the long-term radioactivity of spent nuclear fuel.
Today's commercial light-water reactors do breed some new fissile material, mostly in the form of plutonium. Because commercial reactors were never designed as breeders, they do not convert enough uranium-238 into plutonium to replace the uranium-235 consumed. Nonetheless, at least one-third of the power produced by commercial nuclear reactors comes from fission of plutonium generated within the fuel. Even with this level of plutonium consumption, light water reactors consume only part of the plutonium and minor actinides they produce, and nonfissile isotopes of plutonium build up, along with significant quantities of other minor actinides.
Breeding fuel cycles attracted renewed interest because of their potential to reduce actinide wastes, particularly various isotopes of plutonium and the minor actinides (neptunium, americium, curium, etc.). Since breeder reactors on a closed fuel cycle would use nearly all of the isotopes of these actinides fed into them as fuel, their fuel requirements would be reduced by a factor of about 100. The volume of waste they generate would be reduced by a factor of about 100 as well. While there is a huge reduction in the volume of waste from a breeder reactor, the activity of the waste is about the same as that produced by a light-water reactor.
Waste from a breeder reactor has a different decay behavior because it is made up of different materials: breeder reactor waste is mostly fission products, while light-water reactor waste is mostly unused uranium isotopes together with a large quantity of transuranics. In spent fuel from a light-water reactor that has been out of the reactor for longer than 100,000 years, the transuranics would be the main source of radioactivity. Eliminating them would eliminate much of the long-term radioactivity from the spent fuel.
In principle, breeder fuel cycles can recycle and consume all actinides, leaving only fission products. As the graphic in this section indicates, fission products have a peculiar "gap" in their aggregate half-lives, such that no fission products have a half-life between 91 and 200,000 years. As a result of this physical oddity, after several hundred years in storage, the activity of the radioactive waste from an FBR would quickly drop to the low level of the long-lived fission products. However, to obtain this benefit requires the highly efficient separation of transuranics from spent fuel. If the fuel reprocessing methods used leave a large fraction of the transuranics in the final waste stream, this advantage would be greatly reduced.
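The consequence of that half-life gap can be made concrete with a minimal decay sketch. The isotopes and half-lives below are approximate, representative values chosen for illustration, not a waste-inventory calculation: short-lived fission products largely disappear within a few centuries, the long-lived fission products barely decay at all, and the transuranics dominate the intermediate timescales.

```python
# Minimal sketch: fraction of a nuclide's initial atoms remaining after t years,
# for a few representative half-lives (approximate values, illustration only).
half_lives_years = {
    "Cs-137 (fission product)": 30.1,
    "Sr-90 (fission product)": 28.8,
    "Tc-99 (long-lived fission product)": 211_000,
    "Am-241 (transuranic)": 432,
    "Pu-239 (transuranic)": 24_100,
}

def fraction_remaining(half_life_years: float, t_years: float) -> float:
    """Simple exponential decay: N(t)/N0 = 0.5 ** (t / T_half)."""
    return 0.5 ** (t_years / half_life_years)

for t in (100, 1_000, 100_000):
    print(f"after {t} years:")
    for name, t_half in half_lives_years.items():
        print(f"  {name}: {fraction_remaining(t_half, t):.3e}")
```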
The FBR's fast neutrons can fission actinide nuclei with even numbers of both protons and neutrons. Such nuclei usually lack the low-speed "thermal neutron" resonances of fissile fuels used in LWRs. The thorium fuel cycle inherently produces lower levels of heavy actinides. The fertile material in the thorium fuel cycle has an atomic weight of 232, while the fertile material in the uranium fuel cycle has an atomic weight of 238. That mass difference means that thorium-232 requires six more neutron capture events per nucleus before the transuranic elements can be produced. In addition to this simple mass difference, the reactor gets two chances to fission the nuclei as the mass increases: first as the effective fuel nucleus U-233 and, after it absorbs two more neutrons, again as the fuel nucleus U-235.
A reactor whose main purpose is to destroy actinides rather than to increase fissile fuel stocks is sometimes known as a burner reactor. Both breeding and burning depend on good neutron economy, and many designs can do either. Breeding designs surround the core with a breeding blanket of fertile material. Waste burners surround the core with non-fertile wastes to be destroyed. Some designs add neutron reflectors or absorbers.
Design
Conversion ratio
One measure of a reactor's performance is the "conversion ratio", defined as the ratio of new fissile atoms produced to fissile atoms consumed. All proposed nuclear reactors except specially designed and operated actinide burners experience some degree of conversion. As long as there is any amount of a fertile material within the neutron flux of the reactor, some new fissile material is always created. When the conversion ratio is greater than 1, it is often called the "breeding ratio".
For example, commonly used light water reactors have a conversion ratio of approximately 0.6. Pressurized heavy-water reactors running on natural uranium have a conversion ratio of 0.8. In a breeder reactor, the conversion ratio is higher than 1. "Break-even" is achieved when the conversion ratio reaches 1.0 and the reactor produces as much fissile material as it uses.
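As a minimal illustration of the definition, the ratio can be computed directly; the numbers below simply restate the approximate figures quoted above and are not data for any particular reactor.

```python
def conversion_ratio(fissile_produced: float, fissile_consumed: float) -> float:
    """Ratio of new fissile atoms produced to fissile atoms consumed.
    A value above 1.0 (the 'breeding ratio') means the reactor breeds; 1.0 is break-even."""
    return fissile_produced / fissile_consumed

# Illustrative inputs in any consistent unit (e.g. kilograms of fissile material per cycle):
print(conversion_ratio(0.6, 1.0))   # ~0.6, typical light water reactor
print(conversion_ratio(0.8, 1.0))   # ~0.8, heavy-water reactor on natural uranium
print(conversion_ratio(1.2, 1.0))   # >1.0, a breeder
```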
Doubling time
The doubling time is the amount of time it would take for a breeder reactor to produce enough new fissile material to replace the original fuel and additionally produce an equivalent amount of fuel for another nuclear reactor. This was considered an important measure of breeder performance in early years, when uranium was thought to be scarce. However, since uranium is more abundant than thought in the early days of nuclear reactor development, and given the amount of plutonium available in spent reactor fuel, doubling time has become a less important metric in modern breeder-reactor design.
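A rough sketch of the simplest ("linear") form of this metric follows, with hypothetical numbers; real analyses distinguish several doubling-time definitions and account for out-of-core fuel inventories.

```python
def simple_doubling_time_years(initial_fissile_kg: float,
                               net_fissile_gain_kg_per_year: float) -> float:
    """Linear doubling time: years for the net surplus of bred fissile material
    to equal the reactor's initial fissile inventory (no compounding assumed)."""
    return initial_fissile_kg / net_fissile_gain_kg_per_year

# Hypothetical example: a 2,000 kg initial fissile load and a 100 kg/year net surplus
# give a doubling time of 20 years.
print(simple_doubling_time_years(2000.0, 100.0))
```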
Burnup
"Burnup" is a measure of how much energy has been extracted from a given mass of heavy metal in fuel, often expressed (for power reactors) in terms of gigawatt-days per ton of heavy metal. Burnup is an important factor in determining the types and abundances of isotopes produced by a fission reactor. Breeder reactors by design have high burnup compared to a conventional reactor, as breeder reactors produce more of their waste in the form of fission products, while most or all of the actinides are meant to be fissioned and destroyed.
In the past, breeder-reactor development focused on reactors with low breeding ratios, from 1.01 for the Shippingport Reactor running on thorium fuel and cooled by conventional light water to over 1.2 for the Soviet BN-350 liquid-metal-cooled reactor. Theoretical models of breeders with liquid sodium coolant flowing through tubes inside fuel elements ("tube-in-shell" construction) suggest breeding ratios of at least 1.8 are possible on an industrial scale. The Soviet BR-1 test reactor achieved a breeding ratio of 2.5 under non-commercial conditions.
Reprocessing
Fission of the nuclear fuel in any reactor unavoidably produces neutron-absorbing fission products. The fertile material from a breeder reactor then needs to be reprocessed to remove those neutron poisons. This step is required to fully utilize the ability to breed as much or more fuel than is consumed. All reprocessing can present a proliferation concern, since it can extract weapons-usable material from spent fuel. The most common reprocessing technique, PUREX, presents a particular concern since it was expressly designed to separate plutonium. Early proposals for the breeder-reactor fuel cycle posed an even greater proliferation concern because they would use PUREX to separate plutonium in a highly attractive isotopic form for use in nuclear weapons.
Several countries are developing reprocessing methods that do not separate the plutonium from the other actinides. For instance, the non-water-based pyrometallurgical electrowinning process, when used to reprocess fuel from an integral fast reactor, leaves large amounts of radioactive actinides in the reactor fuel. More conventional water-based reprocessing systems include SANEX, UNEX, DIAMEX, COEX, and TRUEX, and proposals to combine PUREX with those and other co-processes. All these systems have moderately better proliferation resistance than PUREX, though their adoption rate is low.
In the thorium cycle, thorium-232 breeds by converting first to protactinium-233, which then decays to uranium-233. If the protactinium remains in the reactor, small amounts of uranium-232 are also produced, which has the strong gamma emitter thallium-208 in its decay chain. Similar to uranium-fueled designs, the longer the fuel and fertile material remain in the reactor, the more of these undesirable elements build up. In the envisioned commercial thorium reactors, high levels of uranium-232 would be allowed to accumulate, leading to extremely high gamma-radiation doses from any uranium derived from thorium. These gamma rays complicate the safe handling of a weapon and the design of its electronics; this explains why uranium-233 has never been pursued for weapons beyond proof-of-concept demonstrations.
While the thorium cycle may be proliferation-resistant with regard to uranium-233 extraction from fuel (because of the presence of uranium-232), it poses a proliferation risk from an alternate route of uranium-233 extraction, which involves chemically extracting protactinium-233 and allowing it to decay to pure uranium-233 outside of the reactor. This process is an obvious chemical operation which is not required for normal operation of these reactor designs, but it could feasibly happen beyond the oversight of organizations such as the International Atomic Energy Agency (IAEA), and thus must be safeguarded against.
Production
Like many aspects of nuclear power, fast breeder reactors have been subject to much controversy over the years. In 2010 the International Panel on Fissile Materials said "After six decades and the expenditure of the equivalent of tens of billions of dollars, the promise of breeder reactors remains largely unfulfilled and efforts to commercialize them have been steadily cut back in most countries". In Germany, the United Kingdom, and the United States, breeder reactor development programs have been abandoned. The rationale for pursuing breeder reactors—sometimes explicit and sometimes implicit—was based on the following key assumptions:
It was expected that uranium would be scarce and high-grade deposits would quickly become depleted if fission power were deployed on a large scale; the reality, however, is that since the end of the Cold War, uranium has been much cheaper and more abundant than early designers expected.
It was expected that breeder reactors would quickly become economically competitive with the light-water reactors that dominate nuclear power today, but the reality is that their capital costs are at least 25% higher than those of water-cooled reactors.
It was thought that breeder reactors could be as safe and reliable as light-water reactors, but safety issues are cited as a concern with fast reactors that use a sodium coolant, where a leak could lead to a sodium fire.
It was expected that the proliferation risks posed by breeders and their "closed" fuel cycle, in which plutonium would be recycled, could be managed. But since plutonium-breeding reactors produce plutonium from U-238, and thorium reactors produce fissile U-233 from thorium, all breeding cycles could theoretically pose proliferation risks. However, U-232, which is always present in U-233 produced in breeder reactors, is a strong gamma-emitter via its daughter products, and would make weapon handling extremely hazardous and the weapon easy to detect.
Some former anti-nuclear advocates have come to support nuclear power as a clean source of electricity because breeder reactors effectively recycle most of their waste, addressing one of the most important objections to nuclear power. In the documentary Pandora's Promise, a case is made for breeder reactors because they offer a genuine high-power alternative to fossil fuel energy. According to the film, one pound of uranium provides as much energy as 5,000 barrels of oil.
Notable reactors
The Soviet Union constructed a series of fast reactors, the first being mercury-cooled and fueled with plutonium metal, and the later plants sodium-cooled and fueled with plutonium oxide. BR-1 (1955), rated at 100 W (thermal), was followed by BR-2 at 100 kW and then by the 5 MW BR-5. BOR-60 (construction started in 1965, first criticality 1969) was rated at 60 MW.
Future plants
India
India has been trying to develop fast breeder reactors for decades but has suffered repeated delays. The Prototype Fast Breeder Reactor was due to be completed and commissioned by December 2024. The program is intended to use fertile thorium-232 to breed fissile uranium-233. India is also pursuing thorium thermal breeder reactor technology. India's focus on thorium reflects the nation's large domestic reserves; known worldwide reserves of thorium are about four times those of uranium. India's Department of Atomic Energy said in 2007 that it would simultaneously construct four more breeder reactors of 500 MWe each, including two at Kalpakkam.
BHAVINI, an Indian nuclear power company, was established in 2003 to construct, commission, and operate all stage II fast breeder reactors outlined in India's three-stage nuclear power programme. Central to these plans is the FBR-600, a pool-type sodium-cooled reactor with a rating of 600 MWe.
China
The China Experimental Fast Reactor is a 25 MW(e) prototype for the planned China Prototype Fast Reactor. It started generating power in 2011. China initiated a research and development project in thorium molten-salt thermal breeder-reactor technology (liquid fluoride thorium reactor), formally announced at the Chinese Academy of Sciences annual conference in 2011. Its ultimate target was to investigate and develop a thorium-based molten salt nuclear system over about 20 years.
South Korea
South Korea is developing a design for a standardized modular FBR for export, to complement the standardized pressurized water reactor and CANDU designs they have already developed and built, but has not yet committed to building a prototype.
Russia
Russia has a plan for significantly increasing its fleet of fast breeder reactors. A BN-800 reactor (800 MWe) at Beloyarsk was completed in 2012, succeeding the smaller BN-600, and reached full power production in 2016. A larger BN-1200 reactor (1,200 MWe) was scheduled for completion in 2018, with two additional BN-1200 reactors to be built by the end of 2030. However, in 2015 Rosenergoatom postponed construction indefinitely to allow the fuel design to be improved after more experience operating the BN-800 reactor, and amid cost concerns.
An experimental lead-cooled fast reactor, BREST-300, will be built at the Siberian Chemical Combine (SCC) in Seversk. The BREST design is seen as a successor to the BN series, and the 300 MWe unit at the SCC could be the forerunner to a 1,200 MWe version for wide deployment as a commercial power generation unit. The development program is part of an Advanced Nuclear Technologies Federal Program 2010–2020 that seeks to exploit fast reactors for uranium efficiency while 'burning' radioactive substances that would otherwise be disposed of as waste. Its core would measure about 2.3 metres in diameter by 1.1 metres in height and contain 16 tonnes of fuel. The unit would be refuelled every year, with each fuel element spending five years in total within the core. The lead coolant temperature would be around 540 °C, giving a high efficiency of 43%, with a primary heat production of 700 MWt yielding an electrical power of 300 MWe. The operational lifespan of the unit could be 60 years. The design was expected to be completed by NIKIET in 2014 for construction between 2016 and 2020.
By the end of 2024 the cooling tower had been built, and the target for starting operation was 2026.
Japan
In 2006 the United States, France, and Japan signed an "arrangement" to research and develop sodium-cooled fast reactors in support of the Global Nuclear Energy Partnership. In 2007 the Japanese government selected Mitsubishi Heavy Industries as the "core company in FBR development in Japan". Shortly thereafter, Mitsubishi FBR Systems was launched to develop and eventually sell FBR technology.
France
In 2010 the French government allocated €651.6 million to the Commissariat à l'énergie atomique to finalize the design of ASTRID (Advanced Sodium Technological Reactor for Industrial Demonstration), a 600 MW fourth-generation reactor design to be finalized in 2020. The UK had shown interest in the PRISM reactor and was working in concert with France to develop ASTRID. In 2019, the CEA announced this design would not be built before mid-century.
United States
Kirk Sorensen, former NASA scientist and chief nuclear technologist at Teledyne Brown Engineering, has long been a promoter of the thorium fuel cycle and particularly of liquid fluoride thorium reactors. In 2011, Sorensen founded Flibe Energy, a company that aims to develop 20–50 MW LFTR designs to power military bases.
In October 2010 GE Hitachi Nuclear Energy signed a memorandum of understanding with the operators of the US Department of Energy's Savannah River Site, which should allow the construction of a demonstration plant based on the company's S-PRISM fast breeder reactor prior to the design receiving full Nuclear Regulatory Commission licensing approval. In October 2011 The Independent reported that the UK Nuclear Decommissioning Authority (NDA) and senior advisers within the Department for Energy and Climate Change (DECC) had asked for technical and financial details of PRISM, partly as a means of reducing the country's plutonium stockpile.
The traveling wave reactor proposed in a patent by Intellectual Ventures is a fast breeder reactor designed to not need fuel reprocessing during the decades-long lifetime of the reactor. The breed-burn wave in the TWR design does not move from one end of the reactor to the other but gradually from the inside out. Moreover, as the fuel's composition changes through nuclear transmutation, fuel rods are continually reshuffled within the core to optimize the neutron flux and fuel usage at any given point in time. Thus, instead of letting the wave propagate through the fuel, the fuel itself is moved through a largely stationary burn wave. This is contrary to many media reports, which have popularized the concept as a candle-like reactor with a burn region that moves down a stick of fuel. By replacing a static core configuration with an actively managed "standing wave" or "soliton" core, TerraPower's design avoids the problem of cooling a highly variable burn region. Under this scenario, the reconfiguration of fuel rods is accomplished remotely by robotic devices; the containment vessel remains closed during the procedure, and there is no associated downtime.
| Technology | Power generation | null |
228107 | https://en.wikipedia.org/wiki/Stress%20%28mechanics%29 | Stress (mechanics) | In continuum mechanics, stress is a physical quantity that describes forces present during deformation. For example, an object being pulled apart, such as a stretched elastic band, is subject to tensile stress and may undergo elongation. An object being pushed together, such as a crumpled sponge, is subject to compressive stress and may undergo shortening. The greater the force and the smaller the cross-sectional area of the body on which it acts, the greater the stress. Stress has dimension of force per area, with SI units of newtons per square meter (N/m2) or pascal (Pa).
Stress expresses the internal forces that neighbouring particles of a continuous material exert on each other, while strain is the measure of the relative deformation of the material. For example, when a solid vertical bar is supporting an overhead weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a closed container under pressure, each particle gets pushed against by all the surrounding particles. The container walls and the pressure-inducing surface (such as a piston) push against them in (Newtonian) reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress is frequently represented by a lowercase Greek letter sigma (σ).
Strain inside a material may arise by various mechanisms, such as stress as applied by external forces to the bulk material (like gravity) or to its surface (like contact forces, external pressure, or friction). Any strain (deformation) of a solid material generates an internal elastic stress, analogous to the reaction force of a spring, that tends to restore the material to its original non-deformed state. In liquids and gases, only deformations that change the volume generate persistent elastic stress. If the deformation changes gradually with time, even in fluids there will usually be some viscous stress, opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress.
Significant stress may exist even when deformation is negligible or non-existent (a common assumption when modeling the flow of water). Stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition, or by external electromagnetic fields (as in piezoelectric and magnetostrictive materials).
The relation between mechanical stress, strain, and the strain rate can be quite complicated, although a linear approximation may be adequate in practice if the quantities are sufficiently small. Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition.
History
Humans have known about stress inside materials since ancient times. Until the 17th century, this understanding was largely intuitive and empirical, though this did not prevent the development of relatively advanced technologies like the composite bow and glass blowing.
Over several millennia, architects and builders, in particular, learned how to put together carefully shaped wood beams and stone blocks to withstand, transmit, and distribute stress in the most effective manner, with ingenious devices such as the capitals, arches, cupolas, trusses, and flying buttresses of Gothic cathedrals.
Ancient and medieval architects did develop some geometrical methods and simple formulas to compute the proper sizes of pillars and beams, but the scientific understanding of stress became possible only after the necessary tools were invented in the 17th and 18th centuries: Galileo Galilei's rigorous experimental method, René Descartes's coordinates and analytic geometry, and Newton's laws of motion and equilibrium and calculus of infinitesimals. With those tools, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model of a deformed elastic body by introducing the notions of stress and strain. Cauchy observed that the force across an imaginary surface was a linear function of its normal vector; and, moreover, that it must be a symmetric function (with zero total momentum).
The understanding of stress in liquids started with Newton, who provided a differential formula for friction forces (shear stress) in parallel laminar flow.
Definition
Stress is defined as the force across a small boundary per unit area of that boundary, for all orientations of the boundary. Derived from a physical quantity (force) and a purely geometrical quantity (area), stress is also a physical quantity, like velocity, torque or energy, that can be quantified and analyzed without explicit consideration of the nature of the material or of its physical causes.
Following the basic premises of continuum mechanics, stress is a macroscopic concept. Namely, the particles considered in its definition and analysis should be just small enough to be treated as homogeneous in composition and state, but still large enough to ignore quantum effects and the detailed motions of molecules. Thus, the force between two particles is actually the average of a very large number of atomic forces between their molecules; and physical quantities like mass, velocity, and forces that act through the bulk of three-dimensional bodies, like gravity, are assumed to be smoothly distributed over them. Depending on the context, one may also assume that the particles are large enough to allow the averaging out of other microscopic features, like the grains of a metal rod or the fibers of a piece of wood.
Quantitatively, the stress is expressed by the Cauchy traction vector T defined as the traction force F between adjacent parts of the material across an imaginary separating surface S, divided by the area of S. In a fluid at rest the force is perpendicular to the surface, and is the familiar pressure. In a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S; hence the stress across a surface must be regarded a vector quantity, not a scalar. Moreover, the direction and magnitude generally depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the (Cauchy) stress tensor; which is a linear function that relates the normal vector n of a surface S to the traction vector T across S. With respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric matrix of 3×3 real numbers. Even within a homogeneous body, the stress tensor may vary from place to place, and may change over time; therefore, the stress within a material is, in general, a time-varying tensor field.
Normal and shear
In general, the stress T that a particle P applies on another particle Q across a surface S can have any direction relative to S. The vector T may be regarded as the sum of two components: the normal stress (compression or tension) perpendicular to the surface, and the shear stress that is parallel to the surface.
If the normal unit vector n of the surface (pointing from Q towards P) is assumed fixed, the normal component can be expressed by a single number, the dot product σ_n = T · n. This number will be positive if P is "pulling" on Q (tensile stress), and negative if P is "pushing" against Q (compressive stress). The shear component is then the vector τ = T − (T · n) n.
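A minimal numerical sketch of this decomposition, using a hypothetical stress state and the convention that the traction is obtained by applying the stress tensor to the unit normal (see the Cauchy tensor section below); since the stress tensor is symmetric, the ordering of tensor and normal does not matter here.

```python
import numpy as np

# Hypothetical symmetric Cauchy stress tensor (units of MPa) and a unit normal n.
sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 20.0,  5.0],
                  [ 0.0,  5.0, -30.0]])
n = np.array([1.0, 0.0, 0.0])            # surface normal, pointing from Q towards P

T = sigma @ n                             # traction vector acting across the surface
sigma_n = T @ n                           # normal component (tensile if positive)
tau = T - sigma_n * n                     # shear component, parallel to the surface
print(T, sigma_n, np.linalg.norm(tau))
```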
Units
The dimension of stress is that of pressure, and therefore its coordinates are measured in the same units as pressure: namely, pascals (Pa, that is, newtons per square metre) in the International System, or pounds per square inch (psi) in the Imperial system. Because mechanical stresses easily exceed a million pascals, MPa, which stands for megapascal, is a common unit of stress.
Causes and effects
Stress in a material body may be due to multiple physical causes, including external influences and internal physical processes. Some of these agents (like gravity, changes in temperature and phase, and electromagnetic fields) act on the bulk of the material, varying continuously with position and time. Other agents (like external loads and friction, ambient pressure, and contact forces) may create stresses and forces that are concentrated on certain surfaces, lines or points; and possibly also on very short time intervals (as in the impulses due to collisions). In active matter, self-propulsion of microscopic particles generates macroscopic stress profiles. In general, the stress distribution in a body is expressed as a piecewise continuous function of space and time.
Conversely, stress is usually correlated with various effects on the material, possibly including changes in physical properties like birefringence, polarization, and permeability. The imposition of stress by an external agent usually creates some strain (deformation) in the material, even if it is too small to be detected. In a solid material, such strain will in turn generate an internal elastic stress, analogous to the reaction force of a stretched spring, tending to restore the material to its original undeformed state. Fluid materials (liquids, gases and plasmas) by definition can only oppose deformations that would change their volume. If the deformation changes with time, even in fluids there will usually be some viscous stress, opposing that change. Such stresses can be either shear or normal in nature. The molecular origin of shear stresses in fluids is given in the article on viscosity; that of normal viscous stresses is discussed in Sharma (2019).
The relation between stress and its effects and causes, including deformation and rate of change of deformation, can be quite complicated (although a linear approximation may be adequate in practice if the quantities are small enough). Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition.
Simple types
In some situations, the stress within a body may adequately be described by a single number, or by a single vector (a number and a direction). Three such simple stress situations, that are often encountered in engineering design, are the uniaxial normal stress, the simple shear stress, and the isotropic normal stress.
Uniaxial normal
A common situation with a simple stress pattern is when a straight rod, with uniform material and cross section, is subjected to tension by opposite forces of magnitude F along its axis. If the system is in equilibrium and not changing with time, and the weight of the bar can be neglected, then through each transversal section of the bar the top part must pull on the bottom part with the same force F, transmitted continuously through the full cross-sectional area A. Therefore, the stress σ throughout the bar, across any horizontal surface, can be expressed by the single number σ = F/A, calculated from the magnitude of those forces F and the cross-sectional area A. On the other hand, if one imagines the bar being cut along its length, parallel to the axis, there will be no force (hence no stress) between the two halves across the cut.
This type of stress may be called (simple) normal stress or uniaxial stress; specifically, (uniaxial, simple, etc.) tensile stress. If the load is compression on the bar, rather than stretching it, the analysis is the same except that the force F and the stress change sign, and the stress is called compressive stress.
This analysis assumes the stress is evenly distributed over the entire cross-section. In practice, depending on how the bar is attached at the ends and how it was manufactured, this assumption may not be valid. In that case, the value σ = F/A will be only the average stress, called engineering stress or nominal stress. If the bar's length L is many times its diameter D, and it has no gross defects or built-in stress, then the stress can be assumed to be uniformly distributed over any cross-section that is more than a few times D from both ends. (This observation is known as Saint-Venant's principle.)
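A minimal worked example of the engineering stress σ = F/A, using a hypothetical round bar and load:

```python
import math

def engineering_stress_pa(force_n: float, diameter_m: float) -> float:
    """Average (engineering) normal stress sigma = F / A for a round bar in tension."""
    area_m2 = math.pi * (diameter_m / 2.0) ** 2
    return force_n / area_m2

# Hypothetical example: a 10 mm diameter bar pulled by 10 kN carries about 127 MPa
# of average tensile stress.
print(engineering_stress_pa(10_000.0, 0.010) / 1e6)
```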
Normal stress occurs in many other situations besides axial tension and compression. If an elastic bar with uniform and symmetric cross-section is bent in one of its planes of symmetry, the resulting bending stress will still be normal (perpendicular to the cross-section), but will vary over the cross section: the outer part will be under tensile stress, while the inner part will be compressed. Another variant of normal stress is the hoop stress that occurs on the walls of a cylindrical pipe or vessel filled with pressurized fluid.
Shear
Another simple type of stress occurs when a uniformly thick layer of elastic material like glue or rubber is firmly attached to two stiff bodies that are pulled in opposite directions by forces parallel to the layer; or a section of a soft metal bar that is being cut by the jaws of a scissors-like tool. Let F be the magnitude of those forces, and M be the midplane of that layer. Just as in the normal stress case, the part of the layer on one side of M must pull the other part with the same force F. Assuming that the direction of the forces is known, the stress across M can be expressed by the single number τ = F/A, calculated from the magnitude of those forces F and the cross-sectional area A. Unlike normal stress, this simple shear stress is directed parallel to the cross-section considered, rather than perpendicular to it. For any plane S that is perpendicular to the layer, the net internal force across S, and hence the stress, will be zero.
As in the case of an axially loaded bar, in practice the shear stress may not be uniformly distributed over the layer; so, as before, the ratio F/A will only be an average ("nominal", "engineering") stress. That average is often sufficient for practical purposes. Shear stress is observed also when a cylindrical bar such as a shaft is subjected to opposite torques at its ends. In that case, the shear stress on each cross-section is parallel to the cross-section, but oriented tangentially relative to the axis, and increases with distance from the axis. Significant shear stress occurs in the middle plate (the "web") of I-beams under bending loads, due to the web constraining the end plates ("flanges").
Isotropic
Another simple type of stress occurs when the material body is under equal compression or tension in all directions. This is the case, for example, in a portion of liquid or gas at rest, whether enclosed in some container or as part of a larger mass of fluid; or inside a cube of elastic material that is being pressed or pulled on all six faces by equal perpendicular forces — provided, in both cases, that the material is homogeneous, without built-in stress, and that the effect of gravity and other external forces can be neglected.
In these situations, the stress across any imaginary internal surface turns out to be equal in magnitude and always directed perpendicularly to the surface independently of the surface's orientation. This type of stress may be called isotropic normal or just isotropic; if it is compressive, it is called hydrostatic pressure or just pressure. Gases by definition cannot withstand tensile stresses, but some liquids may withstand very large amounts of isotropic tensile stress under some circumstances (see Z-tube).
Cylinder
Parts with rotational symmetry, such as wheels, axles, pipes, and pillars, are very common in engineering. Often the stress patterns that occur in such parts have rotational or even cylindrical symmetry. The analysis of such cylinder stresses can take advantage of the symmetry to reduce the dimension of the domain and/or of the stress tensor.
General types
Often, mechanical bodies experience more than one type of stress at the same time; this is called combined stress. In normal and shear stress, the magnitude of the stress is maximum for surfaces that are perpendicular to a certain direction d, and zero across any surfaces that are parallel to d. When the shear stress is zero only across surfaces that are perpendicular to one particular direction, the stress is called biaxial, and can be viewed as the sum of two normal or shear stresses. In the most general case, called triaxial stress, the stress is nonzero across every surface element.
Cauchy tensor
Combined stresses cannot be described by a single vector. Even if the material is stressed in the same way throughout the volume of the body, the stress across any imaginary surface will depend on the orientation of that surface, in a non-trivial way.
Cauchy observed that the stress vector T across a surface will always be a linear function of the surface's normal vector n, the unit-length vector that is perpendicular to it. That is, T = σ(n), where the function σ satisfies
σ(αu + βv) = α σ(u) + β σ(v)
for any vectors u, v and any real numbers α, β.
The function σ, now called the (Cauchy) stress tensor, completely describes the stress state of a uniformly stressed body. (Today, any linear connection between two physical vector quantities is called a tensor, reflecting Cauchy's original use to describe the "tensions" (stresses) in a material.) In tensor calculus, σ is classified as a second-order tensor of type (0,2) or (1,1) depending on convention.
Like any linear map between vectors, the stress tensor can be represented in any chosen Cartesian coordinate system by a 3×3 matrix of real numbers. Depending on whether the coordinates are numbered x1, x2, x3 or named x, y, z, the matrix may be written as
[σ11 σ12 σ13; σ21 σ22 σ23; σ31 σ32 σ33]
or
[σxx σxy σxz; σyx σyy σyz; σzx σzy σzz]
The stress vector T across a surface with normal vector n (a covariant, or "row", vector) with coordinates n1, n2, n3 is then the matrix product T = n^T σ (the superscript T denotes transposition, so the result is again a covariant, or row, vector; see Cauchy stress tensor), that is
Tj = n1 σ1j + n2 σ2j + n3 σ3j
The linear relation between T and n follows from the fundamental laws of conservation of linear momentum and static equilibrium of forces, and is therefore mathematically exact, for any material and any stress situation. The components of the Cauchy stress tensor at every point in a material satisfy the equilibrium equations (Cauchy's equations of motion for zero acceleration). Moreover, the principle of conservation of angular momentum implies that the stress tensor is symmetric, that is σ12 = σ21, σ13 = σ31, and σ23 = σ32. Therefore, the stress state of the medium at any point and instant can be specified by only six independent parameters, rather than nine. These may be written
[σx τxy τxz; τxy σy τyz; τxz τyz σz]
where the elements σx, σy, σz are called the orthogonal normal stresses (relative to the chosen coordinate system), and τxy, τxz, τyz the orthogonal shear stresses.
Change of coordinates
The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is the Mohr's circle of stress distribution.
As a symmetric 3×3 real matrix, the stress tensor has three mutually orthogonal unit-length eigenvectors e1, e2, e3 and three real eigenvalues λ1, λ2, λ3, such that σ ei = λi ei. Therefore, in a coordinate system with axes e1, e2, e3, the stress tensor is a diagonal matrix, and has only the three normal components λ1, λ2, λ3, the principal stresses. If the three eigenvalues are equal, the stress is an isotropic compression or tension, always perpendicular to any surface, there is no shear stress, and the tensor is a diagonal matrix in any coordinate frame.
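Numerically, the principal stresses and principal directions can be obtained by an eigen-decomposition of the symmetric stress matrix; a minimal sketch with a hypothetical stress state:

```python
import numpy as np

# Hypothetical symmetric stress tensor (MPa).
sigma = np.array([[80.0, 30.0,  0.0],
                  [30.0, 40.0,  0.0],
                  [ 0.0,  0.0, 10.0]])

# Eigenvalues are the principal stresses; eigenvectors (columns) are the principal directions.
principal_stresses, principal_directions = np.linalg.eigh(sigma)
print(principal_stresses)        # returned in ascending order
print(principal_directions)
```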
Tensor field
In general, stress is not uniformly distributed over a material body, and may vary with time. Therefore, the stress tensor must be defined for each point and each moment, by considering an infinitesimal particle of the medium surrounding that point, and taking the average stresses in that particle as being the stresses at the point.
Thin plates
Human-made objects are often made from stock plates of various materials by operations that do not change their essentially two-dimensional character, like cutting, drilling, gentle bending and welding along the edges. The description of stress in such bodies can be simplified by modeling those parts as two-dimensional surfaces rather than three-dimensional bodies.
In that view, one redefines a "particle" as being an infinitesimal patch of the plate's surface, so that the boundary between adjacent particles becomes an infinitesimal line element; both are implicitly extended in the third dimension, normal to (straight through) the plate. "Stress" is then redefined as being a measure of the internal forces between two adjacent "particles" across their common line element, divided by the length of that line. Some components of the stress tensor can be ignored, but since particles are not infinitesimal in the third dimension one can no longer ignore the torque that a particle applies on its neighbors. That torque is modeled as a bending stress that tends to change the curvature of the plate. These simplifications may not hold at welds, at sharp bends and creases (where the radius of curvature is comparable to the thickness of the plate).
Thin beams
The analysis of stress can be considerably simplified also for thin bars, beams or wires of uniform (or smoothly varying) composition and cross-section that are subjected to moderate bending and twisting. For those bodies, one may consider only cross-sections that are perpendicular to the bar's axis, and redefine a "particle" as being a piece of wire with infinitesimal length between two such cross sections. The ordinary stress is then reduced to a scalar (tension or compression of the bar), but one must take into account also a bending stress (that tries to change the bar's curvature, in some direction perpendicular to the axis) and a torsional stress (that tries to twist or un-twist it about its axis).
Analysis
Stress analysis is a branch of applied physics that covers the determination of the internal distribution of internal forces in solid objects. It is an essential tool in engineering for the study and design of structures such as tunnels, dams, mechanical parts, and structural frames, under prescribed or expected loads. It is also important in many other disciplines; for example, in geology, to study phenomena like plate tectonics, vulcanism and avalanches; and in biology, to understand the anatomy of living beings.
Goals and assumptions
Stress analysis is generally concerned with objects and structures that can be assumed to be in macroscopic static equilibrium. By Newton's laws of motion, any external forces being applied to such a system must be balanced by internal reaction forces, which are almost always surface contact forces between adjacent particles, that is, stress. Since every particle needs to be in equilibrium, this reaction stress will generally propagate from particle to particle, creating a stress distribution throughout the body.
The typical problem in stress analysis is to determine these internal stresses, given the external forces that are acting on the system. The latter may be body forces (such as gravity or magnetic attraction), that act throughout the volume of a material; or concentrated loads (such as friction between an axle and a bearing, or the weight of a train wheel on a rail), that are imagined to act over a two-dimensional area, or along a line, or at single point.
In stress analysis one normally disregards the physical causes of the forces or the precise nature of the materials. Instead, one assumes that the stresses are related to deformation (and, in non-static problems, to the rate of deformation) of the material by known constitutive equations.
Methods
Stress analysis may be carried out experimentally, by applying loads to the actual artifact or to a scale model, and measuring the resulting stresses, by any of several available methods. This approach is often used for safety certification and monitoring. Most stress analysis, however, is done by mathematical methods, especially during design.
The basic stress analysis problem can be formulated by Euler's equations of motion for continuous bodies (which are consequences of Newton's laws for conservation of linear momentum and angular momentum) and the Euler-Cauchy stress principle, together with the appropriate constitutive equations. Thus one obtains a system of partial differential equations involving the stress tensor field and the strain tensor field, as unknown functions to be determined. The external body forces appear as the independent ("right-hand side") term in the differential equations, while the concentrated forces appear as boundary conditions. The basic stress analysis problem is therefore a boundary-value problem.
Stress analysis for elastic structures is based on the theory of elasticity and infinitesimal strain theory. When the applied loads cause permanent deformation, one must use more complicated constitutive equations, that can account for the physical processes involved (plastic flow, fracture, phase change, etc.). Engineered structures are usually designed so the maximum expected stresses are well within the range of linear elasticity (the generalization of Hooke's law for continuous media); that is, the deformations caused by internal stresses are linearly related to them. In this case the differential equations that define the stress tensor are linear, and the problem becomes much easier. For one thing, the stress at any point will be a linear function of the loads, too. For small enough stresses, even non-linear systems can usually be assumed to be linear.
Stress analysis is simplified when the physical dimensions and the distribution of loads allow the structure to be treated as one- or two-dimensional. In the analysis of trusses, for example, the stress field may be assumed to be uniform and uniaxial over each member. Then the differential equations reduce to a finite set of equations (usually linear) with finitely many unknowns. In other contexts one may be able to reduce the three-dimensional problem to a two-dimensional one, and/or replace the general stress and strain tensors by simpler models like uniaxial tension/compression, simple shear, etc.
Still, for two- or three-dimensional cases one must solve a partial differential equation problem.
Analytical or closed-form solutions to the differential equations can be obtained when the geometry, constitutive relations, and boundary conditions are simple enough. Otherwise one must generally resort to numerical approximations such as the finite element method, the finite difference method, and the boundary element method.
Measures
Other useful stress measures include the first and second Piola–Kirchhoff stress tensors, the Biot stress tensor, and the Kirchhoff stress tensor.
| Physical sciences | Solid mechanics | null |
228108 | https://en.wikipedia.org/wiki/Young%27s%20modulus | Young's modulus | Young's modulus (or Young modulus) is a mechanical property of solid materials that measures the tensile or compressive stiffness when the force is applied lengthwise. It is the modulus of elasticity for tension or axial compression. Young's modulus is defined as the ratio of the stress (force per unit area) applied to the object and the resulting axial strain (displacement or deformation) in the linear elastic region of the material.
Although Young's modulus is named after the 19th-century British scientist Thomas Young, the concept was developed in 1727 by Leonhard Euler. The first experiments that used the concept of Young's modulus in its modern form were performed by the Italian scientist Giordano Riccati in 1782, pre-dating Young's work by 25 years. The term modulus is derived from the Latin root term modus, which means measure.
Definition
Young's modulus, E, quantifies the relationship between tensile or compressive stress σ (force per unit area) and axial strain ε (proportional deformation) in the linear elastic region of a material:
E = σ / ε
Young's modulus is commonly measured in the International System of Units (SI) in multiples of the pascal (Pa) and common values are in the range of gigapascals (GPa).
Examples:
Rubber (increasing pressure: length increases quickly, meaning low E)
Aluminium (increasing pressure: length increases slowly, meaning high E)
Linear elasticity
A solid material undergoes elastic deformation when a small load is applied to it in compression or extension. Elastic deformation is reversible, meaning that the material returns to its original shape after the load is removed.
At near-zero stress and strain, the stress–strain curve is linear, and the relationship between stress and strain is described by Hooke's law that states stress is proportional to strain. The coefficient of proportionality is Young's modulus. The higher the modulus, the more stress is needed to create the same amount of strain; an idealized rigid body would have an infinite Young's modulus. Conversely, a very soft material (such as a fluid) would deform without force, and would have zero Young's modulus.
Related but distinct properties
Material stiffness is a distinct property from the following:
Strength: maximum amount of stress that material can withstand while staying in the elastic (reversible) deformation regime;
Geometric stiffness: a global characteristic of the body that depends on its shape, and not only on the local properties of the material; for instance, an I-beam has a higher bending stiffness than a rod of the same material for a given mass per length;
Hardness: relative resistance of the material's surface to penetration by a harder body;
Toughness: amount of energy that a material can absorb before fracture.
The elastic limit, or yield point, of the material is the point on the stress–strain curve up to which stress is proportional to strain and the material regains its original shape after removal of the external force.
Usage
Young's modulus enables the calculation of the change in the dimension of a bar made of an isotropic elastic material under tensile or compressive loads. For instance, it predicts how much a material sample extends under tension or shortens under compression. The Young's modulus directly applies to cases of uniaxial stress; that is, tensile or compressive stress in one direction and no stress in the other directions. Young's modulus is also used in order to predict the deflection that will occur in a statically determinate beam when a load is applied at a point in between the beam's supports.
Other elastic calculations usually require the use of one additional elastic property, such as the shear modulus G, bulk modulus K, and Poisson's ratio ν. Any two of these parameters are sufficient to fully describe elasticity in an isotropic material. For example, in measurements of cancerous skin tissue, the Poisson's ratio has been found to be 0.43±0.12 and the average Young's modulus 52 kPa. Defining the elastic properties of skin may become the first step in turning elasticity into a clinical tool. For homogeneous isotropic materials simple relations exist between the elastic constants that allow calculating them all as long as two are known, for example:
E = 2G(1 + ν) = 3K(1 − 2ν)
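A minimal sketch of two of these standard isotropic relations, G = E / (2(1 + ν)) and K = E / (3(1 − 2ν)), evaluated for a hypothetical metal:

```python
def shear_modulus(E: float, nu: float) -> float:
    """G = E / (2 * (1 + nu)) for a homogeneous isotropic material."""
    return E / (2.0 * (1.0 + nu))

def bulk_modulus(E: float, nu: float) -> float:
    """K = E / (3 * (1 - 2 * nu)) for a homogeneous isotropic material."""
    return E / (3.0 * (1.0 - 2.0 * nu))

# Hypothetical metal with E = 200 GPa and nu = 0.30:
E, nu = 200e9, 0.30
print(shear_modulus(E, nu) / 1e9)   # about 76.9 GPa
print(bulk_modulus(E, nu) / 1e9)    # about 166.7 GPa
```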
Linear versus non-linear
Young's modulus represents the factor of proportionality in Hooke's law, which relates the stress and the strain. However, Hooke's law is only valid under the assumption of an elastic and linear response. Any real material will eventually fail and break when stretched over a very large distance or with a very large force; however, all solid materials exhibit nearly Hookean behavior for small enough strains or stresses. If the range over which Hooke's law is valid is large enough compared to the typical stress that one expects to apply to the material, the material is said to be linear. Otherwise (if the typical stress one would apply is outside the linear range), the material is said to be non-linear.
Steel, carbon fiber and glass among others are usually considered linear materials, while other materials such as rubber and soils are non-linear. However, this is not an absolute classification: if very small stresses or strains are applied to a non-linear material, the response will be linear, but if very high stress or strain is applied to a linear material, the linear theory will not be enough. For example, as the linear theory implies reversibility, it would be absurd to use the linear theory to describe the failure of a steel bridge under a high load; although steel is a linear material for most applications, it is not in such a case of catastrophic failure.
In solid mechanics, the slope of the stress–strain curve at any point is called the tangent modulus. It can be experimentally determined from the slope of a stress–strain curve created during tensile tests conducted on a sample of the material.
Directional materials
Young's modulus is not always the same in all orientations of a material. Most metals and ceramics, along with many other materials, are isotropic, and their mechanical properties are the same in all orientations. However, metals and ceramics can be treated with certain impurities, and metals can be mechanically worked to make their grain structures directional. These materials then become anisotropic, and Young's modulus will change depending on the direction of the force vector. Anisotropy can be seen in many composites as well. For example, carbon fiber has a much higher Young's modulus (is much stiffer) when force is loaded parallel to the fibers (along the grain). Other such materials include wood and reinforced concrete. Engineers can use this directional phenomenon to their advantage in creating structures.
Temperature dependence
The Young's modulus of metals varies with the temperature and can be realized through the change in the interatomic bonding of the atoms, and hence its change is found to be dependent on the change in the work function of the metal. Although classically this change is predicted through fitting and without a clear underlying mechanism (for example, the Watchman's formula), the Rahemi-Li model demonstrates how the change in the electron work function leads to a change in the Young's modulus of metals and predicts this variation with calculable parameters, using the generalization of the Lennard-Jones potential to solids. In general, as the temperature increases, the Young's modulus decreases via E(T) = β φ(T)^6, where the electron work function varies with the temperature as φ(T) = φ0 − γ(k_B T)²/φ0 and γ is a calculable material property which is dependent on the crystal structure (for example, BCC, FCC); φ0 is the electron work function at T = 0 and γ is constant throughout the change.
Calculation
Young's modulus is calculated by dividing the tensile stress, σ(ε), by the engineering extensional strain, ε, in the elastic (initial, linear) portion of the physical stress–strain curve:
E = σ(ε) / ε = (F/A) / (ΔL/L0) = F L0 / (A ΔL)
where
E is the Young's modulus (modulus of elasticity);
F is the force exerted on an object under tension;
A is the actual cross-sectional area, which equals the area of the cross-section perpendicular to the applied force;
ΔL is the amount by which the length of the object changes (ΔL is positive if the material is stretched, and negative when the material is compressed);
L0 is the original length of the object.
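A minimal worked example of this calculation for a single point on the linear part of a stress–strain curve; the specimen dimensions and load below are hypothetical:

```python
import math

def youngs_modulus_pa(force_n: float, area_m2: float, delta_l_m: float, l0_m: float) -> float:
    """E = (F / A) / (delta_L / L0), valid on the linear part of the curve."""
    stress = force_n / area_m2
    strain = delta_l_m / l0_m
    return stress / strain

# Hypothetical tensile test: a 6 mm diameter rod, 100 mm long, stretches 0.05 mm under 2.8 kN.
area = math.pi * (0.006 / 2.0) ** 2
print(youngs_modulus_pa(2800.0, area, 0.05e-3, 0.100) / 1e9)  # about 198 GPa, steel-like
```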
Force exerted by stretched or contracted material
Young's modulus of a material can be used to calculate the force it exerts under specific strain:
F = E A ΔL / L0
where F is the force exerted by the material when contracted or stretched by ΔL.
Hooke's law for a stretched wire can be derived from this formula:
F = k x
where the effective spring constant is k = E A / L0 and the extension is x = ΔL.
Note that the elasticity of coiled springs comes from shear modulus, not Young's modulus. When a spring is stretched, its wire's length doesn't change, but its shape does. This is why only the shear modulus of elasticity is involved in the stretching of a spring.
Elastic potential energy
The elastic potential energy stored in a linear elastic material is given by the integral of Hooke's law:
U_e = ∫ k x dx = (1/2) k x²
now by explicating the intensive variables:
U_e = ∫ (E A ΔL / L0) d(ΔL) = (E A / L0) ∫ ΔL d(ΔL) = E A (ΔL)² / (2 L0)
This means that the elastic potential energy density (that is, per unit volume) is given by:
U_e / (A L0) = E (ΔL)² / (2 L0²)
or, in simple notation, for a linear elastic material: u_e(ε) = (1/2) E ε², since the strain is defined as ε = ΔL / L0.
In a nonlinear elastic material the Young's modulus is a function of the strain, so the second equivalence no longer holds, and the elastic energy is not a quadratic function of the strain:
u_e(ε) = ∫ E(ε) ε dε ≠ (1/2) E ε²
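A minimal sketch of the linear-elastic energy density u = (1/2) E ε², evaluated for hypothetical, steel-like values:

```python
def elastic_energy_density_j_per_m3(E_pa: float, strain: float) -> float:
    """u = 0.5 * E * strain**2, valid only for a linear elastic material."""
    return 0.5 * E_pa * strain ** 2

# Hypothetical example: E = 200 GPa at 0.1% strain stores about 100 kJ per cubic metre.
print(elastic_energy_density_j_per_m3(200e9, 0.001))
```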
Examples
Young's modulus can vary somewhat due to differences in sample composition and test method. The rate of deformation has the greatest impact on the data collected, especially in polymers. The values here are approximate and only meant for relative comparison.
| Physical sciences | Solid mechanics | Physics |
228190 | https://en.wikipedia.org/wiki/Vanillin | Vanillin | Vanillin is an organic compound with the molecular formula C8H8O3. It is a phenolic aldehyde. Its functional groups include aldehyde, hydroxyl, and ether. It is the primary component of the extract of the vanilla bean. Synthetic vanillin is now used more often than natural vanilla extract as a flavoring in foods, beverages, and pharmaceuticals.
Vanillin and ethylvanillin are used by the food industry; ethylvanillin is more expensive, but has a stronger note. It differs from vanillin by having an ethoxy group (−O−CH2CH3) instead of a methoxy group (−O−CH3).
Natural vanilla extract is a mixture of several hundred different compounds in addition to vanillin. Artificial vanilla flavoring is often a solution of pure vanillin, usually of synthetic origin. Because of the scarcity and expense of natural vanilla extract, synthetic preparation of its predominant component has long been of interest. The first commercial synthesis of vanillin began with the more readily available natural compound eugenol (4-allyl-2-methoxyphenol). Today, artificial vanillin is made either from guaiacol or lignin.
Lignin-based artificial vanilla flavoring is alleged to have a richer flavor profile than that from guaiacol-based artificial vanilla; the difference is due to the presence of acetovanillone, a minor component in the lignin-derived product that is not found in vanillin synthesized from guaiacol.
Natural history
Although it is generally accepted that vanilla was domesticated in Mesoamerica and subsequently spread to the Old World in the 16th century, in 2019, researchers published a paper stating that vanillin residue had been discovered inside jars within a tomb in Israel dating to the 2nd millennium BCE, suggesting the possible cultivation of an unidentified, Old World-endemic Vanilla species in Canaan since the Middle Bronze Age. Traces of vanillin were also found in wine jars in Jerusalem, which were used by the Judahite elite before the city was destroyed in 586 BCE.
Vanilla beans, called tlilxochitl, were discovered and cultivated as a flavoring for beverages by native Mesoamerican peoples, most famously the Totonacs of modern-day Veracruz, Mexico. Since at least the early 15th century, the Aztecs used vanilla as a flavoring for chocolate in drinks called xocohotl.
Synthetic history
Vanillin was first isolated as a relatively pure substance in 1858 by Théodore Nicolas Gobley, who obtained it by evaporating a vanilla extract to dryness and recrystallizing the resulting solids from hot water. In 1874, the German scientists Ferdinand Tiemann and Wilhelm Haarmann deduced its chemical structure, at the same time finding a synthesis for vanillin from coniferin, a glucoside of isoeugenol found in pine bark. Tiemann and Haarmann founded a company, Haarmann & Reimer (now part of Symrise), and started the first industrial production of vanillin using their process (now known as the Reimer–Tiemann reaction) in Holzminden, Germany. In 1876, Karl Reimer synthesized vanillin from guaiacol.
By the late 19th century, semisynthetic vanillin derived from the eugenol found in clove oil was commercially available.
Synthetic vanillin became significantly more available in the 1930s, when production from clove oil was supplanted by production from the lignin-containing waste produced by the sulfite pulping process for preparing wood pulp for the paper industry. By 1981, a single pulp and paper mill in Thorold, Ontario, supplied 60% of the world market for synthetic vanillin. However, subsequent developments in the wood pulp industry have made its lignin wastes less attractive as a raw material for vanillin synthesis. Today, approximately 15% of the world's production of vanillin is still made from lignin wastes, while approximately 85% is synthesized in a two-step process from the petrochemical precursors guaiacol and glyoxylic acid.
Beginning in 2000, Rhodia began marketing biosynthetic vanillin prepared by the action of microorganisms on ferulic acid extracted from rice bran. This product, sold at US$700/kg under the trademarked name Rhovanil Natural, is not cost-competitive with petrochemical vanillin, which sells for around US$15/kg. However, unlike vanillin synthesized from lignin or guaiacol, it can be labeled as a natural flavoring.
Occurrence
Vanillin is most prominent as the principal flavor and aroma compound in vanilla. Cured vanilla pods contain about 2% by dry weight vanillin. Relatively pure vanillin may be visible as a white dust or "frost" on the exteriors of cured pods of high quality.
It is also found in Leptotes bicolor, a species of orchid native to Paraguay and southern Brazil, and the Southern Chinese red pine.
At lower concentrations, vanillin contributes to the flavor and aroma profiles of foodstuffs as diverse as olive oil, butter, raspberry, and lychee fruits.
Aging in oak barrels imparts vanillin to some wines, vinegar, and spirits.
In other foods, heat treatment generates vanillin from other compounds. In this way, vanillin contributes to the flavor and aroma of coffee, maple syrup, and whole-grain products, including corn tortillas and oatmeal.
Chemistry
Natural production
Natural vanillin is extracted from the seed pods of Vanilla planifolia, a vining orchid native to Mexico, but now grown in tropical areas around the globe. Madagascar is presently the largest producer of natural vanillin.
As harvested, the green seed pods contain vanillin in the form of glucovanillin, its β-D-glucoside; the green pods do not have the flavor or odor of vanilla. Vanillin is released from glucovanillin by the action of the enzyme β-glucosidase during ripening and during the curing process.
After the pods are harvested, their flavor is developed by a months-long curing process, the details of which vary among vanilla-producing regions, but in broad terms it proceeds as follows:
First, the seed pods are blanched in hot water, to arrest the processes of the living plant tissues. Then, for 1–2 weeks, the pods are alternately sunned and sweated: during the day they are laid out in the sun, and each night wrapped in cloth and packed in airtight boxes to sweat. During this process, the pods become dark brown, and enzymes in the pod release vanillin as the free molecule. Finally, the pods are dried and further aged for several months, during which time their flavors further develop. Several methods have been described for curing vanilla in days rather than months, although they have not been widely developed in the natural vanilla industry, with its focus on producing a premium product by established methods, rather than on innovations that might alter the product's flavor profile.
Biosynthesis
Although the exact route of vanillin biosynthesis in V. planifolia is currently unknown, several pathways are proposed for its biosynthesis. Vanillin biosynthesis is generally agreed to be part of the phenylpropanoid pathway starting with L-phenylalanine, which is deaminated by phenylalanine ammonia lyase (PAL) to form t-cinnamic acid. The para position of the ring is then hydroxylated by the cytochrome P450 enzyme cinnamate 4-hydroxylase (C4H/P450) to create p-coumaric acid. Then, in the proposed ferulate pathway, 4-hydroxycinnamoyl-CoA ligase (4CL) attaches p-coumaric acid to coenzyme A (CoA) to create p-coumaroyl CoA. Hydroxycinnamoyl transferase (HCT) then converts p-coumaroyl CoA to 4-coumaroyl shikimate/quinate. This subsequently undergoes oxidation by the P450 enzyme coumaroyl ester 3’-hydroxylase (C3’H/P450) to give caffeoyl shikimate/quinate. HCT then exchanges the shikimate/quinate for CoA to create caffeoyl CoA, and 4CL removes CoA to afford caffeic acid. Caffeic acid then undergoes methylation by caffeic acid O-methyltransferase (COMT) to give ferulic acid. Finally, vanillin synthase hydratase/lyase (vp/VAN) catalyzes hydration of the double bond in ferulic acid followed by a retro-aldol elimination to afford vanillin. Vanillin can also be produced from vanilla glycoside with the additional final step of deglycosylation. In the past, p-hydroxybenzaldehyde was speculated to be a precursor for vanillin biosynthesis. However, a 2014 study using a radiolabelled precursor indicated that p-hydroxybenzaldehyde is not used to synthesise vanillin or vanillin glucoside in the vanilla orchids.
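For orientation, the proposed ferulate route described in this section can be condensed into a single scheme (enzyme abbreviations as defined above; this is only a summary of the paragraph, not an additional pathway):

\[
\begin{aligned}
\text{L-phenylalanine} &\xrightarrow{\text{PAL}} t\text{-cinnamic acid}
 \xrightarrow{\text{C4H}} p\text{-coumaric acid}
 \xrightarrow{\text{4CL}} p\text{-coumaroyl CoA} \\
 &\xrightarrow{\text{HCT}} \text{4-coumaroyl shikimate/quinate}
 \xrightarrow{\text{C3'H}} \text{caffeoyl shikimate/quinate}
 \xrightarrow{\text{HCT, 4CL}} \text{caffeic acid} \\
 &\xrightarrow{\text{COMT}} \text{ferulic acid}
 \xrightarrow{\text{vp/VAN}} \text{vanillin}
\end{aligned}
\]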
Chemical synthesis
The demand for vanilla flavoring has long exceeded the supply of vanilla beans. The annual demand for vanillin was 12,000 tons, but only 1,800 tons of natural vanillin were produced. The remainder was produced by chemical synthesis. Vanillin was first synthesized from eugenol (found in oil of clove) in 1874–75, less than 20 years after it was first identified and isolated. Vanillin was commercially produced from eugenol until the 1920s. Later it was synthesized from lignin-containing "brown liquor", a byproduct of the sulfite process for making wood pulp. Counterintuitively, though it uses waste materials, the lignin process is no longer popular because of environmental concerns, and today most vanillin is produced from guaiacol. Several routes exist for synthesizing vanillin from guaiacol.
At present, the most significant of these is the two-step process practiced by Rhodia since the 1970s, in which guaiacol (1) reacts with glyoxylic acid by electrophilic aromatic substitution. The resulting vanillylmandelic acid (2) is then converted, via 4-hydroxy-3-methoxyphenylglyoxylic acid (3), to vanillin (4) by oxidative decarboxylation.
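A balanced sketch of the overall stoichiometry of this two-step route, written out from the description above, is shown below; the intermediate (3) is omitted from the overall balance, and O2 (from air) is assumed here to be the terminal oxidant in the second step:

\[
\begin{aligned}
\underset{\text{guaiacol (1)}}{\mathrm{C_7H_8O_2}} \;+\; \underset{\text{glyoxylic acid}}{\mathrm{C_2H_2O_3}} \;&\longrightarrow\; \underset{\text{vanillylmandelic acid (2)}}{\mathrm{C_9H_{10}O_5}} \\
\underset{\text{(2)}}{\mathrm{C_9H_{10}O_5}} \;+\; \tfrac{1}{2}\,\mathrm{O_2} \;&\longrightarrow\; \underset{\text{vanillin (4)}}{\mathrm{C_8H_8O_3}} \;+\; \mathrm{CO_2} \;+\; \mathrm{H_2O}
\end{aligned}
\]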
Wood-based vanillin
About 15% of the world's production of vanillin is made from lignosulfonates, a byproduct of the manufacture of cellulose via the sulfite process. The sole producer of wood-based vanillin is the company Borregaard, located in Sarpsborg, Norway.
Wood-based vanillin is produced by copper-catalyzed oxidation of the lignin structures in lignosulfonates under alkaline conditions and is claimed by the manufacturing company to be preferred by their customers due to, among other reasons, its much lower carbon footprint than petrochemically synthesized vanillin.
Fermentation
The company Evolva has developed a genetically modified microorganism which can produce vanillin. Because the microbe is a processing aid, the resulting vanillin would not fall under U.S. GMO labeling requirements, and because the production is nonpetrochemical, food using the ingredient can claim to contain "no artificial ingredients".
Natural vanillin can also be produced from ferulic acid using a specific non-GMO species of Amycolatopsis bacteria.
Biochemistry
Several studies have suggested that vanillin can affect the performance of antibiotics in laboratory conditions.
Uses
The largest use of vanillin is as a flavoring, usually in sweet foods. The ice cream and chocolate industries together account for 75% of the market for vanillin as a flavoring, with smaller amounts being used in confections and baked goods.
Vanillin is also used in the fragrance industry, in perfumes, and to mask unpleasant odors or tastes in medicines, livestock fodder, and cleaning products. It is also used in the flavor industry, as a very important key note for many different flavors, especially creamy profiles such as cream soda.
Additionally, vanillin can be used as a general-purpose stain for visualizing spots on thin-layer chromatography plates. This stain yields a range of colors for these different components.
Vanillin–HCl staining can be used to visualize the localisation of tannins in cells.
Vanillin is becoming a popular choice for the development of bio-based plastics.
Manufacturing
Vanillin has been used as a chemical intermediate in the production of pharmaceuticals, cosmetics, and other fine chemicals. In 1970, more than half the world's vanillin production was used in the synthesis of other chemicals. As of 2016, vanillin uses have expanded to include perfumes, flavoring and aromatic masking in medicines, various consumer and cleaning products, and livestock foods.
Adverse effects
Vanillin can trigger migraine headaches in a small fraction of the people who experience migraines.
Some people have allergic reactions to vanilla. They may be allergic to synthetically produced vanilla but not to natural vanilla, or the other way around, or to both.
Vanilla orchid plants can trigger contact dermatitis, especially among people working in the vanilla trade if they come into contact with the plant's sap. An allergic contact dermatitis called vanillism produces swelling and redness, and sometimes other symptoms. The sap of most species of vanilla orchid which exudes from cut stems or where beans are harvested can cause moderate to severe dermatitis if it comes in contact with bare skin. The sap of vanilla orchids contains calcium oxalate crystals, which are thought to be the main causative agent of contact dermatitis in vanilla plantation workers.
A pseudophytodermatitis called vanilla lichen can be caused by flour mites (Tyroglyphus farinae).
Ecology
Scolytus multistriatus, one of the vectors of the Dutch elm disease, uses vanillin as a signal to find a host tree during oviposition.
| Physical sciences | Esters and ethers | Chemistry |
228236 | https://en.wikipedia.org/wiki/Biceps | Biceps | The biceps or biceps brachii (, "two-headed muscle of the arm") is a large muscle that lies on the front of the upper arm between the shoulder and the elbow. Both heads of the muscle arise on the scapula and join to form a single muscle belly which is attached to the upper forearm. While the long head of the biceps crosses both the shoulder and elbow joints, its main function is at the elbow where it flexes and supinates the forearm. Both these movements are used when opening a bottle with a corkscrew: first biceps screws in the cork (supination), then it pulls the cork out (flexion).
Structure
The biceps is one of three muscles in the anterior compartment of the upper arm, along with the brachialis muscle and the coracobrachialis muscle, with which the biceps shares a nerve supply. The biceps muscle has two heads, the short head and the long head, distinguished according to their origin at the coracoid process and supraglenoid tubercle of the scapula, respectively. From its origin on the glenoid, the long head remains tendinous as it passes through the shoulder joint and through the intertubercular groove of the humerus. Extending from its origin on the coracoid, the tendon of the short head runs adjacent to the tendon of the coracobrachialis. Unlike the other muscles in the anterior compartment of the arm, the long head of the biceps muscle crosses two joints, the shoulder joint and the elbow joint.
Both heads of the biceps join in the middle upper arm to form a single muscle mass, usually near the insertion of the deltoid, to form a common muscle belly; although several anatomic studies have demonstrated that the muscle bellies remain distinct structures without confluent fibers. As the muscle extends distally, the two heads rotate 90 degrees externally before inserting onto the radial tuberosity. The short head inserts distally on the tuberosity while the long head inserts proximally closer to the apex of the tuberosity. The bicipital aponeurosis, also called the lacertus fibrosus, is a thick fascial band that organizes close to the musculotendinous junction of the biceps and radiates over and inserts onto the ulnar part of the antebrachial fascia.
The tendon that attaches to the radial tuberosity is partially or completely surrounded by a bursa, the bicipitoradial bursa, which ensures frictionless motion between the biceps tendon and the proximal radius during pronation and supination of the forearm.
Two muscles lie underneath the biceps brachii. These are the coracobrachialis muscle, which like the biceps attaches to the coracoid process of the scapula, and the brachialis muscle which connects to the ulna and along the mid-shaft of the humerus. Besides those, the brachioradialis muscle is adjacent to the biceps and also inserts on the radius bone, though more distally.
Variation
Traditionally described as a two-headed muscle, biceps brachii is one of the most variable muscles of the human body and has a third head arising from the humerus in 10% of cases (normal variation)—most commonly originating near the insertion of the coracobrachialis and joining the short head—but four, five, and even seven supernumerary heads have been reported in rare cases.
One study found a higher than expected number of female cadavers with a third head of biceps brachii, equal incidence between sides of the body, and uniform innervation by musculocutaneous nerve.
The distal biceps tendons are completely separated in 40% and bifurcated in 25% of cases.
Nerve supply
The biceps shares its nerve supply with the other two muscles of the anterior compartment. The muscles are supplied by the musculocutaneous nerve. Fibers of the fifth, sixth and seventh cervical nerves make up the components of the musculocutaneous nerve which supply the biceps.
Blood supply
The blood supply of the biceps is the brachial artery. The distal tendon of the biceps can be useful for palpating the brachial pulse, as the artery runs medial to the tendon in the cubital fossa.
Function
The biceps works across three joints. Its most important functions are to supinate the forearm and flex the elbow. In addition, the long head of the biceps prevents the upward displacement of the head of the humerus. In more detail, the actions are, by joint:
Proximal radioulnar joint of the elbow – The biceps brachii functions as a powerful supinator of the forearm, i.e. it turns the palm upwards. This action, which is aided by the supinator muscle, requires the humeroulnar joint of the elbow to be at least partially flexed. If the humeroulnar joint is fully extended, supination is then primarily carried out by the supinator muscle. The biceps is a particularly powerful supinator of the forearm due to the distal attachment of the muscle at the radial tuberosity, on the opposite side of the bone from the supinator muscle. When flexed, the biceps effectively pulls the radius back into its neutral supinated position in concert with the supinator muscle.
Humeroulnar joint of the elbow – The biceps brachii also functions as an important flexor of the forearm, particularly when the forearm is supinated. Functionally, this action is performed when lifting an object, such as a bag of groceries or when performing a biceps curl. When the forearm is in pronation (the palm faces the ground), the brachialis, brachioradialis, and supinator function to flex the forearm, with minimal contribution from the biceps brachii. Regardless of forearm position, (supinated, pronated, or neutral) the force exerted by the biceps brachii remains the same; however, the brachioradialis has a much greater change in exertion depending on position than the biceps during concentric contractions. That is, the biceps can only exert so much force, and as forearm position changes, other muscles must compensate.
Glenohumeral joint (shoulder joint) – Several weaker functions occur at the glenohumeral joint. The biceps brachii weakly assists in forward flexion of the shoulder joint (bringing the arm forward and upwards). It may also contribute to abduction (bringing the arm out to the side) when the arm is externally (or laterally) rotated. The short head of the biceps brachii also assists with horizontal adduction (bringing the arm across the body) when the arm is internally (or medially) rotated. Finally, the short head of the biceps brachii, due to its attachment to the scapula (or shoulder blade), assists with stabilization of the shoulder joint when a heavy weight is carried in the arm. The tendon of the long head of the biceps also assists in holding the head of the humerus in the glenoid cavity and prevents an impingement of the supraspinatus tendon.
Motor units in the lateral portion of the long head of the biceps are preferentially activated during elbow flexion, while motor units in the medial portion are preferentially activated during forearm supination.
The biceps is commonly regarded as a symbol of strength in a variety of cultures worldwide.
Clinical significance
The proximal tendons of the biceps brachii are commonly involved in pathological processes and are a frequent cause of anterior shoulder pain. Disorders of the distal biceps brachii tendon include insertional tendonitis and partial or complete tears of the tendon. Partial tears are usually characterized by pain and enlargement and abnormal contour of the tendon. Complete tears occur as avulsion of the tendinous portion of the biceps away from its insertion on the tuberosity of the radius, and is often accompanied by a palpable, audible "pop" and immediate pain and soft tissue swelling.
A soft-tissue mass is sometimes encountered in the anterior aspect of the arm, the so-called Reverse Popeye deformity, which paradoxically leads to a decreased strength during flexion of the elbow and supination of the forearm.
Tendon rupture
Tears of the biceps brachii may occur during athletic activities; however, avulsion injuries of the distal biceps tendon are frequently occupational in nature and are sustained during forceful, eccentric contraction of the biceps muscle while lifting.
Treatment of a biceps tear depends on the severity of the injury. In most cases, the muscle will heal over time with no corrective surgery. Applying cold pressure and using anti-inflammatory medications will ease pain and reduce swelling. More severe injuries require surgery and post-op physical therapy to regain strength and functionality in the muscle. Corrective surgeries of this nature are typically reserved for elite athletes who rely on a complete recovery.
Training
The biceps can be strengthened using weight and resistance training. Examples of well known biceps exercises are the chin-up and biceps curl.
Etymology and grammar
The biceps brachii muscle is the one that gave all muscles their name: it comes from the Latin musculus, "little mouse", because the appearance of the flexed biceps resembles the back of a mouse. The same phenomenon occurred in Greek, in which μῦς, mȳs, means both "mouse" and "muscle".
The term biceps brachii is a Latin phrase meaning "two-headed [muscle] of the arm", in reference to the fact that the muscle consists of two bundles of muscle, each with its own origin, sharing a common insertion point near the elbow joint. The proper plural form of the Latin adjective biceps is bicipites, a form not in general English use. Instead, biceps is used in both singular and plural (i.e., when referring to both arms).
The English form bicep, attested from 1939, is a back-formation derived from misinterpreting the s of biceps as the English plural marker -s.
Adriaan van den Spiegel called the biceps a pisciculus (Latin for "little fish") due to its fusiform shape, which is why in the Italian-language medical literature it is sometimes called il pescetto, "the small fish".
History
Leonardo da Vinci expressed the original idea of the biceps acting as a supinator in a series of annotated drawings made between 1505 and 1510; in which the principle of the biceps as a supinator, as well as its role as a flexor to the elbow were devised. However, this function remained undiscovered by the medical community as da Vinci was not regarded as a teacher of anatomy, nor were his results publicly released. It was not until 1713 that this movement was re-discovered by William Cheselden and subsequently recorded for the medical community. It was rewritten several times by different authors wishing to present information to different audiences. The most notable recent expansion upon Cheselden's recordings was written by Guillaume Duchenne in 1867, in a journal named Physiology of Motion. It remains one of the major references on supination action of the biceps brachii.
Other species
Neanderthals
In Neanderthals, the radial bicipital tuberosities were larger than in modern humans, which suggests they were probably able to use their biceps for supination over a wider range of pronation-supination. It is possible that they relied more on their biceps for forceful supination without the assistance of the supinator muscle like in modern humans, and thus that they used a different movement when throwing.
Horses
In the horse, the biceps' function is to extend the shoulder and flex the elbow. It is composed of two short-fibred heads separated longitudinally by a thick internal tendon which stretches from the origin on the supraglenoid tubercle to the insertion on the medial radial tuberosity. This tendon can withstand very large forces when the biceps is stretched. From this internal tendon a strip of tendon, the lacertus fibrosus, connects the muscle with the extensor carpi radialis, an important feature in the horse's stay apparatus (through which the horse can rest and sleep whilst standing).
| Biology and health sciences | Human anatomy | Health |
228348 | https://en.wikipedia.org/wiki/Yellow%20crazy%20ant | Yellow crazy ant | The yellow crazy ant (Anoplolepis gracilipes), also known as the long-legged ant or Maldive ant, is a species of ant, thought to be native to West Africa or Asia. They have been accidentally introduced to numerous places in the world's tropics.
The yellow crazy ant has colloquially been given the modifier "crazy" on account of the ant's erratic movements when disturbed. Its long legs and antennae make it one of the largest invasive ant species in the world.
Like several other invasive ants, such as the red imported fire ant (Solenopsis invicta), the big-headed ant (Pheidole megacephala), the little fire ant (Wasmannia auropunctata), and the Argentine ant (Linepithema humile), the yellow crazy ant is a "tramp ant", a species that easily becomes established and dominant in new habitat due to traits such as aggression toward other ant species, little aggression toward members of its own species, efficient recruitment, and large colony size.
It is on a list of "one hundred of the world's worst invasive species" formulated by the International Union for Conservation of Nature (IUCN), having invaded ecosystems from Hawaii to the Seychelles, and formed supercolonies on Christmas Island in the Indian Ocean.
In 2023, a scientific article postulated a unique reproductive cycle for A. gracilipes, suggesting that males are obligate chimeras.
Physiology
Anoplolepis gracilipes is a relatively large, yellow to orange ant with long legs, large eyes and extremely long antennal scapes.
Although A. gracilipes is the only invasive species in the genus Anoplolepis, there are several other genera for which it can be mistaken. Both Leptomyrmex and Oecophylla can be confused with Anoplolepis because of their similar sizes and very long limbs. Anoplolepis can be distinguished from Leptomyrmex by the presence of an acidopore, while Anoplolepis can be distinguished from Oecophylla by the more compact petiole. Although both of these genera occur in the Pacific, neither contain any invasive species.
Several species of invasive ants belonging to the genera Camponotus and Paratrechina can appear similar to A. gracilipes. Although several invasive species of Pheidole can also be slender-bodied with long legs and long antennal scapes, they can be separated from A. gracilipes by their two-segmented waists.
A. gracilipes is widespread across the tropics, and populations are especially dense in the Pacific region. The species is most infamous for causing the ecological "meltdown" of Christmas Island. Although widespread across the Pacific, A. gracilipes can cause significant damage to native biological diversity. Strong quarantine measures are encouraged to keep it from spreading to new localities.
Geographical range and dispersal
The yellow crazy ant's natural habitats are the moist tropical lowlands of Southeast Asia, and surrounding areas and islands in the Indian and Pacific Oceans. It has been introduced into a wide range of tropical and subtropical environments including northern Australia, some of the Caribbean islands, some Indian Ocean islands (Seychelles, Madagascar, Mauritius, Réunion, the Cocos Islands and the Christmas Islands) and some Pacific islands (New Caledonia, Hawaii, French Polynesia, Okinawa, Vanuatu, Micronesia, Johnston Atoll, and the Galapagos archipelago). The species has been known to occupy such agricultural systems as cinnamon, citrus, coffee and coconut plantations. Because yellow crazy ants have generalized nesting habits, they are able to disperse via trucks, boats and other forms of human transport.
Crazy ant colonies naturally disperse through "budding", i.e. when mated queens and workers leave the nest to establish a new one, and only rarely through flight via female winged reproductive forms. Generally, colonies that disperse through budding have a lower rate of dispersal, requiring human intervention to reach distant areas. It has been recorded that A. gracilipes moves as much as a year in the Seychelles. A survey on Christmas Island, however, yielded an average spreading speed equivalent to about one kilometer (0.6 mile) per year.
Diet
A. gracilipes has been described as a "scavenging predator" exhibiting a broad diet, a characteristic of many invasive species. It consumes a wide variety of foods, including grains, seeds, arthropods, and decaying matter such as vertebrate corpses. They have been reported to attack and dismember invertebrates such as small isopods, myriapods, molluscs, arachnids, land crabs, earthworms and insects.
Like all ants, A. gracilipes requires a protein-rich food source for the queen to lay eggs and carbohydrates as energy for the workers. They get their carbohydrates from plant nectar and honeydew-producing insects, especially scale insects, aphids, and other Sternorrhyncha. Studies indicate that crazy ants rely so much on scale insects that a scarcity of them can actually limit ant population growth.
Reproduction
Similar to other ants, the queen produces eggs which are fertilized by male sperm that are stored in sperm stores. When an egg is fertilized, there are three distinct events that can happen: (i) the resulting diploid organism develops into a queen if the egg is fertilized by an R sperm or (ii) into an infertile diploid worker if the egg is fertilized by a W sperm. However, a third outcome has been described in a 2023 scientific study: (iii) the egg is fertilized by a W sperm but the parental nuclei bypasses the fusion of the two gametes and divide separately within the same egg, leading to a haploid male that is chimeric with a portion of cells carrying the W genome and a portion of cells carrying the R genome. Interestingly, not all tissues have equal proportions of each cell line, with sperm cells mostly carrying the W genome and thus providing the W alleles with a fitness advantage. This is the first known case of obligate chimerism in animals.
Mutualism
Crazy ants obtain much of their food requirements from scale insects, which are plant pests that feed on sap of trees and release honeydew, a sugary liquid. Ants eat honeydew, and in return protect the scale insects from their enemies and spread them among trees, an example of mutualism. The honeydew not eaten by the ants drips onto the trees and encourages the growth of sooty mold over the leaves and stems. This gives plants an ugly black appearance and reduces their health and vigor.
The ants protect the insects by "nannying" the mobile crawler stages and protecting them against their natural enemies. Experiments have shown that this connection is so strong that, in environments where A. gracilipes was removed, the density of scale insects dropped by 67% within 11 weeks, and to zero after 12 months.
In Australia
In Australia, yellow crazy ants have been found at more than 30 sites in Queensland, and in Arnhem Land in the Northern Territory, where a large scattered population exists. A single New South Wales infestation was detected and eradicated, and, in Western Australia, yellow crazy ants have been intercepted in shipping freight arriving at Fremantle.
Queensland's main infestation is in and around the Wet Tropics of Queensland rainforest, a World Heritage Site. The Northern Territory infestation covers an area larger than the Australian Capital Territory.
Climate modelling indicates yellow crazy ants could spread across northern Australia from Queensland to Western Australia, across much of Queensland and into coastal and inland parts of New South Wales. Areas with the most ideal habitat and climatic conditions, such as the Wet Tropics of Queensland rainforests, are likely to experience the highest impacts.
A cost-benefit analysis undertaken by the Queensland government in 2012 found that yellow crazy ants could cost Australia's economy over A$3 billion if the ants were not treated. This analysis did not take potential impacts on Australia's biodiversity into account. The known impacts of crazy ants in tropical rainforests overseas may provide useful insights into these impacts, bearing in mind that the most significant impacts are associated with relatively small islands, such as Christmas Island.
Impact on Christmas Island
Crazy ants have had a profound impact on the biodiversity of Christmas Island.
The crazy ant has a significant destructive impact on the island's ecosystem, killing and displacing crabs on the forest floor. The supercolonies also devastate crab numbers migrating to the coast. This has seen a rapid depletion in the number of land crabs — killing up to 20 million of them — which are vital to Christmas Island's biodiversity; land crabs are a keystone species in the forest ecology: they dig burrows, turn over the soil, and fertilize it with their droppings.
Seedlings that were previously eaten by crabs started to grow and, as a result, changed the structure of the forest. Weeds have spread into the rainforest because there are no crabs to control them. One of the most noticeable changes in the forest is the increased numbers of the stinging tree Dendrocnide peltata, which now flourishes in many areas frequently visited by humans. The forest canopy also changed as the scale insects tended by yellow crazy ants multiplied and killed mature trees.
Christmas Island red crabs are completely wiped out in infested areas. Populations of other ground and canopy dwelling animals, such as reptiles and other leaf litter fauna, have also decreased. During crab migrations, many crabs move through areas infested with ants and are killed. Studies show that the ant has displaced an estimated 15–20 million crabs by occupying their burrows, killing and eating resident crabs, and using their burrows as nest sites. This factor has greatly depleted red crabs, and made their annual land migrations far more perilous.
Although crazy ants do not bite or sting, they spray formic acid as a defence mechanism and to subdue their prey. In areas of high ant density, the movement of a land crab disturbs the ants and, as a result, the ants instinctively spray formic acid as a form of defence. The high levels of formic acid at ground level eventually overwhelm the crabs, and they are usually blinded then eventually die from dehydration (while attempting to flush off the formic acid) and exhaustion. As the dead crabs decay, the protein becomes available to the ants.
Crazy ants kill fauna, but encourage scale insects. Increased densities of scale insects cause forest die back, and even the death of large forest trees. These changes create a cascade of negative impacts, including weed invasion, significantly altering the forest landscape.
Supercolonies
The supercolonies on Christmas Island are a focal point for international control efforts. Supercolonies spread farther and cause more damage than single colonies, and they pose the single greatest known threat to the island's biodiversity.
Staff from Christmas Island National Park have worked in recent years to keep ant numbers in check. With help from the Christmas Island Crazy Ant Scientific Advisory Panel and support from the Australian Government they are holding ground.
Another supercolony nearly devastated the bird fauna of Johnston Atoll in the Pacific Ocean. The single massive colony was found to occupy nearly a quarter of the island, with up to 1,000 queens in a single plot of land. The infestation is thought to have been eradicated.
Control measures
To reduce the impacts of crazy ants on red crabs and Christmas Island's ecosystems, Parks Australia carried out a major aerial baiting program in 2009, following up the first aerial baiting conducted in 2002. The first step was conducting an extensive island-wide survey to determine the exact locations of the supercolonies. For several months, staff traversed the island surveying over 900 sites. The result was a map of crazy ant supercolonies and red crab burrow densities, together with other biodiversity data.
In September 2009, a helicopter was used to precisely bait the crazy ant supercolonies across parts of the island. A very low concentration of fipronil bait (0.1%) was used to control the ants. Monthly monitoring of these baited supercolony sites shows that crazy ant densities were reduced by 99%.
Park staff placed a high emphasis on minimising non-target impact of baiting. Food lures were dropped from a helicopter to attract robber crabs away from areas that were about to be baited. This technique, combined with the low concentration fipronil bait, proved to be highly successful with extremely low numbers of robber crabs and no red crabs known to be killed by the baiting.
While baiting has slowed the decline of the red crab, its effects on the crazy ant populations are only temporary, as escaping colonies invade the treated areas again, and it is expensive, requiring much manpower. In an effort to find a better control, after research, Parks Australia in December 2016 imported Tachardiaephagus somervillei, a small wasp, and began breeding it for release. The wasp, which attacks only scale insects, is a voracious predator of what is believed to be one of the crazy ant's largest sources of honeydew on Christmas Island, the yellow lac scale insect.
Researchers from La Trobe University in Melbourne, funded by Parks Australia, began looking for biological controls in 2009. While the ants are omnivores, studies have shown honeydew is an important part of the diet of Christmas Island crazy ants. Samples of ants taken from colonies that are growing rapidly have more honeydew in their diet than when the colonies decline. Further, restricting access to honeydew, by binding trees where the scale insects feed, dramatically reduced the colony as ant activity on the ground fell by 95% in just four weeks. In the laboratory, colonies with limited sources of sugar were compared to colonies with access to abundant sugar. Those with abundant sugar had more fertile queens and lower death rates among workers. The workers were also more aggressive toward other ant species and explored their environments more. This is believed to show why the ants decline when deprived of access to scale insects in the field, and confirm reduced honeydew will greatly reduce the ants' ability to form super colonies.
While controlling the scale insect is expected to control the yellow crazy ant on Christmas Island, on mainland Australia it is thought this would not help. There are at least a dozen honeydew producing insects as well as extrafloral nectar from native acacia trees, all of which fuel yellow crazy ants.
Experts continue to call for a fully funded, long term baiting program on mainland Australia.
| Biology and health sciences | Hymenoptera | Animals |
17027821 | https://en.wikipedia.org/wiki/Serow | Serow | The serow is any of four species of medium-sized goat-like or antelope-like mammals in the genus Capricornis. All four species of serow were, until recently, classified under Naemorhedus, which now only contains the gorals.
Extant species
This genus has been analyzed, studied and reclassified a number of times. In 2005, Mammal Species of the World (3rd ed.) listed six different species (C. crispus, C. milneedwardsii, C. rubidus, C. sumatraensis, C. swinhoei, and C. thar), with two subspecies of C. milneedwardsii. The current consensus recognises four species (C. crispus, C. rubidus, C. sumatraensis, and C. swinhoei), with milneedwardsii and thar demoted to subspecies of C. sumatraensis.
Serows live in south-central, southeast and eastern Asia. Their coloration varies by species, region, and individual. However, the different species are not particularly sexually dimorphic, as both males and females have beards and small horns (which are often shorter than their ears).
Like their smaller relatives, the gorals, serows are often found grazing on rocky and forested hillsides, though typically at a lower elevation in places where the two species' territories overlap; gorals tend to be wary and typically retreat to higher elevations and steeper mountainsides. Serows are slightly larger and slower-moving, and somewhat less agile, than gorals; however, they can still nimbly climb up or down the slopes to escape predation or to find appropriate shelter during cold winters or hot summers. Serows, unlike gorals, make use of their preorbital glands in territorial scent marking.
Fossils of serow-like animals date as far back as the late Pliocene, two to seven million years ago. The common ancestor species of the Caprinae subfamily may have been very similar to modern serows.
The serow population as a whole is considered endangered. Most serow species are included in the IUCN Red List with decreasing populations. The Japanese serow is better protected than the other species of serow.
| Biology and health sciences | Bovidae | Animals |
17030936 | https://en.wikipedia.org/wiki/Direct%20process | Direct process | The direct process, also called the direct synthesis, Rochow process, and Müller-Rochow process is the most common technology for preparing organosilicon compounds on an industrial scale. It was first reported independently by Eugene G. Rochow and Richard Müller in the 1940s.
The process involves copper-catalyzed reactions of alkyl halides with elemental silicon, which take place in a fluidized bed reactor. Although theoretically possible with any alkyl halide, the best results in terms of selectivity and yield occur with chloromethane (CH3Cl). Typical conditions are 300°C and 2–5 bar. These conditions allow for 90–98% conversion for silicon and 30–90% for chloromethane. Approximately 1.4 million tonnes of dimethyldichlorosilane (Me2SiCl2) are produced annually using this process.
Few companies actually carry out the Rochow process, because of the complex technology and high capital requirements. Since the silicon is crushed prior to reaction in a fluidized bed, the companies practicing this technology are referred to as silicon crushers.
Reaction and mechanism
The relevant reactions are (Me = CH3):
x MeCl + Si → Me3SiCl, Me2SiCl2, MeSiCl3, Me4Si2Cl2, …
Dimethyldichlorosilane (Me2SiCl2) is of particular value (precursor to silicones), but trimethylsilyl chloride (Me3SiCl) and methyltrichlorosilane (MeSiCl3) are also valuable.
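Written with explicit stoichiometry, the idealized equation for the most valuable product is the following (the copper catalyst and the roughly 300°C fluidized-bed conditions mentioned above are assumed; in practice a mixture of methylchlorosilanes is obtained, as described below):

\[
2\,\mathrm{CH_3Cl} \;+\; \mathrm{Si} \;\xrightarrow{\;\mathrm{Cu\ (cat.)},\ \sim 300\,^{\circ}\mathrm{C}\;}\; (\mathrm{CH_3})_2\mathrm{SiCl_2}
\]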
The mechanism of the direct process is still not well understood, despite much research. Copper plays an important role. The copper and silicon form intermetallics with the approximate composition Cu3Si. This intermediate facilitates the formation of the Si-Cl and Si-Me bonds. It is proposed that close proximity of the Si-Cl to a copper-chloromethane "adduct" allows for formation of the Me-SiCl units. Transfer of a second chloromethane allows for the release of the Me2SiCl2. Thus, copper is oxidized from the zero oxidation state and then reduced to regenerate the catalyst.
The chain reaction can be terminated in many ways. These termination processes give rise to the other products that are seen in the reaction. For example, combining two Si-Cl groups gives the SiCl2 group, which undergoes Cu-catalyzed reaction with MeCl to give MeSiCl3.
In addition to copper, the catalyst optimally contains promoter metals that facilitate the reaction. Among the many promoter metals, zinc, tin, antimony, magnesium, calcium, bismuth, arsenic, and cadmium have been mentioned.
Product distribution and isolation
The major product for the direct process should be dichlorodimethylsilane, Me2SiCl2. However, many other products are formed. Unlike most reactions, this distribution is actually desirable because the product isolation is very efficient. Each methylchlorosilane has specific and often substantial applications. Me2SiCl2 is the most useful. It is the precursor for the majority of silicon products produced on an industrial scale. The other products are used in the preparation of siloxane polymers as well as specialized applications.
Dichlorodimethylsilane is the major product of the reaction, as is expected, being obtained in about 70–90% yield. The next most abundant product is methyltrichlorosilane (MeSiCl3), at 5–15% of the total. Other products include Me3SiCl (2–4%), MeHSiCl2 (1–4%), and Me2HSiCl (0.1–0.5%).
The Me2SiCl2 is purified by fractional distillation. Because the boiling points of the various chloromethylsilanes are similar (Me2SiCl2: 70°C, MeSiCl3: 66°C, Me3SiCl: 57°C, MeHSiCl2: 41°C, Me2HSiCl: 35°C), the distillation utilizes columns with high separating capacities, connected in series. The purity of the products crucially affects the production of siloxane polymers; otherwise, chain branching arises.
| Physical sciences | Synthetic strategies | Chemistry |
423541 | https://en.wikipedia.org/wiki/Document%20file%20format | Document file format | A document file format is a text or binary file format for storing documents on a storage media, especially for use by computers.
There currently exist a multitude of incompatible document file formats.
Examples of XML-based open standards are DocBook, XHTML, and, more recently, the ISO/IEC standards OpenDocument (ISO 26300:2006) and Office Open XML (ISO 29500:2008).
In 1993, the ITU-T tried to establish a standard for document file formats, known as the Open Document Architecture (ODA) which was supposed to replace all competing document file formats. It is described in ITU-T documents T.411 through T.421, which are equivalent to ISO 8613. It did not succeed.
Page description languages such as PostScript and PDF have become the de facto standard for documents that a typical user should only be able to create and read, not edit. In 2001, a series of ISO/IEC standards for PDF began to be published, including the specification for PDF itself, ISO 32000.
HTML is the most widely used open international standard and is also used as a document file format. It has also become an ISO/IEC standard (ISO 15445:2000).
The default binary file format used by Microsoft Word (.doc) has become a widespread de facto standard for office documents, but it is a proprietary format and is not always fully supported by other word processors.
Common document file formats
ASCII, UTF-8 — plain text encodings. With these two character sets, three different line endings are used: (a) LF (line feed), by Unix and Unix-like systems; (b) CRLF (carriage return, line feed), by DOS and Windows systems; and (c) CR (carriage return), by older Macintosh systems. A short illustrative sketch of these conventions appears after this list.
Amigaguide
.doc for Microsoft Word — Structural binary format developed by Microsoft (specifications available since 2008 under the Open Specification Promise)
DjVu — file format designed primarily to store scanned documents
DocBook — an XML format for technical documentation
HTML (.html, .htm), (open standard, ISO from 2000), in combination with possible image files referred to.
FictionBook (.fb2) — open XML-based e-book format
Markdown (.md) — markup language for creating formatted text using plain text
Office Open XML — .docx (XML-based standard for office documents)
OpenDocument — .odt (XML-based standard for office documents)
OpenOffice.org XML — .sxw (open, XML-based format for office documents)
OXPS — Open XML Paper Specification (Windows 8.1 and above; the older version, XPS, is used in Windows 7)
PalmDoc — handheld document format
.pages for Pages
PDF — Open standard for document exchange. ISO standards include PDF/X (eXchange), PDF/A (Archive), PDF/E (Engineering), ISO 32000 (PDF), PDF/UA (Accessibility) and PDF/VT (Variable data and transactional printing). PDF is readable on almost every platform with free or open source readers. Open source PDF creators are also available.
PostScript — .ps
Rich Text Format (RTF) — document format developed by Microsoft since 1987 for Microsoft products and cross-platform document interchange
SYmbolic LinK (SYLK)
Scalable Vector Graphics (SVG) - Graphics format primarily for vector-based images.
TeX — Open-source typesetting program and format. First successful mathematical notation language.
TEI — XML format for digital publication
Troff
Uniform Office Format — Chinese standard
WordPerfect (.wpd, .wp, .wp7, .doc) (Note: possible confusion with Word format extension)
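As noted in the plain-text entry at the top of this list, the traditional line-ending conventions differ only in the control characters used. A minimal illustrative sketch in Python (the sample strings and the normalize_newlines helper are hypothetical examples, not part of any standard):

# Line-ending conventions for plain-text files (see the ASCII/UTF-8 entry above).
UNIX_LF = "first line\nsecond line\n"       # LF   (U+000A), Unix and Unix-like systems
DOS_CRLF = "first line\r\nsecond line\r\n"  # CRLF (U+000D U+000A), DOS and Windows
OLD_MAC_CR = "first line\rsecond line\r"    # CR   (U+000D), older Macintosh systems

def normalize_newlines(text: str) -> str:
    """Convert CRLF and bare CR line endings to LF."""
    return text.replace("\r\n", "\n").replace("\r", "\n")

if __name__ == "__main__":
    for sample in (UNIX_LF, DOS_CRLF, OLD_MAC_CR):
        # str.splitlines() recognizes all three conventions, so the logical lines match.
        assert normalize_newlines(sample).splitlines() == ["first line", "second line"]
        print(sample.encode("ascii"))  # show the raw bytes of each convention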
| Technology | File formats | null |
423669 | https://en.wikipedia.org/wiki/Whirligig%20beetle | Whirligig beetle | The whirligig beetles are water beetles comprising the family Gyrinidae, which usually swim on the surface of the water if undisturbed, though they swim underwater when threatened. They get their common name from their habit of swimming rapidly in circles when alarmed, and are also notable for their divided eyes which are believed to enable them to see both above and below water. The family includes some 700 extant species worldwide, in 15 genera, plus a few fossil species. Most species are very similar in general appearance, though they vary in size from perhaps 3 mm to 18 mm in length. They tend to be flattened and rounded in cross section, in plan view as seen from above, and in longitudinal section. In fact, their shape is a good first approximation to an ellipsoid, with legs and other appendages fitting closely into a streamlined surface. Whirligig beetles belong to the beetle suborder Adephaga, which also includes ground beetles and diving beetles.
Description
Whirligig beetles are most conspicuous for their bewildering swimming. They can be difficult to see if they are not moving or are under water. Most species are coloured steely grey or bronze. Their integument is finely sculpted with little pits; it is hard and elastic and produces a water repellent waxy outer layer, which is constantly supplemented. Among other functions, the lubricant layer and smooth outline make the beetles difficult to hold on to if caught.
The antennae are unusual among beetles, being short and plump, and placed about at water level. The compound eyes are remarkable for each being divided into a higher part that is above water level when a beetle is floating passively, and a lower part that is below water level. In this respect they recall the horizontally divided eyes of the four-eyed fishes (Anableps), which also live at the surface of the water. The middle, and more especially the hind legs are adapted for swimming (natatory): they are greatly flattened and fringed with bristles that fold to aid swimming action. In contrast the front legs are long and adapted for grasping food or prey. In males the front tarsi have suckers, which are used to hold onto the slippery female during mating.
Behavior and morphological adaptations
The Gyrinidae are surface swimmers for preference. They are known for the bewildering and rapid gyrations in which they swim, and for their gregarious behavior. Most species also can fly well, even taking off from water if need be. The combination constitutes a survival strategy that helps them to avoid predation and take advantage of mating opportunities. In general the adults occupy areas where water flows steadily and not too fast, such as minor rapids and narrows in leisurely streams. Such places supply a good turnover of floating detritus or struggling insects or other small animals that have fallen in and float with the current.
The positions that individuals occupy within a group are determined by a number of factors, thought to include hunger, sex, species, water temperature, age, parasite level and stress level. Research underway on their behavior is directed at investigating the significance of chemical defense in relation to their position in the group. Such studies are of interest in research into aspects of nanotechnology because the beetles' motion may be expected to provide insights into how groups of robots might coordinate movements. In particular, the beetles make behavioral trade-offs that affect their choices of positions within a group. For example, relatively hungry beetles go to the outside of a group, where there is less competition for finding food, but higher risk of encountering predators. Males are also more likely to be found on the outside of groups (although grouping is not known to be relevant to mating behavior in this family). The economies that the beetles can gain by suitably adjusting their positions within the group are important when individuals swim against the flow of a stream. By swimming behind other beetles they can take advantage of forward-moving drafts. Such action is called drafting. The determination of forward/backward positioning within a group has been found to be affected in a complex manner by a combination of water speed, sex of the beetle, and the type of predator (bird or fish) that a beetle has most recently observed.
The beetles may use the waves generated by their movement as a sort of radar to detect the position of objects on the water surface around them. This technique could be used to detect prey or to avoid colliding with each other.
The adult beetles carry a bubble of air trapped beneath their elytra. This allows them to dive and swim under well-oxygenated water for indefinite periods if necessary. The mechanism is sophisticated and amounts to a physical gill. In practice though, their ecological adaptation is for the adults to scavenge and hunt on the water surface, so they seldom stay down for long. The larvae have paired plumose tracheal gills on each of the first eight abdominal segments.
Generally, gyrinids lay their eggs under water, attached to water plants, typically in rows. Like the adults, the larvae are active predators, largely benthic inhabitants of the stream bed and aquatic plants. They have long thoracic legs with paired claws. Their mandibles are curved, pointed, and pierced with a sucking canal. In this they resemble the larvae of many other predatory water beetles, such as the Dytiscidae. Mature larvae pupate in a cocoon that also is attached to water plants.
Taxonomy
Whirligig beetles were previously grouped with other aquatic members of the Adephaga such as Dytiscidae, as members of the group "Hydradephaga". However, based on molecular evidence, they are currently thought to be the earliest diverging lineage of the Adephaga, and to have evolved their aquatic ecology independently from other adephagans. Cladogram after Vasilikopoulos et al. 2021.
Internal taxonomy
Taxonomy after
Spanglerogyrinae
Angarogyrus - Early Jurassic-Early Cretaceous, Asia
Spanglerogyrus - North America
Heterogyrinae
Mesogyrus - Late Jurassic-Early Cretaceous, Asia
Heterogyrus - Madagascar
Cretotortor - Late Cretaceous-Paleocene (Asia)
Baissogyrus - Zaza Formation, Russia, Early Cretaceous (Aptian)
Gyrininae
Dineutini
Cretodineutus - Burmese amber, Late Cretaceous (Cenomanian)
Cretogyrus - Burmese amber, Cenomanian
Dineutus
Enhydrus
Macrogyrus (including Andogyrus)
Mesodineutes - Darmakan Formation, Russia, Danian
Miodineutes - Germany, Miocene
Porrorhynchus
Gyrinini
Aulonogyrus
Gyrinoides
Gyrinus
Metagyrinus
Orectochilini
Gyretes
Orectochilus
Orectogyrus
Patrus
Chimerogyrus - Burmese amber, Cenomanian
Incertae sedis
Anagyrinus - Insektenmergel Formation, Switzerland, Early Jurassic, Hettangian
Gyrinopsis - Insektenmergel Formation, Switzerland, Early Jurassic, Hettangian
| Biology and health sciences | Beetles (Coleoptera) | Animals |
423684 | https://en.wikipedia.org/wiki/Water%20beetle | Water beetle | A water beetle is a generalized name for any beetle that is adapted to living in water at any point in its life cycle. Most water beetles can only live in fresh water, with a few marine species that live in the intertidal zone or littoral zone. There are approximately 2000 species of true water beetles native to lands throughout the world.
Many water beetles carry an air bubble, held in the so-called elytral cavity, underneath their abdomens; this bubble provides an air supply and prevents water from getting into the spiracles. Others have the surface of their exoskeleton modified to form a plastron, or "physical gill", which permits direct gas exchange with the water. Some families of water beetles have fringed hind legs adapted for swimming, but most do not. Most families of water beetles have larvae that are also aquatic; many have aquatic larvae and terrestrial adults.
Diet
Water beetles can be either herbivores, predators, or scavengers. Herbivorous beetles eat only aquatic vegetation, such as algae or leaves. They might also suck juices out of the stem of a nearby plant. Scavenger beetles will feed on decomposing organic material that has been deposited. The scavenged material can come from aquatic vegetation, feces, or other small organisms that have died. The great diving beetle, a predator, feeds on things like worms, tadpoles, and sometimes even small fish.
Species
Families in which all species are aquatic in all life stages include:
Dytiscidae
Gyrinidae (Whirligig beetles)
Haliplidae
Noteridae
Amphizoidae
Hygrobiidae (Squeak beetles)
Meruidae
Hydroscaphidae (Skiff beetles).
Families in which the adults are not necessarily aquatic include:
Hydrophilidae
Lutrochidae (Travertine beetles)
Dryopidae
Elmidae
Eulichadidae
Heteroceridae
Limnichidae
Psephenidae (Water-penny beetles)
Ptilodactylidae
Torridincolidae
Sphaeriusidae
| Biology and health sciences | Beetles (Coleoptera) | Animals |
423849 | https://en.wikipedia.org/wiki/Airport%20terminal | Airport terminal | An airport terminal is a building at an airport where passengers transfer between ground transportation and the facilities that allow them to board and disembark from an aircraft.
The buildings that provide access to the airplanes (via gates) are typically called concourses. However, the terms "terminal" and "concourse" are sometimes used interchangeably, depending on the configuration of the airport. Smaller airports have one terminal while larger airports have several terminals and/or concourses. At small airports, a single terminal building typically serves all of the functions of a terminal and a concourse. Larger airports might have either one terminal that is connected to multiple concourses or multiple almost independent unit terminals.
By the end of the 20th century airport terminals became symbols of progress and trade, showcasing the aspirations of nations constructing them. The buildings are also characterized by a very rapid pace of redevelopment, much higher than that for structures supporting other modes of transportation, eroding the boundary between the permanent and temporary construction.
Unit terminals
An airport might have multiple separate "unit terminals", for example to separate international travel from domestic travel, or to give individual airlines the ability to offer their own terminals. The unit terminals might use a similar design (Dallas-Fort Worth Airport) or be completely different (Pearson International Airport). Use of multiple terminals typically requires an extensive network of automatic people movers.
Functions
Terminals perform three main functions:
change of mode of transportation: flights almost inevitably involve some land travel, so the terminal should facilitate the movement of passengers along prescribed routes and thus contains so-called passenger circulation areas;
processing of the passengers and their luggage, which includes ticketing and check-in, separating luggage from passengers and later returning it, and security checks of both passengers and luggage. These functions are performed in the passenger processing spaces;
grouping/ungrouping of the passengers. Passengers do not arrive at the terminal pre-sorted in batches for their flights and have to be grouped to board a plane. Upon arrival, the reverse process occurs, so the terminal needs passenger holding spaces.
Landside and airside
Just like entire airports, the terminals are divided into landside and airside zones. Typically passengers and staff must be checked by airport security, and/or customs/border control before being permitted to enter the airside zone. Conversely, passengers arriving from an international flight must pass through border control and customs to access the landside area.
The landside-airside boundary became the defining element of the terminal architecture. The functions that are performed on the landside, like ticketing and check-in, are relatively stable, while the airside is subject to rapid technological and operational changes. Victor Marquez suggests that the boundary is not really an integral part of the airport functions, but a "socio-technical construct" that has gradually shaped the thinking of architects and planners.
Architectural styles
The passenger terminal is the main opportunity within the airport for architects to express themselves and a key element of the airport design. Brian Edwards compares the architectural role of the terminal in the airport to that of a mall within a small town.
Historically, airports were built in a variety of architectural styles, with the selection depending on the country:
in the US, Art Deco was used for Kansas's Fairfax Airport (1929, Charles A. Smith), Moderne for Washington Airport (1930, Holden, Scott and Hutchinson), Neocolonial for the first St. Louis airport (1931) and the Floyd Bennett Field (1931); Adobe style for Albuquerque Airport (1936–1939, Ernest Blumenthal), Spanish-pueblo style for the San Francisco Airport (1937, H. G. Chipier), International Modern for Chicago Airport, and a Moderne/Art Deco combination for the Miami Airport and LaGuardia (Delano and Aldrich);
South America was following the US pattern, with more Modern in the mix;
Europe preferred all stripes of Modernism, from Brick Expressionism (Madrid–Barajas Airport, 1929–1931, Luis Gutiérrez Soto) to edgy French Modernism and the Czech version of functionalism (Prague Airport, 1933–1937, Adolf Benš) to softer Scandinavian functionalism (Kastrup Airport, 1936–1938, Vilhelm Lauritzen). A few early terminals used the Neoclassical style of the Edwardian era; a terminal at the old Basra Airport was built to resemble a palace (1937–1938, Wilson and Mason).
The concrete boxes of terminals built in the 1960s and 1970s generally gave way to glass boxes in the 1990s and 2000s, with the best terminals making a vague stab at incorporating ideas of "light" and "air". However, some, such as Baghdad International Airport and Denver International Airport, are monumental in stature, while others are considered architectural masterpieces, such as Terminal 1 at Charles de Gaulle Airport, near Paris, the main terminal at Washington Dulles in Virginia, or the TWA Flight Center at New York's JFK Airport. A few are designed to reflect the culture of a particular area, some examples being the terminal at Albuquerque International Sunport in New Mexico, which is designed in the Pueblo Revival style popularized by architect John Gaw Meem, as well as the terminal at Bahías de Huatulco International Airport in Huatulco, Oaxaca, Mexico, which features some palapas that are interconnected to form the airport terminal.
Early history
The first airfields, built in the early 20th century, did not carry passengers and thus did not need terminals. Large facilities were built, however, to house the fragile and inventive airships of the time, protecting them from the elements and from industrial spies. Still, some of the concept architectural designs resembled the modern terminal buildings: Erich Mendelsohn's sketch (1914) contained a large building with attached ancillaries for aircraft (the central building was intended not for the passengers, but for a dirigible). The predecessors of the modern terminals were the structures erected for the air shows of the Edwardian era (for example, the Reims Air Meet in 1909). These buildings usually were L-shaped, with one wing dedicated to the planes and flight personnel, and the other intended for the spectators, with a grandstand and restaurants in an arrangement similar to the one used for racetracks. The shows also featured occasional passenger flights. The other template of a terminal was provided by the first airline, the German DELAG, which featured sheds for Zeppelins combined with passenger spaces close to the centers of cities, like the railroad stations.
The first European passenger airports of the interwar period in the major transportation nodes (London, Paris, Berlin) were converted military airfields (London Terminal Aerodrome, Croydon Aerodrome, Great West Aerodrome, Le Bourget, Tempelhof) and lacked spaces for the actual passengers. The US, on the other hand, lacked the war infrastructure and had to build its airports from scratch, mostly following the "hangar-depot" building type where staff, passengers, and airplanes were all accommodated inside a single large building, like the one at the Ford Dearborn Airport (1925–1926).
Dedicated passenger buildings started to appear. In Europe, Le Bourget got new buildings in a classical style, arranged in a very non-airport-like manner around a central garden, in the early 1920s. The "air station" of Königsberg Devau (1922) was probably the first design resembling the modern ones: Hanns Hopp, a German architect, placed a passenger building flanked by hangars in the corner of an airfield. This design influenced Tempelhof, arguably the seminal design in the history of airports: the original Modernist terminal by Paul and Klaus Engler of 1926–1929 was placed in the center of the field, which left no room for expansion, and it had to be replaced by a new building in the late 1930s (architect Ernst Sagebiel). Hounslow (now Heathrow airport) was processing its passengers through a reused aircraft hangar, and a new classical terminal was built at Croydon in 1928. In the US, by 1931 the first airport in Chicago (now Midway Airport) had its own Art Deco terminal building.
Sagebiel's Tempelhof had the appearance of a major railway terminus and housed, like many other European airports, great restaurants. The design survived for more than 60 years, highly unusual for an airport, because Sagebiel presciently oversized the building well beyond the scope of its original needs.
The original Le Bourget design was revised in 1936–1937, with the new Modernist single-terminal layout following the ideas of the not-yet-finished Tempelhof (but without covered access to the planes) and Croydon.
New York's LaGuardia Airport (Delano and Aldrich, 1939) contained many features common in modern designs: a two-level layout separating departing and arriving passengers, a "spine" concourse extending to both sides of the building, and "dispatcher booths" as precursors of the airport gates.
Airbridges
Tempelhof faced contemporary critique for its cantilevered roofs, which were intended to protect the planes and passengers but were wasteful in terms of construction and limited future aircraft designs (in addition to the lack of separation between boarding and deplaning passengers). Movable covered ways (precursors of the modern jet bridges) were experimented with in the 1930s. Boeing's United Airport in Burbank, California featured retractable canopies as early as 1930. Enclosed tubes first appeared in the 1936 terminal at the London South Airport: the circular terminal design included six telescopic tubes of rectangular cross-section for passengers, which moved on rails.
Rail links
The terminal at London South (now known as Gatwick Airport) also featured the first direct rail link connection (to the London Victoria Station). The rail ticket was included with the airfare.
Centralized luggage handling
The system for early separation of departing passengers from their luggage (the check-in desk) was introduced at the Speke Airport in Liverpool (1937–1938). It has remained a key element of the design of most passenger terminals ever since.
After Second World War
Some airlines checked in their passengers at downtown terminals, and had their own transportation facilities to the airfield. For example, Air France checked in passengers at the Invalides Air Terminal (Aérogare des Invalides) from 1946 to 1961, when all passengers started checking in at the airport. The Air Terminal continued in service as the boarding point for airline buses until 2016.
Chicago's O'Hare International Airport's innovative design pioneered concepts such as direct highway access to the airport, concourses, and jetbridges; these designs are now seen at most airports worldwide.
When London Stansted Airport's new terminal opened in 1991, it marked a shift in airport terminal design since Norman Foster placed the baggage handling system in the basement in order to create a vast open interior space. Airport architects have followed this model since unobstructed sightlines aid with passenger orientation. In some cases, architects design the terminal's ceiling and flooring with cues that suggest the required directional flow. For instance, at Toronto Pearson's Terminal 1 Moshe Safdie included skylights for wayfinding purposes.
Security
Originally, airport terminals were secured the same way as rail stations, with local police guarding against common crimes like pickpocketing. Industry-specific crimes were rare, although the first plane hijacking occurred in 1931 (in Peru). The 1960s brought waves of terrorism and tight security based on ICAO recommendations. By the 1990s both passengers and luggage were routinely screened for weapons and explosive devices. The old floorplans of terminals were frequently inadequate (and structures not strong enough to carry the weight of the new equipment), so extensive redesign was required. Passenger garages integrated into the terminals were moved out to reduce the potential effects of car bombs. Time spent by passengers at the airports greatly increased, creating the need for additional space.
Layouts
Early airport terminals opened directly onto the tarmac: passengers would simply walk to their aircraft, a so-called "open apron" layout. This simple design is still common among smaller airports.
Linear
For larger airports, like Kansas City International Airport, Munich Airport and Charles de Gaulle Airport, allowing many passengers to walk across the tarmac becomes unfeasible, so the terminals switch to the "linear" layout, where the planes are parked next to an elongated building and passengers use jet bridges to board. The design places a limit on the number of gates, as the walkability requirement dictates that the total length of the building (including the "spine" concourses) be less than a mile.
Semicircular
Some airports use a linear structure bent into a semicircular shape, with aircraft parked on the convex side and cars on the other. This design still requires long walks for connecting passengers, but greatly reduces travel times between check-in and the aircraft.
Pier
A pier design uses a small, narrow building with aircraft parked on both sides. One end connects to a ticketing and baggage claim area. Piers offer high aircraft capacity and simplicity of design, but often result in a long distance from the check-in counter to the gate (up to half a mile in the cases of Kansai International Airport or Lisbon Portela Airport's Terminal 1). Most large international airports have piers; O'Hare Airport in Chicago and Hartsfield Airport in Atlanta were able to process 45 million passengers per year using this layout in the 1970s.
Remote pier
The remote pier layout consists of multiple concourses connected by automatic people movers located underground or overhead. Once they arrive at the concourse, passengers board the planes as usual. This layout, after its first appearance at Hartsfield, was used at Stansted Airport in the UK and, with an adequate people-moving system, is considered to be very efficient for airport hubs with a high percentage of transfer passengers.
Satellite terminals
A satellite terminal is a round- or star-shaped building detached from other airport buildings, so that aircraft can park around its entire circumference. The first airport to use a satellite terminal was London Gatwick Airport, which used an underground pedestrian tunnel to connect the satellite to the main terminal. Passengers are sometimes ferried to the satellite terminals by people movers, trains, or overhead bridges. The layout has the potential to cut walking distances and was successfully applied at the Orlando International Airport and Tampa International Airport. However, the excessive area of airport apron required and the difficulty of remodeling for new aircraft designs have reduced its popularity. Los Angeles International Airport, in particular, switched from satellite terminals to a pier layout in the 1980s.
Transporter terminals
The idea of a large airport using specially-built vehicles to connect passengers to the planes was driven by the desire to reduce the time spent by the planes getting to and from the terminal, and dates to the 1960s. The bodies of the so-called mobile lounges can be raised to match the height of the terminal and airplane exit doors (much earlier designs used regular apron buses, for example at Milan's Linate Airport, but the passengers in that case had to climb up and down the airstairs). While used at Washington Dulles International Airport and King Abdulaziz International Airport, the arrangement is prone to slowing down embarkation and disembarkation as well as to accidental damage to the planes.
Other
A particularly unusual design was employed at Berlin Tegel Airport's Terminal A. Consisting of a hexagonal ring around a courtyard, five of its outer walls were airside and fitted with jet bridges, while the sixth (forming the entrance), along with the inner courtyard, was landside. Although superficially resembling a satellite design insofar as aircraft could park around most of the structure, it was in fact a self-contained terminal which, unlike a satellite, did not depend on remote buildings for facilities such as check-in, security controls, and arrivals.
Especially unusual were its exceptionally short walking distances and the lack of any central area for security, passport control, arrivals or transfer. Instead, individual check-in counters were located immediately in front of the gate of the flight they served. Checked-in passengers then entered airside via a short passage situated immediately to the side of the check-in desk, passed (for non-Schengen flights) a single passport control booth (with officers seated in the same area as check-in staff), followed by a single security lane which terminated at the gate's waiting area behind. Pairs of gates shared the same seating area, with small kiosks for duty-free and refreshments making up the only airside commercial offerings. Thus, other than the adjacent gate, passengers could not move around the terminal airside and there was no central waiting lounge and retail area for departures. Individual rooms for arrivals, likewise serving a pair of gates, each contained a single baggage carousel and were alternately situated in between each pair of departure gates on the same level, such that the entrance/exit of each jet bridge lay at the boundary of the two areas. Two or three passport control booths were located close to the end of the jet bridge for arriving passengers (causing passengers to queue into the bridge and the plane itself), and passengers left the arrivals area, unsegregated from departing passengers, into the same landside ring-concourse, emerging next to the check-in desks. This allowed both arriving and departing passengers immediate access to the courtyard on the same level, where short-stay parking and taxi pickup were located. Vehicles could enter and exit via a road underpass underneath the terminal building entrance.
For flights using jet-bridges and passengers arriving or leaving by private transport, this resulted in extremely short walking distances of just a few tens of metres between vehicles and the plane, with only a slightly longer walk for public transport connections. A downside of this design is a lack of any provision for transfer flights, with passengers only able to transit landside.
Hybrid layouts also exist. San Francisco International Airport and Melbourne Airport use a hybrid pier-semicircular layout for part of their terminals and a pier layout for the rest.
Levels
Chris Blow lists the following standard options of using multiple levels in the airport terminals:
Side-by-side arrivals and departures on a single level is the simplest option for small airports that do not use the jet bridges;
Side-by-side arrivals and departures on two levels keeps car traffic at street level at the landside interface, with lifts bringing the passengers to and from the upper (boarding) level with jet bridges;
Vertical stacking of arrivals and departures is adopted by the large airports. The departure spaces are located on the upper level, while the arrivals along with all baggage processing are handled at the lower level. This approach typically uses an elevated car approach for departures, so the departing passengers are dropped off at the level of (or above) the boarding gates. Deplaning passengers are guided down to the baggage reclaim area;
Vertical segregation is used for very high passenger traffic. In this scheme, there is no mixing of departing and arriving passengers at all. While segregation can be horizontal, typical arrangement places the departure circulation onto the upper level, while the arriving passenger flow happens on the lower level (at the end of their route, the departing passengers are guided down to the airplanes).
Common-use facility
A common-use facility or terminal design does not allow airlines to have their own proprietary check-in counters, gates and IT systems. Rather, check-in counters and gates can be flexibly reassigned as needed. This design is used at Boston Logan International Airport's Terminal E.
Records
The table below lists the airport terminals around the world with the largest amount of floor area, measured as usable floor space across multiple stories.
Ground transportation
Many small and mid-size airports have a one-, two-, or three-lane one-way loop road which is used by local private vehicles and buses to drop off and pick up passengers.
A large hub airport often has two grade-separated one-way loop roads, one for departures and one for arrivals. It may have a direct rail connection by regional rail, light rail, or subway to the downtown or central business district of the closest major city. The largest airports may have direct connections to the closest freeway. The Hong Kong International Airport has ferry piers on the airside for ferry connections to and from mainland China and Macau without passing through Hong Kong immigration controls.
| Technology | Concepts of aviation | null |
423933 | https://en.wikipedia.org/wiki/Cracking%20%28chemistry%29 | Cracking (chemistry) | In petrochemistry, petroleum geology and organic chemistry, cracking is the process whereby complex organic molecules such as kerogens or long-chain hydrocarbons are broken down into simpler molecules such as light hydrocarbons, by the breaking of carbon–carbon bonds in the precursors. The rate of cracking and the end products are strongly dependent on the temperature and presence of catalysts. Cracking is the breakdown of large hydrocarbons into smaller, more useful alkanes and alkenes. Simply put, hydrocarbon cracking is the process of breaking long-chain hydrocarbons into short ones. This process requires high temperatures.
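As a purely illustrative sketch (the specific molecules are chosen only as an example; real feeds crack into broad product mixtures), a long-chain alkane such as decane can split into a shorter alkane and an alkene:
C10H22 → C8H18 + C2H4 (decane → octane + ethene)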
More loosely, outside the field of petroleum chemistry, the term "cracking" is used to describe any type of splitting of molecules under the influence of heat, catalysts and solvents, such as in processes of destructive distillation or pyrolysis.
Fluid catalytic cracking produces a high yield of petrol and LPG, while hydrocracking is a major source of jet fuel, diesel fuel, naphtha, and again yields LPG.
History and patents
Among several variants of thermal cracking methods (variously known as the "Shukhov cracking process", "Burton cracking process", "Burton–Humphreys cracking process", and "Dubbs cracking process") Vladimir Shukhov, a Russian engineer, invented and patented the first in 1891 (Russian Empire, patent no. 12926, November 7, 1891). One installation was used to a limited extent in Russia, but development was not followed up. In the first decade of the 20th century the American engineers William Merriam Burton and Robert E. Humphreys independently developed and patented a similar process as U.S. patent 1,049,667 on June 8, 1908. Among its advantages was that both the condenser and the boiler were continuously kept under pressure.
In its earlier versions it was a batch process, rather than continuous, and many patents were to follow in the US and Europe, though not all were practical. In 1924, a delegation from the American Sinclair Oil Corporation visited Shukhov. Sinclair Oil apparently wished to suggest that the patent of Burton and Humphreys, in use by Standard Oil, was derived from Shukhov's patent for oil cracking, as described in the Russian patent. If that could be established, it could strengthen the hand of rival American companies wishing to invalidate the Burton–Humphreys patent. In the event Shukhov satisfied the Americans that in principle Burton's method closely resembled his 1891 patents, though his own interest in the matter was primarily to establish that "the Russian oil industry could easily build a cracking apparatus according to any of the described systems without being accused by the Americans of borrowing for free".
At that time, just a few years after the Russian Revolution and Russian Civil War, the Soviet Union was desperate to develop industry and earn foreign exchange. The Soviet oil industry eventually did obtain much of their technology from foreign companies, largely American ones. At about that time, fluid catalytic cracking was being explored and developed and soon replaced most of the purely thermal cracking processes in the fossil fuel processing industry. The replacement was not complete; many types of cracking, including pure thermal cracking, still are in use, depending on the nature of the feedstock and the products required to satisfy market demands. Thermal cracking remains important, for example, in producing naphtha, gas oil, and coke; more sophisticated forms of thermal cracking have since been developed for various purposes. These include visbreaking, steam cracking, and coking.
Cracking methodologies
Thermal cracking
Modern high-pressure thermal cracking operates at absolute pressures of about 7,000 kPa. An overall process of disproportionation can be observed, where "light", hydrogen-rich products are formed at the expense of heavier molecules which condense and are depleted of hydrogen. The actual reaction is known as homolytic fission and produces alkenes, which are the basis for the economically important production of polymers.
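A minimal schematic of such a homolytic, free-radical sequence, using small illustrative species rather than an actual industrial feed, might look like the following:
Initiation (homolytic fission of a C−C bond): CH3CH2CH2CH3 → 2 CH3CH2⋅
Hydrogen abstraction: CH3CH2⋅ + CH3CH2CH2CH3 → CH3CH3 + CH3CH2CH2CH2⋅
β-scission, yielding an alkene: CH3CH2CH2CH2⋅ → CH2=CH2 + CH3CH2⋅
Termination by recombination: 2 CH3CH2⋅ → CH3CH2CH2CH3
Each β-scission step regenerates a smaller radical, so one initiation event can convert many heavy molecules into lighter, hydrogen-rich fragments and alkenes, consistent with the disproportionation described above.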
Thermal cracking is currently used to "upgrade" very heavy fractions or to produce light fractions or distillates, burner fuel and/or petroleum coke. Two extremes of the thermal cracking in terms of the product range are represented by the high-temperature process called "steam cracking" or pyrolysis (ca. 750 °C to 900 °C or higher) which produces valuable ethylene and other feedstocks for the petrochemical industry, and the milder-temperature delayed coking (ca. 500 °C) which can produce, under the right conditions, valuable needle coke, a highly crystalline petroleum coke used in the production of electrodes for the steel and aluminium industries.
William Merriam Burton developed one of the earliest thermal cracking processes in 1912 which operated at and an absolute pressure of and was known as the Burton process. Shortly thereafter, in 1921, C.P. Dubbs, an employee of the Universal Oil Products Company, developed a somewhat more advanced thermal cracking process which operated at and was known as the Dubbs process. The Dubbs process was used extensively by many refineries until the early 1940s when catalytic cracking came into use.
Steam cracking
Steam cracking is a petrochemical process in which saturated hydrocarbons are broken down into smaller, often unsaturated, hydrocarbons. It is the principal industrial method for producing the lighter alkenes (or commonly olefins), including ethene (or ethylene) and propene (or propylene). Steam cracker units are facilities in which a feedstock such as naphtha, liquefied petroleum gas (LPG), ethane, propane or butane is thermally cracked through the use of steam in a bank of pyrolysis furnaces to produce lighter hydrocarbons.
In steam cracking, a gaseous or liquid hydrocarbon feed like naphtha, LPG or ethane is diluted with steam and briefly heated in a furnace without the presence of oxygen. Typically, the reaction temperature is very high, at around 850 °C, but the reaction is only allowed to take place very briefly. In modern cracking furnaces, the residence time is reduced to milliseconds to improve yield, resulting in gas velocities up to the speed of sound. After the cracking temperature has been reached, the gas is quickly quenched to stop the reaction in a transfer line heat exchanger or inside a quenching header using quench oil.
The products produced in the reaction depend on the composition of the feed, the hydrocarbon-to-steam ratio, and on the cracking temperature and furnace residence time. Light hydrocarbon feeds such as ethane, LPGs or light naphtha give product streams rich in the lighter alkenes, including ethylene, propylene, and butadiene. Heavier hydrocarbon (full range and heavy naphthas as well as other refinery products) feeds give some of these, but also give products rich in aromatic hydrocarbons and hydrocarbons suitable for inclusion in gasoline or fuel oil. Typical product streams include pyrolysis gasoline (pygas) and BTX.
A higher cracking temperature (also referred to as severity) favors the production of ethylene and benzene, whereas lower severity produces higher amounts of propylene, C4-hydrocarbons and liquid products. The process also results in the slow deposition of coke, a form of carbon, on the reactor walls. Since coke degrades the efficiency of the reactor, great care is taken to design reaction conditions to minimize its formation. Nonetheless, a steam cracking furnace can usually only run for a few months between de-cokings. "Decokes" require the furnace to be isolated from the process and then a flow of steam or a steam/air mixture is passed through the furnace coils. This decoking is essentially combustion of the carbons, converting the hard solid carbon layer to carbon monoxide and carbon dioxide.
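As a rough sketch, the idealised overall reactions during decoking (ignoring the complex real structure of the coke deposit) are:
C + O2 → CO2
2 C + O2 → 2 CO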
Fluid catalytic cracking
The catalytic cracking process involves the presence of solid acid catalysts, usually silica-alumina and zeolites. The catalysts promote the formation of carbocations, which undergo processes of rearrangement and scission of C-C bonds. Relative to thermal cracking, cat cracking proceeds at milder temperatures, which saves energy. Furthermore, by operating at lower temperatures, the yield of undesirable alkenes is diminished. Alkenes cause instability of hydrocarbon fuels.
Fluid catalytic cracking is a commonly used process, and a modern oil refinery will typically include a cat cracker, particularly at refineries in the US, due to the high demand for gasoline. The process was first used around 1942 and employs a powdered catalyst. During WWII, the Allied Forces had plentiful supplies of the materials in contrast to the Axis Forces, which suffered severe shortages of gasoline and artificial rubber. Initial process implementations were based on low activity alumina catalyst and a reactor where the catalyst particles were suspended in a rising flow of feed hydrocarbons in a fluidized bed.
In newer designs, cracking takes place using a very active zeolite-based catalyst in a short-contact time vertical or upward-sloped pipe called the "riser". Pre-heated feed is sprayed into the base of the riser via feed nozzles where it contacts extremely hot fluidized catalyst at . The hot catalyst vaporizes the feed and catalyzes the cracking reactions that break down the high-molecular weight oil into lighter components including LPG, gasoline, and diesel. The catalyst-hydrocarbon mixture flows upward through the riser for a few seconds, and then the mixture is separated via cyclones. The catalyst-free hydrocarbons are routed to a main fractionator for separation into fuel gas, LPG, gasoline, naphtha, light cycle oils used in diesel and jet fuel, and heavy fuel oil.
During the trip up the riser, the cracking catalyst is "spent" by reactions which deposit coke on the catalyst and greatly reduce activity and selectivity. The "spent" catalyst is disengaged from the cracked hydrocarbon vapors and sent to a stripper where it contacts steam to remove hydrocarbons remaining in the catalyst pores. The "spent" catalyst then flows into a fluidized-bed regenerator where air (or in some cases air plus oxygen) is used to burn off the coke to restore catalyst activity and also provide the necessary heat for the next reaction cycle, cracking being an endothermic reaction. The "regenerated" catalyst then flows to the base of the riser, repeating the cycle.
The gasoline produced in the FCC unit has an elevated octane rating but is less chemically stable compared to other gasoline components due to its olefinic profile. Olefins in gasoline are responsible for the formation of polymeric deposits in storage tanks, fuel ducts and injectors. The FCC LPG is an important source of C3–C4 olefins and isobutane that are essential feeds for the alkylation process and the production of polymers such as polypropylene.
Typical yields of a UOP Fluid Catalytic Cracker (volume, feed basis, ~23 API feedstock and 74% conversion)
Hydrocracking
Hydrocracking is a catalytic cracking process assisted by the presence of added hydrogen gas. Unlike a hydrotreater, hydrocracking uses hydrogen to break C–C bonds (hydrotreatment is conducted prior to hydrocracking to protect the catalysts in a hydrocracking process). In 2010, 265 million tons of petroleum was processed with this technology. The main feedstock is vacuum gas oil, a heavy fraction of petroleum.
The products of this process are saturated hydrocarbons; depending on the reaction conditions (temperature, pressure, catalyst activity) these products range from ethane, LPG to heavier hydrocarbons consisting mostly of isoparaffins. Hydrocracking is normally facilitated by a bifunctional catalyst that is capable of rearranging and breaking hydrocarbon chains as well as adding hydrogen to aromatics and olefins to produce naphthenes and alkanes.
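A purely illustrative and deliberately simplified hydrocracking reaction, not representing any specific industrial feed, is the splitting of a long-chain paraffin with hydrogen into two saturated fragments:
C16H34 + H2 → 2 C8H18 (hexadecane + hydrogen → two molecules of octane)
In contrast to thermal cracking, the added hydrogen saturates the fragments, so alkenes do not remain in the products.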
The major products from hydrocracking are jet fuel and diesel, but low-sulphur naphtha fractions and LPG are also produced. All these products have a very low content of sulfur and other contaminants, with the goal of reducing the gasoil and naphtha range material to 10 ppm sulfur or lower. Hydrocracking is very common in Europe and Asia because those regions have high demand for diesel and kerosene. In the US, fluid catalytic cracking is more common because the demand for gasoline is higher.
The hydrocracking process depends on the nature of the feedstock and the relative rates of the two competing reactions, hydrogenation and cracking. Heavy aromatic feedstock is converted into lighter products under a wide range of very high pressures (1,000–2,000 psi) and fairly high temperatures (750–1,500 °F, 400–800 °C), in the presence of hydrogen and special catalysts.
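For reference, converting the quoted pressure range to SI units (1 psi ≈ 6.895 kPa): 1,000 psi ≈ 6.9 MPa and 2,000 psi ≈ 13.8 MPa, i.e. roughly 70–140 atmospheres.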
Indicative Isocracking (UOP VGO Hydrocracking) Yields
Feedstock: Russian VGO 18.5 API, 2.28% Sulfur by wt, 0.28% Nitrogen by wt, Wax 6.5% by wt.
Feedstock Distillation Curve
Products from a UOP Hydrocracker
Hydrocracking is (mostly) a licensed technology due to its complexity. Typically the licensor is also the catalyst provider. Also, unit internals can often be patented by the process licensors and are designed to support specific functions of the catalyst load. Currently, the major process licensors for hydrocracking are:
UOP
Axens
Chevron Lummus Global
Topsoe
Shell Criterion
Elessent (formerly DuPont)
ExxonMobil (iso-dewaxing for lubricant hydrocracking)
Fundamentals
Outside of the industrial sector, cracking of C−C and C−H bonds is a rare chemical reaction. In principle, ethane can undergo homolysis:
CH3CH3 → 2 CH3⋅
Because C−C bond energy is so high (377 kJ/mol), this reaction is not observed under laboratory conditions. More common examples of cracking reactions involve retro-Diels–Alder reactions. Illustrative is the thermal cracking of dicyclopentadiene to produce cyclopentadiene.
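Written out, the retro-Diels–Alder cracking of dicyclopentadiene mentioned above is:
C10H12 → 2 C5H6 (dicyclopentadiene → two molecules of cyclopentadiene)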
| Physical sciences | Other reactions | Chemistry |
423943 | https://en.wikipedia.org/wiki/Rhesus%20macaque | Rhesus macaque | The rhesus macaque (Macaca mulatta), colloquially rhesus monkey, is a species of Old World monkey. There are between six and nine recognised subspecies split between two groups, the Chinese-derived and the Indian-derived. Generally brown or grey in colour, it is in length with a tail and weighs . It is native to South, Central, and Southeast Asia and has the widest geographic range of all non-human primates, occupying a great diversity of altitudes and habitats.
The rhesus macaque is diurnal, arboreal, and terrestrial. It is mostly herbivorous, feeding mainly on fruit, but also eating seeds, roots, buds, bark, and cereals. Rhesus macaques living in cities also eat human food and trash. They are gregarious, with troops comprising 20–200 individuals. The social groups are matrilineal. Individuals communicate with a variety of facial expressions, vocalisations, body postures, and gestures.
As a result of the rhesus macaque's relatively easy upkeep, wide availability, and closeness to humans anatomically and physiologically, it has been used extensively in medical and biological research. It has facilitated many scientific breakthroughs including vaccines for rabies, smallpox, polio and antiretroviral medication to treat HIV/AIDS. A rhesus macaque became the first primate astronaut in 1948.
The rhesus is listed as Least Concern in the IUCN Red List.
Etymology
The name "rhesus" is reminiscent of the mythological king Rhesus of Thrace, a minor character in the Iliad. However, the French naturalist Jean-Baptiste Audebert who named the species, stated: "it has no meaning". The rhesus macaque is also known colloquially as the "rhesus monkey".
Taxonomy
According to Zimmermann's first description of 1780, the rhesus macaque is distributed in eastern Afghanistan, Bangladesh, Bhutan, as far east as the Brahmaputra Valley, Barak valley and in peninsular India, Nepal, and northern Pakistan. Today, this is known as the Indian rhesus macaque Macaca mulatta mulatta, which includes the morphologically similar M. rhesus villosus, described by True in 1894, from Kashmir, and M. m. mcmahoni, described by Pocock in 1932 from Kootai, Pakistan. Several Chinese subspecies of rhesus macaques were described between 1867 and 1917. The molecular differences identified among populations, however, are alone not consistent enough to conclusively define any subspecies.
The Chinese subspecies can be divided as follows:
M. m. mulatta is found in western and central China, in the south of Yunnan, and southwest of Guangxi;
M. m. lasiota (Gray, 1868), the west Chinese rhesus macaque, is distributed in the west of Sichuan, northwest of Yunnan, and southeast of Qinghai; it is possibly synonymous with M. m. sanctijohannis (R. Swinhoe, 1867), if not with M. m. mulatta.
M. m. tcheliensis (Milne-Edwards, 1870), the north Chinese rhesus macaque, lives in the north of Henan, south of Shanxi, and near Beijing. Some consider it as the most endangered subspecies. Others consider it possibly synonymous with M. m. sanctijohannis, if not with M. m. mulatta.
M. m. vestita (Milne-Edwards, 1892), the Tibetan rhesus macaque, lives in the southeast of Tibet, northwest of Yunnan (Deqing), and perhaps including Yushu; it is possibly synonymous with M. m. sanctijohannis, if not with M. m. mulatta.
M. m. littoralis (Elliot, 1909), the south Chinese rhesus macaque, lives in Fujian, Zhejiang, Anhui, Jiangxi, Hunan, Hubei, Guizhou, northwest of Guangdong, north of Guangxi, northeast of Yunnan, east of Sichuan, and south of Shaanxi; it is possibly synonymous with M. m. sanctijohannis, if not with M. m. mulatta.
M. m. brevicaudus, also referred to as Pithecus brevicaudus (Elliot, 1913), lives on the Hainan Island and Wanshan Islands in Guangdong, and the islands near Hong Kong; it may be synonymous with M. m. mulatta.
M. m. siamica (Kloss, 1917), the Indochinese rhesus macaque, is distributed in Myanmar, in the north of Thailand and Vietnam, in Laos, and in the Chinese provinces of Anhui, northwest Guangxi, Guizhou, Hubei, Hunan, central and eastern Sichuan, and western and south-central Yunnan; possibly synonymous with M. m. sanctijohannis, if not with M. m. mulatta.
Description
The rhesus macaque is brown or grey in color and has a pink face, which is bereft of fur. It has, on average, 50 vertebrae and a wide rib cage. Its tail averages between . Adult males measure about on average and weigh about . Females are smaller, averaging in length and in weight. The ratio of arm length to leg length is 89.6–94.3%.
The rhesus macaque has a dental formula of and bilophodont molar teeth.
Distribution and habitat
Rhesus macaques are native to India, Bangladesh, Pakistan, Nepal, Myanmar, Thailand, Afghanistan, Vietnam, southern China, and some neighbouring areas. They have the widest geographic range of any non-human primate, occupying a great diversity of altitudes throughout Central, South, and Southeast Asia. Inhabiting arid, open areas, rhesus macaques may be found in grasslands, woodlands, and in mountainous regions up to in elevation. They are strong swimmers and can swim across rivers. Rhesus macaques are noted for their tendency to move from rural to urban areas, coming to rely on handouts or refuse from humans. They adapt well to human presence, and form larger troops in human-dominated landscapes than in forests. Rhesus monkeys live in patches of forest within agricultural areas, which gives them access to agroecosystem habitats and makes them at ease in navigating through them.
The southern and the northern distributional limits for rhesus and bonnet macaques, respectively, currently run parallel to each other in the western part of India, are separated by a large gap in the center, and converge on the eastern coast of the peninsula to form a distribution overlap zone. This overlap region is characterized by the presence of mixed-species troops, with pure troops of both species sometimes occurring even in close proximity to one another. The range extension of rhesus macaque – a natural process in some areas, and a direct consequence of introduction by humans in other regions – poses grave implications for the endemic and declining populations of bonnet macaques in southern India.
Kumar et al. (2013) provide a summary of population distribution and habitat in India, reporting sightings of rhesus macaques in all surveyed habitats except semi-evergreen forests.
Fossil record
Fossilized isolated teeth and mandible fragments from Tianyuan Cave and a juvenile maxilla from Wanglaopu Cave near Zhoukoudian represent the first recognized occurrence of rhesus macaque fossils in the far north of China, and thus the population of rhesus macaques which lived around Beijing decades ago is believed to have originated from Pleistocene ancestors rather than being human-introduced. Fossil mandible fragments from the Taedong River Basin around Pyongyang, North Korea, have also been assigned to this species.
Exogenous colonies
Rhesus macaques have also been introduced and acclimated to other areas, such as the United States, where they are considered an invasive species. Colonies have been established in Florida, Puerto Rico, and South Carolina.
Around the spring of 1938, a colony of rhesus macaques was released in and around Silver Springs in Florida by a tour boat operator known locally as "Colonel Tooey" to enhance his "Jungle Cruise". Tooey had been hoping to profit from the boom in jungle adventure stories in film and print media, buying the monkeys to be attractions on his river boat tour. Tooey apparently had not been aware that rhesus macaques are proficient swimmers, so his original plan to keep the monkeys isolated on an island in the river did not work. The macaques nevertheless remained in the region thanks to daily feedings by Tooey and the boat tours. Tooey subsequently released additional monkeys to add to the gene pool and avoid inbreeding. The traditional story that the monkeys were released for scenery enhancement in the Tarzan movies that were filmed at that location is false, as the only Tarzan movie filmed in the area, 1939's Tarzan Finds a Son!, does not contain rhesus macaques. While this was the first colony established and the longest lasting, other colonies have since been established intentionally or accidentally. A population in Titusville, Florida, was featured at the now defunct Tropical Wonderland theme park, which coincidentally was at one time endorsed by Johnny Weissmuller, who had portrayed Tarzan in the aforementioned films. This association might have contributed to the misconception that the monkeys were associated directly with the Tarzan films. This colony either escaped or was intentionally released, roaming the woods of the area for a decade. In the 1980s a trapper captured several monkeys from the Titusville population and released them in the Silver Springs area to join that population. The last printed records of monkeys in the Titusville area occurred in the early 1990s, but sightings continue to this day.
Various colonies of rhesus macaque are speculated to be the result of zoos and wildlife parks destroyed in hurricanes, most notably Hurricane Andrew. A 2020 estimate put the number of rhesus macaques living in the state at 550–600; officials have caught more than 1,000 of the monkeys in the past decade. Most of the captured monkeys tested positive for herpes B virus, which leads wildlife officials to consider the animals a public health hazard. Of the three monkey species to have had any lasting presence in Florida, the other two being African vervet monkeys and South American squirrel monkeys, the rhesus macaques have endured the longest and are the only ones to show continual population growth. The species' adaptable nature, generalized diet, and larger size, which reduces the risk of cold stress or predator attack, are thought to be reasons for their success.
Despite the risks, the macaques have continued to enjoy long-standing support from Florida residents, who strongly disagree with their removal. The Silver Springs colony has continued to grow in size and range, being commonly sighted in the park grounds, the nearby city of Ocala, Florida, and the neighboring Ocala National Forest. Individuals likely originating from this colony have been seen hundreds of kilometers away, in St. Augustine, Florida and St. Petersburg, Florida. One infamous individual, named the "Mystery Monkey of Tampa Bay", evaded capture for years, inspiring social media posts and a song.
Exogenous colonies have also resulted from research activities. There is a colony of rhesus macaques on Morgan Island, one of the Sea Islands in the South Carolina Lowcountry; they were imported in the 1970s for use in local labs. Another research colony was established by the Caribbean Primate Research Center of the University of Puerto Rico on the island of Cayo Santiago, off Puerto Rico. There are no predators on the island, and humans are not permitted to land, except as part of the research program. Another Puerto Rico research colony was released into the Desecheo National Wildlife Refuge in 1966. These monkeys continue to cause ecological harm, damaging crops amounting to $300,000 per year and costing $1,000,000 per year to manage.
Ecology and behavior
The Rhesus macaque is diurnal, and both arboreal and terrestrial. It is quadrupedal and, when on the ground, it walks digitigrade and plantigrade. It is mostly herbivorous, feeding mainly on fruit, but also eating seeds, roots, buds, bark, and cereals. It is estimated to consume around 99 different plant species in 46 families. During the monsoon season, it gets much of its water from ripe and succulent fruit. Rhesus macaques living far from water sources lick dewdrops from leaves and drink rainwater accumulated in tree hollows. They have also been observed eating termites, grasshoppers, ants, and beetles. When food is abundant, they are distributed in patches, and forage throughout the day in their home ranges. They drink water when foraging, and gather around streams and rivers. Rhesus macaques have specialized pouch-like cheeks, allowing them to temporarily hoard their food.
It also eats invertebrates, including adult and larval insects, spiders, lice, honeycombs, crabs, and bird eggs, storing food temporarily in its specialised cheek pouches. With the increase in anthropogenic land change, the rhesus macaque has evolved alongside the intense and rapid environmental disturbance associated with human agriculture and urbanization, which has altered the proportions of its diet.
In psychological research, rhesus macaques have demonstrated a variety of complex cognitive abilities, including the ability to make same-different judgments, understand simple rules, and monitor their own mental states. They have even been shown to demonstrate self-agency, an important type of self-awareness. In 2014, onlookers at a train station in Kanpur, India, documented a rhesus monkey, knocked unconscious by overhead power lines, that was revived by another rhesus that systematically administered a series of resuscitative actions.
Group structure
Like other macaques, rhesus troops comprise a mixture of 20–200 males and females. Females may outnumber the males by a ratio of 4:1. Males and females both have separate hierarchies. Female philopatry, common among social mammals, has been extensively studied in rhesus macaques. Females tend not to leave the social group, and have highly stable matrilineal hierarchies in which a female's rank is dependent on the rank of her mother. In addition, a single group may have multiple matrilineal lines existing in a hierarchy, and a female outranks any unrelated females that rank lower than her mother. Rhesus macaques are unusual in that the youngest females tend to outrank their older sisters. This is likely because young females are more fit and fertile. Mothers seem to prevent their older daughters from forming coalitions against them. The youngest daughter is the most dependent on the mother, and would have nothing to gain from helping her siblings overthrow their mother. Since each daughter has a high rank in her early years, rebelling against her mother is discouraged. Juvenile male macaques also exist in matrilineal lines, but once they reach four to five years of age, they are driven out of their natal groups by the dominant male. Thus, adult males gain dominance by age and experience.
In the group, macaques position themselves based on rank. The "central male subgroup" contains the two or three oldest and most dominant males which are codominant, along with females, their infants, and juveniles. This subgroup occupies the center of the group and determines the movements, foraging, and other routines. The females of this subgroup are also the most dominant of the entire group. The farther to the periphery a subgroup is, the less dominant it is. Subgroups on the periphery of the central group are run by one dominant male, of a rank lower than the central males, and he maintains order in the group, and communicates messages between the central and peripheral males. A subgroup of subordinate, often subadult, males occupy the very edge of the groups, and have the responsibility of communicating with other macaque groups and making alarm calls. Rhesus social behaviour has been described as despotic, in that high-ranking individuals often show little tolerance, and frequently become aggressive towards non-kin. Top-ranking female rhesus monkeys are known to sexually coerce unreceptive males and also physically injure them, biting off digits and damaging their genitals.
Rhesus macaques have been observed engaging in interspecies grooming with Hanuman langurs and with Sambar deer.
Communication
Rhesus macaques interact using a variety of facial expressions, vocalizations, body postures, and gestures. Perhaps the most common facial expression the macaque makes is the "silent bared teeth" face. This is made between individuals of different social ranks, with the lower-ranking one giving the expression to its superior. A less-dominant individual also makes a "fear grimace", accompanied by a scream, to appease or redirect aggression. Another submissive behavior is the "present rump", where an individual raises its tail and exposes its genitals to the dominant one. A dominant individual threatens another individual by standing quadrupedally and making a silent "open mouth stare" accompanied by the tail sticking straight. During movements, macaques make coos and grunts. These are also made during affiliative interactions, and approaches before grooming. When they find rare food of high quality, macaques emit warbles, harmonic arches, or chirps. When in threatening situations, macaques emit a single loud, high-pitched sound called a shrill bark. Screeches, screams, squeaks, pant-threats, growls, and barks are used during aggressive interactions. Infants "gecker" to attract their mother's attention.
Reproduction
Adult male macaques try to maximize their reproductive success by mating with females both in and outside the breeding period. Females prefer to mate with males that are not familiar to them. Outsider males who are not members of the female's own troop are preferred over higher-ranking males. Outside of the consortship period, males and females return to their prior behavior of not exhibiting preferential treatment or any special relationship. The breeding period can last up to eleven days, and a female usually mates with numerous males during that time. Male rhesus macaques have been observed to fight for access to sexually receptive females and they suffer more wounds during the mating season. Female macaques first breed when they are four years old and reach menopause at around twenty-five years of age. Male macaques generally play no role in raising the young but do have peaceful relationships with the offspring of their consort pairs.
Manson and Parry found that free-ranging rhesus macaques avoid inbreeding. Adult females were never observed to copulate with males of their own matrilineage during their fertile periods.
Mothers with one or more immature daughters in addition to their infants are in contact with their infants less than those with no older immature daughters, because the mothers may pass the parenting responsibilities to their daughters. High-ranking mothers with older immature daughters also reject their infants significantly more than those without older daughters and tend to begin mating earlier in the mating season than expected based on their dates of parturition the preceding birth season. Infants farther from the center of the groups are more vulnerable to infanticide from outside groups. Some mothers abuse their infants, which is believed to be the result of controlling parenting styles.
Aging
The rhesus monkey has been used as a model for studying aging of the ovaries of primate females. Ovarian aging was found to be associated with increased DNA double strand breaks and reduced DNA repair in granulosa cells, that is, somatic cells closely associated with developing oocytes.
Self-awareness
In several experiments giving mirrors to rhesus monkeys, they looked into the mirrors and groomed themselves, as well as flexed various muscle groups. This behaviour indicates that they recognised and were aware of themselves.
Human–rhesus conflict
The macaque–human relationship is complex and culturally specific, ranging from relatively peaceful coexistence to extreme levels of conflict. Conflicts tend to result from rapidly changing agricultural practices, increasing urbanisation, and clearing of woodlands and other territory, pushing macaques into human settlements in the search for resources. A 2021 study stated that human–macaque conflict is one of the most critical challenges faced by wildlife managers in the South and Southeast Asian regions.
Conflict between rhesus macaques and humans is at an all-time high, as areas of once-forested habitat are converted to industrial agriculture. In Nepal, the expansion of monocultures, increased forest fragmentation, degradation of natural habitats and changing agricultural practices have led to a significant increase in the frequency of human–macaque conflict. Crop raiding is one of the most visible effects of human–rhesus conflict. The estimated financial cost of macaque corn and rice raiding to individual farmer households is approximately US$14.9, or 4.2% of their yearly income. This has resulted in farmers and other members of the population viewing macaques inhabiting agricultural landscapes as serious crop pests. Nepal is a significant study area: almost 44% of its land area contains suitable habitat for rhesus macaques, but only 8% of that suitable area lies within protected national parks. Rhesus macaques are rated as one of the top ten crop-raiding wildlife species in Nepal, which adds to their negative perception.
Suggestions to mitigate conflict include "prioritizing forest restoration programs, strategic management plans designed to connect isolated forest fragments with high rhesus macaque population densities, creating government programs that compensate farmers for income lost due to crop-raiding, and educational outreach that informs local villagers of the importance of conservation and protecting biodiversity". Such mitigation strategies offer the most effective solutions for reducing conflict between rhesus macaques and humans in Nepal.
India is another country seeing a rise in human–macaque conflict, which occurs particularly in the twin hill states of Uttarakhand and Himachal Pradesh, where it has become a source of contentious political debate and of resentment and polarization among agriculturalists and wildlife conservationists. In India, crop raiding by rhesus macaques has been identified as the main cause of conflict. In urban areas, rhesus macaques damage property and injure people in house raids to access food and provisions; in agricultural areas, they cause financial losses to farmers due to crop depredation. Estimates of the extent of crop damage in Himachal Pradesh range from 10–100% to 40–80% of all crop losses. The financial implications of such damage are estimated at approximately US$200,000 in agriculture and US$150,000 in horticulture. Quantification of crop and financial losses is challenging, and farmers' negative views of macaques may cause them to perceive higher than actual losses. This has led to harsh actions against rhesus macaque communities. Other factors in the perception of rhesus macaques include economic status, farmer economic stability, cultural attitudes towards the species, and the frequency and intensity of wildlife conflicts. All of the above have resulted in changes in conservation and management, with legal culling of rhesus macaques authorized in 2010.
Human-macaque conflict is also occurring in China, specifically in the area of Longyang District, Baoshan City, Yunnan Province. The peak period of conflict occurs from August–October. Factors associated with accessibility and availability of food and shelter appear to be the key drivers of human-macaque conflict, with an overall increase between the years of 2012 and 2021.
One key factor that directly affects the human–macaque relationship is visibility. The visibility of rhesus macaques in agroecosystem-dominated areas strongly shapes conflict between humans and rhesus macaques: the conspicuous presence of macaques in and around farms leads farmers to believe that they cause heavy crop depredation, which in turn fuels negative perceptions and actions against the species. In urban areas, by contrast, visibility can result in a positive relationship, for example around temples and in tourist areas where the macaques' dietary needs are largely met by food provisioning.
Towards the end of March 2018, it was reported that a monkey had entered a house in the village of Talabasta, Odisha, India and kidnapped a baby. The baby was later found dead in a well. Though monkeys are known to attack people, enter homes and damage property, this reported behaviour was unusual.
Population management tools
Crop-raiding is seen as one of the most important behaviours to change in order to reduce conflicts. One example is the use of guards in agricultural settings to chase off intruding monkeys using dogs, slingshots, and firecrackers. This method is non-lethal and can alter the behavioural patterns of crop-raiding monkeys. Another strategy farmers can employ is to plant alternative buffer crops that are unattractive to monkeys in high-conflict zones, such as along the edges of macaque habitats. In urban settings, planting food trees on the city periphery and in country parks aims to discourage macaques from entering nearby residential areas for food.
In areas of tourism, human behaviour change is necessary to prevent conflict. One method is to introduce public education programs and restrict visitors to specific viewing platforms, with the goal of minimizing physical proximity. An important aspect is enforcing no-feeding regulations that allow provisioning only by trained staff at scheduled times. Regulating visitor behaviours that provoke aggressive responses from macaques, including noise regulation, greatly benefits conflict reduction. Replacing food-conditioned behaviours established by human visitors, together with further human education, will greatly aid in restoring co-existence between rhesus macaques and humans.
Another method of population management is translocation. Translocation of problem macaques in urban rhesus communities in India has been employed as a non-lethal solution to human–macaque conflicts. Translocation can be seen as a short-term fix, as macaques may return or other rhesus groups may take their place. Translocation is also hampered by a lack of suitable alternate locations.
Another tool of population management is sterilisation and/or contraceptive programmes. Fertility control appears to be a feasible management tool for reducing human–macaque conflict because it avoids exterminating the animals and avoids the costs and problems associated with translocation. Although sterilisation and fertility control in general have potential benefits, research on and understanding of the long-term effects and effectiveness of such programs remain limited.
In science
The rhesus macaque is well known to science. Due to its relatively easy upkeep in captivity, wide availability, and closeness to humans anatomically and physiologically, it has been used extensively in medical and biological research on human and animal health-related topics. Its name was given to the Rh factor, one of the elements of a person's blood group, by the discoverers of the factor, Karl Landsteiner and Alexander Wiener. The rhesus macaque was also used in the well-known experiments on maternal deprivation carried out in the 1950s by controversial comparative psychologist Harry Harlow. Other medical breakthroughs facilitated by the use of the rhesus macaque include:
development of the rabies, smallpox, and polio vaccines
creation of drugs to manage HIV/AIDS
understanding of the female reproductive cycle and development of the embryo and the propagation of embryonic stem cells.
The U.S. Army, the U.S. Air Force, and NASA launched rhesus macaques into outer space during the 1950s and 1960s, and the Soviet/Russian space program launched them into space as recently as 1997 on the Bion missions. Albert II became the first primate and first mammal in space during a U.S. V-2 rocket suborbital flight on 14 June 1949, and died on impact when a parachute failed.
Another rhesus monkey, Able, was launched on a suborbital spaceflight in 1959, and was among the first living beings (along with Miss Baker, a squirrel monkey on the same mission) to travel in space and return alive.
On 25 October 1999, the rhesus macaque became the first cloned primate with the birth of Tetra. January 2001 saw the birth of ANDi, the first transgenic primate; ANDi carries foreign genes originally from a jellyfish.
Though most studies of the rhesus macaque are from various locations in northern India, some knowledge of the natural behavior of the species comes from studies carried out on a colony established by the Caribbean Primate Research Center of the University of Puerto Rico on the island of Cayo Santiago, off Puerto Rico, where approximately 1800 of the monkeys live. No predators are on the island, and humans are not permitted to land except as part of the research programmes. The colony is provisioned to some extent, but about half of its food comes from natural foraging.
Rhesus macaques, like many macaques, carry the herpes B virus. This virus does not typically harm the monkey, but is very dangerous to humans in the rare event that it jumps species, for example in the 1997 death of Yerkes National Primate Research Center researcher Elizabeth Griffin.
Genome sequencing
Work on the genome of the rhesus macaque was completed in 2007, making the species the second nonhuman primate whose genome was sequenced. Humans and macaques apparently share about 93% of their DNA sequence and shared a common ancestor roughly 25 million years ago. The rhesus macaque has 21 pairs of chromosomes.
Comparison of rhesus macaques, chimpanzees, and humans revealed the structure of ancestral primate genomes, positive selection pressure and lineage-specific expansions, and contractions of gene families. "The goal is to reconstruct the history of every gene in the human genome," said Evan Eichler, University of Washington, Seattle. DNA from different branches of the primate tree will allow us "to trace back the evolutionary changes that occurred at various time points, leading from the common ancestors of the primate clade to Homo sapiens," said Bruce Lahn, University of Chicago.
When only the human and chimpanzee genomes were available for comparison, it was usually impossible to tell whether a difference was the result of the human or the chimpanzee gene changing from the common ancestor. Once the rhesus macaque genome was sequenced, three versions of each gene could be compared: if two of them were the same, that shared sequence was presumed to be the original, ancestral gene.
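As a rough illustration of this outgroup logic, the sketch below shows how a single aligned position might be assigned to a lineage; the function name and the simple three-way comparison rule are illustrative assumptions, not the pipeline used in the actual genome project.

# Illustrative sketch of outgroup-based ancestral inference at one aligned site.
# The rule is a simplified parsimony argument, not the published analysis method.

def infer_lineage_change(human: str, chimp: str, macaque: str) -> str:
    """Return which lineage most parsimoniously changed at this site."""
    if human == chimp == macaque:
        return "no change detected"
    if human == macaque and human != chimp:
        # Human matches the outgroup, so the chimpanzee lineage likely changed.
        return "change on chimpanzee lineage"
    if chimp == macaque and chimp != human:
        # Chimp matches the outgroup, so the human lineage likely changed.
        return "change on human lineage"
    if human == chimp and human != macaque:
        # The shared human/chimp state is taken as ancestral for that pair;
        # the difference is assigned to the macaque lineage (or predates the split).
        return "change on macaque lineage (or ancestral to the human-chimp split)"
    return "ambiguous (all three differ)"

print(infer_lineage_change("A", "G", "A"))  # -> change on chimpanzee lineage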
The chimpanzee and human genomes diverged about 6 million years ago. They have 98% identity and many conserved regulatory regions. Comparing the macaque and human genomes further identified evolutionary pressures and gene functions. As with the chimpanzee, changes occurred at the level of gene rearrangements rather than single mutations, with frequent insertions, deletions, changes in the order and number of genes, and segmental duplications near gaps, centromeres and telomeres. As a result, macaque, chimpanzee, and human chromosomes are mosaics of each other.
Some normal gene sequences in healthy macaques and chimpanzees cause profound disease in humans. For example, the normal sequence of phenylalanine hydroxylase in macaques and chimpanzees is the mutated sequence responsible for phenylketonuria in humans, so humans must have been under evolutionary pressure to adopt a different mechanism. Some gene families are conserved or under evolutionary pressure and expansion in all three primate species, while some are under expansion uniquely in humans, chimpanzees, or macaques. For example, cholesterol pathways are conserved in all three species (and other primate species). In all three species, immune response genes are under positive selection, as are genes for T cell-mediated immunity, signal transduction, cell adhesion, and membrane proteins generally. Genes for keratin, which produces hair shafts, were rapidly evolving in all three species, possibly because of climate change or mate selection. The X chromosome has three times more rearrangements than other chromosomes. The macaque gained 1,358 genes by duplication. Triangulation of human, chimpanzee, and macaque sequences showed expansion of gene families in each species.
The PFKP gene, important in sugar (fructose) metabolism, is expanded in macaques, possibly because of their high-fruit diet. So are genes for olfactory receptors, cytochrome P450 (which degrades toxins), and CCL3L1-CCL4 (associated in humans with HIV susceptibility). Immune genes are expanded in macaques relative to all four great ape species. The macaque genome has 33 major histocompatibility genes, three times as many as humans. This has clinical significance because the macaque is used as an experimental model of the human immune system.
In humans, the preferentially expressed antigen of melanoma (PRAME) gene family is expanded. It is actively expressed in cancers, but normally is testis-specific, possibly involved in spermatogenesis. The PRAME family has 26 members on human chromosome 1. In the macaque, it has eight, and has been very simple and stable for millions of years. The PRAME family arose in translocations in the common mouse-primate ancestor 85 million years ago, and is expanded on mouse chromosome 4.
DNA microarrays are used in macaque research. For example, Michael Katze of University of Washington, Seattle, infected macaques with 1918 and modern influenzas. The DNA microarray showed the macaque genomic response to human influenza on a cellular level in each tissue. Both viruses stimulated innate immune system inflammation, but the 1918 flu stimulated stronger and more persistent inflammation, causing extensive tissue damage, and it did not stimulate the interferon-1 pathway. The DNA response showed a transition from innate to adaptive immune response over seven days.
The full sequence and annotation of the macaque genome is available on the Ensembl genome browser.
Conservation status
The rhesus macaque is listed as Least Concern on the IUCN Red List and is estimated to exist in large numbers; it is tolerant of a broad range of habitats, including urban environments, and has the largest natural range of any non-human primate. The Thai population is locally threatened. In addition to habitat destruction and agricultural encroachment, pet releases of different species into existing troops are diluting the gene pool and putting its genetic integrity at risk. Despite the wealth of information on its ecology and behaviour, little attention has been paid to its demography or population status, which can pose a risk for future rhesus macaque populations. The extension of its distributional limits in southeast India has caused population stress on other species. This range extension has been driven by human intervention, whereby troops are translocated to villages from conflict-ridden urban areas.
| Biology and health sciences | Primates | null |
424015 | https://en.wikipedia.org/wiki/Asbestosis | Asbestosis | Asbestosis is long-term inflammation and scarring of the lungs due to asbestos fibers. Symptoms may include shortness of breath, cough, wheezing, and chest tightness. Complications may include lung cancer, mesothelioma, and pulmonary heart disease.
Asbestosis is caused by breathing in asbestos fibers. It requires a relatively large exposure over a long period of time, which typically occurs only in those who work directly with asbestos. All types of asbestos fibers are associated with an increased risk. It is generally recommended that currently existing and undamaged asbestos be left undisturbed. Diagnosis is based upon a history of exposure together with medical imaging. Asbestosis is a type of interstitial pulmonary fibrosis.
There is no specific treatment. Recommendations may include influenza vaccination, pneumococcal vaccination, oxygen therapy, and stopping smoking. Asbestosis affected about 157,000 people and resulted in 3,600 deaths in 2015. Asbestos use has been banned in a number of countries in an effort to prevent disease.
Statistics from the UK's Health and Safety Executive showed that in 2019, there were 490 asbestosis deaths.
Signs and symptoms
The signs and symptoms of asbestosis typically manifest after a significant amount of time has passed following asbestos exposure, often several decades under current conditions in the US. The primary symptom of asbestosis is generally the slow onset of shortness of breath, especially with physical activity. Clinically advanced cases of asbestosis may lead to respiratory failure. When a stethoscope is used to listen to the lungs of a person with asbestosis, they may hear inspiratory "crackles".
The characteristic pulmonary function finding in asbestosis is a restrictive ventilatory defect. This manifests as a reduction in lung volumes, particularly the vital capacity (VC) and total lung capacity (TLC). The TLC may be reduced through alveolar wall thickening; however, this is not always the case. Large airway function, as reflected by FEV1/FVC, is generally well preserved. In severe cases, the drastic reduction in lung function due to the stiffening of the lungs and reduced TLC may induce right-sided heart failure (cor pulmonale). In addition to a restrictive defect, asbestosis may produce reduction in diffusion capacity and a low amount of oxygen in the blood of the arteries.
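To make the restrictive pattern concrete, the short sketch below classifies a set of spirometry values. The cut-offs used (an FEV1/FVC ratio of about 0.70 and 80% of predicted FVC or TLC) are common conventions assumed here for illustration only, not thresholds taken from this article, and the code is not a diagnostic tool.

# Illustrative classification of a spirometry result as obstructive or
# (suggestive of) restrictive. Thresholds are conventional approximations.

def classify_spirometry(fev1_l, fvc_l, fvc_pct_predicted, tlc_pct_predicted=None):
    ratio = fev1_l / fvc_l
    if ratio < 0.70:
        return "obstructive pattern (reduced FEV1/FVC)"
    if fvc_pct_predicted < 80:
        # A preserved ratio with low volumes suggests restriction, as in asbestosis;
        # a reduced TLC on full lung-volume testing supports that interpretation.
        if tlc_pct_predicted is not None and tlc_pct_predicted < 80:
            return "restrictive defect (low FVC and TLC, preserved FEV1/FVC)"
        return "possible restriction (low FVC, preserved FEV1/FVC); confirm with TLC"
    return "within normal limits by these simple criteria"

# Example: reduced volumes but a preserved FEV1/FVC ratio, as described for asbestosis.
print(classify_spirometry(fev1_l=2.0, fvc_l=2.4, fvc_pct_predicted=65, tlc_pct_predicted=70))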
Cause
The cause of asbestosis is the inhalation of microscopic asbestos mineral fibers suspended in the air. In the 1930s, E. R. A. Merewether found that greater exposure resulted in greater risk.
Risk factors
Those who worked in the production, milling, manufacturing, installation, or removal of asbestos products before the late 1970s are at an increased risk of exposure to asbestos. This includes people who worked in these jobs in the United States and Canada. For example:
Asbestos miners
Aeronautical and car mechanics
Boiler operators
Construction workers
Electricians
Railway workers
Workers who remove asbestos insulation from around a steam vessel in an old building
Construction workers who inhale asbestos from contaminated building materials such as paint, spackling, roof shingles, masonry compounds, and drywall may get asbestosis.
The amount and length of an individual's exposure to asbestos are the primary factors that determine the level of risk. The longer one is exposed to the substance, the higher their risk of developing lung damage.
Families of exposed workers can be affected because asbestos fibers from clothing and hair can end up in the home. People who live near mines can also be exposed to airborne asbestos fibers.
Pathogenesis
Asbestosis is the scarring of lung tissue (beginning around terminal bronchioles and alveolar ducts and extending into the alveolar walls) resulting from the inhalation of asbestos fibers. There are two types of fibers: amphibole (thin and straight) and serpentine (curly). All forms of asbestos fibers are responsible for human disease as they are able to penetrate deeply into the lungs. When such fibers reach the alveoli (air sacs) in the lung, where oxygen is transferred into the blood, the foreign bodies (asbestos fibers) cause the activation of the lungs' local immune system and provoke an inflammatory reaction dominated by lung macrophages that respond to chemotactic factors activated by the fibers. This inflammatory reaction can be described as chronic rather than acute, with a slow ongoing progression of the immune system attempting to eliminate the foreign fibers. Macrophages phagocytose (ingest) the fibers and stimulate fibroblasts to deposit connective tissue.
Due to the asbestos fibers' natural resistance to digestion, some macrophages release inflammatory chemical signals, and other macrophages are killed, releasing reactive oxygen species and activating transcription factors, like NF-kB, which amplify the expression of pro-inflammatory cytokines. These inflammatory chemical signals attract further lung macrophages and fibroblastic cells that synthesize fibrous scar tissue, which eventually becomes diffuse and can progress in heavily exposed individuals. This tissue can be seen microscopically soon after exposure in animal models. Some asbestos fibers become coated with an iron-containing proteinaceous material (ferruginous body) in cases of heavy exposure where about 10% of the fibers become coated. Most inhaled asbestos fibers remain uncoated. About 20% of the inhaled fibers are transported by cytoskeletal components of the alveolar epithelium to the interstitial compartment of the lung where they interact with macrophages and mesenchymal cells. The cytokines, transforming growth factor beta and tumor necrosis factor alpha, appear to play major roles in the development of scarring inasmuch as the process can be blocked in animal models by preventing the expression of the growth factors. The result is fibrosis in the interstitial space, thus asbestosis.
This fibrotic scarring causes alveolar walls to thicken, which reduces elasticity and gas diffusion, reducing oxygen transfer to the blood as well as the removal of carbon dioxide. This can result in shortness of breath, a common symptom exhibited by individuals with asbestosis. Those with asbestosis may be more vulnerable to tumor growth (mesothelioma), because asbestos decreases the cytotoxicity of natural killer cells and impairs the functioning of T helper cells, which detect abnormal cell growth.
Diagnosis
According to the American Thoracic Society (ATS), the general diagnostic criteria for asbestosis are:
Evidence of structural pathology consistent with asbestosis, as documented by imaging or histology
Evidence of causation by asbestos as documented by the occupational and environmental history, markers of exposure (usually pleural plaques), recovery of asbestos bodies, or other means
Exclusion of alternative plausible causes for the findings
The abnormal chest x-ray and its interpretation remain the most important factors in establishing the presence of pulmonary fibrosis. The findings usually appear as small, irregular parenchymal opacities, primarily in the lung bases. Using the ILO Classification system, "s", "t", and/or "u" opacities predominate. CT or high-resolution CT (HRCT) are more sensitive than plain radiography at detecting pulmonary fibrosis (as well as any underlying pleural changes). More than 50% of people affected with asbestosis develop plaques in the parietal pleura, the membrane lining the inside of the chest wall. Once apparent, the radiographic findings in asbestosis may slowly progress or remain static, even in the absence of further asbestos exposure. Rapid progression suggests an alternative diagnosis.
Asbestosis resembles many other diffuse interstitial lung diseases, including other pneumoconiosis. The differential diagnosis includes idiopathic pulmonary fibrosis (IPF), hypersensitivity pneumonitis, sarcoidosis, and others. The presence of pleural plaques may provide supportive evidence of causation by asbestos. Although lung biopsy is usually not necessary, the presence of asbestos bodies in association with pulmonary fibrosis establishes the diagnosis. Conversely, interstitial pulmonary fibrosis in the absence of asbestos bodies is most likely not asbestosis. Asbestos bodies in the absence of fibrosis indicate exposure, not disease.
Treatment
There is no cure available for asbestosis. Oxygen therapy at home is often necessary to relieve the shortness of breath and correct underlying low blood oxygen levels. Supportive treatment of symptoms includes respiratory physiotherapy to remove secretions from the lungs by postural drainage, chest percussion, and vibration. Nebulized medications may be prescribed in order to loosen secretions or treat underlying chronic obstructive pulmonary disease. Immunization against pneumococcal pneumonia and annual influenza vaccination are administered due to increased susceptibility to these diseases. Those with asbestosis are at increased risk for certain cancers. If the person smokes, quitting the habit reduces further damage. Periodic pulmonary function tests, chest x-rays, and clinical evaluations, including cancer screening/evaluations, are given to detect additional hazards.
Society and culture
Legal issues
On 21 December 1906 H. Montague Murray, M.D., F.R.C.P., testified before a British committee concerning a patient who died in April 1900. Murray indicated that fibrosis of the lungs caused by asbestos dust was a plausible cause of the patient's death.
The death of English textile worker Nellie Kershaw in 1924 from pulmonary asbestosis was the first case to be described in medical literature, and the first published account of disease definitely attributed to occupational asbestos exposure. However, her former employers (Turner Brothers Asbestos) denied that asbestosis even existed because the medical condition was not officially recognised at the time. As a result, they accepted no liability for her injuries and paid no compensation, either to Kershaw during her final illness or to her family after her death. Even so, the findings of the inquest into her death were highly influential insofar as they led to a parliamentary enquiry by the British Parliament. The enquiry formally acknowledged the existence of asbestosis, recognised that it was hazardous to health and concluded that it was irrefutably linked to the prolonged inhalation of asbestos dust. Having established the existence of asbestosis on a medical and judicial basis, the report resulted in the first Asbestos Industry Regulations being published in 1931, which came into effect on 1 March 1932.
The first lawsuits against asbestos manufacturers occurred in 1929. Since then, many lawsuits have been filed against asbestos manufacturers and employers, for neglecting to implement safety measures after the link between asbestos, asbestosis and mesothelioma became known (some reports seem to place this as early as 1898 in modern times). The liability resulting from the sheer number of lawsuits and people affected has reached billions of U.S. dollars. The amounts and method of allocating compensation have been the source of many court cases, and government attempts at resolution of existing and future cases.
To date, about 100 companies have declared bankruptcy at least partially due to asbestos-related liability. In accordance with Chapter 11 and § 524(g) of the U.S. federal bankruptcy code, a company may transfer its liabilities and certain assets to an asbestos personal injury trust, which is then responsible for compensating present and future claimants. Since 1988, 60 trusts have been established to pay claims with about $37 billion in total assets. From 1988 through 2010, analysis from the United States Government Accountability Office indicates that trusts have paid about 3.3 million claims valued at about $17.5 billion.
Notable people
This is a partial list of notable people who have died from lung fibrosis associated with asbestos:
Bernie Banton, social justice advocate
Paul Gleason, Breakfast Club actor
Nellie Kershaw, first person diagnosed with asbestos-related disease, 1924
John MacDougall, politician
Steve McQueen, actor
Theodore Sturgeon, writer
| Biology and health sciences | Types | Health |
424253 | https://en.wikipedia.org/wiki/GarageBand | GarageBand | GarageBand is a software application by Apple for macOS, iPadOS, and iOS devices that allows users to create music or podcasts. It is a lighter, amateur-oriented offshoot of Logic Pro. GarageBand was originally released for macOS in 2004 and brought to iOS in 2011. The app's music and podcast creation system enables users to create multiple tracks with software synthesizer presets (to be played on a MIDI keyboard and/or sequenced on a piano roll), pre-made and user-created loops, an array of various effects, and voice recordings.
History
GarageBand was developed by Apple under the direction of Dr. Gerhard Lengeling. Dr. Lengeling was formerly from the German company Emagic, makers of Logic Audio (later renamed Logic Pro). Apple acquired Emagic in July 2002. It developed GarageBand as a lighter version of Logic Pro (with the intermediate application Logic Express offered for a brief period), and each version of GarageBand resembles the current version of Logic aesthetically in addition to featuring its audio engine.
Steve Jobs announced the application in his keynote speech at the Macworld Conference & Expo in San Francisco on January 6, 2004. Musician John Mayer assisted with its demonstration. It is part of the iLife '04 package.
Apple announced GarageBand 2 at the 2005 Macworld Conference & Expo on January 11, 2005. It shipped, as announced, around January 22, 2005. Notable new features included the ability to view and edit music in musical notation, to record up to eight tracks at once, and to fix the timing and pitch of recordings. Apple added automation of track pan position and the master pitch, transposition of both audio and MIDI, and the ability to import MIDI files. It is part of iLife '05.
GarageBand 3, announced at 2006's Macworld Conference & Expo, includes a 'podcast studio', including the ability to use more than 200 effects and jingles, and integration with iChat for remote interviews. It is part of iLife '06.
GarageBand 4, also known as GarageBand '08, is part of iLife '08. It incorporates the ability to record sections of a song separately, such as bridges and chorus lines. Additionally, it provides support for the automation of tempos and instruments, the creation and export of iPhone ringtones, and a "Magic GarageBand" feature which includes a virtual jam session with a complete 3D view of the instruments.
GarageBand 5 is part of the iLife '09 package. It includes music instruction and allows the user to buy instructional videos by contemporary artists. It also contains new features for electric guitar players, including a dedicated 3D Electric Guitar Track containing a virtual stomp box pedalboard, and virtual amplifiers with spring reverb and tremolo. GarageBand 5 also includes a redesigned user interface as well as Project Templates.
GarageBand 6, also known as GarageBand '11, is part of the iLife '11 package, which Apple released on October 20, 2010. This version brings new features such as Flex Time, a tool to adjust the rhythm of a recording. It also includes the ability to match the tempo of one track with another instantly, additional guitar amps and stomp boxes, 22 new lessons for guitar and piano, and "How Did I Play?", a tool to measure the accuracy and progress of a piano or guitar performance in a lesson.
Apple released GarageBand 10 along with OS X 10.9 Mavericks in October 2013. This version has lost Magic GarageBand and the podcast functionality.
Apple updated GarageBand 10 for Mac on March 20, 2014. Version 10.0.2 adds the ability to export tracks in MP3 format as well as a new drummer module, but removed support for podcasting; users with podcast files created in GarageBand 6 can continue to edit them using the older version.
GarageBand was updated to version 10.0.3 on October 16, 2014. This version adds a dedicated Bass Amp Designer, global track effects and dynamic track resizing.
Apple released GarageBand 10.2 on June 5, 2017.
Features
Audio recording
GarageBand is a digital audio workstation (DAW) and music sequencer that can record and play back multiple tracks of audio. Built-in audio filters that use the AU (audio unit) standard allow the user to enhance the audio track with various effects, including reverb, echo, and distortion, among others. GarageBand also offers the ability to record at both 16-bit and 24-bit audio resolution, but at a fixed sample rate of 44.1 kHz. An included tuning system helps with pitch correction and can effectively imitate the Auto-Tune effect when tuned to the maximum level. It also has a large array of preset effects to choose from, with an option to create one's own effects.
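As a back-of-the-envelope illustration of what those settings mean for storage (the figures below are simple arithmetic, not values quoted by Apple), uncompressed stereo audio at 44.1 kHz and 24-bit depth works out to roughly 265 kB per second:

# Rough data-rate arithmetic for uncompressed PCM audio at GarageBand's
# fixed 44.1 kHz sample rate; bit depth and channel count are parameters.

def pcm_bytes_per_second(sample_rate_hz=44_100, bit_depth=24, channels=2):
    return sample_rate_hz * (bit_depth // 8) * channels

for depth in (16, 24):
    bps = pcm_bytes_per_second(bit_depth=depth)
    print(f"{depth}-bit stereo: {bps:,} bytes/s (~{bps * 60 / 1_000_000:.1f} MB per minute)")
# 16-bit stereo: 176,400 bytes/s (~10.6 MB per minute)
# 24-bit stereo: 264,600 bytes/s (~15.9 MB per minute)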
Virtual software instruments
GarageBand includes a large selection of realistic, sampled instruments and software-modeled synthesizers. These can be used to create original compositions or play music live through the use of a USB MIDI keyboard connected to the computer. An on-screen virtual keyboard is also available, as is the option to use a standard QWERTY keyboard with the "musical typing" feature. The synthesizers are divided into two groups: [virtual] analog and digital. Each synthesizer has a wide variety of adjustable parameters, including richness, glide, cutoff, and standard attack, decay, sustain, and release; these allow for a wide array of sound creation. The five synth thumbnails depict the ARP 2600, the Minimoog, the Waldorf Wave, the Nord Lead 1 and the Yamaha DX7.
Guitar features
In addition to the standard tracks, GarageBand allows for guitar-specific tracks that can use a variety of simulated amplifiers, stomp boxes, and effects processors. These imitate popular hardware from companies including Marshall Amplification, Orange Music Electronic Company, and Fender Musical Instruments Corporation. Up to five simulated effects can be layered on top of the virtual amplifiers, which feature adjustable parameters including tone, reverb, and volume. Guitars can be connected to Macs using the built-in input (which requires hardware that can produce a standard stereo signal using a 3.5 mm output) or a USB interface.
MIDI editing
GarageBand can import MIDI files and offers piano roll or notation-style editing and playback. By complying with the MIDI standard, a user can edit many different aspects of a recorded note, including pitch, velocity, and duration. Pitch is settable to 1/128 of a semitone, on a scale of 0–127 (sometimes described on a scale of 1–128 for clarity). Velocity, which determines amplitude (volume), can be set and adjusted on a scale of 0–127. Note duration can be adjusted manually via the piano roll or in the score view. Note rhythms can be played via the software instruments or created in the piano roll environment; rhythm correction is also included to lock notes to any time-signature subdivision. GarageBand also offers global editing of MIDI timing with Enhanced Timing, also known as quantizing. While offering comprehensive control over MIDI files, GarageBand does not include several features of professional-level DAWs, such as a sequencer for drum tracks separate from the normal piano roll; however, many of these shortcomings have been addressed with each successive release of GarageBand.
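Quantizing of the kind described above amounts to snapping each note's start time to the nearest grid subdivision. The sketch below shows the idea in a generic form; the tick resolution and grid size are illustrative assumptions, and this is not GarageBand's actual implementation.

# Minimal illustration of MIDI quantization: snap note-on times to a grid.
# 480 ticks per quarter note is a common (assumed) resolution; a 1/16-note
# grid is therefore 120 ticks wide.

TICKS_PER_QUARTER = 480

def quantize(note_start_ticks, grid_fraction=16):
    grid = TICKS_PER_QUARTER * 4 // grid_fraction  # e.g. 120 ticks for 1/16 notes
    return round(note_start_ticks / grid) * grid

recorded = [5, 118, 250, 361, 487]          # slightly off-grid performance
print([quantize(t) for t in recorded])      # -> [0, 120, 240, 360, 480]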
Also of note, MIDI sequences edited or created in GarageBand cannot be exported to other DAWs or programs without first being converted to audio. A MIDI file can be extracted from a loop file created from a region, but this is not a general MIDI export facility and requires manual steps and an open-source program.
Music lessons
A new feature included with GarageBand '09 and later is the ability to download pre-recorded music lessons from GarageBand's Lesson Store for guitar and piano. There are two types of lessons available in the Lesson Store: Basic Lessons, which are a free download, and Artist Lessons, which a user must purchase. The first Basic Lessons for both guitar and piano are included with GarageBand. In GarageBand 10, many sounds (aka patches, which Apple refers to as 'audio units') that are listed within the sound library are dimmed and unusable until the user pays an additional fee that allows the utilization of those sounds, bundled with the guitar and piano lessons. Attempting to click on and select the dimmed audio units to apply to the track causes promotional prompts to appear, requiring the user to log on with their Apple ID and furnish credit card information before knowing the price of the bundle.
In both types of lessons, a music teacher presents the lesson, which is in a special format offering high-quality video and audio instructions. The lessons include a virtual guitar or piano, which demonstrates finger position and a musical notation area to show the correct musical notations. The music examples used in these lessons feature popular music.
In an Artist Lesson the music teacher is the actual musician/songwriter who composed the song being taught in the lesson. The artists featured are:
Sting (The Police) — "Roxanne", "Message in a Bottle", "Fragile"
Sarah McLachlan — "Angel"
Patrick Stump of Fall Out Boy — "I Don't Care", "Sugar, We're Goin' Down"
Norah Jones — "Thinking About You"
Colbie Caillat — "Bubbly"
Sara Bareilles — "Love Song"
John Fogerty (Creedence Clearwater Revival) — "Proud Mary", "Fortunate Son", "Centerfield"
Ryan Tedder (OneRepublic) — "Apologize"
Ben Folds — "Brick", "Zak and Sara"
John Legend — "Ordinary People"
Alex Lifeson (Rush) — "Tom Sawyer", "Limelight", "Working Man", "The Spirit of Radio".
No new Artist Lessons were released in 2010, and Apple has not announced plans to release additional entries.
In June 2018, the GarageBand 10.3 update made Artist Lessons free.
Additional audio loops
GarageBand includes an extensive selection of pre-made audio loops to choose from, with options to import custom sound loops and to purchase an additional loop pack via the App Store. All loops have edit and effects options.
The Additional Audio Loops are as follows
Jam Packs
Jam Packs are Apple's official add-ons for GarageBand. Each Jam Pack contains loops and software instruments grouped into certain genres and styles.
The Jam Packs are as follows:
GarageBand Jam Pack: Remix Tools
GarageBand Jam Pack: Rhythm Section
GarageBand Jam Pack: Symphony Orchestra
GarageBand Jam Pack: World Music
GarageBand Jam Pack: Voices
There was also another GarageBand Jam Pack, initially known just as GarageBand Jam Pack, later GarageBand Jam Pack 1, which Apple discontinued in January 2006. Beginning with the release of the Remix Tools and Rhythm Section Jam Packs, each Jam Pack has been designated with a number. The release of GarageBand Jam Pack: World Music also saw a redesign in packaging.
MainStage 2
MainStage 2 by Apple also includes 40 built-in instruments – including synths, vintage keyboards, and a drum machine – to use in GarageBand. It also features an interface for live performances and includes a large collection of plug-ins and sounds.
Third-party instrument and Apple Loop packages
In addition to Apple, many other companies today offer commercial or shareware virtual software instruments designed especially for GarageBand, and collections of Apple Loops intended for GarageBand users.
GarageBand can also use any third-party software synthesizer that adheres to the Core Audio (Audio Units) standard. However, there are limitations, including that Audio Unit instruments which can respond to multiple MIDI channels or ports can be triggered only on the first channel of the first port. This means that multi-timbral instruments that contain multiple channels and respond to many MIDI channels, such as Native Instruments Kontakt and MOTU MachFive, are not ideally suited for use in GarageBand.
Third-party vendors also offer extra loops for use in GarageBand. Users can also record custom loops through a microphone, via a software instrument, or by using an audio interface to connect physically a guitar or other hardware instruments to a Mac or iOS device.
Sample multitrack source files
In 2005, Trent Reznor from the band Nine Inch Nails released the source multitrack GarageBand files for the song "The Hand That Feeds" to allow the public to experiment with his music, and permitted prospective GarageBand users to remix the song. He also gave permission for anyone to share their personalized remix with the world. Since then, Nine Inch Nails has released several more GarageBand source files, and several other artists have also released their GarageBand files that the public could use to experiment.
New Zealand band Evermore also released the source multi-track files for GarageBand for their song "Never Let You Go".
Ben Folds released Stems & Seeds, a special version of his 2008 album Way to Normal. Stems and Seeds contained a remastered version of Way to Normal, and a separate disc containing GarageBand files for each track from the album to allow fans to remix the songs.
Limitations
A lack of MIDI-out capability limits the use of external MIDI instruments. There is also only limited support for messages sent from knobs on MIDI keyboards, as only real-time pitch bend, modulation, sustain, and foot control are recognizable. However, since GarageBand '08, other parameters affected by MIDI knobs can be automated later, per-track. GarageBand has no functions for changing time signature mid-song though the software does now allow a tempo track to automate tempo changes.
Other than pitch bend, GarageBand is limited to the pitches and intervals of standard 12-tone equal temperament, so it does not natively support xenharmonic music. Logic Pro supports many different tunings; GarageBand does not. However, audio units that support micro tuning (via tuning files or some other method) can be employed in GarageBand to produce alternative pitches.
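For context, 12-tone equal temperament fixes each pitch by the formula f(n) = 440 × 2^((n − 69)/12), where n is the MIDI note number and note 69 is A4 at 440 Hz; pitches off that grid (beyond pitch bend) are what the limitation above rules out. A tiny sketch follows, with the reference pitch treated as an assumed convention.

# 12-tone equal temperament: every semitone is a factor of 2**(1/12).
# MIDI note 69 = A4 = 440 Hz is the usual reference (an assumption here).

def equal_temperament_hz(midi_note, a4_hz=440.0):
    return a4_hz * 2 ** ((midi_note - 69) / 12)

for note, name in [(60, "C4"), (69, "A4"), (72, "C5")]:
    print(f"{name}: {equal_temperament_hz(note):.2f} Hz")
# C4: 261.63 Hz, A4: 440.00 Hz, C5: 523.25 Hz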
Before GarageBand 10, there was no general export option; the only choices were to save the project or export to iTunes. There is no built-in MIDI export feature, although regions can be manually exported as loops and converted to MIDI files.
GarageBand for iOS
On March 2, 2011, Apple announced a version of GarageBand for the iPad. It has many features similar to the macOS version. Music can be created using the on-screen instruments, which include keyboards, drums, a sampler, and various "smart instruments". It also acts as a multitrack recording studio with stomp box effects and guitar amps. Songs can be emailed or sent to an iTunes Library. Additionally, projects can be imported to GarageBand for macOS, where they can be edited further. This feature also allows instruments from the iOS platform to be saved to the software instrument library on the Mac. However, projects created in the macOS version cannot be opened in the iOS version. The app is compatible with iPhone 3GS or higher, the third generation iPod Touch or higher, and all versions of the iPad, including the iPad Mini. The app, with all instruments included, was available for $6.99 from the Apple App Store. In 2017, it was made free.
Instruments
GarageBand comes with a wide range of instruments. All non-drum instruments (with the exception of the koto) come with the functionality to limit the note selection to different musical scales.
Keyboard
The keyboard is set up like a standard keyboard, and features several keyboard instruments, including grand piano, electric piano, various organs, clavinet, synth leads, synth pads, and bass synths. It also has many different non-keyboard instrument sounds including versions of many of the other instruments, for example users can use the keyboard to play guitar, bass and string sounds. In version 2.2, the Alchemy Synth synth engine from Logic Pro was also added to the keyboard. The keyboard has several additional features including a pitch bend, arpeggiator and "autoplay" function (which will play one of four rhythms for each instrument). Many of the instruments have adjustable parameters such as Attack, Cutoff and Resonance. Prior to version 2.2 there was also a separate "Smart Keyboard" instrument which was arranged like the other smart instruments, allowing the user to play chords on a limited selection of keyboard instruments (piano, electric piano, organ, clavinet, and four adjustable synthesizers). This functionality has since been integrated into the main keyboard instrument in version 2.2 with the new "Chord Strips" that allow the user to access the layout from the Smart Keyboard using any keyboard instrument.
Drums
There are three different kinds of drum instruments in GarageBand. The touch drums instrument includes by default seven acoustic drum kits with a realistic drum kit layout, and twelve electronic drum kits (including Hip Hop drums, House drums, and drum kits with Roland TR-808 and 909 samples). The electronic kits are set up like drum machines with customizable sounds that can be saved as separate drum kits. The Chinese Kit was added in a later version and includes genuine Chinese sounds such as the gong. The "Smart Drums" instrument allows the arranging of drum sounds on a grid by complexity and volume. It contains a selection of six drums (Classic Studio Kit, Live Rock Kit, Vintage Kit, Classic Drum Machine, Hip Hop Drum Machine, and House Drum Machine). The "Beat Sequencer" involves the placement of steps to form a beat pattern; there are many pre-set patterns to choose from, and users can customise aspects of the pattern such as note velocity and probability, as illustrated in the sketch below.
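A step sequencer of this kind can be thought of as a grid in which each step stores whether it fires, how hard it plays (velocity), and how likely it is to trigger on any given pass. The following sketch is a generic illustration of that data model under assumed values, not GarageBand's implementation.

# Generic step-sequencer pattern: 16 steps, each with a velocity (0-127)
# and a probability (0.0-1.0) of actually triggering on a given pass.

import random

def make_pattern(active_steps, velocity=100, probability=1.0, length=16):
    return [
        {"on": i in active_steps, "velocity": velocity, "probability": probability}
        for i in range(length)
    ]

def play_pass(pattern):
    """Return the step indices that trigger on this pass."""
    return [
        i for i, step in enumerate(pattern)
        if step["on"] and random.random() < step["probability"]
    ]

kick = make_pattern({0, 4, 8, 12})                                  # four-on-the-floor
ghost_snare = make_pattern({6, 14}, velocity=60, probability=0.5)   # plays half the time
print(play_pass(kick), play_pass(ghost_snare))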
Smart Guitar
GarageBand includes five guitars: an acoustic guitar, three electric guitars, and a distortion guitar. Each guitar (except for the acoustic one) has two optional sound boxes. The instrument is set up with two different modes. The first is set up like the Chord Strips, where multiple chords are playable. Each note in a chord can also be played separately, or muted by holding the left side of the string. This mode includes an autoplay feature which will play one of four different rhythms depending on which guitar is chosen.
Smart Bass
The bass instrument is set up like the guitar, where four strings can play various notes. However, the bass cannot play chords. Included are three electric basses, an acoustic orchestral bass, and four customizable synth basses. Like the smart keyboard and smart guitars, there is an "autoplay" feature.
Smart Strings
Smart Strings were added in version 1.2 and consist of a string section made of 1st and 2nd violins, violas, cellos, and bass. They can play notes legato, staccato, or pizzicato depending on whether the user swipes up and down, flicks, or taps the screen, respectively. The orchestra is customizable, including four different string styles (all with a different "autoplay" feature) and the option to choose which instruments to play. For example, one can play a chord made up of all the available instruments, or simply play a violin note.
World
World instruments were added in version 2.3 which allow the user to play traditional Chinese and Japanese instruments. The instruments available are the pipa, erhu, koto and guzheng.
Drummer
The Drummer was added in version 2.1 and is a virtual player who will create realistic drum grooves. There are numerous drummers to choose from in various genres. Each drummer has a unique kit, which can be an acoustic, electronic or percussion drum kit. Users can also customise the playing style of each drummer, including choosing from various preset rhythms. They can also adjust which parts of the drum kit the drummer will play, the amount of swing and if the drummer should follow the rhythm of another track.
Sampler
In the sampler, the user can import or record their own sound and then play it on the keyboard (it has the same interface as the keyboard instrument). After the sound has been recorded or imported, it can be modified with a variety of tools within the sampler in order to trim or reverse the sample, loop a section of it, or adjust its tuning and volume envelope. The app comes with numerous sound effects, such as a dog bark, party horn and cheering, already available to use in the sampler.
Audio recorder
The audio recorder is a standard recorder for recording and editing audio. Audio can be recorded through the device's internal microphone, a headphone microphone, or an external microphone connected to the device via an audio interface. After the sound has been recorded, many audio effects can be applied. The recorder comes with various presets designed for recording different sounds such as guitar, piano or lead vocals, all with adjustable parameters.
Amp
The amp is designed to be played by plugging a guitar or bass into the device and recording, but can also work with sounds from the audio recorder, included Apple Loops, and imported music files. Within it are several customizable amplifiers and stomp boxes, allowing for a broad range of different sounds.
External apps
Third-party music apps can be used inside GarageBand in one of two ways. The Audio Unit Extensions feature allows third-party instruments and effect plug-ins to be played and used directly inside GarageBand as if they were native to the app. The Inter-App Audio functionality lets users record audio from another app into GarageBand.
Sound Library
The Sound Library was added in November 2017 with the 2.3.1 update and lets the user download additional free instruments, drummers and loops released as Sound Packs that are added to the app over time.
Updates
On November 1, 2011, Apple introduced GarageBand for iOS 1.1, adding support for the iPhone and iPod Touch, among other features. These included the ability to create custom and time signatures, and exporting in AAC or AIFF format.
On March 7, 2012, Apple updated GarageBand to 1.2, adding support for the third-generation iPad. It introduced the new Smart Strings instrument, a string orchestra of 1st and 2nd violins, violas, celli, and bass, capable of playing notes legato, staccato, and pizzicato. Additionally, it added synthesizers to the Smart Keyboard and Smart Bass instruments. It also added a note editor that allows users to fine-tune note placement and length and the ability to upload songs to Facebook, YouTube and SoundCloud, as well as the ability to upload projects to iCloud. It also included Jam Session, a feature that enables up to four iPhones, iPod Touches, and/or iPads with GarageBand installed to play simultaneously.
On May 1, 2012, GarageBand was updated to 1.2.1, providing minor bug fixes and stability improvements.
Alongside the new iOS 6, Apple updated GarageBand to 1.3 on September 19, 2012. The update added the ability to import music from one's music library, ringtone creation, the ability to use the app in the background, and minor bug fixes.
GarageBand was updated to 1.4 on March 20, 2013. The update added support for Audiobus, the ability to remove grid snapping, and minor bug fixes.
GarageBand received an overhaul of design coinciding with the reveal of the iPad Air on October 22, 2013. GarageBand 2.0 features a new design to match iOS 7, an extended number of tracks per song, and new functions in the Sampler instrument.
In January 2016, version 2.1 was released in which GarageBand received a new Live Loops layout that lets users create and perform music by triggering loops and adding effects in real-time. Other features in the update included the ability to add a virtual Drummer, increased maximum number of tracks up to 32, the ability to edit volume automation curves and the addition of basic EQ and compressor plug-ins. Amplifiers for bass guitars were also added. Third-party instrument apps could now be used inside GarageBand via Audio Unit Extensions.
In January 2017, version 2.2 was released with a number of new features including the Alchemy Synth previously only available in Logic Pro. Audio Unit Extension compatibility was updated to also allow third-party effects apps to be used.
A new Sound Library was added in November 2017 which allows users to download additional free instruments and loops released as part of Sound Packs that are added to the app over time. A new Beat Sequencer for creating drum beats was also added in this update.
MIDI support was added in update 2.3.6 in September 2018.
In July 2021, GarageBand released multiple new Sound Packs with loops and instruments from many producers such as Boys Noize, as well as two Remix Sessions from Dua Lipa and Lady Gaga that allow users to remix their songs.
In August 2022, GarageBand released two Remix Sessions from Seventeen and Katy Perry.
In December 2022, GarageBand released a Remix Session from Zedd.
Differences from macOS version
No Music Lessons.
Only three time signatures are available.
No master track.
Automation is only available for volume.
Live Loops layout.
Audio Unit Extensions (via App Store).
A Sound Library providing free, downloadable content such as additional keyboards, drum sets, and more.
Limited exporting functions (As of 2.3.3, the option to export recorded projects as songs to YouTube has been removed).
Availability
Prior to the launch of Apple's Mac App Store, GarageBand was only available as a part of iLife, a suite of applications (also including iPhoto, iMovie, iDVD, and iWeb) intended to simplify the creation and organization of digital content, or available on a new Mac. On January 6, 2011, GarageBand was made available independently on the Mac App Store in addition to iPhoto and iMovie. Since then GarageBand's user base has increased drastically.
Notable users
GarageBand has been embraced by many musicians of varying levels of fame in order to record and produce music. Steve Lacy used the GarageBand app on his cracked 2012 iPhone to produce music for his solo projects, the Internet, and J. Cole. That phone is currently on display in the Smithsonian. Nine Inch Nails made their song "The Hand That Feeds" in the software, and released a link to the multitrack GarageBand file on the band's website, allowing other GarageBand users to remix the song. Musicians that have collaborated with Apple to promote GarageBand include Katy Perry, John Mayer, Dua Lipa, Billie Eilish, and Lady Gaga. Charlotte Day Wilson, Doja Cat, Ellie Rowsell (of Wolf Alice), Sloan Struble (of Dayglow), Meghan Trainor, Ethel Cain, and Awkwafina all began learning to produce and create music using GarageBand. GarageBand was also used by artists such as T-Pain; Grimes for her album Visions; St. Vincent for multiple projects; Danielle Haim for Haim songs, with the song "Summer Girl" starting out as a GarageBand demo; and Jesse Rutherford for his sophomore solo album, GARAGEB&, named after the application, as he produced most of the tracks in GarageBand. In addition, Rihanna's hit "Umbrella" was born from a stock GarageBand drum track. Fiona Apple largely recorded her album Fetch the Bolt Cutters at home with GarageBand. As well, the music for the viral internet video Charlie the Unicorn was recorded in GarageBand.
Supported music file formats
This app supports many music formats, including AIFF, WAV, and MIDI. The app can export songs to AAC, MP3, MP4 or AIFF format.
Support for 8-bit audio files was dropped in version 10.
| Technology | Multimedia_2 | null |
424302 | https://en.wikipedia.org/wiki/Itch | Itch | An itch (also known as pruritus) is a sensation that causes a strong desire or reflex to scratch. Itches have resisted many attempts to be classified as any one type of sensory experience. Itches have many similarities to pain, and while both are unpleasant sensory experiences, their behavioral response patterns are different. Pain creates a withdrawal reflex, whereas an itch leads to a scratch reflex.
Unmyelinated nerve fibers for itches and pain both originate in the skin. Information for them is conveyed centrally in two distinct systems that both use the same nerve bundle and spinothalamic tract.
Classification
Most commonly, an itch is felt in one place. If it is felt all over the body, then it is called generalized itch or generalized pruritus. Generalized itch is infrequently a symptom of a serious underlying condition, such as cholestatic liver disease.
If the sensation of itching persists for six weeks or longer, then it is called chronic itch or chronic pruritus. Chronic idiopathic pruritus, or chronic pruritus of unknown origin, is a form of itch that persists for longer than six weeks and for which no clear cause can be identified.
Signs and symptoms
Pain and itch have very different behavioral response patterns. Pain elicits a withdrawal reflex, which leads to retraction and therefore a reaction trying to protect an endangered part of the body. Itch, in contrast, creates a scratch reflex, which draws one to the affected skin site. Itch produces the sensation of a foreign object underneath or upon the skin, together with the urge to remove it. For example, responding to a local itch sensation is an effective way to remove insects from one's skin.
Scratching has traditionally been regarded as a way to relieve oneself by reducing the annoying itch sensation. However, there are hedonic aspects to scratching, as some people find noxious scratching highly pleasurable. This can be problematic for chronic itch patients, such as those with atopic dermatitis, who may scratch affected spots until they no longer produce a pleasant or painful sensation, rather than stopping when the itch sensation disappears. It has been hypothesized that the motivational aspects of scratching involve the frontal brain areas of reward and decision making, and that these aspects might therefore contribute to the compulsive nature of itch and scratching.
Contagious itch
Events of "contagious itch" are very common occurrences. Even a discussion on the topic of itch can give one the desire to scratch. Itch is likely to be more than a localized phenomenon in the place one scratches. Results from a study showed that itching and scratching were induced purely by visual stimuli in a public lecture on itching. The sensation of pain can also be induced in a similar fashion, often by listening to a description of an injury, or viewing an injury itself.
There is little detailed data on central activation for contagious itching, but it is hypothesized that a human mirror neuron system exists in which one imitates certain motor actions when they view others performing the same action. A similar hypothesis has been used to explain the cause of contagious yawning.
Itch inhibition due to pain
Studies done in the last decade have shown that itch can be inhibited by many other forms of painful stimuli, such as noxious heat, physical rubbing/scratching, noxious chemicals, and electric shock.
Causes
Infectious
Body louse, found in substandard living conditions
Cutaneous larva migrans, a skin disease caused by hookworm infection
Head lice, if limited to the neck and scalp
Herpes, a viral disease
Insect bites, such as those from mosquitos or chiggers
Pubic lice, if limited to the genital area
Scabies, especially when several other persons in close contact also itch
Shaving, which may irritate the skin
Swimmer's itch, a short-term immune reaction
Varicella – i.e. chickenpox, prevalent among young children and highly contagious
Tungiasis, ectoparasite of skin
Environmental and allergic
Allergic reaction to contact with specific chemicals, such as urushiol, derived from poison ivy or poison oak, or Balsam of Peru, found in many foods and fragrances. Certain allergens may be diagnosed in a patch test.
Foreign objects on the skin are the most common cause of non-pathological itching.
Photodermatitis – sunlight reacts with chemicals in the skin, leading to the formation of irritant metabolites.
Urticaria (also called hives) usually causes itching.
Dermatologic
Dandruff, in which an unusually large amount of flaking is associated with this sensation.
Punctate palmoplantar keratoderma, a group of disorders characterized by abnormal thickening of the palms and soles.
Skin conditions (such as psoriasis, eczema, seborrhoeic dermatitis, sunburn, athlete's foot, and hidradenitis suppurativa). Most are of an inflammatory nature.
Scab healing, scar growth, and the development or emergence of moles, pimples, and ingrown hairs from below the epidermis.
Xerosis, dry skin, frequently seen in the winter and also associated with older age, frequent bathing in hot showers or baths, and high-temperature and low-humidity environments.
Other diseases
Diabetes mellitus, a group of metabolic diseases in which a person has high blood sugar
Hyperparathyroidism, overactivity of the parathyroid glands resulting in excess production of parathyroid hormone (PTH)
Iron deficiency anemia, a common anemia (low red blood cell or hemoglobin levels)
Cholestasis, where bile acids leaking into the serum activate peripheral opioid receptors, resulting in the characteristic generalized, severe itching
Malignancy or internal cancer, such as lymphoma or Hodgkin's disease
Polycythemia, which can cause generalized itching due to increased histamines
Psychiatric disease ("psychogenic itch", as may be seen in delusional parasitosis)
Thyroid illness
Uraemia – the itching sensation this causes is known as uremic pruritus
Medication
Drugs (such as opioids) that activate histamine (H1) receptors or trigger histamine release
Chloroquine, a drug used in the treatment and prevention of malaria
Bile acid congeners such as obeticholic acid
Related to pregnancy
Gestational pemphigoid, a dermatosis of pregnancy
Intrahepatic cholestasis of pregnancy, a medical condition in which cholestasis occurs
Pruritic urticarial papules and plaques of pregnancy (PUPPP), a chronic hives-like rash
Other
Menopause, or changes in hormonal balances associated with aging
Terminal illness
Mechanism
Itch can originate in the peripheral nervous system (dermal or neuropathic) or in the central nervous system (neuropathic, neurogenic, or psychogenic).
Pruritoceptive
Itch originating in the skin is known as pruritoceptive, and can be induced by a variety of stimuli, including mechanical, chemical, thermal, and electrical stimulation, or infection. The primary afferent neurons responsible for histamine-induced itch are unmyelinated C-fibres.
Nociceptors. Two major classes of human C-fibre nociceptors exist: mechano-responsive nociceptors and mechano-insensitive nociceptors. Studies have shown that mechano-responsive nociceptors respond mostly to pain, whereas mechano-insensitive nociceptors respond mostly to itch induced by histamine. However, this does not explain mechanically induced itch, or itch produced without a flare reaction and without histamine involvement. It is therefore possible that pruritoceptive nerve fibres comprise additional classes, but this remains unclear in current research.
Histology and skin layers. Studies have shown that itch receptors are found only in the top two skin layers, the epidermis and the epidermal/dermal transition layer. Shelley and Arthur verified this depth by injecting individual spicules of itch powder (Mucuna pruriens) and noting that maximal sensitivity occurred at the basal cell layer, the innermost layer of the epidermis. Surgical removal of those skin layers removed a patient's ability to perceive itch. Itch is never felt in muscle or joints, which strongly suggests that deep tissue probably does not contain itch-signaling apparatus.
Sensitivity to pruritic stimuli is evenly distributed across the skin and has a clear spot distribution with similar density to that of pain. The different substances that elicit itch upon intracutaneous injection (injection within the skin) elicit only pain when injected subcutaneously (beneath the skin).
Molecular basis
Itch is often classified as that which is histamine mediated (histaminergic) and nonhistaminergic.
Itch is readily abolished in skin areas treated with nociceptor excitotoxin capsaicin but remains unchanged in skin areas rendered touch insensitive by pretreatment with anti-inflammatory saponins. Although experimentally induced itch can still be perceived under a complete A-fiber conduction block, it is significantly diminished. Overall, itch sensation is mediated by A-delta and C nociceptors located in the uppermost layer of the skin.
Gene expression. Using single-cell mRNA sequencing, clusters of genes expressed in itch-related tissues were identified, for example NP1–3, which transmit itch information; NP3 expresses the neuropeptides Nppb and Sst as well as genes involved in inflammatory itch (Il31ra, Osmr and Cysltr2). The histamine receptor gene Hrh1 was found in NP2 and NP3, suggesting that histaminergic itch is transmitted by both of these pruriceptive subclusters.
Infection. Staphylococcus aureus, a bacterial pathogen associated with itchy skin diseases, directly activates pruriceptor sensory neurons to drive itch. Skin exposure to S. aureus causes robust itch and scratch-induced damage. This reaction is mediated by S. aureus serine protease V8 which cleaves proteinase-activated receptor 1 (PAR1) on mouse and human sensory neurons. Targeting PAR1 through genetic deficiency, small interfering RNA (siRNA) knockdown, or pharmacological blockade decreases itch and skin damage caused by V8 and S. aureus exposure.
Spinal itch pathway
After the pruriceptive primary afferent has been activated, the signal is transmitted from the skin into the spinal dorsal horn. In this area, a number of interneurons will either be inhibited or activated to promote activation of projection neurons, which mediate the pruriceptive signal to the brain. The GRP-GRPR interneuron system has been found to be important for mediating both histaminergic and non-histaminergic itch, where the GRP neurons activate GRPR neurons to promote itch.
Neuropathic
Neuropathic itch can originate at any point along the afferent pathway as a result of damage to the nervous system, including diseases or disorders of the central or peripheral nervous system. Examples of itch that is neuropathic in origin include notalgia paresthetica, brachioradial pruritus, brain tumors, multiple sclerosis, peripheral neuropathy, and nerve irritation.
Neurogenic
Neurogenic itch, which is itch induced centrally but with no neural damage, is mostly associated with increased accumulation of exogenous opioids and possibly synthetic opioids.
Psychogenic
Itch is also associated with some symptoms of psychiatric disorders such as tactile hallucinations, delusions of parasitosis, or obsessive-compulsive disorders (as in OCD-related neurotic scratching).
Peripheral sensitization
Inflammatory mediators, such as bradykinin, serotonin (5-HT) and prostaglandins, released during a painful or pruritic inflammatory condition not only activate pruriceptors but also cause acute sensitization of the nociceptors. In addition, expression of nerve growth factor (NGF) can cause structural changes in nociceptors, such as sprouting. NGF is high in injured or inflamed tissue. Increased NGF is also found in atopic dermatitis, a hereditary and non-contagious skin disease with chronic inflammation. NGF is known to up-regulate neuropeptides, especially substance P. Substance P has been found to have an important role in inducing pain; however, there is no confirmation that substance P directly causes acute sensitization. Instead, substance P may contribute to itch by increasing neuronal sensitization and, during long-term interaction, may affect the release of mediators from mast cells, which contain many granules rich in histamine.
Central sensitization
Noxious input to the spinal cord is known to produce central sensitization, which consists of allodynia, exaggeration of pain, and punctate hyperalgesia, an extreme sensitivity to pain. Two types of mechanical hyperalgesia can occur: 1) touch that is normally painless in the uninjured surroundings of a cut or tear can trigger painful sensations (touch-evoked hyperalgesia), and 2) a slightly painful pin-prick stimulation is perceived as more painful around a focused area of inflammation (punctate hyperalgesia). Touch-evoked hyperalgesia requires continuous firing of primary afferent nociceptors, whereas punctate hyperalgesia does not require continuous firing, which means it can persist for hours after a trauma and can be stronger than normally experienced. In addition, it was found that in patients with neuropathic pain, histamine iontophoresis resulted in a sensation of burning pain rather than itch, which would be induced in normal healthy patients. This shows that there is spinal hypersensitivity to C-fiber input in chronic pain.
Treatment
A variety of over-the-counter and prescription anti-itch drugs are available. Some plant products have been found to be effective anti-pruritics; others have not. Non-chemical remedies include cooling, warming, and soft stimulation.
Topical antipruritics in the form of creams and sprays are often available over-the-counter. Oral anti-itch drugs also exist and are usually prescription drugs. The active ingredients usually belong to the following classes:
Antihistamines, such as diphenhydramine (Benadryl)
Corticosteroids, such as hydrocortisone topical cream; see topical steroid
Counterirritants, such as mint oil, menthol, or camphor
Crotamiton (trade name Eurax) is an antipruritic agent available as a cream or lotion, often used to treat scabies. Its mechanism of action remains unknown.
JAK inhibitors, such as ruxolitinib topical cream; see topical JAK inhibitor
Local anesthetics, such as benzocaine topical cream (Lanacane)
Phototherapy is helpful for severe itching, especially if caused by chronic kidney disease. The common type of light used is UVB.
Sometimes scratching relieves isolated itches, hence the existence of devices such as the back scratcher. Often, however, scratching only offers temporary relief and can intensify itching, even causing further damage to the skin, dubbed the "itch-scratch cycle".
The mainstay of therapy for dry skin is maintaining adequate skin moisture and topical emollients.
No studies have been conducted to investigate the effectiveness of emollient creams, cooling lotions, topical corticosteroids, topical antidepressants, systemic antihistamines, systemic antidepressants, systemic anticonvulsants, and phototherapy on chronic pruritus of unknown origin. However, there are clinical trials currently underway with dupilumab, which is thought to alleviate itch by acting on the IL-4 receptor on sensory neurons. The effectiveness of therapeutic options for people who are terminally ill with malignant cancer is not known.
Epidemiology
Approximately 280 million people globally, 4% of the population, have difficulty with itchiness. This is comparable to the 2–3% of the population who have psoriasis.
History
In 1660, German physician Samuel Hafenreffer introduced the definition of pruritus (itch).
| Biology and health sciences | Symptoms and signs | Health |
424348 | https://en.wikipedia.org/wiki/Cardiac%20muscle | Cardiac muscle | Cardiac muscle (also called heart muscle or myocardium) is one of three types of vertebrate muscle tissues, the others being skeletal muscle and smooth muscle. It is an involuntary, striated muscle that constitutes the main tissue of the wall of the heart. The cardiac muscle (myocardium) forms a thick middle layer between the outer layer of the heart wall (the pericardium) and the inner layer (the endocardium), with blood supplied via the coronary circulation. It is composed of individual cardiac muscle cells joined by intercalated discs, and encased by collagen fibers and other substances that form the extracellular matrix.
Cardiac muscle contracts in a similar manner to skeletal muscle, although with some important differences. Electrical stimulation in the form of a cardiac action potential triggers the release of calcium from the cell's internal calcium store, the sarcoplasmic reticulum. The rise in calcium causes the cell's myofilaments to slide past each other in a process called excitation-contraction coupling.
Diseases of the heart muscle known as cardiomyopathies are of major importance. These include ischemic conditions caused by a restricted blood supply to the muscle such as angina, and myocardial infarction.
Structure
Gross anatomy
Cardiac muscle tissue or myocardium forms the bulk of the heart. The heart wall is a three-layered structure with a thick layer of myocardium sandwiched between the inner endocardium and the outer epicardium (also known as the visceral pericardium). The inner endocardium lines the cardiac chambers, covers the cardiac valves, and joins with the endothelium that lines the blood vessels that connect to the heart. On the outer aspect of the myocardium is the epicardium which forms part of the pericardial sac that surrounds, protects, and lubricates the heart.
Within the myocardium, there are several sheets of cardiac muscle cells or cardiomyocytes. The sheets of muscle that wrap around the left ventricle closest to the endocardium are oriented perpendicularly to those closest to the epicardium. When these sheets contract in a coordinated manner they allow the ventricle to squeeze in several directions simultaneously – longitudinally (becoming shorter from apex to base), radially (becoming narrower from side to side), and with a twisting motion (similar to wringing out a damp cloth) to squeeze the maximum possible amount of blood out of the heart with each heartbeat.
Contracting heart muscle uses a lot of energy, and therefore requires a constant flow of blood to provide oxygen and nutrients. Blood is brought to the myocardium by the coronary arteries. These originate from the aortic root and lie on the outer or epicardial surface of the heart. Blood is then drained away by the coronary veins into the right atrium.
Microanatomy
Cardiac muscle cells (also called cardiomyocytes) are the contractile myocytes of the cardiac muscle. The cells are surrounded by an extracellular matrix produced by supporting fibroblast cells. Specialised modified cardiomyocytes known as pacemaker cells set the rhythm of the heart contractions. The pacemaker cells are only weakly contractile without sarcomeres, and are connected to neighboring contractile cells via gap junctions. They are located in the sinoatrial node (the primary pacemaker) positioned on the wall of the right atrium, near the entrance of the superior vena cava. Other pacemaker cells are found in the atrioventricular node (secondary pacemaker).
Pacemaker cells carry the impulses that are responsible for the beating of the heart. They are distributed throughout the heart and are responsible for several functions. First, they are responsible for being able to spontaneously generate and send out electrical impulses. They also must be able to receive and respond to electrical impulses from the brain. Lastly, they must be able to transfer electrical impulses from cell to cell. Pacemaker cells in the sinoatrial node, and atrioventricular node are smaller and conduct at a relatively slow rate between the cells. Specialized conductive cells in the bundle of His, and the Purkinje fibers are larger in diameter and conduct signals at a fast rate.
The myocardium also contains Purkinje fibers, which rapidly conduct electrical signals; coronary arteries, which bring nutrients to the muscle cells; and veins and a capillary network, which take away waste products.
Cardiac muscle cells are the contracting cells that allow the heart to pump. Each cardiomyocyte needs to contract in coordination with its neighboring cells - known as a functional syncytium - working to efficiently pump blood from the heart, and if this coordination breaks down then – despite individual cells contracting – the heart may not pump at all, such as may occur during abnormal heart rhythms such as ventricular fibrillation.
Viewed through a microscope, cardiac muscle cells are roughly rectangular, measuring 100–150μm by 30–40μm. Individual cardiac muscle cells are joined at their ends by intercalated discs to form long fibers. Each cell contains myofibrils, specialized protein contractile fibers of actin and myosin that slide past each other. These are organized into sarcomeres, the fundamental contractile units of muscle cells. The regular organization of myofibrils into sarcomeres gives cardiac muscle cells a striped or striated appearance when looked at through a microscope, similar to skeletal muscle. These striations are caused by lighter I bands composed mainly of actin, and darker A bands composed mainly of myosin.
Cardiomyocytes contain T-tubules, pouches of cell membrane that run from the cell surface to the cell's interior which help to improve the efficiency of contraction. The majority of these cells contain only one nucleus (some may have two central nuclei), unlike skeletal muscle cells which contain many nuclei. Cardiac muscle cells contain many mitochondria which provide the energy needed for the cell in the form of adenosine triphosphate (ATP), making them highly resistant to fatigue.
T-tubules
T-tubules are microscopic tubes that run from the cell surface to deep within the cell. They are continuous with the cell membrane, are composed of the same phospholipid bilayer, and are open at the cell surface to the extracellular fluid that surrounds the cell. T-tubules in cardiac muscle are bigger and wider than those in skeletal muscle, but fewer in number. In the centre of the cell they join, running into and along the cell as a transverse-axial network. Inside the cell they lie close to the cell's internal calcium store, the sarcoplasmic reticulum. Here, a single tubule pairs with part of the sarcoplasmic reticulum, called a terminal cisterna, in a combination known as a diad.
The functions of T-tubules include rapidly transmitting electrical impulses known as action potentials from the cell surface to the cell's core, and helping to regulate the concentration of calcium within the cell in a process known as excitation-contraction coupling. They are also involved in mechano-electric feedback, as evident from cell contraction induced T-tubular content exchange (advection-assisted diffusion), which was confirmed by confocal and 3D electron tomography observations.
Intercalated discs
The cardiac syncytium is a network of cardiomyocytes connected by intercalated discs that enable the rapid transmission of electrical impulses through the network, enabling the syncytium to act in a coordinated contraction of the myocardium. There is an atrial syncytium and a ventricular syncytium that are connected by cardiac connection fibres. Electrical resistance through intercalated discs is very low, thus allowing free diffusion of ions. The ease of ion movement along the axes of cardiac muscle fibers is such that action potentials are able to travel from one cardiac muscle cell to the next, facing only slight resistance. Each syncytium obeys the all-or-none law.
Intercalated discs are complex adhering structures that connect the single cardiomyocytes to an electrochemical syncytium (in contrast to the skeletal muscle, which becomes a multicellular syncytium during embryonic development). The discs are responsible mainly for force transmission during muscle contraction. Intercalated discs consist of three different types of cell-cell junctions: the actin filament anchoring fascia adherens junctions, the intermediate filament anchoring desmosomes, and gap junctions. They allow action potentials to spread between cardiac cells by permitting the passage of ions between cells, producing depolarization of the heart muscle. The three types of junction act together as a single area composita.
Under light microscopy, intercalated discs appear as thin, typically dark-staining lines dividing adjacent cardiac muscle cells. The intercalated discs run perpendicular to the direction of muscle fibers. Under electron microscopy, an intercalated disc's path appears more complex. At low magnification, this may appear as a convoluted electron dense structure overlying the location of the obscured Z-line. At high magnification, the intercalated disc's path appears even more convoluted, with both longitudinal and transverse areas appearing in longitudinal section.
Fibroblasts
Cardiac fibroblasts are vital supporting cells within cardiac muscle. They are unable to provide forceful contractions like cardiomyocytes, but instead are largely responsible for creating and maintaining the extracellular matrix which surrounds the cardiomyocytes. Fibroblasts play a crucial role in responding to injury, such as a myocardial infarction. Following injury, fibroblasts can become activated and turn into myofibroblasts – cells which exhibit behaviour somewhere between a fibroblast (generating extracellular matrix) and a smooth muscle cell (ability to contract). In this capacity, fibroblasts can repair an injury by creating collagen while gently contracting to pull the edges of the injured area together.
Fibroblasts are smaller but more numerous than cardiomyocytes, and several fibroblasts can be attached to a cardiomyocyte at once. When attached to a cardiomyocyte they can influence the electrical currents passing across the muscle cell's surface membrane, and in the context are referred to as being electrically coupled, as originally shown in vitro in the 1960s, and ultimately confirmed in native cardiac tissue with the help of optogenetic techniques. Other potential roles for fibroblasts include electrical insulation of the cardiac conduction system, and the ability to transform into other cell types including cardiomyocytes and adipocytes.
Extracellular matrix
The extracellular matrix (ECM) surrounds the cardiomyocyte and fibroblasts. The ECM is composed of proteins including collagen and elastin along with polysaccharides (sugar chains) known as glycosaminoglycans. Together, these substances give support and strength to the muscle cells, create elasticity in cardiac muscle, and keep the muscle cells hydrated by binding water molecules.
The matrix in immediate contact with the muscle cells is referred to as the basement membrane, mainly composed of type IV collagen and laminin. Cardiomyocytes are linked to the basement membrane via specialised glycoproteins called integrins.
Development
Humans are born with a set number of heart muscle cells, or cardiomyocytes, which increase in size as the heart grows larger during childhood development. Evidence suggests that cardiomyocytes are slowly turned over during aging, but less than 50% of the cardiomyocytes present at birth are replaced during a normal life span. The growth of individual cardiomyocytes not only occurs during normal heart development, it also occurs in response to extensive exercise (athletic heart syndrome), heart disease, or heart muscle injury such as after a myocardial infarction. A healthy adult cardiomyocyte has a cylindrical shape that is approximately 100μm long and 10–25μm in diameter. Cardiomyocyte hypertrophy occurs through sarcomerogenesis, the creation of new sarcomere units in the cell. During heart volume overload, cardiomyocytes grow through eccentric hypertrophy. The cardiomyocytes extend lengthwise but have the same diameter, resulting in ventricular dilation. During heart pressure overload, cardiomyocytes grow through concentric hypertrophy. The cardiomyocytes grow larger in diameter but have the same length, resulting in heart wall thickening.
Physiology
The physiology of cardiac muscle shares many similarities with that of skeletal muscle. The primary function of both muscle types is to contract, and in both cases, a contraction begins with a characteristic flow of ions across the cell membrane known as an action potential. The cardiac action potential subsequently triggers muscle contraction by increasing the concentration of calcium within the cytosol.
Cardiac cycle
The cardiac cycle is the performance of the human heart from the beginning of one heartbeat to the beginning of the next. It consists of two periods: one during which the heart muscle relaxes and refills with blood, called diastole, following a period of robust contraction and pumping of blood, dubbed systole. After emptying, the heart immediately relaxes and expands to receive another influx of blood returning from the lungs and other systems of the body, before again contracting to pump blood to the lungs and those systems. A normally performing heart must be fully expanded before it can efficiently pump again.
The rest phase is considered polarized. The resting potential during this phase of the beat separates the ions such as sodium, potassium, and calcium. Myocardial cells possess the property of automaticity or spontaneous depolarization. This is the direct result of a membrane which allows sodium ions to slowly enter the cell until the threshold is reached for depolarization. Calcium ions follow and extend the depolarization even further. Once calcium stops moving inward, potassium ions move out slowly to produce repolarization. The very slow repolarization of the CMC membrane is responsible for the long refractory period.
However, the mechanism by which calcium concentrations within the cytosol rise differ between skeletal and cardiac muscle. In cardiac muscle, the action potential comprises an inward flow of both sodium and calcium ions. The flow of sodium ions is rapid but very short-lived, while the flow of calcium is sustained and gives the plateau phase characteristic of cardiac muscle action potentials. The comparatively small flow of calcium through the L-type calcium channels triggers a much larger release of calcium from the sarcoplasmic reticulum in a phenomenon known as calcium-induced calcium release. In contrast, in skeletal muscle, minimal calcium flows into the cell during action potential and instead the sarcoplasmic reticulum in these cells is directly coupled to the surface membrane. This difference can be illustrated by the observation that cardiac muscle fibers require calcium to be present in the solution surrounding the cell to contract, while skeletal muscle fibers will contract without extracellular calcium.
During contraction of a cardiac muscle cell, the long protein myofilaments oriented along the length of the cell slide over each other in what is known as the sliding filament theory. There are two kinds of myofilaments, thick filaments composed of the protein myosin, and thin filaments composed of the proteins actin, troponin and tropomyosin. As the thick and thin filaments slide past each other the cell becomes shorter and fatter. In a mechanism known as cross-bridge cycling, calcium ions bind to the protein troponin, which along with tropomyosin then uncover key binding sites on actin. Myosin, in the thick filament, can then bind to actin, pulling the thick filaments along the thin filaments. When the concentration of calcium within the cell falls, troponin and tropomyosin once again cover the binding sites on actin, causing the cell to relax.
Regeneration
It was commonly believed that cardiac muscle cells could not be regenerated. However, this was contradicted by a report published in 2009. Olaf Bergmann and his colleagues at the Karolinska Institute in Stockholm tested samples of heart muscle from people born before 1955 who had very little cardiac muscle around their heart, many showing disabilities from this abnormality. By using DNA samples from many hearts, the researchers estimated that a 4-year-old renews about 20% of heart muscle cells per year, and that about 69% of the heart muscle cells of a 50-year-old were generated after they were born.
One way that cardiomyocyte regeneration occurs is through the division of pre-existing cardiomyocytes during the normal aging process.
In the 2000s, the discovery of adult endogenous cardiac stem cells was reported, and studies were published that claimed that various stem cell lineages, including bone marrow stem cells were able to differentiate into cardiomyocytes, and could be used to treat heart failure.
However, other teams were unable to replicate these findings, and many of the original studies were later retracted for scientific fraud.
Differences between atria and ventricles
Cardiac muscle forms both the atria and the ventricles of the heart. Although this muscle tissue is very similar between cardiac chambers, some differences exist. The myocardium found in the ventricles is thick to allow forceful contractions, while the myocardium in the atria is much thinner. The individual myocytes that make up the myocardium also differ between cardiac chambers. Ventricular cardiomyocytes are longer and wider, with a denser T-tubule network. Although the fundamental mechanisms of calcium handling are similar between ventricular and atrial cardiomyocytes, the calcium transient is smaller and decays more rapidly in atrial myocytes, with a corresponding increase in calcium buffering capacity. The complement of ion channels differs between chambers, leading to longer action potential durations and effective refractory periods in the ventricles. Certain ion currents such as IK(UR) are highly specific to atrial cardiomyocytes, making them a potential target for treatments for atrial fibrillation.
Clinical significance
Diseases affecting cardiac muscle, known as cardiomyopathies, are the leading cause of death in developed countries. The most common condition is coronary artery disease, in which the blood supply to the heart is reduced. The coronary arteries become narrowed by the formation of atherosclerotic plaques. If these narrowings become severe enough to partially restrict blood flow, the syndrome of angina pectoris may occur. This typically causes chest pain during exertion that is relieved by rest. If a coronary artery suddenly becomes very narrowed or completely blocked, interrupting or severely reducing blood flow through the vessel, a myocardial infarction or heart attack occurs. If the blockage is not relieved promptly by medication, percutaneous coronary intervention, or surgery, then a heart muscle region may become permanently scarred and damaged. Specific cardiomyopathies include those in which the heart muscle has an increased left ventricular mass (hypertrophic cardiomyopathy), is abnormally enlarged (dilated cardiomyopathy), or is abnormally stiff (restrictive cardiomyopathy). Some of these conditions are caused by genetic mutations and can be inherited.
Heart muscle can also become damaged despite a normal blood supply. The heart muscle may become inflamed in a condition called myocarditis, most commonly caused by a viral infection but sometimes caused by the body's own immune system. Heart muscle can also be damaged by drugs such as alcohol, long standing high blood pressure or hypertension, or persistent abnormal heart racing.
Many of these conditions, if severe enough, can damage the heart so much that the pumping function of the heart is reduced. If the heart is no longer able to pump enough blood to meet the body's needs, this is described as heart failure.
Significant damage to cardiac muscle cells is referred to as myocytolysis which is considered a type of cellular necrosis defined as either coagulative or colliquative.
| Biology and health sciences | Muscular system | Biology |
424440 | https://en.wikipedia.org/wiki/H-theorem | H-theorem | In classical statistical mechanics, the H-theorem, introduced by Ludwig Boltzmann in 1872, describes the tendency of the quantity H (defined below) to decrease in a nearly-ideal gas of molecules. As this quantity H was meant to represent the entropy of thermodynamics, the H-theorem was an early demonstration of the power of statistical mechanics as it claimed to derive the second law of thermodynamics—a statement about fundamentally irreversible processes—from reversible microscopic mechanics. It is thought to prove the second law of thermodynamics, albeit under the assumption of low-entropy initial conditions.
The H-theorem is a natural consequence of the kinetic equation derived by Boltzmann that has come to be known as Boltzmann's equation. The H-theorem has led to considerable discussion about its actual implications, with major themes being:
What is entropy? In what sense does Boltzmann's quantity H correspond to the thermodynamic entropy?
Are the assumptions (especially the assumption of molecular chaos) behind Boltzmann's equation too strong? When are these assumptions violated?
Name and pronunciation
Boltzmann in his original publication writes the symbol E (as in entropy) for its statistical function. Years later, Samuel Hawksley Burbury, one of the critics of the theorem, wrote the function with the symbol H, a notation that was subsequently adopted by Boltzmann when referring to his "H-theorem". The notation has led to some confusion regarding the name of the theorem. Even though the statement is usually referred to as the "Aitch theorem", sometimes it is instead called the "Eta theorem", as the capital Greek letter Eta (Η) is indistinguishable from the capital version of Latin letter h (H). Discussions have been raised on how the symbol should be understood, but it remains unclear due to the lack of written sources from the time of the theorem. Studies of the typography and the work of J.W. Gibbs seem to favour the interpretation of H as Eta.
Definition and meaning of Boltzmann's H
The H value is determined from the function f(E, t) dE, which is the energy distribution function of molecules at time t. The value f(E, t) dE is the number of molecules that have kinetic energy between E and E + dE. H itself is defined as
$$ H(t) = \int_0^{\infty} f(E,t) \left[ \ln\!\left( \frac{f(E,t)}{\sqrt{E}} \right) - 1 \right] dE. $$
For an isolated ideal gas (with fixed total energy and fixed total number of particles), the function H is at a minimum when the particles have a Maxwell–Boltzmann distribution; if the molecules of the ideal gas are distributed in some other way (say, all having the same kinetic energy), then the value of H will be higher. Boltzmann's H-theorem, described in the next section, shows that when collisions between molecules are allowed, such distributions are unstable and tend to irreversibly seek towards the minimum value of H (towards the Maxwell–Boltzmann distribution).
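As a rough illustration of this minimum property, the sketch below (not part of the original article) works with a discrete energy grid: among all distributions with fixed normalisation and fixed mean energy, the one minimising the discrete analogue of H, the sum of p ln p, comes out in the Boltzmann form p_i proportional to exp(-beta*E_i), while a distribution with all particles at the same energy gives a larger H. The energy grid, the target mean energy and the use of SciPy's constrained optimiser are illustrative assumptions.

# A minimal sketch (not from the article): a discrete analogue of the statement above.
import numpy as np
from scipy.optimize import minimize

E = np.linspace(0.0, 5.0, 40)          # discrete "kinetic energy" levels (arbitrary units)
E_target = 1.5                         # fixed mean energy per particle (assumption)

def H(p):                              # discrete counterpart of Boltzmann's H
    p = np.clip(p, 1e-12, None)
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},          # normalisation
    {"type": "eq", "fun": lambda p: np.dot(p, E) - E_target},  # fixed mean energy
]
p0 = np.full(E.size, 1.0 / E.size)
res = minimize(H, p0, bounds=[(0.0, 1.0)] * E.size, constraints=constraints)
p_min = np.clip(res.x, 1e-12, None)

# A feasible comparison state: every particle with (nearly) the same kinetic energy.
p_same = np.zeros_like(E)
p_same[np.argmin(np.abs(E - E_target))] = 1.0

slope, intercept = np.polyfit(E, np.log(p_min), 1)   # ln p_i should be affine in E_i
print("H, all particles at one energy:", H(p_same))  # about 0, the larger value
print("H, constrained minimiser:      ", H(p_min))   # negative, the smaller value
print("minimiser follows ln p = %.3f*E %+.3f (Boltzmann form)" % (slope, intercept))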
(Note on notation: Boltzmann originally used the letter E for quantity H; most of the literature after Boltzmann uses the letter H as here. Boltzmann also used the symbol x to refer to the kinetic energy of a particle.)
Boltzmann's H theorem
Boltzmann considered what happens during the collision between two particles. It is a basic fact of mechanics that in the elastic collision between two particles (such as hard spheres), the energy transferred between the particles varies depending on initial conditions (angle of collision, etc.).
Boltzmann made a key assumption known as the Stosszahlansatz (molecular chaos assumption), that during any collision event in the gas, the two particles participating in the collision have 1) independently chosen kinetic energies from the distribution, 2) independent velocity directions, 3) independent starting points. Under these assumptions, and given the mechanics of energy transfer, the energies of the particles after the collision will obey a certain new random distribution that can be computed.
Considering repeated uncorrelated collisions, between any and all of the molecules in the gas, Boltzmann constructed his kinetic equation (Boltzmann's equation). From this kinetic equation, a natural outcome is that the continual process of collision causes the quantity H to decrease until it has reached a minimum.
Impact
Although Boltzmann's H-theorem turned out not to be the absolute proof of the second law of thermodynamics as originally claimed (see Criticisms below), the H-theorem led Boltzmann in the last years of the 19th century to more and more probabilistic arguments about the nature of thermodynamics. The probabilistic view of thermodynamics culminated in 1902 with Josiah Willard Gibbs's statistical mechanics for fully general systems (not just gases), and the introduction of generalized statistical ensembles.
The kinetic equation and in particular Boltzmann's molecular chaos assumption inspired a whole family of Boltzmann equations that are still used today to model the motions of particles, such as the electrons in a semiconductor. In many cases the molecular chaos assumption is highly accurate, and the ability to discard complex correlations between particles makes calculations much simpler.
The process of thermalisation can be described using the H-theorem or the relaxation theorem.
Criticism and exceptions
There are several notable reasons described below why the H-theorem, at least in its original 1872 form, is not completely rigorous. As Boltzmann would eventually go on to admit, the arrow of time in the H-theorem is not in fact purely mechanical, but really a consequence of assumptions about initial conditions.
Loschmidt's paradox
Soon after Boltzmann published his H theorem, Johann Josef Loschmidt objected that it should not be possible to deduce an irreversible process from time-symmetric dynamics and a time-symmetric formalism. If the H decreases over time in one state, then there must be a matching reversed state where H increases over time (Loschmidt's paradox). The explanation is that Boltzmann's equation is based on the assumption of "molecular chaos", i.e., that it follows from, or at least is consistent with, the underlying kinetic model that the particles be considered independent and uncorrelated. It turns out that this assumption breaks time reversal symmetry in a subtle sense, and therefore begs the question. Once the particles are allowed to collide, their velocity directions and positions in fact do become correlated (however, these correlations are encoded in an extremely complex manner). This shows that an (ongoing) assumption of independence is not consistent with the underlying particle model.
Boltzmann's reply to Loschmidt was to concede the possibility of these states, but noting that these sorts of states were so rare and unusual as to be impossible in practice. Boltzmann would go on to sharpen this notion of the "rarity" of states, resulting in his entropy formula of 1877.
Spin echo
As a demonstration of Loschmidt's paradox, a modern counterexample (not to Boltzmann's original gas-related H-theorem, but to a closely related analogue) is the phenomenon of spin echo. In the spin echo effect, it is physically possible to induce time reversal in an interacting system of spins.
An analogue to Boltzmann's H for the spin system can be defined in terms of the distribution of spin states in the system. In the experiment, the spin system is initially perturbed into a non-equilibrium state (high H), and, as predicted by the H theorem the quantity H soon decreases to the equilibrium value. At some point, a carefully constructed electromagnetic pulse is applied that reverses the motions of all the spins. The spins then undo the time evolution from before the pulse, and after some time the H actually increases away from equilibrium (once the evolution has completely unwound, the H decreases once again to the minimum value). In some sense, the time reversed states noted by Loschmidt turned out to be not completely impractical.
Poincaré recurrence
In 1896, Ernst Zermelo noted a further problem with the H theorem, which was that if the system's H is at any time not a minimum, then by Poincaré recurrence, the non-minimal H must recur (though after some extremely long time). Boltzmann admitted that these recurring rises in H technically would occur, but pointed out that, over long times, the system spends only a tiny fraction of its time in one of these recurring states.
The second law of thermodynamics states that the entropy of an isolated system always increases to a maximum equilibrium value. This is strictly true only in the thermodynamic limit of an infinite number of particles. For a finite number of particles, there will always be entropy fluctuations. For example, in the fixed volume of the isolated system, the maximum entropy is obtained when half the particles are in one half of the volume, half in the other, but sometimes there will be temporarily a few more particles on one side than the other, and this will constitute a very small reduction in entropy. These entropy fluctuations are such that the longer one waits, the larger an entropy fluctuation one will probably see during that time, and the time one must wait for a given entropy fluctuation is always finite, even for a fluctuation to its minimum possible value. For example, one might have an extremely low entropy condition of all particles being in one half of the container. The gas will quickly attain its equilibrium value of entropy, but given enough time, this same situation will happen again. For practical systems, e.g. a gas in a 1-liter container at room temperature and atmospheric pressure, this time is truly enormous, many multiples of the age of the universe, and, practically speaking, one can ignore the possibility.
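A small numerical illustration (not part of the original article, and a deliberately crude model) of these finite-size fluctuations: assign N non-interacting particles independently to the two halves of a container, take the Boltzmann entropy of the observed macrostate to be the logarithm of the number of ways to realise it, and watch the typical relative dip below the maximum entropy shrink as N grows. The particle numbers, the number of snapshots and the unit choice (k set to 1) are arbitrary.

import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

def ln_binom(N, n):                     # ln C(N, n) via log-gamma, stable for large N
    return gammaln(N + 1) - gammaln(n + 1) - gammaln(N - n + 1)

for N in (100, 10_000, 1_000_000):
    n_left = rng.binomial(N, 0.5, size=2_000)   # repeated "snapshots" of the gas
    S = ln_binom(N, n_left)                     # entropy of each observed macrostate
    S_max = ln_binom(N, N // 2)                 # entropy of the 50:50 macrostate
    print(f"N={N:>9}: mean relative entropy dip = {np.mean(S_max - S) / S_max:.2e}")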
Fluctuations of H in small systems
Since H is a mechanically defined variable that is not conserved, then like any other such variable (pressure, etc.) it will show thermal fluctuations. This means that H regularly shows spontaneous increases from the minimum value. Technically this is not an exception to the H theorem, since the H theorem was only intended to apply for a gas with a very large number of particles. These fluctuations are only perceptible when the system is small and the time interval over which it is observed is not enormously large.
If H is interpreted as entropy as Boltzmann intended, then this can be seen as a manifestation of the fluctuation theorem.
Connection to information theory
H is a forerunner of Shannon's information entropy. Claude Shannon denoted his measure of information entropy H after the H-theorem. The article on Shannon's information entropy contains an explanation of the discrete counterpart of the quantity H, known as the information entropy or information uncertainty (with a minus sign). By extending the discrete information entropy to the continuous information entropy, also called differential entropy, one obtains the expression in the equation from the section above, Definition and meaning of Boltzmann's H, and thus a better feel for the meaning of H.
The H-theorem's connection between information and entropy plays a central role in a recent controversy called the Black hole information paradox.
Tolman's H-theorem
Richard C. Tolman's 1938 book The Principles of Statistical Mechanics dedicates a whole chapter to the study of Boltzmann's H theorem, and its extension in the generalized classical statistical mechanics of Gibbs. A further chapter is devoted to the quantum mechanical version of the H-theorem.
Classical mechanical
We let $q_i$ and $p_i$ be our generalized canonical coordinates for a set of particles. Then we consider a function $f$ that returns the probability density of particles over the states in phase space. Note how this can be multiplied by a small region in phase space, denoted by $\delta q \, \delta p$, to yield the (average) expected number of particles in that region.
Tolman offers the following equations for the definition of the quantity H in Boltzmann's original H theorem.
Here we sum over the regions into which phase space is divided, indexed by . And in the limit for an infinitesimal phase space volume , we can write the sum as an integral.
H can also be written in terms of the number of molecules present in each of the cells.
An additional way to calculate the quantity H is:
where P is the probability of finding a system chosen at random from the specified microcanonical ensemble. It can finally be written as:
where G is the number of classical states.
The quantity H can also be defined as the integral over velocity space :
$$ H \equiv \int P(\mathbf{v}) \ln P(\mathbf{v}) \, d^3\mathbf{v} \qquad (1) $$
where P(v) is the probability distribution.
Using the Boltzmann equation one can prove that H can only decrease.
For a system of N statistically independent particles, H is related to the thermodynamic entropy S through: $S = -kNH$.
So, according to the H-theorem, S can only increase.
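A short derivation sketch of this relation (not from the source text), assuming the N particles are statistically independent so that the N-particle distribution factorises into identical, normalised single-particle factors f, with k denoting Boltzmann's constant:

$$ S = -k \int f_N \ln f_N \, d\Gamma = -k \int \Big( \prod_{j=1}^{N} f(x_j) \Big) \sum_{i=1}^{N} \ln f(x_i) \, d\Gamma = -k N \int f(x) \ln f(x) \, dx = -k N H, $$

where the normalisation of $f$ is used to integrate out the $N-1$ spectator coordinates in each term of the sum.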
Quantum mechanical
In quantum statistical mechanics (which is the quantum version of classical statistical mechanics), the H-function is the function
$$ H = \sum_i p_i \ln p_i, $$
where the summation runs over all possible distinct states of the system, and $p_i$ is the probability that the system could be found in the $i$-th state.
This is closely related to the entropy formula of Gibbs,
$$ S = -k \sum_i p_i \ln p_i, $$
and we shall (following e.g., Waldram (1985), p. 39) proceed using S rather than H.
First, differentiating with respect to time gives
$$ \frac{dS}{dt} = -k \sum_i \frac{dp_i}{dt}\left(\ln p_i + 1\right) = -k \sum_i \frac{dp_i}{dt}\ln p_i $$
(using the fact that $\sum_i dp_i/dt = 0$, since $\sum_i p_i = 1$, so the second term vanishes. We will see later that it will be useful to break this into two sums.)
Now Fermi's golden rule gives a master equation for the average rate of quantum jumps from state α to β; and from state β to α. (Of course, Fermi's golden rule itself makes certain approximations, and the introduction of this rule is what introduces irreversibility. It is essentially the quantum version of Boltzmann's Stosszahlansatz.) For an isolated system the jumps will make contributions
where the reversibility of the dynamics ensures that the same transition constant ναβ appears in both expressions.
So
The two difference terms in the summation always have the same sign. For example:
then
so overall the two negative signs will cancel.
Therefore,
for an isolated system.
The same mathematics is sometimes used to show that relative entropy is a Lyapunov function of a Markov process in detailed balance, and other chemistry contexts.
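The same conclusion can also be checked numerically. The sketch below (not from the source text) integrates a master equation with symmetric transition rates, as assumed in the argument above, and verifies at every step that the Gibbs entropy, minus the sum of p ln p, never decreases; the number of states, the random rates, the initial distribution and the time step are arbitrary illustrative choices, with k set to 1. Because the symmetric rates make each small Euler update a doubly stochastic map, the monotone increase holds exactly in this discretisation, not just approximately.

import numpy as np

rng = np.random.default_rng(1)
n_states = 6
nu = rng.random((n_states, n_states))
nu = (nu + nu.T) / 2                 # symmetric rates nu_ab = nu_ba, as in the text
np.fill_diagonal(nu, 0.0)

p = rng.random(n_states)
p /= p.sum()                         # arbitrary non-equilibrium initial distribution

def entropy(p):
    return -np.sum(p * np.log(p))

dt, steps = 1e-3, 5000
S_prev = entropy(p)
for step in range(steps):
    # dp_a/dt = sum_b nu_ab * (p_b - p_a): gain from state b, loss to state b
    dpdt = nu @ p - p * nu.sum(axis=1)
    p = p + dt * dpdt
    S = entropy(p)
    assert S >= S_prev - 1e-12, "entropy decreased (should not happen)"
    S_prev = S

print("final distribution:", np.round(p, 4))   # approaches the uniform distribution
print("final entropy:", entropy(p), " maximum possible:", np.log(n_states))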
Gibbs' H-theorem
Josiah Willard Gibbs described another way in which the entropy of a microscopic system would tend to increase over time. Later writers have called this "Gibbs' H-theorem" as its conclusion resembles that of Boltzmann's. Gibbs himself never called it an H-theorem, and in fact his definition of entropy—and mechanism of increase—are very different from Boltzmann's. This section is included for historical completeness.
The setting of Gibbs' entropy production theorem is in ensemble statistical mechanics, and the entropy quantity is the Gibbs entropy (information entropy) defined in terms of the probability distribution for the entire state of the system. This is in contrast to Boltzmann's H defined in terms of the distribution of states of individual molecules, within a specific state of the system.
Gibbs considered the motion of an ensemble which initially starts out confined to a small region of phase space, meaning that the state of the system is known with fair precision though not quite exactly (low Gibbs entropy). The evolution of this ensemble over time proceeds according to Liouville's equation. For almost any kind of realistic system, the Liouville evolution tends to "stir" the ensemble over phase space, a process analogous to the mixing of a dye in an incompressible fluid. After some time, the ensemble appears to be spread out over phase space, although it is actually a finely striped pattern, with the total volume of the ensemble (and its Gibbs entropy) conserved. Liouville's equation is guaranteed to conserve Gibbs entropy since there is no random process acting on the system; in principle, the original ensemble can be recovered at any time by reversing the motion.
The critical point of the theorem is thus: If the fine structure in the stirred-up ensemble is very slightly blurred, for any reason, then the Gibbs entropy increases, and the ensemble becomes an equilibrium ensemble. As to why this blurring should occur in reality, there are a variety of suggested mechanisms. For example, one suggested mechanism is that the phase space is coarse-grained for some reason (analogous to the pixelization in the simulation of phase space shown in the figure). For any required finite degree of fineness the ensemble becomes "sensibly uniform" after a finite time. Or, if the system experiences a tiny uncontrolled interaction with its environment, the sharp coherence of the ensemble will be lost. Edwin Thompson Jaynes argued that the blurring is subjective in nature, simply corresponding to a loss of knowledge about the state of the system. In any case, however it occurs, the Gibbs entropy increase is irreversible provided the blurring cannot be reversed.
The exactly evolving entropy, which does not increase, is known as fine-grained entropy. The blurred entropy is known as coarse-grained entropy.
Leonard Susskind analogizes this distinction to the notion of the volume of a fibrous ball of cotton: On one hand the volume of the fibers themselves is constant, but in another sense there is a larger coarse-grained volume, corresponding to the outline of the ball.
Gibbs' entropy increase mechanism solves some of the technical difficulties found in Boltzmann's H-theorem: The Gibbs entropy does not fluctuate nor does it exhibit Poincaré recurrence, and so the increase in Gibbs entropy, when it occurs, is therefore irreversible as expected from thermodynamics. The Gibbs mechanism also applies equally well to systems with very few degrees of freedom, such as the single-particle system shown in the figure. To the extent that one accepts that the ensemble becomes blurred, then, Gibbs' approach is a cleaner proof of the second law of thermodynamics.
Unfortunately, as pointed out early on in the development of quantum statistical mechanics by John von Neumann and others, this kind of argument does not carry over to quantum mechanics. In quantum mechanics, the ensemble cannot support an ever-finer mixing process, because of the finite dimensionality of the relevant portion of Hilbert space. Instead of converging closer and closer to the equilibrium ensemble (time-averaged ensemble) as in the classical case, the density matrix of the quantum system will constantly show evolution, even showing recurrences. Developing a quantum version of the H-theorem without appeal to the Stosszahlansatz is thus significantly more complicated.
| Physical sciences | Thermodynamics | Physics |
424540 | https://en.wikipedia.org/wiki/Einstein%20field%20equations | Einstein field equations | In the general theory of relativity, the Einstein field equations (EFE; also known as Einstein's equations) relate the geometry of spacetime to the distribution of matter within it.
The equations were published by Albert Einstein in 1915 in the form of a tensor equation which related the local spacetime curvature (expressed by the Einstein tensor) with the local energy, momentum and stress within that spacetime (expressed by the stress–energy tensor).
Analogously to the way that electromagnetic fields are related to the distribution of charges and currents via Maxwell's equations, the EFE relate the spacetime geometry to the distribution of mass–energy, momentum and stress, that is, they determine the metric tensor of spacetime for a given arrangement of stress–energy–momentum in the spacetime. The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of nonlinear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation.
As well as implying local energy–momentum conservation, the EFE reduce to Newton's law of gravitation in the limit of a weak gravitational field and velocities that are much less than the speed of light.
Exact solutions for the EFE can only be found under simplifying assumptions such as symmetry. Special classes of exact solutions are most often studied since they model many gravitational phenomena, such as rotating black holes and the expanding universe. Further simplification is achieved in approximating the spacetime as having only small deviations from flat spacetime, leading to the linearized EFE. These equations are used to study phenomena such as gravitational waves.
Mathematical form
The Einstein field equations (EFE) may be written in the form:
$$ G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu}, $$
where $G_{\mu\nu}$ is the Einstein tensor, $g_{\mu\nu}$ is the metric tensor, $T_{\mu\nu}$ is the stress–energy tensor, $\Lambda$ is the cosmological constant and $\kappa$ is the Einstein gravitational constant.
The Einstein tensor is defined as
$$ G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu}, $$
where $R_{\mu\nu}$ is the Ricci curvature tensor and $R$ is the scalar curvature. This is a symmetric second-degree tensor that depends on only the metric tensor and its first and second derivatives.
The Einstein gravitational constant is defined as
$$ \kappa = \frac{8 \pi G}{c^4} \approx 2.077 \times 10^{-43} \ \mathrm{N^{-1}}, $$
where $G$ is the Newtonian constant of gravitation and $c$ is the speed of light in vacuum.
The EFE can thus also be written as
$$ R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu}. $$
In standard units, each term on the left has units of 1/length².
The expression on the left represents the curvature of spacetime as determined by the metric; the expression on the right represents the stress–energy–momentum content of spacetime. The EFE can then be interpreted as a set of equations dictating how stress–energy–momentum determines the curvature of spacetime.
These equations, together with the geodesic equation, which dictates how freely falling matter moves through spacetime, form the core of the mathematical formulation of general relativity.
The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
Although the Einstein field equations were initially formulated in the context of a four-dimensional theory, some theorists have explored their consequences in $n$ dimensions. The equations in contexts outside of general relativity are still referred to as the Einstein field equations. The vacuum field equations (obtained when $T_{\mu\nu}$ is everywhere zero) define Einstein manifolds.
The equations are more complex than they appear. Given a specified distribution of matter and energy in the form of a stress–energy tensor, the EFE are understood to be equations for the metric tensor $g_{\mu\nu}$, since both the Ricci tensor and scalar curvature depend on the metric in a complicated nonlinear manner. When fully written out, the EFE are a system of ten coupled, nonlinear, hyperbolic-elliptic partial differential equations.
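For readers who want to see this nonlinear dependence on the metric concretely, the following sketch (not part of the original article) uses the SymPy computer-algebra library to build the Christoffel symbols, Ricci tensor, scalar curvature and Einstein tensor directly from a metric. The choice of the spatially flat FLRW metric ds² = -dt² + a(t)²(dx² + dy² + dz²) and the coordinate names are illustrative assumptions; for that metric the time-time component should simplify to G₀₀ = 3(ȧ/a)².

import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
a = sp.Function('a')(t)

g = sp.diag(-1, a**2, a**2, a**2)      # metric tensor g_{mu nu} (illustrative choice)
ginv = g.inv()                          # inverse metric g^{mu nu}
n = 4

# Christoffel symbols Gamma^l_{mu nu} built from derivatives of the metric
Gamma = [[[sp.simplify(sum(ginv[l, s] * (sp.diff(g[s, mu], coords[nu])
                                         + sp.diff(g[s, nu], coords[mu])
                                         - sp.diff(g[mu, nu], coords[s]))
                           for s in range(n)) / 2)
           for nu in range(n)] for mu in range(n)] for l in range(n)]

# Ricci tensor R_{mu nu} in a standard convention
def ricci(mu, nu):
    expr = sum(sp.diff(Gamma[l][mu][nu], coords[l]) - sp.diff(Gamma[l][mu][l], coords[nu])
               + sum(Gamma[l][l][s] * Gamma[s][mu][nu] - Gamma[l][nu][s] * Gamma[s][mu][l]
                     for s in range(n))
               for l in range(n))
    return sp.simplify(expr)

Ric = sp.Matrix(n, n, lambda mu, nu: ricci(mu, nu))
R = sp.simplify(sum(ginv[mu, nu] * Ric[mu, nu] for mu in range(n) for nu in range(n)))
G = (Ric - sp.Rational(1, 2) * R * g).applyfunc(sp.simplify)   # Einstein tensor

print(G[0, 0])   # expected: 3*Derivative(a(t), t)**2/a(t)**2 for this metric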
Sign convention
The above form of the EFE is the standard established by Misner, Thorne, and Wheeler (MTW). The authors analyzed conventions that exist and classified these according to three signs ([S1] [S2] [S3]):
The third sign above is related to the choice of convention for the Ricci tensor:
With these definitions Misner, Thorne, and Wheeler classify themselves as , whereas Weinberg (1972) is , Peebles (1980) and Efstathiou et al. (1990) are , Rindler (1977), Atwater (1974), Collins Martin & Squires (1989) and Peacock (1999) are .
Authors including Einstein have used a different sign in their definition for the Ricci tensor which results in the sign of the constant on the right side being negative:
The sign of the cosmological term would change in both these versions if the (+ − − −) metric sign convention is used rather than the MTW (− + + +) metric sign convention adopted here.
Equivalent formulations
Taking the trace with respect to the metric of both sides of the EFE one gets
$$ R - \frac{D}{2} R + D \Lambda = \kappa T, $$
where $D$ is the spacetime dimension. Solving for $R$ and substituting this in the original EFE, one gets the following equivalent "trace-reversed" form:
$$ R_{\mu\nu} - \frac{2}{D-2} \Lambda g_{\mu\nu} = \kappa \left( T_{\mu\nu} - \frac{1}{D-2} T \, g_{\mu\nu} \right). $$
In $D = 4$ dimensions this reduces to
$$ R_{\mu\nu} - \Lambda g_{\mu\nu} = \kappa \left( T_{\mu\nu} - \tfrac{1}{2} T \, g_{\mu\nu} \right). $$
Reversing the trace again would restore the original EFE. The trace-reversed form may be more convenient in some cases (for example, when one is interested in the weak-field limit and can replace $g_{\mu\nu}$ in the expression on the right with the Minkowski metric $\eta_{\mu\nu}$ without significant loss of accuracy).
The cosmological constant
In the Einstein field equations
$$ G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu}, $$
the term containing the cosmological constant $\Lambda$ was absent from the version in which he originally published them. Einstein then included the term with the cosmological constant to allow for a universe that is not expanding or contracting. This effort was unsuccessful because:
any desired steady state solution described by this equation is unstable, and
observations by Edwin Hubble showed that our universe is expanding.
Einstein then abandoned $\Lambda$, remarking to George Gamow "that the introduction of the cosmological term was the biggest blunder of his life".
The inclusion of this term does not create inconsistencies. For many years the cosmological constant was almost universally assumed to be zero. More recent astronomical observations have shown an accelerating expansion of the universe, and to explain this a positive value of $\Lambda$ is needed. The effect of the cosmological constant is negligible at the scale of a galaxy or smaller.
Einstein thought of the cosmological constant as an independent parameter, but its term in the field equation can also be moved algebraically to the other side and incorporated as part of the stress–energy tensor:
$$ T_{\mu\nu}^{\mathrm{(vac)}} = -\frac{\Lambda}{\kappa} g_{\mu\nu}. $$
This tensor describes a vacuum state with an energy density $\rho_{\mathrm{vac}}$ and isotropic pressure $p_{\mathrm{vac}}$ that are fixed constants and given by
$$ \rho_{\mathrm{vac}} = -p_{\mathrm{vac}} = \frac{\Lambda}{\kappa}, $$
where it is assumed that $\Lambda$ has SI unit m⁻² and $\kappa$ is defined as above.
The existence of a cosmological constant is thus equivalent to the existence of a vacuum energy and a pressure of opposite sign. This has led to the terms "cosmological constant" and "vacuum energy" being used interchangeably in general relativity.
Features
Conservation of energy and momentum
General relativity is consistent with the local conservation of energy and momentum expressed as
$$ \nabla_\nu T^{\mu\nu} = {T^{\mu\nu}}_{;\nu} = 0, $$
which expresses the local conservation of stress–energy. This conservation law is a physical requirement. With his field equations Einstein ensured that general relativity is consistent with this conservation condition.
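A one-line sketch (not from the source text) of why the field equations build this in automatically, using the contracted Bianchi identity together with metric compatibility of the covariant derivative:

$$ \nabla_\mu G^{\mu\nu} = 0 \quad\text{and}\quad \nabla_\mu g^{\mu\nu} = 0 \quad\Longrightarrow\quad \kappa \, \nabla_\mu T^{\mu\nu} = \nabla_\mu \left( G^{\mu\nu} + \Lambda g^{\mu\nu} \right) = 0. $$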
Nonlinearity
The nonlinearity of the EFE distinguishes general relativity from many other fundamental physical theories. For example, Maxwell's equations of electromagnetism are linear in the electric and magnetic fields, and charge and current distributions (i.e. the sum of two solutions is also a solution); another example is Schrödinger's equation of quantum mechanics, which is linear in the wavefunction.
The correspondence principle
The EFE reduce to Newton's law of gravity by using both the weak-field approximation and the slow-motion approximation. In fact, the constant $\kappa$ appearing in the EFE is determined by making these two approximations.
Vacuum field equations
If the energy–momentum tensor $T_{\mu\nu}$ is zero in the region under consideration, then the field equations are also referred to as the vacuum field equations. By setting $T_{\mu\nu} = 0$ in the trace-reversed field equations, the vacuum field equations, also known as 'Einstein vacuum equations' (EVE), can be written as
$$ R_{\mu\nu} = 0. $$
In the case of nonzero cosmological constant, the equations are
$$ R_{\mu\nu} = \frac{2 \Lambda}{D-2} g_{\mu\nu}, $$
which in four dimensions reduces to $R_{\mu\nu} = \Lambda g_{\mu\nu}$.
The solutions to the vacuum field equations are called vacuum solutions. Flat Minkowski space is the simplest example of a vacuum solution. Nontrivial examples include the Schwarzschild solution and the Kerr solution.
Manifolds with a vanishing Ricci tensor, $R_{\mu\nu} = 0$, are referred to as Ricci-flat manifolds and manifolds with a Ricci tensor proportional to the metric as Einstein manifolds.
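As a concrete check of the vacuum equations (a sketch added here, not part of the article), one can verify symbolically that the Schwarzschild metric is Ricci-flat. The snippet below uses sympy with the standard coordinate form of the metric and units $G = c = 1$:

```python
import sympy as sp

# Schwarzschild metric in standard coordinates (G = c = 1), signature (-,+,+,+)
t, th, ph = sp.symbols('t theta phi')
r, rs = sp.symbols('r r_s', positive=True)
x = [t, r, th, ph]
g = sp.diag(-(1 - rs / r), 1 / (1 - rs / r), r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()

def christoffel(a, b, c):
    """Gamma^a_{bc} = (1/2) g^{ad} (d_c g_{db} + d_b g_{dc} - d_d g_{bc})"""
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
        for d in range(4))

Gamma = [[[sp.simplify(christoffel(a, b, c)) for c in range(4)]
          for b in range(4)] for a in range(4)]

def ricci(b, c):
    """R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}"""
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
        + sum(Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][c][d] * Gamma[d][b][a] for d in range(4))
        for a in range(4)))

Ric = sp.Matrix(4, 4, lambda b, c: ricci(b, c))
print(Ric)  # expected: the zero matrix, i.e. R_{mu nu} = 0 (Ricci-flat)
```

The same loop structure works for any coordinate metric, so it can also be used to confirm that other candidate vacuum solutions satisfy $R_{\mu\nu} = 0$.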
Einstein–Maxwell equations
If the energy–momentum tensor $T_{\mu\nu}$ is that of an electromagnetic field in free space, i.e. if the electromagnetic stress–energy tensor
$$T^{\alpha\beta} = \frac{1}{\mu_0}\left( {F^{\alpha}}_{\psi} F^{\beta\psi} - \frac{1}{4} g^{\alpha\beta} F_{\psi\tau} F^{\psi\tau}\right)$$
is used, then the Einstein field equations are called the Einstein–Maxwell equations (with cosmological constant $\Lambda$, taken to be zero in conventional relativity theory):
$$G^{\alpha\beta} + \Lambda g^{\alpha\beta} = \frac{\kappa}{\mu_0}\left( {F^{\alpha}}_{\psi} F^{\beta\psi} - \frac{1}{4} g^{\alpha\beta} F_{\psi\tau} F^{\psi\tau}\right).$$
Additionally, the covariant Maxwell equations are also applicable in free space:
$${F^{\alpha\beta}}_{;\beta} = 0\,, \qquad F_{[\alpha\beta;\gamma]} = \frac{1}{3}\left(F_{\alpha\beta;\gamma} + F_{\beta\gamma;\alpha} + F_{\gamma\alpha;\beta}\right) = 0\,,$$
where the semicolon represents a covariant derivative, and the brackets denote anti-symmetrization. The first equation asserts that the 4-divergence of the 2-form $F$ is zero, and the second that its exterior derivative is zero. From the latter, it follows by the Poincaré lemma that in a coordinate chart it is possible to introduce an electromagnetic field potential $A_\alpha$ such that
$$F_{\alpha\beta} = A_{\alpha;\beta} - A_{\beta;\alpha} = A_{\alpha,\beta} - A_{\beta,\alpha}\,,$$
in which the comma denotes a partial derivative. This is often taken as equivalent to the covariant Maxwell equation from which it is derived. However, there are global solutions of the equation that may lack a globally defined potential.
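One algebraic consequence worth illustrating (this example, its field components, and its sign conventions are assumptions of the sketch, not taken from the article) is that the electromagnetic stress–energy tensor is traceless, so taking the trace of the Einstein–Maxwell equations gives $R = 4\Lambda$, and $R = 0$ when $\Lambda = 0$. A short sympy check in flat space:

```python
import sympy as sp

# Flat-space check that the electromagnetic stress-energy tensor is traceless.
# Field components and conventions below are illustrative assumptions.
Ex, Ey, Ez, Bx, By, Bz, mu0, c = sp.symbols('E_x E_y E_z B_x B_y B_z mu_0 c', real=True)
eta = sp.diag(-1, 1, 1, 1)   # Minkowski metric, signature (-,+,+,+); equal to its own inverse

# Faraday tensor F_{mu nu} in one common SI convention
F = sp.Matrix([
    [0,      Ex/c,  Ey/c,  Ez/c],
    [-Ex/c,  0,     Bz,   -By],
    [-Ey/c, -Bz,    0,     Bx],
    [-Ez/c,  By,   -Bx,    0]])

F_upup = eta * F * eta      # F^{mu nu}
F_up_down = F_upup * eta    # F^{mu}{}_{nu}
invariant = sum(F[a, b] * F_upup[a, b] for a in range(4) for b in range(4))  # F_{ab} F^{ab}

# T^{ab} = (1/mu0) [ F^{a c} F^{b}{}_{c} - (1/4) eta^{ab} F_{cd} F^{cd} ]
T = (F_upup * F_up_down.T - sp.Rational(1, 4) * eta * invariant) / mu0

trace = sp.simplify(sum(eta[a, b] * T[a, b] for a in range(4) for b in range(4)))
print(trace)   # 0 — the EM stress-energy tensor is traceless
```

Because the tracelessness is a purely algebraic property of any antisymmetric $F_{\mu\nu}$ in four dimensions, the result does not depend on the particular field values used in the test.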
Solutions
The solutions of the Einstein field equations are metrics of spacetime. These metrics describe the structure of the spacetime including the inertial motion of objects in the spacetime. As the field equations are non-linear, they cannot always be completely solved (i.e. without making approximations). For example, there is no known complete solution for a spacetime with two massive bodies in it (which is a theoretical model of a binary star system, for example). However, approximations are usually made in these cases. These are commonly referred to as post-Newtonian approximations. Even so, there are several cases where the field equations have been solved completely, and those are called exact solutions.
The study of exact solutions of Einstein's field equations is one of the activities of cosmology. It leads to the prediction of black holes and to different models of evolution of the universe.
One can also discover new solutions of the Einstein field equations via the method of orthonormal frames as pioneered by Ellis and MacCallum. In this approach, the Einstein field equations are reduced to a set of coupled, nonlinear, ordinary differential equations. As discussed by Hsu and Wainwright, self-similar solutions to the Einstein field equations are fixed points of the resulting dynamical system. New solutions have been discovered using these methods by LeBlanc and Kohli and Haslam.
The linearized EFE
The nonlinearity of the EFE makes finding exact solutions difficult. One way of solving the field equations is to make an approximation, namely, that far from the source(s) of gravitating matter, the gravitational field is very weak and the spacetime approximates that of Minkowski space. The metric is then written as the sum of the Minkowski metric $\eta_{\mu\nu}$ and a term $h_{\mu\nu}$ representing the deviation of the true metric from the Minkowski metric, and terms quadratic or higher in the deviation are ignored. This linearization procedure can be used to investigate the phenomena of gravitational radiation.
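A minimal symbolic sketch of the bookkeeping behind this approximation (added here for illustration, with a generic symbolic perturbation as the assumed input): writing $g_{\mu\nu} = \eta_{\mu\nu} + \epsilon h_{\mu\nu}$, the inverse metric is $\eta^{\mu\nu} - \epsilon h^{\mu\nu}$ up to terms of order $\epsilon^2$, which can be checked with sympy:

```python
import sympy as sp

epsilon = sp.symbols('epsilon')
eta = sp.diag(-1, 1, 1, 1)                      # Minkowski background
# Generic symmetric perturbation h_{mu nu} with symbolic placeholder entries
h = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'h_{min(i, j)}{max(i, j)}'))

g = eta + epsilon * h                           # g_{mu nu} = eta_{mu nu} + epsilon h_{mu nu}
g_inv_linear = eta - epsilon * eta * h * eta    # eta^{mu nu} - epsilon h^{mu nu}, indices raised with eta

# g * g_inv_linear should equal the identity up to O(epsilon^2)
residual = sp.expand(g * g_inv_linear - sp.eye(4))
assert all(entry.coeff(epsilon, 0) == 0 and entry.coeff(epsilon, 1) == 0
           for entry in residual)
print("inverse metric correct to first order; leftover terms are O(epsilon^2)")
```

The same "keep only first order in the perturbation" step, applied to the Christoffel symbols and the Ricci tensor, is what turns the EFE into the linear wave-type equations used to study gravitational radiation.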
Polynomial form
Despite the EFE as written containing the inverse of the metric tensor, they can be arranged in a form that contains the metric tensor in polynomial form and without its inverse. First, the determinant of the metric in 4 dimensions can be written
$$\det(g) = \frac{1}{24}\, \varepsilon^{\alpha\beta\gamma\delta} \varepsilon^{\kappa\lambda\mu\nu} g_{\alpha\kappa} g_{\beta\lambda} g_{\gamma\mu} g_{\delta\nu}$$
using the Levi-Civita symbol, and the inverse of the metric in 4 dimensions can be written as
$$g^{\alpha\kappa} = \frac{\dfrac{1}{6}\, \varepsilon^{\alpha\beta\gamma\delta} \varepsilon^{\kappa\lambda\mu\nu} g_{\beta\lambda} g_{\gamma\mu} g_{\delta\nu}}{\det(g)}\,.$$
Substituting this expression for the inverse of the metric into the equations and then multiplying both sides by a suitable power of $\det(g)$ to eliminate it from the denominator results in polynomial equations in the metric tensor and its first and second derivatives. The Einstein–Hilbert action from which the equations are derived can also be written in polynomial form by suitable redefinitions of the fields.
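These two identities are straightforward to verify numerically (a sketch added here; the test matrix below is an arbitrary invertible symmetric matrix chosen for illustration, not a physical metric):

```python
import itertools
import sympy as sp

# Arbitrary symmetric, invertible 4x4 test matrix standing in for g_{mu nu}
g = sp.Matrix([
    [2, 1, 0, 0],
    [1, 3, 1, 0],
    [0, 1, 4, 1],
    [0, 0, 1, 5]])

eps = sp.LeviCivita   # totally antisymmetric symbol with eps(0,1,2,3) = +1
idx = range(4)

# det(g) = (1/24) eps^{abcd} eps^{klmn} g_{ak} g_{bl} g_{cm} g_{dn}
det_formula = sp.Rational(1, 24) * sum(
    eps(a, b, c, d) * eps(k, l, m, n) * g[a, k] * g[b, l] * g[c, m] * g[d, n]
    for a, b, c, d in itertools.permutations(idx)
    for k, l, m, n in itertools.permutations(idx))
assert sp.simplify(det_formula - g.det()) == 0

# g^{ak} = (1/6) eps^{abcd} eps^{klmn} g_{bl} g_{cm} g_{dn} / det(g)
def inverse_entry(a, k):
    cofactor = sp.Rational(1, 6) * sum(
        eps(a, b, c, d) * eps(k, l, m, n) * g[b, l] * g[c, m] * g[d, n]
        for b, c, d in itertools.permutations([i for i in idx if i != a])
        for l, m, n in itertools.permutations([i for i in idx if i != k]))
    return cofactor / det_formula

g_inverse = sp.Matrix(4, 4, inverse_entry)
assert sp.simplify(g_inverse - g.inv()) == sp.zeros(4, 4)
print("determinant and inverse-metric identities verified")
```

Only permutations of the four index values are summed, since the Levi-Civita symbol vanishes whenever two indices coincide; this keeps the check fast while remaining a faithful evaluation of the formulas above.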